Nick Fisher's tech blog

DevOps

How to do a Rolling Upgrade of Multiple Logstash Instances Using Ansible

You can see the source code for this post on GitHub.

In a previous post on How to Provision Multiple Logstash Hosts Using Ansible, we saw that provisioning Logstash is pretty straightforward. However, what do we do with it after it’s been out there transforming messages this entire time? Given that Elastic comes out with a new version of Logstash every fifteen or twenty minutes, a wise person would look to automate the upgrade process as soon as possible.
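
The heart of a rolling upgrade in Ansible is the serial keyword, which runs the play over one batch of hosts at a time so the remaining instances keep processing messages. Here is a minimal sketch, assuming a logstash host group, a yum-based system, and a logstash_version variable; none of these names come from the post itself.

```yaml
# Sketch of a rolling Logstash upgrade; the group, variable, and package
# manager are illustrative assumptions.
- hosts: logstash
  become: yes
  serial: 1                     # one instance at a time
  tasks:
    - name: Stop logstash before upgrading
      service:
        name: logstash
        state: stopped

    - name: Install the target logstash version
      yum:
        name: "logstash-{{ logstash_version }}"
        state: present

    - name: Start logstash again
      service:
        name: logstash
        state: started
```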

How to do a Rolling Upgrade of an Elasticsearch Cluster Using Ansible

You can see the source code for this blog post on GitHub.

In a previous post, we saw how to provision a multi-node Elasticsearch cluster using Ansible. The problem with that post is that, by the time I was done writing it, Elastic had already come out with a new version of Elasticsearch. I’m being mildly facetious, but not really. They release new versions very quickly, even by the standards of modern software engineering.
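
For orientation, the per-node choreography of a rolling Elasticsearch upgrade generally looks like the sketch below: disable shard allocation, upgrade and restart the node, then re-enable allocation before moving on. The URL, variable name, and package manager here are illustrative assumptions, not the post's exact tasks.

```yaml
# Sketch of the per-node steps; the endpoint, variables, and package
# manager are assumptions for illustration.
- hosts: elasticsearch
  become: yes
  serial: 1
  tasks:
    - name: Disable shard allocation before taking the node down
      uri:
        url: "http://localhost:9200/_cluster/settings"
        method: PUT
        body_format: json
        body:
          transient:
            cluster.routing.allocation.enable: "none"

    - name: Upgrade the elasticsearch package
      yum:
        name: "elasticsearch-{{ elasticsearch_version }}"
        state: present

    - name: Restart elasticsearch
      service:
        name: elasticsearch
        state: restarted

    - name: Re-enable shard allocation
      uri:
        url: "http://localhost:9200/_cluster/settings"
        method: PUT
        body_format: json
        body:
          transient:
            cluster.routing.allocation.enable: "all"
```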

How to Provision Multiple Logstash Hosts Using Ansible

The source code for this post can be found on GitHub.

Logstash primarily exists to extract useful information out of plain-text logs. Most applications have custom logs in whatever format the person writing them thought would look reasonable…usually to a human, and not to a machine. While countless future developer hours would be saved if everything were just in JSON, that is sadly not even remotely the case, and in particular it’s not the case for log files. Logstash aims to be the intermediary between the various log formats and Elasticsearch, the document database also provided by Elastic.
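
To make that intermediary role concrete, here is a minimal, hypothetical Logstash pipeline: it tails a custom plain-text log, uses grok to pull structured fields out of each line, and ships the result to Elasticsearch. The file path, pattern, and host are placeholders, not anything from the post.

```
# Hypothetical pipeline; the path, pattern, and host are placeholders.
input {
  file {
    path => "/var/log/myapp/app.log"
  }
}

filter {
  grok {
    match => { "message" => "%{TIMESTAMP_ISO8601:timestamp} %{LOGLEVEL:level} %{GREEDYDATA:msg}" }
  }
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]
  }
}
```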

How to Provision a Multi Node Elasticsearch Cluster Using Ansible

You can see the sample code for this tutorial on GitHub.

Elasticsearch is a distributed NoSQL document database built on top of Lucene. There are so many things I could say about Elasticsearch, but instead I’ll focus on how to install a simple 3-node cluster with an Ansible role. The following example will not have any security baked into it, so it’s really just a starting point to get you up and running.
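
The overall shape of such a setup is small: an inventory group containing the three nodes and a play that applies the role to each of them. The hostnames and role name below are illustrative assumptions.

```yaml
# inventory.ini (hostnames are illustrative)
# [elasticsearch]
# es-node-1
# es-node-2
# es-node-3

# site.yml - apply the role to every node in the group
- hosts: elasticsearch
  become: yes
  roles:
    - elasticsearch
```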

How to do Test Driven Development on Your Ansible Roles Using Molecule

You can see the sample code for this tutorial on GitHub.

Molecule is primarily a way to manage the testing of infrastructure automation code. At its core, it wraps around various providers like Vagrant, Docker, or VMware, and provides relatively simple integration with testing frameworks, notably Testinfra. Molecule is a great tool, but in my opinion there are not enough resources, by way of examples, to provide an adequate getting-started guide. This post is meant to help fill that void.
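
For a taste of what the tests look like, here is a minimal Testinfra file of the kind Molecule runs against a converged instance; the package and service names are placeholders for whatever your role actually installs.

```python
# test_default.py - Testinfra injects the `host` fixture for the
# instance under test; "logstash" is a placeholder name here.

def test_package_is_installed(host):
    assert host.package("logstash").is_installed

def test_service_is_running_and_enabled(host):
    service = host.service("logstash")
    assert service.is_running
    assert service.is_enabled
```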

How to run a SQL Script Against a Postgres Database Using Ansible

The source code for this post can be found on GitHub.

Managing a live database, and in particular dealing with database migrations without allowing for any downtime in your application, is typically the most challenging part of any automated deployment strategy. Services can be spun up and down with impunity because their state at the beginning and at the end is exactly the same, but databases store data: their state is always changing.
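
One low-ceremony way to apply a script from a playbook is to shell out to psql itself, which avoids depending on any particular version of the Postgres modules. The connection details and file path below are illustrative assumptions.

```yaml
# Illustrative task; host, user, database, and script path are placeholders.
- name: Run a SQL script against postgres
  command: psql -h localhost -U app_user -d app_db -f /tmp/migration.sql
  environment:
    PGPASSWORD: "{{ postgres_password }}"
```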

A Simple Zero Downtime Continuous Integration Pipeline for Spring MVC

The sample code associated with what follows can be found on GitHub.

One of the biggest paradigm shifts in software engineering since the invention of the computer was the idea of the MVR (minimum viable release) or MVP (minimum viable product). With internet access now nearly universal in developed countries, it becomes more and more powerful to put your product out there on display and to design a way to continuously improve it. In the most aggressive of circumstances, you want to be able to push something up to a source control server, then let an automated process perform the various steps required to actually deploy it in the real world. In the best case, you can achieve all of this with zero downtime: the users of your service are never inconvenienced by your decision to make a change. Setting up one very simple example of that is the subject of this post.

How to Use Spring's Dependency Injection in Setup And Teardown Code For Integration Tests With Maven

You can view the sample code for this post on GitHub.

In our last post on Using Maven to Setup and Teardown Integration Tests, we saw how to run Java code before and after our integration tests to set up and tear down any data that our tests depended on. What if we are using Spring, and we want to use our ApplicationContext and its dependency injection and property injection features? After all, we would be testing the configuration for our specific application more than anything else, so we should be certain to use it in our setup and teardown code.
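
One way this can look is a plain main class, invoked from the Maven build before the integration tests run, that boots the application's own configuration and pulls fully wired beans out of the ApplicationContext. AppConfig and SeedDataService are hypothetical names used only for illustration.

```java
import org.springframework.context.annotation.AnnotationConfigApplicationContext;

// Hypothetical setup entry point, e.g. invoked from the exec-maven-plugin
// during pre-integration-test; AppConfig and SeedDataService are placeholders.
public class IntegrationTestSetup {

    public static void main(String[] args) {
        AnnotationConfigApplicationContext context =
                new AnnotationConfigApplicationContext(AppConfig.class);
        try {
            // The bean arrives fully wired by the application's real configuration.
            SeedDataService seedData = context.getBean(SeedDataService.class);
            seedData.insertTestData();
        } finally {
            context.close();
        }
    }
}
```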

How to Run Integration Tests with Setup and Teardown Code in Maven Build

The sample code for this post can be found on GitHub.

Unit testing with Maven is built in, and it is the preferred way of validating that code performs correctly. However, sometimes you need integration testing, and most non-trivial applications built in the 21st century rely on network connections and databases; that is, things which are inherently third party to your application. If you don’t adequately take that into account in your CI/CD pipeline, you might end up discovering that something very bad has happened after the damage has already been done.
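
The usual Maven mechanism for this is the failsafe plugin: it runs classes matching *IT in the integration-test phase and fails the build during verify, leaving the pre-integration-test and post-integration-test phases free for setup and teardown code. A typical configuration fragment (the version shown is just an example):

```xml
<!-- Inside <build><plugins> of the pom; the version is illustrative. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-failsafe-plugin</artifactId>
  <version>2.22.2</version>
  <executions>
    <execution>
      <goals>
        <goal>integration-test</goal>
        <goal>verify</goal>
      </goals>
    </execution>
  </executions>
</plugin>
```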
