The source code for this post can be found on GitHub.
Managing a live database, and in particular applying database migrations without any downtime in your application, is typically the most challenging part of any automated deployment strategy. Services can be spun up and down with impunity because their state at the beginning and at the end is exactly the same, but databases store data: their state is always changing.
From where I sit, there are two good options for dealing with database migrations: at the application level (e.g. a startup script that runs when a service connects) or using a tool like Ansible. Either option allows you to write automation for the migrations, which is (in my opinion) non-negotiable for maintaining any non-trivial software project. While I lean towards the application owning the migrations, Ansible or another idempotent management tool is a close second, and may be better for your use case.
There are a few different ways to run a SQL script against Postgres using Ansible. The first is to take a SQL file, dump it on the server you're managing, then run the script with a psql command. First we'll install Postgres on a Debian distribution with apt:
```yaml
---
# tasks file for run-sql-postgres
- name: install postgres
  apt:
    update_cache: yes
    name: ['postgresql', 'postgresql-contrib']
    state: present
```
Next I'll create a testing database to run the scripts against with the built-in postgresql_db Ansible module:
```yaml
- name: ensure psycopg2
  apt:
    name: python-psycopg2

- name: ensure testing database created
  postgresql_db:
    name: testdb # required. name of the database to add or remove
  become: yes
  become_user: postgres
```
In our Ansible role directory, we'll create a files/migrate.sql file with the following contents:
```sql
CREATE TABLE IF NOT EXISTS products (
    product_id serial PRIMARY KEY,
    name varchar(100),
    price numeric
);
```
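With the script in place, the first approach described above still needs two tasks: copying the file to the managed host, then executing it with psql. A minimal sketch of those tasks might look like the following (the destination path /tmp/migrate.sql and the task names are my own illustrative choices, not prescribed by anything earlier in the role):

```yaml
- name: copy migration script to the server
  copy:
    src: migrate.sql
    dest: /tmp/migrate.sql

- name: run migration script with psql
  command: psql testdb -f /tmp/migrate.sql
  become: yes
  become_user: postgres
```

Note that the command module reports "changed" on every run; the script itself stays safe to re-run because it uses CREATE TABLE IF NOT EXISTS, but for stricter idempotence reporting you could register the output and set changed_when accordingly.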
Nick Fisher is a software engineer in the Pacific Northwest. He focuses on building highly scalable and maintainable backend systems.