- Right off the bat it gets you thinking about interacting with a database that is not co-located with the application server. Down the road, if you want to cut over to Amazon RDS or something similar, many of the related issues have already been dealt with.
- Right off the bat it gets you thinking about programmatically provisioning the database. If you can start up new database instances in a second, it sure would be handy to be able to get them set up so your app could use them automatically…
- It’s completely trivial to try different versions of PostgreSQL. You can easily run version 9.2 all the way through to 9.6 side by side, which is much more of a pain to do if installing them all natively.
- It gets you thinking about automated build and testing infrastructure – starting, provisioning, and upgrading Docker containers is generally easier than installing and upgrading native installations.
- It puts you in position to deploy not just your dependency on PostgreSQL, but also deploy your configuration exactly the way you want it with no extra work.
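On the provisioning and configuration points above, the official postgres image already gives you hooks for both. A rough sketch (the `init.sql` and `postgresql.conf` file names here are placeholders for your own files):

```shell
# Any *.sql or *.sh file mounted into /docker-entrypoint-initdb.d is run
# once, on first startup – handy for creating your app's role and schema.
# Extra arguments after the image name are passed to the postgres server,
# which is how you point it at your own config file.
docker run -d --name postgres \
  -e POSTGRES_PASSWORD=password \
  -p 5432:5432 \
  -v "$PWD/init.sql:/docker-entrypoint-initdb.d/init.sql" \
  -v "$PWD/postgresql.conf:/etc/postgresql/postgresql.conf" \
  postgres:alpine -c config_file=/etc/postgresql/postgresql.conf
```

With that, a fresh database instance comes up already provisioned and configured, with no manual steps in between.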
If you’re using Linux, getting set up with Docker is completely trivial. Follow the plentiful instructions available on the web, and you’re good to go.
I’m not sure how it plays out on macOS, but on Windows it’s a bit of a nightmare. If you have particularly old hardware, you may as well just give up – start VirtualBox and run Docker in there, I guess? If you have new enough hardware but not a Pro edition of Windows, it’s doable but buggy, as you’ll have to go the Docker Toolbox route. If you have new hardware AND a Pro edition of Windows, then you can use Docker with Hyper-V, but that means you’re out of luck when it comes to running other virtual machines (or you have to reboot in between, which is basically a non-starter). On Windows, I had the most luck uninstalling everything (Git, VirtualBox, etc.), installing Docker Toolbox, and then upgrading the pieces afterwards. On Linux I had no problems at all with anything. Just saying…
Once you have Docker installed, it’s time to start up PostgreSQL. When I was first looking at Docker and trying to find my way around, I came across this very helpful blog post. The short answer to getting PostgreSQL running under Docker *almost* as though you had installed it natively is the following command (change POSTGRES_PASSWORD to virtually anything else):
docker run --restart=always -p 5432:5432 --name postgres -e POSTGRES_PASSWORD=password -d postgres:alpine
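For reference, the flags break down roughly as follows:

```shell
# --restart=always : restart the container whenever the Docker daemon does,
#                    so PostgreSQL comes back after a reboot or crash
# -p 5432:5432     : forward the host's port 5432 to the container's 5432
# --name postgres  : a fixed name to use in later docker exec/stop commands
# -e POSTGRES_PASSWORD=... : password for the default postgres superuser
# -d               : detach and run in the background
# postgres:alpine  : the official image, Alpine variant for a smaller download
docker run --restart=always -p 5432:5432 --name postgres \
  -e POSTGRES_PASSWORD=password -d postgres:alpine
```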
It’s definitely worth looking at that command and understanding it a bit more. One important quirk of using PostgreSQL this way is that a local psql installation won’t connect by default – psql -U postgres won’t be able to find the server, despite port 5432 being forwarded correctly. On Linux, psql connects by default over a Unix-domain socket (which is not forwarded from the Docker container), so to connect to PostgreSQL you’ll need to specify the host: psql -U postgres -h localhost or similar. There’s a performance cost to going over TCP instead of the socket, but it’ll be reasonably inconsequential for the beginnings of the app. Alternatively, you can jump into your running Docker container and connect to PostgreSQL directly.
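Jumping into the container looks like this (assuming the container was named postgres, as in the command above):

```shell
# Run psql inside the running container itself, connecting over the
# container's own Unix-domain socket – no -h flag needed in there.
docker exec -it postgres psql -U postgres
```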
And that’s all there is to that – PostgreSQL is running and usable. If you want a specific version, you can get a specific version. If you want to see what happens with your server/app when the PostgreSQL server goes down, or what fails when the “machine” running PostgreSQL runs out of resources, Docker gives you an excellent avenue to test and play around without causing any real harm or botching your host machine up.
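Side-by-side versions are just a matter of different container names and host ports. A sketch, using two of the versioned tags:

```shell
# Two independent servers: 9.2 on host port 5432, 9.6 on host port 5433.
# Each gets its own name so they can be stopped/removed independently.
docker run -d --name postgres92 -p 5432:5432 \
  -e POSTGRES_PASSWORD=password postgres:9.2
docker run -d --name postgres96 -p 5433:5432 \
  -e POSTGRES_PASSWORD=password postgres:9.6
```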
You’ll need to get the PostgreSQL client set up to be able to connect from your machine. Alternatively, you could jump into the Docker container itself to execute, or you could run psql from yet another Docker container…
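The "yet another Docker container" option saves you installing the client at all. A sketch that works on Linux, where host networking is available:

```shell
# One-off client container: --rm cleans it up on exit, --network host
# shares the host's network stack so the forwarded port 5432 is
# reachable as localhost from inside the container.
docker run -it --rm --network host postgres:alpine \
  psql -h localhost -U postgres
```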