As our backend is going to be Django-based, we will need a relational database to store the data. Most Django tutorials assume SQLite, a simple, single-file database which works just fine for tutorials. The problem begins when you want to deploy your application to production: because of its limitations (most notably around concurrent writes), SQLite might not be such a good idea there. PostgreSQL (a.k.a. Postgres) is a production-grade database in widespread use. Heroku supports Postgres, but what about our dev box? It is highly recommended to develop against the same database you plan to use in production. There are plenty of ways to install Postgres on your machine, but I find the simplest is to run it as a Docker container. This works the same regardless of your system: Windows, Mac or Linux.
Instead of manually passing flags to docker when starting a container, we will use a tool called
docker-compose, which greatly simplifies managing the containers for your app, especially when they need to reach each other over the network or share data persisted in some storage.
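For comparison, here is a sketch of the plain docker run invocation that docker-compose will spare us from typing (the names and image tag match what we are about to configure; the flags map one-to-one onto lines of the compose file):

```shell
# Roughly equivalent to the compose service defined below: every flag here
# becomes one line of docker-compose.yml. Illustrative only.
docker run -d \
  --name db \
  -e POSTGRES_USER=pixies \
  -e POSTGRES_PASSWORD=pixies \
  -e POSTGRES_DB=pixies \
  postgres:12-alpine
```

With docker-compose, this whole command collapses into a declarative file checked into your repository, which is much easier to share and evolve.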
docker-compose reads a file called
docker-compose.yml which describes the desired container setup for your app.
Go to your backend directory and create this file:
```shell
cd pixies/backend
touch docker-compose.yml
```
In this module we are going to use only one container, namely the Postgres container.
So let's put this into docker-compose.yml:
```yaml
version: '3'

services:
  db:
    image: postgres:12-alpine
    container_name: db
    environment:
      - POSTGRES_USER=pixies
      - POSTGRES_PASSWORD=pixies
      - POSTGRES_DB=pixies
```
As you can see:
- docker-compose.yml is written in YAML, a human-readable format commonly used for configuration files. To edit it, all you need is a simple text editor (Visual Studio Code will work just fine). Be careful about the spacing though: whitespace is semantically significant in YAML.
- The docker-compose.yml file format is versioned. We use version 3, which is the latest at the moment this is written.
- It has a services section which describes the containers we need. In our case it contains only db, our Postgres database.
- For each service (container) we can specify which image to use. We use the standard, alpine-based Postgres image.
- We name the container; in our case it is simply called db.
- We provide environment variables to be passed to the container; in this case the username, password and database name, so Postgres will be able to create the initial setup for us.
Let’s launch whatever we specified:
docker-compose up -d
Docker may need to pull the Postgres image but, if everything goes well, the Postgres container should be started. Verify it:
docker ps | grep db
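If you want to double-check that the server inside the container is actually ready, and not just that the container is up, the Postgres image logs a telltale line on startup; a quick, if informal, way to see it is to look at the container logs:

```shell
# Show the most recent log lines of the db container; once initialization
# finishes, Postgres prints "database system is ready to accept connections".
docker logs --tail 20 db
```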
Once it is started, let's connect to it:
docker exec -it db sh
We should end up inside the Docker container. Let's use the
psql utility to actually connect to Postgres from within the container:
psql -U pixies
Now issue a couple of Postgres commands. We are going to list all databases, connect to the
pixies database, run some simple SQL and, finally, exit:
```
\l
\c pixies
select 1;
\q
```
These should work. Are we done with the Postgres setup? Not quite.
Exposing Postgres ports
We want to be able to access our Postgres over the network: from our host but, most importantly, from our future Django container. In
docker-compose.yml, just after the
environment section, add the following:
```yaml
    ports:
      - 5432:5432
```
This will expose 5432, the default Postgres port, to the host and to other containers.
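With the port published, clients on the host can reach the database at localhost:5432. As a sketch, the settings we put into docker-compose.yml combine into a single connection URL; the DATABASE_URL name is a common convention (used by Heroku, among others), not something the compose file itself defines:

```shell
# Build a Postgres connection URL from the values in docker-compose.yml.
DB_USER=pixies
DB_PASSWORD=pixies
DB_NAME=pixies
DB_HOST=localhost   # the published port makes the container reachable here
DB_PORT=5432
DATABASE_URL="postgres://${DB_USER}:${DB_PASSWORD}@${DB_HOST}:${DB_PORT}/${DB_NAME}"
echo "$DATABASE_URL"
# → postgres://pixies:pixies@localhost:5432/pixies
```

This is the form of URL we will eventually hand to our Django backend, both locally and on Heroku.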
We want our Postgres data to persist across multiple runs. Otherwise, every time we start the Postgres container it will create the
pixies database again from scratch. This is where Docker's persistent volumes come into the picture. Very roughly, one can think of a persistent volume as some storage (a virtual disk) which exists independently of the running containers, but to which the running containers can map certain directories.
We are going to create a single volume, called
data, and map it to the default directory where Postgres stores its data. Update
docker-compose.yml so it does exactly that:
```yaml
version: '3'

services:
  db:
    image: postgres:12-alpine
    container_name: db
    environment:
      - POSTGRES_USER=pixies
      - POSTGRES_PASSWORD=pixies
      - POSTGRES_DB=pixies
    ports:
      - 5432:5432                        # new
    volumes:
      - data:/var/lib/postgresql/data    # new

volumes:
  data:
```
Observe that we declare the
data volume in the
volumes section, which is on the same level as the
services section, and map it to the
/var/lib/postgresql/data directory in the Postgres container.
In case you are curious, the colon after
data in the
volumes section is required: it is YAML's way to indicate that this is a dictionary, albeit an empty one in our case.
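As an aside, a named volume is not the only option: you could instead bind-mount a host directory, which makes the Postgres data files directly visible on your machine. A sketch of that variant, where the ./pgdata path is just an example:

```yaml
    volumes:
      - ./pgdata:/var/lib/postgresql/data   # host directory instead of a named volume
```

With a bind mount you would also drop the top-level volumes: section, since no named volume is declared. A named volume is usually the cleaner choice for a database, since Docker manages its lifecycle and permissions for you.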
All right, we have Postgres set up on our dev box, so we are ready to build our Django backend. That is the topic of the next module.