Remember when we created a neat, compact Docker image for the frontend? Now we are going to do the same thing for the backend.
Before we start working on the Dockerfile for the backend, we need to take care of one thing.
Because we are not going to run the backend app in Docker in DEBUG mode, Django will require the ALLOWED_HOSTS setting to be set. This setting is a security measure that prevents certain kinds of HTTP Host header attacks. Since we have already converted our settings to read such dynamic configuration from environment variables, the following entry in `settings.py` will handle that:

```python
# ...
# new
ALLOWED_HOSTS = env.list('ALLOWED_HOSTS', default=[])
# ...
```
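Under the hood, django-environ's `env.list` simply splits the variable on commas. A minimal sketch of that behavior (the helper name `parse_hosts` is ours, not part of the library):

```python
import os

def parse_hosts(default=None):
    """Mimic env.list('ALLOWED_HOSTS'): split the comma-separated
    environment variable into a list, falling back to a default."""
    raw = os.environ.get("ALLOWED_HOSTS")
    if raw is None:
        return [] if default is None else default
    return [h.strip() for h in raw.split(",") if h.strip()]

os.environ["ALLOWED_HOSTS"] = "example.com,api.example.com"
print(parse_hosts())  # → ['example.com', 'api.example.com']
```

This is why `ALLOWED_HOSTS=*` in an environment file later translates to `['*']` in the Django setting.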
Let’s think a little about what we need to do to build our Docker image for the backend.
- We need to derive it from an adequate base image.
- We need to somehow install all the Python packages our app depends on.
- We need to copy the Python code needed to run our application.
- Are we still going to use `python manage.py runserver` to run our app?

Let’s address the last item first.
`python manage.py runserver` is fine for development, but it is definitely not production-ready, and probably not very secure either. Instead, we are going to use `gunicorn`, a popular HTTP server specifically designed to run applications such as our Django backend. Install it:
```shell
pip install gunicorn
```
If you are running a Mac or Linux computer, you can try gunicorn right now. Unfortunately, at the time of this writing, it does not work on Windows. Not a big deal though: it is going to work perfectly well in the Docker container, even on Windows. Now we want to capture all the Python packages our backend app depends on. The simple way to do it is:
```shell
pip freeze > requirements.txt
```
If you look at the contents of that file, you’ll see all the Python dependencies we have installed.
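The exact contents depend on what you have installed along the way; an excerpt might look something like this (the version numbers here are purely illustrative, yours will differ):

```
Django==3.0.7
django-environ==0.4.5
djangorestframework==3.11.0
gunicorn==20.0.4
psycopg2-binary==2.8.5
```

Each line pins a package to the exact version in your virtual environment, so the Docker build is reproducible.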
Similarly to what we did for the frontend image, we want to create a `.dockerignore` file listing the files we don’t want copied into the image. Here’s mine:

```
.vscode
.env
.venv
__pycache__
Dockerfile
.git
.gitignore
```
Make sure that Docker ignores the `.env` file, as it contains the dev configuration we are not going to use in Docker.
The way gunicorn serves our Django app is the following:

```shell
gunicorn pixies.wsgi:application --bind 0.0.0.0:$PORT
```
`$PORT` is the port number provided by the environment, for Heroku compatibility.
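If you want the script to also work when `PORT` is not set (for example, when running locally outside Heroku or Docker), a common shell idiom is a parameter-expansion default. The 8000 fallback below is our own choice for illustration, not something the setup requires:

```shell
#!/bin/sh
# Use the PORT provided by the environment, or fall back to 8000.
PORT="${PORT:-8000}"
echo "binding to 0.0.0.0:${PORT}"
```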
Let’s wrap that call into a single file, `boot.sh`, which will serve as the CMD in our Dockerfile:

```shell
#!/bin/sh
exec gunicorn pixies.wsgi:application --bind 0.0.0.0:$PORT --log-file -
```
The reason why we are wrapping the call in a separate script file instead of calling gunicorn directly will become clear soon.

`--log-file -` indicates that the log generated by gunicorn should go to the standard output. This is a common practice for Docker images, as it gives a universal way to access the logs, instead of digging for them somewhere in the container’s file system.
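The same stdout-logging convention applies to your application’s own logs. In Python it boils down to pointing a `StreamHandler` at `sys.stdout` (a generic sketch, not a fragment of our actual Django settings):

```python
import logging
import sys

# Send log records to stdout so `docker logs <container>` can pick
# them up, instead of writing to a file inside the container.
handler = logging.StreamHandler(sys.stdout)
handler.setFormatter(logging.Formatter("%(levelname)s %(name)s: %(message)s"))

logger = logging.getLogger("backend")
logger.addHandler(handler)
logger.setLevel(logging.INFO)
logger.info("ready to serve")  # appears on stdout
```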
Now the Dockerfile.
Next we need to copy `requirements.txt`, so that during the image build pip can install the Python packages our Django app depends on. While we are at it, let’s bake into the image a few environment variables useful for Python-based images:
```dockerfile
# ...
# new
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
COPY requirements.txt /
```
OK, in theory, the next command should be something like:

```dockerfile
RUN pip install -r requirements.txt
```
This won’t work in our case though. The reason is that the Postgres package for Django (psycopg2-binary) has to be built from source on Alpine-based images. So we need to install the necessary build environment, run `pip install` and, finally, purge the build environment. The following does that:
```dockerfile
RUN apk add --virtual .build-deps --no-cache postgresql-dev gcc python3-dev musl-dev && \
    pip install --no-cache-dir -r requirements.txt && \
    apk --purge del .build-deps
```
Notice that this joins multiple commands into a single RUN directive, in order to achieve a smaller image size. Small is good! In the above, we had the necessary Postgres libraries available while installing the dependencies, but we purged them at the end. So we will need to install them again, this time the runtime version only. Here is the complete Dockerfile:
```dockerfile
FROM python:3.8-alpine

ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1

COPY requirements.txt /

RUN apk add --virtual .build-deps --no-cache postgresql-dev gcc python3-dev musl-dev && \
    pip install --no-cache-dir -r requirements.txt && \
    apk --purge del .build-deps

COPY . .

# dos2unix is not part of the base image, so install it along with libpq
RUN apk add --no-cache libpq dos2unix && \
    chmod +x /boot.sh && \
    dos2unix /boot.sh

CMD ["/boot.sh"]
```
The dos2unix step is helpful if we build the image on a Windows machine. For Linux or Mac it won’t hurt, so we will keep it here.
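All dos2unix does here is strip Windows CRLF line endings, which would otherwise make the shell choke on the shebang line of `boot.sh`. The transformation is equivalent to this (the file content below is an illustrative stand-in):

```python
# What dos2unix effectively does to boot.sh: CRLF -> LF.
windows_text = "#!/bin/sh\r\nexec gunicorn pixies.wsgi:application\r\n"
unix_text = windows_text.replace("\r\n", "\n")
print(unix_text.startswith("#!/bin/sh\n"))  # → True
```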
Adding the backend to the docker-compose.yml
Now we have two containers for our backend: the Postgres one and the Django app. We can modify our `docker-compose.yml` so it handles both of them together. Modify it so it looks like this:
```yaml
version: '3'

services:
  db:
    image: postgres:12-alpine
    container_name: db
    environment:
      - POSTGRES_USER=pixies
      - POSTGRES_PASSWORD=pixies
      - POSTGRES_DB=pixies
    ports:
      - 5432:5432
    # new
    volumes:
      - data:/var/lib/postgresql/data

  # new
  backend:
    build: .
    environment:
      - PORT=80
      - SECRET_KEY=hushush
      - DATABASE_URL=postgresql://pixies:pixies@db:5432/pixies
      - ALLOWED_HOSTS=*
    ports:
      - 9090:80
    depends_on:
      - db

volumes:
  data:
```
We simply add another service to this file and configure it. As our Django app currently expects a few environment values, we specify PORT, SECRET_KEY, DATABASE_URL and ALLOWED_HOSTS here.
But wait a minute. When we ran the Django app in development mode, we specified `localhost` as the host name for the database. Now we are changing it to `db`. What gives? From the point of view of the backend container, `localhost` means the container itself (not your dev machine) and, since we haven’t installed Postgres in it, `localhost` won’t work. The way docker-compose works is that it creates a virtual network in which each service is given a network name identical to its service name. This way services can reach one another over that virtual network. So the Django container will see the Postgres container as `db`.
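You can see the hostname switch directly by parsing the connection string we pass to the backend; Python’s standard `urlsplit` is enough for a quick check:

```python
from urllib.parse import urlsplit

# The URL we pass to the backend container in docker-compose.yml:
url = urlsplit("postgresql://pixies:pixies@db:5432/pixies")
print(url.hostname)  # → db  (the compose service name, not localhost)
print(url.port)      # → 5432
```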
Finally, since we are running gunicorn inside the container on port 80, we map that port to port 9090 on our dev machine, so we can access it with the browser. Let’s rebuild the images, just in case:
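Assuming the compose file sits in the project root, the rebuild is:

```shell
docker-compose build
```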
and start the whole setup:
```shell
docker-compose up -d
```
If everything goes well, the containers should be started and linked together. Let’s see if it works. Remember, we don’t have a root endpoint anymore, so we need to access either the api or the admin endpoint: http://localhost:9090/api. Does it work? Well, sort of. It returns some data, but the web page looks weird. If we dig into the problem, we will soon realize that the Django app fails to serve the CSS files and other static assets. We are going to fix that in the next module.