Pixies, part 2: What you wanted to know about Docker but were afraid to ask


Docker in a nutshell

Docker allows you to create self-contained, turnkey images which contain everything needed to run your application. These can be copied, shared, uploaded and deleted when they are not needed anymore. When I need to run PostgreSQL or, say, Redis on my box, I no longer bother installing the software and polluting my hard drive. I just pull the corresponding image and run it. When I don't need it anymore, I simply delete the image and my hard drive is back to the state it was in before.
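For instance, a typical cycle with Redis might look like this (using the stock redis image and its default port, purely as an illustration):

docker pull redis
docker run -d --name redis -p 6379:6379 redis
# ...use it, and when you are done:
docker rm -f redis
docker rmi redis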
We will build Docker images for the frontend and the backend, and we will also use a prebuilt database image during backend development. A note on terminology: when you read about Docker you will often encounter the term container, and some people are confused about the difference between an image and a container. Quite simply, a container is a running instance of an image.
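You can see the distinction right in the CLI:

docker images   # lists the images you have pulled or built
docker ps       # lists the running containers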

If you are familiar with VMs (virtual machines), Docker images/containers are conceptually similar. The difference is that Docker containers are more lightweight: much smaller, much faster and far less resource-hungry. They are also implemented differently under the hood, but we are not going to drill into that; there are plenty of resources on the Web if you are interested.

Dockerfile

A Dockerfile is pretty much a set of instructions Docker executes to produce an image. Go to the pixies/frontend directory and create an empty file:

touch Dockerfile

Enter a single line into that file:

FROM nginx:alpine

Although it is possible to build Docker images completely from scratch, it is rarely practical, so we usually start from some base image. In our case it is going to be nginx, a very popular and relatively lightweight web server which will serve our frontend files. alpine denotes a particular flavor of the nginx image, based on Alpine Linux, which is very small and therefore popular for Docker images.
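If you are curious, you can compare the flavors yourself; the exact sizes vary between releases, but the alpine tag is typically several times smaller:

docker pull nginx
docker pull nginx:alpine
docker images nginx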

Theoretically, we could try to build the image now. But before we do, let's optimize the build process a little so that Docker does not copy unnecessary files during the build. Create the file .dockerignore listing the files and directories you want Docker to ignore:

node_modules
.git
.vscode

Let’s build the image:

docker build -t pixies_frontend .

Notice the dot at the end: it denotes the current directory and tells Docker what to use as the build context and where to look for the Dockerfile. The image will be called pixies_frontend. If you have a reasonably good Internet connection it will take only a few seconds to build the image. Check that the image was built:

docker images | grep pixies_frontend

The image should be there. Let’s run it:

docker run -d --name frontend -p 8080:80 pixies_frontend

We ask Docker to run the image named pixies_frontend in detached mode (-d), give the name frontend to the resulting container, and map port 80 inside the container to port 8080 on the machine, since nginx listens on port 80 inside the container by default.
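You can always inspect the mapping of a running container:

docker port frontend
# prints something like: 80/tcp -> 0.0.0.0:8080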

Verify that the container is running:

docker ps | grep frontend

Now, in your browser, navigate to http://localhost:8080. You should be greeted by the nginx welcome page.
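If you prefer the terminal to the browser, curl tells the same story:

curl -I http://localhost:8080   # expect an HTTP 200 response with a Server: nginx header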

At the moment, our container does not contain our app; it just runs a bare nginx installation.

Peek into the nginx container

Let’s try to connect to our container and see what is there:

docker exec -it frontend sh

Believe it or not, you are inside the container now! By default, the nginx configuration is stored in the file /etc/nginx/conf.d/default.conf. Let's look at it:

cat /etc/nginx/conf.d/default.conf

There you will see some default nginx configuration settings, but, for now, the key part for us is this:

location / {
    root   /usr/share/nginx/html;
    index  index.html index.htm;
}

This shows where nginx looks for the web content: the location is /usr/share/nginx/html. Indeed, when we do

ls /usr/share/nginx/html

we will see that it contains the nginx welcome index.html file.

It is very important to realize the following:

  • the directory /usr/share/nginx/html is in the container, not on your machine!
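By the way, for quick checks like this you do not even need an interactive shell; a one-off docker exec from your machine works too:

docker exec frontend ls /usr/share/nginx/html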

So our plan is this: we will replace the default nginx content with our app. Exit the container and rebuild the frontend:

exit
npm run build

Modify your Dockerfile so Docker will copy the files from your build directory into the desired destination:

FROM nginx:alpine
#new
COPY build/ /usr/share/nginx/html/

Let's try it: remove the old container, rebuild the image and start it again:

docker rm -f frontend
docker build -t pixies_frontend .
docker run -d --name frontend -p 8080:80 pixies_frontend

When you go to http://localhost:8080 now, you should see your simple webpack app proudly responding with the “Pixies” title instead of the default nginx page. You may need to reload the page in the browser.

Whenever we change anything in our app, we have to rebuild the webpack bundle and copy the files over again. That is not a very optimal workflow, so let's optimize it.
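Concretely, every change currently means repeating the whole dance:

npm run build
docker rm -f frontend
docker build -t pixies_frontend .
docker run -d --name frontend -p 8080:80 pixies_frontend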

Multistage Docker builds

It would be better if our Dockerfile just copied the source files into the image and built the webpack bundle right there. But our base image is nginx, which has no npm or any other tooling needed to run webpack. We could install npm on top of nginx, but that would bloat the resulting image, as it would then carry npm, webpack and other libraries that are needed only during the build. So what do we do? Docker has a nice feature: instead of copying files from your machine, you can copy them from another image, possibly unrelated to the one you are building. This is exactly what a multistage build is:

  • In the first (or build) stage we build our webpack app from the source files.
  • In the second (or final) stage we copy the resulting web app, not from the machine but from the build stage. As a result, the final image stays lean and mean, and the intermediate build-stage image can simply be discarded.

So let’s add the build directory to our .dockerignore, so the webpack app is not copied from the machine:

node_modules
.git
.gitignore
.vscode
build

Next, modify the Dockerfile for the two-stage build:

FROM node:alpine AS build

WORKDIR /app
COPY package.json package-lock.json ./
RUN npm install
COPY . .
RUN npm run build

FROM nginx:alpine AS final

COPY --from=build /app/build/ /usr/share/nginx/html/

As you see:

  • our build stage is based on the node:alpine image, which has all the tools necessary to produce the webpack build;
  • we set /app as the working directory so that our files do not land in the root of the image's filesystem;
  • next, we copy package.json and package-lock.json, which track all the dependencies we need for the build;
  • then, we run npm install to install those dependencies;
  • we copy everything from the source directory, which includes the source files; .dockerignore takes care that we don't copy anything unnecessary;
  • finally, we run npm run build, which produces a neat webpack build sitting in our build stage;
  • as the last step, we copy the webpack build into the nginx-based image as before, but this time from the build stage, not from the machine.

We can rebuild the image and check that it works:
docker rm -f frontend
docker build -t pixies_frontend .
docker run -d --name frontend -p 8080:80 pixies_frontend

We can check in the browser and see that nothing has changed. But we have eliminated the need to produce the build on the machine, while still getting a lean, mean Docker image ready to be deployed live. That, however, is for the next module.
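You can see the payoff directly: the final image contains only nginx plus your static files, with no node, npm or node_modules layers (exact sizes vary, but expect tens of megabytes rather than hundreds):

docker images pixies_frontend
docker history pixies_frontend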

