What I’ve Learned About Developing in Docker

Table of Contents

  1. Developing with complex filesystem layouts
  2. Save yourself some bandwidth with a docker hub mirror
  3. Getting localhost to make sense
  4. Deploying swap like it’s an app
  5. Building super complex images
  6. Cleaning up images, volumes, and networks

Always expect the unexpected

Docker, the company, has not ceased to amaze me. They are solving some hard problems in (mostly) elegant ways. This article is about the more complex side of Docker life. If you are a beginner, this may be a little over your head. However, fear not: leave a comment and someone will help you wrap your head around what's going on. I've created a git repository with examples that may help you, especially if you are a hands-on learner.

If you are a veteran, some of these may feel familiar. If you can add more, please let me know in the comments below and I’ll add it to this article (with attribution of course)!

And now, on to the techniques…

Developing with complex filesystem layouts

example on github

Imagine a directory layout that looks something like this:

|- B
|- C
|  |- D

Inside a running container, you need B to be where D is… here’s a little known fact that people tend to discover by accident: you can nest volumes.

Here’s an example docker-compose.yml:

version: '2'
services:
    example:
        image: ubuntu:16.04
        volumes:
            - ./C:/app
            - ./B:/app/D
        command: "cat /app/D/content"

What's interesting is that the volumes can also be named volumes, as is the case when you are dealing with multi-platform development environments and using nodejs inside a container.

version: '2'
services:
    example:
        image: node:slim
        volumes:
            - ./src:/app
            - example_node_modules:/app/node_modules
        command: "npm install"

volumes:
    example_node_modules: {}

This setup will install the node_modules inside the container but keep them separate from your development environment. This allows scripts to function inside or outside the container.

Some caveats:

  • Volumes mounted inside other volumes create a filesystem boundary. This means that running rm -rf /app/node_modules in the second example won't actually do anything. Instead (and there's a simpler way, but I want to be explicit here), you have to run rm -rf /app/node_modules/* && rm -rf /app/node_modules/.* to also delete the dot-files in the mounted volume.
  • This can get confusing when layouts get gnarly. More than one developer has been puzzled that a file they created in D on the host machine isn't in D inside the container.
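The dot-file subtlety is easy to see even without Docker. Here's a plain-shell sketch (the paths are illustrative, and a real mounted volume would additionally refuse to be removed outright because it's a mount point):

```shell
# A plain directory standing in for the mounted node_modules volume.
mkdir -p /tmp/demo/node_modules
touch /tmp/demo/node_modules/left-pad.js /tmp/demo/node_modules/.package-lock.json

# The * glob does not match dot-files, so this leaves debris behind:
rm -rf /tmp/demo/node_modules/*
ls -A /tmp/demo/node_modules    # .package-lock.json is still there

# The .[!.]* pattern catches the dot-files without matching . or ..
rm -rf /tmp/demo/node_modules/.[!.]* /tmp/demo/node_modules/..?*
ls -A /tmp/demo/node_modules    # now empty
```

The `.[!.]*` form is the "simpler way" alluded to above: it avoids `rm` complaining about `.` and `..` the way a bare `.*` glob does.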

Save yourself some bandwidth with a docker hub mirror

example on github

This one is a bit more involved and only works on Docker for Mac, though I'll leave it up to some enterprising person to get it working on Docker for Windows as well.

One of the most annoying things about Docker is running something like docker-clean all to get back some much-needed disk space and then having to download all the base images all over again. Fear not: with this little hack, you get a local Hub mirror (and you can forget about logging in, forever).

We also use this to provide a login for the developers. We don't need to manage logins for each developer in Docker Hub; it's provided as part of the development environment. If they need push access, then we'll give them a login, but most developers don't need that.

So, this is what the docker-compose.yml file will look like:

version: '2'
services:
    mirror:
        image: registry:2
        ports:
            - '5000:5000'
        volumes:
            - ./registry:/var/lib/registry
            - ./configs/mirror_config/config.yml:/etc/docker/registry/config.yml
        restart: always
    redis:
        image: redis:latest
        restart: always

The config.yml will be mostly boilerplate (go see the example on github), but there’s one important part:

proxy:
  remoteurl: https://index.docker.io
  username: username
  password: password

You’ll want to replace (or remove) the username and password keys to use this locally. At BoomTown, we put a read-only user/pass so that docker login isn’t required to get up and running.
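For reference, here's a minimal sketch of what the whole config.yml might contain, pieced together from the registry's configuration reference. This is a sketch, not the file from the repo: the redis address assumes a compose service named redis, and the credentials are placeholders as above.

```yaml
version: 0.1
log:
  level: info
storage:
  cache:
    blobdescriptor: redis
  filesystem:
    rootdirectory: /var/lib/registry
redis:
  addr: redis:6379
http:
  addr: :5000
proxy:
  remoteurl: https://index.docker.io
  username: username
  password: password
```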

And finally, you’ll need a very important script to enable this:

#!/usr/bin/env bash
DIR=$(pwd)

docker-compose up -d

cd ~/Library/Containers/com.docker.docker/Data/database/
git reset -- com.docker.driver.amd64-linux
git checkout -- com.docker.driver.amd64-linux
cd com.docker.driver.amd64-linux
echo '{"registry-mirrors":["http:\/\/localhost:5000"],"debug":true,"storage-driver":"aufs","insecure-registries":["localhost:5000"]}' > etc/docker/daemon.json
git add etc/docker/daemon.json
git commit -m 'configuration for local hub mirror'
cd $DIR

echo "Configured docker hub registry mirror"

I'm not going to lie: I spent several weeks figuring out those 15 lines after they removed the pinata tool from OSX. This uses undocumented (rather, very lightly documented) APIs of Docker. It works at the time of this writing (Docker for Mac 1.12.2) and as far back as Docker for Mac 1.12.0-beta-20ish.

A few minutes after running ./init.sh, the whale will begin to dance in the status bar and restart, automatically detecting the configuration change.

This has saved me (and my team) countless hours when we have to wipe our development environments, since some of our layers are several hundred megabytes.

Getting localhost to make sense

Someday, you may be using php-fpm with nginx. You may be tempted to make them separate containers; in fact, you might do the work. You then run your web app (like WordPress), and everything seems to be fine. Except that it isn't.

When your PHP script calls back to localhost, it just doesn't work! It's expecting a web server at localhost, but instead it finds a php-fpm server that isn't listening on port 80.

Believe it or not (remember, I said expect the unexpected), Docker expected this, and it's really simple to fix with Docker Compose.

Here’s the example docker-compose.yml file:

version: '2'
services:
    nginx:
        build: images/nginx
        ports:
            - '80:80'
            - '443:443'
    php-fpm:
        build: images/php-fpm
        network_mode: "service:nginx"

One thing to keep in mind: you have to open the php-fpm container's ports on the nginx container. Normally, it looks something like this:

    [ NIC ]   [ NIC ]
       |         |
    [nginx]   [ fpm ]

By setting the network mode, it looks more like this:

      [ NIC ]
[nginx] -|
         |- [ fpm ]

When you realize this, it makes sense that nginx controls the ports, since it's higher up the graph. You can do this with many services, so that at the network level it's as though they are all running on the same machine, but their filesystems are isolated from one another (with the exception of shared volumes, of course).

Keep in mind that these services become coupled by this method: they must be deployed together, and their life-cycles become linked. It's better to change the running code so it doesn't connect to localhost and instead goes through some kind of smart load balancer like traefik or haproxy.

But in a pinch, this works…

Deploying swap like it’s an app

So, you built this awesome cluster … and you realize (after the fact) that you didn't get big enough machines. Well, while you rebuild your cluster, just deploy this little guy to each machine in the cluster.

It doesn’t need any network access, and since all the containers are sharing the same kernel, all containers will get access to this swap space immediately.

When you don’t need the swap space any longer, just kill the container and they’ll clean up after themselves.
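Something like the following compose file can do it. This is an untested sketch, not the actual deployment: the service name, image, and sizes are made up; the container must be privileged because swapon talks to the shared kernel; and the swap file has to live on a real host filesystem (mounted in as a volume), since you can't swap onto the container's own overlay filesystem. The trap is what does the clean-up when the container is killed:

```yaml
version: '2'
services:
    swap:
        image: alpine:3.4
        privileged: true
        volumes:
            - /var/tmp:/swap
        command: >
            sh -c "dd if=/dev/zero of=/swap/swapfile bs=1M count=1024
            && chmod 600 /swap/swapfile
            && mkswap /swap/swapfile
            && swapon /swap/swapfile
            && trap 'swapoff /swap/swapfile; rm -f /swap/swapfile; exit 0' TERM INT
            && while true; do sleep 1; done"
```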

Building super complex images

If you haven’t heard about it, you should check out rocker from grammarly.

It extends the Dockerfile grammar, adding some much-needed commands like MOUNT, TAG and PUSH, which are really just sugar on top of Docker. It also allows multiple FROM statements in a Dockerfile and passing artifacts between them. Rocker makes it easy to declare a build image and a runtime image in the same file.
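As an illustration, a hypothetical Rockerfile (all names made up) separating the build image from the runtime image might look something like this:

```dockerfile
FROM golang:1.7
ADD . /src
WORKDIR /src
# MOUNT persists a host-backed directory between builds (e.g. a dependency cache)
MOUNT /src/vendor
RUN go build -o /build/server .
# EXPORT stashes the artifact so a later FROM in the same file can IMPORT it
EXPORT /build/server

FROM alpine:3.4
IMPORT /build/server /usr/local/bin/server
CMD ["/usr/local/bin/server"]
TAG example/server:latest
```

The first FROM produces the binary with the full Go toolchain; the second packages only the artifact into a small runtime image.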

It’s not for every project, but in projects it is useful in, you get to delete a bunch of code. There’s no feeling like deleting code.

Cleaning up images, volumes, and networks

I mentioned it earlier, but docker-clean by zzrot is by far the most feature-complete. It's just a docker run command and bam, you get output showing how much disk space you cleaned up. Most tools don't give you disk-space feedback, so that's really cool, IMHO.

The end

Those are all the advanced tips, tricks and hacks for Docker that I can think of. If you have some of your own, let us know in the comments below and I’ll add them to the article.

Thanks for reading,


  • Mapu

    Thank you- very interesting tidbits.

  • Lajos Incze

    “There’s no feeling like deleting code.” – well said.

  • Was not aware about Rocker. Sometime I would just like to call a bash script. Would it be possible?

    • It simply extends the Dockerfile syntax, in fact a vanilla Dockerfile will work without any changes.