Using Docker for persistent development environments

When thinking about Docker and what it is designed to do, what comes to mind are disposable containers: containers instantiated as many times as necessary to run an application or to complete some work, then deleted as soon as the work is done or the application needs to be restarted.

It's what you do when you use a plain Dockerfile to package an application and all its dependencies into an image at build or release time, for example, and then instantiate (and delete) containers from that image on your production machines (Kubernetes, anyone?).
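In shell terms, that disposable pattern is just the following (the myapp image name is an illustrative assumption):

docker build -t myapp .     # package the app and its dependencies into an image
docker run --rm myapp       # run a throwaway container; --rm deletes it on exit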

It's what you do when you compose one or more containers to create a hermetic environment for your build or test system to run in, with a new container instantiated for each build or test run and deleted as soon as the process has completed.

Over the last year, however, I have learned to love using Docker for something it was not quite designed to do: persistent development environments.

Before you attack me violently for committing such a horrible sin, let me explain the use case first.

Let's say you are a developer working on a number of different projects at the same time. Some of those projects are quite different from each other: there's some JavaScript with different versions of the tooling, there's some C code you are playing with, one project uses a Go backend, and another two use different versions of Node.js.

Let's also say that, like most developers, you use a single laptop for all your work.

In this scenario, ideally, you'd want each project to have as few dependencies as possible on the host operating system, and to be as isolated as possible from the other projects. Your C projects use LLVM libraries? They should not use the LLVM libraries on your system: a system update could break them all. Your Node.js projects? Same thing: can you keep a different version of Node.js per project?

Of course, this is generally solved by dockerizing the build environment itself: run the whole build in a Docker container, together with all the dependencies it needs. But how do you kick off this build at all? By manually cutting and pasting commands from a doc? A Makefile? A shell script?
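For reference, such a dockerized build typically boils down to a one-liner like this (the build-env image name and the make invocation are illustrative assumptions):

docker run --rm -v "$PWD":/src -w /src build-env make
# --rm throws the container away when the build finishes; -v mounts the
# source tree into the container; -w makes it the working directory.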

Will your developer on a Mac be able to kick off this Makefile? What will the dependencies of the shell script be? Will you need yet another container just to kick off the dockerized build environment? And what if it's hard to write a Dockerfile at all, but you still want to hack on a Linux project from your Mac or Windows machine?

This is where persistent development environments are most useful. Basically:

  1. On your Mac/Windows/Linux box, you create a container starting from your target operating system (e.g., will you build/run/test your code under Debian? Use a Debian container).

  2. Get a shell in there and start hacking. Build and test. Use it as if it were your own machine: install and update dependencies, do whatever you like. Or, if you prefer, use your favourite graphical editor on the files directly, outside the container. Keep using this same container for as long as your build system and its full set of dependencies have not been fully automated, made portable and hermetic, and simplified enough to run on your host system. Stop and start the container as needed. Or just accept that working like this is good enough, and that life may be simpler if your build and test systems only need to worry about running on one single OS, with every developer using a specific container as described here.

  3. If you need to hack on a different project, just start a different container. If you need to make some dangerous changes to your container (e.g., a system update), create a new container based on it, as sketched after this list.

  4. Never touch your own host system. All it needs is, well, Docker.
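Compressed into commands, the lifecycle looks roughly like this (names and paths here are illustrative; the walkthrough below uses the real ones):

docker run -dt -v "$HOME/projects":/opt/projects --name dev-env debian:10 bash   # 1. create from target OS
docker exec -it dev-env bash             # 2. get a shell, hack away
docker stop dev-env                      # pause it...
docker start dev-env                     # ...and resume later, even after a reboot
docker commit dev-env dev-env-snapshot   # 3. snapshot before risky changes
docker run -dt -v "$HOME/projects":/opt/projects --name dev-env2 dev-env-snapshot bash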

If you've read my blog before, you may well remember that in the last few years, I've pretty much done the same using libvirt.

Given that this is not quite the recommended or common pattern for using Docker, it is often hard to find the correct commands to use.

The practice

Let's look at a concrete use case: I'm on my work MacBook, developing an app that will ultimately run on Linux, and whose CI/CD system runs on Linux.

The build system was never quite written to run on macOS. Even though the intent was for the build to be portable, a number of GNU extensions crept in, and it's now hard to get rid of them. Whenever I hack on this app, I benefit from simply using the same tools that run on my CI/CD system.

I'll start by creating my "developer environment":

docker run -dt -v /home/me/projects:/opt/projects -p 5000-6000:5000-6000 --name project-foo debian:10 bash

This will start a Docker container named project-foo, mapping my source code in /home/me/projects to /opt/projects and exposing ports 5000 to 6000 as local ports. This allows me, for example, to use my favourite Mac editor to modify the code in /home/me/projects and see the changes in the container, or to start my dev app on port 5432 and access it at http://127.0.0.1:5432. I could of course use --network host instead of -p 5000-6000:5000-6000, as per my other article, to expose all ports at once with great performance, but unfortunately that does not work on Mac or Windows.
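On a Linux host, the same command with host networking would look like this, with no port mappings needed:

docker run -dt -v /home/me/projects:/opt/projects --network host --name project-foo debian:10 bash
# Every port the app listens on inside the container is directly reachable on the host.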

I can stop or start this container at any time with:

docker stop project-foo
docker start project-foo

even after a reboot of my machine, and I can see it is running with:

$ docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                              NAMES
b9ff98190412        debian:10           "bash"                   11 minutes ago      Up 22 seconds       0.0.0.0:5000-6000->5000-6000/tcp   project-foo

Now, I can get as many shells as I need in this container by running:

docker exec -it project-foo bash

In this shell, I can install any software I need, in complete isolation from other projects and from the host operating system, and still use it mostly as if it were local.
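For example, I can pull in a typical build toolchain without touching the host (the package list is just an illustration):

apt-get update
apt-get install -y build-essential git curl
# These packages land in the project-foo container only, not on my MacBook.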

I can even use a graphical editor to modify the files in /home/me/projects, and run a watcher (like ibazel) in the container to have the project rebuilt automatically.
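Assuming the project uses Bazel and has a target named //app (an illustrative name), that would be something like:

ibazel build //app
# ibazel watches the sources mounted from the host and re-runs the build on every change.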

Now, let's say I want to change the port mappings, or instantiate another copy of my development container so I can install a new version of GCC or LLVM and see if the project still builds. All I have to do is:

docker commit project-foo project-foo

which snapshots the current state of the project-foo container into an image with the same name, and then:

docker run -dt -v /home/me/projects:/opt/projects -p 6000-7000:6000-7000 --name project-foo2 project-foo bash

to start project-foo2 and, well, keep hacking around: same source code, but running in two very different environments at the same time.
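When the experiment is over, the extra container can be thrown away without touching the original one:

docker stop project-foo2
docker rm project-foo2
# project-foo and the committed project-foo image are unaffected.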

And that's all for now.

