The Docker Dilemma: Benefits and risks going into production with Docker



Over a period of more than one year I've followed the Docker hype. What is it about? And why do all developers seem to absolutely want to use Docker and not other container technologies?

Important note: Although it may seem that I'm a sworn enemy of Docker, I am not! I find all kinds of new technologies interesting, to say the least. But I'm a skeptic, and always have been, when it comes to phrases like "this is the ultimate solution to all your problems".

So this article mainly documents the most important points I dealt with over a period of one year, mainly handling risks and misunderstandings and trying to find solutions for them.

Note that this article was written in 2016. Since then, container environments have changed drastically (mainly due to Kubernetes). However, some of the thoughts and potential security risks mentioned in this blog post still apply today.

Initial research: What are Docker containers?

When Docker came up the first time as a request (which then turned into a demand), I began my research and came to completely understand what Docker was about. Docker was created to fire up new application instances quickly and therefore allow greater and faster scalability. A good idea, basically, which sounds very interesting and makes sense - as long as your application can run independently. What I mean by that is:

  • Data is stored elsewhere (not on local file system), for example in an object store or database which is accessed by the network layer
  • There are no hardcoded (internal) IP addresses in the code
  • The application is NOT run as root user, therefore not requiring privileged rights
  • The application is scalable and can run in parallel in several containers
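A minimal sketch of what such an independent container launch could look like on the command line. The image name, UID and database host below are made-up assumptions, not from any setup described in this article:

```shell
# Hypothetical example: run an app as an unprivileged user with a
# read-only root filesystem, pointing it at an external database over
# the network instead of storing data locally (all names are made up).
docker run -d \
    --user 1000:1000 \
    --read-only \
    -e DB_HOST=db.example.com \
    -e DB_PORT=5432 \
    myapp:latest
```

The --user and --read-only flags have existed in the docker CLI since before this article was written; they are exactly the knobs for the "no root" and "no local data" points above.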

The problem with persistent data

But the first problems already arose. The developer in question (let's call him Dave) wanted to store data in the container. He didn't care if his application ran as root or not (I'm quoting: "At least then I got no permission problems"). And he wanted to access an existing NFS share from within the container.

I told him about Linux Containers and that this would be better solved with the LXC technology. To my great surprise, he didn't even know what LXC was. So not only was the original concept of Docker containers misunderstood, the origins of the project (Docker was originally based on LXC until Docker eventually replaced it with its own library, libcontainer) were not even known.

Another reason to use Docker, according to this developer: "I can just install any application I want as a Docker container - and I don't even need to know how to configure it." Good lord. It's as if I insisted on building a car myself just because I don't want anyone else to do it. The fact that I have no clue how to build a car obviously does not matter.

More or less in parallel, another developer from another dev team (let's call him Frank) was also pushing for Docker. He created his code in a Docker environment (which is absolutely fine) using a MongoDB in the background. It's important to understand that using a code library to access a MongoDB and managing a MongoDB server are two entirely different things, requiring different knowledge.

By installing MongoDB from a Docker image (he had found on the Internet) he had a working MongoDB, yes. But what about the tuning and security settings of MongoDB? These were left as is, because the knowledge of managing MongoDB was not there. As I've been managing MongoDB since 2013, I know where to find its most important weaknesses and how to tackle them (I wrote about this in an older article "It is 2015 and we still should not use MongoDB (POV of a Systems Engineer)"). If I had let this project go into production as is, the MongoDB would have been available to the whole wide world - without any authentication!

So I was able to convince this developer that MongoDB should be run separately, managed separately, and most importantly: MongoDB stores persistent data. Don't run this as a Docker container.
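For completeness, this is the kind of setting such a pulled image typically misses: enabling authentication in the MongoDB server configuration. A minimal sketch in the YAML config format (MongoDB 2.6 and later); the bind address is an assumption and would need to fit the actual network setup:

```yaml
# /etc/mongod.conf (excerpt) - hypothetical minimal hardening
security:
  authorization: enabled   # require users to authenticate
net:
  bindIp: 127.0.0.1        # do not listen on all interfaces
```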

Performance, troubleshooting and additional risks

While I was able to talk some sense into Frank, Dave still didn't see any issues or risks. So I created the following list to have an overview of unnecessary risks and problems:

  • Read-only file system (OverlayFS, layers per app): you can temporarily alter files, but at the next boot of the container these changes are gone. You have to redeploy the full container, even to fix a typo in a config file. This also means that no security patches can be applied inside a running container.
  • If you want to save persistent data, an additional mount of a data volume is required. This adds complexity, dependencies and risks (see details further down in this article).
  • Shutdown/Reboot means data loss, unless you are using data volumes or your application is programmed smart enough to use object stores like S3 (cloud-ready).
  • If you use data volumes mounted from the host, you lose flexibility of the containers, because they're now bound to the host.
  • Docker containers are meant to run one application/process, no additional services and daemons. This makes troubleshooting hard, because direct SSH is not possible and monitoring and backup agents are not running. You can solve this by using a Docker image already prepped with all the necessary tooling. But if you add all this in the first place, LXC would be a better choice.
  • A crash of the application which crashes the container cannot be analyzed properly, because log files are not saved (unless, again, a separate data volume is used for the logs).
  • Not a full network stack: Docker containers are not "directly attached" to the network. They're connected through the host and connections go through Network Address Translation (NAT) firewall rules. This adds additional complexity when troubleshooting network problems.
  • The containers run as root and install external content through public registries (Docker Hub, for example). Unless this is set up differently, using an internal and private Docker registry, this adds risks. What is installed? Who verified the integrity of the downloaded image/software? This is not just me saying this; it's proven that this is a security problem. See the InfoQ article Security vulnerabilities in Docker Hub Images.
  • OverlayFS/ReadOnly FS are less performant compared to "classical" ext4 or xfs FS.
  • In general, troubleshooting a problem will take more time compared to "classic" systems or Linux containers because of the additional complexity: the network stack, additional file system layers, data volume mounts, missing log files and image analysis.
  • Most of these problems can be solved with workarounds. For example by using your own registry with approved code. Or rewriting your application code to use object stores for file handling. Or creating custom base images which contain all your necessary settings and programs/daemons. Or using a central syslog server. But as we all know, workarounds mean additional work, which means costs.
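As an example of one such workaround, the "missing log files" point can be mitigated with Docker's syslog logging driver (available since Docker 1.6). The syslog server address below is an assumption:

```shell
# Hypothetical example: send container stdout/stderr to a central
# syslog server, so logs survive a container crash or redeployment.
docker run -d \
    --log-driver=syslog \
    --log-opt syslog-address=udp://syslog.example.com:514 \
    myapp:latest
```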

Data in volumes and application upgrades

Even with all these technical points, Dave went on with his "must use Docker for everything" monologue. He was even convinced he wanted to manage all the servers himself, even database servers. I asked him why he'd want to do that in the first place, and his answer was "So I can try a new MySQL version".

Let's assume for a moment that this is a good idea and MySQL runs as a Docker container with an additional volume mounted in the container at /var/lib/mysql. Now Dave deploys a new Docker container with a new MySQL version - being smart and shutting down the old container first. As soon as the new MySQL starts up, it will go over the databases found in /var/lib/mysql and upgrade the tables according to the new version (mainly the tables in the mysql schema).

Now let's assume that after two days a bug is found in the production app: the current application code is not fully compatible with the newer MySQL version. You cannot downgrade to the older MySQL version anymore because the tables were already altered. I've seen such problems in the past (see Some notes on a MySQL downgrade 5.1 to 5.0), so I personally know the problems of downgrading already upgraded data. But obviously my experience and my warnings didn't count and were ignored.
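The only reliable way back in such a scenario is a logical dump taken before the upgrade. A sketch of that safety net; the container name, paths and the MYSQL_ROOT_PASSWORD variable are assumptions:

```shell
# Hypothetical example: dump all databases from the old container
# before starting the new MySQL version, so a downgrade stays possible.
docker exec mysql-old sh -c \
    'exec mysqldump --all-databases -uroot -p"$MYSQL_ROOT_PASSWORD"' \
    > /backup/pre-upgrade-dump.sql
```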

Eventually Dave's team started to build their own hosting environment. I later heard that they had destroyed their entire Elasticsearch data, because something went wrong within their Docker environment and the data volume holding the ES data...

Risks with volume mounts

Meanwhile I continued my research and created my own test lab using plain Docker (without any orchestration). I came across several risks. Especially the volume mounts from the host caught my eye. A Docker container is able to mount any path from its host when the container is started up (docker run). As a simple test, I created a Docker container with the following volume information:

docker run ... -v /:/tmp ...

The whole file system of the host was therefore mounted in the container as /tmp. With write permissions. Meaning you can delete your entire host's filesystem, by error or on purpose. You can also read and alter the (hashed) passwords in /etc/shadow (in this case by simply accessing /tmp/etc/shadow in the container):

root@5a87a58982f9:/# cat /tmp/etc/shadow | head

Basically by being root in the container with such a volume mount, you take over the host - which is supposed to be the security guard for all containers. A nice article with another practical example can be found here: Using the docker command to root the host (totally not a security issue).
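To illustrate how short the step from reading files to a full takeover is: with the host's root filesystem mounted at /tmp as in the example above, a single command inside the container yields a root shell operating on the host's filesystem:

```shell
# Inside the container from the example above: chroot into the mounted
# host filesystem. Every following command runs as root against the
# host's files.
chroot /tmp /bin/bash
```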

Another risk, less dangerous but still worth mentioning, is the mount of the host's Docker socket (/var/run/docker.sock) into a container. Such a container is then able to pull information about all containers running on the same host. This information sometimes contains environment variables, and some of these may contain cleartext passwords (e.g. to start up a service connecting to a remote DB with given credentials, see The Dangers of Docker.sock).

In general you can find a lot of articles warning you about exposing the Docker socket. Interestingly, these articles were mainly written by systems engineers, rarely by developers.

Besides the volumes, another risk is the creation of privileged containers. They are basically allowed to do anything, even when they're already running. This means that within a running container you can create a new mount point and mount the host's file system right into the container. For unprivileged containers this would only work during the creation/start of the container. Privileged containers can do that anytime.
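A sketch of that difference; the device name is an assumption and depends on the host:

```shell
# Hypothetical example: inside an already running privileged container,
# the host's disk can be mounted at any time - no -v flag needed.
docker run -it --privileged ubuntu:14.04 bash
# ...then, inside the container:
mount /dev/sda1 /mnt   # host filesystem now accessible under /mnt
```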

Attempt to prevent mounting volumes into containers 

My task, being responsible for systems and their stability and security, is to prevent volumes and privileged containers in general. Once more: from a container's point of view, a volume is only needed if persistent data needs to be written to the local filesystem. And if you do that, Docker is not the right solution for you anyway.

I started looking, but to my big surprise there is no way to simply prevent Docker containers from creating and mounting volumes. So I created the following wrapper script, which acts as the main "docker" command:

#!/bin/bash
# Simple Docker wrapper script by
CMD="$@"

echo "Your command was: $CMD" >> /var/log/dockerwrapper.log

if echo "$CMD" | grep -e "-v" > /dev/null; then echo "Parameter for volume mounting detected. This is not allowed."; exit 1; fi
if echo "$CMD" | grep -e "--volume" > /dev/null; then echo "Parameter for volume mounting detected. This is not allowed."; exit 1; fi
if echo "$CMD" | grep -e "--privileged" > /dev/null; then echo "Parameter for privileged containers detected. This is not allowed."; exit 1; fi

/usr/bin/docker.orig "$@"
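One weakness of the grep approach above is that it matches substrings, so an image tag that happens to contain "-v" would also be blocked. A possible refinement (my own sketch, not part of the original script) matches whole arguments instead:

```shell
# Hypothetical refinement: check each argument as a whole word, so only
# the real volume/privileged flags are rejected, not substrings.
is_forbidden() {
    for arg in "$@"; do
        case "$arg" in
            -v|--volume|--volume=*|--volumes-from|--privileged) return 0 ;;
        esac
    done
    return 1
}

# Usage inside the wrapper:
# if is_forbidden "$@"; then echo "Not allowed."; exit 1; fi
```

Note that this still only guards the CLI; like the original script, it does nothing against calls going directly through the Docker socket.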

While this works on the local Docker host (using the docker command), it does not work when the Docker API is used through the Docker socket. And because in the meantime we decided (together with yet another developer, who understands my concerns and will be in charge of the Docker deployments) to use Rancher as the overlying administration interface (which in the end uses the Docker socket through a local agent), the wrapper script is not enough.

So prevention should be configurable either in Docker or in Rancher; most importantly, Docker itself should support security configurations to disable certain functions or container settings (comparable to disable_functions in PHP).

In my attempts to prevent Docker from mounting host volumes, I also came across a plugin called docker-novolume-plugin. This plugin prevents the creation of data volumes - but unfortunately does not prevent the mounting of the host's filesystem. I opened a feature request issue on the GitHub repository, but as of today it's not resolved.

Another potential solution could have been a working AppArmor profile for the Docker engine. But the AppArmor profile only applies to a running container itself, not to the engine creating and managing containers:

Docker automatically loads container profiles. The Docker binary installs a docker-default profile in the /etc/apparmor.d/docker file. This profile is used on containers, not on the Docker Daemon.

I also turned to Rancher and created a feature request issue on their GitHub repo as well - to be honest, with little hope that this will be implemented soon, because as of this writing the Rancher repo still has over 1200 open issues waiting to be addressed and solved.

So neither Docker, nor Rancher, nor AppArmor is at this moment capable of preventing dangerous (and unnecessary) container settings.

Monitoring mounted volumes using Rancher API

How to proceed from here? I didn't want to "block" the technology, yet "volume" and "privileged" are the clear no-gos for a production environment (once again, they're OK in development environments). I started digging around in the Rancher API, which is actually a very nice and easy to learn API. It turns out a container can be stopped and deleted/purged through the API using an authorization key and password. I decided to combine this with our existing Icinga2 monitoring. The goal: on each Docker host, a monitoring plugin is called every other minute. This plugin goes through every container using the IDs from the "docker ps" output.

root@dockerhost:~# docker ps | grep claudio
5a87a58982f9  ubuntu:14.04.3  "/bin/bash"   5 seconds ago  Up 4 seconds             r-claudiotest

This ID represents the "externalId" which can be looked up in the Rancher API. Using this information, the Rancher API can be queried to find out about the data volumes of this container in the given environment (1a12), using the "externalId_prefix" filter:

curl -s -u "ACCESSUSER:ACCESSKEY" -X GET -H 'Accept: application/json' -H 'Content-Type: application/json' -d '{}' '' | jshon -e data -a -e dataVolumes

As soon as something shows up in this array, the container is considered bad and further action takes place. The Docker "id" within the Rancher environment can be figured out, too:

curl -s -u "ACCESSUSER:ACCESSKEY" -X GET -H 'Accept: application/json' -H 'Content-Type: application/json' -d '{}' '' | jshon -e data -a -e id

Using this "id", the bad container can then be stopped and deleted/purged:

curl -s -u "ACCESSUSER:ACCESSKEY" -X POST -H 'Accept: application/json' -H 'Content-Type: application/json' -d '{"remove":true, "timeout":0}' ''

Still, this does not prevent the creation and mounting of volumes or of the host's filesystem, nor does it prevent privileged containers upon creation. But it will ensure that such containers - created on purpose, by error or through a hack - are immediately destroyed. Hopefully before they can do any harm. I sincerely hope that Docker will look more into security and Docker settings though. Without such workarounds and efforts - and a cloud-ready application - it's not advisable to run Docker containers in production. And most importantly: you need the technical understanding of your developer colleagues about where and when Docker containers make sense.
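Putting the pieces together, the monitoring plugin could look roughly like this. The Rancher URL, environment id and access keys are placeholders, and the stop action endpoint is an assumption based on the Rancher v1 API; this is a sketch of the idea, not the exact plugin we run:

```shell
#!/bin/bash
# Hypothetical sketch: find containers with data volumes via the
# Rancher API and stop/purge them.
RANCHER="https://rancher.example.com/v1/projects/1a12"
AUTH="ACCESSUSER:ACCESSKEY"

for ext_id in $(docker ps -q --no-trunc); do
    # Ask the Rancher API about this container's data volumes
    json=$(curl -s -u "$AUTH" "${RANCHER}/containers?externalId_prefix=${ext_id}")
    vols=$(echo "$json" | jshon -e data -a -e dataVolumes)
    if [ -n "$vols" ] && [ "$vols" != "[]" ]; then
        id=$(echo "$json" | jshon -e data -a -e id -u)
        # Stop and remove the offending container
        curl -s -u "$AUTH" -X POST \
            -H 'Content-Type: application/json' \
            -d '{"remove":true, "timeout":0}' \
            "${RANCHER}/containers/${id}/?action=stop"
    fi
done
```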

The future and experience will tell more

For now the path to Docker continues with the mentioned workaround, and there will be a steep learning curve, probably with some inevitable problems at times - but in the end (hopefully) a stable, dynamic and scalable production environment running Docker containers.

How did YOU tackle these security issues? What kind of workarounds or top-layer orchestration are you using to prevent the dangerous settings? Please leave a comment, it would be much appreciated!


Comments (newest first)

ck from Switzerland wrote on Dec 18th, 2016:

Hello Ralph. Thanks for your comment! I agree with you 100%. I see (and I know ;-)) you have put a lot of thought into Docker, and this is needed. The problem is developers like "Dave" I mentioned, who want to run legacy applications in Docker - just because it's cool.

"Or another next step I would try is to hack the application layer."
This is actually the way most applications and systems are hacked. I've dealt with it hundreds of times. And here Docker actually provides a great solution with the read-only FS - the hack is gone after a reboot. But of course this only helps when no volumes are mounted.

"Even if you need some time to set it up, you can invest a lot of saved time into securing the systems. Means in summary, you will still save time."
I agree, too. However, as the containers are built on top of an existing image, this image needs to be patched regularly.

"In my view a state of the art application doesn't use mounted volumes anyway. Even the mid sized applications need to be scalable and flexible nowadays"
Then Docker is the right way to go. As I mentioned before, the problem is Dave with a legacy application and system setup which require mounts as a local file system.

"you've mentioned "Docker containers are meant to run one application/process, no additional services and daemons...". I don't agree with that."
I don't agree either. This is what Docker was supposed to be at the beginning (the definition of Docker) - it is not my own opinion of what, imho, shouldn't be done with a container. That's also why I added it to the "negative list": this "definition" is not helpful at all. Thankfully nobody has to follow it though.

So I'm actually looking forward a lot to going into PROD soon with the Docker setup. With developers like the one I mentioned last (the third one) it's possible, and it's fun, too. But I'm still a bit shocked that Docker allows all settings by default (as of today). A simple config file, telling the Docker daemon what settings to allow - or the opposite, to disallow - would be a simple and effective solution.

Ralph Meier from Switzerland wrote on Dec 18th, 2016:

Hi Claudio

I'm an application developer and we already have talked together several times about security issues with docker.

I see that there are some flaws with docker, but I also want to share my opinion about the general topic regarding security and how we "should" develop applications.

1. It's right, there are some flaws with docker. But it's still very hard to hack a system on a system level. It's much easier to do some social engineering and just ask some guy for his password. Or another next step I would try is to hack the application layer. Only as a last resort would I try to hack a system on a system level.

2. The development/delivery boost is enormous with docker. Even if you need some time to set it up, you can invest a lot of saved time into securing the systems. Means in summary, you will still save time.

3. In my view a state of the art application doesn't use mounted volumes anyway. Even the mid sized applications need to be scalable and flexible nowadays, and to be able to do that, you try to use services to fulfill those requirements. Even for logging we use a log orchestration service. Then you have no problems if someone kills your machine.

Maybe just one additional comment: you've mentioned "Docker containers are meant to run one application/process, no additional services and daemons...". I don't agree with that. This was the first idea at the beginning, but practical usage has shown that it's absolutely no problem to use more than one service in a container. It was the same with microservices. In the beginning there was a rule to not have more than 100 lines of code for a microservice; this opinion has changed drastically.

