In a perfect stateless container world, Docker/application containers would not write to the local disk at all. In practice, however, the filesystem can still run out of space (temporary files, application logs, etc). With proper monitoring in place, the warning arrives in advance, so there is time to react.
root@dockerhost:~# df -h /var/lib/docker
Filesystem Type Size Used Avail Use% Mounted on
/dev/mapper/vgdocker-lvdocker ext4 50G 46G 1.6G 97% /var/lib/docker
The docker command offers a "system" sub-command which can help identify how much disk space is used by the container eco-system. However, this command turns out to be only partly helpful, as it only accounts for images, the containers' writable layers, local volumes and the build cache:
root@dockerhost:~# docker system df
TYPE TOTAL ACTIVE SIZE RECLAIMABLE
Images 43 43 15.55GB 5.102GB (32%)
Containers 57 56 346.2MB 0B (0%)
Local Volumes 5 4 56.68MB 36B (0%)
Build Cache 0 0 0B 0B
The total collected size of roughly 16GB is nowhere near the 46GB seen in the filesystem usage above.
If the used disk space is not caused from within a container, the per-container directories under /var/lib/docker/containers (outside of the containers' filesystems) should be checked, too:
root@dockerhost:~# du -ks /var/lib/docker/containers/* | sort -n | tail
31096 /var/lib/docker/containers/51cdfabd2c18e509644454bd479c062582b55e6a5ee679256f2e4a6b3f523126
32180 /var/lib/docker/containers/723132a88b9a8a50f8a747875f01da1fe25497b8a9a47372c3eb96435bda777b
45180 /var/lib/docker/containers/3d739834517dbd76f8756e4280d42d1f7a6a369ef432afc228455001e903778d
51308 /var/lib/docker/containers/76d2ef75ae03b3bec362bdc135ec3add95c3bd8550b817a6a3373d493101fe0a
66768 /var/lib/docker/containers/0e0cfa9079f910438260cf76ac051e959f0d0f572893c130e790a912590d04db
433220 /var/lib/docker/containers/3422c8225917a8997444e20fbcbd93b2bc966775e63da1937b111675abf84273
436536 /var/lib/docker/containers/9ac37529eb51d5af264ebfd7b9288b9ba152d50739b8553e241cff3276cecb4c
567556 /var/lib/docker/containers/b444b0ce5972cb6639c12b610ee008845aded2baf991b64278443281460ea629
1103040 /var/lib/docker/containers/fabe6a6060e123733eb42560a9fbd178dd20a7955e4c8f45a7e38632ac4e498b
23962464 /var/lib/docker/containers/a17749f9a33b6501d8b48ee31591a0f27c9e472189b8f32a5e849fe6db419328
Indeed. Container a17749f9... uses a large part of the whole file system:
root@dockerhost:~# du -ksh /var/lib/docker/containers/a17749f9a33b6501d8b48ee31591a0f27c9e472189b8f32a5e849fe6db419328
23G /var/lib/docker/containers/a17749f9a33b6501d8b48ee31591a0f27c9e472189b8f32a5e849fe6db419328
Looking inside this directory, it's easy to see that the container's "console" log file (stdout/stderr) is responsible for eating all this disk space:
root@dockerhost:~# ll /var/lib/docker/containers/a17749f9a33b6501d8b48ee31591a0f27c9e472189b8f32a5e849fe6db419328
total 23963048
-rw-r----- 1 root root 24538098764 Aug 15 07:46 a17749f9a33b6501d8b48ee31591a0f27c9e472189b8f32a5e849fe6db419328-json.log
drwx------ 2 root root 4096 Jul 19 14:50 checkpoints
-rw------- 1 root root 6579 Jul 19 14:50 config.v2.json
-rw-r--r-- 1 root root 1547 Jul 19 14:50 hostconfig.json
-rw-r--r-- 1 root root 13 Jul 19 14:50 hostname
-rw-r--r-- 1 root root 177 Jul 19 14:50 hosts
drwx------ 3 root root 4096 Jul 19 14:50 mounts
-rw-r--r-- 1 root root 111 Jul 19 14:50 resolv.conf
-rw-r--r-- 1 root root 71 Jul 19 14:50 resolv.conf.hash
If the json-file logs are suspected from the start, the docker inspect command can be used to quickly check the log sizes across all running containers:
root@dockerhost:~# docker inspect --format '{{.LogPath}}' $(docker ps --format '{{.ID}}') | xargs du -ks | sort -n | tail
18760 /var/lib/docker/containers/b11aa795ce5bef78bf90c355cbcf7b47a565774654cb8443bc4c82d54c9e8569/b11aa795ce5bef78bf90c355cbcf7b47a565774654cb8443bc4c82d54c9e8569-json.log
24520 /var/lib/docker/containers/75bcb03368cab475b00e736e13c09a552c66db516ea7f217c313e9770cf314dd/75bcb03368cab475b00e736e13c09a552c66db516ea7f217c313e9770cf314dd-json.log
45140 /var/lib/docker/containers/3d739834517dbd76f8756e4280d42d1f7a6a369ef432afc228455001e903778d/3d739834517dbd76f8756e4280d42d1f7a6a369ef432afc228455001e903778d-json.log
51268 /var/lib/docker/containers/76d2ef75ae03b3bec362bdc135ec3add95c3bd8550b817a6a3373d493101fe0a/76d2ef75ae03b3bec362bdc135ec3add95c3bd8550b817a6a3373d493101fe0a-json.log
66760 /var/lib/docker/containers/0e0cfa9079f910438260cf76ac051e959f0d0f572893c130e790a912590d04db/0e0cfa9079f910438260cf76ac051e959f0d0f572893c130e790a912590d04db-json.log
433200 /var/lib/docker/containers/3422c8225917a8997444e20fbcbd93b2bc966775e63da1937b111675abf84273/3422c8225917a8997444e20fbcbd93b2bc966775e63da1937b111675abf84273-json.log
436524 /var/lib/docker/containers/9ac37529eb51d5af264ebfd7b9288b9ba152d50739b8553e241cff3276cecb4c/9ac37529eb51d5af264ebfd7b9288b9ba152d50739b8553e241cff3276cecb4c-json.log
567528 /var/lib/docker/containers/b444b0ce5972cb6639c12b610ee008845aded2baf991b64278443281460ea629/b444b0ce5972cb6639c12b610ee008845aded2baf991b64278443281460ea629-json.log
1103392 /var/lib/docker/containers/fabe6a6060e123733eb42560a9fbd178dd20a7955e4c8f45a7e38632ac4e498b/fabe6a6060e123733eb42560a9fbd178dd20a7955e4c8f45a7e38632ac4e498b-json.log
23968192 /var/lib/docker/containers/a17749f9a33b6501d8b48ee31591a0f27c9e472189b8f32a5e849fe6db419328/a17749f9a33b6501d8b48ee31591a0f27c9e472189b8f32a5e849fe6db419328-json.log
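Note that the one-liner above only covers running containers. Stopped containers keep their json-file logs on disk as well, so a variant using docker ps -aq catches those too. A sketch, guarded so it degrades gracefully on a host without Docker:

```shell
# Check log sizes across ALL containers, including stopped ones.
# Guarded: falls back to a message if docker is not available here.
if command -v docker >/dev/null 2>&1; then
    logsizes=$(docker ps -aq | xargs --no-run-if-empty docker inspect --format '{{.LogPath}}' \
        | xargs --no-run-if-empty du -ks 2>/dev/null | sort -n | tail)
fi
echo "${logsizes:-no container logs found (or docker not available)}"
```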
To immediately reclaim the used disk space, the log file can be emptied in place (do not delete the file, as dockerd keeps it open):
root@dockerhost:~# echo "" > /var/lib/docker/containers/a17749f9a33b6501d8b48ee31591a0f27c9e472189b8f32a5e849fe6db419328/a17749f9a33b6501d8b48ee31591a0f27c9e472189b8f32a5e849fe6db419328-json.log
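Emptying the file in place matters: deleting it would not free the space while dockerd still holds the file open. As an alternative to echo "" (which leaves a single newline behind), truncate shrinks the file to exactly zero bytes. A sketch on a scratch file instead of a real container log:

```shell
# Simulate a grown log file and empty it in place with truncate(1).
# Truncation keeps the inode, so a process holding the file open
# (like dockerd) simply continues writing to the now-empty file.
scratchlog=$(mktemp)
head -c 1048576 /dev/zero > "$scratchlog"   # simulate a 1 MiB log file
du -k "$scratchlog"                         # shows ~1024 KB used
truncate -s 0 "$scratchlog"                 # empty it without removing it
du -k "$scratchlog"                         # shows 0 KB used
```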
And a lot of disk space is available again:
root@dockerhost:~# df -h /var/lib/docker
Filesystem Type Size Used Avail Use% Mounted on
/dev/mapper/vgdocker-lvdocker ext4 50G 22G 26G 47% /var/lib/docker
Will this happen again? Yes, it probably will. Especially if the log file was full of stdout messages and the containers are in heavy use. So how can this be prevented from happening in the future?
The json-file logging driver is the default for Docker container logging. By default, log entries are simply appended to the containerid-json.log file without any limit. But json-file does support configuration parameters for log rotation and size limits. From the documentation:
{
"log-driver": "json-file",
"log-opts": {
"max-size": "10m",
"max-file": "3"
}
}
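To apply these settings host-wide, the snippet belongs in the Docker daemon configuration, typically /etc/docker/daemon.json, followed by a daemon restart. Note that the limits only apply to containers created afterwards. A minimal sketch, written to a temporary path here so nothing on the host is touched:

```shell
# Write the log rotation defaults to a daemon config file and validate it.
# On a real host the target is /etc/docker/daemon.json, followed by e.g.
# 'systemctl restart docker'. Existing containers keep their old settings.
daemon_json=$(mktemp)   # stand-in for /etc/docker/daemon.json
cat > "$daemon_json" <<'EOF'
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}
EOF
# Validate the JSON before restarting dockerd - a syntax error here
# would prevent the daemon from starting.
python3 -m json.tool "$daemon_json" >/dev/null && echo "daemon.json is valid"
```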
Because this Docker environment is managed by Rancher 1.x, the log settings need to be adjusted in the service which starts these containers. In Rancher the container names contain the service name:
root@dockerhost:~# docker inspect --format '{{.Name}}' a17749f
/r-Q-Q-Server-8-1821ddd8
According to this naming scheme, the "Q-Server" service needs to be adjusted.
In a standalone container environment managed by command line, you'd simply append the json-file log options to the docker run command:
root@dockerhost:~# docker run --log-opt max-size=500m --log-opt max-file=3 [...] imagename
After the service is upgraded, verify with docker inspect that the log options are active. The new container ID is checked here, because the service upgrade deployed new containers:
root@dockerhost:~# docker ps --filter 'name=r-Q-Q-Server-*' --format '{{.ID}}'
c2693d891c0f
root@dockerhost:~# docker inspect --format '{{.HostConfig.LogConfig}}' c2693d891c0f
{json-file map[max-file:3 max-size:500m]}
Great, the log options are active! From now on, the container's json-file "console" log will be rotated once it reaches a size of 500MB, and a maximum of 3 log files will be kept (summa summarum: at most 1.5GB of disk space).
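To make sure no other service slipped through, the same check can be run across all running containers at once. A sketch, again guarded so it degrades gracefully on a host without Docker:

```shell
# Print the effective log driver and options of every running container.
if command -v docker >/dev/null 2>&1; then
    logconf=$(docker ps -q | xargs --no-run-if-empty docker inspect \
        --format '{{.Name}} {{.HostConfig.LogConfig.Type}} {{.HostConfig.LogConfig.Config}}')
fi
echo "${logconf:-no running containers (or docker not available)}"
```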
UserUnknown wrote on Oct 27th, 2020:
You can use du -h to show sizes in a human-readable format and sort -h to properly sort them in this format; it is way more informative imo in most cases, probably except for machine consumption, e.g. in scripts.
du -hs /var/lib/docker/containers/* | sort -h | tail
32K /var/lib/docker/containers/8833c303abc0bcd0654cf56c829490bf9b4f53e6968163d3ce66fb1d1dacf42a
36K /var/lib/docker/containers/fe0ced89f78d2d233f46ad5a672a94492830c025ab8800d1a1a4f443a493a5d6
40K /var/lib/docker/containers/1cb4010629aa064c2fc84d6a111bb023755fdfc525ca16c126cd64012757e490
40K /var/lib/docker/containers/81d92734ef8028de91b8a5ff0628d24856b053fba1d03829d0134c4f021cbc6e
72K /var/lib/docker/containers/a46d5f5e34c5e0969682aa5c27105127e3e48719a3e14b68cc05c38a05cf2d0d
356K /var/lib/docker/containers/24f35a9c8e60afa9e21150221ead35f91fe0ade4e1ae7847a943fa0d4be85cd5
58M /var/lib/docker/containers/f139e33d109c02f18143c21d75b7554cb9daefdc0f83bbb2982960485b7ce118
17G /var/lib/docker/containers/21dd3fab0978633616bb997b8c6892033af7c7e795386cda8c4eb1be130ac81a