LXC cgroup limits not working anymore after Debian Bullseye upgrade on the host



After a server used as an LXC virtualization host was upgraded to Debian Bullseye, the LXC container limits stopped working. Here are two likely reasons why this happens after a Debian upgrade.

Change from cgroups v1 to cgroups v2

Debian 11 (Bullseye) with kernel 5.10 switched from cgroups v1 to cgroups v2. This also means that, in case you just upgraded the host OS, an existing LXC container config still uses the legacy cgroups v1 syntax for its limits. An example:

root@host ~ # cat /var/lib/lxc/container/config | grep cgroup
lxc.cgroup.cpuset.cpus = 21-22
lxc.cgroup.cpu.shares = 1024
lxc.cgroup.memory.limit_in_bytes = 4G
lxc.cgroup.memory.memsw.limit_in_bytes = 5G
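
Whether the host is actually running a pure cgroups v2 (unified) hierarchy can be quickly verified by checking the filesystem type mounted at /sys/fs/cgroup; on a legacy or hybrid setup this shows tmpfs instead of cgroup2fs:

root@host ~ # stat -fc %T /sys/fs/cgroup/
cgroup2fs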

The old cgroup syntax is ignored. This can be seen when starting an LXC container with a start log enabled:

lxc-start container 20230628041658.718 WARN     cgfsng - cgroups/cgfsng.c:cgfsng_setup_limits_legacy:2852 - Invalid argument - Ignoring legacy cgroup limits on pure cgroup2 system
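
In case you need such a start log yourself, lxc-start can write one using the -o (log file) and -l (log priority) options; the log path here is just an example:

root@host ~ # lxc-start -n container -o /tmp/lxc-container.log -l DEBUG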

The new LXC configuration syntax uses lxc.cgroup2 followed by the name of the cgroup controller file. To see the available names, you can list the container's cgroup directory:

root@host ~ # ls /sys/fs/cgroup/lxc.payload.container/
cgroup.controllers      cgroup.type            cpuset.mems               hugetlb.1GB.rsvd.max      io.pressure          memory.min           pids.current
cgroup.events           cpu.max                cpuset.mems.effective     hugetlb.2MB.current       io.stat              memory.numa_stat     pids.events
cgroup.freeze           cpu.pressure           dev-hugepages.mount       hugetlb.2MB.events        io.weight            memory.oom.group     pids.max
cgroup.max.depth        cpu.stat               dev-mqueue.mount          hugetlb.2MB.events.local  memory.current       memory.pressure      rdma.current
cgroup.max.descendants  cpu.weight             hugetlb.1GB.current       hugetlb.2MB.max           memory.events        memory.stat          rdma.max
cgroup.procs            cpu.weight.nice        hugetlb.1GB.events        hugetlb.2MB.rsvd.current  memory.events.local  memory.swap.current  system.slice
cgroup.stat             cpuset.cpus            hugetlb.1GB.events.local  hugetlb.2MB.rsvd.max      memory.high          memory.swap.events
cgroup.subtree_control  cpuset.cpus.effective  hugetlb.1GB.max           init.scope                memory.low           memory.swap.high
cgroup.threads          cpuset.cpus.partition  hugetlb.1GB.rsvd.current  io.max                    memory.max           memory.swap.max
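
These files can also be read directly to inspect the limits currently applied to a running container; on a container without a memory limit, memory.max simply shows max:

root@host ~ # cat /sys/fs/cgroup/lxc.payload.container/memory.max
max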

To set CPU and memory limits on an LXC container using cgroups v2, the config should look like this:

root@host ~ # cat /var/lib/lxc/container/config | grep cgroup
lxc.cgroup2.cpuset.cpus = 21-22
lxc.cgroup2.cpu.weight = 100
lxc.cgroup2.memory.max = 4G
lxc.cgroup2.memory.high = 4G
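
Alternatively, the same cgroup v2 settings can be changed on a running container with the lxc-cgroup command. This takes effect immediately but is not persisted in the config:

root@host ~ # lxc-cgroup -n container memory.max 4G
root@host ~ # lxc-cgroup -n container cpuset.cpus 21-22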

Note that swap is disabled inside LXC containers with cgroupv2.
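
For completeness: the cgroups v2 counterpart of the old memory.memsw limit is the memory.swap.max file (visible in the directory listing above), so on kernels where swap accounting is available, a swap cap would look like this (untested sketch):

lxc.cgroup2.memory.swap.max = 1G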

Missing (or outdated) lxcfs

The magic package which makes the cgroup limits visible inside the container is called lxcfs: it overlays files such as /proc/meminfo and /proc/cpuinfo with container-aware versions via FUSE. While on Ubuntu systems this package is (usually) automatically installed as a recommendation of the lxc package, on Debian it is optional and needs to be installed manually. For the limits to show up correctly inside containers, it is required though.
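
Whether lxcfs is active inside a running container can be checked by looking for its FUSE mounts over the /proc files (a quick check; the exact list of overmounted files and mount options depends on the lxcfs version):

root@container:~# grep lxcfs /proc/mounts
lxcfs /proc/cpuinfo fuse.lxcfs rw,nosuid,nodev,relatime 0 0
lxcfs /proc/meminfo fuse.lxcfs rw,nosuid,nodev,relatime 0 0
[...]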

I've had one special case where the host OS was upgraded to Debian 11 and the LXC container configs were adjusted to use cgroups v2 (with a 2 CPU and 4 GB memory limit).

However, once the container was started, no limits could be seen. The full amount of CPUs and memory (from the host) was shown in htop and in the output of free:

root@host ~ # lxc-attach -n container
root@container:~# free -m
               total        used        free      shared  buff/cache   available
Mem:          120835        2949        6392         554      111493      116346
Swap:          15258         503       14755

Let's make sure lxcfs is installed and up to date:

root@host ~ # apt-get install lxcfs
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
The following held packages will be changed:
  lxcfs
The following packages will be upgraded:
  lxcfs

1 upgraded, 0 newly installed, 0 to remove and 1 not upgraded.
Need to get 70.6 kB of archives.
After this operation, 60.4 kB of additional disk space will be used.
Do you want to continue? [Y/n] y
Get:1 http://mirror.init7.net/debian bullseye/main amd64 lxcfs amd64 4.0.7-1 [70.6 kB]
Fetched 70.6 kB in 0s (1083 kB/s)
apt-listchanges: Can't set locale; make sure $LC_* and $LANG are correct!
Reading changelogs... Done
(Reading database ... 90732 files and directories currently installed.)
Preparing to unpack .../lxcfs_4.0.7-1_amd64.deb ...
Unpacking lxcfs (4.0.7-1) over (3.0.3-2+deb10u1) ...
Setting up lxcfs (4.0.7-1) ...
Installing new version of config file /etc/init.d/lxcfs ...
Processing triggers for man-db (2.9.4-2) ...

So it turns out that the lxcfs package was not automatically upgraded during the OS upgrade and remained at the old version (3.0.3).

Right after this, the cgroup limits showed up inside the container (without a container restart!):

root@host ~ # lxc-attach -n container
root@container:~# free -m
               total        used        free      shared  buff/cache   available
Mem:            4096         258        3677           2         160        3837
Swap:              0           0           0
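
The limit can also be verified from inside the container directly through the cgroup2 filesystem, independently of lxcfs (thanks to cgroup namespaces, the container sees its own subtree as the root):

root@container:~# cat /sys/fs/cgroup/memory.max
4294967296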

The reason why lxcfs was not upgraded was most likely that the package was put on hold - the apt output above already hinted at this ('The following held packages will be changed'). Both lxc and lxcfs also show up as manually installed:

root@host ~ # apt-mark showmanual | grep lxc
lxc
lxcfs

The upgrade worked for the lxc package itself, however, even though it is also marked as manually installed. So yeah - it could've been caused by another package dependency, too.
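
If a package hold is the culprit, apt-mark can show and remove it, too:

root@host ~ # apt-mark showhold
lxcfs
root@host ~ # apt-mark unhold lxcfs
Canceled hold on lxcfs.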



Comments (newest first)

Bendd wrote on Jul 1st, 2023:

I found the patch:
https://git.proxmox.com/?p=lxcfs.git;a=commit;h=62c5f3adc36310005758febd229955119718593e


ck from Switzerland wrote on Jun 29th, 2023:

Bendd, maybe they use a swap file instead of swap from the host. Check mount or /etc/fstab inside the container. Otherwise I would not know how they do that, but I did not investigate either.


Bendd wrote on Jun 29th, 2023:

I have a standalone Proxmox server, version 7.3-6. And they have the v2 syntax in the container configs.


ck from Switzerland wrote on Jun 29th, 2023:

Hi Bendd. Appreciate the comments, thank you! I am not sure, but my guess is that Proxmox stuck with cgroups v1 for containers so they can still use swap? I have no Proxmox cluster here to verify, but that would be the easiest solution.


Bendd wrote on Jun 29th, 2023:

"Note that swap is disabled inside LXC containers with cgroupv2."
Do you know how Proxmox made it work?


Bendd wrote on Jun 29th, 2023:

'cmdline systemd.unified_cgroup_hierarchy=0 and swapaccount=1'
and the cgroup v1 syntax works normally

