A weird situation recently happened on an LXC container with a macvlan network setup: it was unable to start after an unclean shutdown.
This happened on an Ubuntu 18.04 host with kernel 4.15.0-76-generic and LXC 3.0.3; the container itself was also running Ubuntu 18.04.
The default network configuration for LXC containers uses virtual ethernet (veth) interfaces. However, in certain situations, e.g. inside a VMware virtual machine, macvlan is required.
This resulted in the following network configuration for the container in question, including a static IP address:
# Network configuration
lxc.net.0.type = macvlan
lxc.net.0.macvlan.mode = bridge
lxc.net.0.flags = up
lxc.net.0.link = virbr1
lxc.net.0.ipv4.address = 10.150.66.26/25
lxc.net.0.hwaddr = 00:16:3e:66:00:26
lxc.net.0.ipv4.gateway = 10.150.66.1
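For comparison, the default veth-based configuration that macvlan replaces here would look roughly like this (the bridge name lxcbr0 and the templated MAC address are the usual Ubuntu defaults, not values from this host):

```
# Default veth network configuration (for comparison)
lxc.net.0.type = veth
lxc.net.0.link = lxcbr0
lxc.net.0.flags = up
lxc.net.0.hwaddr = 00:16:3e:xx:xx:xx
```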
The LXC container in question was receiving some updates; however, one of the updates pulled in netplan.io as an additional package. The problem with that? netplan.io interferes with the static IP addressing defined in the container's config file on the LXC host and breaks the container's network.
Once the package was (unwillingly) installed, the container indeed lost network connectivity. But after connecting with lxc-attach, the package could be removed again. A reboot should settle the issue - or so I thought.
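For reference, that removal step over lxc-attach can be wrapped in a tiny helper. The helper itself is my illustration, not a command from the original session; the container name follows the examples below:

```shell
#!/bin/bash
# Remove the interfering netplan.io package from inside a running
# container. "lxc-attach -n NAME -- CMD" executes CMD inside the
# container's namespaces, so the container's (broken) network
# connectivity is not needed for this to work.
remove_netplan() {
    lxc-attach -n "$1" -- apt-get purge --yes netplan.io
}

# Usage: remove_netplan container
```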
But instead the container was marked as stopped on the host, and lxc-start would silently fail. In this situation, an LXC container needs to be started in the foreground (-F) to see what is happening on the console:
root@host:~# lxc-start -n container -F
lxc-start: container: network.c: setup_hw_addr: 2762 Address already in use - Failed to perform ioctl
lxc-start: container: network.c: lxc_setup_netdev_in_child_namespaces: 2907 Failed to setup hw address for network device "eth0"
lxc-start: container: network.c: lxc_setup_network_in_child_namespaces: 3047 failed to setup netdev
lxc-start: container: conf.c: lxc_setup: 3516 Failed to setup network
lxc-start: container: start.c: do_start: 1263 Failed to setup container "container"
lxc-start: container: sync.c: __sync_wait: 62 An error occurred in another process (expected sequence number 5)
lxc-start: container: start.c: __lxc_start: 1939 Failed to spawn container "container"
lxc-start: container: tools/lxc_start.c: main: 330 The container failed to start
lxc-start: container: tools/lxc_start.c: main: 336 Additional information can be obtained by setting the --logfile and --logpriority options
Note: Alternatively, set the log level to DEBUG and write all the output to a log file (lxc-start -n container -l DEBUG -o debug.txt).
The container's MAC address is 00:16:3e:66:00:26 (see config above). Looking at the virtual bridge's connected MACs showed that the container was not connected anymore - which is correct:
root@host:~# brctl showmacs virbr1
port no  mac addr           is local?  ageing timer
1        00:16:3e:66:00:25  no         45.66
1        00:1c:7f:6c:ae:b0  no         0.00
1        00:50:56:8d:e2:89  yes        0.00
1        00:50:56:8d:e2:89  yes        0.00
1        00:50:56:8d:e5:c6  no         0.00
However, there is a VERY close MAC address ending in :25 with a high ageing timer. Hmm...
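Stale entries like this can also be spotted mechanically. The sketch below filters for non-local MACs with a high ageing timer; the 30-second threshold is an arbitrary example value, and the sample table from above is inlined so the filter is self-contained. With a live bridge you would instead pipe brctl showmacs virbr1 into the awk command:

```shell
#!/bin/bash
# Print non-local MAC addresses whose ageing timer exceeds 30 seconds.
# In "brctl showmacs" output, column 3 is "is local?" and column 4 is
# the ageing timer in seconds.
awk '$3 == "no" && $4 > 30 {print $2, $4}' <<'EOF'
1 00:16:3e:66:00:25 no 45.66
1 00:1c:7f:6c:ae:b0 no 0.00
1 00:50:56:8d:e2:89 yes 0.00
1 00:50:56:8d:e5:c6 no 0.00
EOF
```

This prints only the suspicious :25 entry with its timer value.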
After further research I came across LXC issue #2834, which mentions similar container start problems with the same error messages after an unclean lxc-stop shutdown. The same situation had happened to my container, as the network was "ripped away" from the container by the netplan.io package.
Stéphane Graber, one of the LXC/LXD maintainers, identified the problem:
Ok, so this is a kernel bug. When the container dies, the mount namespace should get emptied which should also empty the network namespace, causing the NIC to go away and allowing you to start things back up.
I'd recommend testing with the most recent kernel you can get your hands on and if it still happens, file a bug report against the Linux kernel with as simple a reproducer as you can, possibly quoting what I wrote above as a potential source of this issue.
Researching and reading through the articles and issues I found took some time, maybe 30 minutes. To obtain more information, I wanted to start the container again with log level DEBUG and add some notes to the mentioned issue #2834. However, to my big surprise, the container started without any error:
root@host:~# lxc-start -n container -l DEBUG -o debug.txt
root@host:~# lxc-ls -f
NAME       STATE    AUTOSTART  GROUPS  IPV4          IPV6  UNPRIVILEGED
container  RUNNING  1          -       10.150.66.26  -     false
Looking at the virtual bridge again showed that the container's MAC address (ending in :26) was now listed:
root@host:~# brctl showmacs virbr1
port no  mac addr           is local?  ageing timer
1        00:16:3e:66:00:25  no         0.97
1        00:16:3e:66:00:26  no         3.76
1        00:1c:7f:6c:ae:b0  no         0.00
1        00:50:56:8d:e2:89  yes        0.00
1        00:50:56:8d:e2:89  yes        0.00
1        00:50:56:8d:e5:c6  no         0.00
And interestingly the neighboring MAC address ending in :25 now showed a low ageing timer value.
Without hard technical evidence, I can only guess that the container's improper stop indeed caused a hanging virtual network interface somewhere in the kernel's network namespace. This seems to have resolved itself after a couple of minutes (an idle timeout?), releasing the container's virtual NIC and allowing the container to start again.
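If the kernel really does release the stale NIC after some idle timeout, a simple retry loop would have bridged the gap instead of 30 minutes of research. This is a hypothetical sketch; the function name, attempt count, and interval are example values, not something from the original session:

```shell
#!/bin/bash
# Retry lxc-start until the kernel has (hopefully) released the
# stale virtual interface, or until the attempts are exhausted.
retry_start() {
    local name=$1 attempts=$2 interval=$3
    local i=0
    while [ "$i" -lt "$attempts" ]; do
        if lxc-start -n "$name"; then
            return 0
        fi
        i=$((i + 1))
        sleep "$interval"
    done
    return 1
}

# Example: try every 60 seconds, at most 10 times:
# retry_start container 10 60
```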