In the past I've already had some connectivity issues with LXC (see Network connectivity problems when running LXC (with veth) in VMware VM). But today I experienced another kind of problem on an LXC installation on physical servers running Ubuntu 16.04 Xenial.
While network connectivity worked fine from other networks (outside of this LXC host), I was unable to ping between the LXC host and the container.
root@container:~# ping 10.166.102.10
PING 10.166.102.10 (10.166.102.10) 56(84) bytes of data.
From 10.166.102.15 icmp_seq=1 Destination Host Unreachable
From 10.166.102.15 icmp_seq=2 Destination Host Unreachable
From 10.166.102.15 icmp_seq=3 Destination Host Unreachable
From 10.166.102.15 icmp_seq=4 Destination Host Unreachable
From 10.166.102.15 icmp_seq=5 Destination Host Unreachable
From 10.166.102.15 icmp_seq=6 Destination Host Unreachable
^C
--- 10.166.102.10 ping statistics ---
9 packets transmitted, 0 received, +6 errors, 100% packet loss, time 8040ms
  
root@host:~# ping 10.166.102.15
PING 10.166.102.15 (10.166.102.15) 56(84) bytes of data.
From 10.166.102.10 icmp_seq=1 Destination Host Unreachable
From 10.166.102.10 icmp_seq=2 Destination Host Unreachable
From 10.166.102.10 icmp_seq=3 Destination Host Unreachable
^C
--- 10.166.102.15 ping statistics ---
5 packets transmitted, 0 received, +3 errors, 100% packet loss, time 3999ms
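Note that the "Destination Host Unreachable" errors are reported by the pinging machine's own address (it shows up in the From field), which points to ARP resolution failing on the local segment. A quick way to check this is to look at the ARP entry for the peer, for example on the host:

root@host:~# arp -n 10.166.102.15
# an "(incomplete)" entry here means ARP towards the container is failing, i.e. the problem is at layer 2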
  
Both host and container are in the same network range and are using the network's central gateway:
root@host:~# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         10.166.102.1    0.0.0.0         UG    0      0        0 virbr0
10.166.102.0    0.0.0.0         255.255.255.192 U     0      0        0 virbr0

root@container:~# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         10.166.102.1    0.0.0.0         UG    0      0        0 eth0
10.166.102.0    0.0.0.0         255.255.255.192 U     0      0        0 eth0
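Both addresses fall into 10.166.102.0/26 (the netmask 255.255.255.192 covers 10.166.102.0 - 10.166.102.63), so traffic between host and container should go directly over the local link and never touch the gateway. This can be double-checked with ip route get:

root@host:~# ip route get 10.166.102.15
# should return the directly connected route via virbr0 (no "via 10.166.102.1")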
Of course the container is using the host's virbr0 as its network link:
root@host:~# cat /var/lib/lxc/container/config | grep network
lxc.network.type = macvlan
lxc.network.macvlan.mode = bridge
lxc.network.flags = up
lxc.network.link = virbr0
lxc.network.ipv4 = 10.166.102.15/26
lxc.network.ipv4.gateway = 10.166.102.1
lxc.network.hwaddr = 54:52:10:66:12:15
Then I remembered the small test server I have running at home, which has basically the same specs as this setup. But there is one huge difference: at home, pings between the host and the container work, while on this setup (as mentioned above) they don't.
The first thing I checked was the virtual bridge configuration, and simply showing virbr0 already revealed a big difference:
Home:
root@homehost ~ # brctl show
bridge name     bridge id           STP enabled     interfaces
virbr0          8000.1c1b0d6523df   no              eth0
                                                    veth0-container
                                                    veth0-container2
                                                    veth0-container3
                                                    veth0-container4
This setup:
root@host:~# brctl show
bridge name     bridge id           STP enabled     interfaces
lxdbr0          8000.000000000000   no
virbr0          8000.a0369ff4d626   no              bond0
Even though several containers are running on this host, they don't show up as listed interfaces under this bridge!
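This is actually expected for macvlan: a macvlan interface attaches directly to its parent interface (here virbr0) at the driver level and never becomes a bridge port, so brctl will not list it. The interface type can be verified from inside the container with the detailed link output:

root@container:~# ip -d link show eth0 | grep macvlan
# for a macvlan interface this prints a line containing "macvlan mode bridge"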
I compared the container network config at home with the one on this setup and found this:
Home:
root@homehost ~ # cat /var/lib/lxc/invoicing/config | grep network
# networking
lxc.network.type = veth
lxc.network.flags = up
lxc.network.link = virbr0
lxc.network.ipv4 = 192.168.77.173/24
lxc.network.hwaddr = 54:52:00:15:01:73
lxc.network.veth.pair = veth0-container
lxc.network.ipv4.gateway = 192.168.77.1
This setup (again the same output as above):
root@host:~# cat /var/lib/lxc/container/config | grep network
lxc.network.type = macvlan
lxc.network.macvlan.mode = bridge
lxc.network.flags = up
lxc.network.link = virbr0
lxc.network.ipv4 = 10.166.102.15/26
lxc.network.ipv4.gateway = 10.166.102.1
lxc.network.hwaddr = 54:52:10:66:12:15
The network type is macvlan on this setup. This is because I basically copied the network config from another LXC host in this environment, with the difference that that LXC host is a virtual machine (running in VMware) and not a physical server. Hence lxc.network.type was set to macvlan because of the connectivity problems mentioned in the article Network connectivity problems when running LXC (with veth) in VMware VM. And macvlan in bridge mode has a well-known limitation: the macvlan interfaces can talk to each other and to the outside network through the parent interface, but not to the parent interface (and therefore the host) itself. That matches the symptoms above exactly: connectivity from other networks works, pings between host and container don't.
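The fix is therefore to use a veth interface here, just like on the host at home. The relevant config lines then look like this (a sketch; everything besides the interface type stays the same, and since lxc.network.veth.pair is not set, LXC generates a random pair name):

lxc.network.type = veth
lxc.network.flags = up
lxc.network.link = virbr0
lxc.network.ipv4 = 10.166.102.15/26
lxc.network.ipv4.gateway = 10.166.102.1
lxc.network.hwaddr = 54:52:10:66:12:15

The container needs a restart (lxc-stop -n container followed by lxc-start -n container) to pick up the new interface type.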
As soon as I switched lxc.network.type to veth, the container and the host could ping each other, too. And now the container shows up in brctl:
root@host:~# brctl show
bridge name     bridge id           STP enabled     interfaces
lxdbr0          8000.000000000000   no
virbr0          8000.a0369ff4d626   no              bond0
                                                    veth0F7MCH
TL;DR: On LXC hosts running on physical servers/hardware, use veth interfaces. On LXC hosts which are themselves virtual machines (inside VMware, for example), use macvlan interfaces (once again, see Network connectivity problems when running LXC (with veth) in VMware VM).