Mount a GlusterFS volume in a LXC container


Published on - last updated on June 12th 2023 - Listed in Linux LXC GlusterFS


Unfortunately, mounting a GlusterFS volume in an LXC container is not as easy as mounting another device, for example an additional Logical Volume (see the related post on how to mount an additional block device into an LXC container).

I first tried it with the container's own LXC fstab file:

root@lxchost:~# cat /var/lib/lxc/lxcname/fstab
localhost:/vol1 mnt glusterfs defaults,_netdev 0 2

This should (in theory) mount the GlusterFS volume "vol1" from localhost into the LXC container at the mountpoint /mnt. Yes, the missing leading slash is correct: the mountpoint is given as a path relative to the container's root filesystem.
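As an aside: with LXC releases of that era, the per-container fstab file is only consulted if the container config points at it. If the mount entry seems to be ignored entirely, that is worth checking first. A minimal sketch, assuming the default Debian/Ubuntu container path used in this article:

```shell
# Verify that the container config references the fstab file
# (older LXC syntax shown; newer releases use lxc.mount.fstab instead).
grep -E "^lxc\.mount" /var/lib/lxc/lxcname/config
# A line such as the following should appear:
# lxc.mount = /var/lib/lxc/lxcname/fstab
```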

But unfortunately this didn't work, as starting the container in debug mode showed:

root@lxchost:~# lxc-start -n lxcname -o /var/lib/lxc/lxcname/stdout.log -l debug

root@lxchost:~# cat /var/lib/lxc/lxcname/stdout.log
[...]
lxc-start 1409577107.058 ERROR    lxc_conf - No such device - failed to mount 'localhost:/vol1' on '/usr/lib/x86_64-linux-gnu/lxc/mnt'
lxc-start 1409577107.058 ERROR    lxc_conf - failed to setup the mounts for 'lxcname'
lxc-start 1409577107.058 ERROR    lxc_start - failed to setup the container
lxc-start 1409577107.058 ERROR    lxc_sync - invalid sequence number 1. expected 2
[...]

As a second attempt, I tried it from within the LXC container (as on a normal Linux host), in /etc/fstab:

root@container:~# cat /etc/fstab
# UNCONFIGURED FSTAB FOR BASE SYSTEM
10.10.11.10:/vol1 /mnt glusterfs defaults,_netdev 0 0

Where 10.10.11.10 is the IP address of the physical host of this LXC container.

Before rebooting the container, I tried to mount the GlusterFS volume manually:

root@container:~# mount.glusterfs 10.10.11.10:/vol1 /mnt
Mount failed. Please check the log file for more details.

Ah crap! What now? I checked the GlusterFS mount log:

root@container:~# cat /var/log/glusterfs/mnt.log
[...]
I [glusterfsd.c:1910:main] 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.4.2 (/usr/sbin/glusterfs --volfile-id=/vol1 --volfile-server=10.10.11.10 /mnt)
E [mount.c:267:gf_fuse_mount] 0-glusterfs-fuse: cannot open /dev/fuse (No such file or directory)
E [xlator.c:390:xlator_init] 0-fuse: Initialization of volume 'fuse' failed, review your volfile again

Indeed, the special character device /dev/fuse is missing in the container, while it exists on the physical host.

At first I thought this was a device permission issue that needed to be solved in the container's config file, but the relevant entry for /dev/fuse is already there by default:

root@lxchost:~# cat /usr/share/lxc/config/ubuntu.common.conf | grep -A 1 "## fuse"
## fuse
lxc.cgroup.devices.allow = c 10:229 rwm
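If a container does not include that common config (for example, because it was created from a different template), the allow rule can be added to the container's own config by hand. A sketch, assuming the config lives at the usual Debian/Ubuntu path:

```shell
# Allow the container to access the FUSE character device
# (major 10, minor 229). Adjust the config path to your container.
echo "lxc.cgroup.devices.allow = c 10:229 rwm" >> /var/lib/lxc/lxcname/config
# The container must be restarted for this change to take effect.
```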

Then I stumbled across GitHub issue #80, where Stéphane Graber, one of LXC's main developers, answered:

Some modules will also require the creation of device nodes in the container which you'll need to do by hand or through init scripts.

So to solve this, I created /dev/fuse manually within the container:

root@container:~# mknod /dev/fuse c 10 229 
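A manually created device node will not necessarily survive a container rebuild or, depending on the setup, a reboot. One way to make it persistent is a small boot-time script inside the container; this is a sketch under the assumption that the container runs a sysvinit-style /etc/rc.local:

```shell
# Inside the container's /etc/rc.local (place before the final "exit 0"):
# recreate /dev/fuse if it is missing, then mount the GlusterFS volume.
if [ ! -e /dev/fuse ]; then
    mknod /dev/fuse c 10 229
fi
mount.glusterfs 10.10.11.10:/vol1 /mnt
```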

And then tried the manual mount again:

root@container:~# mount.glusterfs 10.10.11.10:/vol1 /mnt

No error this time. Verification with df:

root@container:~# df -h
Filesystem               Size  Used Avail Use% Mounted on
/dev/lxc/lxcname          20G  830M   18G   5% /
none                     4.0K     0  4.0K   0% /sys/fs/cgroup
none                      13G   60K   13G   1% /run
none                     5.0M     0  5.0M   0% /run/lock
none                      63G     0   63G   0% /run/shm
none                     100M     0  100M   0% /run/user
10.10.11.10:/vol1         99G   60M   99G   1% /mnt

To simplify such things, /dev/fuse can of course already be created during "lxc-create" by modifying the relevant LXC template. This saves you a headache later.
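Alternatively, instead of editing the template, the node can be created once from the host right after the container is built. A sketch, assuming the default Debian/Ubuntu rootfs path:

```shell
# Run on the LXC host after lxc-create, before first starting the container:
# create /dev/fuse directly in the container's root filesystem.
mknod /var/lib/lxc/lxcname/rootfs/dev/fuse c 10 229
```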



Comments (newest first)

Varun from Canada wrote on Dec 17th, 2023:

Worked! You saved me a bunch of time. Was trying to fix this for a while :)


Dino from wrote on Jan 12th, 2017:

I believe I ran into this issue as well with /dev/fuse


ck from Wil, Switzerland wrote on Dec 14th, 2015:

Hi Mark. In my case it worked. Once created within the container, the device node stayed. Verify that the permissions (lxc.cgroup.devices.allow) are correct so the container is allowed to access the device. I haven't tried GlusterFS mounts with newer LXC releases yet.


Mark from wrote on Dec 13th, 2015:

But how do we actually get the 'mknod /dev/fuse c 10 229' to be persistent across reboots and also the glusterfs mounts? I haven't had any luck with either.


Ovidiu from San Francisco Bay Area wrote on May 22nd, 2015:

Sadly this doesn't seem to be working with LXD 0.9, where containers run as unprivileged users.


Adam from Boone, NC wrote on Jan 2nd, 2015:

Thanks for posting this quick guide. I just ran into this issue of mounting a GlusterFS inside a LXC container and would have spent a lot of time tracking down the /dev/fuse issue.

Thanks again,
Adam

