Almost a year ago I wrote an article on how to create a persistent volume in Rancher 2.x from an NFS share. Interestingly this was never requested on our older Rancher 1.x environment (currently 1.6.26), which is still running smoothly in production. Until now.
The approach in Rancher 1.x is comparable to 2.x, yet a little bit different. This is a step-by-step guide on how to connect an existing NFS share to your Rancher 1.x environment, create a persistent volume and mount the volume in the containers.
So let's make these assumptions: the NFS server runs on 192.168.252.230 and the share (export) is called "v_storytelling_maps_stage".
Each Docker/Rancher host (node) needs to be prepared to be able to mount NFS shares. The nfs-common package (on Ubuntu) should cover this.
root@dockerhost1:~# apt-get install nfs-common
Now it's a wise idea to first try mounting the NFS share manually.
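If the mount point does not exist yet, create it first:
root@dockerhost1:~# mkdir -p /tmp/claudio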
root@dockerhost1:~# mount -t nfs 192.168.252.230:/v_storytelling_maps_stage /tmp/claudio
Job for rpc-statd.service failed because the control process exited with error code. See "systemctl status rpc-statd.service" and "journalctl -xe" for details.
mount.nfs: rpc.statd is not running but is required for remote locking.
mount.nfs: Either use '-o nolock' to keep locks local, or start statd.
mount.nfs: an incorrect mount option was specified
If you get this error, check out the article Mounting NFS export suddenly does not work anymore - blame systemd for a solution.
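In short, on systemd-based distributions it may already be enough to manually start the required services (an assumption based on the error above; the linked article has the full story):
root@dockerhost1:~# systemctl start rpcbind rpc-statd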
So once you've solved this, you should be able to mount the share on your Docker host:
root@st-radoi01-t:~# mount -t nfs 192.168.252.230:/v_storytelling_maps_stage /tmp/claudio
root@st-radoi01-t:~# mount| grep nfs
192.168.252.230:/v_storytelling_maps_stage on /tmp/claudio type nfs (rw,relatime,vers=3,rsize=65536,wsize=65536,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=192.168.252.230,mountvers=3,mountport=635,mountproto=udp,local_lock=none,addr=192.168.252.230)
So far so good. No permission problems, so we can proceed and unmount the share again:
root@st-radoi01-t:~# umount /tmp/claudio
In order to mount NFS shares into containers, a volume plugin is needed. The Rancher NFS plugin from the Rancher Catalog is such a plugin:
The plugin needs to be configured with the relevant information, meaning NFS Server and NFS Share (Export Base Directory):
The "Mount Options" allow the typical mount options which you would define with mount -t nfs -o ... on the cli.
"NFS Version" defaults to "nfsvers=4" in the plugin, but in my case the NFS share runs on a NFSv3 server.
The "On Remove" option is quickly overlooked but important: What should happen with your data if a NFS volume is removed? The default is set to "purge". You can chose "retain" to keep the data on the share.
Once this is finished, you will see a new service "nfs-driver" starting up in the Infrastructure stack:
Nope, we can't configure the container service just yet. First we need to create a "volume" on that share. Only a volume can then be mounted into a container, so let's do this.
To create the volume, click on Infrastructure -> Storage. This will show the available storage drivers and any already existing volumes. But in this case we create the first volume:
A click on Add Volume will show a simple form with only two fields:
Name: Obviously the name of the volume. This must be a unique name across the whole Rancher environment.
Description: That's obvious, right?
After hitting Create, the screen shows the newly created volume as inactive. This means that the volume is not yet mounted by any container service.
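By the way, the nfs-driver also registers itself as a Docker volume plugin on the hosts (the name rancher-nfs shows up in the mount paths later on), so the same volume could presumably be created on the command line as well. A hedged sketch, not something I verified in this setup:
root@dockerhost1:~# docker volume create --driver rancher-nfs map-test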
Now to the magic: Mounting the volume in the containers. This is configured in the service of a container under the Volumes tab:
There's not much to fill into the form, but the correct syntax is mandatory!
map-test:/data:ro
In this case we use the environment-wide volume "map-test" (the name we gave the volume before) and mount it as /data inside the container(s) of that service. The final "ro" is an optional mount option for read-only (ro) or read-write (rw) access on that volume. In this case I chose "ro" for a read-only volume. The container will therefore only be able to read data from /data.
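If you manage your stacks with compose files rather than through the UI, the same mount can be declared in the stack's docker-compose.yml. A minimal sketch, assuming a service named app (the image is just a placeholder):
version: '2'
services:
  app:
    image: nginx
    volumes:
      - map-test:/data:ro
volumes:
  map-test:
    driver: rancher-nfs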
After I hit the Upgrade button, the containers of that service were redeployed - hopefully with the volume mounted.
As soon as I upgraded that particular service, I saw the status change in Infrastructure -> Storage; the volume turned active!
It also shows which container (Q-Q-Locator-Map-1) and under which path (/data) the volume was mounted. Nice!
On the Docker host I can see the NFS mount appearing as well:
root@dockerhost1:~# mount| grep nfs
192.168.252.230:/v_storytelling_maps_stage/map-test on /var/lib/rancher/volumes/rancher-nfs/map-test type nfs (rw,relatime,vers=3,rsize=65536,wsize=65536,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=192.168.252.230,mountvers=3,mountport=635,mountproto=udp,local_lock=none,addr=192.168.252.230)
192.168.252.230:/v_storytelling_maps_stage/map-test on /var/lib/rancher/volumes/rancher-nfs/map-test type nfs (rw,relatime,vers=3,rsize=65536,wsize=65536,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=192.168.252.230,mountvers=3,mountport=635,mountproto=udp,local_lock=none,addr=192.168.252.230)
And how does it look inside the container?
/app # mount|grep nfs
192.168.252.230:/v_storytelling_maps_stage/map-test on /data type nfs (ro,relatime,vers=3,rsize=65536,wsize=65536,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=192.168.252.230,mountvers=3,mountport=635,mountproto=udp,local_lock=none,addr=192.168.252.230)
/app #
Because I mounted the volume with the "ro" option, I shouldn't be able to create any files within the mount-path /data. Verifying:
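A simple touch inside the container should fail; in a BusyBox-based container the output looks something like this:
/app # touch /data/test.txt
touch: /data/test.txt: Read-only file system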
Voilà, working as it should.
Just to prove we're really on the correct share and the correct path and that the container is able to read the data, I mounted the NFS share on the host again, as in the beginning:
root@dockerhost1:~# mount -o rw,nfsvers=3 192.168.252.230:/v_storytelling_maps_stage /tmp/claudio
root@dockerhost1:~# ll /tmp/claudio/
total 4
drwxr-xr-x 2 root root 4096 Jun 13 14:33 map-test
So here we can see a very important fact: The volume (map-test) is represented as a directory inside the NFS share! If I create a file outside this map-test folder, the container won't be able to see it.
But if I create the file inside map-test, the container should be able to see it:
root@dockerhost1:~# touch /tmp/claudio/map-test/we-all-just-wanna-be-big-rockstars.txt
Inside the container:
/app # ls -la /data
total 12
drwxr-xr-x 2 root root 4096 Jun 13 13:56 .
drwxr-xr-x 48 root root 4096 Jun 13 12:35 ..
drwxrwxrwx 2 root root 4096 Jun 13 08:08 .snapshot
-rw-r--r-- 1 root root 0 Jun 13 13:56 we-all-just-wanna-be-big-rockstars.txt
Yep, the file is here and can be read.
Claudio from Switzerland wrote on Jun 14th, 2019:
Leons, nothing is architecturally wrong if it works and is stable :-).
Basically this plugin does the same thing; in the end the NFS shares are mounted on the host.
That approach would have been my fallback-solution if it didn't work with the nfs-driver.
Leons P. wrote on Jun 13th, 2019:
Our approach with Rancher 1.6, Cattle, and NFS was to mount NFS on the host and then mount the host directory in the containers, which may be architecturally wrong but is otherwise robust.