
Node.js: Fix bodyParser error (is no longer bundled with Express)
Tuesday - Sep 16th 2014

Let me begin with this: I am no professional with node.js. Hell, it's the first time I'm working on a node.js script. So you can imagine my eyebrows rising when I saw the following error after moving an existing script to a newer platform:

nodejs /home/myscripts/callMe
Error: Most middleware (like bodyParser) is no longer bundled with Express and must be installed separately. Please see https://github.com/senchalabs/connect#middleware.
    at Function.Object.defineProperty.get (/home/myscripts/node_modules/express/lib/express.js:89:13)
    at Object. (/home/myscripts/lib/callMe/app.js:16:17)
    at Module._compile (module.js:456:26)
    at Object.Module._extensions..js (module.js:474:10)
    at Module.load (module.js:356:32)
    at Function.Module._load (module.js:312:12)
    at Module.require (module.js:364:17)
    at require (module.js:380:17)
    at Object. (/home/myscripts/sbin/callMe:2:1)
    at Module._compile (module.js:456:26)

The app.js contained the following definitions at the beginning:

var app = express();
app.use(express.bodyParser());

According to this Stack Overflow question, the Express module has removed the bodyParser function in newer versions. Instead of being part of Express, it is now available as its own module:

var app = express();
// Disabled because it's not working anymore
//app.use(express.bodyParser());
// ... use body-parser package instead
var bodyParser = require('body-parser');
app.use(bodyParser());

Just needed to install the node.js module body-parser on the system:

npm install body-parser

And then the script was working again.
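
A side note: in later releases of the body-parser module, the generic bodyParser() call itself was deprecated in favour of mounting the individual parsers explicitly. A sketch of what that looks like:

var express = require('express');
var bodyParser = require('body-parser');
var app = express();
// mount the individual parsers instead of the generic bodyParser()
app.use(bodyParser.json());                         // parses application/json bodies
app.use(bodyParser.urlencoded({ extended: true })); // parses form bodies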

 

Sony Xperia Tablet Z with KitKat: Fix for permission denied on SD card
Thursday - Sep 11th 2014

Since December 2013 I have been the owner of a Sony Xperia Tablet Z and until this week I never had to rant about it. Hell, I even decided to stay with the original Sony Android version and not to install Cyanogenmod, because Sony did a fine job and didn't f%ck up Android as much as other device vendors do. The amount of Sony bloatware is limited and it is not intrusive... but let's get back to my rant.

If you prefer, skip my rant and go straight to the solution.

I use my tablet for mainly two things:
- To be able to work remotely on my servers (see article Remote SSH work on Android tablet with ConnectBot and physical keyboard)
- To watch movies when I'm travelling long distances

Since the last device update to build number 10.5.A.0.230, admittedly a while ago, I cannot use my SD card anymore. I usually use the AndSMB app to transfer the movie I want to prepare for the travel directly from my NAS onto the SD card. It used to work flawlessly, but when I copied a movie two days ago, I got a "Permission denied" error in the AndSMB app. At first I suspected a bug in the AndSMB application, because there had been a recent update of the app, too. I also rebooted the tablet, just to make sure it's not a mounting issue of the AndSMB app. I tried the transfer again yesterday and the "Permission denied" was still there. So I got curious, opened the "File Manager" app and tried to manually create a file on my SD card (which had worked before), only to get the error "Operation Failed." which appeared for about 2 seconds at the bottom of the app.

At this point I realized that with the update to build 10.5.A.0.230, I had also received Android version 4.4, also known as KitKat. Now search for "Kitkat sd card" and you get thousands of websites cussing and swearing about Google having removed write access to the (external) SD card for all apps (except their own!). See this Google+ post by Tod Liebeck for a good summary of what exactly happened in KitKat. He sums it up so that every idiot can understand it:

"What this means is that with KitKat, applications will no longer be able create, modify, or remove files and folders on your external SD card."

Google, just what the hell were you thinking?! Yes, OK, applications can no longer just create files in a chaotic and anarchic way. But what about users like me who want to place large media files on the SD card. On purpose! What about applications running on the SD card? They stop working because they can no longer write to the folder they were installed/moved into. On top of this, there seems to have been no information at all about that change, so users and application developers were left in the dark and have to spend time figuring out what the hell is going on and why such permission errors occur. Agreed, power users, not the usual oh-free-wifi-internet-hipster in Starbucks.

After a couple of minutes of research on this newly introduced limitation of Android (with a couple of swear words leaving my mouth), I was thinking of formatting the tablet completely and installing CyanogenMod 11 on it. According to this Reddit post, CM seems to have removed the SD card write limitations. But before that I wanted to see if I could somehow fix it myself. After all, I'm a Linux Sys Engineer and Android runs on Linux (somewhat, but not exactly)... I installed the "Terminal Emulator" app and navigated to the SD card path:

u0_a44@SGP321: / $ cd /storage
u0_a44@SGP321: /storage $ ll
d---r-x---  root  sdcard_r     2014-09-11 09:00 emulated
lrwxrwxrwx  root  root         1971-01-05 19:16 sdcard0 -> /storage/emulated/legacy
drwxrwx--x  root  sdcard_r     2014-09-11 08:57 sdcard1

I tried to change the folder permissions of sdcard1 (which is the external SD card) to 777:

u0_a44@SGP321: /storage $ chmod 777 sdcard1

But that didn't work. The permissions stayed the same (note: even as root you cannot change that folder's permissions because SELinux is enabled).
Maybe it's just an issue with the FAT filesystem, I thought, and checked the Google Play store for an app to completely re-format the SD card to ext4 - but to my big surprise my search for "sd card" showed the app "SDFix: KitKat Writable MicroSD" by NextApp as one of the first results. Wow! That's exactly what I need!!! But the app's description mentions that root access is required. As I hadn't rooted my tablet, I needed to find a way to root the device first. A quick search for "root xperia z tablet" pointed me to quite a lot of results, some with large manuals, some to discussion threads in the xda forums. In the xda forums I saw the name "towelroot" appear very often. And that's where the solution for this whole SD card write permission issue starts.

The solution: How to fix your KitKat SD card write permission issue yourself

General information and disclaimer: You do this at your own risk. If you brick your device it's your own fault.
I did these steps successfully on a Xperia Z Tablet (model number SGP321) running on Android 4.4.2 and build 10.5.A.0.230 with Kernel version 3.4.0-perf-g32ce454.

1. Download towelroot application to root your device
On your tablet, open a browser and navigate to https://towelroot.com/. Click on the red sign to download the application package "tr.apk".

2. Install towelroot
On your tablet you will find the downloaded tr.apk in the "Downloads" folder. Launch a file explorer and click on tr.apk. Your system might tell you that the current settings do not allow installing apps from untrusted sources. In this case go to Settings -> Security and under "Device Administration" enable "Unknown sources". Now you can install tr.apk:

[Image: Towelroot installation]

Funnily enough, a warning appears that Google does not recommend the installation of this package. Well Google, you left me no choice!

[Image: Towelroot installation warning]

3. Root the device
This sounds complicated but it is the easiest thing ever, thanks to the towelroot application by geohot (yes, that's the guy who hacked the PlayStation!). Launch the towelroot application and click on the "make it ra1n" button.

[Images: towelroot on the Xperia Tablet Z, before and after rooting]

If the rooting process was successful, the following text appears: "Thank you for using towelroot! You should have root, no reboot required.". 

Amazing. It worked. Becoming root in "Terminal Emulator" now works. 

[Image: root in Terminal Emulator]

4. Install SDFix
Now that the tablet is rooted, you can install the SDFix application from Google Play.

[Image: SDFix app download]

Once installed, launch SDFix. The application works like a typical Windows installer: just click through it.

[Images: SDFix installation steps]

After you see the green "Complete" page, you must reboot your tablet. Otherwise applications still can't write to the SD card.
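
For the curious: as far as I understand it, SDFix edits /system/etc/permissions/platform.xml (hence the root requirement) so that the media_rw group is granted the WRITE_EXTERNAL_STORAGE permission again. The relevant entry then looks roughly like this:

<permission name="android.permission.WRITE_EXTERNAL_STORAGE" >
    <group gid="sdcard_r" />
    <group gid="sdcard_rw" />
    <group gid="media_rw" />
</permission>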

5. Create file or folder with File Explorer in SDCard
To test both console and application permissions, I first created a folder "Movies" in /storage/sdcard1 in the "Terminal Emulator" as root:

root@SGP321: / $ cd /storage/sdcard1
root@SGP321: /storage/sdcard1 $ mkdir Movies
root@SGP321: /storage/sdcard1 $ chmod 777 Movies
root@SGP321: /storage/sdcard1 $ ll
drwxrwx--x  root  sdcard_r      2014-09-10 20:14 Android
drwxrwx---  root  sdcard_r      2014-09-10 20:14 LOST.DIR
drwxrwx---  root  sdcard_r      2014-09-11 20:30 Movies
-rwxrwx---  root  sdcard_r      2014-09-11 20:07 customized-capability.xml
-rwxrwx---  root  sdcard_r      2014-09-11 20:07 default-capability.xml

And then created a folder within Movies using the "File Explorer" app:

[Images: creating a folder on the SD card with File Explorer on KitKat]

That's it! Success! And I'm back to being a happy user again.

 

Rsnapshot does not remove LV snapshot when mount fails
Thursday - Sep 11th 2014

On a system running rsnapshot as local backup method, the rsnapshot process failed and the backup didn't run correctly.

After analyzing the logs, it appears that rsnapshot does not remove a logical volume snapshot if the snapshot could not be mounted successfully:

[10/Sep/2014:02:07:43] /sbin/lvcreate --snapshot --size 200M --name rsnapshot /dev/vgdata/mylv
[10/Sep/2014:02:07:44] /bin/mount /dev/vgdata/rsnapshot /mnt/lvm-snapshot
[10/Sep/2014:02:07:44] /usr/bin/rsnapshot -c /etc/rsnapshot.backup.conf daily: ERROR: Mount LVM snapshot failed: 8192
[10/Sep/2014:02:07:44] rm -f /var/run/rsnapshot.pid

The reason for the mount error was that the defined mountpoint (/mnt/lvm-snapshot) did not exist. However, after the mount failed, the LV snapshot was not removed...

On the next run, the creation of the LV snapshot failed because (obviously) it already existed from the previous run:

[10/Sep/2014:09:30:08] /sbin/lvcreate --snapshot --size 200M --name rsnapshot /dev/vgdata/mylv
[10/Sep/2014:09:30:08] /usr/bin/rsnapshot -c /etc/rsnapshot.backup.conf daily: ERROR: Create LVM snapshot failed: 1280
[10/Sep/2014:09:30:08] rm -f /var/run/rsnapshot.pid

To solve this issue, the mountpoint must be created and the logical volume snapshot must be deleted manually. Afterwards, rsnapshot runs correctly again.
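
For reference, the manual cleanup boils down to two commands (using the names from the log above):

mkdir -p /mnt/lvm-snapshot          # create the missing mountpoint
lvremove -f /dev/vgdata/rsnapshot   # remove the stale LV snapshot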

This seems to be a bug in rsnapshot, so I opened an issue in the github repository of rsnapshot.

 

Firefox displays: The image cannot be displayed because it contains errors
Monday - Sep 8th 2014

Recently I was contacted by a user because the images he uploaded to his website could not be seen. According to the user, this must of course be an issue on the server... (arr!)

When I manually loaded the picture, I was a little bit surprised by the error message shown by Firefox:

The image "http://example.com/logo.png" cannot be displayed because it contains errors.

[Image: Firefox showing "The image cannot be displayed because it contains errors"]

Testing the same URL in Chrome, the "classic" broken-image icon appeared:

[Image: broken image icon in Chrome]

To prove that it's not a server issue, I uploaded a png picture myself and opened the URL. Which worked fine, of course.

By using the "Page Info" on Firefox, additional information was shown about the images. First the working one, uploaded by me:

[Image: Page Info of the working image]

And then the non-working image:

[Image: Page Info of the non-working image]

Very interesting here are the reported dimensions. So there's definitely something wrong with the image itself.

I told the user that there must have been either an FTP transfer error or that the pictures were not created properly. In the end it turned out that the user had transferred the images in ASCII mode instead of BINARY mode...
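
By the way, such a transfer corruption is quick to spot on the server itself (assuming shell access; logo.png as in the example above):

file logo.png
# an intact upload reports something like "PNG image data, 300 x 100, 8-bit/color RGBA"

head -c 8 logo.png | xxd
# the PNG signature must read: 89 50 4e 47 0d 0a 1a 0a
# ASCII-mode FTP rewrites the 0d/0a bytes, which breaks exactly this signature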

Yes, it's always the server! :-)

 

Bind9 Master Slave replication: Zone transfer not working in Ubuntu 14.04
Monday - Sep 8th 2014

On a Bind master server installed on an Ubuntu 14.04 machine, it is pretty much "standard" to run everything within /etc/bind. As this is a master DNS server, the zone files are usually updated manually.

But if you run a master-slave-replication, do not use the same directory structure on the slave!

While troubleshooting a case where the replication did not work and the zone files were not created on the slave server, I came across the following error messages in the syslog of the slave:

named[318]: client 10.10.44.67#7865: received notify for zone 'example.com'
named[318]: zone example.com/IN: Transfer started.
named[318]: transfer of 'example.com/IN' from 10.10.44.67#53: connected using 10.10.44.68#33813
named[318]: zone example.com/IN: transferred serial 2014090801
named[318]: transfer of 'example.com/IN' from 10.10.44.67#53: Transfer completed: 1 messages, 33 records, 1170 bytes, 0.001 secs (1170000 bytes/sec)
named[318]: zone example.com/IN: sending notifies (serial 2014090801)
named[318]: dumping master file: /etc/bind/zones/tmp-kP27d0CASU: open: permission denied
kernel: [239980.946541] type=1400 audit(1410164178.794:90): apparmor="DENIED" operation="mknod" profile="/usr/sbin/named" name="/etc/bind/zones/tmp-kP27d0CASU" pid=319 comm="named" requested_mask="c" denied_mask="c" fsuid=111 ouid=111

Interesting. The master sends the notify for the zone, the slave receives it, and the transfer is initiated. But when the slave tries to create the zone file in /etc/bind/zones, a permission denied error arises. One line further down, the "blocker" is identified: apparmor.

Indeed, the apparmor profile for /usr/sbin/named (/etc/apparmor.d/usr.sbin.named) does not allow the bind process to write anything into /etc/bind/:

  # /etc/bind should be read-only for bind
  # /var/lib/bind is for dynamically updated zone (and journal) files.
  # /var/cache/bind is for slave/stub data, since we're not the origin of it.
  # See /usr/share/doc/bind9/README.Debian.gz
  /etc/bind/** r,
  /var/lib/bind/** rw,
  /var/lib/bind/ rw,
  /var/cache/bind/** lrw,
  /var/cache/bind/ rw,

As a solution, use a path below /var/lib/bind (e.g. /var/lib/bind/zones) for the zone files which are dynamically created through the master-slave replication.
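
A minimal sketch of the corresponding slave zone definition in named.conf (assuming the master's IP from the log above and a zones subfolder created in /var/lib/bind):

zone "example.com" {
    type slave;
    masters { 10.10.44.67; };
    file "/var/lib/bind/zones/db.example.com";
};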

 

Ubuntu 14.04: Slow network boot prevents GlusterFS volumes from mounting
Wednesday - Sep 3rd 2014

On a physical server with Ubuntu 14.04 LTS installed and a rather complicated network setup (including vlan tags and virtual bridges) I experienced very slow booting as soon as the boot process was about to start the network devices.

Unfortunately this slow startup of the network devices prevented the GlusterFS volumes from being mounted automatically, as they require a working network. Here's the console log:

[...]
 * Starting configure network device security                            [ OK ]
 * Starting configure network device                                     [ OK ]
 * Starting configure network device security                            [ OK ]
 * Starting configure network device                                     [ OK ]
 * Starting configure network device security                            [ OK ]
 * Starting configure network device                                     [ OK ]
 * Starting Mount network filesystems                                    [ OK ]
 * Stopping Mount network filesystems                                    [ OK ]
 * Starting configure virtual network devices                            [ OK ]
Waiting for network configuration...
 * Starting Waiting for state                                            [fail]
 * Stopping Waiting for state                                            [ OK ]
 * Starting Block the mounting event for glusterfs filesystems until the network interfaces are running                                                   [fail]
 * Stopping Mount filesystems on boot
Waiting up to 60 more seconds for network configuration...
[...]

I was looking for the reason why the system waits several times with "Waiting ... for network configuration".
Through this forum post I found out that these messages come from /etc/init/failsafe.conf:

[...]
    # The point here is to wait for 2 minutes before forcibly booting
    # the system. Anything that is in an "or" condition with 'started
    # failsafe' in rc-sysinit deserves consideration for mentioning in
    # these messages. currently only static-network-up counts for that.

        sleep 20

    # Plymouth errors should not stop the script because we *must* reach
    # the end of this script to avoid letting the system spin forever
    # waiting on it to start.
        $PLYMOUTH message --text="Waiting for network configuration..." || :
        sleep 40

        $PLYMOUTH message --text="Waiting up to 60 more seconds for network configuration..." || :
        sleep 59
        $PLYMOUTH message --text="Booting system without full network configuration..." || :
[...]

What the hell? There's just a bunch of sleep commands not actually doing anything (reminds me of the Autobahn work areas in Switzerland). So for two minutes the system just sleeps, waiting for something, and blocks the end of the boot process.

I commented/disabled the first "sleep 20", set the other sleeps to 5 seconds each, and adjusted the message texts to match. A sketch of the edited part of /etc/init/failsafe.conf:
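
[...]
    #   sleep 20

        $PLYMOUTH message --text="Waiting for network configuration..." || :
        sleep 5

        $PLYMOUTH message --text="Waiting up to 5 more seconds for network configuration..." || :
        sleep 5
        $PLYMOUTH message --text="Booting system without full network configuration..." || :
[...]

The system now boots up much faster and even automatically mounts the glusterfs volume: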

[...]
 * Starting configure network device security                            [ OK ]
 * Starting configure network device                                     [ OK ]
 * Starting configure network device security                            [ OK ]
 * Starting configure network device                                     [ OK ]
 * Starting configure network device security                            [ OK ]
 * Starting configure network device                                     [ OK ]
Waiting up to 5 more seconds for network configuration...
 * Starting Mount network filesystems                                    [ OK ]
 * Stopping Mount network filesystems                                    [ OK ]
 * Starting configure virtual network devices                            [ OK ]
Booting system without full network configuration...
[...]

mount | grep fuse.glusterfs
localhost:/vol1 on /mnt type fuse.glusterfs (rw,default_permissions,allow_other,max_read=131072)

I will probably comment/disable the "$PLYMOUTH message" lines, too. The network configuration is just fine after a few seconds.

 

Mount a GlusterFS volume in an LXC container
Monday - Sep 1st 2014

Unfortunately it is not as easy to mount a GlusterFS volume in an LXC container as it is with another device, for example an additional Logical Volume.

I first tried it with the LXC's fstab file:

cat /var/lib/lxc/lxcname/fstab
localhost:/vol1 mnt glusterfs defaults,_netdev 0 2

This should (in theory) mount the GlusterFS volume "vol1" from localhost into the LXC container at mountpoint /mnt. Yes, the missing slash is correct, given that the path is relative to the LXC container's rootfs.

But unfortunately this didn't work, as starting the container in debug mode showed:

lxc-start -n lxcname -o /var/lib/lxc/lxcname/stdout.log -l debug

cat /var/lib/lxc/lxcname/stdout.log
[...]
lxc-start 1409577107.058 ERROR    lxc_conf - No such device - failed to mount 'localhost:/vol1' on '/usr/lib/x86_64-linux-gnu/lxc/mnt'
lxc-start 1409577107.058 ERROR    lxc_conf - failed to setup the mounts for 'lxcname'
lxc-start 1409577107.058 ERROR    lxc_start - failed to setup the container
lxc-start 1409577107.058 ERROR    lxc_sync - invalid sequence number 1. expected 2
[...]

As a second attempt, I tried it within the LXC container (as on a normal Linux host) in /etc/fstab:

cat /etc/fstab
# UNCONFIGURED FSTAB FOR BASE SYSTEM
10.10.11.10:/vol1 /mnt glusterfs defaults,_netdev 0 0

Where 10.10.11.10 is the IP address of the physical host of this LXC container.
Before rebooting the container, I tried to mount the gluster volume manually:

mount.glusterfs 10.10.11.10:/vol1 /mnt
Mount failed. Please check the log file for more details.

Ah crap! What now? I checked the glusterfs mount log:

cat /var/log/glusterfs/mnt.log
[...]
I [glusterfsd.c:1910:main] 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.4.2 (/usr/sbin/glusterfs --volfile-id=/vol1 --volfile-server=10.10.11.10 /mnt)
E [mount.c:267:gf_fuse_mount] 0-glusterfs-fuse: cannot open /dev/fuse (No such file or directory)
E [xlator.c:390:xlator_init] 0-fuse: Initialization of volume 'fuse' failed, review your volfile again

Indeed, the special character device /dev/fuse is missing in the container, while it exists on the physical host.
At first I thought this was a device permission issue which needed to be solved in the container's config file, but the relevant config for /dev/fuse is already there by default:

cat /usr/share/lxc/config/ubuntu.common.conf | grep -A 1 "## fuse"
## fuse
lxc.cgroup.devices.allow = c 10:229 rwm

Then I stumbled across this Github issue: https://github.com/lxc/lxc/issues/80 where Stéphane Graber, one of LXC's main developers, answered:

Some modules will also require the creation of device nodes in the container which you'll need to do by hand or through init scripts.

So to solve this, I created /dev/fuse manually within the container:

mknod /dev/fuse c 10 229 

And then tried the manual mount again:

mount.glusterfs 10.10.11.10:/vol1 /mnt

No error this time. Verification with df:

df -h
Filesystem               Size  Used Avail Use% Mounted on
/dev/lxc/lxcname          20G  830M   18G   5% /
none                     4.0K     0  4.0K   0% /sys/fs/cgroup
none                      13G   60K   13G   1% /run
none                     5.0M     0  5.0M   0% /run/lock
none                      63G     0   63G   0% /run/shm
none                     100M     0  100M   0% /run/user
10.10.11.10:/vol1         99G   60M   99G   1% /mnt

To simplify such things, /dev/fuse can of course already be created during "lxc-create" by modifying the relevant lxc template. Saves you a headache later.
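
If you'd rather not touch the template, a simple alternative (a sketch, assuming an Ubuntu container where /etc/rc.local is executed at boot) is to recreate the device node at every container start:

# /etc/rc.local inside the container
[ -e /dev/fuse ] || mknod /dev/fuse c 10 229
exit 0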

 

Use bash to compare remote cpu load and print lowest value of array
Thursday - Aug 14th 2014

In some cases it might be useful to compare remote load values of different servers and use these values to determine the server with the lowest load. Practical examples would be a provisioning server or a load balancing server.

The current load averages (1min, 5min, 15min) can be displayed by using /proc/loadavg:

cat /proc/loadavg
0.18 0.24 0.20 1/563 28186

For balancing or provisioning purposes, the value to look at is the third one: the load average over the last 15 minutes.

cat /proc/loadavg | awk '{print $3}'
0.20

This is of course also possible using a remote SSH command (don't forget to escape the dollar sign):

ssh root@remoteserver "cat /proc/loadavg | awk '{print \$3}'"
0.05

To get the current load average on a bunch of servers and to show the server with the lowest cpu load (over the last 15 minutes), the following script can be launched:

for server in server01 server02 server03 server04 server05; do
  case $server in
    server01) load[1]=$(ssh root@$server "cat /proc/loadavg | awk '{print \$3}'");;
    server02) load[2]=$(ssh root@$server "cat /proc/loadavg | awk '{print \$3}'");;
    server03) load[3]=$(ssh root@$server "cat /proc/loadavg | awk '{print \$3}'");;
    server04) load[4]=$(ssh root@$server "cat /proc/loadavg | awk '{print \$3}'");;
    server05) load[5]=$(ssh root@$server "cat /proc/loadavg | awk '{print \$3}'");;
  esac
done


echo "${load[*]}" | tr ' ' '\n' | awk 'NR==1{min=$0}NR>1 && $1<min{min=$1;pos=NR}END{print "Server #:"pos,"Load: "min}'
Server #:3 Load: 0.07

This can of course easily be verified:

echo ${load[@]}
0.22 0.36 0.07 0.20 0.30

As a short explanation of what this script is doing:

For each server, a remote ssh command is executed to get the current 15min load average value. This value is saved into the array "load", server #1 under array index 1 and so on (the array index should match the server number). After the for loop, the full array "load" is printed, one value per line, and piped into awk. awk keeps the lowest value seen so far in the variable "min" (initialized with the first value); whenever the current value is smaller, "min" is set to this new lowest value and the variable "pos" is set to its position (NR).
At the end, the result is printed to stdout with the additional strings "Server #:" and "Load:".

This of course also works without the case statement (see below), but the case statement may be helpful if additional information needs to be gathered at the same time.

i=1; for server in server01 server02 server03 server04 server05; do
myload[$i]=$(ssh root@$server "cat /proc/loadavg | awk '{print \$3}'")
let i++
done


echo "${myload[*]}" | tr ' ' '\n' | awk 'NR==1{min=$0}NR>1 && $1<min{min=$1;pos=NR}END{print "Server #:"pos,"Load: "min}'
Server #:3 Load: 0.06

Source for this very neat awk comparison: http://stackoverflow.com/questions/16610162/bash-return-position-of-the-smallest-entry-in-an-array

 

GlusterFS bricks should be in a subfolder of a mountpoint
Tuesday - Aug 5th 2014

When I did my first GlusterFS setup in February 2014 (not that long ago), I documented the following steps:

Create new LVM LV (which will be the brick):

lvcreate -n brick1 -L 10G vgdata

Format the LV (I used ext3 back then):

mkfs.ext3 /dev/mapper/vgdata-brick1

Create local mountpoint for the brick LV:

mkdir /srv/glustermnt

Mount the brick LV to the local mountpoint (and create an fstab entry):

mount /dev/mapper/vgdata-brick1 /srv/glustermnt

Create Gluster volume:

gluster volume create myglustervol replica 2 transport tcp node1:/srv/glustermnt node2:/srv/glustermnt
volume create: myglustervol: success: please start the volume to access data

This was on a Debian Wheezy with glusterfs-server 3.4.1.

This seems to have changed now on an Ubuntu 14.04 LTS with glusterfs-server 3.4.2, when I tried to create a volume over three nodes:

gluster volume create myglustervol replica 3 transport tcp node1:/srv/glustermnt node2:/srv/glustermnt node3:/srv/glustermnt
volume create: myglustervol: failed: The brick node1:/srv/glustermnt is a mount point. Please create a sub-directory under the mount point and use that as the brick directory. Or use 'force' at the end of the command if you want to override this behavior.

I came across a mailing list discussion (see this page for the archive) where the same error message was mentioned by the OP. The answer was, to my surprise, that the brick should never have been a direct mount point in the first place - although it worked:

The brick directory should ideally be a sub-directory of a mount point (and not a mount point directory itself) for ease of administration.
We recently added code to warn about this

So I now created a subfolder within the mount point, on all the other peers too. That's a single mkdir per node:
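
mkdir /srv/glustermnt/brick

Then I relaunched the volume create command with the adapted path: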

gluster volume create myglustervol replica 3 transport tcp node1:/srv/glustermnt/brick node2:/srv/glustermnt/brick node3:/srv/glustermnt/brick
volume create: myglustervol: success: please start the volume to access data

Looks better. But I'm still wondering why it was working in February 2014 when the mailing list entry is from May 2013...

Update September 15th 2014:
In the GlusterFS mailing list, this topic came up again and I responded with the following use-case example which clearly shows why a sub-folder of a mount point makes sense:

Imagine you have an LV you want to use for the gluster volume. Now you mount this LV to /mnt/gluster1. You do this on the other host(s), too, and you create the gluster volume with /mnt/gluster1 as the brick. By mistake you forget to add the mount entry to fstab, so the next time you reboot server1, /mnt/gluster1 will still be there (because it's the mountpoint) but the data is gone (because the LV is not mounted). I don't know how gluster would handle that, but it's actually easy to try it out :)
So using a subfolder within the mountpoint makes sense, because that subfolder will not exist when the mount of the LV didn't happen.

 

New version of check_equallogic features snmp connection check
Friday - Jul 25th 2014

The newest version of the Nagios/Icinga plugin check_equallogic, version 20140711, contains an snmp connection check. This was requested a lot over the last months, and since I published the plugin on github (see https://github.com/Napsty/check_equallogic) there were even some issues and pull requests opened for it (thanks, guys).

But instead of just creating a new check type (like -t snmp), I wanted all checks to use the snmp connection check automatically. Otherwise every Nagios/Icinga admin would have to define service dependencies, which would complicate configurations. Lame.

So the snmp connectivity check is defined as a function at the beginning of the plugin; it makes an snmp query and retrieves all the member names of the Equallogic group. This function is then used in all the checks, so if the snmp connection fails for reason XYZ, every check returns the connection failure. Before that, some of the check types still returned "OK" even though the values from an Equallogic member couldn't be read.
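
To illustrate the idea, a minimal sketch (not the actual plugin code; the OID is a placeholder and $host/$community stand for the plugin's parameters):

check_snmp_connection() {
  # query the EqualLogic group via SNMP; replace the OID with the real member-name OID
  members=$(snmpwalk -v 2c -c "$community" "$host" .1.3.6.1.4.1.12740 2>/dev/null)
  if [ $? -ne 0 ] || [ -z "$members" ]; then
    echo "EQUALLOGIC CRITICAL - SNMP connection to $host failed"
    exit 2
  fi
}

Every check type calls this function first, so a broken SNMP connection surfaces as CRITICAL instead of a misleading OK.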

The plan is also to use the information queried by the snmp connectivity check as global information for future checks (e.g. to check the values of only one member).

So, to summarize again: the new snmp connectivity check is built in and you don't need to change your configurations to enable it. Simply replace the plugin with the new version and you're good to go.

Enjoy.

And I'll enjoy my birthday now. 

 

