
Permission denied when writing on NTFS mount, even as root
Thursday - Oct 18th 2018 - by - (0 comments)

To create an offsite backup, I plugged an external hard drive via USB into my home NAS server. The external hdd has one partition and is formatted with NTFS (to allow creating some backups from Windows hosts, too).

I mounted the partition to /mnt2 and wanted to sync the data from the NAS, but it failed:

# rsync -rtuP /mnt/data/Movies/ /mnt2/Movies/
sending incremental file list
Test1.mp4
  1,956,669,762 100%  108.73MB/s    0:00:17 (xfr#1, to-chk=1040/1042)
Test2.mp4
    436,338,688  20%  104.03MB/s    0:00:16  ^C
rsync error: received SIGINT, SIGTERM, or SIGHUP (code 20) at rsync.c(638) [sender=3.1.1]
rsync: mkstemp "/mnt2/Movies/.Test1.mp4.mwuEtR" failed: Permission denied (13)
rsync: mkstemp "/mnt2/Movies/.Test2.mp4.ialsVx" failed: Permission denied (13)
rsync error: received SIGINT, SIGTERM, or SIGHUP (code 20) at io.c(504) [generator=3.1.1]

The permissions looked correct; at least root should have been able to write:

# ls -l /mnt2
total 20M
drwx------ 1 root root    0 Jan  2  2018 Family
drwsr-sr-x 1 root root 232K Oct 14 20:00 Movies
drwx------ 1 root root  24K Dec 30  2017 Movies-Kids
drwx------ 1 root root    0 May  6  2017 Pictures

But when I tried to manually create a file, permission denied again:

# touch /mnt2/bla
touch: cannot touch ‘bla’: Permission denied

I checked dmesg and saw the following:

[12867539.697380] ntfs: (device sde1): ntfs_setattr(): Changes in user/group/mode are not supported yet, ignoring.
[12867539.697386] ntfs: (device sde1): ntfs_setattr(): Changes in user/group/mode are not supported yet, ignoring.
[12867539.697392] ntfs: (device sde1): ntfs_setattr(): Changes in user/group/mode are not supported yet, ignoring.

I checked how the partition was mounted:

# mount | grep sde1
/dev/sde1 on /mnt2 type ntfs (rw,relatime,uid=0,gid=0,fmask=0177,dmask=077,nls=utf8,errors=continue,mft_zone_multiplier=1)

"rw is there so it should work", would be my first guess. But I remembered that NTFS mounts are a little bit special on Linux.

In order to "really" mount an NTFS drive and write on it, one needs the ntfs-3g package, which uses fuse in the background.
Note: I wrote a similar article for Mac OS X back in 2011: How to read and write an NTFS external disk on a MAC OS X.

I installed the package, which pulled in fuse as a dependency:

# apt-get install ntfs-3g
Reading package lists... Done
Building dependency tree      
Reading state information... Done
The following extra packages will be installed:
  fuse
The following NEW packages will be installed:
  fuse ntfs-3g

Now I just needed to unmount the external hdd and mount it with ntfs-3g:

# mount -t ntfs-3g /dev/sde1 /mnt2

Checking mount again, the partition is now mounted as type fuseblk:

# mount | grep sde1
/dev/sde1 on /mnt2 type fuseblk (rw,relatime,user_id=0,group_id=0,allow_other,blksize=4096)

And voilà, I can now write to the NTFS partition:

# touch /mnt2/bla && stat /mnt2/bla
  File: ‘/mnt2/bla’
  Size: 0             Blocks: 0          IO Block: 4096   regular empty file
Device: 841h/2113d    Inode: 16406       Links: 1
Access: (0777/-rwxrwxrwx)  Uid: (    0/    root)   Gid: (    0/    root)
Access: 2018-10-18 20:29:45.914775500 +0200
Modify: 2018-10-18 20:29:45.914775500 +0200
Change: 2018-10-18 20:29:45.914775500 +0200
 Birth: -
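To make this mount survive a reboot, an fstab entry along these lines should do the job (a sketch: /dev/sde1 and /mnt2 are from my setup, using the partition's UUID instead of the device name would be more robust, and nofail avoids boot problems when the disk is unplugged):

# /etc/fstab entry to mount the external NTFS disk with ntfs-3g at boot
/dev/sde1  /mnt2  ntfs-3g  defaults,nofail  0  0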

 

check_esxi_hardware now supports python3!
Tuesday - Oct 2nd 2018 - by - (0 comments)

It has been a long time since python3 was released, yet the monitoring plugin check_esxi_hardware was not compatible with python3. Until yesterday. 

The initial reason for the delay (see issue #13) was the python module pywbem, which at first was only available for python2. Since a new team took over the maintenance of pywbem, the module came back to life and was also ported to python3.

The second reason for the delay was: life. I had already prepared a python3-compatible version a while ago; it just needed some more fine-tuning and testing. This is now completed and check_esxi_hardware works on both python2 and python3. Same code, same plugin. That was very important to me: to still be able to run the new version of the plugin in whatever environment.
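If you want to verify this on your own system, launching the plugin with both interpreters is a quick smoke test (a sketch, assuming check_esxi_hardware.py lies in the current directory). Python2-only code would immediately die with a SyntaxError under python3; the dual-compatible plugin should instead start up and print its usage/parameter error under both:

$ python2 check_esxi_hardware.py
$ python3 check_esxi_hardware.py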

 

Grub2 install fails on USB drive with error: appears to contain a ufs1 filesystem
Friday - Sep 28th 2018 - by - (0 comments)

In my previous article (How to compare speed of USB flash pen drives) I briefly mentioned that I had to reinstall the OS of my NAS server on a USB flash drive. When I did so, the last step in the Debian installer (installing grub2) failed, but without a clear error message. Because I was in a hurry back then, I installed LILO as the bootloader. This worked and the NAS booted correctly.

Now it was time to investigate and on the running Debian OS I tried to install grub2:

root@nas:~# apt-get install grub2
Reading package lists... Done
Building dependency tree      
Reading state information... Done
The following NEW packages will be installed:
  grub2
0 upgraded, 1 newly installed, 0 to remove and 6 not upgraded.
Need to get 2,476 B of archives.
After this operation, 16.4 kB of additional disk space will be used.
Get:1 http://ftp.ch.debian.org/debian stretch/main amd64 grub2 amd64 2.02~beta3-5 [2,476 B]
Fetched 2,476 B in 0s (38.7 kB/s)
Selecting previously unselected package grub2.
(Reading database ... 28456 files and directories currently installed.)
Preparing to unpack .../grub2_2.02~beta3-5_amd64.deb ...
Unpacking grub2 (2.02~beta3-5) ...
Setting up grub2 (2.02~beta3-5) ...

So the installation of the package itself worked. What about the grub install?

root@nas:~# grub-install /dev/sde
Installing for i386-pc platform.
grub-install: error: hostdisk//dev/sde appears to contain a ufs1 filesystem which isn't known to reserve space for DOS-style boot.  Installing GRUB there could result in FILESYSTEM DESTRUCTION if valuable data is overwritten by grub-setup (--skip-fs-probe disables this check, use at your own risk).

When I was searching for this error I came across a Linux Mint bug (grub-install fails on drive that previously had ufs2 installed). This was exactly the case for the USB drive I'm using in the NAS server; it was previously used as a simple USB pen drive. So I tried the --skip-fs-probe parameter:

root@nas:~# grub-install /dev/sde --skip-fs-probe
Installing for i386-pc platform.
grub-install: warning: Attempting to install GRUB to a disk with multiple partition labels.  This is not supported yet..
grub-install: warning: Embedding is not possible.  GRUB can only be installed in this setup by using blocklists.  However, blocklists are UNRELIABLE and their use is discouraged..
grub-install: error: will not proceed with blocklists.

Now I was at the same point as in the mentioned bug report.
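As an alternative to zeroing sectors blindly, the stale signatures could also be inspected and removed with wipefs from util-linux (a sketch, not the route I took; -n only lists what is found, while -a erases all detected signatures, including the partition table's, so it is destructive):

# wipefs -n /dev/sde
# wipefs -a /dev/sde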

I decided to wipe the first 2047 sectors of the flash drive (starting at sector 1, thanks to seek=1, so the MBR with the partition table in sector 0 stays untouched):

root@nas:~# dd if=/dev/zero of=/dev/sde bs=512 seek=1 count=2047
2047+0 records in
2047+0 records out
1048064 bytes (1.0 MB, 1.0 MiB) copied, 0.111457 s, 9.4 MB/s

Now there shouldn't be anything left to cause a grub-install hiccup. Let's try it again:

root@nas:~# grub-install /dev/sde
Installing for i386-pc platform.
Installation finished. No error reported.

Hurray! Oh and wow, the NAS boots so much faster with grub2 than with LILO (I hadn't used LILO since 2005...)

 

How to compare speed of USB flash pen drives
Wednesday - Sep 26th 2018 - by - (0 comments)

More or less two weeks ago I tweeted that my NAS (Debian Jessie running on an HP ProLiant N40L micro server) was dead because the USB flash drive (on which the OS was installed) had died.

I removed the supposedly dead USB drive, inserted a new one and installed a fresh Debian (Stretch this time) on it. It took quite some time, given the USB flash drive was very slow to write to, but eventually my NAS was up again.

The slow write speed led me to buy a new USB flash drive which was supposed to be much faster. Really? How does one compare the actual write and read speed of USB flash drives? There's a tool for it! It's called F3 and its main purpose is to find fake USB flash drives which claim to have a certain capacity but in fact offer much less than what's written on the drive. The tool writes multiple files onto the flash drive until all space is used and then reads these files back. Comparing the written vs. the read sectors shows whether a flash drive advertises a wrong capacity. F3 also shows the actual write and read speeds and prints an average at the end of both steps. That can be used to compare speeds of USB flash drives!

I compared three USB flash/pen drives:

USB Flash Drive Comparison on Linux 

  • TDK TF10 8GB
  • Transcend JetFlash 4GB
  • Sandisk Ultra 32GB

All drives were inserted into the same USB 2.0 port so they would operate at the same bus speed (the Sandisk Ultra supports USB 3.0, which wouldn't have been a fair comparison against the older pen drives).

First I installed f3:

# apt-get install f3

A typical test run starts with the f3write command:

# f3write /mnt/Test/
Free space: 2.35 GB
Creating file 1.h2w ... OK!                         
Creating file 2.h2w ... OK!                          
Creating file 3.h2w ... OK!                          
Free space: 16.00 MB
Average writing speed: 1.74 MB/s

Followed by the read operation of these created files:

# f3read /mnt/Test/
                  SECTORS      ok/corrupted/changed/overwritten
Validating file 1.h2w ... 2097152/        0/      0/      0
Validating file 2.h2w ... 2097152/        0/      0/      0
Validating file 3.h2w ...  710064/        0/      0/      0

  Data OK: 2.34 GB (4904368 sectors)
Data LOST: 0.00 Byte (0 sectors)
           Corrupted: 0.00 Byte (0 sectors)
    Slightly changed: 0.00 Byte (0 sectors)
         Overwritten: 0.00 Byte (0 sectors)
Average reading speed: 16.48 MB/s

Here are the results:

USB Flash Drive          Average Write Speed   Average Read Speed
TDK TF10 8GB             1.74 MB/s             16.48 MB/s
Transcend JetFlash 4GB   1.89 MB/s             14.66 MB/s
Sandisk Ultra 32GB       13.14 MB/s            34.02 MB/s

Clearly the Sandisk pen drive is much faster on writes (roughly seven times faster than the TDK) but only about twice as fast on reads. Anyway, thanks to this benchmark the Sandisk drive will become the new USB flash drive for the NAS server.
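If you want to benchmark several drives in one go, a small shell loop over the mount points does the job (a sketch; the mount points are made up and each drive must already be mounted there):

#!/bin/bash
# run the f3 write/read test on each mounted flash drive
# and keep only the average speed lines for comparison
for mnt in /mnt/tdk /mnt/transcend /mnt/sandisk; do
  echo "=== $mnt ==="
  f3write "$mnt" | grep "Average writing speed"
  f3read "$mnt" | grep "Average reading speed"
  rm -f "$mnt"/*.h2w   # remove the test files again
done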

 

How to use kubectl on a Rancher 2 managed Kubernetes cluster
Friday - Sep 21st 2018 - by - (0 comments)

The major change from Rancher 1.x to 2.x was the exclusive usage of Kubernetes as orchestration engine in the background, compared to the previous choice of multiple orchestration engines (Cattle, Kubernetes, Mesos, Swarm). Rancher pushed their own orchestration engine "Cattle" in Rancher 1.x, but now only Kubernetes is left.

Another big difference between Rancher 1.x and 2.x is (as of now, using Rancher 2.0.8) the fact that it is sometimes not enough to use the Rancher user interface or the API. To use the full capabilities of the Kubernetes cluster, it is sometimes necessary to talk directly to the underlying Kubernetes engine. This comes up often when researching in the Rancher forums.

The easiest way to start the "kubectl" command is to select a cluster in the user interface and then simply click the button "Launch kubectl":

Rancher 2: Launch kubectl 

This opens up a shell window inside the browser. Kubectl is automatically started and connected with the selected cluster:

Rancher 2 kubectl shell in browser

However the shell has some major limitations (e.g. copy/pasting). It's fine and very helpful for quick checks and verifications, but for deeper analysis it can be a pain. There is, however, also the possibility to use kubectl from your own machine and connect to the cluster, even when it is managed by Rancher. And this is what this article is about.

First you need to install kubectl on your machine. To do so, follow the official documentation "Install and Set Up kubectl", which explains it step by step. There are packages ready for almost every OS/distribution.

On my workstation I currently run Linux Mint 18.3, which is based on Ubuntu 16.04 (Xenial):

ckadm@mintp ~ $ cat /etc/*release* /etc/upstream-release/*
DISTRIB_ID=LinuxMint
DISTRIB_RELEASE=18.3
DISTRIB_CODENAME=sylvia
DISTRIB_DESCRIPTION="Linux Mint 18.3 Sylvia"
NAME="Linux Mint"
VERSION="18.3 (Sylvia)"
ID=linuxmint
ID_LIKE=ubuntu
PRETTY_NAME="Linux Mint 18.3"
VERSION_ID="18.3"
HOME_URL="http://www.linuxmint.com/"
SUPPORT_URL="http://forums.linuxmint.com/"
BUG_REPORT_URL="http://bugs.launchpad.net/linuxmint/"
VERSION_CODENAME=sylvia
UBUNTU_CODENAME=xenial
cat: /etc/upstream-release: Is a directory
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=16.04
DISTRIB_CODENAME=xenial
DISTRIB_DESCRIPTION="Ubuntu 16.04 LTS"

To install kubectl on this Ubuntu 16.04 (Xenial) derivative, the following steps are sufficient:

ckadm@mintp ~ $ sudo apt-get update && sudo apt-get install -y apt-transport-https
ckadm@mintp ~ $ curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
ckadm@mintp ~ $ sudo touch /etc/apt/sources.list.d/kubernetes.list
ckadm@mintp ~ $ echo "deb http://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee -a /etc/apt/sources.list.d/kubernetes.list
ckadm@mintp ~ $ sudo apt-get update
ckadm@mintp ~ $ sudo apt-get install -y kubectl

The kubectl command can now be used:

ckadm@mintp ~ $ kubectl version
Client Version: version.Info{Major:"1", Minor:"12+", GitVersion:"v1.12.0-rc.1", GitCommit:"3e4aee86dfaf933f03e052859c0a1f52704d4fef", GitTreeState:"clean", BuildDate:"2018-09-18T21:08:06Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}
The connection to the server localhost:8080 was refused - did you specify the right host or port?

So far so good, but how to connect to the cluster?

Remember the button "Launch kubectl" from above? There's a second button next to it: "Kubeconfig File". Click on this button and you will see a config in YAML format appearing in the browser:

Rancher kubectl config  

Copy the content starting with "apiVersion" until the end. Note that at the end of the config file the "contexts" are configured.

This is because the Rancher cluster itself serves as a Kubernetes Federation cluster. Basically this means that the Kubernetes cluster running the Rancher application itself is a kind of "parent" cluster. All other clusters are connected to this parent cluster and are talked to using contexts (a bit like SNMPv3 contexts, if you know about them). Edit: See the edit note at the end of the article.
The advantage is clearly that you have one cluster to manage all the other clusters. But there's a downside: Kubernetes Federation is not yet considered mature. From the official documentation:

"Maturity: The federation project is relatively new and is not very mature. Not all resources are available and many are still alpha. Issue 88 enumerates known issues with the system that the team is busy solving."

The referenced issue 88 itself still has a lot of open tasks and problems.
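Since the copied kubeconfig may contain several of these contexts (one per cluster), kubectl itself can list and switch them; "mycluster" below is just a placeholder for whatever your Rancher cluster is called:

$ kubectl config get-contexts
$ kubectl config use-context mycluster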

Back to the topic: Copy the config content from the browser and save it as a file named "config" in your user's kubectl config folder (which is located at $HOME/.kube or ~/.kube). You might need to create the folder first.

ckadm@mintp ~ $ mkdir ~/.kube
ckadm@mintp ~ $ vi .kube/config

You can now launch kubectl commands:

ckadm@mintp ~ $ kubectl get all
Unable to connect to the server: x509: certificate signed by unknown authority

Oh! What's this? This error shows up because the certificates used to connect to the cluster created by Rancher are self-signed. Ergo kubectl wants to play it safe and doesn't let you connect. But there's a parameter to disable the certificate validation check:

ckadm@mintp ~ $ kubectl get all --insecure-skip-tls-verify=true
NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.43.0.1    <none>        443/TCP   29d

Here we go, that's the same output as from the kubectl command launched in the browser shell.
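By the way, instead of appending --insecure-skip-tls-verify=true to every single command, the flag can also be stored once in the kubeconfig (replace "mycluster" with the name shown in the CLUSTER column of kubectl config get-contexts):

$ kubectl config set-cluster mycluster --insecure-skip-tls-verify=true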

From now on you're able to quickly connect to your Kubernetes cluster created/managed by Rancher and investigate further, for example to get details about a pod:

ckadm@mintp ~ $ kubectl get pod importer-84484c757b-gbqcm --namespace gamma --insecure-skip-tls-verify=true
NAME                        READY   STATUS    RESTARTS   AGE
importer-84484c757b-gbqcm   1/1     Running   0          5h

Edit: A few hours after I published this article, I stumbled across a post in the Rancher forums which essentially asks for Kubernetes Federation in Rancher 2. It was declined for the same reason I wrote above: it is not mature enough. So this would mean Rancher 2.x does in fact NOT use Federation. Unfortunately the documentation does not describe how exactly this "parent-child clustering" is set up in the background.

 

Adapt Roundcube managesieve plugin to dynamically lookup sieve host
Friday - Sep 7th 2018 - by - (0 comments)

I've been using Roundcube webmail since a very early release (0.2.1) back in 2009. And still today I think it's the best open source webmail project available.

On a very particular mail server setup using dedicated mailbox servers yet centralized and highly available mail proxies, I came across a problem with Roundcube's "managesieve" plugin.

To explain the setup a bit: Public IMAP/POP3/SMTP listeners are configured on central/HA mail proxies using Postfix transport maps for internal relaying and SASL authentication with a central MySQL database. Nginx is used as IMAP/POP3 reverse proxy. On the same host(s) Roundcube is installed using Nginx+PHP-FPM.

The IMAP and SMTP connections of course work fine with "localhost": IMAP connects to localhost, which is the Nginx reverse proxy, which forwards the IMAP login to the mailbox server (dynamic lookup from the central MySQL database). Same for SMTP: connect to localhost, where Postfix listens and authenticates with SASL.

But Sieve is a different story. It has its own listener (by default tcp/4190) and its own protocol, which Nginx is not able to proxy. Hence I got the following error when I tried to access the "Filter" settings in Roundcube:

Roundcube managesieve error 

An error occured. Unable to connect to managesieve server.

Well yes, that makes sense because there is no sieve listening on localhost. But the problem is that the managesieve plugin only supports a single entry as sieve host in the config:

// managesieve server address, default is localhost.
// Replacement variables supported in host name:
// %h - user's IMAP hostname
// %n - http hostname ($_SERVER['SERVER_NAME'])
// %d - domain (http hostname without the first part)
// For example %n = mail.domain.tld, %d = domain.tld
$config['managesieve_host'] = 'localhost';

None of the possible values would help me in this case. Even %h, which looked promising, points in the end to localhost again. So I dug through the source code and found the "connect" function in lib/Roundcube/rcube_sieve_engine.php (see source code in public repo):

    /**
     * Connect to configured managesieve server
     *
     * @param string $username User login
     * @param string $password User password
     *
     * @return int Connection status: 0 on success, >0 on failure
     */
    public function connect($username, $password)
    {
        // Get connection parameters
        $host = $this->rc->config->get('managesieve_host', 'localhost');
        $port = $this->rc->config->get('managesieve_port');
        $tls  = $this->rc->config->get('managesieve_usetls', false);

        $host = rcube_utils::parse_host($host);
        $host = rcube_utils::idn_to_ascii($host);

        // remove tls:// prefix, set TLS flag
        if (($host = preg_replace('|^tls://|i', '', $host, 1, $cnt)) && $cnt) {
            $tls = true;
        }

        if (empty($port)) {
            $port = getservbyname('sieve', 'tcp') ?: self::PORT;
        }

        $plugin = $this->rc->plugins->exec_hook('managesieve_connect', array(
            'user'      => $username,
            'password'  => $password,
            'host'      => $host,
            'port'      => $port,
            'usetls'    => $tls,
            'auth_type' => $this->rc->config->get('managesieve_auth_type'),
            'disabled'  => $this->rc->config->get('managesieve_disabled_extensions'),
            'debug'     => $this->rc->config->get('managesieve_debug', false),
            'auth_cid'  => $this->rc->config->get('managesieve_auth_cid'),
            'auth_pw'   => $this->rc->config->get('managesieve_auth_pw'),
            'socket_options' => $this->rc->config->get('managesieve_conn_options')
        ));
[...]

The relevant part is the $host variable. It reads the value from the config file's "managesieve_host" and falls back to "localhost". To use a dynamic lookup of the managesieve host, I modified the code:

// Get connection parameters
//$host = $this->rc->config->get('managesieve_host', 'localhost'); // this is the default
// Infiniroot added dynamic lookup of managesieve_host:
$domain = substr(strrchr($username, "@"), 1);
$dbh = mysqli_connect("dbhost", "dbuser", "dbpass", "dbname") or die ('I cannot connect to the database because: ' . mysqli_connect_error());
// escape the domain before using it in the query (it is derived from the user login)
$domain = mysqli_real_escape_string($dbh, $domain);
// same transport table as used by Postfix for relaying to the mailbox servers
$anfrage = mysqli_query($dbh, "SELECT targetserver FROM transport_maps WHERE domain = '$domain' LIMIT 1");
while ($row = mysqli_fetch_assoc($anfrage)) {
    $resultip = $row['targetserver'];
}
$host = $resultip;
// End Infiniroot modifications

This now performs a lookup in the central database based on the user login (which is an e-mail address). The domain name is taken from the user's e-mail address and looked up in the transport table (the same table which is also used by Postfix for relaying mails to the target mailbox server), and the resulting IP address is returned as the new $host value. From there, the managesieve plugin does what it always does and connects.
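To verify what the modified plugin will look up, the same query can be run manually against the database (domain and target server here are made up for illustration):

mysql> SELECT targetserver FROM transport_maps WHERE domain = 'example.com' LIMIT 1;
+--------------+
| targetserver |
+--------------+
| 10.10.10.25  |
+--------------+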

In Roundcube, the result is a success:

Roundcube Managesieve Dynamic Sieve Host Lookup 

PS: As you can see from the code comments above, the provider is www.infiniroot.com ;-)

 

Install a newer Valgrind version on Ubuntu 14.04 using alternatives
Monday - Sep 3rd 2018 - by - (0 comments)

Although Valgrind is part of the default Ubuntu repositories, the packaged version can sometimes lag behind. In this case a developer required a newer version of Valgrind on an Ubuntu 14.04 server.

The installed version (from the official repos) is 3.10.1:

# dpkg -l|grep valgrind | awk '{print $2" "$3}'
valgrind 1:3.10.1-1ubuntu3~14.5

# valgrind --version
valgrind-3.10.1

The current release (as of this writing) is 3.13.0. So let's get this new version on board! Luckily this is pretty easy on Debian-based systems (like Ubuntu) when using "alternatives".

First download the new release, unpack it, and change into the unpacked folder:

$ wget ftp://sourceware.org/pub/valgrind/valgrind-3.13.0.tar.bz2
$ tar -xjf valgrind-3.13.0.tar.bz2
$ cd valgrind-3.13.0/

Compile the source code:

$ ./configure
$ make

Install the newly compiled files. By default (using ./configure without any parameters) this will install the valgrind binary in /usr/local/bin:

$ sudo make install

At this moment we have two different installations of Valgrind on the system:

# whereis valgrind
valgrind: /usr/bin/valgrind.bin /usr/bin/valgrind /usr/lib/valgrind /usr/bin/X11/valgrind.bin /usr/bin/X11/valgrind /usr/local/bin/valgrind /usr/local/lib/valgrind /usr/include/valgrind /usr/share/man/man1/valgrind.1.gz

As you can see, the first valgrind appearing in the list is /usr/bin/valgrind; somewhat later, /usr/local/bin/valgrind shows up in the list. Now let's tell the system to use an "alternative installation" (hence the name "alternatives") of Valgrind:

$ sudo update-alternatives --install /usr/bin/valgrind valgrind /usr/local/bin/valgrind 1 --force
update-alternatives: using /usr/local/bin/valgrind to provide /usr/bin/valgrind (valgrind) in auto mode

This command tells Ubuntu to use an alternative for /usr/bin/valgrind: it should now use the binary found at /usr/local/bin/valgrind.
To explain this on a file level:

$ ll /usr/bin/valgrind
lrwxrwxrwx 1 root root 26 Sep  3 09:34 /usr/bin/valgrind -> /etc/alternatives/valgrind

/usr/bin/valgrind is now a symlink to /etc/alternatives/valgrind

$ ll /etc/alternatives/valgrind
lrwxrwxrwx 1 root root 23 Sep  3 09:34 /etc/alternatives/valgrind -> /usr/local/bin/valgrind

And /etc/alternatives/valgrind is itself another symlink to the final destination /usr/local/bin/valgrind. From now on, the system uses the new Valgrind version:

$ valgrind --version
valgrind-3.13.0
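Should you ever want to return to the distribution's Valgrind, the alternative can be removed again. Because --force overwrote the packaged /usr/bin/valgrind with a symlink, reinstalling the package afterwards restores the original binary (a sketch):

$ sudo update-alternatives --remove valgrind /usr/local/bin/valgrind
$ sudo apt-get install --reinstall valgrind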

 

Install/Upgrade cmake 3.12.1 on Ubuntu 14.04 using alternatives
Monday - Sep 3rd 2018 - by - (0 comments)

In a previous article, I described how it's possible to Install/Upgrade cmake 3.10.1 in Ubuntu 14.04 using alternatives.

Since then a couple of new versions were released and the same procedure can still be used to install cmake 3.12.1.

Download and compile:

$ wget http://www.cmake.org/files/v3.12/cmake-3.12.1.tar.gz
$ tar -xvzf cmake-3.12.1.tar.gz
$ cd cmake-3.12.1/
$ ./configure
$ make

Make's install command installs cmake by default into /usr/local/bin/cmake; the shared files are installed into /usr/local/share/cmake-3.12.

Now it's time to create a backup, in case you need to roll back to the old version:

$ /usr/local/bin/cmake --version
cmake version 3.10.1

CMake suite maintained and supported by Kitware (kitware.com/cmake).

$ sudo cp -p /usr/local/bin/cmake{,.3.10.1}

$ ll /usr/local/bin/cmake*
-rwxr-xr-x 1 root root 16509675 Dez 22  2017 /usr/local/bin/cmake
-rwxr-xr-x 1 root root 16509675 Dez 22  2017 /usr/local/bin/cmake.3.10.1

To install (copy) the binary and libraries to the new destination, run:

sudo make install

If you haven't previously installed a custom cmake version, run the following command to tell Ubuntu that the cmake command is now provided by an alternative installation:

sudo update-alternatives --install /usr/bin/cmake cmake /usr/local/bin/cmake 1 --force

If you already have a custom cmake version installed (in my case I still had the 3.10.1 version active), the update-alternatives command is not necessary: the make install command simply replaces the existing binary in /usr/local/bin/cmake. This can be verified using:

cmake --version
cmake version 3.12.1

CMake suite maintained and supported by Kitware (kitware.com/cmake).
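And should 3.12.1 ever cause trouble, rolling back is simply a matter of restoring the backup created before:

$ sudo cp -p /usr/local/bin/cmake.3.10.1 /usr/local/bin/cmake
$ cmake --version
cmake version 3.10.1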

 

Change check source in an Icinga 2 distributed master-master setup
Tuesday - Aug 21st 2018 - by - (0 comments)

In my new Icinga 2 architecture I run a distributed setup using a master-master configuration. Both master nodes reside in two different data centers but are connected through an internal LAN. Almost all host and service objects are within the "master" zone, and both master nodes (called icinga1 and icinga2) are used as endpoints for this master zone.

root@icinga1:~# cat /etc/icinga2/zones.conf
object Endpoint "icinga1" {
  host = "icinga1"
}

object Endpoint "icinga2" {
  host = "icinga2"
}

object Zone "master" {
    endpoints = [ "icinga1", "icinga2" ]
}

object Zone "global-templates" {
    global = true
}

object Zone "director-global" {
    global = true
}

Icinga automatically distributes the checks across both endpoints, therefore balancing the checks. Sometimes a check is executed on icinga1, sometimes on icinga2. For most of the checks this turned out to be OK.
But I came across certain checks where I needed to specifically tell Icinga from where/on which node the check must be executed. In this scenario I needed to ping the interface of the central firewall to determine differences in latency between the two locations.

Icinga 2 master-master setup 

In my previous Icinga setup I used a master-satellite setup to "balance" the checks based on the physical location of the servers and achieve a "different view" from both locations. But in the master-master setup the checks are balanced automatically and the graphs contain mixed results from both locations.

So the question is: How can I force a check to be executed on a certain node?

First I tried to create two additional zones called "locationa" and "locationb" and assigned endpoint "icinga1" to "locationa" and endpoint "icinga2" to "locationb" in zones.conf:

object Zone "locationa" {
    endpoints = [ "icinga1" ]
}

object Zone "locationb" {
    endpoints = [ "icinga2" ]
}

And then I moved the two service objects into the new zone folders (/etc/icinga2/zones.d/locationa and /etc/icinga2/zones.d/locationb).
But a config check showed that this didn't work and resulted in the following error:

# /etc/init.d/icinga2 checkconfig
 * checking Icinga2 configuration                                                                                     
information/cli: Icinga application loader (version: r2.8.2-1)
information/cli: Loading configuration file(s).
information/ConfigItem: Committing config item(s).
information/ApiListener: My API identity: icinga1
critical/config: Error: Endpoint 'icinga2' is in more than one zone.
Location: in /etc/icinga2/zones.conf: 5:1-5:30
/etc/icinga2/zones.conf(3): }
/etc/icinga2/zones.conf(4):
/etc/icinga2/zones.conf(5): object Endpoint "icinga2" {
                            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
/etc/icinga2/zones.conf(6):   host = "icinga2"
/etc/icinga2/zones.conf(7): }

critical/config: Error: Endpoint 'icinga1' is in more than one zone.
Location: in /etc/icinga2/zones.conf: 1:0-1:29
/etc/icinga2/zones.conf(1): object Endpoint "icinga1" {
                            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
/etc/icinga2/zones.conf(2):   host = "icinga1"
/etc/icinga2/zones.conf(3): }

critical/config: 2 errors
 * checking Icinga2 configuration. Check '/var/log/icinga2/startup.log' for details.

So back to step one and I started from scratch: RTFM. And indeed, I came across this: "Pin Checks in a Zone".

In case you want to pin specific checks to their endpoints in a given zone you’ll need to use the command_endpoint attribute. This is reasonable if you want to execute a local disk check in the master Zone on a specific endpoint then.

Wow. That sounds like exactly what I need. So I added the "command_endpoint" in the two config files:

# cat /etc/icinga2/zones.d/master/network/FW/firewall-locationa.conf
object Host "firewall-locationa" {
  import "dummy-host"
}

# check ping
object Service "PING FW Interface VLAN X" {
  command_endpoint = "icinga1"
  import "generic-service"
  host_name = "firewall-locationa"
  check_command = "ping"
  vars.ping_address = "192.168.99.1"
}

# cat /etc/icinga2/zones.d/master/network/FW/firewall-locationb.conf
object Host "firewall-locationb" {
  import "dummy-host"
}

# check ping
object Service "PING FW Interface VLAN X" {
  command_endpoint = "icinga2"
  import "generic-service"
  host_name = "firewall-locationb"
  check_command = "ping"
  vars.ping_address = "192.168.99.1"
}

The config check didn't report any errors, so I went ahead.

The check "PING FW Interface VLAN X" on host "firewall-locationb" worked immediately and I could see that "check source" was set to "icinga2" in the UI.
But the same check on "firewall-locationa" resulted in an UNKNOWN state and the output: Endpoint does not accept commands.

But this is actually quite easy to fix. The "command_endpoint" mechanism uses the Icinga 2 API in the background. Because the node icinga2 effectively acts as a satellite (although it's called a master-master setup, the second master is set up like a satellite, simply receiving all configs), it is already configured to accept commands in the API feature:

root@icinga2:~# cat /etc/icinga2/features-enabled/api.conf
/**
 * The API listener is used for distributed monitoring setups.
 */
object ApiListener "api" {
  accept_config = true
  accept_commands = true
}

But this line (accept_commands) was missing on node icinga1. Once I added it and restarted Icinga 2, the check for host "firewall-locationa" was working too.
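For completeness, this is how the api feature on icinga1 looks with the added line (a sketch based on the change just described; any other attributes in the file stay untouched):

root@icinga1:~# cat /etc/icinga2/features-enabled/api.conf
object ApiListener "api" {
  accept_commands = true
}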

With these configs I now have the same ping check running against the same destination, but from two different sources. Thanks to the graphs of the ping checks I can now see the differences in RTA and packet loss between the two locations.

 

Another way to append a text in sed using ampersand
Wednesday - Aug 15th 2018 - by - (0 comments)

I love situations like this, when I accidentally stumble across something which turns out to be cool and pretty useful!

I wanted to replace an umlaut (ö) in a text file with the HTML equivalent (&ouml;):

# cat /tmp/xxx.html
This is a text containing an ö umlaut.
Because in German we use ä ö ü.

For this I wanted to use a simple sed command:

# cat /tmp/xxx.html | sed "s/ö/&ouml;/g"
This is a text containing an öouml; umlaut.
Because in German we use ä öouml; ü.

As you can see above, instead of replacing all ö's, 'ouml;' got appended after each of them.

Turns out that the ampersand (&) has a special meaning in sed: it stands for the whole text matched by the search pattern. In my command above it therefore re-inserted the matched ö and appended the literal 'ouml;' after it.

Practical example:

# cat /tmp/xxx.html | sed "s/text/& I wrote myself/g"
This is a text I wrote myself containing an ö umlaut.
Because in German we use ä ö ü.

Can be quite handy actually!
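Another quick example of the same feature: wrapping each match, here in HTML bold tags (using | as the sed delimiter to avoid escaping the slash in the closing tag):

# cat /tmp/xxx.html | sed "s|ö|<b>&</b>|g"
This is a text containing an <b>ö</b> umlaut.
Because in German we use ä <b>ö</b> ü.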

To achieve my original goal (replace ö with &ouml;), the special ampersand character needs to be escaped:

# cat /tmp/xxx.html | sed "s/ö/\&ouml;/g"
This is a text containing an &ouml; umlaut.
Because in German we use ä &ouml; ü.


 

