
How to force a hard reboot your Sony Xperia Tablet Z
Saturday - Jan 24th 2015 - by - (0 comments)

Yesterday I installed the app "Image Converter" to convert PNG images to JPG. After successfully converting five images, a pop-up appeared in the app asking me to rate it. When I pressed Cancel, my tablet froze. A first.

Not even holding the power button for at least 20 seconds helped. I had to fiddle around a bit to figure out how to do a hard reboot, but luckily I found the right combination on the first try.

Press the POWER + Volume Up buttons together for around 5 seconds. The tablet will then reboot.  


Will I still be root if I install an update on Sony Xperia Android?
Friday - Jan 23rd 2015 - by - (0 comments)

A while ago I wrote about the KitKat issue which prevents (manually) writing data to the SD card (Sony Xperia Tablet Z with Kitkat: Fix for permission denied on sd card). To solve that issue, I installed the towelroot hack to gain root on the Android operating system and then installed the permission fix (see the mentioned post for more details).

Shortly after that, an update for my Sony Xperia Tablet Z was released. I wondered whether the update would remove the previous hacks (fixes, in my eyes) and I would therefore lose root privileges. Will I still be root if I install the Sony update?

(Screenshots: Sony Xperia Z update notification and Sony Xperia Tablet Z version info)

(Screenshots: creating a folder on the SD card of the Sony Xperia Tablet Z, with write permission)

The answer is: Yes. Hurray!

After the update I created a new folder in the SD card mount, as you can see in the screenshots above. And I am still able to become root in the Terminal app.


Overwrite variables in template file with chef template command
Wednesday - Jan 21st 2015 - by - (0 comments)

A typical chef template definition would look like this:

template "/etc/motd" do
  source "motd.erb"
  owner "root"
  group "root"
  mode "0644"
end

While the template itself could look like this:

Hello and welcome to <%= node[:mycookbook][:systemname] %> !

To make this work, the variable [:mycookbook][:systemname] has to be defined somewhere. This could be in several places:

  • In the cookbook's (mycookbook) attribute file (for example attributes/default.rb) as a default attribute definition
  • As a default_attribute definition in a role or node definition (doesn't make sense in an environment definition for a systemname)
  • As an override_attribute in a role, environment or even node definition

But in certain scenarios this can cause headaches, for example when several nodes all run the exact same role and the only difference is the recipe used to distinguish the nodes. When the variable is defined in the attribute files (attributes/host1.rb, attributes/host2.rb, etc.), the variables get mixed up.

cat attributes/host1.rb
default[:mycookbook][:systemname] = "host1"

cat attributes/host2.rb
default[:mycookbook][:systemname] = "host2"

All attribute files of the cookbook are read when chef runs, and because the variable is always defined with the same name (node[:mycookbook][:systemname]), it cannot be guaranteed that the correct value is taken for the system. They overwrite each other in the order in which they are parsed by the chef run.

A fast and easy way to solve this is to set the variables directly in the recipe, within the template command:

template "/etc/motd" do
  source "motd.erb"
  variables( :sysname => "host1" )
  owner "root"
  group "root"
  mode "0644"
end

With the variables line, special variables can be defined (here the variable sysname is set to the value "host1") and sent directly to the template file. The template file, in turn, needs to look a bit different to handle direct input from the template command:

Hello and welcome to <%= @sysname %> !

Now every node has its correct system name in the motd output, because every node uses its own recipe (in which the variable for the sysname is set).
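To make the per-node picture concrete, here is a minimal sketch of two such recipes (the file names recipes/host1.rb and recipes/host2.rb are assumptions for illustration, not from the original post), each passing its own value to the shared template:

```ruby
# recipes/host1.rb (hypothetical) - only run on node host1
template "/etc/motd" do
  source "motd.erb"
  variables( :sysname => "host1" )  # host1's motd gets "host1"
  owner "root"
  group "root"
  mode "0644"
end

# recipes/host2.rb (hypothetical) - only run on node host2
template "/etc/motd" do
  source "motd.erb"
  variables( :sysname => "host2" )  # host2's motd gets "host2"
  owner "root"
  group "root"
  mode "0644"
end
```

Because each node's run list contains only its own recipe, there is no cross-node attribute file to clobber the value.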


Disable AHBL (Abuse Hosts) DNS blocklist in Spamassassin
Tuesday - Jan 20th 2015 - by - (0 comments)

Since the beginning of this month, mails have often been wrongly tagged as spam by Spamassassin because of a wrong lookup in the AHBL DNSBL:

Content analysis details:   (10.0 points, 5.0 required)

 pts rule name              description
---- ----------------------
 2.4 DNS_FROM_AHBL_RHSBL    RBL: Envelope sender listed in dnsbl.ahbl.org
 3.0 CK_DIVERS_BODY         BODY: Mail contents one of the words
 1.4 FUZZY_CREDIT           BODY: Attempt to obfuscate words in spam
 0.7 HTML_TAG_BALANCE_BODY  BODY: HTML has unbalanced "body" tags
 0.0 HTML_MESSAGE           BODY: HTML included in message
 1.1 MIME_HTML_ONLY         BODY: Message only has text/html MIME parts
 1.3 RDNS_NONE              Delivered to internal network by a host with no
 0.0 T_FILL_THIS_FORM_SHORT Fill in a short form with personal information

According to the AHBL website, the DNSBL has stopped its services and this may cause false positives in the lookups:

If you are still using these services, this may cause you to incorrectly tag e-mail as spam, or create other unintended consequences.  Fix and maintain your servers, now.  Do not contact us about 'removing' your domain or IP address from our lists, as there is nothing we can do for you.

OK, the message is clear. Let's maintain the servers.

First of all it is important to know that AHBL is a default DNSBL used by Spamassassin. So that configuration doesn't come from the end user but from Spamassassin itself. This is mentioned on https://wiki.apache.org/spamassassin/DnsBlocklists :

Black Lists

Support for the following DNSBLs is built-in, and shipped in the default configuration.

    AHBL http://www.ahbl.org/

    NJABL http://www.njabl.org/

    SORBS http://www.sorbs.net/

    SPAMCOP http://www.spamcop.net/

    Spamhaus PBL+SBL+XBL http://www.spamhaus.org/ NOTE: Spamhaus is enabled as a "free for most" provider. See: http://www.spamhaus.org/organization/dnsblusage.html.


So the AHBL has to be manually disabled in the default Spamassassin rules. In a Debian installation these can be found in /usr/share/spamassassin. Let's grep for the AHBL:

grep ahbl /usr/share/spamassassin/*
/usr/share/spamassassin/20_dnsbl_tests.cf:header DNS_FROM_AHBL_RHSBL      eval:check_rbl_envfrom('ahbl', 'rhsbl.ahbl.org.')
/usr/share/spamassassin/20_dnsbl_tests.cf:describe DNS_FROM_AHBL_RHSBL    Envelope sender listed in dnsbl.ahbl.org
/usr/share/spamassassin/30_text_de.cf:lang de describe DNS_FROM_AHBL_RHSBL Absenderadresse in Liste von dnsbl.ahbl.org

The following section can be commented or deleted from /usr/share/spamassassin/20_dnsbl_tests.cf:

# Now, single zone BLs follow:

# another domain-based blacklist
header DNS_FROM_AHBL_RHSBL      eval:check_rbl_envfrom('ahbl', 'rhsbl.ahbl.org.')
describe DNS_FROM_AHBL_RHSBL    Envelope sender listed in dnsbl.ahbl.org
tflags DNS_FROM_AHBL_RHSBL      net
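Commenting the rule out can be scripted; here is a sketch, demonstrated on a temporary copy so it is safe to try (on a real Debian system you would point RULEFILE at /usr/share/spamassassin/20_dnsbl_tests.cf, keep a backup, and restart spamassassin afterwards):

```shell
# Sketch: comment out all three AHBL rule lines with sed.
# Demonstrated on a temp file containing the lines from the default ruleset.
RULEFILE=$(mktemp)
cat > "$RULEFILE" <<'EOF'
header DNS_FROM_AHBL_RHSBL      eval:check_rbl_envfrom('ahbl', 'rhsbl.ahbl.org.')
describe DNS_FROM_AHBL_RHSBL    Envelope sender listed in dnsbl.ahbl.org
tflags DNS_FROM_AHBL_RHSBL      net
EOF

# Prefix every header/describe/tflags line of the rule with '#'
sed -i 's/^\(\(header\|describe\|tflags\) DNS_FROM_AHBL_RHSBL\)/#\1/' "$RULEFILE"

# Show the result: all three lines are now commented
grep DNS_FROM_AHBL_RHSBL "$RULEFILE"
```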

IMHO this should be fixed directly in Spamassassin or in the Spamassassin Debian package instead of manually fiddling around in the default rules. But hey - there's already an open bug for this issue: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=774768. According to this bug, the problem could be prevented by regularly updating the Spamassassin rules with a cronjob, which can be enabled by setting "CRON=1" in /etc/default/spamassassin. However, even with a manual launch of "sa-update", no rules were updated.

This leaves only two options:

1) Comment or delete the AHBL from the default rule definition in /usr/share/spamassassin/20_dnsbl_tests.cf or
2) Overwrite the scoring of "DNS_FROM_AHBL_RHSBL" in /etc/spamassassin/local.cf .
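For option 2, a one-line score override in /etc/spamassassin/local.cf neutralizes the rule without touching the shipped rule files (Spamassassin treats a score of 0 as disabling the rule):

```
# /etc/spamassassin/local.cf
score DNS_FROM_AHBL_RHSBL 0
```

This survives package updates better than editing /usr/share/spamassassin directly.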


Bug (cannot concatenate) fixed in new version of check_esxi_hardware plugin
Monday - Jan 19th 2015 - by - (0 comments)

Several users of the Nagios/Monitoring plugin check_esxi_hardware.py have informed me in the past few weeks that the following error is shown when the plugin is used against an ESXi server running on IBM hardware installed from the IBM ESXi ISO image.

Traceback (most recent call last): 
  File "./check_esxi_hardware.py", line 625, in 
   verboseoutput("  Element Name = "+elementName) 
TypeError: cannot concatenate 'str' and 'NoneType' objects

This error is also described in the FAQ and only affected IBM servers which use the IBM ESXi image (regular ESXi installations from VMware worked fine).

Andreas Gottwald has sent me a patch which fixes that issue. Thanks!

Therefore, with today's release, the current version number of check_esxi_hardware.py is now 20150119.


MySQL replication not working - but in SHOW SLAVE STATUS everything is OK
Friday - Jan 16th 2015 - by - (0 comments)

A strange problem hit me recently: a MySQL replication on Solaris zones failed and the slave did not get any new log files from the master anymore.

The slave is of course being monitored (with the Nagios plugin check_mysql_slavestatus.sh) but everything was always OK... until it suddenly became CRITICAL because of the following error:

Last_IO_Error: Got fatal error 1236 from master when reading data from binary log: 'Could not find first log file name in binary log index file'

What happened? It seems that for a couple of days, the replication silently failed and the master and slave didn't communicate correctly with each other anymore. While the master continued to update its binary log files, the slave did not retrieve the changed binary logs from the master. However there was no error indicated in the SHOW SLAVE STATUS output:

mysql> show slave status\G;
*************************** 1. row ***************************
               Slave_IO_State: Waiting for master to send event
                  Master_User: replica
                  Master_Port: 3306
                Connect_Retry: 60
              Master_Log_File: bin.000408
          Read_Master_Log_Pos: 24547311
               Relay_Log_File: relay-log.000330
                Relay_Log_Pos: 4
        Relay_Master_Log_File: bin.000408
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
          Exec_Master_Log_Pos: 24547311
              Relay_Log_Space: 120
                    SQL_Delay: 0
          SQL_Remaining_Delay: NULL
      Slave_SQL_Running_State: Slave has read all relay log; waiting for the slave I/O thread to update it
1 row in set (0.00 sec)

check_mysql_slavestatus reads all these values, and because everything seems to be OK according to the 'show slave status' output, no issues were found.

But the non-working synchronisation could easily be checked by doing a simple write operation on the master and checking the result on the slave. Here I create a new database on the master and then check whether it appears on the slave:

mysql> create database claudiotest;
Query OK, 1 row affected (0.02 sec)

mysql> show master status;
| File       | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set |
| bin.000408 | 47461189 |              |                  |                   |

#On SLAVE nothing was done
[root@slave ~]# ll /var/lib/mysql/ | grep claudio
[root@slave ~]# mysql -e "show databases" | grep claudio

#... and nothing moved either!
mysql> show slave status\G;
*************************** 1. row ***************************
               Slave_IO_State: Waiting for master to send event
                  Master_User: replica
                  Master_Port: 3306
                Connect_Retry: 60
              Master_Log_File: bin.000408
          Read_Master_Log_Pos: 24547311
               Relay_Log_File: relay-log.000331
                Relay_Log_Pos: 4
        Relay_Master_Log_File: bin.000408
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes

So although everything seems to be in order according to the slave status output, nothing was actually done. The slave didn't even get the relevant information from the master, that the master log file position has changed.
This particular MySQL (5.6) replication runs on two virtual Solaris servers (zones), each with two virtual NICs. The replication happens over the secondary interface (backend). I strongly suspect a networking issue or bug of some kind in the operating system, although telnet and ping showed correct communication between master and slave. A restart of the MySQL server on the slave didn't help either.

I finally got the replication working again by using the primary network interface of the zone.
To catch such replication/connectivity issues, I have modified check_mysql_slavestatus with a new check type. The change will be published soon.
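One way to catch such a silent failure can be sketched in shell: compare the master's current binlog position with the position the slave claims to have read. The sample values below are inlined from the outputs in this post; on live servers you would capture them with mysql -e "SHOW MASTER STATUS\G" and mysql -e "SHOW SLAVE STATUS\G" (and this simple comparison assumes master and slave are on the same binlog file):

```shell
# Inlined sample outputs (normally captured from the live servers)
master_status='File: bin.000408
Position: 47461189'
slave_status='Master_Log_File: bin.000408
Read_Master_Log_Pos: 24547311'

# Extract the positions from the \G-style "Key: value" output
master_pos=$(printf '%s\n' "$master_status" | awk -F': ' '/^Position/ {print $2}')
slave_pos=$(printf '%s\n' "$slave_status" | awk -F': ' '/Read_Master_Log_Pos/ {print $2}')

# If the master has moved on but the slave's read position is stuck,
# the replication stalled even though SHOW SLAVE STATUS reports no error.
if [ "$master_pos" -gt "$slave_pos" ]; then
  echo "WARNING: slave is $((master_pos - slave_pos)) bytes behind on the same binlog"
else
  echo "OK: slave has read up to the master's position"
fi
```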


Server does not detect network card anymore (as if the card was gone)
Wednesday - Jan 14th 2015 - by - (0 comments)

In September 2010 I wrote a post about a weird phenomenon on a network interface which somehow seemed "stuck" and which caused the operating system (SLES11) to not recognize the network card anymore. 

Now it's 2015 and something very similar, if not the same, happened to me again.

On a newly racked server I tried to install RHEL7 and booted from the installation dvd. In the graphical installation routine, the network card was not detected. I could confirm this in the BIOS where the network card was not listed under PCI devices:

(Screenshot: no network card detected in the BIOS)

After physical verification on site, the network card was very much there, and two of the four NICs were patched. The NIC LEDs were steady green. Yet the server still didn't recognize the card.

(Photo: steady NIC LED)

I tried a couple of reboots without success, until I unplugged the RJ45 cables. The NIC LEDs went off; I waited a couple of seconds and replugged the cables. Now the LEDs started blinking. I launched another reboot and hey - the network card was finally being "seen" by the server, in the BIOS and in the RHEL installation as well.

(Screenshot: network card detected in the BIOS)

It seems it was a similar issue to the one I experienced back in 2010: the network card somehow froze after a signal from the network cables, so it couldn't communicate with the motherboard anymore. I'm looking forward to hearing some other theories or even a confirmation. :)


Cannot register RHEL 7 server (certificate verify failed) due to wrong time
Tuesday - Jan 13th 2015 - by - (0 comments)

I tried to register a newly installed Red Hat Enterprise Linux 7 server with the subscription-manager command but got the following error:

subscription-manager register
Username: xxx
Password: xxx
Unable to verify server's identity: certificate verify failed

In the rhsm.log file there is the same error with a bit more detail, but unfortunately nothing to point me in the right direction:

cat /var/log/rhsm/rhsm.log
[DEBUG] subscription-manager @connection.py:450 - Making request: GET https://subscription.rhn.redhat.com:443/subscription/users/xxx/owners
[ERROR] subscription-manager @managercli.py:156 - Error during registration: certificate verify failed
[ERROR] subscription-manager @managercli.py:157 - certificate verify failed

After some searching, I came across a Red Hat solution which suggests checking the current system time.

Indeed, after having fixed the time, the registration worked:

date --set="13 JAN 2015 12:20:15"

subscription-manager register
Username: xxx
Password: xxx
The system has been registered with ID: xxx
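The failure mode is easy to reproduce in a sketch: TLS certificate verification requires the system clock to fall inside the certificate's validity window. The dates below are made up for illustration; on a real host the actual window can be read with openssl s_client -connect subscription.rhn.redhat.com:443 piped into openssl x509 -noout -dates.

```shell
# Illustrative validity window (not the real Red Hat certificate dates)
not_before="2014-08-01 00:00:00"
not_after="2016-08-01 00:00:00"
system_time="2012-01-01 00:00:00"   # a badly set clock, years in the past

# Convert to epoch seconds for comparison (GNU date)
nb=$(date -d "$not_before" +%s)
na=$(date -d "$not_after" +%s)
now=$(date -d "$system_time" +%s)

# Verification fails whenever "now" is outside [notBefore, notAfter]
if [ "$now" -lt "$nb" ] || [ "$now" -gt "$na" ]; then
  echo "certificate verify failed (clock outside validity window)"
else
  echo "certificate OK"
fi
```

Once the clock is corrected (as with the date --set command above), "now" falls inside the window and registration succeeds.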


check_esxi_hardware: Support for multiple serial numbers for blade servers
Friday - Jan 9th 2015 - by - (0 comments)

There is a new version for the monitoring plugin check_esxi_hardware available! 

Today's release (version 20150109) allows handling of multiple serial numbers. This makes particular sense when a blade server is checked, because in many cases the serial number of the chassis AND of the blade server is returned.

The new output will now look like this:

OK - Server: Cisco Systems Inc R210-2121605W s/n: XXXXXXXX Chassis S/N: XXXXXXXX  System BIOS: C200.1.4.3h.0.071820120442 2012-07-18

The relevant code modification was already done several months ago, but I only merged it into the master tree today. I'm sorry for the delay.

Big thanks to Helmut Eckstein and Andreas Daubner for testing the new version.  


Install LXC from source on Ubuntu 14.04 Trusty
Wednesday - Nov 26th 2014 - by - (0 comments)

For debugging or testing new upstream versions of LXC, it is handy to install LXC from source and overwrite (if it exists) an LXC version installed through apt.

The following steps explain how to compile and install LXC from source on an Ubuntu 14.04 LTS where the same directory paths (as known from the installation through the package) are used.

Clone the git repository or download the current zip of the master branch:

git clone https://github.com/lxc/lxc.git


wget https://github.com/lxc/lxc/archive/master.zip; unzip master.zip

Install required packages to build LXC:

apt-get install build-essential automake autoconf pkg-config docbook2x libapparmor-dev libselinux1-dev libcgmanager-dev libpython3-dev python3-dev libcap-dev

Change into the cloned or unzipped repository and compile:

if [ -d lxc ]; then cd lxc; elif [ -d lxc-master ]; then cd lxc-master; fi


./configure --prefix=/usr --libdir=/usr/lib/x86_64-linux-gnu --libexecdir=/usr/lib/x86_64-linux-gnu --with-rootfs-path=/usr/lib/x86_64-linux-gnu/lxc --sysconfdir=/etc --localstatedir=/var --with-config-path=/var/lib/lxc --enable-python --enable-doc --disable-rpath --enable-apparmor --enable-selinux --disable-lua --enable-tests --enable-cgmanager --enable-capabilities --with-distro=ubuntu

make; make install

You can then verify that the lxc scripts are built against the new version (as of today the current version in master is 1.1.0-alpha2):

ldd /usr/bin/lxc-start | grep liblxc
        liblxc.so.1 => /usr/lib/x86_64-linux-gnu/liblxc.so.1 (0x00007f2c9e58b000)

ll /usr/lib/x86_64-linux-gnu/liblxc.so.1
lrwxrwxrwx 1 root root 22 Nov 26 13:31 /usr/lib/x86_64-linux-gnu/liblxc.so.1 -> liblxc.so.1.1.0.alpha2

