
Troubleshooting Wondershare Filmora Video Editor Crash
Friday - Aug 28th 2015 - by - (0 comments)

For a couple of months now I've been working on a video to document my daughter growing up. It's a work in progress and when I have the time, I continue to work on the video.

When I needed to choose a video editor, I wondered which one to pick. The video cutting software which came with the HD camera (Panasonic) was too simple and didn't even have basic effects like fades. Back in the days when I worked as a multimedia producer for a couple of months I used Avid, but that's way too expensive (and also too complex) for a simple home video. By chance I came across Wondershare Video Editor, which could be installed for free as a trial. Despite the name, which honestly sounds anything but serious, I got hooked: I was surprised by the interface and the range of effects, and it still featured an "expert" view to combine several multimedia layers in a video.

However, when I recently updated to the latest Wondershare Filmora 6.6.0 (in the meantime Wondershare Video Editor had been renamed to Filmora), the application crashed. At first I thought this was due to the freshly installed update, so I downgraded back to Wondershare Video Editor 5.1.1 - with the same result. The application crashed whenever I tried to enter the "Full Feature Mode" (even for a new project). Only the "Easy Mode" loaded fine.

I then started to keep track of the application log file, which is written to WondershareVideoEditorInstallFolder\log\log.txt.
The end of the log file contained a lot of entries like these:

#2015-08-28 20:53:27#  [MPDecSrc]: Mplayer Get ProgramInfo dMediaLength = 20.000000
#2015-08-28 20:53:27#  [MPDecSrc]: Begin use Mediainfo
#2015-08-28 20:53:27#  MPDecSrc Use Mediainfo timelen,pProgramInfo dMediaLength = 19.966000
#2015-08-28 20:53:27#  MPDEC PrepareDecoder get mediainfo
#2015-08-28 20:53:30#  [MPDecSrc]: Mplayer Get ProgramInfo dMediaLength = 4.000000
#2015-08-28 20:53:30#  [MPDecSrc]: Begin use Mediainfo
#2015-08-28 20:53:30#  MPDecSrc Use Mediainfo timelen,pProgramInfo dMediaLength = 4.022000
#2015-08-28 20:53:30#  MPDEC PrepareDecoder get mediainfo
#2015-08-28 20:53:30#  [MPDecSrc]: Mplayer Get ProgramInfo dMediaLength = 4.000000
#2015-08-28 20:53:30#  [MPDecSrc]: Begin use Mediainfo
#2015-08-28 20:53:30#  MPDecSrc Use Mediainfo timelen,pProgramInfo dMediaLength = 4.022000
#2015-08-28 20:53:30#  MPDEC PrepareDecoder get mediainfo
#2015-08-28 20:53:30#  [MPDecSrc]: Mplayer Get ProgramInfo dMediaLength = 4.000000
#2015-08-28 20:53:30#  [MPDecSrc]: Begin use Mediainfo
#2015-08-28 20:53:30#  MPDecSrc Use Mediainfo timelen,pProgramInfo dMediaLength = 4.048000
#2015-08-28 20:53:30#  MPDEC PrepareDecoder get mediainfo
#2015-08-28 20:53:30#  [MPDecSrc]: Mplayer Get ProgramInfo dMediaLength = 4.000000
#2015-08-28 20:53:30#  [MPDecSrc]: Begin use Mediainfo
#2015-08-28 20:53:30#  MPDecSrc Use Mediainfo timelen,pProgramInfo dMediaLength = 4.022000
#2015-08-28 20:53:30#  MPDEC PrepareDecoder get mediainfo
#2015-08-28 20:53:30#  [MPDecSrc]: Mplayer Get ProgramInfo dMediaLength = 4.040000
#2015-08-28 20:53:30#  [MPDecSrc]: Begin use Mediainfo
#2015-08-28 20:53:30#  MPDecSrc Use Mediainfo timelen,pProgramInfo dMediaLength = 4.066000
#2015-08-28 20:53:30#  MPDEC PrepareDecoder get mediainfo
#2015-08-28 20:53:30#  [MPDecSrc]: Mplayer Get ProgramInfo dMediaLength = 5.280000
#2015-08-28 20:53:30#  [MPDecSrc]: Begin use Mediainfo
#2015-08-28 20:53:30#  MPDecSrc Use Mediainfo timelen,pProgramInfo dMediaLength = 5.300000
#2015-08-28 20:53:30#  MPDEC PrepareDecoder get mediainfo

This seems to indicate a decoder problem - but only the application developers know what exactly these log entries mean...
On my Windows 7 64-bit machine, I had the following codec packs and media players installed:

- X Codec Pack 2.7.1
- Xvid v1.3.0 CVS

I decided to uninstall these completely, reboot and try again.

After the reboot, I tried to open my project, which started Wondershare Video Editor 5.1.1.
After loading for around 1.5 minutes, the loading window disappeared. In the log file I now saw that the program had sent
a crash report "back home" to the Wondershare API:

#2015-08-28 21:05:41#  MPDecSrc Use Mediainfo timelen,pProgramInfo dMediaLength = 1.844000
#2015-08-28 21:05:41#  MPDEC PrepareDecoder get mediainfo
#2015-08-28 21:05:45#  >>>>>>>>>>>>>>>>>>>>>>WSLogInit>>>>>>>>>>>>>>>>>>>>>>>
#2015-08-28 21:05:45#  FLogPath= D:\Program Files\Multimedia\Wondershare Video Editor\Video Editor\Log
#2015-08-28 21:05:45#  gConfig= D:\Program Files\Multimedia\Wondershare Video Editor\Video Editor\VideoEditor.ini
#2015-08-28 21:05:45#  FDownloadFileURL= http://api.wondershare.com/interface.php?m=smtpinfo
#2015-08-28 21:05:46#  XMLPath=D:\Program Files\Multimedia\Wondershare Video Editor\Video Editor\SMTP-xml.txt
#2015-08-28 21:05:46#  Fetch(XMLPath) finished!
#2015-08-28 21:05:48#  SendEmailAPI.Login Success!
#2015-08-28 21:05:48#  GetSubject= T#hidden@claudiokuenzler.com#846#client:Contact from product#No Reply, crash log auto collected for R&D
#2015-08-28 21:05:50#  SendEmailAPI.Send Success!

Now I gave the K-Lite Codec Pack a shot and installed the current version 11.4.0, followed by a reboot.

But the same issue happened. So I uninstalled Wondershare Video Editor again and installed the newer Wondershare Filmora 6.0.1.
Here I was at least able to start the "Full Feature Mode", although it took about 2 minutes to load (for a new, empty project).

When I tried to load my existing project, the program finally started up and it seemed to be loading the project. But after around 2-3 seconds, Filmora crashed and an "Error Report" window appeared where I could send an error report.

Last hope: I started Filmora again and opened a new project in Full Feature Mode. Then I went to Help -> Check for Update and clicked on "Update now" to launch the update.

Filmora Update

So - the update installed and I was back to where I started, with Filmora 6.6.0. A new project in Full Feature Mode could be opened. Would it load my existing project?
To my big surprise: YES! The project loaded again. And I was able to follow the log file to see how far Filmora got with loading the video files used for the video sequences.

Filmora Project

What is strange, however, is that the 6.6.0 update automatically installed itself into a completely different path than the original installation.
Original path was D:\Program Files\Multimedia\Filmora.
New path after I launched the update from 6.0.1 to 6.6.0 was C:\Program Files (x86)\Wondershare\Filmora.

But that's most likely not the source of the problem. So far I am able to continue working on my project with Filmora 6.6.0, but it's very slow - probably because there are many huge (Full HD) source video files to read in every time, with thumbnails and previews to create from them, which seems to be too much for the program. To handle this, I have decided to continue with a new project, put the next few months in there (splitting the whole video into several projects) and, at the end, combine the projects (without having to do any further cutting) into one final video.

 

Icinga 2: Advanced usage of arrays/dictionaries for monitoring of partitions
Friday - Aug 21st 2015 - by - (0 comments)

In the previous post (Using arrays in Icinga 2 custom attributes to monitor partitions) I wrote about how arrays can be used in custom attributes which are then used by apply rules.

Now I went a step further and created dictionaries (different sub-values in each array entry) in the custom attributes. The goal in this case was to be able to quickly define new warning and critical thresholds for each partition - if wanted.

To summarize the first post, the host object contained an array "vars.partitions" as a custom attribute which defined the partitions to be monitored:

object Host "linux-host" {
  import "generic-host"
  address = "192.168.1.45"
  vars.os = "Linux"
  # Define partitions of this host:
  vars.partitions = [ "/", "/var", "/srv" ]
}

An apply rule then read the array and applied a service object for each value of this array (therefore three different partition checks):

apply Service "Diskspace " for (partition in host.vars.partitions) {
  import "generic-service"
  check_command = "nrpe"
  vars.nrpe_command = "check_disk"
  vars.nrpe_arguments = [ "15%", "5%", partition ]

  assign where host.address && host.vars.os == "Linux"
}

Great so far. But there is one minor issue: all of these partitions now use the fixed NRPE arguments 15% and 5% as warning and critical thresholds. Wouldn't it be nice (if we were older... lalala) to define thresholds on the fly which overwrite the defaults? Yes, it would...

Dictionaries (and a lot of trial and error) to the rescue! By rewriting the vars.partitions array with a dictionary of values for each partition, additional parameters can be set:

object Host "linux-host" {
  import "generic-host"
  address = "192.168.1.45"
  vars.os = "Linux"
  # Define partitions of this host:
  vars.partitions.slash = { mountpoint = "/" }
  vars.partitions.slashtmp = { mountpoint = "/tmp" }
  vars.partitions.slashsrv = { mountpoint = "/srv", warn = "95%", crit = "90%" }
  vars.partitions.slashvar = { mountpoint = "/var" }
}

For each partition I created a custom attribute "vars.partitions.partitionname" which itself contains several variables.
The mountpoint variable is mandatory, and as you can see only one partition (/srv) defines special warning and critical thresholds.
Note: If you wonder why I added the word "slash" to every attribute name, just try it with "vars.partitions.var" and you will find out, as I did, that Icinga really doesn't like the word var appearing there...
For Icinga itself the "vars.partitions" attribute is still iterable like an array, so I am still able to use one and the same apply rule for all partitions found in it:

apply Service "Apply-Diskchecks" for (partition_name => config in host.vars.partitions) {
  import "generic-service"

  vars += config
  if (!vars.warn) { vars.warn = "15%" }
  if (!vars.crit) { vars.crit = "5%" }

  display_name = "Diskspace " + vars.mountpoint
  check_command = "nrpe"
  vars.nrpe_command = "check_disk"
  vars.nrpe_arguments = [ vars.warn, vars.crit, vars.mountpoint ]

  assign where host.address && host.vars.os == "Linux"
  ignore where host.vars.applyignore.partitions == true
}

OK, now it gets more complicated, but hear me out, it's worth it!
First, the definition of the apply rule, which I named "Apply-Diskchecks", runs once for each partition found in "host.vars.partitions" (as defined in the Host object above).
In the "partition_name => config" syntax, partition_name iterates over the keys of the dictionary (in this case slash, slashtmp, slashsrv and slashvar), while config holds the dictionary stored under each key.
Now to something important: the statement "vars += config" merges each partition's dictionary into the service's custom attributes. Ergo, if I use vars.mountpoint, it is in reality vars.partitions.slash.mountpoint while we're in the loop iteration for the slash partition.
Right after that I set the default thresholds, in case no thresholds were set within the dictionary.
The display_name is a combination of the string "Diskspace " followed by the value of vars.mountpoint.
In vars.nrpe_arguments I now submit the dynamically created values of vars.warn, vars.crit and vars.mountpoint. Which means: if I didn't set special thresholds, the defaults defined a few lines above are applied. vars.mountpoint is a mandatory variable (as I wrote above), so this always comes from the custom attributes of the host object itself.
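The merge-then-default logic of the apply rule can be mimicked in a few lines of Python (a sketch of the behaviour only; Icinga 2 itself evaluates the DSL, and the dictionary below simply repeats the host object from above):

```python
# Stand-in for host.vars.partitions from the Host object above.
partitions = {
    "slash":    {"mountpoint": "/"},
    "slashtmp": {"mountpoint": "/tmp"},
    "slashsrv": {"mountpoint": "/srv", "warn": "95%", "crit": "90%"},
    "slashvar": {"mountpoint": "/var"},
}

services = {}
for partition_name, config in partitions.items():
    svc_vars = dict(config)             # corresponds to: vars += config
    svc_vars.setdefault("warn", "15%")  # if (!vars.warn) { vars.warn = "15%" }
    svc_vars.setdefault("crit", "5%")   # if (!vars.crit) { vars.crit = "5%" }
    display_name = "Diskspace " + svc_vars["mountpoint"]
    services[display_name] = [svc_vars["warn"], svc_vars["crit"],
                              svc_vars["mountpoint"]]

print(services["Diskspace /srv"])  # ['95%', '90%', '/srv']
print(services["Diskspace /var"])  # ['15%', '5%', '/var']
```

Only /srv ends up with its own thresholds; every other partition falls back to 15%/5%, exactly as the apply rule intends.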

The thresholds and the actual NRPE arguments can be checked and verified in the Icinga Classic UI (and in the newer Icingaweb2 as well):

Icinga 2 custom attributes used for partition checks

As you see, the /srv partition contains the special thresholds defined in the host object. Success!

The NRPE check command on the server to be monitored looks like this by the way:

command[check_disk]=/usr/lib/nagios/plugins/check_disk -w $ARG1$ -c $ARG2$ -W $ARG1$ -K $ARG2$ -p $ARG3$
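How the three values in vars.nrpe_arguments land in that command can be sketched with a simplified stand-in for NRPE's $ARGn$ substitution (the values are the special /srv thresholds from the host object above):

```python
def expand_nrpe(template, args):
    """Replace $ARG1$..$ARGn$ placeholders the way NRPE does (simplified)."""
    for i, arg in enumerate(args, start=1):
        template = template.replace("$ARG%d$" % i, arg)
    return template

# The check_disk command definition from nrpe.cfg above.
template = ("/usr/lib/nagios/plugins/check_disk "
            "-w $ARG1$ -c $ARG2$ -W $ARG1$ -K $ARG2$ -p $ARG3$")

# vars.nrpe_arguments for /srv with its special thresholds:
print(expand_nrpe(template, ["95%", "90%", "/srv"]))
# /usr/lib/nagios/plugins/check_disk -w 95% -c 90% -W 95% -K 90% -p /srv
```

Note that $ARG1$ and $ARG2$ each appear twice, so one threshold pair covers both the space (-w/-c) and inode (-W/-K) limits.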

 

Using arrays in Icinga 2 custom attributes to monitor partitions
Thursday - Aug 20th 2015 - by - (0 comments)

In the past few weeks I've been heavily involved with Icinga 2, including consulting at an external company and a couple of new setups. Thanks to the custom attributes which exist in Icinga 2, a lot of programmatic flexibility has become possible.

Today I wondered whether I could use arrays in custom attributes to monitor partitions. Let's take the following example: we have a Linux host on which the following partitions should be monitored: /, /tmp, /var and /srv.

Now I could of course define a separate service check for each single partition like this:

# check disk /
object Service "Diskspace /" {
  import "generic-service"
  host_name = "linux-host"
  check_command = "nrpe"
  vars.nrpe_command = "check_disk"
  vars.nrpe_arguments = [ "15%", "5%", "/" ]
}

# check disk /tmp
object Service "Diskspace /tmp" {
  import "generic-service"
  host_name = "linux-host"
  check_command = "nrpe"
  vars.nrpe_command = "check_disk"
  vars.nrpe_arguments = [ "15%", "5%", "/tmp" ]
}

[...] and so on

This works, but it means that the service checks for every partition need to be defined for each host.

Another possibility would be to generally use Apply Rules to "force" all partition checks on all Linux hosts like this:

# check disk /
apply Service "Diskspace /" {
  import "generic-service"
  check_command = "nrpe"
  vars.nrpe_command = "check_disk"
  vars.nrpe_arguments = [ "15%", "5%", "/" ]

  assign where host.address && host.vars.os == "Linux"
  ignore where host.vars.applyignore.linuxdisk.slash == true
}

# check disk /tmp
apply Service "Diskspace /tmp" {
  import "generic-service"
  check_command = "nrpe"
  vars.nrpe_command = "check_disk"
  vars.nrpe_arguments = [ "15%", "5%", "/tmp" ]

  assign where host.address && host.vars.os == "Linux"
  ignore where host.vars.applyignore.linuxdisk.tmp == true
}

[...] and so on

This works, too. But here we have the problem that some directories (for example /usr) are not separate partitions on every host, so their check would effectively monitor the same filesystem as the root (/) partition. Double alerts and confusion will happen when a warning or critical threshold is reached. Naah, I don't want that.

With arrays in custom attributes I found another way. With a new custom attribute added to the host definition, the partitions of this particular host can be set:

object Host "linux-host" {
  import "generic-host"
  address = "192.168.1.45"
  vars.os = "Linux"
  # Define partitions of this host:
  vars.partitions = [ "/", "/var", "/srv" ]
}

The apply rule which uses host.vars.partitions then looks like this:

apply Service "Diskspace " for (partition in host.vars.partitions) {
  import "generic-service"
  check_command = "nrpe"
  vars.nrpe_command = "check_disk"
  vars.nrpe_arguments = [ "15%", "5%", partition ]

  assign where host.address && host.vars.os == "Linux"
}

The final result in Icinga 2 looks like this:

Icinga 2 Custom Attributes used for partition checks 

Icinga 2 automatically applies three "Diskspace" checks on the host, for which I only defined the /, /var and /srv partitions.
It is also very nice that the service name automatically gets the partition variable appended (hence the trailing space character in "Diskspace ").
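The name generation can be sketched in one line of Python (just an illustration of the string concatenation, with the same array as in the host object):

```python
# Stand-in for host.vars.partitions from the Host object above.
partitions = ["/", "/var", "/srv"]

# "Diskspace " (with the trailing space) + partition, as in the apply rule.
service_names = ["Diskspace " + partition for partition in partitions]

print(service_names)  # ['Diskspace /', 'Diskspace /var', 'Diskspace /srv']
```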

There are most likely other ways to handle this, but it's nice to know that there are multiple choices to achieve the goal.

 

SLES 11: Where is my beloved htop?
Thursday - Aug 6th 2015 - by - (0 comments)

I started my Linux career more than 10 years ago with SUSE Linux and SuSE Linux Enterprise Server (SLES). In more recent years I have switched more and more to Debian and Ubuntu for Linux servers.

However, in my new job I come across SLES servers again, and although zypper is an OK alternative to Debian's apt, the official, supported repositories still lack a lot of packages.

For example htop. Wait, what?! htop? It's not part of the official repo? Meh...

In order to install htop on SLES 11, one has to add a third-party repository first. For this I chose the server:monitoring repository.

zypper ar http://download.opensuse.org/repositories/server:/monitoring/SLE_11_SP3/server:monitoring.repo
Adding repository 'Server Monitoring Software (SLE_11_SP3)' [done]
Repository 'Server Monitoring Software (SLE_11_SP3)' successfully added
Enabled: Yes
Autorefresh: No
GPG check: Yes
URI: http://download.opensuse.org/repositories/server:/monitoring/SLE_11_SP3/

zypper se htop
Refreshing service 'SDKs'.

New repository or package signing key received:
Key ID: A5C23697EE454F98
Key Name: server:monitoring OBS Project <server:monitoring@build.opensuse.org>
Key Fingerprint: 8F3BC8EFF549CDCDA918D981A5C23697EE454F98
Key Created: Fri Apr 18 18:35:12 2014
Key Expires: Sun Jun 26 18:35:12 2016
Repository: Server Monitoring Software (SLE_11_SP3)

Do you want to reject the key, trust temporarily, or trust always? [r/t/a/? shows all options] (r): a
Building repository 'Server Monitoring Software (SLE_11_SP3)' cache [done]
Loading repository data...
Reading installed packages...

S | Name             | Summary                              | Type      
--+------------------+--------------------------------------+-----------
  | htop             | Interactive Process Viewer for Linux | package   
  | htop             | Interactive Process Viewer for Linux | srcpackage
  | htop-debuginfo   | Debug information for package htop   | package   
  | htop-debugsource | Debug sources for package htop       | package   

zypper in htop
Refreshing service 'SDKs'.
Loading repository data...
Reading installed packages...
Resolving package dependencies...

The following NEW package is going to be installed:
  htop

1 new package to install.
Overall download size: 83.0 KiB. After the operation, additional 179.0 KiB will be used.
Continue? [y/n/? shows all options] (y): y
Retrieving package htop-1.0.3-1.1.x86_64 (1/1), 83.0 KiB (179.0 KiB unpacked)
Retrieving: htop-1.0.3-1.1.x86_64.rpm [done]
Installing: htop-1.0.3-1.1 [done]

Go the extra mile...

 

Read or count number of lines of an Excel xlsx file in Linux on the cli
Wednesday - Jul 29th 2015 - by - (0 comments)

I needed to figure out how many lines an Excel XLSX file contained, in order to use the number of lines for a division by a second number. The result of the division would then decide how often a script needs to run.

So to write it down more clearly:

result=$(( numlinesExcelSheet / divisor ))
i=0
while [ $i -lt $result ] ; do /tmp/runscript.sh; let i++; done

Got it? OK!

But how can I find out the number of lines of the Excel sheet? An xlsx file is not a plain text file (it is actually a zipped collection of XML documents), so I cannot use a simple command such as "wc" on it directly.
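As a quick aside, because an xlsx file is just a zip archive of XML parts, a row count is even possible with nothing but the Python standard library (a sketch with a synthetic demo workbook; real files from Excel contain more parts, but xl/worksheets/sheet1.xml is the part holding the rows):

```python
import re
import tempfile
import zipfile

def count_rows(xlsx_path, sheet="xl/worksheets/sheet1.xml"):
    """Count the <row> elements stored in one worksheet of an xlsx file."""
    with zipfile.ZipFile(xlsx_path) as zf:
        xml = zf.read(sheet).decode("utf-8")
    return len(re.findall(r"<row[\s>]", xml))

# Build a minimal stand-in workbook with 7 rows to demonstrate.
demo = tempfile.NamedTemporaryFile(suffix=".xlsx", delete=False).name
rows = "".join('<row r="%d"><c t="str"><v>x</v></c></row>' % i
               for i in range(1, 8))
with zipfile.ZipFile(demo, "w") as zf:
    zf.writestr("xl/worksheets/sheet1.xml",
                "<worksheet><sheetData>%s</sheetData></worksheet>" % rows)

print(count_rows(demo))  # 7, matching the example sheet
```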

During my research I came across two possibilities (there are probably even more) which are easy to install and deploy. I will explain how to use them to quickly read the number of lines of the Excel sheet.

But first, let's create an XLSX file with some basic content:

XLSX File

The Perl way

I came across the CPAN perl module Spreadsheet::XLSX, which seems to do the job. As a requirement I needed to install the perl module Text::Iconv (the perl-Text-Iconv package on openSUSE, libtext-iconv-perl on Debian/Ubuntu). Then I installed the Spreadsheet::XLSX perl module through cpan:

cpan[1]> install Spreadsheet::XLSX

By running the example script, shown in the synopsis, the output looks like this:

# perl xlsx.pl
Sheet: Sheet1
( 0 , 0 ) => sfd
( 1 , 0 ) => fdasfd
( 2 , 0 ) => dsfafd
( 3 , 0 ) => dfadfw
( 4 , 0 ) => afda
( 5 , 0 ) => asdf
( 6 , 0 ) => test

The only bummer here is that the rows are counted in typical array manner, starting at 0. So the $sheet -> {MaxRow} variable actually outputs 6, instead of the 7 rows seen in the screenshot.

To handle this, I slightly modified the perl script to simply use $sheet -> {MaxRow} + 1:

# cat test.pl
#!/usr/bin/perl
use Text::Iconv;
 my $converter = Text::Iconv -> new ("utf-8", "windows-1251");
 
 # Text::Iconv is not really required.
 # This can be any object with the convert method. Or nothing.

 use Spreadsheet::XLSX;
 
 my $excel = Spreadsheet::XLSX -> new ('test.xlsx', $converter);
 
 foreach my $sheet (@{$excel -> {Worksheet}}) {
 
        printf("Number of lines: %d\n", $sheet -> {MaxRow} + 1);
       
 }

# perl test.pl
Number of lines: 7

 

The Bash / Shell way

In Bash there is, to my knowledge, no module or extension which can directly read the contents of an Excel file. There is, however, an interesting package which can be installed (at least on Debian and Ubuntu): xlsx2csv.

With this tool, an Excel xlsx file can be converted into a CSV. A CSV is a normal text file, so the number of lines can simply be counted:

# xlsx2csv test.xlsx test.csv

# cat test.csv
sfd
fdasfd
dsfafd
dfadfw
afda
asdf
test

# cat test.csv  | wc -l
7

 

What now?

Both ways work, that's the good news! The perl way seemed faster to me (because converting from xlsx to csv and then reading the lines takes longer), but I ran into problems trying to read the number of lines of a large Excel file (26MB). The following error was shown thousands of times:

Use of uninitialized value $t in concatenation (.) or string at /usr/lib/perl5/site_perl/5.20.1/Spreadsheet/XLSX.pm line 49.

Eventually, after running for more than 5 minutes, the process was killed:

# time perl test.pl
[...]
Use of uninitialized value $t in concatenation (.) or string at /usr/lib/perl5/site_perl/5.20.1/Spreadsheet/XLSX.pm line 49.
Killed

real    5m12.927s
user    2m49.066s
sys    0m29.516s

On the other hand, the bash way also worked fine with the same large Excel file:

# time xlsx2csv test2.xlsx test2.csv; time wc -l test2.csv

real    1m1.378s
user    1m1.133s
sys    0m0.231s
170768 test2.csv

real    0m0.018s
user    0m0.001s
sys    0m0.017s

So for my scenario I'll choose the xlsx2csv command.

 

fatal: Access denied for user by PAM account configuration
Wednesday - Jul 29th 2015 - by - (0 comments)

Today I got a strange ssh problem which got me scratching my head a couple of times. 

On a CentOS 5 server I tried to set up an ssh key exchange for ssh logins. The key was correctly installed and the permissions on .ssh and the authorized_keys file were set correctly.

But as soon as I tried to log in from the remote machine, I got the following error:

$ ssh nagios@centosmachine
Connection closed by centosmachine

On the centosmachine, I followed the logs and in /var/log/secure the following error messages were logged:

Jul 29 08:24:14 centosmachine sshd[9827]: pam_access(sshd:account): access denied for user `nagios' from `nagiosserver'
Jul 29 08:24:14 centosmachine sshd[9828]: fatal: Access denied for user nagios by PAM account configuration

At first I suspected a missing "AllowUsers" entry in /etc/ssh/sshd_config, but there were no such entries, meaning all local users should be allowed. I also tested whether I could locally switch to the nagios user and simulate a login, which worked fine. So there were no permission problems on the home directory either.

Eventually I came across a blog entry on andyhan.net. He had a similar issue a while ago, and his post pointed me to the correct file: /etc/security/access.conf.

I compared this file with other CentOS servers which nagios was able to connect to, and indeed, the following line was missing:

+ : nagios : nagiosserver

As soon as I added this line, allowing the nagios user coming from nagiosserver, the nagios user was able to connect via ssh again.
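For context: pam_access processes access.conf top-down and stops at the first matching rule, so on a locked-down server the allow line has to come before the final deny-all rule. A sketch of such a file (an illustration, not the actual file from the affected server) could look like this:

```
# /etc/security/access.conf (sketch)
# allow the nagios user, but only from the monitoring host
+ : nagios : nagiosserver
# deny everybody else
- : ALL : ALL
```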

 

Improving Gmail spam filter: All it needs is a barking Linus Torvalds.
Wednesday - Jul 22nd 2015 - by - (0 comments)

A while ago, back in December 2013, I wrote a post about Gmail's spam filter getting worse by the day (see Dear Gmail: What is going on with your spam filter?!).

In the past few months the spam filter got to a point where 20-30% of the declared spam were actually real mails (ham). Mailing list e-mails especially seemed to be targeted by the spam filter; Suricata (OISF) and Tomcat mailing list mails seemed to be its favourites to tag as spam.

Now, Linus Torvalds to the rescue! In his public Google Plus post on July 17th 2015, he sharply attacked the Google Mail team:

Of the roughly 1000 spam threads I've gone through so far, right now 228 threads were incorrectly marked as spam.
That's not the 0.1% false positive rate you tried to make such a big deal about last week. [...]
You dun goofed. Badly. Get your shit together, because a 20% error rate for spam detection is making your spam filter useless.

Yeah, some might think that his words are not very nice. That's classic Linus. But guess what? It actually worked. His post made noise, and the Gmail team (in the form of Google employee Sri Somanchi) responded and confirmed they had improved the filters.

I waited a couple of days to make sure. And really: since Sri claimed that the Gmail spam filters were adapted, I have had no ham mails in the spam folder anymore. Not even a single mailing list thread was tagged as spam. On the other hand, I didn't get any spam into my inbox either.

So thank you Linus for making noise, when it's necessary!

Update July 30th 2015: And we're back to the way it was before. Today alone, at least 4 mails from the OISF mailing list were tagged as spam - the same amount as the actual amount of real spam.

 

pptp client on Linux: Disable crazy anon logging (callmgr)
Tuesday - Jul 21st 2015 - by - (0 comments)

When I needed to transfer some data to an offsite location over a PPTP VPN, I followed these two tutorials to get the connection running on my Debian Wheezy server:

  • http://websistent.com/how-to-configure-a-linux-pptp-vpn-client/
  • http://www.vionblog.com/debian-pptp-client-configuration/

Basically the steps are the following.

Install pptp client:

apt-get install pptp-linux

Then enter the vpn credentials in /etc/ppp/chap-secrets:

myvpnuser      PPTP      myvpnpass     *

Then create a new config file for the VPN connection in the /etc/ppp/peers folder. Here I used vpnconn1 as name (/etc/ppp/peers/vpnconn1):

pty "pptp ip.address.remote.site --nolaunchpppd"
name myvpnuser
remotename PPTP
require-mppe-128
file /etc/ppp/options.pptp
maxfail 0
persist
ipparam vpnconn1

pty: The command line which launches the pptp client and therefore the connection. ip.address.remote.site is of course the IP address or DNS name of the VPN server.

name: The username again, which must be the same as defined in chap-secrets.

ipparam: Use the same name as your VPN connection (vpnconn1).

After that I manually launched the VPN connection with the following command:

pppd call vpnconn1

In /var/log/syslog the following entries appeared:

Jul 21 11:27:01 irnsrvp01 pppd[117523]: pppd 2.4.5 started by root, uid 0
Jul 21 11:27:01 irnsrvp01 pppd[117523]: Using interface ppp0
Jul 21 11:27:01 irnsrvp01 pppd[117523]: Connect: ppp0 <--> /dev/pts/57
Jul 21 11:27:01 irnsrvp01 pptp[117527]: anon log[main:pptp.c:314]: The synchronous pptp option is NOT activated
Jul 21 11:27:01 irnsrvp01 pptp[117553]: anon log[ctrlp_rep:pptp_ctrl.c:251]: Sent control packet type is 1 'Start-Control-Connection-Request'
Jul 21 11:27:01 irnsrvp01 pptp[117553]: anon log[ctrlp_disp:pptp_ctrl.c:739]: Received Start Control Connection Reply
Jul 21 11:27:01 irnsrvp01 pptp[117553]: anon log[ctrlp_disp:pptp_ctrl.c:773]: Client connection established.
Jul 21 11:27:02 irnsrvp01 pptp[117553]: anon log[ctrlp_rep:pptp_ctrl.c:251]: Sent control packet type is 7 'Outgoing-Call-Request'
Jul 21 11:27:02 irnsrvp01 pptp[117553]: anon log[ctrlp_disp:pptp_ctrl.c:858]: Received Outgoing Call Reply.
Jul 21 11:27:02 irnsrvp01 pptp[117553]: anon log[ctrlp_disp:pptp_ctrl.c:897]: Outgoing call established (call ID 0, peer's call ID 1640).
Jul 21 11:27:06 irnsrvp01 pppd[117523]: CHAP authentication succeeded
Jul 21 11:27:06 irnsrvp01 pppd[117523]: MPPE 128-bit stateless compression enabled
Jul 21 11:27:09 irnsrvp01 pppd[117523]: local  IP address 10.0.0.11
Jul 21 11:27:09 irnsrvp01 pppd[117523]: remote IP address 10.0.0.10

Success! The VPN connection was established. 

But then the craziness started! Several times per second I got log entries like these:

Jul 21 11:31:14 irnsrvp01 pptp[98247]: anon fatal[open_callmgr:pptp.c:487]: Call manager exited with error 256
Jul 21 11:31:14 irnsrvp01 pptp[98259]: anon log[main:pptp.c:314]: The synchronous pptp option is NOT activated
Jul 21 11:31:14 irnsrvp01 pptp[98260]: anon warn[open_inetsock:pptp_callmgr.c:329]: connect: Connection refused
Jul 21 11:31:14 irnsrvp01 pptp[98260]: anon fatal[callmgr_main:pptp_callmgr.c:127]: Could not open control connection to ip.address.remote.site

Altogether, pptp logged more than 99k lines into /var/log/syslog. Now if you use OSSEC on that server - and I do - you can imagine how many alert e-mails you get. I stopped counting after the 1500th alert e-mail.

The question now is: how can I tell pptp to stop logging? Eventually I found a very old mailing list post (from 12 years ago!) in which a command line parameter (--loglevel) is mentioned:

should allow you to reduce the verbosity of logging by adding the option "--loglevel 0" to your pptp command line.

As I described above, the command line options are actually defined in the VPN connection's config file (/etc/ppp/peers/vpnconn1) in the "pty" line.

# cat /etc/ppp/peers/vpnconn1
pty "pptp ip.address.remote.site --nolaunchpppd --loglevel 0"
name myvpnuser
remotename PPTP
require-mppe-128
file /etc/ppp/options.pptp
maxfail 0
persist
ipparam vpnconn1

By adding the "--loglevel 0" option to that line, the crazy logging stopped and only a few "Echo Reply received" entries appeared from time to time.

 

Mount a volume with different ownership permissions with bindfs
Thursday - Jul 16th 2015 - by - (0 comments)

Today a colleague asked me whether it is possible to mount a volume so that certain ownerships (in this case the www-data owner) are automatically mapped to his own user, allowing him to modify files belonging to www-data.

One way to solve the write permission issue is to add the local user to the www-data group and change the permissions so that the www-data group is able to write. However, this requires that the www-data group also exists on the local host, not only inside the container. For www-data this is the case (because it is a fixed part of an Ubuntu installation), but for special users this won't work, or requires further modifications.

Another possibility is to use bindfs. Bindfs allows you to "remap" ownerships and creates a virtual mount point.

sudo bindfs --map=www-data/claudio /var/lib/lxc/container001/rootfs /tmp/container001

The above command explained:

--map: Allows "remapping" of the ownership. In this example, the www-data owner is rewritten to the user claudio (meaning: claudio becomes the new owner on the bindfs-mounted path)

/var/lib/lxc/container001/rootfs: The original folder/path

/tmp/container001: The destination path, where the remapping happens. This is a virtual path and the mountpoint (/tmp/container001) must exist.

So if I look into the original folder, the owner is www-data for the cache files of this web application:

ll /var/lib/lxc/container001/rootfs/srv/webapp/cache/
total 40
-rw-r--r-- 1 www-data 1005   469 Apr  2 13:39 clear_cache.sh
drwxr-xr-x 5 www-data 1005  4096 Apr  2 13:39 doctrine
drwxr-xr-x 6 www-data 1005  4096 Jul 10 11:55 general
drwxr-xr-x 2 www-data 1005  4096 Apr  2 13:40 html
drwxr-xr-x 4 www-data 1005  4096 Apr  2 13:39 mpdf
drwxr-xr-x 2 www-data 1005  4096 Apr  2 13:40 productexport
drwxr-xr-x 2 www-data 1005 12288 Jul 11 09:10 proxies
drwxr-xr-x 3 www-data 1005  4096 Jul 10 11:55 templates

In the bindfs mountpoint, where the ownerships were remapped, the file structure looks like this:

ll /tmp/container001/srv/webapp/cache/
total 40
-rw-r--r-- 1 claudio 1005   469 Apr  2 13:39 clear_cache.sh
drwxr-xr-x 5 claudio 1005  4096 Apr  2 13:39 doctrine
drwxr-xr-x 6 claudio 1005  4096 Jul 10 11:55 general
drwxr-xr-x 2 claudio 1005  4096 Apr  2 13:40 html
drwxr-xr-x 4 claudio 1005  4096 Apr  2 13:39 mpdf
drwxr-xr-x 2 claudio 1005  4096 Apr  2 13:40 productexport
drwxr-xr-x 2 claudio 1005 12288 Jul 11 09:10 proxies
drwxr-xr-x 3 claudio 1005  4096 Jul 10 11:55 templates

So now my local user "claudio" is able to modify files in the bindfs mount which belong to www-data in the original path:

claudio@mymachine:~$ touch /tmp/container001/srv/webapp/cache/claudiofile.php

claudio@mymachine:~$ ll /tmp/container001/srv/webapp/cache/
total 40
-rw-rw-r-- 1 claudio claudio     0 Jul 16 11:39 claudiofile.php
-rw-r--r-- 1 claudio  1005   469 Apr  2 13:39 clear_cache.sh
drwxr-xr-x 5 claudio  1005  4096 Apr  2 13:39 doctrine
drwxr-xr-x 6 claudio  1005  4096 Jul 10 11:55 general
drwxr-xr-x 2 claudio  1005  4096 Apr  2 13:40 html
drwxr-xr-x 4 claudio  1005  4096 Apr  2 13:39 mpdf
drwxr-xr-x 2 claudio  1005  4096 Apr  2 13:40 productexport
drwxr-xr-x 2 claudio  1005 12288 Jul 11 09:10 proxies
drwxr-xr-x 3 claudio  1005  4096 Jul 10 11:55 templates

In the original path the file was created, too, but belongs to www-data:

ll /var/lib/lxc/container001/rootfs/srv/webapp/cache/
total 40
-rw-rw-r-- 1 www-data claudio     0 Jul 16 11:39 claudiofile.php
-rw-r--r-- 1 www-data  1005   469 Apr  2 13:39 clear_cache.sh
drwxr-xr-x 5 www-data  1005  4096 Apr  2 13:39 doctrine
drwxr-xr-x 6 www-data  1005  4096 Jul 10 11:55 general
drwxr-xr-x 2 www-data  1005  4096 Apr  2 13:40 html
drwxr-xr-x 4 www-data  1005  4096 Apr  2 13:39 mpdf
drwxr-xr-x 2 www-data  1005  4096 Apr  2 13:40 productexport
drwxr-xr-x 2 www-data  1005 12288 Jul 11 09:10 proxies
drwxr-xr-x 3 www-data  1005  4096 Jul 10 11:55 templates
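To make such a remapping survive a reboot, the bindfs mount can also be defined in /etc/fstab. The entry below is a sketch based on the command above; check the bindfs man page of your installed version for the exact mount option syntax:

```
# /etc/fstab entry (sketch): remap www-data to claudio on the bindfs mountpoint
/var/lib/lxc/container001/rootfs  /tmp/container001  fuse.bindfs  map=www-data/claudio  0  0
```

To remove the mapping again, simply unmount the mountpoint (fusermount -u /tmp/container001, or umount /tmp/container001 as root).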



 

Samsung Galaxy S5 (G901F): Hello CM 12.1, bye bye Touchwiz! Install guide.
Wednesday - Jul 15th 2015 - by - (1 comments)

In a previous post (Samsung Galaxy S5 (G901F): Pain to install custom recovery or Cyanogenmod) I described the difficulties of installing a custom recovery image or another ROM (CyanogenMod in this case). The main problem is that the Samsung Galaxy S5 with the Samsung model number G-901F is not a klte device, for which CyanogenMod downloads are available, but a kccat6 device.

In the CyanogenMod forums, someone pointed fellow G-901F owners who wanted to install CM to an XDA Forums post. It describes basically the first unofficial CyanogenMod ROM ported to the G-901F device by the Sayanogenmod project.

I decided to give it a shot and install CM 12.1. The following steps explain how.

1. Be aware that you most likely void the warranty of your Samsung device. As with all other tutorials, you are responsible for your own actions etc bla bla. Let me just add that I created a full backup before I installed CM, so I would be able to switch back to the original/stock Samsung ROM (Touchwiz).

2. Install a custom recovery first. See the post Samsung Galaxy S5 (G901F): Pain to install custom recovery or Cyanogenmod and simply follow the steps there.

3. Download the CM 12.1 zip file from http://fsrv1.sayanogen.com/KCCAT6/NIGHTLY/CM12.1/. I downloaded the newest nightly version available at the time (20150705).

4. Download the GApps (Google Apps) zip file for Android 5.1 from https://github.com/cgapps/vendor_google/find/builds. The kccat6 device runs on an ARM processor (not arm64!), so pick the ARM build.

5. Transfer both zip files to your phone. I saved them directly in the "SD Card" folder, not in a subfolder.
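If you have adb installed and USB debugging enabled on the phone, the transfer can alternatively be done from the command line. This is only a sketch; the file names below are placeholders for the zip files downloaded in steps 3 and 4, and it requires the phone to be attached via USB:

```shell
# Push both zip files to the internal "SD Card" folder of the phone
# (file names are placeholders for the actual downloads)
adb push cm-12.1-nightly-kccat6.zip /sdcard/
adb push gapps-5.1-arm.zip /sdcard/

# Verify that both files arrived on the phone
adb shell ls -l /sdcard/
```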

6. Power off the Galaxy S5.

7. Boot the phone into Recovery Mode by pressing the following button combination: Volume Up + Home + Power. Keep them pressed until you see small blue text appear at the top of the screen.

8. Backup time! The CWM recovery allows you to create a full backup. This can be a lifesaver if the installation of CM fails.
In the recovery menu, navigate to "backup and restore", then select "backup to /sdcard". Confirm with "Yes - Backup". This will create a zip file of the current recovery image and the complete Android system in /sdcard/clockworkmod/backup/.

9. Back in the recovery main menu, select "wipe data/factory reset" and confirm with "Yes - Wipe all user data". Now do the same with the cache partition by selecting "wipe cache partition". To be safe, you should also wipe the Dalvik cache, which you can find in the "advanced" submenu.

[Screenshots: Install CyanogenMod 12.1 on Samsung Galaxy S5 G-901F]

10. Now it's finally time to install CyanogenMod. Select "install zip" from the recovery menu, select "choose zip from /sdcard" and select the cm-12.1-XXXX zip file you downloaded in step 3. Confirm the installation with "Yes - Install cm-12.1-XXXX".

[Screenshots: Install CyanogenMod 12.1 on Samsung Galaxy S5 G-901F]

11. Do exactly the same installation steps, but this time for the GApps zip file.

[Screenshots: Install CyanogenMod 12.1 on Samsung Galaxy S5 G-901F]

12. After you've successfully installed both zip files, you can reboot the device ("reboot system now" option in the recovery main menu).

Voilà, CM 12.1 is now booting!

[Screenshots: Install CyanogenMod 12.1 on Samsung Galaxy S5 G-901F]

Congratulations!

Update July 24th 2015: Now that I have been running CM 12.1 for a couple of days, I unfortunately see a huge battery drain from the Android system. It only takes a couple of hours (without using the phone at all) until the battery is empty.

[Screenshot: CM 12.1 on G-901F: battery drain]

I will try it with CM 12 (not 12.1), too. Hopefully the battery lasts longer there. In the worst case I will have to go back to the stock ROM.

 

