LL      IIIII NN   NN KK  KK EEEEEEE RRRRRR  RRRRRR   OOOOO  RRRRRR 
LL       III  NNN  NN KK KK  EE      RR   RR RR   RR OO   OO RR   RR
LL       III  NN N NN KKKK   EEEEE   RRRRRR  RRRRRR  OO   OO RRRRRR 
LL       III  NN  NNN KK KK  EE      RR  RR  RR  RR  OO   OO RR  RR 
LLLLLLL IIIII NN   NN KK  KK EEEEEEE RR   RR RR   RR  OOOOO  RR   RR
                                                           ramblings
____________________________________________________________________

This might be useful for any sysadmins doomed to the terrible fate of administering plesk servers.
I wrote a little script you can stick in cron to email reports of resource overuse in plesk.
It can also run with the -i (interactive) option for console output without emailing.

The script: http://linkerror.com/programs/check_overuse.pl.txt
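
For example, a crontab entry for a daily morning report could look like this (the install path is an assumption; adjust it to wherever you keep the script):

0 6 * * * /usr/local/bin/check_overuse.pl

Running it once with -i from a shell is an easy way to sanity-check the output before trusting the emailed reports.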


 
____________________________________________________________________

Being a sysadmin, you end up running ssh to multiple servers at the same time, all the time. Being a paranoid sysadmin, you also have a different (long) password for every one of these servers.

Unless you want to spend more time entering passwords than doing actual work, you probably have some kind of master-password system set up.

Most people will use an ssh key uploaded to the servers to accomplish this (hopefully one that is password protected).

However, there are some situations where this is not preferred: for example, when an account is shared by multiple people, when you simply cannot leave ssh public keys lingering around, or when you don’t want to have to re-upload the key every time the home directory gets wiped…

In those cases, it sure would be nice to have a password manager, protected with a master password, remember the passwords you enter for ssh.

This is possible with KDE Wallet and a small expect wrapper script around ssh.

I don’t personally use KDE, but I do use some of the utilities it ships with from time to time, kwallet being one of them. KDE Wallet uses D-Bus for IPC, and the qdbus utility lets you interact with D-Bus applications from the command line (and from shell scripts), so that’s what this script makes use of. The KDE Wallet password management system consists of a system daemon (kwalletd) and a front-end GUI application, kwalletmanager, for viewing the password database, creating folders, and so on. You don’t have to have kwalletmanager running for this to work; the script will automatically start kwalletd if it’s not running.

You can use kwalletmanager to create a separate folder to store your ssh passwords (e.g., a folder called “ssh”) and specify that folder at the top of the script, where some other constants can also be adjusted, such as the locations of the needed binaries…

If a password is not found in kwallet, the script will prompt for it and store it. (If you entered the wrong password, you’ll have to remove it using kwalletmanager.)

The script is implemented using ‘expect’ (available from http://expect.nist.gov/), which uses Tcl syntax.

#!/usr/bin/expect -f

# Entry point -----------------------------------------------------------------

# Constants
set kwalletd "/usr/bin/kwalletd"
set qdbus "/usr/bin/qdbus"
set kdialog "/usr/bin/kdialog"
set appid "ssh"
set passwdfolder "ssh"

# Get commandline args.

set user [lindex $argv 0]
set host [lindex $argv 1]
set port [lindex $argv 2]

# Check arg sanity
if { $user == "" || $host == "" } {
  puts "Usage: $argv0 user host \[port\]\n"
  exit 1
}

# Use a sane default port if not specified by the user.
if { $port == "" } {
  set port "22"
}

# Run the kde wallet daemon if it's not already running.
# (If kwalletd is not on the bus at all, the qdbus call itself errors
# out, so treat that case the same as "not running".)
if { [catch {
  exec $qdbus org.kde.kwalletd /modules/kwalletd org.kde.KWallet.isEnabled
} kwalletrunning] || $kwalletrunning != "true" } {
  puts "kwalletd is not running, starting it...\n"
  exec $kwalletd &
  sleep 2
} else {
  puts "Found kwalletd running.\n"
}

# Get wallet id 
set walletid [
  exec "$qdbus" "org.kde.kwalletd" "/modules/kwalletd" "org.kde.KWallet.open" "kdewallet" "0" "$appid"
]

# Get password from kde wallet.
set passw [
  exec "$qdbus" "org.kde.kwalletd" "/modules/kwalletd" "org.kde.KWallet.readPassword" "$walletid" "$passwdfolder" "$user@$host" "$appid"
]

# If no password was found, ask for one.
if { $passw == "" } {
  set passw [
    exec "$kdialog" "--title" "ssh" "--password" "Please enter the ssh password for $user@$host"
  ]
  if { $passw == "" } {
    puts "You need to enter a password.\n"
    exit 1
  }
  # Now save the newly entered password into kde wallet
  exec "$qdbus" "org.kde.kwalletd" "/modules/kwalletd" "org.kde.KWallet.writePassword" "$walletid" "$passwdfolder" "$user@$host" "$passw" "$appid"
}

# Run ssh.
if { [catch {
  spawn ssh -p $port $user@$host
} reason] } {
  puts "Failed to spawn SSH: $reason\n"
  exit 1
}

# Wait for password prompt and send the password.
# Add key to known hosts if asked.
# Resume after successful login.
expect {
  -re ".*assword:" {
    exp_send "$passw\r"
    exp_continue
  }
  -re ".*\\(yes/no\\)\\?" {
    exp_send "yes\r"
    exp_continue
  }
  -re ".*Warning: Permanently .*known hosts" {
    exp_continue
  }
  -re ".*Last login" {}
}

# Send a blank line
send -- "\r"

# Now finally let the user interact with ssh.
interact
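
To use the wrapper, save it under some convenient name, make it executable, and pass it the same arguments the usage message describes (the path and host below are just examples):

chmod +x ~/bin/sshw
~/bin/sshw someuser shell.example.com 2222

The first login for a given user@host prompts for the password via kdialog and stores it in the wallet; every login after that fetches it straight from there.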

 
____________________________________________________________________

When you’re running any type of shared hosting server, with hundreds of clients that can run php scripts, send emails, and so on, how do you make sure you’re not setting yourself up to be one big spam haven? (The true answer is: you don’t, since shared hosting is one big mess. You’re screwed.) A compromised client script could be sending out spam without using your MTA, so it would never show up in your logs or mail queue.

For this reason I wrote a little perl script which sniffs all outgoing SMTP traffic and dumps it to a file. You could then set up a cron job which scans the file for keywords commonly used by spammers (viagra/v1agra/Vi4Gr4/etc…) and alerts you when something is found; or you could have it extract the emails and run them through SpamAssassin.

This way, even if the outgoing traffic is sent by some script using sockets to connect directly to port 25 of some external mail server, bypassing your MTA, you will still know about it.

Just change the settings at the top of the script to reflect the IP address(es) you’re using and the network interface open to the internet.

Download/View it here
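
As a starting point for the cron job mentioned above, the keyword scan can be as small as this sketch (the dump file path, keyword list, and alert recipient are all assumptions; adjust them to your setup):

#!/usr/bin/perl
# Scan the sniffer's dump file for spammy keywords and mail any hits
# to the admin. Path, keywords and recipient below are assumptions.
use strict;
use warnings;

my $dump    = '/var/log/smtp-dump.log';
my @badness = qw(viagra v1agra vi4gr4);
my $pattern = join '|', @badness;

open my $fh, '<', $dump or die "Cannot open $dump: $!";
my @hits = grep { /$pattern/i } <$fh>;
close $fh;

if (@hits) {
  # Pipe the matching lines to the local mail(1) command.
  open my $mail, '|-', 'mail -s "Possible outgoing spam" root'
    or die "Cannot run mail: $!";
  print $mail @hits;
  close $mail;
}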


 
____________________________________________________________________

For a while now, trojans, and the botnets they work for, have employed several techniques for stealing FTP credentials: sniffing unencrypted FTP traffic, grabbing credentials from the saved password files of popular FTP clients, or brute-forcing weak passwords on the server. As a server administrator, you want to detect a compromised FTP account as early as humanly possible.

I have witnessed quite a few of these compromised accounts, and every time there seem to be logins from many different countries, presumably by botnet drones dropping all sorts of malware or who knows what else.

This actually makes it fairly easy to write a little script that detects whether an account has been compromised, simply by looking at how many different countries it has been accessed from.

Granted, some people may travel, but most people will not travel to more than 10 countries in a short amount of time.

Thus I have written a perl script that can be used as a nagios sensor. It grabs the `last` output, does a geoip lookup (using the geoiplookup utility) for each IP address, counts the number of different countries, and, depending on the warning / critical flags, returns the appropriate exit value.
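
The core of that logic is small enough to sketch here (an illustration, not the actual sensor; it assumes `last` prints numeric IP addresses in its third column, that geoiplookup is in the PATH, and the thresholds are made up):

#!/usr/bin/perl
# Count distinct login countries per user from `last` output.
# The -i flag (numeric IPs) and the thresholds are assumptions.
use strict;
use warnings;

my ($warn, $crit) = (5, 10);
my %countries;   # user => { country name => 1 }

for my $line (`last -i`) {
  my ($user, $ip) = (split ' ', $line)[0, 2];
  next unless defined $ip && $ip =~ /^\d+\.\d+\.\d+\.\d+$/;
  # Typical output: "GeoIP Country Edition: US, United States"
  my $geo = `geoiplookup $ip`;
  $countries{$user}{$1} = 1 if $geo =~ /: \w\w, (.+)$/m;
}

my $rc = 0;   # nagios convention: 0=OK, 1=WARNING, 2=CRITICAL
for my $user (sort keys %countries) {
  my @c = sort keys %{ $countries{$user} };
  next if @c < $warn;
  my $level = @c >= $crit ? 'CRITICAL' : 'WARNING';
  print "User '$user' has logins from ", scalar @c,
        " countries: @c - $level\n";
  $rc = 2 if $level eq 'CRITICAL';
  $rc = 1 if $level eq 'WARNING' && $rc < 2;
}
exit $rc;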

Example output:

# ./check_login

User 'weakling' has logins from 33 countries: Egypt Bolivia Taiwan
Australia Sweden Switzerland Pakistan Dominican Canada China Peru
Indonesia Vietnam Honduras Portugal Trinidad Grenada Turkey Serbia
Korea, Mexico United Colombia Brazil Bahrain Japan France Mali South
Poland Slovenia India - CRITICAL

Grab it here.


 
____________________________________________________________________

When migrating complex web applications between versions from one server to another, the database isn’t always 100% compatible.

Usually what happens is that table fields have been added, causing the import of a mysqldump to fail because the number of columns in the .sql file doesn’t match the number of columns in the target database:

ERROR 1136 (21S01) at line 18: Column count doesn't match value count at row 1

This is because mysqldump by default uses the INSERT INTO table VALUES (...); syntax, which depends on the column count and order matching exactly.

There is an alternative mysql syntax which will let you import data even when tables have new columns (at least, as long as those columns are allowed to contain NULL or have default values).

Recently, after a long nightmarish experience migrating between versions of a certain control panel (*cough* plesk *cough*) on two different servers, I used a little script which automatically converts the VALUES insert syntax to the SET field=value syntax.

It’s a last resort type of thing. In our case, the migration tools that shipped with the web application failed miserably in countless ways, and this little script I threw together actually did the trick and saved us.

It’s a perl script, of course.

#!/usr/bin/perl
use strict;
use DBI;

# Settings --------------------------------------------------------------------

my $src_db_user = 'someuser';
my $src_db_pass = 'supersecret';
my $src_db_name = 'somedatabase';
my $src_db_host = 'localhost';
my $src_db_port = 3306;
my @tables_to_export = qw(hosting clients Cards Limits Permissions Templates accounts
data_bases db_users dns_recs dns_recs_t dns_zone dom_level_usrs domainaliases
domains forwarding hosting itmpl mail mail_aliases mail_redir
mail_resp pd_users protected_dirs resp_forward resp_freq spamfilter
spamfilter_preferences subdomains sys_users web_users);

# Globals ---------------------------------------------------------------------

my $src_db;

# Functions -------------------------------------------------------------------

# Connects to the source database.
sub connect_src_db()
{
  $src_db = DBI->connect(
    "DBI:mysql:database=$src_db_name;host=$src_db_host;port=$src_db_port",
    $src_db_user, $src_db_pass)
    or die "Could not connect to database: $DBI::errstr";
}

# Disconnects from all databases in use.
sub disconnect_dbs()
{
  $src_db->disconnect();
}

# Gets column names from a table 
# Returns each column name as an element in a returned array.
sub get_columns_from_table($)
{
  my ($table) = @_;
  my @columns;
  my $qh = $src_db->prepare("SHOW COLUMNS FROM `$table`");
  $qh->execute();
  while (my $ref = $qh->fetchrow_arrayref())
  {
    # The first field of each SHOW COLUMNS row is the column name.
    push @columns, $ref->[0];
  }
  $qh->finish();
  return @columns;
}

# Returns a string of SQL statements which effectively
# export all data in a table.
sub export_table($)
{
  my ($table) = @_;
  my @columns = get_columns_from_table($table);
  my $qh = $src_db->prepare("SELECT * FROM `$table`");
  my $output='';
  $qh->execute();
  # TODO: Line below, deleting original records should probably be an option.
  $output .= "DELETE FROM `$table`;\n";
  while (my $ref = $qh->fetchrow_arrayref())
  {
    my $colnum = 0;
    my @col_values = @{$ref};
    $output .= "INSERT INTO `$table` SET ";
    foreach my $col_value(@col_values)
    {
      my $col_name = $columns[$colnum];
      $col_value = $src_db->quote($col_value);
      $output .= "`$col_name` = $col_value,";
      $colnum += 1;
    }
    $output =~ s/,$/;/;  # Turn the trailing comma into the statement's semicolon.
    $output .= "\n";
  }
  $qh->finish();
  return $output;
}

# Entry point -----------------------------------------------------------------

my $export_sql = '';
connect_src_db();
foreach my $table(@tables_to_export)
{
  $export_sql .= export_table($table);
}
disconnect_dbs();
print $export_sql;
exit 0;

usage: ./thescript > mydump.sql
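
Then, on the target server, import the result with the stock mysql client (database name and credentials here are placeholders):

  mysql -u someuser -p somedatabase < mydump.sql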

You obviously have to edit the list of tables to export and the db authentication info on top of the script.

I left in the list of tables there, as the provided list actually works for migrating most of plesk 8.x to 9.x – it may help someone.

If you’re using this with plesk, you may have to make a few more small changes to the database after migrating.

You will probably have problems with the IP pool if the IP addresses are different on the new server. This actually causes an error when you click the hosting config button in the control panel for a domain. You can reset the IP pool for each client with this SQL query on the target server:

update clients set pool_id='1';

Depending on how complex your plesk setup is, and which modules you have installed, you may or may not run into other issues, but if you’re using plesk you should be used to that by now (and you’re probably slowly losing all your hair).

Like I said, it’s a last resort type of thing, but it was actually faster than trying to resolve all the issues with plesk’s migration manager and/or backup system.

Of course, you can use the script for things other than plesk. All it does is convert:

  INSERT INTO `foo` VALUES('foo','bar');

to

  INSERT INTO `foo` SET bleh='foo', blah='bar';

Another advantage of the SET syntax is that it preserves the column mappings, so even when columns switch places (i.e., bleh becomes the second column instead of the first), the import will still work correctly.
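
For instance, if the target table happens to define those columns in the opposite order (a hypothetical table, just for illustration):

  CREATE TABLE `foo` (
    `blah` VARCHAR(32),  -- was the second column on the source server
    `bleh` VARCHAR(32)   -- was the first
  );

the SET-based insert still puts 'foo' into bleh and 'bar' into blah, whereas a VALUES-based insert would silently swap them.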


 
____________________________________________________________________

One thing that seems to happen a lot with php on FreeBSD is that it tends to segfault when some wrong combination of modules is installed, or when you have a module left over from a previous version of php that doesn’t play nice with your newly upgraded version…

You’ll usually notice things like this in your daily security mails:

+pid 44994 (httpd), uid 80: exited on signal 11
+pid 44992 (httpd), uid 80: exited on signal 11
+pid 50351 (httpd), uid 80: exited on signal 11
+pid 51432 (httpd), uid 80: exited on signal 11
+pid 89423 (httpd), uid 80: exited on signal 11
... etc...

In the case of modules mismatched with the current php version, I once came up with a super overkill script that completely wipes anything php from the system and reinstalls it. That works fine in most cases, but it won’t save you if you select two modules that don’t play nice together.

So I wrote this little perl script that detects a malfunctioning module by enabling the modules one by one in extensions.ini and testing each. You could just comment them all out and uncomment them one by one, but that is a pain, especially if you have to do it on multiple servers.

Sample output:

# php_module_detective.pl

Note: A backup of your extensions.ini was created as /usr/local/etc/php/extensions.ini.bak ...

Testing extension=bcmath.so ... 
Testing extension=bz2.so ... 
Testing extension=calendar.so ... 
Testing extension=ctype.so ... 
Testing extension=curl.so ... 
Testing extension=dba.so ... 
Testing extension=pcre.so ... 
Testing extension=simplexml.so ... 
Testing extension=spl.so ... 
Testing extension=dom.so ... 
Testing extension=exif.so ... 
Testing extension=filter.so ... 
Testing extension=gd.so ... 
Testing extension=gettext.so ... 
Testing extension=gmp.so ... 
Testing extension=hash.so ... 
Testing extension=iconv.so ... 
Testing extension=json.so ... 
Testing extension=mbstring.so ... 
Testing extension=mcrypt.so ... 
Testing extension=mhash.so ... 
Segmentation fault (core dumped)


!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
extension=mhash.so is broken.
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

Backup restored.

Died at /usr/local/bin/php_module_detective.pl line 69.

Here’s the script:

#!/usr/local/bin/perl

use strict;
use File::Copy;

# Settings --------------------------------------------------------------------

# Where to find extensions.ini ...
my $module_ini_file = '/usr/local/etc/php/extensions.ini';

# Globals ---------------------------------------------------------------------

# Holds a list of all modules found in the ini file.
my @modules;

# Functions -------------------------------------------------------------------

# Creates a backup of the ini file.
sub make_backup()
{
  copy($module_ini_file,"$module_ini_file.bak") or 
    die ("Could not create a backup.");
  print "Note: A backup of your extensions.ini was created as $module_ini_file.bak ...\n\n"
}

# Restores backup.
sub restore_backup()
{
  copy("$module_ini_file.bak","$module_ini_file") or
    die ("Failed to restore backup.");
  print "Backup restored.\n\n"
}


# Reads the ini file and fills the modules array.
sub read_ini()
{
  open(FH,$module_ini_file) or 
    die "Could not open $module_ini_file" ;
  while (<FH>)
  {
    chomp($_);
    push(@modules,$_);
  }
  close(FH);
}

# Tries the modules one by one
sub test_modules()
{
  my $module_list = "";
  # Any trivial php invocation will do here; just loading the
  # extensions is what triggers a crash if a module is broken.
  my @args = ("php -r '\$a=\$a;'");
  foreach(@modules)
  {
    my $current_module = $_;
    print "Testing $current_module ... \n";
    $module_list .= "$current_module\n";
    open(FILEOUT,">$module_ini_file") or
      die "Could not open $module_ini_file for writing.";
    print FILEOUT "$module_list\n";
    close(FILEOUT);    
    my $retval = system(@args);
    if ($retval != 0)
    {
      print "\n\n!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!\n";
      print "$current_module is broken.\n";
      print "!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!\n\n";
      restore_backup();
      die('');
    }   
  }
}

# Entry point -----------------------------------------------------------------

make_backup();
read_ini();
test_modules();
restore_backup();


 
____________________________________________________________________