LL      IIIII NN   NN KK  KK EEEEEEE RRRRRR  RRRRRR   OOOOO  RRRRRR 
LL       III  NNN  NN KK KK  EE      RR   RR RR   RR OO   OO RR   RR
LL       III  NN N NN KKKK   EEEEE   RRRRRR  RRRRRR  OO   OO RRRRRR 
LL       III  NN  NNN KK KK  EE      RR  RR  RR  RR  OO   OO RR  RR 
LLLLLLL IIIII NN   NN KK  KK EEEEEEE RR   RR RR   RR  OOOOO  RR   RR
                                                           ramblings
____________________________________________________________________

This might be useful for any sysadmins doomed to the terrible fate of administering plesk servers.
I wrote a little script you can stick in cron to email reports of resource overuse in plesk.
It can also run with the -i (interactive) option for console output without emailing.

The script: http://linkerror.com/programs/check_overuse.pl.txt
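For example, assuming you save it as /usr/local/bin/check_overuse.pl (the path is just an example), a crontab line for a daily 6 AM report could look like this:

  0 6 * * * /usr/local/bin/check_overuse.pl

Since the script e-mails the report itself, there is nothing to redirect.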


 
____________________________________________________________________

When you’re running any type of shared hosting server, with hundreds of clients that can run php scripts, send emails, and so on, how do you make sure you’re not setting yourself up to be one big spam haven? (The true answer is: you don’t, since shared hosting is one big mess. You’re screwed.) A compromised client script could be sending out spam without using your MTA, so it would never show up in your logs or mail queue.

For this reason I wrote a little perl script which sniffs all outgoing SMTP traffic and dumps it to a file. You could then set up a cron job which scans the file for known keywords used by spammers (viagra/v1agra/Vi4Gr4/etc.) and alerts you when something is found; or you could make it extract the emails and run them through SpamAssassin.
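If you want a starting point for that scanning job, here is a minimal perl sketch; the dump location (/var/log/smtp-sniff.log) and the keyword list are assumptions you would adapt to your own setup:

  #!/usr/bin/perl
  # Naive keyword scan over the sniffer's dump file.
  # Path and keywords below are assumptions; adjust to your setup.
  use strict;
  use warnings;

  my $dump     = '/var/log/smtp-sniff.log';  # hypothetical dump location
  my @keywords = qw(viagra v1agra vi4gr4);   # extend as needed

  open my $fh, '<', $dump or die "Cannot open $dump: $!";
  while (my $line = <$fh>)
  {
    foreach my $kw (@keywords)
    {
      if ($line =~ /\Q$kw\E/i)
      {
        # Print to stdout; cron mails the output to the job's owner.
        print "Possible spam keyword '$kw': $line";
        last;
      }
    }
  }
  close $fh;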

This way, even if the outgoing traffic is sent by some script using sockets to connect to port 25 of some external mail server, bypassing your MTA, you will still know about it.

Just change the settings at the top of the script to reflect the IP address(es) you’re using and the network interface open to the internet.

Download/View it here


 
____________________________________________________________________

For a while now, trojans, and the botnets they work for, have employed several techniques for stealing FTP credentials: sniffing unencrypted FTP traffic, grabbing saved passwords from popular FTP clients, or brute-forcing weak passwords on the server. As a server administrator, a compromised user FTP account is something you want to detect as early as humanly possible.

I have witnessed quite a few of these compromised accounts, and every time it seems that there are many logins into the account from many different countries, presumably by botnet drones dropping all sorts of malware or who knows what else.

This actually makes it fairly easy to write a little script to detect whether an account has been compromised, simply by looking at how many different countries it has been accessed from.

Granted, some people may travel, but most people will not travel to more than 10 countries in a short amount of time.

Thus I have written a perl script that can be used as a nagios sensor. It grabs the `last` output, does a GeoIP lookup (using the geoiplookup utility) for each IP address, counts the number of distinct countries, and returns the appropriate exit value depending on the warning/critical flags.
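The gist is simple enough to sketch. Below is a minimal perl version of the idea (not the actual sensor: the username, the hardcoded thresholds, and the parsing of `last` and geoiplookup output are assumptions):

  #!/usr/bin/perl
  # Sketch of the country-count check. Assumes `last` shows numeric
  # addresses and that geoiplookup's GeoIP Country database is installed.
  use strict;
  use warnings;

  my ($user, $warn, $crit) = ('weakling', 5, 10);   # hypothetical values
  my %countries;

  foreach my $line (`last $user`)
  {
    # Grab the IPv4 address from each login record.
    next unless $line =~ /(\d{1,3}(?:\.\d{1,3}){3})/;
    my $geo = `geoiplookup $1`;
    # geoiplookup prints e.g. "GeoIP Country Edition: EG, Egypt";
    # key on the two-letter country code.
    $countries{$1} = 1 if $geo =~ /GeoIP Country Edition: ([A-Z]{2}),/;
  }

  my $count = scalar keys %countries;
  if    ($count >= $crit) { print "CRITICAL - $count countries\n"; exit 2; }
  elsif ($count >= $warn) { print "WARNING - $count countries\n";  exit 1; }
  print "OK - $count countries\n";
  exit 0;

The actual script takes its thresholds from the warning/critical flags instead of hardcoding them.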

Example output:

# ./check_login

User 'weakling' has logins from 33 countries: Egypt Bolivia Taiwan Australia Sweden Switzerland Pakistan Dominican Canada China Peru Indonesia Vietnam Honduras Portugal Trinidad Grenada Turkey Serbia Korea, Mexico United Colombia Brazil Bahrain Japan France Mali South Poland Slovenia India - CRITICAL

Grab it here.


 
____________________________________________________________________

When migrating complex web applications between versions and from one server to another, the database isn’t always 100% compatible.

Usually what happens is that table fields have been added, causing the import of a mysqldump to fail because the number of columns in the .sql file doesn’t match the number of columns in the database:

ERROR 1136 (21S01) at line 18: Column count doesn't match value count at row 1

This is because mysqldump by default uses the INSERT INTO `table` VALUES (...); syntax.

There is an alternative syntax for mysql which will let you import data even when tables have new columns (at least as long as these columns are allowed to be NULL or have default values).

Recently, after a long nightmarish experience migrating a certain control panel (*cough* plesk *cough*) between versions across two servers, I used a little script which automatically converts the VALUES insert syntax to SET field=value syntax.

It’s a last resort type of thing. In our case, the migration tools that shipped with the web application failed miserably in countless ways, and this little script I threw together actually did the trick and saved us.

It’s a perl script, of course.

#!/usr/bin/perl
use strict;
use warnings;
use DBI;

# Settings --------------------------------------------------------------------

my $src_db_user = 'someuser';
my $src_db_pass = 'supersecret';
my $src_db_name = 'somedatabase';
my $src_db_host = 'localhost';
my $src_db_port = 3306;
my @tables_to_export = qw(hosting clients Cards Limits Permissions Templates accounts
data_bases db_users dns_recs dns_recs_t dns_zone dom_level_usrs domainaliases
domains forwarding hosting itmpl mail mail_aliases mail_redir
mail_resp pd_users protected_dirs resp_forward resp_freq spamfilter
spamfilter_preferences subdomains sys_users web_users);

# Globals ---------------------------------------------------------------------

my $src_db;

# Functions -------------------------------------------------------------------

# Connects to the source database.
sub connect_src_db()
{
  $src_db = DBI->connect("DBI:mysql:database=$src_db_name;host=$src_db_host;port=$src_db_port",
                         $src_db_user, $src_db_pass)
    or die "Could not connect to database: $DBI::errstr";
}

# Disconnects from all databases in use.
sub disconnect_dbs()
{
  $src_db->disconnect();
}

# Gets column names from a table 
# Returns each column name as an element in a returned array.
sub get_columns_from_table($)
{
  my ($table) = @_;
  my @columns;
  my $qh = $src_db->prepare("SHOW COLUMNS FROM `$table`");
  $qh->execute();
  while (my $ref = $qh->fetchrow_arrayref())
  {  
    push @columns, $ref->[0];
  }
  $qh->finish();
  return @columns;
}

# Returns a string of SQL statements which effectively
# export all data in a table.
sub export_table($)
{
  my ($table) = @_;
  my @columns = get_columns_from_table($table);
  my $qh = $src_db->prepare("SELECT * FROM `$table`");
  my $output='';
  $qh->execute();
  # TODO: Line below, deleting original records should probably be an option.
  $output .= "DELETE FROM `$table`;\n";
  while (my $ref = $qh->fetchrow_arrayref())
  {
    my $colnum = 0;
    my @col_values = @{$ref};
    $output .= "INSERT INTO `$table` SET ";
    foreach my $col_value(@col_values)
    {
      my $col_name = $columns[$colnum];
      $col_value = $src_db->quote($col_value);
      $output .= "`$col_name` = $col_value,";
      $colnum += 1;
    }
    $output =~ s/,$/;/;  # turn the trailing comma into a terminating semicolon
    $output .= "\n";
  }
  $qh->finish();
  return $output;
}

# Entry point -----------------------------------------------------------------

my $export_sql = '';
connect_src_db();
foreach my $table(@tables_to_export)
{
  $export_sql .= export_table($table);
}
disconnect_dbs();
print $export_sql;
exit 0;

usage: ./thescript > mydump.sql
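Then, on the target server, import the dump as usual; the user and database name here are placeholders:

  mysql -u someuser -p somedatabase < mydump.sql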

You obviously have to edit the list of tables to export and the database authentication info at the top of the script.

I left the list of tables in, as the provided list actually works for migrating most of plesk 8.x to 9.x – it may help someone.

If you’re using this with plesk, you may have to make a few more small changes to the database after migrating.

You will probably have problems with the IP pool if the IP addresses are different on the new server. This actually causes an error when you click the hosting config button in the control panel for a domain. You can reset the IP pool for each client with this SQL query on the target server:

update clients set pool_id='1';

Depending on how complex your plesk setup is and which modules you have installed, you may or may not have other issues, but if you’re using plesk you should be used to that by now (and you’re probably slowly losing all your hair).

Like I said, it’s a last resort type of thing, but it was actually faster than trying to resolve all the issues with plesk’s migration manager and/or backup system.

Of course, you can use the script with anything else. All it does is convert:

  INSERT INTO `foo` VALUES('foo','bar');

to

  INSERT INTO `foo` SET bleh='foo', blah='bar';

Another advantage of the SET syntax is that it preserves the column mappings, so even when columns switch places (i.e., bleh becomes the second column instead of the first) the import will still work correctly.


 
____________________________________________________________________

I figured I would share with you a setup I am using on all my BSD servers to monitor changes to the filesystem.

The idea is to be notified by email at a certain interval (e.g. once a day) with a list of all files that have changed since the last time the check ran.

This allows you to be notified when files change without your knowledge, for example in the event of a cracker breaking into the server, or if you accidentally recursively chowned / and managed to interrupt the command; mtree lets you see how many files were affected, and fix them.
mtree also reports HOW the files were changed. For example, in the chown scenario it would mention the expected uid/gid and what they changed to, which would allow for an automated recovery from such a disaster.

In addition to the e-mail notifications it will also keep a log file (by default in /var/log/mtree.log).

The utility we’ll use for this on FreeBSD is mtree (on GNU/Linux you’d have to use tripwire or auditd).
I wrote a perl script which uses mtree to accomplish what I described above: download it.
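Under the hood this boils down to two mtree invocations, roughly like the sketch below (the -K checksum keyword choice is an assumption; the script adds logging and mail on top):

  # build the checksum database
  /usr/sbin/mtree -c -K sha256digest -p / -X /usr/mtree/fs.exclude > /usr/mtree/fs.mtree

  # compare the live filesystem against it and print any changes
  /usr/sbin/mtree -f /usr/mtree/fs.mtree -p / -X /usr/mtree/fs.exclude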

So basically, to set it up, you can do the following:

mkdir /usr/mtree
cd /usr/mtree
touch fs.mtree fs.exclude
wget http://linkerror.com/programs/automtree
chmod +x automtree

Now, if you run ./automtree -h you’ll see a list of valid options with some documentation:

  Usage: ./automtree [OPTION] ...
  Show or E-mail out a list of changes to the file system.

  mtree operation options:

    -u,  --update        Updates the file checksum database after 
                         showing/mailing changes.
    -uo, --update-only   Only update the file checksum database.
    -p,  --path          Top level folder to monitor (default: /)
    -q,  --quiet         Do not output scan results to stdout or any
                         other output.

  Path configuration options:

    -l,  --log           Logfile location 
                         (default: /var/log/mtree.log)
         --mtree         Set the location of the mtree executable. 
                         (default is /usr/sbin/mtree)
         --checksum-file Set the location of the file containing the 
                         mtree file checksums. 
                         (default: /usr/mtree/fs.mtree)
         --exclude-file  Set the location of the file containing the 
                         list of files and folders to exclude from the 
                         mtree scan. (default is /usr/mtree/fs.exclude)

  E-mail options:

    -e,  --email         Adds specified e-mail address as destination.
         --sendmail      Set the location of the sendmail executable. 
                         (default: /usr/sbin/sendmail)
         --reply-to      Set the e-mail reply-to address.
         --subject       Sets the e-mail subject.

  Misc options:

    -h,  --help          Display this help text.
 

  Example usage:

    ./automtree -uo
    ./automtree -u -q -e foo@example.com -e bar@example.com
    ./automtree -p /var/www --mtree /usr/local/sbin/mtree

As you can see, by default the script will index the entire filesystem, since the default for the -p option is /. If you do this you’ll want to ignore some folders, so edit the fs.exclude file and put at least this into it:


./dev
./proc
./var
./tmp
./usr/mtree
./usr/share/man
./usr/share/openssl/man
./usr/local/man
./usr/local/lib/perl5/5.8.8/man
./usr/local/lib/perl5/5.8.8/perl/man

Note that you have to prefix folders with ./
So now, in order to automatically scan and receive notifications, the command which will go into crontab is:


./automtree -u -q -e foo@example.com

(It is possible to add multiple -e options for multiple e-mail destinations.)

The command above will not output to stdout (-q), will email filesystem changes to foo@example.com (-e foo@example.com), and will automatically update the checksum file with the newly discovered changes (-u).

An example crontab line, to check every 3 hours (type crontab -e to edit your crontab):


0 */3 * * * /usr/mtree/automtree -u -q -e youremail@example.com > /dev/null 2>&1

The script won’t send an e-mail if there are no changes to report.


 
____________________________________________________________________