LL      IIIII NN   NN KK  KK EEEEEEE RRRRRR  RRRRRR   OOOOO  RRRRRR 
LL       III  NNN  NN KK KK  EE      RR   RR RR   RR OO   OO RR   RR
LL       III  NN N NN KKKK   EEEEE   RRRRRR  RRRRRR  OO   OO RRRRRR 
LL       III  NN  NNN KK KK  EE      RR  RR  RR  RR  OO   OO RR  RR 
LLLLLLL IIIII NN   NN KK  KK EEEEEEE RR   RR RR   RR  OOOOO  RR   RR
                                                           ramblings
____________________________________________________________________

Being a sysadmin you end up running ssh to multiple servers at the same time, all the time. Being a paranoid sysadmin you also have different (long) passwords for every one of these servers.

Unless you want to spend more time entering passwords than doing actual work, you probably have some kind of master-password system set up.

Most people accomplish this with an ssh key uploaded to the servers (hopefully one that is password protected).

However, there are situations where this is not ideal: when an account is shared by multiple people, when you cannot leave ssh public keys lingering around, or when you simply don’t want to re-upload the key every time the home directory gets wiped…

In those cases it sure would be nice to have a password manager, protected with a master password, remember the passwords you enter for ssh.

This is possible with kdewallet and a small expect script wrapper around ssh.

I don’t personally use KDE, but I do use some of the utilities it ships with from time to time, KDE Wallet being one of them. KDE Wallet uses D-Bus for IPC, and the qdbus utility lets you interact with D-Bus applications from the command line (and from shell scripts), so that’s what this script makes use of. The KDE Wallet password management system consists of a daemon (kwalletd) and a front-end GUI application, kwalletmanager, used to view the password database, create folders, and so on. You don’t have to have kwalletmanager running for this to work; the script will automatically start kwalletd if it’s not running.
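
For example, assuming the same D-Bus service name and object path the script below uses, you can check from a shell whether kwalletd is up, and start it by hand if it isn’t:

# prints whether the wallet service is enabled
qdbus org.kde.kwalletd /modules/kwalletd org.kde.KWallet.isEnabled
# start the daemon in the background if needed
kwalletd &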

You can use kwalletmanager to create a separate folder to store your ssh passwords (e.g., a folder called “ssh”) and specify that folder at the top of the script, where a few other constants can be adjusted as well, such as the location of the needed binaries…

If a password is not found in kwallet, the script will prompt for it and store it. (If you entered the wrong password you’ll have to remove it using kwalletmanager.)

The script is implemented using ‘expect’ (which uses Tcl syntax); it can be obtained here: http://expect.nist.gov/

#!/usr/bin/expect -f

# Entry point -----------------------------------------------------------------

# Constants
set kwalletd "/usr/bin/kwalletd"
set qdbus "/usr/bin/qdbus"
set kdialog "/usr/bin/kdialog"
set appid "ssh"
set passwdfolder "ssh"

# Get commandline args.

set user [lindex $argv 0]
set host [lindex $argv 1]
set port [lindex $argv 2]

# Check arg sanity
if { $user == "" || $host == "" } {
  puts "Usage: user host \[port\] \n"
  exit 1
}

# Use a sane default port if not specified by the user.
if { $port == "" } {
  set port "22"
}

# Run kde wallet daemon if it's not already running.
set kwalletrunning [ 
  exec "$qdbus" "org.kde.kwalletd" "/modules/kwalletd" "org.kde.KWallet.isEnabled" 
]
if { $kwalletrunning == "false" } {
  puts "kwalletd is not running, starting it...\n"
  exec "$kwalletd&"
  sleep 2
} else {
  puts "Found kwalletd running.\n"
}

# Get wallet id 
set walletid [
  exec "$qdbus" "org.kde.kwalletd" "/modules/kwalletd" "org.kde.KWallet.open" "kdewallet" "0" "$appid"
]

# Get password from kde wallet.
set passw [
  exec "$qdbus" "org.kde.kwalletd" "/modules/kwalletd" "org.kde.KWallet.readPassword" "$walletid" "$passwdfolder" "$user@$host" "$appid"
]

# If no password was found, ask for one.
if { $passw == "" } {
  set passw [
    exec "$kdialog" "--title" "ssh" "--password" "Please enter the ssh password for $user@$host"
  ]
  if { $passw == "" } {
    puts "You need to enter a password.\n"
    exit 1
  }
  # Now save the newly entered password into kde wallet
  exec "$qdbus" "org.kde.kwalletd" "/modules/kwalletd" "org.kde.KWallet.writePassword" "$walletid" "$passwdfolder" "$user@$host" "$passw" "$appid"
}

# Run ssh.
if [
  catch {
    spawn ssh -p $port $user@$host 
  } reason
] {
  puts " Failed to spawn SSH: $reason\n"
  exit 1
}

# Wait for password prompt and send the password.
# Add key to known hosts if asked.
# Resume after successful login.
expect {
  -re ".*assword:" {
    exp_send "$passw\r"
    exp_continue
  }
  -re "\\(yes/no\\)\\?" {
    send -- "yes\r"
    exp_continue
  }
  -re "Warning: Permanently added .* known hosts" {
    exp_continue
  }
  -re ".*Last login" {}
}

# Send a blank line
send -- "\r"

# Now finally let the user interact with ssh.
interact
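
To use the wrapper, save it somewhere in your path (the file name below is just an example), make it executable, and invoke it with the same user/host/port arguments the script parses above:

chmod +x ~/bin/kwssh
~/bin/kwssh someuser server.example.com 2222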

 
____________________________________________________________________

When you’re running any type of shared hosting server, with hundreds of clients that can run php scripts, send emails, and so on, how do you make sure you’re not setting yourself up to be one big spam haven? (The honest answer is: you don’t; shared hosting is one big mess, and you’re screwed.) A compromised script belonging to a client could be sending out spam without using your MTA, so it would not show up in your logs or mail queue.

For this reason I wrote a little perl script which sniffs all outgoing SMTP traffic and dumps it to a file. You could then set up a cron job which scans the file for known spammer keywords (viagra/v1agra/Vi4Gr4/etc.) and alerts you when something is found, or you could make it extract the emails and run them through spamassassin.
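
As a rough illustration of that cron-job idea, something along these lines (the dump location and keyword list are made up, adjust them to your setup) would grep the capture file and mail you when it finds a hit:

#!/bin/sh
# Sketch only: scan the sniffer dump for spammy keywords and alert root.
DUMP=/var/log/smtp-sniff.dump
HITS=$(egrep -i 'viagra|v1agra|vi4gr4' "$DUMP")
if [ -n "$HITS" ]; then
  echo "$HITS" | mail -s "Possible outgoing spam detected" root
fi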

This way, even if the outgoing traffic is sent by some script using sockets to connect directly to port 25 of some external mail server, bypassing your MTA, you will still know about it.

Just change the settings at the top of the script to reflect the IP address(es) you’re using and the network interface open to the internet.

Download/View it here


 
____________________________________________________________________

For a while now, trojans, and the botnets they work for, have employed several techniques for stealing FTP credentials: sniffing unencrypted FTP traffic, grabbing credentials from the saved password files of popular FTP clients, or brute-forcing weak passwords on the server. As a server administrator, a compromised FTP account is something you want to detect as early as humanly possible.

I have witnessed quite a few of these compromised accounts, and every time there are logins from many different countries into the account, presumably from botnet drones dropping all sorts of malware or who knows what else.

This actually makes it fairly easy to write a little script to detect whether an account has been compromised, simply by looking at how many different countries it has been accessed from.

Granted, some people may travel, but most people will not travel to more than 10 countries in a short amount of time.

Thus I have written a perl script that can be used as a nagios sensor. It grabs the `last` output, does a geoip lookup (using the geoiplookup utility) for each IP address, counts the number of different countries, and returns the appropriate nagios status depending on the warning/critical thresholds.
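
The core idea can be approximated on the command line; a rough sketch of what the sensor does (for all users combined, assuming `last` prints the remote IP in its third column and the geoiplookup country database is installed) would be:

last | awk '{print $3}' | grep -E '^[0-9]+\.' \
  | xargs -n1 geoiplookup | sort -u | wc -l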

Example output:

# ./check_login

User 'weakling' has logins from 33 countries: Egypt Bolivia Taiwan
 Australia Sweden Switzerland Pakistan Dominican Canada China Peru
 Indonesia Vietnam Honduras Portugal Trinidad Grenada Turkey Serbia 
Korea, Mexico United Colombia Brazil Bahrain Japan France Mali South 
Poland Slovenia India - CRITICAL

Grab it here.


 
____________________________________________________________________

When migrating complex web applications between versions from one server to another, the database isn’t always 100% compatible.

Usually what happens is that table fields have been added, causing the import of a mysqldump to fail because the number of columns in the .sql file doesn’t match the number of columns in the database.

ERROR 1136 (21S01) at line 18: Column count doesn't match value count at row 1

This is because mysqldump by default uses the INSERT INTO table VALUES (...); syntax.

There is an alternative syntax for mysql which will let you import data even when tables have new columns (at least, as long as these columns are allowed to have NULL data, or have default values).

Recently, after a long nightmarish experience migrating a certain control panel (*cough* plesk *cough*) between versions on two different servers, I used a little script which automatically converts the VALUES insert syntax to the SET field=value syntax.

It’s a last-resort type of thing. In our case, the migration tools that shipped with the web application failed miserably in countless ways, and this little script I threw together actually did the trick and saved us.

It’s a perl script, of course.

#!/usr/bin/perl
use strict;
use DBI;

# Settings --------------------------------------------------------------------

my $src_db_user = 'someuser';
my $src_db_pass = 'supersecret';
my $src_db_name = 'somedatabase';
my $src_db_host = 'localhost';
my $src_db_port = 3306;
my @tables_to_export = qw(hosting clients Cards Limits Permissions Templates accounts
data_bases db_users dns_recs dns_recs_t dns_zone dom_level_usrs domainaliases
domains forwarding hosting itmpl mail mail_aliases mail_redir
mail_resp pd_users protected_dirs resp_forward resp_freq spamfilter
spamfilter_preferences subdomains sys_users web_users);

# Globals ---------------------------------------------------------------------

my $src_db;

# Functions -------------------------------------------------------------------

# Connects to the source database.
sub connect_src_db()
{
  $src_db = DBI->connect("DBI:mysql:database=$src_db_name;host=$src_db_host;port=$src_db_port",$src_db_user,$src_db_pass)
    or die "Could not connect to database: $DBI::errstr";
}

# Disconnects from all databases in use.
sub disconnect_dbs()
{
  $src_db->disconnect();
}

# Gets column names from a table 
# Returns each column name as an element in a returned array.
sub get_columns_from_table($)
{
  my ($table) = @_;
  my @columns;
  my $qh = $src_db->prepare("SHOW COLUMNS FROM `$table`");
  $qh->execute();
  while (my $ref = $qh->fetchrow_arrayref())
  {  
    my @cols = @{$ref};
    push @columns, $cols[0];
  }
  $qh->finish();
  return @columns;
}

# Returns a string of SQL statements which effectively
# export all data in a table.
sub export_table($)
{
  my ($table) = @_;
  my @columns = get_columns_from_table($table);
  my $qh = $src_db->prepare("SELECT * FROM `$table`");
  my $output='';
  $qh->execute();
  # TODO: Line below, deleting original records should probably be an option.
  $output .= "DELETE FROM `$table`;\n";
  while (my $ref = $qh->fetchrow_arrayref())
  {
    my $colnum = 0;
    my @col_values = @{$ref};
    $output .= "INSERT INTO `$table` SET ";
    foreach my $col_value(@col_values)
    {
      my $col_name = $columns[$colnum];
      $col_value = $src_db->quote($col_value);
      $output .= "`$col_name` = $col_value,";
      $colnum += 1;
    }
    $output =~ s/,$/;/g;
    $output .= "\n";
  }
  $qh->finish();
  return $output;
}

# Entry point -----------------------------------------------------------------

my $export_sql = '';
connect_src_db();
foreach my $table(@tables_to_export)
{
  $export_sql .= export_table($table);
}
disconnect_dbs();
print $export_sql;
exit 0;

usage: ./thescript > mydump.sql
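
On the target server you would then just feed the generated dump to the stock mysql client, for example:

mysql -u someuser -p somedatabase < mydump.sql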

You obviously have to edit the list of tables to export and the db authentication info on top of the script.

I left the list of tables in, as the provided list actually works for migrating most of plesk 8.x to 9.x – it may help someone.

If you’re using this with plesk, you may have to make a few more small changes to the database after migrating.

You will probably have problems with the IP pool if the IP addresses are different on the new server. This actually causes an error when you click the hosting config button in the control panel for a domain. You can reset the IP pool for each client with this SQL query on the target server:

update clients set pool_id='1';

Depending on how complex your plesk setup is, and which modules you have installed, you may or may not have other issues, but if you’re using plesk you should be used to that by now (and probably you’re slowly losing all your hair).

Like I said, it’s a last resort type of thing, but it was actually faster than trying to resolve all the issues with plesk’s migration manager and/or backup system.

Of course, you can use the script with anything else. All it does is convert:

  INSERT INTO `foo` VALUES('foo','bar');

to

  INSERT INTO `foo` SET bleh='foo', blah='bar';

Another advantage of the SET syntax is that it preserves the column mappings, so even when columns switch places (ie, bleh becomes the second column instead of the first) the import will still work correctly.


 
____________________________________________________________________

One thing that seems to happen a lot with php on FreeBSD is that it tends to segfault when some wrong combination of modules is installed, or when you have a module left over from a previous version of php that doesn’t play nice with your newly upgraded version…

You’ll usually notice things like this in your daily security mails:

+pid 44994 (httpd), uid 80: exited on signal 11
+pid 44992 (httpd), uid 80: exited on signal 11
+pid 50351 (httpd), uid 80: exited on signal 11
+pid 51432 (httpd), uid 80: exited on signal 11
+pid 89423 (httpd), uid 80: exited on signal 11
... etc...

For the case of modules mismatched with the current php version, I once came up with this super overkill script that completely wipes anything php-related from the system and reinstalls it. That works fine in most cases, but won’t save you if you select two modules that don’t play nice together.

So I wrote this little perl script that detects a malfunctioning module by enabling the modules one by one in extensions.ini and testing them. You could just comment them all out and uncomment them one by one by hand, but that is a pain, especially if you have to do it on multiple servers.
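
For reference, the manual version of what the script automates boils down to something like this (the head -5 is just an arbitrary way of enabling a subset of modules):

cd /usr/local/etc/php
cp extensions.ini extensions.ini.bak        # keep a backup, just like the script does
head -5 extensions.ini.bak > extensions.ini # enable only the first few extensions
php -r '$a=$a;' ; echo $?                   # a core dump / non-zero exit status means a bad module is enabled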

Sample output:

# php_module_detective.pl

Note: A backup of your extensions.ini was created as /usr/local/etc/php/extensions.ini.bak ...

Testing extension=bcmath.so ... 
Testing extension=bz2.so ... 
Testing extension=calendar.so ... 
Testing extension=ctype.so ... 
Testing extension=curl.so ... 
Testing extension=dba.so ... 
Testing extension=pcre.so ... 
Testing extension=simplexml.so ... 
Testing extension=spl.so ... 
Testing extension=dom.so ... 
Testing extension=exif.so ... 
Testing extension=filter.so ... 
Testing extension=gd.so ... 
Testing extension=gettext.so ... 
Testing extension=gmp.so ... 
Testing extension=hash.so ... 
Testing extension=iconv.so ... 
Testing extension=json.so ... 
Testing extension=mbstring.so ... 
Testing extension=mcrypt.so ... 
Testing extension=mhash.so ... 
Segmentation fault (core dumped)


!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
extension=mhash.so is broken.
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

Backup restored.

Died at /usr/local/bin/php_module_detective.pl line 69.

Here’s the script:

#!/usr/local/bin/perl

use strict;
use File::Copy;

# Settings --------------------------------------------------------------------

# Where to find extensions.ini ...
my $module_ini_file = '/usr/local/etc/php/extensions.ini';

# Globals ---------------------------------------------------------------------

# Holds a list of all modules found in the ini file.
my @modules;

# Functions -------------------------------------------------------------------

# Creates a backup of the ini file.
sub make_backup()
{
  copy($module_ini_file,"$module_ini_file.bak") or 
    die ("Could not create a backup.");
  print "Note: A backup of your extensions.ini was created as $module_ini_file.bak ...\n\n"
}

# Restores backup.
sub restore_backup()
{
  copy("$module_ini_file.bak","$module_ini_file") or
    die ("Failed to restore backup.");
  print "Backup restored.\n\n"
}


# Reads the ini file and fills the modules array.
sub read_ini()
{
  open(FH,$module_ini_file) or 
    die "Could not open $module_ini_file" ;
  while (<FH>)
  {
    chomp($_);
    push(@modules,$_);
  }
  close(FH);
}

# Tries the modules one by one
sub test_modules()
{
  my $module_list = "";
  my @args = ("php -r '\$a=\$a;'"); 
  foreach(@modules)
  {
    my $current_module = $_;
    print "Testing $current_module ... \n";
    $module_list .= "$current_module\n";
    open(FILEOUT,">$module_ini_file") or
      die "Could not open $module_ini_file for writing.";
    print FILEOUT "$module_list\n";
    close(FILEOUT);    
    my $retval = system(@args);
    if ($retval != 0)
    {
      print "\n\n!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!\n";
      print "$current_module is broken.\n";
      print "!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!\n\n";
      restore_backup();
      die('');
    }   
  }
}

# Entry point -----------------------------------------------------------------

make_backup();
read_ini();
test_modules();
restore_backup();


 
____________________________________________________________________

This post is kind of a sequel to that post….

One big problem with suexec and suphp on Apache, imho, is that files run as their owner, so an accidental chown might break things. A more logical approach is to assign a user/group to each VirtualHost, which is exactly what the ITK MPM does.

On top of that it has some additional handy features, such as limiting the maximum number of concurrent requests per VirtualHost and setting a niceness value per VirtualHost to control its CPU scheduling priority.

Now the dc member server finally has users properly isolated from one another.

Setting up mpm-itk was a lot easier than suphp, suexec, or peruser-mpm (I tried peruser-mpm first, and apache just segfaulted :S). With only a few lines of additional configuration, I was easily able to automate the migration of our 100+ accounts with a quick and dirty perl script.

mpm-itk is included in the default apache install on FreeBSD. There is no separate port for it (like there is for peruser). To use it, compile apache like this:


cd /usr/ports/www/apache22
make WITH_MPM=itk
make install

And that’s it. Apache will now use the itk mpm, and you can add an AssignUserID line to each VirtualHost. Anything running under it will run as the specified user/group, whether it’s plain html, php, or cgi. That’s another advantage: with suexec you end up configuring each web-scripting language individually, and still risk not covering everything.
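
A VirtualHost entry using it would look something like this (the user/group and paths are just placeholders):

<VirtualHost *:80>
    ServerName www.example.com
    DocumentRoot /home/client1/public_html
    # Everything served from this vhost (html, php, cgi) runs as this user/group.
    AssignUserID client1 client1
</VirtualHost>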


 
____________________________________________________________________
MTR

Have you ever experienced packet loss or bad connectivity between yourself and some other server, and wondered where exactly the problem is: your network, the server you are trying to reach, or just the internet ‘acting up’ somewhere in between?

Usually, the way you determine this is by running a traceroute and checking at which hop the latency or packet loss issues begin.

If the high latency is only noticeable at the destination, then the server you are trying to reach is most likely at fault.

If the high latency starts at your first hop, it is probably your own network that is to blame.

Anything in between is typically a problem in the route your data takes to its destination, and thus usually not under your control.

The only problem is that traceroute shows latency, not packet loss.
The solution, then, is to ping each hop in the traceroute and see what the packet loss is.

It so happens that there is a really neat, forgotten (by the masses, anyway) tool called MTR which combines ping and traceroute to do exactly that. It has been around since the dawn of time, and is thus in the package repositories of most GNU/Linux distributions, and is also present in the FreeBSD ports collection if you want to install it. (Windows users will have to compile it in cygwin.)
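
On FreeBSD, installing it from ports is the usual drill (assuming the port still lives under net/mtr):

cd /usr/ports/net/mtr
make install clean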

MTR also has a really neat curses gui which lets you watch the packet loss and lots of other statistics in real-time, making it an awesome tool for debugging networking issues.

MTR curses UI screenshot.

In the example above, it seems the hosting company of the destination server is to blame.

On top of the curses console UI, it also has a GUI for X, for you rodent-addicts.

If you want to use it in a script, or without the curses UI, you can put it in report mode, and specify a number of ping cycles, for plain stdout output.
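
For example, to get a plain stdout report of 10 ping cycles, suitable for scripts or cron mails:

mtr --report --report-cycles 10 example.com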

mtr combines the functionality of the traceroute and ping programs in a
single network diagnostic tool.

As mtr starts, it investigates the network connection between the host
mtr runs on and HOSTNAME by sending packets with purposely low TTLs.
It continues to send packets with low TTL, noting the response time of
the intervening routers. This allows mtr to print the response percentage
and response times of the internet route to HOSTNAME. A sudden increase
in packet loss or response time is often an indication of a bad (or
simply overloaded) link.

MTR screenshot

http://www.bitwizard.nl/mtr/


 
____________________________________________________________________

After a conversation with a fellow admin about how to properly wipe data from a hard drive, he decided to run a little experiment on his newly acquired dedicated server.

As we suspected, it appears that all data from the previous owner of the hard drive was up for grabs, just by browsing through `strings /dev/sda`.

He could tell that the previous owner ran Windows, fetch registry data, view emails, and even determine some of the previous owner’s browsing habits.

Not that big of a surprise, though when you really think about it, the implications of this are rather serious:

Not only can the next owner of the hard drive/server read all your data if you don’t properly wipe the drive before leaving the hosting provider, but the reverse also applies: if you move to a new server and the drive wasn’t wiped, all the old data from the previous owner is still there. If your server were ever the subject of a criminal investigation, for whatever reason, any illegal material the previous owner had could easily be blamed on you, since it would show up as deleted files.

Thus it is important to properly wipe the hard drive not only before you leave a host, but also when you get a new server.
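
A common way of doing that is to overwrite the whole disk from a rescue environment (a sketch; double-check the device name first, this is destructive and there is no undo):

# Overwrite the entire disk with zeroes (or use /dev/urandom for random data).
dd if=/dev/zero of=/dev/sda bs=1M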

He was kind enough to post this on the donationcoder.com forums, so all of this can be discussed here.


 
____________________________________________________________________

I had mentioned before that I was experiencing some problems when using natd with ipfw; more specifically traffic slowing down gradually until reaching a standstill.

I had always suspected that this was due to some recursive loop in the firewall, or natd diverting more than it should…

I finally solved the issue by making the ipfw divert rule more strict about what traffic to divert to natd.

I also added some rules that detect diverted traffic, and skip the normal allow rules, to prevent further mixups.

I am still using ipfw’s fwd feature to do the actual port forwarding, since it is always going to be faster than the natd daemon, which isn’t running inside the kernel space. (Note that there is support for in-kernel nat in FreeBSD, but I need further testing to set that up, since the last time I tried it, it caused a kernel panic, and having no kvm access on that machine makes these kinds of experiments undesirable.)

So, this is what the firewall rules ended up looking like:

# Initialize script -----------------------------------------------------------

# ip address(es) exposed to internet

inet="xxx.xxx.xxx.xxx/xx"

# jails

jail1="xxx.xxx.xxx.xxx"
# ... add more jail ip's here

any_jail="xxx.xxx.xxx.xxx/xxx"

# define how we call ipfw

IPF="ipfw -q add"

# Flush the firewall rules. We want a clean slate.

ipfw -q -f flush


# Port forwarding from internet to jails. --------------------------------------

$IPF 2 fwd $jail1,80 tcp from any to $inet 80
$IPF 3 fwd $jail1,443 tcp from any to $inet 443

# Allow local to local --------------------------------------------------------

$IPF 8 allow ip from any to any via lo0

# NATD out divert. This allows internet access from within jails. -------------

$IPF 10 divert natd ip from $any_jail to not me out via msk1
$IPF 11 skipto 10000 ip from any to any diverted

# Allow out traffic.

$IPF 12 allow ip from $inet to any out

# Services. -------------------------------------------------------------------

# DNS

$IPF 100 allow ip from any to $inet 53 in via msk1

# Apache

$IPF 101 allow tcp from any to $inet 80 in via msk1
$IPF 101 allow tcp from any to $inet 443 in via msk1

# Mail (pop3,pop3s,imap,imaps,smtp,smtps)

$IPF 102 allow tcp from any to $inet 25 in via msk1
$IPF 102 allow tcp from any to $inet 110 in via msk1
$IPF 102 allow tcp from any to $inet 143  in via msk1
$IPF 102 allow tcp from any to $inet 465 in via msk1
$IPF 102 allow tcp from any to $inet 993 in via msk1
$IPF 102 allow tcp from any to $inet 995 in via msk1

# SSH

$IPF 103 allow ip from any to $inet 22 in via msk1

# FTP

$IPF 104 allow tcp from any to $inet 21 in via msk1
$IPF 104 allow tcp from any to $inet 20 in via msk1
$IPF 104 allow tcp from any to $inet dst-port 9000-9040 in via msk1

# etc... add more services as needed

# Natd in divert. this allows internet access from within jails. --------------

$IPF 8000 divert natd ip from not me to any in via msk1
$IPF 8001 skipto 10000 ip from any to any diverted
 
# Default deny ----------------------------------------------------------------

$IPF 9000 deny log logamount 10000 ip from any to any

# Anything after 10000 is traffic re-inserted from natd. ----------------------

$IPF 10000 allow ip from any to any

If you look up almost any natd example, a `divert natd all from any to any via $iface` rule is what is depicted.

In the end, when you’re diverting from a local interface to aliases on another local interface (as is typically the case with jails), in both directions, diverting from any to any is way too generic and will cause trouble.

Try to make the divert rule as specific as possible, and keep in mind that you can match already-diverted traffic with the diverted keyword.

Some debugging tips:

Install cmdwatch from ports, and run:

cmdwatch -n1 'ipfw -a list'

This allows you to view the number of packets matched by each firewall rule in real time.
You could run this in a screen session with a split-screen setup, while running a tail -f /var/log/ipfw.log and perhaps a tcpdump session in the other screen window.

Also, when working remotely it’s probably a good idea to add something to your crontab that shuts down ipfw every 10 minutes or so, just in case you lock yourself out ( which is something very common while debugging firewalls remotely, no matter who you are ;) )

Example temporary failsafe crontab entry for a debug session:

*/10 * * * * /etc/rc.d/ipfw stop

However, it’s also frustrating to think your nat is broken when it’s really just your crontab that disabled the firewall. Therefore it’s a good idea to keep an eye on the firewall status during debugging, and run something like this in one of your screens:


cmdwatch -n1 'sysctl net.inet.ip.fw.enable'

Or, you can combine the above cmdwatch lines with:


cmdwatch -n1 'sysctl net.inet.ip.fw.enable ; ipfw -a list'

(If you’re a GNU/Linux user: the cmdwatch utility on BSD is the same as the watch command on GNU/Linux. The only difference, besides the name, is that the GNU/Linux version allows refresh intervals below one second. The watch command on FreeBSD is actually a utility to snoop on tty sessions.)


 
____________________________________________________________________

I figured I would share with you a setup I am using on all my BSD servers to monitor changes to the filesystem.

The idea is to be notified by email at a certain interval (e.g. once a day) with a list of all files that have changed since the last time the check ran.

This allows you to be notified when files change without your knowledge, for example in the event of a cracker breaking into the server. Or say you accidentally recursively chowned / and managed to interrupt the command: mtree lets you see how many of the files were affected, and fix them.
mtree also reports HOW the files were changed; in the chown scenario, for example, it would mention the expected uid/gid and what they changed to. This would allow for an automated recovery from such a disaster.

In addition to the e-mail notifications it will also keep a log file (by default in /var/log/mtree.log)

The utility we’ll use for this on FreeBSD is mtree (On GNU/Linux you’d have to use tripwire or auditd).
I wrote a perl script which uses mtree to accomplish what I described above: download it.

So basically, to set it up, you can do the following:

mkdir /usr/mtree
cd /usr/mtree
touch fs.mtree fs.exclude
wget http://linkerror.com/programs/automtree
chmod +x automtree

Now, if you run ./automtree -h you’ll see a list of valid options with some documentation:

  Usage: ./automtree [OPTION] ...
  Show or E-mail out a list of changes to the file system.

  mtree operation options:

    -u,  --update        Updates the file checksum database after 
                         showing/mailing changes.
    -uo, --update-only   Only update the file checksum database.
    -p,  --path          Top level folder to monitor (default: /)
    -q,  --quiet         Do not output scan results to stdout or any
                         other output.

  Path configuration options:

    -l,  --log           Logfile location 
                         (default: /var/log/mtree.log)
         --mtree         Set the location of the mtree executable. 
                         (default is /usr/sbin/mtree)
         --checksum-file Set the location of the file containing the 
                         mtree file checksums. 
                         (default: /usr/mtree/fs.mtree)
         --exclude-file  Set the location of the file containing the 
                         list of files and folders to exclude from the 
                         mtree scan. (default is /usr/mtree/fs.exclude)

  E-mail options:

    -e,  --email         Adds specified e-mail address as destination.
         --sendmail      Set the location of the sendmail executable. 
                         (default: /usr/sbin/sendmail)
         --reply-to      Set the e-mail reply-to address.
         --subject       Sets The e-mail subject. 

  Misc options:

    -h,  --help          Display this help text.
 

  Example usage:

    ./automtree -uo
    ./automtree -u -q -e foo@example.com -e bar@example.com
    ./automtree /var/www --mtree /usr/local/sbin/mtree

As you can see, by default the script will index the entire filesystem, since the default for the -p option is / … You’ll want to exclude some folders from that, so edit the fs.exclude file and stick at least this into it:


./dev
./proc
./var
./tmp
./usr/mtree
./usr/share/man
./usr/share/openssl/man
./usr/local/man
./usr/local/lib/perl5/5.8.8/man
./usr/local/lib/perl5/5.8.8/perl/man

Note that you have to prefix folders with ./
So now, in order to automatically scan and receive notifications, the command which will go into crontab is:


./automtree -u -q -e foo@example.com

(It is possible to add multiple -e options for multiple e-mail destinations.)

The command above will not output to stdout (-q), email filesystem changes to foo@example.com (-e foo@example.com), and automatically update the checksum file with the newly discovered changes (-u).

An example crontab line, to check every 3 hours (type crontab -e to edit your crontab):


0 */3 * * * /usr/mtree/automtree -u -q -e youremail@example.com > /dev/null 2>&1

The script won’t send an e-mail if there are no changes to report.


 
____________________________________________________________________