
When you’re running any kind of shared hosting server, with hundreds of clients that can run PHP scripts, send emails, and so on, how do you make sure you’re not setting yourself up to be one big spam haven? (The honest answer is: you don’t, since shared hosting is one big mess. You’re screwed.) A compromised client script could be sending out spam without using your MTA, so it would never show up in your logs or mail queue.

For this reason I wrote a little Perl script which sniffs all outgoing SMTP traffic and dumps it to a file. You could then set up a cron job which scans the file for known keywords used by spammers (viagra/v1agra/Vi4Gr4/etc.) and alerts you when something is found; or you could make it extract the emails and run them through SpamAssassin.
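As a sketch of what that cron job could look like (the dump path and the keyword list here are made-up examples, not part of the actual script):

```shell
#!/bin/sh
# Scan the sniffer's dump file for common spam keywords and alert on a hit.
# The DUMP path is a placeholder; point it at wherever the sniffer writes.
DUMP="${1:-/var/log/smtp-dump.txt}"

# Case-insensitive match against a few typical obfuscated spellings.
if grep -i -E 'viagra|v1agra|vi4gr4' "$DUMP"; then
    echo "ALERT: possible outgoing spam found in $DUMP" >&2
fi
```

Run it from cron every few minutes and pipe the output to mail, extending the pattern list as new junk shows up.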

This way, even if the outgoing traffic is sent by some script using raw sockets to connect to port 25 of an external mail server, bypassing your MTA entirely, you will still know about it.
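If you just want a quick look at that traffic without the script, tcpdump can capture the same thing (the interface name here is an example; use your internet-facing NIC):

```shell
# Print outgoing SMTP (port 25) traffic in ASCII, with full packet contents.
# Replace msk1 with your internet-facing interface.
tcpdump -A -s 0 -i msk1 'tcp dst port 25'
```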

Just change the settings at the top of the script to reflect the IP address(es) you’re using and the network interface open to the internet.

Download/View it here


One thing that seems to happen a lot with PHP on FreeBSD is that it tends to segfault when some wrong combination of modules is installed, or when a module left over from a previous PHP version doesn’t play nice with your newly upgraded one…

You’ll usually notice things like this in your daily security mails:

+pid 44994 (httpd), uid 80: exited on signal 11
+pid 44992 (httpd), uid 80: exited on signal 11
+pid 50351 (httpd), uid 80: exited on signal 11
+pid 51432 (httpd), uid 80: exited on signal 11
+pid 89423 (httpd), uid 80: exited on signal 11
... etc...
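To get a quick count of how often this is happening, you can grep the system log these lines come from:

```shell
# Count httpd crashes logged as "exited on signal 11" (segfault).
grep -c 'exited on signal 11' /var/log/messages
```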

For the case of modules mismatched with the current PHP version, I once came up with this super-overkill script that completely wipes anything PHP-related from the system and reinstalls it. That works fine in most cases, but won’t save you if you select two modules that don’t play nice together.

So I wrote this little Perl script that detects a malfunctioning module by enabling the modules one by one in extensions.ini and testing each combination. You could just comment them all out and uncomment them one by one by hand, but that is a pain, especially if you have to do it on multiple servers.

Sample output:

# php_module_detective.pl

Note: A backup of your extensions.ini was created as /usr/local/etc/php/extensions.ini.bak ...

Testing extension=bcmath.so ... 
Testing extension=bz2.so ... 
Testing extension=calendar.so ... 
Testing extension=ctype.so ... 
Testing extension=curl.so ... 
Testing extension=dba.so ... 
Testing extension=pcre.so ... 
Testing extension=simplexml.so ... 
Testing extension=spl.so ... 
Testing extension=dom.so ... 
Testing extension=exif.so ... 
Testing extension=filter.so ... 
Testing extension=gd.so ... 
Testing extension=gettext.so ... 
Testing extension=gmp.so ... 
Testing extension=hash.so ... 
Testing extension=iconv.so ... 
Testing extension=json.so ... 
Testing extension=mbstring.so ... 
Testing extension=mcrypt.so ... 
Testing extension=mhash.so ... 
Segmentation fault (core dumped)

extension=mhash.so is broken.

Backup restored.

Died at /usr/local/bin/php_module_detective.pl line 69.

Here’s the script:


#!/usr/bin/perl

use strict;
use warnings;
use File::Copy;

# Settings --------------------------------------------------------------------

# Where to find extensions.ini ...
my $module_ini_file = '/usr/local/etc/php/extensions.ini';

# Globals ---------------------------------------------------------------------

# Holds a list of all modules found in the ini file.
my @modules;

# Functions -------------------------------------------------------------------

# Creates a backup of the ini file.
sub make_backup {
    copy($module_ini_file, "$module_ini_file.bak")
        or die("Could not create a backup.");
    print "Note: A backup of your extensions.ini was created as $module_ini_file.bak ...\n\n";
}

# Restores the backup.
sub restore_backup {
    copy("$module_ini_file.bak", $module_ini_file)
        or die("Failed to restore backup.");
    print "Backup restored.\n\n";
}

# Reads the ini file and fills the modules array.
sub read_ini {
    open(FH, '<', $module_ini_file)
        or die "Could not open $module_ini_file";
    while (<FH>) {
        chomp;
        push(@modules, $_) if /^extension=/;
    }
    close(FH);
}

# Tries the modules one by one: rewrite extensions.ini with a growing list
# and run the php CLI against it each time until one combination breaks.
sub test_modules {
    my $module_list = "";
    my @args = ("php -r '\$a=\$a;'");
    foreach my $current_module (@modules) {
        print "Testing $current_module ... \n";
        $module_list .= "$current_module\n";
        open(FILEOUT, '>', $module_ini_file)
            or die "Could not open $module_ini_file for writing.";
        print FILEOUT "$module_list\n";
        close(FILEOUT);
        my $retval = system(@args);
        if ($retval != 0) {
            print "\n\n!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!\n";
            print "$current_module is broken.\n";
            print "!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!\n\n";
            restore_backup();
            die;
        }
    }
}

# Main ------------------------------------------------------------------------

make_backup();
read_ini();
test_modules();
restore_backup();



Have you ever experienced packet loss or bad connectivity between yourself and some other server, and then wondered where exactly the problem is: your network, the server you are trying to reach, or just the internet ‘acting up’ somewhere in between?

Usually, the way you determine this is by running a traceroute and checking at which hop the latency or packet loss issues begin.

If the high latency is only noticeable at the destination, then the server you are trying to reach is most likely at fault.

If the high latency starts at your first hop, it is probably your own network that is to blame.

Anything in between is typically a problem in the route your data takes to its destination, and thus usually not under your control.

The only problem is that traceroute shows latency, not packet loss.
The solution, then, is to ping each hop in the traceroute and see what the packet loss to it is.

It so happens that there is a really neat, forgotten (by the masses, anyway) tool called MTR which combines ping and traceroute to do exactly that. It has been around since the dawn of time, and is thus in the package repositories of most GNU/Linux distributions; it is also present in the FreeBSD ports collection if you want to install it. (Windows users will have to compile it under Cygwin.)

MTR also has a really neat curses UI which lets you watch the packet loss and lots of other statistics in real time, making it an awesome tool for debugging network issues.

MTR curses UI screenshot.

In the example above, it seems the hosting company of the destination server is to blame.

On top of the curses console UI, it also has a GUI for X, for you rodent-addicts.

If you want to use it in a script, or just without the curses UI, you can put it in report mode and specify a number of ping cycles, for plain stdout output.
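For example (the hostname is a placeholder):

```shell
# Send 10 probe cycles to every hop, then print a plain-text report.
mtr --report --report-cycles 10 example.com
```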

mtr combines the functionality of the traceroute and ping programs in a
single network diagnostic tool.

As mtr starts, it investigates the network connection between the host
mtr runs on and HOSTNAME by sending packets with purposely low TTLs.
It continues to send packets with low TTL, noting the response time of
the intervening routers. This allows mtr to print the response
percentage and response times of the internet route to HOSTNAME. A sudden
increase in packet loss or response time is often an indication of a bad
(or simply overloaded) link.

MTR screenshot



I had mentioned before that I was experiencing some problems when using natd with ipfw; more specifically, traffic gradually slowing down until it reached a standstill.

I had always suspected that this is due to some recursive loop in the firewall, or natd diverting more than it should…

I finally solved the issue by making the ipfw divert rule more strict about what traffic to divert to natd.

I also added some rules that detect diverted traffic, and skip the normal allow rules, to prevent further mixups.

I am still using ipfw’s fwd feature to do the actual port forwarding, since it will always be faster than the natd daemon, which doesn’t run in kernel space. (Note that there is support for in-kernel NAT in FreeBSD, but setting that up needs further testing on my end: the last time I tried it, it caused a kernel panic, and having no KVM access on that machine makes these kinds of experiments undesirable.)

So, this is what the firewall rules ended up looking like:

# Initialize script -----------------------------------------------------------

# ip address(es) exposed to internet
inet="1.2.3.4"                # placeholder: substitute your public IP

# jails
jail1="192.168.0.10"          # placeholder: first jail's IP
any_jail="192.168.0.0/24"     # placeholder: range covering all jail IPs
# ... add more jail ip's here

# define how we call ipfw

IPF="ipfw -q add"

# Flush the firewall rules. We want a clean slate.

ipfw -q -f flush

# Port forwarding from internet to jails. --------------------------------------

$IPF 2 fwd $jail1,80 tcp from any to $inet 80
$IPF 3 fwd $jail1,443 tcp from any to $inet 443

# Allow local to local --------------------------------------------------------

$IPF 8 allow ip from any to any via lo0

# NATD out divert. This allows internet access from within jails. -------------

$IPF 10 divert natd ip from $any_jail to not me out via msk1
$IPF 11 skipto 10000 ip from any to any diverted

# Allow out traffic.

$IPF 12 allow ip from $inet to any out

# Services. -------------------------------------------------------------------


# DNS

$IPF 100 allow ip from any to $inet 53 in via msk1

# Apache

$IPF 101 allow tcp from any to $inet 80 in via msk1
$IPF 101 allow tcp from any to $inet 443 in via msk1

# Mail (pop3,pop3s,imap,imaps,smtp,smtps)

$IPF 102 allow tcp from any to $inet 25 in via msk1
$IPF 102 allow tcp from any to $inet 110 in via msk1
$IPF 102 allow tcp from any to $inet 143  in via msk1
$IPF 102 allow tcp from any to $inet 465 in via msk1
$IPF 102 allow tcp from any to $inet 993 in via msk1
$IPF 102 allow tcp from any to $inet 995 in via msk1


# SSH

$IPF 103 allow ip from any to $inet 22 in via msk1


# FTP (control, data, and a passive port range)

$IPF 104 allow tcp from any to $inet 21 in via msk1
$IPF 104 allow tcp from any to $inet 20 in via msk1
$IPF 104 allow tcp from any to $inet dst-port 9000-9040 in via msk1

# etc... add more services as needed

# Natd in divert. this allows internet access from within jails. --------------

$IPF 8000 divert natd ip from not me to any in via msk1
$IPF 8001 skipto 10000 ip from any to any diverted
# Default deny ----------------------------------------------------------------

$IPF 9000 deny log logamount 10000 ip from any to any

# Anything after 10000 is traffic re-inserted from natd. ----------------------

$IPF 10000 allow ip from any to any

If you look up almost any natd example, a divert rule from any to any via $iface is depicted.

In the end, when you are diverting in both directions between a local interface and aliases on another local interface (as is typically the case with jails), diverting from any to any is far too generic, and will cause trouble.

Try to make the divert rule as specific as possible, and keep in mind that you can match previously diverted traffic with the diverted keyword.

Some debugging tips:

Install cmdwatch from ports, and run:

cmdwatch -n1 'ipfw -a list'

This allows you to view the number of packets matched by each firewall rule in real time.
You could run this in a screen session with a split-screen setup, running a tail -f /var/log/ipfw.log and perhaps a tcpdump session in the other window.

Also, when working remotely, it’s probably a good idea to add something to your crontab that shuts down ipfw every 10 minutes or so, just in case you lock yourself out (which is very common while debugging firewalls remotely, no matter who you are ;) ).

Example temporary failsafe crontab entry for a debug session:

*/10 * * * * /etc/rc.d/ipfw stop

However, it’s also frustrating when you think your NAT is broken, when really it’s just your crontab that disabled your firewall. Therefore it’s a good idea to keep an eye on the firewall status while debugging, and run something like this in one of your screens:

cmdwatch -n1 'sysctl net.inet.ip.fw.enable'

Or, you can combine the above cmdwatch lines with:

cmdwatch -n1 'sysctl net.inet.ip.fw.enable ; ipfw -a list'

(If you’re a GNU/Linux user: the cmdwatch utility on BSD is the same as the watch command on GNU/Linux. The only difference, besides the name, is that the GNU/Linux version allows refresh intervals below a second. The watch command on FreeBSD is actually a utility to snoop on tty sessions.)


If you’re ever working with vsftpd and FileZilla dumps out this error:

GnuTLS error -8: A record packet with illegal version was received

And you’re not finding any relevant error messages in your vsftpd log file, in the xferlog, or in /var/log/messages?

Well, vsftpd seems to be horribly un-verbose. This error is not caused by some obscure TLS problem. What’s actually happening is that vsftpd dumps a plain-text error into the middle of the encrypted data stream, which makes the FTP client bail out with this message.

The only way to debug this was to sniff the actual connection with Wireshark. Following the TCP stream, the error I had been looking for in the log files was clearly visible at the end of the TLS-encrypted data, right before the connection dropped.

Something like:

dvU2@:M.&.X=:-A*4aUm3:)!)y5Kt$'&"ZQN:'v%X500 OOPS: Cannot change directory: /foo

It turned out to be a simple permissions issue.
Why vsftpd doesn’t log these to its own log file, or even to syslogd, who knows. At its most verbose configuration it logs all sorts of things, except the actual error causing the problem!

Had encryption not been enabled in vsftpd, the error would have been visible in the FTP client.

So to anyone encountering this, I would recommend temporarily disabling encryption in vsftpd in order to see the error, or, if that is not an option, using a packet sniffer to view it.
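For the packet-sniffer route, something as simple as tcpdump on the control connection will do; the plain-text "500 OOPS" line shows up amid the TLS records (the interface name is an example):

```shell
# Dump the FTP control connection in ASCII so the stray plain-text
# error from vsftpd is visible. Replace em0 with your interface.
tcpdump -A -s 0 -i em0 'tcp port 21'
```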

I figured I would post this since Google didn’t bring up much of use while I was debugging it. :)