LL      IIIII NN   NN KK  KK EEEEEEE RRRRRR  RRRRRR   OOOOO  RRRRRR 
LL       III  NNN  NN KK KK  EE      RR   RR RR   RR OO   OO RR   RR
LL       III  NN N NN KKKK   EEEEE   RRRRRR  RRRRRR  OO   OO RRRRRR 
LL       III  NN  NNN KK KK  EE      RR  RR  RR  RR  OO   OO RR  RR 
LLLLLLL IIIII NN   NN KK  KK EEEEEEE RR   RR RR   RR  OOOOO  RR   RR
                                                           ramblings
____________________________________________________________________

After a conversation with a fellow admin about how to properly wipe data from a hard drive, he decided to run a little experiment with his newly acquired dedicated server.

As we suspected, it appears that all data from the previous owner of the hard drive was up for grabs just by browsing through `strings /dev/sda`.

He was able to tell that the previous owner ran Windows; he was able to fetch registry data, view emails, and even determine some of the previous owner’s browsing habits.

Not that big of a surprise, but when you really think about it, the implications are rather serious:

Not only can the next owner of the hard drive/server read all your data if you don’t properly wipe the drive before leaving your hosting provider, but the reverse holds too: if you move to a new server and don’t wipe its drive, all the old data from the previous owner is still there. If your server ever became the subject of a criminal investigation, for whatever reason, any illegal material the previous owner left behind could easily be blamed on you, since it would show up as deleted files.

Thus it is important to properly wipe the hard drive not only before you leave a host, but also when you get a new server.
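How recoverable "deleted" data is, and how a single overwrite pass defeats casual browsing, is easy to demonstrate on a file-backed image (the file name here is arbitrary; running dd against a real /dev/sdX is destructive, so triple-check the target first):

```shell
# Create a small "disk" and leave some data on it, as a previous owner would:
dd if=/dev/zero of=disk.img bs=1M count=4 2>/dev/null
printf 'SECRET-CUSTOMER-DATA' | dd of=disk.img bs=1 seek=4096 conv=notrunc 2>/dev/null

# Deleting files does not remove the bytes; strings still finds them:
strings disk.img | grep SECRET

# One full pass of random data is enough to defeat strings-style browsing:
dd if=/dev/urandom of=disk.img bs=1M count=4 2>/dev/null
strings disk.img | grep SECRET || echo 'nothing recoverable'
```

For a real drive you would point dd at the device node, preferably from a rescue environment, since you can't wipe a mounted root filesystem out from under yourself.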

He was kind enough to post this on the donationcoder.com forums, so all of this can be discussed here.


 
____________________________________________________________________

I had mentioned before that I was experiencing some problems when using natd with ipfw; more specifically, traffic slowing down gradually until it reached a standstill.

I had always suspected that this was due to some recursive loop in the firewall, or natd diverting more than it should…

I finally solved the issue by making the ipfw divert rule more strict about what traffic to divert to natd.

I also added some rules that detect diverted traffic, and skip the normal allow rules, to prevent further mixups.

I am still using ipfw’s fwd feature to do the actual port forwarding, since it is always going to be faster than the natd daemon, which doesn’t run in kernel space. (Note that there is support for in-kernel NAT in FreeBSD, but I need further testing to set that up; the last time I tried it, it caused a kernel panic, and having no KVM access on that machine makes these kinds of experiments undesirable.)

So, this is what the firewall rules ended up looking like:

# Initialize script -----------------------------------------------------------

# ip address(es) exposed to internet

inet="xxx.xxx.xxx.xxx/xx"

# jails

jail1="xxx.xxx.xxx.xxx"
# ... add more jail ip's here

any_jail="xxx.xxx.xxx.xxx/xxx"

# define how we call ipfw

IPF="ipfw -q add"

# Flush the firewall rules. We want a clean slate.

ipfw -q -f flush


# Port forwarding from internet to jails. --------------------------------------

$IPF 2 fwd $jail1,80 tcp from any to $inet 80
$IPF 3 fwd $jail1,443 tcp from any to $inet 443

# Allow local to local --------------------------------------------------------

$IPF 8 allow ip from any to any via lo0

# NATD out divert. This allows internet access from within jails. -------------

$IPF 10 divert natd ip from $any_jail to not me out via msk1
$IPF 11 skipto 10000 ip from any to any diverted

# Allow out traffic.

$IPF 12 allow ip from $inet to any out

# Services. -------------------------------------------------------------------

# DNS

$IPF 100 allow ip from any to $inet 53 in via msk1

# Apache

$IPF 101 allow tcp from any to $inet 80 in via msk1
$IPF 101 allow tcp from any to $inet 443 in via msk1

# Mail (pop3,pop3s,imap,imaps,smtp,smtps)

$IPF 102 allow tcp from any to $inet 25 in via msk1
$IPF 102 allow tcp from any to $inet 110 in via msk1
$IPF 102 allow tcp from any to $inet 143  in via msk1
$IPF 102 allow tcp from any to $inet 465 in via msk1
$IPF 102 allow tcp from any to $inet 993 in via msk1
$IPF 102 allow tcp from any to $inet 995 in via msk1

# SSH

$IPF 103 allow ip from any to $inet 22 in via msk1

# FTP

$IPF 104 allow tcp from any to $inet 21 in via msk1
$IPF 104 allow tcp from any to $inet 20 in via msk1
$IPF 104 allow tcp from any to $inet dst-port 9000-9040 in via msk1

# etc... add more services as needed

# Natd in divert. this allows internet access from within jails. --------------

$IPF 8000 divert natd ip from not me to any in via msk1
$IPF 8001 skipto 10000 ip from any to any diverted
 
# Default deny ----------------------------------------------------------------

$IPF 9000 deny log logamount 10000 ip from any to any

# Anything after 10000 is traffic re-inserted from natd. ----------------------

$IPF 10000 allow ip from any to any

If you look up almost any natd example, a divert from any to any via $iface is depicted.

It turns out that when you’re diverting between a local interface and aliases on another local interface (as is typically the case with jails), in both directions, diverting from any to any is far too generic, and will cause trouble.

Make the divert rule as specific as possible, and keep in mind that you can match already-diverted traffic with the diverted keyword.
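To make the contrast concrete, here is the shape of the rule most tutorials show, next to a scoped version (interface name and jail subnet are hypothetical):

```shell
# Too generic: every packet crossing the interface is pushed through natd,
# including jail-to-jail and already-translated traffic.
# ipfw add divert natd ip from any to any via em0

# Scoped: only jail traffic leaving for the outside world is diverted out,
# and the "diverted" keyword then catches whatever natd re-injects:
ipfw add 10 divert natd ip from 192.168.0.0/24 to not me out via em0
ipfw add 11 skipto 10000 ip from any to any diverted
```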

Some debugging tips:

Install cmdwatch from ports, and run:

cmdwatch -n1 'ipfw -a list'

This allows you to view the number of packets matched by each firewall rule in real time.
You could run this in a screen session with a split-screen setup, running tail -f /var/log/ipfw.log in the other screen, and perhaps a tcpdump session as well.

Also, when working remotely it’s probably a good idea to add a crontab entry that shuts down ipfw every 10 minutes or so, just in case you lock yourself out (which is very common while debugging firewalls remotely, no matter who you are ;) )

Example temporary failsafe crontab entry for a debug session:

*/10 * * * * /etc/rc.d/ipfw stop

However, it’s also frustrating to think your NAT is broken when it’s really your crontab that just disabled the firewall. Therefore it’s a good idea to keep an eye on the firewall status during debugging, and run something like this in one of your screens:


cmdwatch -n1 'sysctl net.inet.ip.fw.enable'

Or, you can combine the above cmdwatch lines with:


cmdwatch -n1 'sysctl net.inet.ip.fw.enable ; ipfw -a list'

(If you’re a GNU/Linux user: the cmdwatch utility on BSD is the same as the watch command on GNU/Linux. The only difference, besides the name, is that the GNU/Linux version allows refresh intervals below a second. The watch command on FreeBSD is actually a utility to snoop on tty sessions.)


 
____________________________________________________________________

I figured I would share with you a setup I am using on all my BSD servers to monitor changes to the filesystem.

The idea is to be notified by email at a certain interval (eg: once a day) with a list of all files that have changed since last time the check ran.

This allows you to be notified when files change without your knowledge, for example in the event of a cracker breaking into the server, or if you accidentally recursively chowned / and managed to interrupt the command; mtree lets you see how many files were affected, and fix them.
mtree also reports HOW the files were changed. In the chown scenario, for example, it would mention the expected uid/gid and what they changed to. This would even allow for automated recovery from such a disaster.
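The core idea, a checksum baseline plus a diff on re-scan, can be sketched in plain shell (paths hypothetical; sha256sum is the GNU name, FreeBSD ships sha256 instead, and mtree does all of this natively while recording far more metadata, such as uid/gid and permissions):

```shell
# Build a baseline of checksums for every file under the monitored tree:
find /usr/local/etc -type f -exec sha256sum {} + | sort -k2 > baseline.sum

# Later, re-scan and compare; any diff output is a changed, added, or removed file:
find /usr/local/etc -type f -exec sha256sum {} + | sort -k2 > current.sum
diff baseline.sum current.sum && echo 'no changes'
```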

In addition to the e-mail notifications, it also keeps a log file (by default /var/log/mtree.log).

The utility we’ll use for this on FreeBSD is mtree (on GNU/Linux you’d have to use tripwire or auditd).
I wrote a Perl script which uses mtree to accomplish what I described above: download it.

So basically, to set it up, you can do the following:

mkdir /usr/mtree
cd /usr/mtree
touch fs.mtree fs.exclude
wget http://linkerror.com/programs/automtree
chmod +x automtree

Now, if you run ./automtree -h you’ll see a list of valid options with some documentation:

  Usage: ./automtree [OPTION] ...
  Show or E-mail out a list of changes to the file system.

  mtree operation options:

    -u,  --update        Updates the file checksum database after 
                         showing/mailing changes.
    -uo, --update-only   Only update the file checksum database.
    -p,  --path          Top level folder to monitor (default: /)
    -q,  --quiet         Do not output scan results to stdout or any
                         other output.

  Path configuration options:

    -l,  --log           Logfile location 
                         (default: /var/log/mtree.log)
         --mtree         Set the location of the mtree executable. 
                         (default is /usr/sbin/mtree)
         --checksum-file Set the location of the file containing the 
                         mtree file checksums. 
                         (default: /usr/mtree/fs.mtree)
         --exclude-file  Set the location of the file containing the 
                         list of files and folders to exclude from the 
                         mtree scan. (default is /usr/mtree/fs.exclude)

  E-mail options:

    -e,  --email         Adds specified e-mail address as destination.
         --sendmail      Set the location of the sendmail executable. 
                         (default: /usr/sbin/sendmail)
         --reply-to      Set the e-mail reply-to address.
         --subject       Sets the e-mail subject. 

  Misc options:

    -h,  --help          Display this help text.
 

  Example usage:

    ./automtree -uo
    ./automtree -u -q -e foo@example.com -e bar@example.com
    ./automtree /var/www --mtree /usr/local/sbin/mtree

As you can see, by default the script will index the entire filesystem, since the default for the -p option is / … To make that practical you’ll want to ignore some folders, so edit the fs.exclude file and stick at least this into it:


./dev
./proc
./var
./tmp
./usr/mtree
./usr/share/man
./usr/share/openssl/man
./usr/local/man
./usr/local/lib/perl5/5.8.8/man
./usr/local/lib/perl5/5.8.8/perl/man

Note that you have to prefix folders with ./
Now, in order to automatically scan and receive notifications, the command that goes into crontab is:


./automtree -u -q -e foo@example.com

(It is possible to add multiple -e options for multiple e-mail destinations.)

The command above will not output to stdout (-q), email filesystem changes to foo@example.com (-e foo@example.com), and automatically update the checksum file with the newly discovered changes (-u).

An example crontab line, to check every 3 hours (type crontab -e to edit your crontab):


0 */3 * * * /usr/mtree/automtree -u -q -e youremail@example.com > /dev/null 2>&1

The script won’t send an e-mail if there are no changes to report.


 
____________________________________________________________________

If you’re ever working with vsftpd, and FileZilla dumps out this error:

GnuTLS error -8: A record packet with illegal version was received

And you’re not finding any relevant error messages in your vsftpd log file, nor in the xferlog, nor in /var/log/messages?

Well, vsftpd seems to be horribly un-verbose. This error is not caused by some obscure TLS problem. What’s causing it is vsftpd dumping out a plain-text error in the middle of the encrypted data stream, which makes the FTP client bail out with this message.

The only way to debug this was by packet sniffing the actual connection with Wireshark. Following the TCP stream, the error I had been looking for in the log files was clearly visible at the end of the TLS-encrypted data, just before the connection dropped.

Something like:

\5_TXC,[1d.c}$D12N8(,"ndKm:?Y5O\M)5{nj2*Uaiym8-T4rt2c'#/K(
dvU2@:M.&.X=:-A*4aUm3:)!)y5Kt$'&"ZQN:'v%X500 OOPS: Cannot change directory: /foo

It turned out to be a simple permissions issue…
Why vsftpd isn’t logging these to its own log file, or even to syslogd, who knows. At its most verbose configuration, it logs all sorts of things, except the actual error causing the problem!

Had encryption not been enabled in vsftpd, the error would have been visible in the FTP client.

So to anyone encountering this, I would recommend either temporarily disabling encryption in vsftpd in order to see the error, or, if that is not an option, using a packet sniffer to view it.
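You don’t necessarily need a full Wireshark session to spot the leak; since the error is injected as plain text, strings on a saved capture will surface it (capture command and file name are hypothetical):

```shell
# Capture the failing session first, e.g.:
#   tcpdump -i em0 -s 0 -w ftp.pcap port 21
# Then pull readable text out of the otherwise-encrypted byte stream:
strings ftp.pcap | grep 'OOPS'
```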

I figured I would post this since google didn’t bring up much of use while I was debugging. :)


 
____________________________________________________________________

If you have an account on the donationcoder.com member server, you might be aware that I have recently been working on the server intensively. The mysql hostname has changed, and the word ‘jail’ was dropped here and there. In this post I’ll attempt to explain in more detail what is going on.

Quite some time ago now, we had moved the member server accounts from our old, now discontinued vps server to a new dedicated server.

This move had to happen as quickly as possible, since all the websites of our members would be completely down during the transition.
Unfortunately this meant that certain things I normally would/should have implemented beforehand had to be implemented later on.
Also, some of the unique problems associated with the member server only really came to stand out the more I worked with it.
The problems can be summarized as follows:

  • There are many accounts on the server.
  • Many people are running custom code, or third party code that is not always upgraded to the latest version. This poses a great security risk. Even though we ask that they do, many of our users fail to keep track of security updates for, say, wordpress or other popular software. Another problem is people using code they found on the internet, which may not always be secure. (One example is a script to make pretty directory indexes. It allowed you to pass a ?dir=/some/folder parameter and had no bounds checking, so it exposed all files the user running the script could access (currently the apache www user), effectively exposing most files on the server.) Some of these things are very subtle, and with some 100 accounts, I cannot possibly police them all.
  • Some things are more subtle, such as a malfunctioning php script writing to the database in an infinite loop, thus filling the hard drive. To prevent one user from being able to fill the drive on error, we have a quota system. Unfortunately, by default the mysql binary files are not owned by the user in question, so databases were not protected by this system. This has now been fixed by chowning the database files to the user in question, but it still serves as a great example of the complexity and subtle nature of the problems you face when administering a server shared by many users.

A few realities:

  • Giving a user the ability to run custom php/ruby/perl/cgi scripts pretty much equals giving them shell access as the user running the script.
  • Control over what code runs on the server becomes more and more impossible as the number of user accounts grows. Which leads to the next reality:
  • The server becomes a hostile environment, and should be treated as such.

Ok, so what can be done to at least try to contain the situation a bit:

  • Virtualization, chroot, or jails: each user gets a virtual system. -> Not advisable, because this way you effectively manage n systems instead of just 1, where n is the number of users you have. This leads to maintenance nightmares: instead of having to apply a patch or security update once, it has to be performed on all the virtual machines simultaneously. In addition, the maximum capacity of the server would be greatly reduced due to the overhead of virtualization and the extra disk space needed for each self-contained system. It’s possible to pull this off if you build scripts for replicating updates across the different jails or virtual machines, and you have the time to implement those things and the resources (money) to keep adding hardware as needed – but we don’t, so this is not a realistic option for us.
  • suphp: instead of running all php and cgi scripts as the www user, scripts run as the user who owns the file. This way it is possible to use plain unix permissions to prevent users from accessing each other’s files, or other system files that shouldn’t be accessible. A valid uid range can be configured to prevent files owned by root from being run as root (that would be bad). -> This approach only works as long as users assign proper permissions to their files. Many of them are not familiar with file permissions, let alone the various nuances of the security problems in the things they install. If chmod 777 is the easier way to accomplish something, some will probably just do that, defeating the purpose of suphp. Also, many files still need to be shared between users to even be able to run php, rails, or cgi scripts. A compromised script is still only a local exploit away from gaining access to the full server.
  • MAC (mandatory access control): with mandatory access control, security policies can be set up for each application, and access is restricted at the kernel level to certain system calls etc. SELinux uses this approach; the TrustedBSD project brought it to the FreeBSD kernel. -> The downside here is that MAC is very time consuming to set up, and tends to lead to a very complex security setup. Arguably, a complex security setup is a security risk in its own right, since it becomes harder to clearly oversee the big picture. Also, you’re still only a kernel exploit away from being pwned. (There is no real defense against kernel exploits other than keeping the kernel patched and up to date for known exploits, and hoping that nobody has an unpublished 0-day on hand.)

So, it is clear that each approach is not without its problems. As is usually the case with computer security, the best approach is a layered model (ie, combining several methods); there is not one magical perfect solution.

The plan is to put each service exposed to the internet in a FreeBSD jail (eg, a jail for MySQL, one for Apache, one for E-mail, etc…), and then, inside the Apache jail, use suphp. This way there is a manageable number of systems to maintain, and users are still able to protect their files from other users. A compromise of a web script is contained within the apache jail, and will not necessarily compromise mysql or the e-mail services, for example. Perhaps later a MAC layer can be added, if I can figure out a way to not make it overly complex. And all this has to happen with minimal downtime, while the system is live.

I have already moved MySQL and Apache into jails. Bind DNS was already jailed before we went live.

Jails each need an IP address assigned to them. For the sake of taking advantage of the jails concept and virtual interfaces, I am not running all of the jails on the public interface (which would be a really bad idea in the case of MySQL to begin with). Instead, each jail has its own virtual LAN IP (eg: 192.168.0.1 for apache, 192.168.0.2 for mysql). It is for this reason that I have asked users to now use mysql.dcwing.com as their MySQL server instead of localhost. Each jail having its own IP address is handy, for example, if you want to tcpdump (sniff) traffic to/from a specific service, or run stats on it, etc. It’s all nicely isolated. It also allows you to deny net access to jails that don’t need it, and to prevent certain jails from having network connectivity to services they shouldn’t be able to reach.
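As a sketch of how such a jail LAN can be set up on FreeBSD (interface name and addresses are hypothetical, and the exact commands depend on your FreeBSD version and rc.conf layout):

```shell
# Clone a loopback interface to carry the private jail addresses:
ifconfig lo1 create
ifconfig lo1 inet 192.168.0.1 netmask 255.255.255.255 alias   # apache jail
ifconfig lo1 inet 192.168.0.2 netmask 255.255.255.255 alias   # mysql jail

# The isolation makes per-service inspection trivial, e.g. sniff only MySQL traffic:
tcpdump -i lo1 host 192.168.0.2
```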

In order to redirect traffic from the internet to the public WAN interface, to the virtual LAN interface of the apache jail, I had to add some port forwarding rules to the firewall:


00001 fwd 192.168.0.1,80 tcp from any to 216.180.244.50 dst-port 80
00001 fwd 192.168.0.1,443 tcp from any to 216.180.244.50 dst-port 443

The only problem now is that there is no reliable way to route traffic from inside, say, the apache jail to the internet, other than using NAT.


add 2 divert natd ip from any to not 192.168.0.0/16

The above rule works great: internet access from inside the jails works. However, it seems to introduce a problem I haven’t quite been able to debug yet: traffic from the internet to the server becomes very slow. For example, when downloading a file, it starts at 3 KiB/sec and then gradually slows down to 0, until the connection stalls and dies. Clearly something is going wrong in the firewall, and I haven’t yet figured out what. For this reason, internet access from inside php scripts is currently not working. (I have to leave the NAT rule disabled to prevent the slowdown.)
All this will be a lot easier when FreeBSD 8 is out: jails will then be able to be assigned multiple IP addresses, so no NAT is required.
I could just go and apply the jails patch to enable this feature, but I’m puzzled by this NAT problem, and would prefer to figure it out rather than going for a quick fix.

Given the length of this post, now I realize why I haven’t been wanting to talk to people about what I do, it’s just too friggin’ much to explain :D


 
____________________________________________________________________

It was suggested I should share the wordpress theme for this…

Here it is: linkerror-0.1.tar.bz2 (13 KiB).

It’s xhtml strict compliant and text-browser friendly (you can view it relatively comfortably, and post comments, using lynx, links, links2, et al…).

If you disable avatars and emoticons in your wordpress settings, it is 100% image-free. No javascript, no flash, no images… ah bliss. ;)

Insert your own ascii art in header.php to replace the LINKERROR banner on top.

Licensed under the GNU GPL (v3).

<disclaimer>

“Dammit, Jim, I’m a doctor, not a web designer.”
( s/doctor/software developer\/server admin/g )

</disclaimer>


 
____________________________________________________________________

I never felt the urge to write any of my thoughts, findings or experiences on the web in a “web log” (the b-word must die!).

Writing never was something I was good at, and I’m still not a word-wizard. My communication skills suck, and I’m quite “anti-social” when it comes to communicating in words with other humans.

Thus, naturally, I felt that writing should be left to those that are good at it.

So, what happened?

Well, nothing. I still feel the same way. But for a very long time now, some people have been bugging me to write down some things about my server administration and development adventures.

Of course I stubbornly refused, as I usually do. But the drop that made the bucket overflow was my wife repeatedly telling me that she has no clue what the heck I do all day long.

So I guess I will try to write something here from time to time, provided everyone is aware of the following disclaimer:

Anything you read here is by no means a reflection, relation or indication of my opinion and/or state of mind. Some things may just be completely wrong and false.

Any piece of information broadcast from the human mind, be it a thought, opinion or explanation, will almost always be outdated by the time the communication has completed. This is a dynamic universe, not a static one, so if I say xyz is such and so, by the time you read it, it is likely that I already think otherwise. And even if I didn’t, your interpretation of it will probably be nowhere near what it was in my ‘reality’. Any statement is of no value at all, and all this text is completely pointless. :)


 
____________________________________________________________________