LL      IIIII NN   NN KK  KK EEEEEEE RRRRRR  RRRRRR   OOOOO  RRRRRR 
LL       III  NNN  NN KK KK  EE      RR   RR RR   RR OO   OO RR   RR
LL       III  NN N NN KKKK   EEEEE   RRRRRR  RRRRRR  OO   OO RRRRRR 
LL       III  NN  NNN KK KK  EE      RR  RR  RR  RR  OO   OO RR  RR 
LLLLLLL IIIII NN   NN KK  KK EEEEEEE RR   RR RR   RR  OOOOO  RR   RR
                                                           ramblings
____________________________________________________________________

I stumbled upon an article that talks about how over-zealous syntax highlighting can be counter-productive. It argues for coloring only comments, plus distinguishing the comparison operator (==) from assignment (=).

The concept of such minimalistic coloring seems intriguing, so I wrote a little vim syntax highlighting theme that only colors comments.

It uses 256 colors, so you need a terminal set up to support 256 colors; however, you can easily edit the colors in the .vim file.
If you do have a terminal that supports 256 colors and you want to change the colors, you can use this chart to convert the numbers to colors.

To use the color scheme, just save the file below under ~/.vim/colors and type :color justcomments once you have started vim – or put colorscheme justcomments in your ~/.vimrc.

Download the vim colorscheme here
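
For the curious: a colorscheme like this takes surprisingly little vimscript. The sketch below captures the idea – it is not the actual justcomments file, and the color numbers are just examples:

" A minimal 'comments only' colorscheme sketch -- not the actual file.
set background=dark
highlight clear
if exists("syntax_on")
  syntax reset
endif
let g:colors_name = "justcomments"

" Strip color from the main syntax groups...
for s:group in ["Constant", "Identifier", "Statement", "PreProc", "Type", "Special"]
  execute "highlight" s:group "ctermfg=NONE ctermbg=NONE cterm=NONE"
endfor

" ...then color only comments (245 is a mid grey in the 256-color palette).
highlight Comment ctermfg=245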


 
____________________________________________________________________

Being a sysadmin you end up running ssh to multiple servers at the same time, all the time. Being a paranoid sysadmin, you also have a different (long) password for each of these servers.

Unless you want to spend more time entering passwords than doing actual work, you probably have some kind of master-password system setup.

Most people will use an ssh key uploaded to the servers in order to accomplish this (hopefully one that is password protected).
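
(That usually amounts to something like the following, shown here just for contrast with the approach described below:)

# Generate a password-protected key pair, then push the public key out:
ssh-keygen -t rsa
ssh-copy-id user@server.example.com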

However, there are some situations where this is not preferred: for example, when an account is shared by multiple people, when you simply cannot leave ssh public keys lingering around, or when you don’t want to have to re-upload the key every time the home directory gets wiped…

In those cases, it sure would be nice to have a password manager, protected with a master password, remember the passwords you enter for ssh.

This is possible with kdewallet and a small expect script wrapper around ssh.

I don’t personally use kde, but I do use some of the utilities it ships with from time to time, kdewallet being one of them. Kdewallet uses dbus for ipc, and the qdbus utility lets you interact with dbus applications from the command line (and from shell scripts), so that’s what this script makes use of. The KDE Wallet password management system consists of a system daemon (kwalletd) and a front-end gui application to view the password database, create folders, etc., called kwalletmanager. You don’t have to have kwalletmanager running for this to work; the script will automatically start kwalletd if it’s not running.
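
As a quick illustration of qdbus (these two commands are not part of the script, just a way to poke around):

# List the methods kwalletd exposes on the session bus:
qdbus org.kde.kwalletd /modules/kwalletd

# ...and call one of them directly:
qdbus org.kde.kwalletd /modules/kwalletd org.kde.KWallet.isEnabled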

You can use kwalletmanager to create a separate folder to store your ssh passwords (e.g. a folder called “ssh”) and specify that folder at the top of the script, where some other constants can be adjusted as well, such as the location of the needed binaries.

If a password is not found in kwallet, the script will prompt for it and store it. (If you entered the wrong password, you’ll have to remove it using kwalletmanager.)

The script is implemented using ‘expect’ (which uses Tcl syntax); it can be obtained here: http://expect.nist.gov/

#!/usr/bin/expect -f

# Entry point -----------------------------------------------------------------

# Constants
set kwalletd "/usr/bin/kwalletd"
set qdbus "/usr/bin/qdbus"
set kdialog "/usr/bin/kdialog"
set appid "ssh"
set passwdfolder "ssh"

# Get commandline args.

set user [lindex $argv 0]
set host [lindex $argv 1]
set port [lindex $argv 2]

# Check arg sanity
if { $user == "" || $host == "" } {
  puts "Usage: $argv0 user host \[port\]"
  exit 1
}

# Use a sane default port if not specified by the user.
if { $port == "" } {
  set port "22"
}

# Run the kde wallet daemon if it's not already running.
# (If the service isn't registered on dbus yet, the qdbus call fails,
# so wrap it in catch and treat an error as "not running".)
if { [catch {
  exec $qdbus "org.kde.kwalletd" "/modules/kwalletd" "org.kde.KWallet.isEnabled"
} kwalletrunning] } {
  set kwalletrunning "false"
}
if { $kwalletrunning == "false" } {
  puts "kwalletd is not running, starting it...\n"
  # The & must be a separate argument to exec to background the daemon.
  exec $kwalletd &
  sleep 2
} else {
  puts "Found kwalletd running.\n"
}

# Get wallet id 
set walletid [
  exec "$qdbus" "org.kde.kwalletd" "/modules/kwalletd" "org.kde.KWallet.open" "kdewallet" "0" "$appid"
]

# Get password from kde wallet.
set passw [
  exec "$qdbus" "org.kde.kwalletd" "/modules/kwalletd" "org.kde.KWallet.readPassword" "$walletid" "$passwdfolder" "$user@$host" "$appid"
]

# If no password was found, ask for one.
if { $passw == "" } {
  set passw [
    exec "$kdialog" "--title" "ssh" "--password" "Please enter the ssh password for $user@$host"
  ]
  if { $passw == "" } {
    puts "You need to enter a password.\n"
    exit 1
  }
  # Now save the newly entered password into kde wallet
  exec "$qdbus" "org.kde.kwalletd" "/modules/kwalletd" "org.kde.KWallet.writePassword" "$walletid" "$passwdfolder" "$user@$host" "$passw" "$appid"
}

# Run ssh.
if { [catch {
  spawn ssh -p $port $user@$host
} reason] } {
  puts "Failed to spawn ssh: $reason\n"
  exit 1
}

# Wait for password prompt and send the password.
# Add key to known hosts if asked.
# Resume after successful login.
expect {
  -re ".*assword:" {
    exp_send "$passw\r"
    exp_continue
  }
  -re "\\(yes/no\\)\\?" {
    # Accept the host key when asked.
    exp_send "yes\r"
    exp_continue
  }
  -re "Warning: Permanently added .* known hosts\\." {
    exp_continue
  }
  -re ".*Last login"
}

# Send a blank line
send -- "\r"

# Now finally let the user interact with ssh.
interact
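
Save the wrapper somewhere in your PATH and make it executable. Assuming you named it sshwallet (the name is just an example), usage then looks like this:

chmod +x /usr/local/bin/sshwallet
sshwallet root server1.example.com
sshwallet backup server2.example.com 2222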

 
____________________________________________________________________

When you’re running any type of shared hosting server, with hundreds of clients that have the ability to run php scripts, send emails, etc., how do you make sure you’re not setting yourself up to be one big spam haven? (The true answer is: you don’t, since shared hosting is one big mess; you’re screwed.) A compromised script of a client could be sending out spam mail without using your MTA, so it would not show up in your logs or mail queue.

For this reason I wrote a little perl script which sniffs all outgoing SMTP traffic and dumps it to a file. You could then set up a cron job which scans the file for known keywords used by spammers (viagra/v1agra/vi4gr4/etc.) and alerts you when something is found, or you could make it extract the emails and run them through spamassassin.
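
Such a scan job could be as trivial as the following sketch (the log location and keyword list are made up for the example):

#!/bin/sh
# Scan the sniffer's dump for common spam keywords and mail any hits.
LOG=/var/log/smtp-sniff.log
HITS=$(grep -Ei 'viagra|v1agra|vi4gr4' "$LOG")
if [ -n "$HITS" ]; then
  echo "$HITS" | mail -s "Possible outgoing spam detected" admin@example.com
fi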

This way, even if the outgoing traffic is sent by some script using sockets to connect to port 25 of some external mail server, bypassing your mta, you will still know about it.

Just change the settings at the top of the script to reflect the ip address(es) you’re using and the network interface open to the internet.

Download/View it here


 
____________________________________________________________________

For a while now, trojans, and the botnets they work for, have employed several techniques for stealing FTP credentials: sniffing unencrypted FTP traffic, grabbing credentials from the saved password files of popular FTP clients, or brute-forcing weak passwords on the server. As a server administrator, a compromised FTP account is something you want to detect as early as humanly possible.

I have witnessed quite a few of these compromised accounts, and every time there seem to be logins from many different countries into the account, presumably by botnet drones dropping all sorts of malware or who knows what else.

This actually makes it fairly easy to write a little script to detect whether an account has been compromised, simply by looking at how many different countries it has been accessed from.

Granted, some people may travel, but most people will not travel to more than 10 countries in a short amount of time.

Thus I have written a perl script, usable as a nagios sensor, which grabs the `last` output and does a geoip lookup (using the geoiplookup utility) for each IP address. It then counts the number of distinct countries and, depending on the warning/critical flags, returns the appropriate exit value.

Example output:

# ./check_login

User 'weakling' has logins from 33 countries: Egypt Bolivia Taiwan
Australia Sweden Switzerland Pakistan Dominican Canada China Peru
Indonesia Vietnam Honduras Portugal Trinidad Grenada Turkey Serbia
Korea, Mexico United Colombia Brazil Bahrain Japan France Mali South
Poland Slovenia India - CRITICAL
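
The heart of that check can be approximated with a quick shell pipeline (a sketch of the idea, not the actual script; 'weakling' is the user from the example above):

# Count the distinct countries a user has logged in from.
last weakling | awk '{print $3}' | grep -E '^[0-9.]+$' \
  | sort -u \
  | xargs -n1 geoiplookup \
  | sort -u | wc -l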

Grab it here.


 
____________________________________________________________________

This post is kind of a sequel to that post…

One big problem with suexec and suphp on Apache, imho, is that files run as their owner, so an accidental chown might break things. A more logical approach is to assign a user/group to each VirtualHost, which is exactly what the ITK MPM does.

On top of that, it has some additional handy features, such as limiting the maximum number of concurrent requests per VirtualHost, and setting a niceness value per VirtualHost to control how much cpu time it gets.

Now the dc member server finally has users properly isolated from one another.

Setting up mpm-itk was a lot easier than suphp, suexec, or peruser-mpm (I tried peruser-mpm first, and apache just segfaulted :S).
With only a few lines of additional configuration, I was able to automate the migration of our 100+ accounts with a quick and dirty perl script.

mpm-itk is included in the default apache port on FreeBSD; there is no separate port for it (like there is for peruser). To use it, compile apache like this:


cd /usr/ports/www/apache22
make WITH_MPM=itk
make install

And that’s it. Apache will now use the itk mpm, and you can add the
AssignUserID line to your VirtualHost. Anything running on it will run as the specified user/group, whether it’s plain html, php, or cgi. That’s another advantage, since with suexec you end up configuring each web-scripting language individually, and then risk still not covering everything.
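
For illustration, a VirtualHost entry then ends up looking something like this (the user/group and paths are placeholders; MaxClientsVHost and NiceValue are the optional per-vhost knobs mentioned earlier):

<VirtualHost *:80>
    ServerName www.example.com
    DocumentRoot /home/client1/www

    # Everything served from this vhost (html, php, cgi) runs as this user/group.
    AssignUserID client1 client1

    # Optional mpm-itk extras:
    MaxClientsVHost 50
    NiceValue 10
</VirtualHost>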


 
____________________________________________________________________
MTR

Have you ever experienced packet loss or bad connectivity between yourself and some other server, and wondered where exactly the problem is: your network, the server you are trying to reach, or just the internet ‘acting up’?

Usually, the way you determine this is by running a traceroute and checking at which hop the latency or packet loss issues begin.

If the high latency is only noticeable at the destination, then the server you are trying to reach is most likely at fault.

If the high latency starts at your first hop, it is probably your own network that is to blame.

Anything in between is typically a problem in the route your data takes to its destination, and thus usually not under your control.

The only problem is that traceroute shows latency, not packet loss.
The solution, then, is to ping each hop in the traceroute and see what the packet loss is.

It so happens that there is a really neat, forgotten (by the masses, anyway) tool called MTR, which combines ping and traceroute to do exactly that. It has been around since the dawn of time, and is thus in the package management repositories of most GNU/Linux distributions; it is also present in the FreeBSD ports collection. (Windows users will have to compile it in cygwin.)
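
Installing it is typically a one-liner; for example (package/port names as I remember them):

# Debian/Ubuntu:
apt-get install mtr

# FreeBSD ports:
cd /usr/ports/net/mtr && make install clean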

MTR also has a really neat curses gui which lets you watch the packet loss and lots of other statistics in real-time, making it an awesome tool for debugging networking issues.

MTR curses UI screenshot.

In the example above, it seems the hosting company of the destination server is to blame.

On top of the curses console UI, it also has a GUI for X, for you rodent-addicts.

If you want to use it in a script, or without the curses UI, you can put it in report mode, and specify a number of ping cycles, for plain stdout output.
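
For example:

# 10 probe cycles per hop, numeric hosts only, plain report on stdout:
mtr --report --report-cycles 10 --no-dns example.org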

From the mtr man page:

mtr combines the functionality of the traceroute and ping programs in a
single network diagnostic tool.

As mtr starts, it investigates the network connection between the host
mtr runs on and HOSTNAME by sending packets with purposely low TTLs. It
continues to send packets with low TTL, noting the response time of the
intervening routers. This allows mtr to print the response percentage
and response times of the internet route to HOSTNAME. A sudden increase
in packet loss or response time is often an indication of a bad (or
simply overloaded) link.

MTR screenshot

http://www.bitwizard.nl/mtr/


 
____________________________________________________________________

I figured I would share with you a setup I am using on all my BSD servers to monitor changes to the filesystem.

The idea is to be notified by email at a certain interval (e.g. once a day) with a list of all files that have changed since the last time the check ran.

This allows you to be notified when files change without your knowledge, for example in the event of a cracker breaking into the server, or when you accidentally recursively chowned / and managed to interrupt the command; mtree lets you see how many files were affected, and fix them.
mtree also reports HOW the files were changed: in the chown scenario, it would mention the expected uid/gid and what they changed to, which would allow for an automated recovery of such a disaster.

In addition to the e-mail notifications it will also keep a log file (by default in /var/log/mtree.log)

The utility we’ll use for this on FreeBSD is mtree (on GNU/Linux you’d have to use tripwire or auditd).
I wrote a perl script which uses mtree to accomplish what I described above: download it.
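
Under the hood, this boils down to mtree’s create and compare modes, roughly like this (a sketch of the idea; the exact flags the script passes may differ):

# Create a checksum spec of /, honoring the exclude list:
mtree -c -K sha256digest -p / -X /usr/mtree/fs.exclude > /usr/mtree/fs.mtree

# Later, compare the live filesystem against the spec; changes get printed:
mtree -f /usr/mtree/fs.mtree -p / -X /usr/mtree/fs.exclude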

So basically, to set it up, you can do the following:

mkdir /usr/mtree
cd /usr/mtree
touch fs.mtree fs.exclude
wget http://linkerror.com/programs/automtree
chmod +x automtree

Now, if you run ./automtree -h you’ll see a list of valid options with some documentation:

  Usage: ./automtree [OPTION] ...
  Show or E-mail out a list of changes to the file system.

  mtree operation options:

    -u,  --update        Updates the file checksum database after 
                         showing/mailing changes.
    -uo, --update-only   Only update the file checksum database.
    -p,  --path          Top level folder to monitor (default: /)
    -q,  --quiet         Do not output scan results to stdout or any
                         other output.

  Path configuration options:

    -l,  --log           Logfile location 
                         (default: /var/log/mtree.log)
         --mtree         Set the location of the mtree executable. 
                         (default is /usr/sbin/mtree)
         --checksum-file Set the location of the file containing the 
                         mtree file checksums. 
                         (default: /usr/mtree/fs.mtree)
         --exclude-file  Set the location of the file containing the 
                         list of files and folders to exclude from the 
                         mtree scan. (default is /usr/mtree/fs.exclude)

  E-mail options:

    -e,  --email         Adds specified e-mail address as destination.
         --sendmail      Set the location of the sendmail executable. 
                         (default: /usr/sbin/sendmail)
         --reply-to      Set the e-mail reply-to address.
         --subject       Sets the e-mail subject.

  Misc options:

    -h,  --help          Display this help text.
 

  Example usage:

    ./automtree -uo
    ./automtree -u -q -e foo@example.com -e bar@example.com
    ./automtree /var/www --mtree /usr/local/sbin/mtree

As you can see, by default the script will just index the entire filesystem, as the default for the -p option is /. In order to do this you’ll want to ignore some folders, so edit the fs.exclude file and stick at least this into it:


./dev
./proc
./var
./tmp
./usr/mtree
./usr/share/man
./usr/share/openssl/man
./usr/local/man
./usr/local/lib/perl5/5.8.8/man
./usr/local/lib/perl5/5.8.8/perl/man

Note that you have to prefix folders with ./
So now, in order to automatically scan and receive notifications, the command which will go into crontab is:


./automtree -u -q -e foo@example.com

(It is possible to add multiple -e options for multiple e-mail destinations.)

The command above suppresses output to stdout (-q), emails filesystem changes to foo@example.com (-e foo@example.com), and automatically updates the checksum file with the newly discovered changes (-u).

An example crontab line, to check every 3 hours (type crontab -e to edit your crontab):


0 */3 * * * /usr/mtree/automtree -u -q -e youremail@example.com > /dev/null 2>&1

The script won’t send an e-mail if there are no changes to report.


 
____________________________________________________________________

It was suggested I should share the wordpress theme for this…

Here it is: linkerror-0.1.tar.bz2 (13 KiB).

It’s xhtml strict compliant and text-browser friendly (you can view it relatively comfortably, and post comments, using lynx, links, links2, et al.).

If you disable avatars and emoticons in your wordpress settings, it is 100% image-free. No javascript, no flash, no images… ah bliss. ;)

Insert your own ascii art in header.php to replace the LINKERROR banner on top.

Licensed under the GNU GPL (v3).

<disclaimer>

“Dammit, Jim, I’m a doctor, not a web designer.”
( s/doctor/software developer\/server admin/g )

</disclaimer>


 
____________________________________________________________________