Friday, 11 April 2008

Onion Routing

I was reading up on Onion Routing the other day and thought that I'd try it out to see a) if it was any good and b) how useful it might be. Before I go any further, perhaps I should explain that I'm not one of the Tin Foil Hat brigade and, frankly, don't care who knows where I visit or what I do when I'm on-line. Be that as it may, my curiosity was piqued, so I installed Tor and Privoxy on my laptop to give it a whirl.

A note for Firefox 3 Beta users: at the time of writing, I noticed that a few of the Tor plug-ins wouldn't install for version 3 but FoxyProxy worked just fine.

Having got it all installed and configured, I set about testing it. As far as I could tell, it did a bloody good job of making me anonymous, and my exit node seemed to change approximately every 10 minutes. The performance through some nodes was pretty poor (not that this surprised me, to be honest: some of the nodes are run by enthusiasts and are very far away).

This made using sites like Google interesting: one minute they thought I was German, the next minute Belgian, the owner of a compromised computer, Chinese and so on. Surfing from behind the Great Firewall of China was, erm, interesting, and that really got me thinking: how often would this happen? This was a big disadvantage to the whole experience and made me think that Onion Routing should only be used on a need-to-use basis. (If I'm stating the obvious, so be it.)

To summarise, I think that it should only be switched on if there's a desperate need to visit somewhere anonymously.

In order to get a feel for how annoying it could be, I wrote a script to monitor the GeoIP country of whatever exit nodes were being used over a period of time, so that I could determine how many of the exit nodes were very distant or censored and get a feel for how the surfing experience would be diminished. The results of my labours are here.

This is the code that I wrote to do the analysis and a pie chart of the outcomes is below (you have to have pie charts, you know).

Tor Exit Nodes by Country

Over the 18 hours, 78 unique IP addresses were used as Tor Exit Nodes though I'm not going to publish them here :-)
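The counting side of the analysis is straightforward; here is a minimal sketch, assuming the monitoring script logged one "IP COUNTRY" line per poll (the log data below is made up for illustration, not my real results):

```shell
# A made-up sample of the monitoring log: one "IP COUNTRY" line per poll.
cat > /tmp/exit_nodes.log <<'EOF'
10.0.0.1 DE
10.0.0.2 BE
10.0.0.1 DE
10.0.0.3 CN
10.0.0.4 GB
EOF

# How many unique exit-node IPs were seen over the run
cut -d' ' -f1 /tmp/exit_nodes.log | sort -u | wc -l

# Country breakdown: the raw numbers behind the pie chart
cut -d' ' -f2 /tmp/exit_nodes.log | sort | uniq -c | sort -rn
```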

As an ironic footnote, I should mention that I'm based in .uk. Only one, yes, one exit node was GB. Of course, none of this takes into account which territories I am passing through during a particular onion session.

Tuesday, 8 April 2008

Adventures with Ubuntu: tunneling into my IMAP server

Having set up Dovecot on my home server, the next step was to enable remote access to it so that I could connect from my laptop when I was out and about. As my laptop uses all manner of public networks of unproven security, I figured that a secure tunnel was the way to do it.

Install openssh-server on the server
sudo apt-get install openssh-server
Install openssh-client on the laptop

sudo apt-get install openssh-client
On the laptop, generate ssh keys
ssh-keygen
Enter the defaults.

This will create, among other things, a public key file (~/.ssh/id_rsa.pub by default). This data needs to be put in the file .ssh/authorized_keys on the server. For a more detailed explanation of SSH key generation, I found this to be a clear and concise reminder.

Install the laptop's public key on the server
cat id_rsa.pub >> ~/.ssh/authorized_keys
where id_rsa.pub is the copy of the public key file from the laptop.

That's got the ssh gubbins sorted out and you should now be able to ssh to your server from your laptop without being prompted for a password. The next thing I needed to arrange was a dynamic DNS entry for my home server. Having created a free account with DynDNS, I needed to install ddclient on the server.

Installing ddclient on the server

If you have a fixed IP address, or your router can talk to dyndns, ignore this section.
sudo apt-get install ddclient
Configuring ddclient

Edit the contents of /etc/ddclient.conf using sudo

The contents of my file look something like this:
use=web, web-skip='IP Address'
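For context, a typical DynDNS ddclient.conf of that era looked something like the following; the login, password and host name are placeholders, not my real values:

```
protocol=dyndns2
use=web, web=checkip.dyndns.com/, web-skip='IP Address'
server=members.dyndns.org
login=your-dyndns-user
password='your-password'
your.host.dyndns.org
```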

It is worth checking the file /etc/default/ddclient to see whether ddclient is run as a daemon.
# Configuration for ddclient scripts
# generated from debconf on Tue Oct 14 15:19:15 BST 2008
# /etc/default/ddclient

# Set to "true" if ddclient should be run every time a new
# ppp connection is established. This might be useful if you
# are using dial-on-demand.

# Set to "true" if ddclient should run in daemon mode

# Set the time interval between the updates of the
# dynamic DNS name in seconds.
# This option only takes effect if the ddclient runs in
# daemon mode.
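The settings those comments describe are plain shell-style variables; for daemon mode you would expect something along these lines (illustrative values, so check what your own install generated):

```
run_ipup="false"
run_daemon="true"
daemon_interval="300"
```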

Creating the tunnel
On the laptop, create a tunnel to your home server.
ssh -f -N -q -L 1143:localhost:143 \
my.dyndns.domain
What this command does is create a tunnel from port 1143 on localhost (the laptop) and forward it to the IMAP port (143) on the server (my.dyndns.domain). I selected local port 1143 because it is greater than 1023 (only root can bind to local ports below 1024) and exactly 1000 more than the standard IMAP port, making it easy to remember. I have this command in a script file and fire it up ad hoc whenever I am out and need to use it.
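As a sketch, that script file amounts to no more than a tiny wrapper like this (my.dyndns.domain is a placeholder again, and it is written to /tmp purely for illustration; a real one would live somewhere on your PATH):

```shell
# Write a minimal wrapper script for the tunnel command.
cat > /tmp/imap-tunnel.sh <<'EOF'
#!/bin/sh
# Forward local port 1143 to port 143 (IMAP) on the home server.
exec ssh -f -N -q -L 1143:localhost:143 my.dyndns.domain
EOF
chmod +x /tmp/imap-tunnel.sh
```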

To test the connection on the laptop, type:
telnet localhost 1143
You should get a response along these lines:
Connected to localhost.
Escape character is '^]'.
* OK Dovecot ready.

So, although you are using the laptop's local port 1143, you are, in fact, accessing the server's port 143 through the SSH tunnel.

Configuring your e-mail client

Having gone through all that, just configure your laptop mail client's IMAP server to be localhost port 1143, set the user id to be your local/server user id and off you go.

Tunneling at work

You may find that using tunnels contravenes your employer's AUP. If that is the case, don't do it, OK?

Bigging up BigDump

Yesterday, I found myself in the position of having to transfer a medium-sized (200-ish MB) MySQL database from one server to another. Ordinarily, I would SSH to the new server and import the database that way. In this instance, I only had web and FTP access to the new site.

The main problem is that phpMyAdmin limits the size of SQL file that you can upload in one go, and I was damned if I was going to split the data up into ~100 files to update the new database. Apart from being time-consuming, there was plenty of opportunity for error.
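To give a feel for the manual alternative I was dodging, splitting a dump boils down to something like this with split(1); the "dump" here is a tiny made-up stand-in for the real 200 MB file. Note that splitting on line boundaries only produces valid chunks because each INSERT sits on its own line, which is presumably also why the dump step below avoids extended inserts:

```shell
# A tiny stand-in for the real 200 MB dump: one INSERT per line.
printf 'INSERT INTO t VALUES (1);\nINSERT INTO t VALUES (2);\nINSERT INTO t VALUES (3);\n' > /tmp/localhost.sql

# One line per chunk here; a real dump would be split on a larger line count.
split -l 1 /tmp/localhost.sql /tmp/chunk_

ls /tmp/chunk_*
```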

So, before I set about writing my own solution, I thought I'd use my Google mojo to see if there was a ready-made answer. Half an hour later, I found BigDump, which fitted the bill perfectly.

My implementation was as follows:

Dump source database using phpMyAdmin to local file localhost.sql. Make sure that you don't use extended inserts and, if necessary, don't use database creation.

Having downloaded it, edit bigdump.php. I amended it thus:

$db_server = 'localhost';
$db_name = 'database_name';
$db_username = 'database_user';
$db_password = 'password';

// Other settings (optional)

$filename = '/full/path/to/dump/localhost.sql'; // Specify the dump filename to suppress the file selection dialog

From the web root on the target server, create a directory dump and upload the files localhost.sql and bigdump.php via FTP.

Point your browser at my.domain/dump/bigdump.php and click on the Start Import link. You should see something like this:

Big Dump Screen Shot

When finished, delete your sql file and bigdump.php from the web server.

That's it. It did the job simply and saved me a load of effort. Yay!