Thursday, 11 December 2008

Install AWStats

My home web server is a very quiet corner of the Internet and the last thing it needs is web stats. It goes without saying that I decided to install some.

Getting

sudo apt-get install awstats
Configuration

Using your favourite editor, create a file /etc/awstats/awstats.local.conf.

HostAliases="localhost 127.0.0.1"
LogFile="/var/log/apache2/access.log"
LogFormat=1
DNSLookup=1
DirData="/var/cache/awstats/"
DirCgi="/cgi-bin"
DirIcons="/icon"
SiteDomain="prawn.mine.nu"
AllowToUpdateStatsFromBrowser=0
AllowFullYearView=3
SkipHosts="REGEX[^127\.0] REGEX[^192\.168\.]"

I have configured mine to ignore any traffic from my own subnet using the SkipHosts parameter and a couple of simple regexes.
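If you want to check what those patterns actually match, you can run them through grep outside AWStats — grep's extended regexes are close enough for a quick sanity check, and the addresses below are invented examples:

```shell
# the first two addresses match the skip patterns; only the public one survives
printf '127.0.0.1\n192.168.1.10\n66.249.66.1\n' \
    | grep -Ev '^127\.0|^192\.168\.' > kept.txt
cat kept.txt
```

Anything printed by that pipeline would show up in your stats; anything filtered out is what SkipHosts will ignore.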

Create a directory /var/cache/awstats:

sudo mkdir /var/cache/awstats
sudo chmod 700 /var/cache/awstats
sudo chown www-data:www-data /var/cache/awstats

Next step is copying the awstats icons to the relevant Apache directory:

sudo cp -r /usr/share/awstats/icon /var/www/icon
Testing the stats update

sudo /usr/lib/cgi-bin/awstats.pl -config=local -update

The -config= parameter is the middle bit of your config filename. In this instance, -config=local instructs the program to read the file /etc/awstats/awstats.local.conf.

Now you should be able to view your stats. Point your favourite browser at http://your.host.name/cgi-bin/awstats.pl?config=local and enjoy.

Scheduling updates


Set up crontab to run an update as often as you think sensible.

sudo crontab -u root -e

# mine looks something like this:
# update at 3 am
0 3 * * * /usr/lib/cgi-bin/awstats.pl -config=local \
-update

To admire the full glory of awstats and the lack of traffic on my home web server, here it is...

Further reading

You could do worse than the awstats SourceForge page, and here for the configuration directives.

Wednesday, 10 December 2008

Deer santah

Oh Hai, santah! Can I has manbag?



kthxbai.

Code::Blocks

Ooh-err, missus.

I have been tinkering around with Code::Blocks which is a cross platform development tool. Embedded in its genes is the wxWidgets set of classes. I have only had a 24 hour tinker with it and, to be frank, have not coded in C++ for over four years - but it looks promising. My only gripe is the faffing about you have to do when converting from C++ string classes to wxString classes and vice versa.

It is early days yet and a more fully formed opinion will out in due course.

On the plus side it will work as an IDE with other idioms than the wx... family.

Lolcat 404

Oh Noes!

I have, despite my age and sensibilities, developed a rather weird fascination for the whole Lolcat meme. As a silent homage to all things Lolcat, I have created a 404 page in honour of the phenomenon. Point your browser here for the full asinine horror and accept my apologies pro tempore. I'll get over it in the fullness of time.

If you want to see a demonstration of how the Internet could suck up one's time on a biblical scale, you could do worse than peruse the LOLCAT Bible project for a demonstration. I have not the words.

Update: It seems that teh Ceiling Cat has a Twitter account. Follow the tweets here.

AWStats

My home server is a very quiet corner of the Internet which is handy as it gave me the opportunity to install AWStats without upsetting anyone. If you have a prurient disposition, the stats can be viewed here. My first impressions are pretty favourable. I have configured it to ignore any visits via my own subnet so you will see how quiet it really is. Be that as it may, I have rather optimistically set it up to update every fifteen minutes so come back real soon, you hear? /tumbleweed

Install writeup to follow. Done.

Friday, 28 November 2008

Can I has IP?

A quicky Perl utility that interrogates the DNS-O-Matic IP page, which returns your public IP address.

#!/usr/bin/perl

# cihip (Can I Has IP?)

use strict;
use warnings;
use LWP::Simple;

my ( $address, $error ) = get_address();

die "cihip failed: $error\n" if $error;

print "$address\n";

sub get_address {

    # LWP::Simple::get returns undef on failure
    my $address = get( "http://myip.dnsomatic.com/" );

    return ( $address, undef ) if defined $address;

    return ( 0, "IP address query failed" );

}

Wednesday, 26 November 2008

DNS-O-Matic Updater


News: update-dnsomatic stable version 0.1 released 29 November 2008 at 13:00 GMT. I have moved the project to Google Code as my home server is only up 90% of the time, as a rule.

As I am a user of both opendns and dyndns I realized a while ago that it was a bit of a chore to keep them both up-to-date with my ever fluctuating home IP address. On discovering that DNS-O-Matic will update them at one fell swoop, I was hooked.

Time passed and as I was writing up installing ddclient, I was inspired to see if I could knock up a client of my own which would be designed specifically with DNS-O-Matic in mind.

Well, 24 hours and one abuse report from dyndns (sorry guys) later and it is done.

Specs

It's written in Perl and doesn't use any fancy modules so should work out of the box once installed.

It is written very much with Unix/Linux in mind but is simple enough to amend for other operating systems.

It only performs update notifications if your IP address has changed since it was last run.
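That change-only behaviour can be sketched in a few lines of shell. This is just an illustration of the principle, not the actual update-dnsomatic code; the cache path is my arbitrary choice, and a failed lookup is treated as "no change":

```shell
# compare the current public address against a cached copy and only
# "notify" when it differs
CACHE=/tmp/ip.cache
CURRENT=$(wget -qO- http://myip.dnsomatic.com/ 2>/dev/null)
LAST=$(cat "$CACHE" 2>/dev/null)

if [ -n "$CURRENT" ] && [ "$CURRENT" != "$LAST" ]; then
    echo "$CURRENT" > "$CACHE"
    STATUS="changed - this is where the update notification would go"
else
    STATUS="no change (or lookup failed) - nothing to do"
fi
echo "IP check: $STATUS"
```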

Where?


The whole package complete with README, a sample configuration file and install.sh script can be found here.

How?


After downloading, extract the file update-dnsomatic-0.1.2.tar.gz and then edit the file config with, at a minimum, your DNS-O-Matic user id and password.

tar xvzf update-dnsomatic-0.1.2.tar.gz
cd update-dnsomatic-0.1.2/

# edit ./config with your editor of choice
# A sample config file:
user = my_user_id
pass = my_password
#

# as root or sudo:
./install.sh

You may wish to set the myhost configuration item to your domain name if you only want to update that domain. If you don't set myhost, all your DNS-O-Matic services will be updated.
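So a minimal config restricted to a single host might look like this (the hostname is a placeholder):

```
user = my_user_id
pass = my_password
# only notify DNS-O-Matic about this host:
myhost = my.dyndns.domain
```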

A picture of the DNS-O-Matic status information. One update done by me and two by them.

Tuesday, 25 November 2008

OpenDNS and Ubuntu

I have been using OpenDNS for a few months now and it's about time that I wrote it up.

If you want to be able to use all the facilities that OpenDNS offer, then ideally you need to setup a DynDNS account and an OpenDNS account.

First things first, I have configured my router to use the OpenDNS nameservers 208.67.222.222 and 208.67.220.220.

Configuring DynDNS

You may wish to use my own DNS-O-Matic updater, which I have written about here.

Having created a free account with DynDNS, I needed to install ddclient on my home server.

Installing ddclient on the server

If your router can talk to dyndns, ignore this section.
sudo apt-get install ddclient
Configuring ddclient

Edit the contents of /etc/ddclient.conf using sudo

The contents of my file looks something like this:
daemon=600
cache=/tmp/ddclient.cache
pid=/var/run/ddclient.pid
use=web, web=checkip.dyndns.com/, web-skip='IP Address'
login=my_dyndns_account_name
password=my_dyndns_password
protocol=dyndns2
server=members.dyndns.org
wildcard=YES
my.dyndns.domain

It is worth checking file /etc/default/ddclient to see if ddclient is run as a daemon.
# Configuration for ddclient scripts
# generated from debconf on Tue Oct 14 15:19:15 BST 2008
#
# /etc/default/ddclient

# Set to "true" if ddclient should be run every time a
# new ppp connection is
# established. This might be useful, if you are using
# dial-on-demand
run_ipup="false"

# Set to "true" if ddclient should run in daemon mode
run_daemon="true"

# Set the time interval between the updates of the
# dynamic DNS name in seconds.
# This option only takes effect if the ddclient runs in
# daemon mode.
daemon_interval="300"


Configuring OpenDNS


Having created an OpenDNS account, visit https://www.opendns.com/dashboard/networks/ and click the ADD THIS NETWORK button.

Updating OpenDNS

Now that your two accounts are up and running, you need to periodically notify OpenDNS of your current IP address and your account name. This can be done using a script that I took from the OpenDNS forum here. I have a tidy and slightly different script on my home server here. Simply follow the instructions at the top of the file and away you go.

To check that OpenDNS is working, visit this link for a report: http://www.opendns.com/welcome/

Friday, 31 October 2008

Setting up Leafnode on Ubuntu

One of the things that I frequently use tunneling for is Usenet access - particularly as quite a few ISPs block port 119 these days and I can't get on with the Google Groups interface.

Leafnode is a simple and lightweight NNTP server which adequately handles my needs.

Installing

From the command line:
sudo apt-get install leafnode
and enter details as necessary.

Configuration

This will very much depend on which upstream NNTP server you use. In my case, I use the cheesy server, motzarella.org, or eternal-september.org as it is more formally known. I have an account with them and it is free, which works for me.

A note about FQDN and Leafnode.

Leafnode insists that you have an FQDN assigned on your machine which Ubuntu does not necessarily set during installation. Solution 1 is to edit /etc/hosts and assign the FQDN there.

My hosts file looks like this:

127.0.0.1 localhost
127.0.1.1 my.fqdn localname

To check what your FQDN is, type the following from the command line:

hostname -f
Naturally, you will have to use your own FQDN.

Alternatively, you can configure the FQDN using the hostname= parameter in the leafnode config file. Personally, I prefer the former method rather than the cheat.

Breaking news: It appears that the cheesy server allows registered users to also reserve an FQDN which could be useful for some.

Configuring Leafnode

Below are the details I have configured to use the cheesy server.

Edit the file /etc/news/leafnode/config
expire = 60
server = news.eternal-september.org
username = my_username
password = my_password
# optional
hostname = my.fqdn
# ignores some x-posted chaff from Usenet trolls :-)
maxcrosspost = 4


Fetching news

fetchnews likes to be run as user news and I have set my cron job to run every 15 minutes. I have also set the texpire task to run at noon every day to remove expired articles.
sudo crontab -u news -e 
# Sets texpire to clean out news at noon:
0 12 * * * /usr/sbin/texpire 1>/dev/null
# Sets fetchnews to run every 15 minutes:
*/15 * * * * /usr/sbin/fetchnews -vv >/dev/null 2>&1


For setting up the tunnel, read my IMAP tunnel article and use port 119 instead :-)
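For reference, the NNTP version of that tunnel command would look something like this — local port 1119 is my arbitrary pick, and the host and user id are placeholders:

```shell
# forward local port 1119 to the server's NNTP port (119) over SSH,
# then point your newsreader at localhost port 1119
ssh -f -N -q -L 1119:localhost:119 my_server_user_id@my.dyndns.domain \
    && STATUS="tunnel up" || STATUS="tunnel failed"
echo "$STATUS"
```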

Further reading on automating tunnels can be found on this blog here.

Wednesday, 29 October 2008

favicon

I have been meaning to sort out a favicon.ico for my home web server and quite fancied a Tux to do the job. A quick search using my favourite search engine revealed this, which I have altered to have a pink background to match the, erm, house style.

One less thing to worry about for when the inevitable re-build comes around.

Friday, 11 April 2008

Onion Routing

I was reading up on Onion Routing the other day and thought that I'd try it out to see if a) it was any good and b) how useful it might be. Before I go any further, perhaps I should explain that I'm not one of the Tin Foil Hat brigade and, frankly, don't care who knows where I visit or what I do when I'm on-line. Be that as it may, my curiosity was piqued and I installed Tor and Privoxy on my laptop to give it a whirl.

A note for Firefox 3 Beta users: at the time of writing, I noticed that a few of the Tor plug-ins wouldn't install for version 3 but FoxyProxy worked just fine.

Having got it all installed and configured, I set about testing it all. As far as I could tell, it did a bloody good job at making me anonymous and my exit node seemed to change approximately every 10 minutes. The performance through some nodes was pretty poor (I was not surprised by this and was expecting it, to be honest - some of the nodes are run by enthusiasts and are very far away).

This made using sites like Google interesting: one minute they thought I was German, the next minute Belgian, then the owner of a compromised computer, then Chinese, and so on. Surfing from behind the Great Firewall of China was, erm, interesting, and that got me really thinking. How often would this happen? This was a big disadvantage to the whole experience and made me think that Onion Routing should only be used on a need-to-use basis. (If I'm stating the obvious, so be it.)

To summarise, I think that it should only be switched on if there's a desperate need to visit somewhere anonymously.

In order to get a feel for how annoying it could be, I generated a script to monitor the GEO-IP of whatever exit nodes were being used over a period of time so that I could determine how many of the exit nodes were very distant or censored and get a feel for how the surfing experience would be diminished. The results of my labours are here.

This is the code that I wrote to do the analysis and a pie chart of the outcomes is below (you have to have pie charts, you know).


Tor Exit Nodes by Country

Over the 18 hours, 78 unique IP addresses were used as Tor Exit Nodes though I'm not going to publish them here :-)
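The script itself is the one linked above, but the tallying stage of that sort of analysis boils down to a classic sort | uniq pipeline. A minimal sketch, assuming a log of timestamp and country-code pairs — the format and sample data here are invented for illustration:

```shell
# fabricate a tiny sample log in the assumed "timestamp country" format
cat > exit-log.txt <<'EOF'
2008-04-11T10:00 DE
2008-04-11T10:10 BE
2008-04-11T10:20 DE
2008-04-11T10:30 CN
EOF

# tally exit-node sightings per country, busiest first
awk '{print $2}' exit-log.txt | sort | uniq -c | sort -rn
```

Feed the real 18-hour log through the same pipeline and you have the numbers behind the pie chart.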

As an ironic footnote, I should mention that I'm based in .uk. Only one, yes, one exit node was GB. Of course, none of this takes account of which territories I am passing through during a particular onion session.

Tuesday, 8 April 2008

Adventures with Ubuntu: tunneling into my IMAP server

Having set up dovecot on my home server, the next step was to enable remote access to it so that I could connect my laptop when I was out and about. As my laptop uses all manner of public networks of unproven security, I figured that using a secure tunnel was the way to do it.

Install openssh-server on the server
sudo apt-get install openssh-server
Install openssh-client on the laptop

sudo apt-get install openssh-client
On the laptop, generate ssh keys
ssh-keygen
Enter the defaults.

This will create, among other things, a file ~/.ssh/id_rsa.pub. This data needs to be put in a file .ssh/authorized_keys on the server. For a more detailed explanation of SSH key generation, I found this to be a clear and concise reminder.

Install the laptop's public key on the server
cat id_rsa.pub >> ~/.ssh/authorized_keys
where id_rsa.pub is the copy of the file from the laptop.
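If you're worried about pasting the same key in twice, the append can be guarded so that it is idempotent. A sketch, using a stand-in key and a demo directory rather than the real ~/.ssh:

```shell
# stand-in for the id_rsa.pub copied over from the laptop
echo 'ssh-rsa AAAAB3...example... user@laptop' > id_rsa.pub

mkdir -p ssh-demo && touch ssh-demo/authorized_keys

# append the key only if it is not already present (grep -qxF = quiet,
# exact whole-line, fixed-string match against each line of the key file)
grep -qxFf id_rsa.pub ssh-demo/authorized_keys \
    || cat id_rsa.pub >> ssh-demo/authorized_keys

# a second run is a no-op thanks to the guard
grep -qxFf id_rsa.pub ssh-demo/authorized_keys \
    || cat id_rsa.pub >> ssh-demo/authorized_keys

wc -l < ssh-demo/authorized_keys    # still one line
```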

That's got the ssh gubbins sorted out and you should now be able to ssh to your server from your laptop without being prompted for a password. The next thing I needed to arrange was a dynamic DNS entry for my home server. Having created a free account with DynDNS, I needed to install ddclient on the server.

Installing ddclient on the server

If you have a fixed IP address, or your router can talk to dyndns, ignore this section.
sudo apt-get install ddclient
Configuring ddclient

Edit the contents of /etc/ddclient.conf using sudo

The contents of my file looks something like this:
daemon=600
cache=/tmp/ddclient.cache
pid=/var/run/ddclient.pid
use=web, web=checkip.dyndns.com/, web-skip='IP Address'
login=my_dyndns_account_name
password=my_dyndns_password
protocol=dyndns2
server=members.dyndns.org
wildcard=YES
my.dyndns.domain

It is worth checking file /etc/default/ddclient to see if ddclient is run as a daemon.
# Configuration for ddclient scripts
# generated from debconf on Tue Oct 14 15:19:15 BST 2008
#
# /etc/default/ddclient

# Set to "true" if ddclient should be run every time a
# new ppp connection is
# established. This might be useful, if you are using
# dial-on-demand
run_ipup="false"

# Set to "true" if ddclient should run in daemon mode
run_daemon="true"

# Set the time interval between the updates of the
# dynamic DNS name in seconds.
# This option only takes effect if the ddclient runs in
# daemon mode.
daemon_interval="300"

Creating the tunnel
On the laptop, create a tunnel to your home server.
ssh -f -N -q -L 1143:localhost:143 \
my_server_user_id@my.dyndns.domain

What this command does is create a tunnel from port 1143 on localhost (the laptop) and forward it to the IMAP port (143) on the server (my.dyndns.domain). I selected local port 1143 because only root can forward port numbers below 1024, and 1143 is the standard IMAP port plus 1000, which makes it easy to remember. I have this command in a script file and fire it up ad hoc whenever I am out and need to use it.
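Something along these lines would do for the script — a sketch rather than my actual file, with a guard so a second invocation doesn't stack up duplicate tunnels; the host and user id are the placeholders from above:

```shell
#!/bin/sh
# imap-tunnel: open the IMAP tunnel unless one is already running
FORWARD='1143:localhost:143'
REMOTE='my_server_user_id@my.dyndns.domain'

if pgrep -f "ssh.*$FORWARD" >/dev/null 2>&1; then
    STATUS="already running"
else
    ssh -f -N -q -L "$FORWARD" "$REMOTE" \
        && STATUS="opened" || STATUS="failed to open"
fi
echo "imap tunnel: $STATUS"
```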

To test the connection on the laptop, type:
telnet localhost 1143
You should get a response along these lines:
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
* OK Dovecot ready.

So, although you are using the laptop's local port 1143, you are, in fact, accessing the server's port 143 through the SSH tunnel.

Configuring your e-mail client

Having gone through all that, just configure your laptop mail client's IMAP server to be localhost port 1143, set the user id to be your local/server user id and off you go.

Tunneling at work

You may find that using tunnels contravenes your employer's AUP. If that is the case, don't do it, OK?

Bigging up BigDump

Yesterday, I found myself in the position of having to transfer a medium-sized (200-ish MB) MySQL database from one server to another. Ordinarily, I would use SSH on the new server and import the database that way. In this instance, I only had web and FTP access to the new site.

The main problem is that phpMyAdmin limits the size of SQL file that you can upload in one go, and I was damned if I was going to split the data up into ~100 files to update the new database. Apart from being time-consuming, there was plenty of opportunity for error.

So, before I set about writing my own solution, I thought I'd use my google mojo to see if there was a ready made answer. Half an hour later, I found BigDump which fit the bill perfectly.

My implementation was as follows:

Dump source database using phpMyAdmin to local file localhost.sql. Make sure that you don't use extended inserts and, if necessary, don't use database creation.
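If you do have shell access on the source box, the same dump can be produced with mysqldump — --skip-extended-insert gives one INSERT per row, which is what unticking extended inserts in phpMyAdmin does. A sketch, with placeholder user and database names and a guard so it degrades gracefully where mysqldump is absent:

```shell
# dump one-row-per-INSERT, without a CREATE DATABASE statement
if command -v mysqldump >/dev/null 2>&1; then
    mysqldump --skip-extended-insert --no-create-db \
        -u database_user -p database_name > localhost.sql
    STATUS="dumped to localhost.sql"
else
    STATUS="mysqldump not installed here"
fi
echo "$STATUS"
```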

Having downloaded it, edit bigdump.php. I amended it thus:

$db_server = 'localhost';
$db_name = 'database_name';
$db_username = 'database_user';
$db_password = 'password';

// Other settings (optional)

$filename = '/full/path/to/dump/localhost.sql'; // Specify the dump filename to suppress the file selection dialog

From the web root on the target server, create a directory dump and upload via ftp the files localhost.sql and bigdump.php

Point your browser at my.domain/dump/bigdump.php and click on the Start Import link. You should see something like this:

Big Dump Screen Shot

When finished, delete your sql file and bigdump.php from the web server.

That's it. It did the job simply and saved me a load of effort. Yay!

Sunday, 30 March 2008

Adventures with Ubuntu: Mail

Setting up dovecot to work with Google Mail.

Given the choice, I prefer using an e-mail client in preference to a web browser on whatever machine I am using. Although Google Mail provides an IMAP service, I wanted to be able to keep a copy of my mail on my home server so that I could access Google Mail missives and my various other mail accounts from one source. I would also enjoy the benefit of having a copy of my mail in one easily backup-able and transferable source.

Installing dovecot and getmail:

sudo apt-get install dovecot-common dovecot-imapd \
getmail4

Configuring dovecot.

Edit the file /etc/dovecot/dovecot.conf

Insert the line:
mail_location = maildir:~/Maildir
Restart dovecot:
sudo /etc/init.d/dovecot restart
You should get a reply:
* Restarting IMAP/POP3 mail server dovecot [ OK ]
Creating your mail directory.
cd ~
maildirmake.dovecot Maildir


Configuring getmail.


I have set up getmail to keep copies on the Google Mail server for 30 days. I figure that will give me enough resilience :-)

First, make sure that you have POP3 enabled on your Gmail account.
Click on your settings and select the Forwarding and POP/IMAP tab.



Create .getmail directory and rc file.
cd ~
mkdir .getmail
cd .getmail
chmod 700 .

Create a file getmailrc in ~/.getmail.

My getmailrc looks something like this:
[retriever]
type = SimplePOP3SSLRetriever
server = pop.gmail.com
port = 995
username = your.email.address@gmail.com
password = your_password

[destination]
type = Maildir
path = ~/Maildir/

[options]
delete_after = 30
get_all = false

Next all you need to do is schedule getmail to run when you are not looking.

I have set mine to run every 5 minutes.
crontab -e
:
:
*/5 * * * * /usr/bin/getmail >/dev/null 2>&1

Connecting to your server.

This is the easy bit.

Using your favourite email client, create an account which uses IMAP. Your server will be localhost (or the IP address of your home server). The user id and password are the same as those of your Ubuntu user account.

Notes:

Because this is set up on my home network which sits behind a firewall, I have made no provision to employ secure IMAP connections which are, frankly, not needed in this instance as all connections to the IMAP server are made behind the firewall.