Software Freedom Day 2010

Software Freedom Day? What’s that I hear you ask!

Software Freedom Day, or “SFD” as it is sometimes called, is a day celebrating the fact that there is computer software that is not encumbered by copyrights and patents placing limitations on what you can do with it.

Did you know that with most proprietary software you purchase a licence to use the software but you never actually own it? If you read the Terms and Conditions of sale, more commonly referred to as End User Licence Agreement (EULA), you’ll see the clause that states that the author (usually a company or corporation) retains ownership of the software. In fact, the hint is in the name “End User LICENCE Agreement”.

For example in the Microsoft Windows XP Home Edition EULA, paragraph 3 states that:

3. RESERVATION OF RIGHTS AND OWNERSHIP. Microsoft reserves all rights not expressly granted to you in this EULA. The Software is protected by copyright and other intellectual property laws and treaties. Microsoft or its suppliers own the title, copyright, and other intellectual property rights in the Software. The Software is licensed, not sold.

This clause, and many others like it in End User Licence Agreements (EULAs), place many, many, many restrictions on what you can and can’t do with the software. Seriously, how many people have actually read an EULA all the way through and understood it? Then think… I have how many pieces of proprietary software on my computer, all with EULAs? Have I read and understood all of them, or did I just hit the “Accept” button? What if the EULA stated:

“You agree to send your firstborn child to the company and they will be an indentured servant of the company for a term no less than 10 years, after which time you will be required to collect them, in person, from the Antarctic”.

Sounds (and is) ridiculous, but how many other contracts do you enter into without fully reading and understanding them? Why is the EULA on a computer any different from, say, the warranty on your fridge or the contract of sale for a car or house?

Imagine that you have bought the use of a piece of software (you are licenced to use it), but you now need it to do something slightly different: maybe you have a payroll system that now needs to apply a different tax rate. You have only one choice: wait for the manufacturer to come out with an update that fixes the issue. But how long will that take? And how will that delay affect your business? Will your employees begin to get annoyed that their pay is either wrong or delayed? If the software company is responsive to your request, maybe the wait won’t be long, but what happens if they are no longer in business? Or what if the company is based in Norway, doesn’t follow Australian tax updates, and takes a while to get around to making the necessary change? What happens to your business reputation then? If you don’t think this will happen to you, think again: it has happened to many companies around the world!

Imagine again… if you had access to the code that made up the system, the code that made it run and function (that’s the “source code”, the human-readable programming that makes the system work), you could go into this code, change that tax rate, and have your system functioning within minutes of knowing about the problem. Your employees would never know, and you wouldn’t have to wait on someone else to help you. With your proprietary system you don’t have the ability, or more importantly the right, to look at the source code and change it; with Free and Open Source Software, you do.

I have used the example of a company’s payroll system, but extend this out to all walks of life:

  • students having the ability to change software functionality to suit their assignment needs, quickly learning much needed skills. A friend of mine, during his PhD, extended an existing open programming language to layout his thesis exactly the way he required
  • artists being able to change a system to suit their artistic intent and creating new works, rather than being limited to the functionality that a software programmer thinks they might need

Use your imagination to think of other ways that someone could extend a piece of software or its functionality to achieve that little bit extra.

There are a few things that stand in the way of this kind of innovation. Ironically, they were originally designed to promote innovation, but now they do the opposite: copyright and (software) patents.

Copyright is the concept that once you have authored something you have ownership over it and you may decide to whom it gets distributed. This often involves a monetary transaction. This stifles innovation very simply: if the idea or work is a good, or even great, one and someone wants to extend upon it, for example use it on their own project to make that project truly spectacular, then they are not able to do so without permission or monetary exchange. This may sound fair, and often is, but in some cases, the original author is not the one who decides if their creation can be used for this purpose. This is especially the case when the author has been dead for a while, and their estate is “looking after their interests”. Requests to use something that has been created a while ago can take time and sometimes the impetus may be lost in the delay. Think of cases where a satirical artist wishes to use a substantial part of a literary work to highlight the comedic value of a political situation, but has to get permission to use the literary work before the artist can publish the satirical work. If there is a lengthy delay, the comedic situation may have moved on and the artist is stymied, potentially affecting their ability to generate revenue from their own good idea.

Copyright, as embodied in the “Statute of Anne”, was originally effective for fourteen years. Authors had that amount of time to make the most of their work. Say, for instance, a playwright had written a play: they had fourteen years of earnings from the performances and then the play went into the public domain. There was the ability, provided the author was still living, to extend the copyright for another fourteen years. The general idea was that the artist could create something and have enough to survive on. But they’d have to keep creating every now and again in order to stay alive! It was a good incentive scheme for creation – they could earn a good living if the work was good, but they’d have to keep doing it.

Copyright in this day and age is almost endless (70 years after the author’s death in Australia), and even if the copyright has expired, there are other forces at work to prevent the “general public” being able to freely use a work. For example, Beethoven has been dead for 180+ years (so there’s no incentive for him to keep creating!) but there are almost no publicly available, freely playable recordings of his symphonies. If I wanted to create a TV show and put in a Beethoven symphony I’d have to pay someone for the privilege of using their recording. Why? Because each recording is “created” by an orchestra, and that orchestra retains the right of distribution of the recording. But they didn’t create the original work, they just performed it! According to the original tenets of copyright, if they want to go on eating, they need to go on playing, rather than living off a 70-year-old recording, where most of those players are probably well and truly retired (sometimes rather permanently!). Musopen is a group aiming to raise enough money to pay an orchestra to perform on an “all rights” basis and make the music freely available.

Patents in the software industry are even worse. Each country has a Patent Office where people and companies can register patents. These were originally set up to register inventions. Someone invented a new way of getting something done and they went to the Patent Office, registered it, and then sold the invention. An example is something like a toaster: when that was first thought up, someone may have put in a Patent application to say something like “the use of small electrical elements to create heat to toast bread”. This is a good system, as it ensures that the person who thought up a useful widget gets paid.

However, the Patent Office doesn’t really understand software; they don’t usually have the technical expertise. So when a large corporation goes to the Patent Office and says “the use of a graphical user interface to manage a desktop”, they tend to get these patents. Once they have these patents they can then sue every other company who uses a system that is even remotely similar, depending on how vague the patent application was in the first place. This does nothing to promote innovation; in fact, putting other companies under financial strain is generally regarded as being anti-innovation. If you want to know more about vague patent applications, you only have to look at the case of John Keogh, an Australian who successfully patented a “circular transportation facilitation device”, more commonly referred to as a wheel!

In its simplest form, say a company patents “2 + 2” (yes, I know they can’t do this, but humour me and use this as an example). Now, no other company can use this in their software for fear that they’ll get sued. They now have to do something like “6 – 4” or “8 / 2” to generate the same result. Extend this out to slightly more complex things that really only have one way of being calculated, such as the area of a rectangle (length * width), and you begin to understand the massive headaches computer programmers face in finding their way around patents. How would you calculate the area of a rectangle if you are not able to use “length * width”? And extending the example in the paragraph above, how do you make a computer system interactive with the user if you can’t use a Graphical User Interface (GUI)?

There are some computer programmers in the world who believe that software should be freely available to all users. They believe users should have four essential freedoms:

  • The freedom to run the program, for any purpose (freedom 0)
  • The freedom to study how the program works, and change it to make it do what you wish (freedom 1). Access to the source code is a precondition for this
  • The freedom to redistribute copies so you can help your neighbour (freedom 2)
  • The freedom to distribute copies of your modified versions to others (freedom 3). By doing this you can give the whole community a chance to benefit from your changes. Access to the source code is a precondition for this

The idea is that with the ability to run, see and change the source code and freely distribute the original and your changes around, innovation will truly be able to occur, and with innovation comes improvement to people’s lives.

Interestingly, Free and Open Source Software does not necessarily mean no cost. Sometimes companies do charge for their software and that’s OK as long as they give you the four freedoms above. More often, FOSS companies give their software away for free (no cost, “gratis” or “free as in beer”) and then, as most people or organisations don’t have the programming expertise to make alterations, and really, the company that wrote it knows it best anyway, they charge for installation, customisations and extensions of functionality. This business model has been highly successful for companies such as (but by no means limited to) Red Hat.

I’d like to highlight one use of free software that has, in my opinion, achieved these aims very effectively: the Ushahidi project. I have recently found out about this project and I am very impressed that so much has been done so quickly. Set up to track violence following Kenya’s disputed 2007 elections, the project took many Free and Open Source Software (FOSS) components and hacked them together to create something that was greater than the sum of its parts.

This is only possible using Free and Open Source Software. Imagine trying to do something similar with proprietary software! Every time the team came across a situation where they said “we want the project to do this”, with FOSS they could just do it themselves; with proprietary software they would have had to wait, potentially for a long time and with people’s lives at risk, for the proprietary software company to create the new function.

And that’s why Software Freedom is important.

Come to Software Freedom Day 2010 in your area. There are events being held all over the world. Have a look at the map here and find the closest one to you: http://cgi.softwarefreedomday.org/2010/map.shtml

I’ll be volunteering at the Melbourne event, which has some fantastic speakers, presentations and interactive displays for your enjoyment. Come along, and if you do, leave a comment and let me know what you thought.

Linux: obscurity through omission?

A quick look around the Internet will reveal that the general consensus is that the desktop market share of GNU/Linux distributions is about 1-2%. I have a theory about why that percentage is not higher for desktop usage, which I term “obscurity through omission”.

I have come across a recent example that had the opportunity to mention Linux, but did not. This was an article in this month’s “PC Authority”, where author Jon Honeyball discusses Dropbox. He mentions that there are clients for Windows and Mac. He fails to mention that there are also clients for Ubuntu and Fedora, in both 32- and 64-bit versions, as well as the ability to compile from source. Dropbox releases updated client versions for Windows, Mac and Linux simultaneously, showing that, to them, Linux is equally valued. However, “PC Authority” readers would not know that Dropbox can be used on Linux just as easily as on Windows and Mac.

This is but one example. I am sure that readers of this blog post could come up with many, many more examples of Linux just being forgotten about or actively ignored. Most hardware and peripherals work just as well on Linux as they do on Windows and Mac, but we’d never know from the manufacturers or the reviewers. I am convinced that if given wider coverage, then people might begin to question “what’s Linux?” This may lead to greater adoption of Linux, which I happen to think is a good thing.

I will readily admit that “obscurity through omission” is just one of potentially many reasons why the adoption of Linux is currently quite low, but I believe it to be a contributing factor. What do you think?

Ubuntu: disabling the start-up and login sounds

When I start and login to my computer, I like it to be silent. While I understand why this accessibility feature is enabled by default, I want to turn this feature off. I don’t need my computer to play some pretty sounds to tell me that it’s ready for me.

Canonical, as the creators of Ubuntu, keep moving the settings for this! In previous versions of Ubuntu, it was all nicely integrated in one place and very easy to do. Now you have to go to two locations just to disable these sounds.

Firstly, for the “login screen ready” sound, go to System -> Administration -> Login Screen. The screen will be “greyed out” until you click on Unlock and type in your password. Then you can untick the “Play login sound” checkbox.

Secondly, to disable the “logging in” sound, go to Preferences -> Startup Applications. Untick the box next to “GNOME Login Sound”. Click Close.
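For the curious, unticking that box simply disables the autostart entry for the login sound: GNOME writes a per-user override into ~/.config/autostart. A sketch of what that override looks like (the file name and Exec line are assumptions – match whatever entry you see under /usr/share/gnome/autostart/ on your version):

```ini
# ~/.config/autostart/libcanberra-login-sound.desktop (sketch only)
[Desktop Entry]
Type=Application
Name=GNOME Login Sound
Comment=Plays a sound whenever you log in
Exec=/usr/bin/canberra-gtk-play --id="desktop-login"
# This is the line the Startup Applications checkbox toggles:
X-GNOME-Autostart-enabled=false
```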

Now the next time that you start your computer, it should be nice and quiet. Just the way I like it!

Australia’s Internet Filter

Disclaimer: the idea for this post began as a comment on Renai Le May’s great Australian ICT site, Delimiter.

A lot has been written about the proposed Australian Internet Filter, but I want to look at it from a slightly different point of view.

Today, June 28 2010, we saw the first female Prime Minister of Australia, Julia Gillard, announce the lineup for the new cabinet. Not a lot changed. Many geeks around Australia had hoped that Victorian Senator Stephen Conroy, Minister for Broadband, Communications and the Digital Economy would be pro- or de-moted and that Senator Kate Lundy would replace him. But he’s a Labor “power broker” and has just had a big win with the Telstra/NBN deal. It was always pretty unlikely and it didn’t happen.

So Conroy, who is rabid about the Internet Filter, retains the Ministerial portfolio to implement it. What is more important is that the Australian Labor Party (ALP) has reiterated its commitment to its implementation. And the way I look at it, we need to stop being so negative about this. Turn the frown upside down!

Just consider that, even if the entire geek population votes against Labor at the election, they are fairly likely to retain power (Tony Abbott as PM, seriously?). And even if every Victorian votes “below the line” in the Senate and Conroy loses his seat, the policy will still be there, and Julia Gillard will proclaim from on high that “WE HAVE A MANDATE!” So, let’s just accept the inevitable – we are going to have a filter.

Let’s think about it this way instead: the ALP, with Conroy as their spokesperson, is giving us a massive challenge. The challenge for all geeks is not how to defeat the proposed filter – we know that is but a trivial challenge – but to find out in just how many ways it can be done. To paraphrase the Bard: “How can I subvert thee? Let me count the ways”.

Blog about it. Tweet about your blog posts. Send emails to your friends. All demonstrating your uber-geek powers and how we can all get around the filter.

Remember to publicise what, but more importantly why you are doing this. And you must remain positive. We can term it “A Challenge to Conroy” or something catchy like that, but stay on message that you are accepting this as a challenge.

Geeks of Australia, go forth and find as many ways as possible! Subvert^H^H^H^H^H^H^H accept my challenge to you all!

Western Digital 1TB external hard disks and Ubuntu Linux

A few weeks ago I bought myself two Western Digital 1TB external hard disk drives. I am using one for backups and one which I have connected via USB to my NAS as an expansion drive. This drive has its own share on the NAS, which I can mount on my Ubuntu Linux laptop via the SMB protocol. This replaced a 750GB drive doing the same job.

Interestingly, I have noticed that when I go to mount the share, it occasionally throws an error message. I never got this with the 750GB drive, so initially I thought the drive was faulty. It took me a while to figure out, and the actual cause was quite simple. These drives are WD Caviar Green “GreenPower” drives, which put themselves to “sleep” after a period of not being used. They take a few moments to wake up and spin up again. When Ubuntu initiates the mounting process, it times out when the drive doesn’t respond and throws the error message. Try about 10 seconds later and the drive has woken up, is responding, and mounts successfully. I hope this helps someone who has a GreenPower or “Eco” drive of any sort and is having problems mounting it in Ubuntu. Just wait a few seconds and try again; it’s not a Linux problem, but the way that the drives go to “sleep” and then take a few moments to come alive and be responsive.
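If you’d rather not keep clicking “retry” yourself, the wait-and-retry logic is easy to script. A minimal sketch (the mount point is an example only, and it assumes a matching fstab entry already exists):

```shell
#!/bin/sh
# Retry a command a few times with a delay between attempts,
# giving a sleeping GreenPower drive time to spin up.
retry() {
    attempts=$1; delay=$2; shift 2
    i=1
    while [ "$i" -le "$attempts" ]; do
        if "$@"; then
            return 0
        fi
        sleep "$delay"
        i=$((i + 1))
    done
    return 1
}

# Example usage (path is illustrative):
#   retry 3 10 mount /mnt/expansion
retry 3 0 true && echo "mounted"
```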

Ubuntu 10.04 – where did Sun Java go?

This post is partly for Kris and partly for everyone else! Some people are probably wondering where Sun Java has gone in Ubuntu 10.04 “Lucid Lynx”. It always pays to read the Release Notes, as they tell you that Sun Java has been moved to the Partner Repository. So, how do you install it? Firstly you have to activate the Partner Repository. Go to System -> Administration -> Synaptic Package Manager. Enter your password. Go to Settings -> Repositories. Go to Other Software. Tick the entry with “partner” at the end. See screen shot (right).

Click Close. Click Reload, so the new software shows up. On the left hand side, near the bottom of the window, click on Origin, and then select lucid/main (archive.canonical.com). In the main window you’ll see a number of items appear. Scroll down until you see sun-java6-jre. Right-click on it and select “Mark for installation”. See screen shot (below). A message will pop up telling you that some additional packages are required. Click on the green “Mark” tick. If you want Sun Java to work with Firefox, you will also need to right-click on “sun-java6-plugin” and “Mark for installation”. I also install the “sun-java6-fonts” package.


Click on the big green “Apply” tick and wait for the install. You’ll need to tick the licence box and click Next. Now… you have Sun Java installed, but it is not the default Java Virtual Machine. If you want to use ONLY Sun Java (perhaps because OpenJDK/IcedTea doesn’t work with a certain Java applet), you’ll need to remove OpenJDK and IcedTea. Search for “icedtea” in Synaptic, right-click on each “icedtea” and OpenJDK item you find and select “Mark for Complete Removal”, then click Apply. Now you should only have Sun Java. Enjoy!
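If you prefer the terminal, the same result can be sketched in a few commands (the repository line and package names match the Synaptic steps above; add-apt-repository ships with Ubuntu’s software-properties package):

```shell
# Enable the Lucid partner repository and install the Sun Java packages
sudo add-apt-repository "deb http://archive.canonical.com/ubuntu lucid partner"
sudo apt-get update
sudo apt-get install sun-java6-jre sun-java6-plugin sun-java6-fonts

# Rather than removing OpenJDK entirely, you can also just choose
# which JVM is the system default:
sudo update-alternatives --config java
```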

Why I am not going to upgrade to a new laptop

I am writing this on my primary machine, an Asus A6Rp laptop. It is about 4 years old and has the following specs:

  • Intel Celeron M 420 @ 1.6GHz
  • 2GB RAM
  • 80GB HDD
  • 128MB ATI Radeon XPress 200M video card
  • Screen resolution of 1280 x 800
  • Broadcom 4318 wireless card
  • 10/100 Ethernet
  • Card reader
  • 4 USB 2.0 ports and some other stuff

I have been running Ubuntu GNU/Linux on this machine since late 2006, early 2007. After some initial configuration requirements, for the last few versions it has worked flawlessly with no requirement to alter anything to get it working.

However, I would like to upgrade to a new laptop. It would be nice to be able to watch 720p or 1080p video and do something resembling multi-tasking. The Celeron M 420 is a cut-down, non-HyperThreading version of the Core Solo, so you can forget any kind of video editing or transcoding!

I have watched the Pentium Dual Core, Core Duos and Core 2 Duos come and go, and am quite interested in upgrading to the new Core i3/i5/i7 range. But not with the configurations being offered at the moment.

I just can’t get interested in any of the current crop of laptops. This is mostly due to screen resolution. Most of the laptops, with up to about 16 inch screens, are advertising resolutions of 1366 x 768. So the vertical resolution of the screen is actually worse than what I have now! Why would I be interested in that?

Come on laptop manufacturers, give me a compelling reason to buy a new laptop!

Why Ubuntu 10.04 “Lucid Lynx” is meh

Don’t get me wrong, I love Ubuntu. I use it every day. In fact, it is the only operating system on both my laptop and netbook. I have divorced myself from Windows entirely, and no longer find myself wishing I could do things the “Windows way”. Actually, when I go to work and have to use Windows XP (a government department, so they are still evaluating Windows 7), I get very frustrated that it is slow (even on a relatively new machine) and has a number of annoying traits, including a lack of tabs in the file manager, Windows Explorer.

So why do I find Ubuntu 10.04 “Lucid Lynx” meh?

I have been running Ubuntu 10.04 since Beta 3. I have participated in the bug fixing process and resolving an issue with my laptop’s ATI video card. I have watched the default search engine go from Google to Yahoo and back to Google. I have watched the buttons to maximise, minimise and close windows, move from one side to another, then change their order.

However, I was waiting for three bugs to be fixed; one has since been resolved, but the other two probably won’t be.

One: the default music player is Rhythmbox, but it can’t see a library that is accessed via a Windows share. So, for example, you have a NAS device which stores your music library. You’d think that putting in smb://NAS/music would work, but Rhythmbox can’t see it.

Bug report here: https://bugs.launchpad.net/ubuntu/+source/rhythmbox/+bug/273294

This is now fixed.

Two: The workaround to the above bug is to mount shares by putting an entry into the “fstab” file. However, if you, like me, use a laptop with a wireless connection then you run into a bug with Network Manager which doesn’t unmount these shares cleanly before shutting down, delaying the shutdown. I raised the first bug report for this issue more than two years ago, and it is still not fixed.
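For anyone searching for that workaround, the fstab entry looks something like this (the server name, share, paths and user are examples only):

```
# /etc/fstab – mount the NAS music share via CIFS at boot
//NAS/music  /home/me/music  cifs  credentials=/home/me/.smbcredentials,iocharset=utf8,uid=me  0  0
```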

Bug report here: https://bugs.launchpad.net/ubuntu/+source/network-manager/+bug/211631

Three: this last one is fairly minor, but still annoying. The file manager, Nautilus, displays your current folder’s path in two ways: using “breadcrumbs” (which are just clickable icons displaying the path) or using the full path in a text field. There used to be a way of easily switching between the two, but some moron upstream (meaning that it wasn’t an Ubuntu decision, but a GNOME decision, which has flowed down to Ubuntu) disabled it. I have used the workaround to get the text box to display permanently, but I shouldn’t have had to.

Bug report here: https://bugs.launchpad.net/nautilus/+bug/508632
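The workaround I used is a single GConf key (key name as found in GNOME 2.x; no guarantee it survives later versions):

```shell
# Make Nautilus always show the location text field instead of breadcrumbs
gconftool-2 --type bool --set /apps/nautilus/preferences/always_use_location_entry true
```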


But this still doesn’t explain why I think Ubuntu 10.04 “Lucid Lynx” is meh.

Recently I set up Windows 7 for a client. I have been using Windows since Windows 95, and not much has improved: it’s still a long, involved setup and customisation process, and it requires lots of third-party software to secure it against viruses, trojans and general malware, and even to provide basic functionality such as a PDF reader. Generally speaking, I found the whole process as frustrating and annoying as it always was.

Even with the slightly annoying bugs mentioned above, Ubuntu installs and boots quickly (20 minutes to install, and about 30 seconds to boot on my old laptop), does everything that I want to do and does it with effortless style (see screenshot on right). Amongst other things, it has a PDF reader and an office suite built in, so pretty much everything I want to do works right “out of the box”.

And that’s why I find it “meh”. It is kind of boring, simply because it works so very well. Given the choice, I wouldn’t use any operating system except GNU/Linux.

Moving WordPress to a new host server

I am writing this post as I want to keep a record of what I have done to move my blog from one hosted server to another, while keeping the same domain name. I also want to document the process, as it was slightly different to the WordPress documentation.

The old server I was on used Plesk to manage the server settings and software installations. The new server uses CPanel and Installatron (not Fantastico). The outcome (a working blog) is essentially the same, but the route is a bit different, and each has limitations that affect how you move the blog across. For example, CPanel has a limit of 7 characters for a database name, so I couldn’t create the same database schema and restore directly; I had to use the database that gets installed by Installatron and change which tables it uses. Sounds complex, but in fact it is trivially easy.

WordPress uses a combination of files to create the look and feel of the blog and a database to store the actual content. Both Plesk and CPanel’s software installers configure both of these for you. It is easier to let them do their jobs and work with them rather than against them. This is what I did…

  1. Backup the existing install: log in to your WordPress install as Admin, Tools, Export. This is only a partial backup, but is a good thing to have (if all else fails, etc)
  2. Make a full backup of all files in the existing WordPress install. This is probably accomplished using an FTP client
  3. Make a full backup of the database, probably using phpMyAdmin. I looked at some documentation which had certain settings, and I also saw that someone posted somewhere that the phpMyAdmin export defaults are fine. I didn’t change anything and just exported the database. When I restored it worked perfectly.
  4. Add the new server’s nameservers to your domain records. Leave the existing ones in place. This is required for Installatron on CPanel, so that when it tries to install and resolve the DNS settings, it actually works, but you still have access to the old installation
  5. On the new server, use Installatron to install WordPress. For the database settings, select “automatically manage”
  6. At this stage, you have a vanilla working blog on the new server
  7. Copy across themes, plugins, uploads (and anything else which may be required)
  8. Restore the database to the new server, probably using phpMyAdmin. Installatron created a series of database tables with a random 2 to 4 letter prefix, so you’ll see those tables and the ones that you have imported from your old server backup. For me, all of the tables from the Plesk installation were prefixed “wp_”
  9. Change the “wp-config.php” file from the random 2-4 letter database table prefix, to the “wp_” (or applicable) prefix. Save the file
  10. Drop the Installatron tables from the database, leaving only the tables that you have restored (you’ll see that there’s a significant size difference, with your old tables containing all of your content so they’ll be much larger than the Installatron ones)
  11. Delete the old nameservers from the domain records. Wait for the DNS caching to timeout (may take anywhere between 5 minutes to 48 hours)
  12. Look at your blog on the new server! Check all settings and make sure everything is OK
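If you have shell access on both servers, steps 3 and 8 can also be done from the command line rather than phpMyAdmin (the database and user names below are examples only):

```shell
# On the old server: dump the whole WordPress database to a file
mysqldump -u olduser -p old_wordpress_db > wp-backup.sql

# On the new server: import the dump into the Installatron-created database
mysql -u newuser -p installatron_db < wp-backup.sql
```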

I also backed up all the mail accounts, email redirections, etc and transferred those from the old server to the new server too, then deleted the domain hosting account off the old server.

I hope this helps someone if they are slightly confused about how to do this. Please leave a comment (it will be moderated) if you want me to clarify anything in here.

Impressions from Linux Conference Australia (LCA) 2009

I have returned home from Hobart where for the past week I have been attending the 2009 edition of Linux Conference Australia (or probably more accurately Australasia). It is usually referred to by the acronym: LCA.

This is largely a technical conference, but it is evolving to be inclusive of all sorts of other disciplines too. For example, there were talks about the value of freedom in software, gaming with free and open source software, open hardware devices, and so on.

This is interesting for me. Around 2002 I was configuring and installing Red Hat 5.2 boxes as on-site mail servers for clients. However, the company I was working for at the time didn’t give me any training on the systems; I just followed the “how-to”, typing in commands as I saw them. I didn’t develop an understanding of Linux – in fact, quite the opposite – and I was probably a denigrator of it for a while, as I thought it was difficult to use and there was no GUI.

I continued in SME tech support and networking with Microsoft Windows, and got pretty good if I do say so myself!

Socially, most of my friends and people in my extended friendship circle are uber-geeks. But their world is completely different to mine. One of them works at an ISP writing code to monitor hundreds of servers. Others work in large organisations supporting thousands of users across continents. For someone who works in a small IT company supporting about 100 companies, each of about 20 users, this is a completely different league.

In 2004 a friend gave me an Ubuntu 4.10 CD and told me it would be the next big thing. I tried it but wasn’t hugely impressed, so I left it for a while.

In 2005, the company I worked for had a client with a problem that needed an immediate, and if possible, cheap, solution. They were running a Windows Small Business Server and a Windows Terminal Server: one for data, the other for applications. They were having problems with bandwidth usage on their Internet connection and thought that one particular user was causing it, but were unable to prove it. We had to come up with a way of proving it. Easy, you’d think, but when every web browser session is coming from the same server, the Terminal Server, that makes it rather difficult. We had to find a way of separating the session requests down to the user level. And cheaply. The last bit was the actual problem. We could do it easily if the client agreed to pay lots of cash, but they weren’t going to agree to that.

I was tasked with this and after a few days of research and testing had a fully working and documented solution: Squid with NTLM authentication. The users would authenticate against the Squid proxy server and then we could analyse the log files to work out who was doing what. Needless to say, once the proof was presented to the user he stopped doing what he was doing and their monthly bandwidth usage dropped off significantly.
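The core of that solution fits in a few lines of squid.conf (a sketch only; the helper path varies by distribution, and Samba’s winbind needs to be joined to the Windows domain first):

```
# squid.conf fragment: require NTLM authentication, so access.log
# records a username against every request
auth_param ntlm program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-ntlmssp
auth_param ntlm children 5
acl authenticated_users proxy_auth REQUIRED
http_access allow authenticated_users
http_access deny all
```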

This was my first real introduction to the power and flexibility of Free and Open Source Software. I was mightily impressed and I started to look into it a lot more. At about this time, Ubuntu was getting a lot of press. I tested out various distributions such as Ubuntu, MEPIS, SuSE and others. Whilst I didn’t begin to use any of them as my full-time operating system, at various points I had dual-boot systems and I was very slowly learning in my spare time.

It wasn’t until I got to China in late 2006 that I really had enough time to get a grasp of Ubuntu (which was the only one of the various distributions that actually worked properly on my laptop). By the time 2007 rolled over into 2008, I was a firm FOSS fan-boi! The benchmark I used was: how well does it work on my laptop? And the answer by this time was: really well. I no longer had to constantly worry about defragging, viruses and so on. Additionally, each major upgrade got things working better and faster.

I have been struggling to find out what I can do in this environment and community of free software uber-geeks (and I use the term in the nicest possible way!).

This conference I think I found it.

I am never going to be an uber-networking guy. I am never going to be able to programme anything much. I am never going to have the detailed technical knowledge of one particular subject. And that’s OK.

I am a generalist with business knowledge. I know how to write, document and train people.

And that’s what I can do. And will do. Stay tuned.