::compiz rocks

2 08 2007

I just used this howto to get Compiz working, and I have to say that I’m very impressed. I’ve never been much for eye candy just for its own sake, but this is just plain cool.

It is fast and smooth, and has a heap of very cool options.

The howto is straightforward and very simple to put in place. I highly recommend you give it a whirl.




::getting ready for edgy, part deux–backups

21 10 2006

In this exciting episode, we’ll talk about the incredibly mundane and unsexy topic of backing up your key files and folders prior to making the move to Edgy. If backups are a normal part of your life, you can skip this. If not, they should be. I’ll do an entry at some point in the future talking about how to do regular backups. For this entry, though, I’m making a few assumptions:

  1. You don’t do regular backups of any kind.
  2. You have important files that you want to retain.
  3. You don’t have a second computer with linux installed, or it doesn’t have adequate space for your purposes.
  4. You have enough room to back up your data without having to wrestle for space.


Planning what you want to back up and how you want to do it matters: it affects how long the backup takes, how easy it is to restore, and how practical it is to store for the long term, if you want to. It is not necessary to back up your entire system, but you could do that if you wanted to and had the resources. My emphasis here will be on backing up your /home directory, as that is where most people store the personal files they would want to keep.

Not everyone does this, though, and it is important that you take an inventory of your system before your upgrade. Be sure to look for directories on other paths so you don’t miss anything. Where do you keep your writing? Your web site? Your mp3s? Your photos? Here’s how I plan out my backups:

What to Back Up?

  • Make a list of the types of files you want to save. Not necessarily the file types, but the category they would fall into. For example; music, photos, writing, ebooks, movies, and so on as you see fit. Don’t forget your email address book and your browser bookmarks.
  • Go through your /home directory and document where each file type is stored. If there are multiple users on your system, it would be a good idea to do this for each one.
  • Go through the rest of your hard drive and make sure you don’t have those files elsewhere, as well.
  • Write down all of the directories that have files you want to keep. If your organization is spotty, you might want to take this opportunity to put your files into some logical order. If you move things, be sure to update your list.
  • Make a list, check it twice. You don’t want to do a back up only to discover you forgot that one really important set of files.
  • Get a rough estimate on how much space all of this is going to take up.
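That last step, estimating space, is one the shell can do for you. Here’s a minimal sketch; the paths are placeholders for whatever ended up on your list:

```shell
#!/bin/sh
# Summarize disk usage for each directory on your backup list.
# $HOME and /mp3 are example paths -- substitute your own.
for dir in "$HOME" /mp3; do
    if [ -d "$dir" ]; then
        du -sh "$dir"    # -s: one summary line per directory, -h: human-readable
    fi
done

# One grand total for everything at once (-c appends a "total" line):
du -shc "$HOME" /mp3 2>/dev/null | tail -n 1
```

Compare that total against the free space on your destination (df -h will tell you) before you start copying.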

Where to Put it?

Now you need to plan out where you’re going to put all of this stuff you want to save. In a perfect world, you’ve got a spare disk (maybe external?), with enough space for everything you want to save. If this is the case, it makes life a lot easier. If not, hope is not lost.

Most likely, you’re going to end up putting your data on a spare disk or on some static media like a DVD or CD. A spare disk is probably the easiest way to go, but if you don’t have one, DVDs or CDs are still viable options. Depending on the total amount of data you’re retaining, you might end up using a lot of DVDs/CDs.

How to Back Up

There are a slew of ways you can back up your data, some easier than others. For this example, I’m going to use rsync. Rsync is a simple and robust tool that you can use now and for your regular backups later on.

  • Backing up to a Spare Disk

So, let’s say you want to save your /home directory (about 5GB, let’s say) and your music directory, which actually lives off the root tree in /mp3. You’ve got a 200GB external USB drive with plenty of space.

Sweet. This will be easy.

Once your external drive is plugged into the system, you’ll most likely find a new mountpoint in /media, called something like /usbdrive. The full path would be something like /media/usbdrive. For the sake of organization, I would create separate destinations for each unique source path. So in /media/usbdrive, I’ll make a directory called home_backup and another called mp3_backup.

cd /media/usbdrive

mkdir home_backup

mkdir mp3_backup

Now we start the back up:

rsync -avrc /home/yourusername/ /media/usbdrive/home_backup

Here’s what we did:

The switches:

  • -a does the transfer in archive mode, so your permissions, symbolic links, timestamps, etc. are preserved (it also implies -r).
  • -v does the transfer in verbose mode, so rsync will tell you what it is doing. This is optional; I like to have it around.
  • -r does the transfer recursively into all subdirectories. Strictly speaking it is redundant with -a, but it doesn’t hurt to be explicit.
  • -c makes rsync compare files by checksum instead of just size and modification time, so nothing gets skipped by mistake. Slower, but more thorough.

The rest:

  • /home/yourusername is the source directory
  • /media/usbdrive/home_backup is the–that’s right, you guessed it–destination directory.

This is going to take a while, most likely, and is incredibly boring to watch. Really. Once it is complete, you’ll do the same thing for the other data you want to back up, changing the source and destination as appropriate.

Backing up to a directory on the same partition

This is just as easy as backing up to an external drive, with a few extra steps and a little more time.

Navigate to the root directory and make a back up directory, then take ownership of it.

cd /

sudo mkdir backups

sudo chown yourusername:yourgroupname backups

Now again, we want to have separate directories for each destination.

cd backups

mkdir home_backup

mkdir mp3_backup

And now for the backup itself:

rsync -avrc /home/yourusername/ /backups/home_backup

Repeat for any other directories you want to save.

many hours later….
Your backups are complete, now what?

Verifying Your Backups (either method)

Now that you’ve got your backups completed, you’ll want to double-check them and make sure that your data has been preserved. The -c switch helped here by comparing files by checksum rather than just size and modification time, but I’m paranoid and always want to check for myself.

Do this by simply browsing the files and checking them at random, maybe 20-30.  If there’s anything that’s really important, you might want to check the whole directory.
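Random spot-checks are fine, but the same tools can check the whole tree mechanically if you’d rather not trust your sampling. A sketch, using the earlier example paths:

```shell
#!/bin/sh
# Dry-run rsync with checksums (-n means "don't actually copy anything"):
# any file it lists differs between the source and the backup.
rsync -avnc /home/yourusername/ /media/usbdrive/home_backup

# Or compare the two trees directly; diff -r exits 0 only if they match.
diff -r /home/yourusername /media/usbdrive/home_backup && echo "backup verified"
```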
Moving Your Backups to Other Media

Now that you’ve got everything backed up and verified, it is time to move everything to your (hopefully) DVDs or (ick) CDs.  I’m sure there are better ways to do this that are far more elegant, but I just select blocks of files and burn them using k3b (or Gnomebaker, if you go that way).

Once you’ve moved your data to your DVDs, be anal and verify them again. It would be lame to go to all this trouble, only to find there was an error and you’ve lost your shit. That, I think, would make me lose my shit in a big way.

Next time…on to the upgrade!



::sharing folders between systems with NFS

14 09 2006

edit: following Seb’s question about why I used the version of nfs that I did, I went back and tried it his way. After testing things out for a few minutes, it was apparent that his recommendation did make things faster, so I’m editing the howto. If you’re reading this then the howto is already edited and you can follow the steps listed exactly.
If you’ve already used this howto and are now kicking yourself for taking the advice of a jackass like me, don’t worry–correcting the setup is easy. Steps to remove nfs-user-server and replace it with nfs-kernel-server will be in a comment following the howto.

Many thanks to Seb for bringing this up for me.


Say you’ve got multiple Ubuntu systems set up and you want to be able to easily browse folders and share files between the remote systems as if they were local. This is pretty easy to do with NFS, the Network File System.

First off, let me say that there are other ways to achieve this same type of goal, but I’ve found NFS to be the easiest way to go, and it is reliable enough for a home user who doesn’t have a long list of demands for the system.

Let me also say that this howto is based largely on another excellent howto found here. The reason I am making my own howto instead of just giving you a link and telling you to have fun is that 1) I found that it wasn’t 100% complete from an Ubuntu perspective and 2) it provides more information than the typical new user will really need. That being said, it is an excellent resource and deserves to be looked at if you have further questions or want to set up more than a few systems at a time.

Now, let’s get started!

NFS allows you to mount and browse remote directories as if they are local to the client you’re working on. You can set permissions to read/write, read-only, and others that I’ll go into later on. Any machine can be either client or server, or both, and multiple clients can be set up to access the same directories. More on this later.

First, the server set up. The server, in this context, is the machine that actually has the directories you want to share.

Open a terminal and do the following:

sudo apt-get update

This, of course, updates your package list. Then:

sudo apt-get install nfs-kernel-server

This will install any necessary nfs packages and get the daemons running.

Now set up the configuration files. There are three files you’ll work with on the server: /etc/exports, /etc/hosts.deny, and /etc/hosts.allow. Technically, /etc/exports is all you need to set up, but that would make for a very insecure configuration. Setting up the remaining files takes only a few minutes, so there’s no need to chance it and be lazy.

/etc/exports is nothing more than a list of volumes that are shared, how they are shared, and who they are shared to. You can get the full scoop on the setup options from your friendly neighborhood man page (man exports), but what we do here will probably work for you just fine.

Suppose the directory you want to share out is called /mp3, and you want to share it to two clients, giving the first client read/write access and the second read-only. Easy as pie.

sudo nano /etc/exports

and then insert something like the following line:

/mp3 192.168.0.1(rw) 192.168.0.2(ro)

What did that do?

The text /mp3 is the directory you’re sharing. The first IP address (192.168.0.1) is the first client connecting to the directory, and (rw) is the permission set you’re giving the client. The second IP address, as you probably guessed, is the second client, with (ro) being the read-only permission you’re assigning. Note that there is no space between each address and its (options); adding a space there changes the meaning of the line.

You can use hostnames instead of IP addresses, but using the IP address is more reliable and secure.

For fun, here’s the complete list of (options) you can assign to each directory:

  • ro — Pretty straight forward.
  • rw — Again, pretty straight forward.
  • no_root_squash — Be careful with this one. By default, any file request made by root on the client will be treated like it has been made by user nobody on the server. If you select no_root_squash then root on the client will be treated like root on the server, and have the same set of permissions. Don’t use this option unless you have a good reason to do so.
  • no_subtree_check — If you only export part of a volume, there is a routine called subtree checking that verifies that the requested file is in the appropriate part of the volume. If you export an entire volume, then disabling this will speed things up a bit.

If you have a larger installation and want to make directories available to a large number of clients at once, you can give access to a range of machines instead of individuals.

To do that, you would put something like this in /etc/exports instead of the text we did up above:

/mp3 192.168.0.0/255.255.255.0(ro)

You just gave read-only access to all of the clients between 192.168.0.0 and 192.168.0.255.

You can also use wild cards instead of host names or IP addresses.

/mp3 *.foo.com (ro)

Bear in mind, though, that this is not the most secure setup. If you’ve only got a few clients you’re dealing with, list them all out.
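Putting the pieces together, a complete /etc/exports for a small home setup might look something like this. The addresses and the second path are examples only; adjust them to your own network:

```
# /etc/exports -- sample entries (hosts and paths are placeholders)
/mp3            192.168.0.1(rw,no_subtree_check) 192.168.0.2(ro,no_subtree_check)
/home/shared    192.168.0.0/255.255.255.0(ro)
```

After saving your changes, sudo exportfs -ra will make the server re-read the file, so you don’t have to restart anything.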

Now the other files:

/etc/hosts.allow and /etc/hosts.deny handle requests to the server like this:

  • The server first checks hosts.allow to see if the machine making the request matches an entry in the file. If so, it is allowed access. If it is not, then it moves on to hosts.deny.
  • hosts.deny contains a list of machines that are specifically denied access. If the file contains a listing that matches the requesting machine, then it is denied.

If the requester doesn’t appear in either file, then the request is accepted by default. This is a huge hole, but it is easy enough to fill and make secure…

The easiest way to control access from unwelcome hosts is by opening hosts.deny

sudo nano /etc/hosts.deny

and adding the following lines, which block the NFS-related daemons for everyone not explicitly allowed:

portmap: ALL
lockd: ALL
mountd: ALL
rquotad: ALL
statd: ALL

or you can cast a wider net and do something like this:

ALL: ALL
This will deny any request unless it is explicitly allowed, but might cause some headaches if you forget about it and try to install some new service later on down the road.

Save your file, and let’s move on to /etc/hosts.allow

sudo nano /etc/hosts.allow

and add a line listing the clients you trust:

portmap: 192.168.0.1 , 192.168.0.2

Again, you can replace IP addresses with host names if you feel like it.

You can also use wildcards or IP ranges, just as in hosts.deny.
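As a sanity check, here is what the finished pair of files might look like for the example network, with placeholder addresses. Remember the order: hosts.allow is consulted first, so the allow entries punch holes in the blanket deny:

```
# /etc/hosts.deny -- turn away everyone who isn't explicitly allowed
ALL: ALL

# /etc/hosts.allow -- the clients we trust
portmap: 192.168.0.1 , 192.168.0.2
```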

Now, let’s verify that NFS is running:

rpcinfo -p

And in there, you’ll see something like this: 100003 2 udp 2049 nfs

Now let’s mount the remote directory from the client! (Thanks, Mark!)

First, you have to have a mount point. I created a directory in /mnt, but you can do it anywhere. I personally would not recommend putting it in /home, though, especially if you run backups. And of course everyone does. If you do decide to put your mount point in your /home directory, make sure you also add that mount point to the excludes file for your backups.

But let’s just assume you want to put the mount point in /mnt.

cd /mnt

sudo mkdir mp3, or whatever name you want to give it.

now you just mount it:

sudo mount servername:/mp3 /mnt/mp3, where servername is the hostname or IP address of the server sharing the directory.


What’s that? It didn’t work? If you have trouble, try rebooting the server you just set up. You shouldn’t have to, but it is probably the easiest thing to do. (It worked for me.)

To unmount…

sudo umount /mnt/mp3

Suppose you want to mount at startup. Just modify your /etc/fstab to read something like this:

# device mountpoint fs-type options dump fsckord

master.foo.com:/home /mnt/home nfs rw,hard,intr 0 0

The options specified here are worth talking about. rw, of course, is read-write, and can be changed to ro should that suit your purpose. The other two have to do with how everything is handled in the event of a crash. hard will cause the program accessing the NFS-mounted file to hang when the server crashes. The process can’t be interrupted or killed unless you specify intr. When the NFS server is back up, the program will continue undisturbed where it left off. There are other options you can use, but hard,intr are the most frequently recommended for NFS-mounted file systems.

There are lots of other options and things you can do to optimize performance, and you can find them all here, or with the help of your good friend Google.

I hope you found this helpful, and as always, would appreciate any feedback you care to give.

Until next time..



::streaming mp3s

23 08 2006

So, you’ve got a large (completely legal) mp3 collection that you want to share with others in the house or in the world, and you want an easy way to do it, right?

Gnump3d is the answer, my friend, and the answer is blowing in the wind.

or something.

Gnump3d is one of many streaming mp3 servers available from the open source community.  I’ve tried a few, including kPlaylist and Agatha, but I like Gnump3d the most.  It takes just a few minutes to configure, is easy to manage, and is very robust without being bloated.

Ubuntu users can install Gnump3d from the repositories, or you can download the source from the main site.  Either way gets you essentially the same result–the only difference is that downloading directly from the site requires you to go through the hell of extracting the files and running the ever-tedious make install command.


Once installed, you configure the program by editing /etc/gnump3d/gnump3d.conf, which takes all of about 10 minutes.
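For what it’s worth, the two settings most installs need to touch are the media root and the listening port. I’m quoting the option names from memory, so treat this as a sketch and check the comments in the file itself:

```
# /etc/gnump3d/gnump3d.conf -- the lines most setups change (names assumed)
root = /mp3
port = 8888
```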

In addition to mp3 support, Gnump3d will also handle your OGG and movie files, and much, much more!

It is also well supported, with a mailing list that is active without being overwhelming, backed by a fairly large community.  You’ll frequently find posts by the program’s main author, which I like a lot.

give it a whirl, you’ll like it.



::can’t start X

23 08 2006

There’s a problem I see posted to the Ubuntu forums on a fairly regular basis that has to do with not being able to start an X session.  A user will report an error message that reads something like “no write access to /home/user/.ICEauthority…”

How does this happen?

You’ll get this error when you improperly run some graphical applications as Root.  Say you want to move some files to or from a directory in which you don’t have write access.  You fire up your favorite file manager as Root, do your stuff, and all is right with the world.

If you’re like a lot of users, it might be days or weeks before you log out of your current X session, in which case you don’t remember performing the unhappy action above. The issue could be lurking for quite a long time, waiting for the next time you log in.

Well fear not, we can fix this in a jiffy.

From your login screen, either drop to a console session or log into failsafe.  You’ll be able to login here, but all you’ll have is a command prompt. From here, you’ll want to check the ownership of your files and try to see what Root has gone and taken ownership of.

$ ls -al | less

ls will list the contents of your directory (home, in this case), with two switches thrown in for flavor.  -a lists all files, including hidden dotfiles like .ICEauthority, and -l gives you the ownership and permissions.  Piping the command through less pauses the output a screen at a time so you can actually see what you’re looking at.

Depending on the application you ran as Root, there may be one or more files that are now owned by root.  You can take ownership of them individually, or you can reclaim ownership of all files in the directory at once.  Kind of like a dog pissing on his favorite tree.  Either way, the command is almost the same.

To reclaim ownership of an individual file:

sudo chown yourusername:yourusername filename (or in my case, something like sudo chown jim:jim .ICEauthority)

chown allows you change ownership of a file, with the first yourusername setting the user and the second yourusername setting the group. 

If you want to piss on the tree and make sure everyone knows that the whole yard is yours, modify the command to cover your whole home directory, hidden files included:

sudo chown -R yourusername:yourusername /home/yourusername

(A plain *.* glob would miss dotfiles like .ICEauthority, the very file we’re after, so recurse over the directory instead.)

Either entry will prompt you for your password–and remember–Ubuntu doesn’t enable the Root account by default, so the password sudo asks for is your own, not Root’s.

You should return to your command prompt without error.  Now you should be able to log out and log back in to your favorite X session.
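If root has scattered ownership over more than a file or two, find can hunt down exactly the files root owns and hand them back in one pass. A sketch, with yourusername as the usual placeholder:

```shell
#!/bin/sh
# List everything under your home directory currently owned by root...
find "$HOME" -user root

# ...and reclaim just those files, leaving everything else untouched.
find "$HOME" -user root -exec sudo chown yourusername:yourusername {} +
```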

And all is right with the world.

or something.

But sometimes I really want to run a graphical app as Root, what do I do?

You don’t have to fix file ownership every time you want or need to run a graphical app as root.  If you’re using KDE, replace sudo with kdesu.  If you’re running Gnome, use gksudo instead of sudo.

so there it is…my first technical post.  A couple of questions for those of you unlucky enough to come across my blog and bored enough to have read all the way down to this point:

1) Was it helpful?  If you came across this entry because you’re actually having this error, did it help you fix the problem?  Do you think it might help you in the future?

2) Was it easy to understand?

3) Was it easy to tell what was a command and what wasn’t?  Did you know what you had to enter at the command line, or did you have to muddle through the bold, italics, and bold italics until you figured out what was what?

4) What’s your favorite color?

Sometime in the next few days I’ll update the blog with some helpful links and resources I’ve found.  I know you’re giddy with anticipation now.