::kde4

19 12 2007

KDE4 RC2 was released last week, and I decided to have a go at it a few days ago. Installation instructions, if you’re interested, are here.

I’m a little too picky, probably, and I have to remind myself that this is not a finished product. But so far, I’m not wowed. Things are a lot less configurable out of the box, or at least not nearly as intuitive as they used to be, and that is irritating.

Have any of you tried KDE4?  How do you feel about it?

It is entirely possible that I’m overlooking some incredibly cool stuff and I need to be enlightened.

kisses,

jimbo





::underwhelmingly easy

5 12 2006

Finally did my CD installation today, preserving my /home partition and all of the data inside.  I briefly considered reformatting /home as well and restoring from backup, but quickly thought better of it.

Less than 30 minutes after starting the installation, I rebooted into my cleanly installed system with everything just as I had left it, except for the clutter.

Now, because I’m feeling lazy, I’m installing automatix2.  This will get me going with mp3 and dvd support, along with a host of other goodies I can choose from.

The only burning question I have is whether or not Amarok will recognize my Creative Zen mp3 player. But it is not burning enough for me to get off the couch.

Like.i.said.

lazy.

kisses,

jimbo





::got root?

14 11 2006

One of the most common questions posed by people new to Ubuntu has to do with root.  Brand new users usually know enough about Linux in general that they wonder why they didn’t have to create a root password, and experienced Linux users catch it right off the bat. 

Ubuntu disables the root account by default, opting instead to use sudo for administrative or root-like functions.  Since Ubuntu 6.06, users who attempt to perform an administrative function are prompted for their own password.  Prior to 6.06, users would be prompted for the root password, but this was a terminology bug–what the system was really looking for was that user’s password.

While this works well for 99.9% of the people out there, there are times when you don’t want to throw sudo at the start of every line to get something done.  If you would like to enable the root account and set a root password, all you do is:

sudo passwd root

You’re prompted for your password (because you invoked sudo), then you’re asked to enter a password for root, and confirm it. 

If you want to disable root, enter this at a command prompt:

sudo passwd -l root (that’s a lower-case L for lock)

Can I log in graphically as root?

You can, but it is really not a good idea to enable this functionality.  It’s such a bad idea, in fact, that I’m not going to tell you how to do it.  If you really want to know, you can look here.

Starting graphical apps as root

I’ve mentioned it before, but it’s worth mentioning again.  If you need to run a graphical application as root to, for example, update a configuration file or something, you don’t want to use sudo, as this will cause permissions problems later on that will prevent you from logging into your X session.

Instead, you’ll use a slight modification of sudo.

In Ubuntu (gnome), Edubuntu, and Xubuntu, use gksudo appname

In Kubuntu, use kdesu appname
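For example, to edit a system config file with a graphical editor (gedit and kate here are just example editors, so swap in whatever you actually use):

gksudo gedit /etc/fstab   # Ubuntu, Edubuntu, Xubuntu
kdesu kate /etc/fstab     # Kubuntu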

And that’s all there is to it. 

kisses,

jimbo





::post upgrade, mostly clean.

26 10 2006

So yesterday I followed my own instructions and upgraded to Edgy RC1 from the repositories.  I did it just as my blog describes, and for the most part, it went fairly smoothly.  There were a few oddities that I’ll get into here, but most people (I think) will not run into the quirks that I had.

First off, there were a lot of packages to download–over a gig.  I figured out that this is because I’m fickle and I’ve got multiple desktop environments installed on my system–Kubuntu (my default), Ubuntu (Gnome), and Xubuntu (Xfce).  This made for a lot more than the standard fare to download.  Now, I’m not alone in trying many environments, and there might be a lot of people who shared my experience.  Downloading wasn’t completed by the time I left for work, so I just let it go.

Upon my return home, downloads had completed but I had a few questions to answer about package configuration.  These had to do with configuration files being replaced, and the system was asking me if I wanted to keep my already modified config files or replace them with what was in the new package.  For the most part, I kept my config files in place, although there were a couple that I decided to overwrite.  It took a good 90 minutes for the system to configure and install all of the packages.  When it was done, I rebooted and crossed my fingers.

The first thing I noticed was that following the Grub menu, the screen went dark until I was presented with the login screen.  The Kubuntu splash screen and all of the boot status messages didn’t appear.  This made me nervous, but once the login screen came up I felt like I was okay.

Upon login, I was presented with a KDE config menu.  This is a first-time wizard that asks how you want K to handle window focus, opening programs, stuff like that.  It was the first time I’d seen it.  Once I got past that, K loaded incredibly quickly–faster than any X window session I’ve ever had.  I poked around a little, looked at the new menus and such, and was for the most part pleased but underwhelmed.  Then I noticed that the Adept Update manager was flashing.  I got busy and opted to ignore it for the moment.

I had time to check it out this morning, and there were 17 packages that it said needed an update.  This struck me as odd.  I figured that after upgrading to the RC yesterday, I would have a small number of updates to do that would bring me up to 6.10 final, but 17 seemed excessive.

I closed the update manager and opened up Konsole (lightning fast, I tell you) and did a sudo aptitude upgrade to get a closer look at the packages.  Again, I just like to do it this way…I’ve found that the easiest way for me to get comfortable with the command line is to use it instead of the GUI whenever I have the option.  One package listed as needing an upgrade that stood out for me was Xorg, but when I opted to proceed, it was held back and not included in the upgrade, along with three other packages that I can’t remember right off hand.
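A hedged aside, since the held-back packages puzzled me at the time: a plain upgrade refuses to touch any package whose new version needs extra dependencies installed or old ones removed, which is likely what happened with Xorg.  A dist-upgrade is allowed to make those changes:

sudo aptitude dist-upgrade   # upgrades held-back packages, adding or removing dependencies as needed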

I also noticed that my power management was still being handled by kLaptop, the default utility in 6.06.  One of the things I’d been looking forward to the most with 6.10 was a better power management utility called Guidance.  I looked for it on my system but didn’t see it.  Had some packages failed to install?

Then I remembered something from Arsgeek that I read yesterday in his upgrade instructions.  He recommended that prior to upgrading, it was a good idea to make sure your desktop environment was up to date with all packages by doing sudo apt-get install kubuntu-desktop, or whichever desktop environment you’re using.  Better late than never, I say, so I opted to do that and see what happened.
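If you want to do the same, the whole thing looks like this (substitute ubuntu-desktop or xubuntu-desktop to match your environment):

sudo apt-get update
sudo apt-get install kubuntu-desktop   # pulls in any pieces of the desktop metapackage you're missing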

Sure enough, there was a slew of packages listed as part of kubuntu-desktop that weren’t installed yet.  I installed those, and all went well.  Rebooted and poked around.  All of my packages were up to date, and kLaptop had been replaced by Guidance (though the pop-up doesn’t call it that–it calls it something else.  Since I’m not at my machine right now, I can’t tell you what it is.)

I strongly suspect that sudo apt-get install kubuntu-desktop will save most people from having most of the issues I’ve described here today.

I still don’t have a splash screen, which I don’t like.  I like to be able to watch the boot process and make sure things aren’t going wrong, so I need to work that out.  I also want to learn more about the power management utility, because the impression I’ve gotten from the release notes is that it is very robust.

My laptop hot keys work now, which I think is very cool, albeit not really consequential.  Overall, the machine is much more responsive.  I’m trying to decide as I write this if I should try to figure out the splash screen problem, or just download a Kubuntu iso and see what the machine is like with a truly fresh Kubuntu installation.  Which, I still need to give some instructions on how to do, don’t I?

To sum up…installing from the repositories was a pretty painless (but time-consuming) experience.  Most of the quirks I ran into won’t be experienced by most people.  While there are a few things left to work out, my system is working and functioning very well–performing better than before, for the most part.

 kisses,

jimbo





::getting ready for edgy, part one

20 10 2006

Edgy Eft, the latest addition to the Ubuntu family of releases, is due to come out on October 26th. Release Candidate 1, which is pretty much the final beta, is already out. If you’re like me, that means that the time to upgrade is getting close. 

Over the next few days–and I’m hoping to keep to that schedule–I’m going to give a few suggestions for what I think will make for a successful upgrade.  Your mileage, as always, may vary.

Before any method of upgrading, it is absolutely necessary for most people to do a backup of their system.  I say most people only because there are some folks out there who don’t care about potentially losing the data that is on their computer.  Or think they don’t, anyway.  Experience has taught me to always back up any computer I am upgrading.  After you make an irrevocable change to a computer, someone always finds that there was something they missed and want back.  If you do regular backups of your computer, you can probably just go with that.  If you don’t, you should–but that’s a topic for another day.

There are a couple of ways you can go about moving up to edgy. You can do a distribution upgrade, which basically entails changing your repos to edgy and using Synaptic, Apt, or Aptitude to upgrade. You could also do a fresh installation of edgy by downloading the .iso image and installing cleanly. There are other ways–dual booting with Windows, another Linux, or even your previous Ubuntu installation–but I won’t really go into those.
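For the impatient, the repo route boils down to something like this rough sketch (assuming you’re coming straight from Dapper; we’ll walk through it properly in a later installment, and you did back up first, right?):

sudo sed -i 's/dapper/edgy/g' /etc/apt/sources.list   # point every repo line at edgy
sudo apt-get update
sudo apt-get dist-upgrade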

I prefer to do clean installations rather than upgrades.  They are a little more work when it comes to preparation and wrap-up, but they generally go a lot smoother than a distribution upgrade.

Think of upgrading via the repositories as more or less equal to using a Windows CD to upgrade from the previous version of the OS. It is theoretically easier, and you (again, theoretically) don’t have to reinstall all of your applications and data.  In the Windows world, I have never seen this go smoothly–either in my own experience or that of others.  Applications break, you run into file conflicts, and you end up with a generally unstable system.

I’ve only tried this once on Ubuntu, and the fact that it didn’t go well was probably my own mistake–I decided to go from Hoary to Breezy on either the day of or shortly after the release, the repositories timed out, and I ended up with a half-complete installation.  Of course, I was still able to boot into the current and stable version, and probably could have finished the job that way, but I decided to save myself the trouble and download an .iso, going with the second method that I’m about to go over.  Based on my experience, though, I don’t recommend this method of upgrading.

The other method of upgrading is a more or less clean installation–download an edgy CD and do a fresh installation from there.  If you keep your /home directory on its own partition, you can have everything up and running with the Edgy Eft in under 45 minutes, all your files and settings and loveliness intact as they were with the lovely and talented Dapper Drake.  If /home isn’t on its own partition, that’s where your backup will come in very handy, though it will probably take a few more hours to restore your data.
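Not sure whether /home lives on its own partition? df will tell you: if /home and / report the same filesystem, they share a partition.

df -h / /home   # compare the 'Filesystem' column for the two mount points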

The next installment in our upgrade saga will be backing up your important files.  Then we’ll briefly go over an upgrade via the repos; and finally, the old-fashioned way.

kisses,

jimbo





::sharing folders between systems with NFS

14 09 2006

edit: following Seb’s question about why I used the version of nfs that I did, I went back and tried it his way. After testing things out for a few minutes, it was apparent that his recommendation did make things faster, so I’m editing the howto. If you’re reading this then the howto is already edited and you can follow the steps listed exactly.
If you’ve already used this howto and are now kicking yourself for taking the advice of a jackass like me, don’t worry–correcting the setup is easy. Steps to remove nfs-user-server and replace it with nfs-kernel-server will be in a comment following the howto.

Many thanks to Seb for bringing this up for me.
kisses,

jimbo

Say you’ve got multiple Ubuntu systems set up and you want to be able to easily browse folders and share files between the remote systems as if they were local. This is pretty easy to do with NFS, the Network File System.

First off, let me say that there are other ways to achieve this same type of goal, but I’ve found NFS to be the easiest way to go, and it’s reliable enough for a home user who doesn’t have a long list of demands for the system.

Let me also say that this howto is based largely on another excellent howto found here. The reason I am making my own howto instead of just giving you a link and telling you to have fun is that 1) I found that it wasn’t 100% complete from an Ubuntu perspective and 2) it provides more information than the typical new user will really need. That being said, it is an excellent resource and deserves to be looked at if you have further questions or want to set up more than a few systems at a time.

Enough preamble–let’s get started!

NFS allows you to mount and browse remote directories as if they are local to the client you’re working on. You can set permissions to read/write, read-only, and others that I’ll go into later on. Any machine can be either client or server, or both, and multiple clients can be set up to access the same directories. More on this later.

First, the server set up. The server, in this context, is the machine that actually has the directories you want to share.

Open a terminal and do the following:

sudo apt-get update

This, of course, updates your package list. Then:

sudo apt-get install nfs-kernel-server

This will install any necessary nfs packages and get daemons running.

Now set up the configuration files. There are three files you’ll work with on the server: /etc/exports, /etc/hosts.deny, and /etc/hosts.allow. Technically, /etc/exports is all you need to set up, but that would make for a very insecure configuration. Setting up the remaining files takes only a few minutes, so there’s no need to chance it and be lazy.

/etc/exports is nothing more than a list of volumes that are shared, how they are shared, and who they are shared to. You can get the full scoop on the setup options from your friendly neighborhood man page (man exports), but what we do here will probably work for you just fine.

Suppose the directory you want to share out is called /mp3, and you want to share it to two clients, giving the first client read/write and the second read-only. Easy as pie.

sudo nano /etc/exports

and then insert something like the following line:

/mp3 192.168.1.1(rw) 192.168.1.2(ro)

What did that do?

The text /mp3 is the directory you’re sharing. The first IP address (ending in .1) is the first client connecting to the directory, and (rw) is the read/write permission set you’re giving that client. The second IP address, as you probably guessed, is the second client, with (ro) being the read-only permission you’re assigning. One gotcha: there must be no space between the address and its options. A space tells the server to apply those options to the entire world, which is probably not what you want.

You can use hostnames instead of IP addresses, but using the IP address is more reliable and secure.

For fun, here’s the complete list of (options) you can assign to each directory:

  • ro — Pretty straightforward.
  • rw — Again, pretty straightforward.
  • no_root_squash — Be careful with this one. By default, any file request made by root on the client will be treated like it has been made by user nobody on the server. If you select no_root_squash then root on the client will be treated like root on the server, and have the same set of permissions. Don’t use this option unless you have a good reason to do so.
  • no_subtree_check — If you only export part of a volume, there is a routine called subtree checking that verifies that the requested file is in the appropriate part of the volume. If you export an entire volume, then disabling this will speed things up a bit.

If you have a larger installation and want to make directories available to a large number of clients at once, you can give access to a range of machines instead of individuals.

To do that, you would put something like this in /etc/exports instead of the line we added up above:

/mp3 192.168.1.0/255.255.255.0(ro)

You just gave read-only access to all of the clients between 192.168.1.0 and 192.168.1.255.

You can also use wild cards instead of host names or IP addresses.

/mp3 *.foo.com(ro)

Bear in mind, though, that this is not the most secure setup. If you’ve only got a few clients you’re dealing with, list them all out.
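One step that’s easy to miss: the server only reads /etc/exports when the NFS daemons start up, so after editing the file you need to tell it to re-export. The exportfs utility comes along with nfs-kernel-server:

sudo exportfs -ra   # re-read /etc/exports and re-export everything in it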

Now the other files:

/etc/hosts.allow and /etc/hosts.deny handle requests to the server like this:

  • The server checks hosts.allow to see if the machine making the request matches an entry in the file. If so, it is allowed access. If not, it moves on to hosts.deny
  • hosts.deny contains a list of machines that are specifically denied access. If the file contains a listing that matches the requesting machine, then it is denied.

If the requester doesn’t appear in either file, then the request is accepted by default. But this huge hole is easy enough to fill and make secure…

The easiest way to control access from unwelcome hosts is by opening hosts.deny

sudo nano /etc/hosts.deny

and adding the following lines:

lockd:ALL
mountd:ALL
rquotad:ALL
statd:ALL

or you can cast a wider net and do something like this:

ALL:ALL

This will deny any request unless it is explicitly allowed, but might cause some headaches if you forget about it and try to install some new service later on down the road.

Save your file, and let’s move on to /etc/hosts.allow. Since we just locked those services down for everyone, here is where we list the clients that should get through to each of them:

sudo nano /etc/hosts.allow

portmap: 192.168.1.1, 192.168.1.2
lockd: 192.168.1.1, 192.168.1.2
mountd: 192.168.1.1, 192.168.1.2
rquotad: 192.168.1.1, 192.168.1.2
statd: 192.168.1.1, 192.168.1.2

Again, you can replace IP addresses with host names if you feel like it.

You can also use wildcards or IP ranges, just as in hosts.deny.

Now, let’s verify that NFS is running:

rpcinfo -p

And in there, you’ll see something like this: 100003 2 udp 2049 nfs

Now let’s mount the remote directory from the client! (Thanks, Mark!)

First, you have to have a mount point. I created a directory in /mnt, but you can do it anywhere. I personally would not recommend putting it in /home, though, especially if you run backups. And of course everyone does. If you do decide to put your mount point in your /home directory, make sure you also add that mount point to the excludes file for your backups.

But let’s just assume you want to put the mount point in /mnt.

cd /mnt

sudo mkdir mp3, or whatever name you want to give it. Since you’re already in /mnt, this creates /mnt/mp3.

now you just mount it:

sudo mount servername:/mp3 /mnt/mp3, where servername is the hostname or IP address of the machine hosting the directory.

Sweet!

What’s that? It didn’t work? If you have trouble, try rebooting the server you just set up. You shouldn’t have to, but it’s probably the easiest thing to do (it worked for me).
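If a full reboot feels heavy-handed, restarting the NFS daemons on the server is often enough. This is the init script name Ubuntu uses for the package we installed earlier:

sudo /etc/init.d/nfs-kernel-server restart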

To unmount…

sudo umount /mnt/mp3

Suppose you want to mount at startup. Just modify your /etc/fstab to read something like this:

# device mountpoint fs-type options dump fsckorder

master.foo.com:/home /mnt/home nfs rw,hard,intr 0 0

The options specified here are worth talking about. rw, of course, is read-write, and can be changed to ro should that suit your purpose. The other two have to do with how everything is handled in the event of a crash. hard will cause the program accessing the NFS-mounted file to hang when the server crashes, and the process can’t be interrupted or killed unless you specify intr. When the NFS server is back up, the program will continue undisturbed where it left off. There are other options you can use, but hard,intr are the most frequently recommended for NFS-mounted file systems.
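You can also test your new fstab line without rebooting; mount -a mounts everything listed in fstab that isn’t already mounted:

sudo mount -a   # any mistakes in the new line will complain here instead of at boot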

There are lots of other options and things you can do to optimize performance, and you can find them all here, or with the help of your good friend Google.

I hope you found this helpful, and as always, would appreciate any feedback you care to give.

Until next time..

kisses,

jimbo





::streaming mp3s

23 08 2006

So, you’ve got a large (completely legal) mp3 collection that you want to share with others in the house or in the world, and you want an easy way to do it, right?

Gnump3d is the answer, my friend, and the answer is blowing in the wind.

or something.

Gnump3d is one of many streaming mp3 servers available from the open source community.  I’ve tried a few, including kPlaylist and Agatha, but I like Gnump3d the most.  It takes just a few minutes to configure, is easy to manage, and is very robust without being bloated.

Ubuntu users can install Gnump3d from the repositories, or you can download the source from the main site.  Either way gives essentially the same result–the only difference is that downloading directly from the site requires you to go through the hell of extracting the files and running the ever-tedious make install command.

egads
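The repository route is the lazy (read: correct) one:

sudo apt-get install gnump3d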

Once installed, you configure the program by editing /etc/gnump3d/gnump3d.conf, which takes all of about 10 minutes.
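A rough sketch of what that edit looks like, with the two settings most people touch. I’m going from memory on the exact option names, so trust the comments in the shipped config file over me:

sudo nano /etc/gnump3d/gnump3d.conf
#   root = /home/jimbo/mp3   <- point this at your music collection
#   port = 8888              <- the port the built-in web server listens on
sudo /etc/init.d/gnump3d restart   # pick up the changes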

In addition to mp3 support, Gnump3d will also handle your OGG and movie files, and much, much more!

It is also well supported, with a not-overwhelmingly-active mailing list backed by a fairly large community.  You’ll frequently find posts by the program’s main author, which I like a lot.

give it a whirl, you’ll like it.

kisses,

jimbo