::sharing folders between systems with NFS

14 09 2006

edit: following Seb’s question about why I used the version of nfs that I did, I went back and tried it his way. After testing things out for a few minutes, it was apparent that his recommendation did make things faster, so I’m editing the howto. If you’re reading this then the howto is already edited and you can follow the steps listed exactly.
If you’ve already used this howto and are now kicking yourself for taking the advice of a jackass like me, don’t worry, correcting the setup is easy. Steps to remove nfs-user-server and replace it with nfs-kernel-server will be in a comment following the howto.

Many thanks to Seb for bringing this up for me.


Say you’ve got multiple Ubuntu systems set up and you want to be able to easily browse folders and share files between the remote systems as if they were local. This is pretty easy to do with NFS, the Network File System.

First off, let me say that there are other ways to achieve this same type of goal, but I’ve found NFS to be the easiest way to go, and reliable enough for a home user who doesn’t have a long list of demands for the system.

Let me also say that this howto is based largely on another excellent howto found here. The reason I am making my own howto instead of just giving you a link and telling you to have fun is that 1) I found that it wasn’t 100% complete from an Ubuntu perspective and 2) it provides more information than the typical new user will really need. That being said, it is an excellent resource and deserves to be looked at if you have further questions or want to set up more than a few systems at a time.

Now, let’s get started!

NFS allows you to mount and browse remote directories as if they are local to the client you’re working on. You can set permissions to read/write, read-only, and others that I’ll go into later on. Any machine can be either client or server, or both, and multiple clients can be set up to access the same directories. More on this later.

First, the server set up. The server, in this context, is the machine that actually has the directories you want to share.

Open a terminal and do the following:

sudo apt-get update

This, of course, updates your package list. Then:

sudo apt-get install nfs-kernel-server

This will install any necessary nfs packages and get daemons running.

Now set up the configuration files. There are three files you’ll work with on the server: /etc/exports, /etc/hosts.deny, and /etc/hosts.allow. Technically, /etc/exports is all you need to set up, but that would make for a very insecure configuration. Setting up the remaining files takes only a few minutes, so there’s no need to chance it and be lazy.

/etc/exports is nothing more than a list of volumes that are shared, how they are shared, and who they are shared to. You can get the full scoop on the setup options from your friendly neighborhood man page (man exports), but what we do here will probably work for you just fine.

Suppose the directory you want to share out is called /mp3, and you want to share it to two clients, giving the first client read/write access and the second read-only. Easy as pie.

sudo nano /etc/exports

and then insert something like the following line:

/mp3 192.168.0.1(rw) 192.168.0.2(ro)

What did that do?

The text /mp3 is the directory you’re sharing. The first IP address (192.168.0.1) is the first client connecting to the directory, and (rw) is the permission set you’re giving the client. The second IP address, as you probably guessed, is the second client, with (ro) being the read-only permission you’re assigning.

You can use hostnames instead of IP addresses, but using the IP address is more reliable and secure.

For fun, here’s the complete list of (options) you can assign to each directory:

  • ro — Pretty straightforward.
  • rw — Again, pretty straightforward.
  • no_root_squash — Be careful with this one. By default, any file request made by root on the client will be treated like it has been made by user nobody on the server. If you select no_root_squash then root on the client will be treated like root on the server, and have the same set of permissions. Don’t use this option unless you have a good reason to do so.
  • no_subtree_check — If you only export part of a volume, there is a routine called subtree checking that verifies that the requested file is in the appropriate part of the volume. If you export an entire volume, then disabling this will speed things up a bit.
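Putting a couple of those options together, here is a hypothetical /etc/exports line for the /mp3 share (the IP addresses are placeholders for your own clients):

```
/mp3 192.168.0.1(rw,no_subtree_check) 192.168.0.2(ro,no_subtree_check)
```

Note that there is no space between each client and its options; a space would tell NFS to export to that client with the default options and to everyone else with the options in parentheses.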

If you have a larger installation and want to make directories available to a large number of clients at once, you can give access to a range of machines instead of individuals.

To do that, you would put something like this in /etc/exports instead of the text we did up above:

/mp3 192.168.0.0/255.255.255.0(ro)

You just gave read-only access to all of the clients between 192.168.0.0 and 192.168.0.255.

You can also use wildcards instead of host names or IP addresses.

/mp3 *.foo.com(ro)

Bear in mind, though, that this is not the most secure setup. If you’ve only got a few clients you’re dealing with, list them all out.

Now the other files:

/etc/hosts.allow and /etc/hosts.deny handle requests to the server like this:

  • The server first checks hosts.allow to see if the machine making the request matches an entry in the file. If so, it is allowed access. If not, it moves on to hosts.deny
  • hosts.deny contains a list of machines that are specifically denied access. If the file contains a listing that matches the requesting machine, then it is denied.

If the requester doesn’t appear in either file, then the request is accepted by default. But this huge hole is easy enough to fill and make secure…
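To make that order concrete, here is a toy shell sketch of the decision sequence. This is illustrative only: the real check is done inside the tcp_wrappers library, not by a script, and the IP addresses are made-up examples.

```shell
# Toy model of the hosts.allow / hosts.deny decision order.
check() {
  host="$1"; allow="$2"; deny="$3"
  case " $allow " in *" $host "*) echo "allowed"; return ;; esac  # hosts.allow match
  case " $deny "  in *" $host "*) echo "denied";  return ;; esac  # hosts.deny match
  echo "allowed"  # matched neither file: accepted by default (the hole)
}

check 192.168.0.1 "192.168.0.1 192.168.0.2" ""  # explicitly allowed
check 192.168.0.9 "" "192.168.0.9"              # explicitly denied
check 10.1.2.3    "" ""                          # in neither file: still allowed
```

The third call is exactly the default-allow hole the next steps close up.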

The easiest way to control access from unwelcome hosts is by opening hosts.deny

sudo nano /etc/hosts.deny

and adding the following lines:

portmap: ALL
lockd: ALL
mountd: ALL
rquotad: ALL
statd: ALL
or you can cast a wider net and do something like this:

ALL: ALL
This will deny any request unless it is explicitly allowed, but might cause some headaches if you forget about it and try to install some new service later on down the road.

Save your file, and let’s move on to /etc/hosts.allow

sudo nano /etc/hosts.allow

portmap: 192.168.0.1 , 192.168.0.2

Again, you can replace the IP addresses with host names if you feel like it.

You can also use wildcards or IP ranges, just as in hosts.deny.
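The howto this post is based on also lists the other NFS daemons in hosts.allow, since lockd, mountd, rquotad, and statd handle requests too. A fuller (hypothetical) hosts.allow for our two example clients would look like this:

```
portmap: 192.168.0.1 , 192.168.0.2
lockd: 192.168.0.1 , 192.168.0.2
mountd: 192.168.0.1 , 192.168.0.2
rquotad: 192.168.0.1 , 192.168.0.2
statd: 192.168.0.1 , 192.168.0.2
```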

Now, let’s verify that NFS is running:

rpcinfo -p

And in the output, you’ll see a line something like this:

100003 2 udp 2049 nfs
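If you want to check for that line from a script, you can grep the portmapper listing. On a real server you would pipe the output of rpcinfo -p directly; a captured sample stands in here so the snippet is self-contained.

```shell
# Captured sample of rpcinfo -p output, for illustration.
sample='100000 2 tcp 111 portmapper
100003 2 udp 2049 nfs
100005 1 udp 617 mountd'

# Look for a line ending in "nfs" (real usage: rpcinfo -p | grep -q ' nfs$')
if printf '%s\n' "$sample" | grep -q ' nfs$'; then
  echo "nfs is registered with the portmapper"
fi
```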

Now let’s mount the remote directory from the client! (Thanks, Mark!)

First, you have to have a mount point. I created a directory in /mnt, but you can do it anywhere. I personally would not recommend putting it in /home, though, especially if you run backups. And of course everyone does. If you do decide to put your mount point in your /home directory, make sure you also add that mount point to the excludes file for your backups.

But let’s just assume you want to put the mount point in /mnt.

cd /mnt

sudo mkdir mp3, or whatever name you want to give it. (No leading slash here; since you’re already in /mnt, mkdir /mp3 would create the directory at the root of the filesystem instead.)

now you just mount it:

sudo mount servername:/mp3 /mnt/mp3, where servername is the hostname or IP address of the machine hosting the directory.


What’s that? It didn’t work? If you have trouble, try rebooting the server you just set up. You shouldn’t have to, but it’s probably the easiest thing to do (it worked for me).

To unmount…

sudo umount /mnt/mp3

Suppose you want to mount at startup. Just modify your /etc/fstab to read something like this:

# device mountpoint fs-type options dump fsckorder

master.foo.com:/home /mnt/home nfs rw,hard,intr 0 0

The options specified here are worth talking about. rw, of course, is read-write, and can be changed to ro should that suit your purpose. The other two have to do with how everything is handled in the event of a crash. hard will cause the program accessing the NFS-mounted file to hang when the server crashes. The process can’t be interrupted or killed unless you specify intr. When the NFS server is back up, the program will continue undisturbed where it left off. There are other options you can use, but hard,intr are the most frequently recommended for NFS-mounted file systems.
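Tying that back to our example share, a hypothetical fstab entry using the same made-up server and mount point from earlier would be:

```
servername:/mp3  /mnt/mp3  nfs  rw,hard,intr  0  0
```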

There are lots of other options and things you can do to optimize performance, and you can find them all here, or with the help of your good friend Google.

I hope you found this helpful, and as always, would appreciate any feedback you care to give.

Until next time..






7 responses

28 09 2006

Hi Jimbo. Any particular reason why you chose nfs-user-server rather than nfs-kernel-server? I was under the impression that the latter was better (certainly faster). Also, the kernel server is in main, whereas the user server is in universe, which suggests that Canonical prefer nfs-kernel-server.

28 09 2006


I have to admit that I’m ignorant to the difference between the two, so thanks for the heads up.

I’ll take a look at nfs-kernel-server, and maybe revise the post.



7 10 2006

Now, to remove nfs-user-server and replace it with nfs-kernel-server is pretty simple stuff. I almost didn’t include it, but then I remembered what it’s like to find a mistake in a howto and not know how to fix it.

Honestly, I’m not sure if it is even necessary to remove user-server, or if kernel-server will just trump it. But I removed it and saw an improvement in performance.

These actions will take place on the machine you’re hosting from.

sudo apt-get remove nfs-user-server

Once that is complete:

sudo apt-get install nfs-kernel-server

You’ll get a prompt that a few files are going to be replaced, asking what you want to do with them. Leave your existing files in place.

That should be all there is to it. I was able to remount the share from my laptop within a few minutes by just doing a

sudo mount -a



5 12 2006
13 01 2007


Unless I’m mistaken, there’s a wee mistake in the text.

Shouldn’t “Now let’s mount the remote directory from the server!” be “Now let’s mount the remote directory from the client”?

16 08 2007

sure would be cool if there were gnome or other gui tools to do this – time could be saved

9 04 2009
Oliver Treend

Very good article – it will come in handy some time!

Just a couple of typos I spotted – “The other two have to do with how everything in handled in the event of a crash.” I think should read “The other two have to do with how everything is handled in the event of a crash.”

And your link to Google towards the end is missing the “http://” at the start, so it’s creating 404 errors on your blog.


Thanks again
