
Set up easy file sharing with NFS on Linux

Particularly for Windows-less environments where Samba is not needed, NFS is an easy and reliable way to share files between Linux and UNIX systems. Vincent Danen takes you through the setup.

NFS is an excellent way of sharing files between Linux and other UNIX systems. While Samba is a great choice due to the compatibility with Windows, if you're in a Windows-less environment, NFS may be a better choice.

NFS allows machines to mount exports without authentication, even at boot time, which is great if you have a cluster of systems or if you want a centralized home directory setup (using an NFS-mounted directory for home directories keeps your configurations and files identical across multiple systems).
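As a sketch of that centralized home directory idea, a client could mount the server's /home at boot with an /etc/fstab entry along these lines (the server name and mount options here are illustrative, not from the article):

```
# /etc/fstab on the client -- mount the server's /home at boot
# "server.domain.com" and the option list are example values
server.domain.com:/home  /home  nfs  rw,hard,intr  0 0
```

The hard option makes the client retry indefinitely if the server goes away, which is usually what you want for home directories.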

NFS is also very easy to set up. To begin, you need to install the NFS package, so on Fedora or Red Hat Enterprise Linux and other similar systems, install the nfs-utils package:

# yum install nfs-utils
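If you also want the NFS services to start automatically at boot on these systems, you would typically enable them with chkconfig (a sketch assuming SysV-style init, as on the Fedora/RHEL releases discussed here):

```
# chkconfig portmap on
# chkconfig nfs on
# chkconfig nfslock on
```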

Next, you will need to edit /etc/exports, which is where you define which filesystems can be remotely accessed. A sample /etc/exports may look like this:

/srv   hosta.domain.com(rw) hostb.domain.com(ro)
/home  192.168.1.0/255.255.255.0(rw)

What this /etc/exports does is export the /srv directory on the server to the hosta.domain.com computer as read/write and to hostb.domain.com as read-only. It also exports /home as read/write to any computer in the 192.168.1.0 network (192.168.1.0 being the network address and 255.255.255.0 being the netmask).

There are other options you can supply on a per-host or per-network basis, including the no_root_squash option, which disables root squashing: by default, NFS maps any requests from root on the client to the 'nobody' user on the server, and no_root_squash instead lets root on a client machine write files to the server as root.
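For instance, a hypothetical exports line combining several of these per-host options (the hostname is illustrative; sync and no_subtree_check are common companion options) might look like:

```
# /etc/exports -- a trusted admin host gets rw without root squashing
/srv  admin.domain.com(rw,no_root_squash,sync,no_subtree_check)
```

Use no_root_squash sparingly; it effectively extends root on the client to root on the export.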

Next, check /etc/hosts.allow and /etc/hosts.deny. NFS will check these files for access controls to the server. This is particularly necessary if you are using wildcards or broad network specifications in /etc/exports; using hosts.allow and hosts.deny you can fine-tune which clients do and don't have access. For instance, you may add in /etc/hosts.deny:

portmap:ALL

and then in /etc/hosts.allow:

portmap: 192.168.1.1, 192.168.1.2, 192.168.1.3

This would only allow the hosts specified in /etc/hosts.allow to connect to the portmap service. You can get more fine-grained and also add entries for lockd, rquotad, mountd, and statd -- all other NFS-related services.
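A more fine-grained /etc/hosts.allow along those lines (using the same example addresses) might look like this, with matching ALL entries for each service in /etc/hosts.deny:

```
portmap: 192.168.1.1, 192.168.1.2, 192.168.1.3
lockd:   192.168.1.1, 192.168.1.2, 192.168.1.3
rquotad: 192.168.1.1, 192.168.1.2, 192.168.1.3
mountd:  192.168.1.1, 192.168.1.2, 192.168.1.3
statd:   192.168.1.1, 192.168.1.2, 192.168.1.3
```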

Finally, to start NFS sharing, on the server you need to start a few services:

# service portmap start
# service nfs start
# service nfslock start
# service rpcidmapd start

(On systems where rpcbind has replaced portmap, start rpcbind instead of portmap.)

On newer systems, portmap is probably deprecated in favour of portreserve; in that case you would use service portreserve start instead.

To see what filesystems are exported, use the exportfs command; if you've made changes to /etc/exports, use exportfs -ra to force NFS to re-read the configuration. To make sure that NFS is running, use the rpcinfo command; if it returns a list of services and addresses being listened to, you know it is running.
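Putting those checks together, a typical verification session on the server might look like this (a sketch; the exact output shape varies by system):

```
# exportfs -v           # list current exports with their options
# exportfs -ra          # re-export everything after editing /etc/exports
# rpcinfo -p localhost  # list registered RPC services and their ports
```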

Finally, if you are running iptables on the server as a firewall, you will need to pin the NFS services to fixed ports. By default, most of them listen on random unused ports, with portreserve/portmap telling clients which ports to connect to. This is a major difference between NFSv3, where the above is true, and NFSv4, which uses only TCP port 2049; how much of this applies depends on which version of NFS you plan to use or enforce. On Fedora or Red Hat Enterprise Linux, ports are set in /etc/sysconfig/nfs. By default everything there is commented out, so the following is what we want to uncomment and define:

RQUOTAD_PORT=875
LOCKD_TCPPORT=32803
LOCKD_UDPPORT=32769
MOUNTD_PORT=892
STATD_PORT=662
STATD_OUTGOING_PORT=2020

This will force static ports for the above services. The next step is to open the firewall on these ports, which can be done by editing /etc/sysconfig/iptables (again keeping in mind this is on a RHEL system):

# the following are for NFS
-A RH-Firewall-1-INPUT -s 192.168.1.0/24 -m state --state NEW -p udp --dport 111 -j ACCEPT
-A RH-Firewall-1-INPUT -s 192.168.1.0/24 -m state --state NEW -p tcp --dport 111 -j ACCEPT
-A RH-Firewall-1-INPUT -s 192.168.1.0/24 -m state --state NEW -p tcp --dport 2049 -j ACCEPT
-A RH-Firewall-1-INPUT -s 192.168.1.0/24 -m state --state NEW -p tcp --dport 32803 -j ACCEPT
-A RH-Firewall-1-INPUT -s 192.168.1.0/24 -m state --state NEW -p udp --dport 32769 -j ACCEPT
-A RH-Firewall-1-INPUT -s 192.168.1.0/24 -m state --state NEW -p tcp --dport 892 -j ACCEPT
-A RH-Firewall-1-INPUT -s 192.168.1.0/24 -m state --state NEW -p udp --dport 892 -j ACCEPT
-A RH-Firewall-1-INPUT -s 192.168.1.0/24 -m state --state NEW -p tcp --dport 875 -j ACCEPT
-A RH-Firewall-1-INPUT -s 192.168.1.0/24 -m state --state NEW -p udp --dport 875 -j ACCEPT
-A RH-Firewall-1-INPUT -s 192.168.1.0/24 -m state --state NEW -p tcp --dport 662 -j ACCEPT
-A RH-Firewall-1-INPUT -s 192.168.1.0/24 -m state --state NEW -p udp --dport 662 -j ACCEPT

After these changes are made, restart the firewall and the NFS services:

# for i in iptables portmap nfs; do service $i restart; done

At this point, your NFS server is set up and ready to accept connections from remote clients, which can be tested by mounting one of the exported filesystems on the client:

# mkdir -p /server/srv
# mount -t nfs server.domain.com:/srv /server/srv

If mount does in fact mount the remote filesystem, everything is working as it should.
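To make that client mount survive reboots, you could add a matching /etc/fstab entry on the client (same illustrative names as above; the option list is an assumption, not from the article):

```
# /etc/fstab on the client -- remount the test export at boot
server.domain.com:/srv  /server/srv  nfs  rw,hard,intr  0 0
```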

NFS is easy to use, and it works well. Being able to mount NFS filesystems at boot is a great boon: your users can have NFS-mounted filesystems without even being aware they are there, and without any direct intervention on their part.

About

Vincent Danen works on the Red Hat Security Response Team and lives in Canada. He has been writing about and developing on Linux for over 10 years and is a veteran Mac user.

18 comments
dwhixon

I don't claim to understand all the instructions, but I followed the directions verbatim, substituting only the hostnames and directories I wanted to share, and it worked perfectly for sharing between RHEL VMs.


Thanks Vincent!

cearrach

I think it's fairly important to point out that in /etc/exports there cannot be any space between the clients string and the bracketed options. Leaving a space or tab is a very common mistake, which completely changes the meaning of a line.

julioa.morales

Too much typing, I wonder why MS Windows is the most adopted OS in the world with "there is a GUI for that..."

lefty.crupps

Network file systems may work fine on desktop machines which are (usually, in my experience) either On or Off. But laptops and other mobile devices often have their networking go up and down, they get put into sleep/hibernate, and they're often not at the same location all the time (meaning the connection may fail, possibly after waking from sleep). I've been looking for a good network file system which can overcome these standard behaviours; is the long-standing NFS my solution? I've tried with SSHFS which seems to hang far too often after sleep; scripting the connect/disconnect doesn't work out; AutoFS+SSHFS+Avahi hasn't worked for me... this seems like a pretty big issue that should have a solution but I've not found it. Suggestions? I'm running Debian Testing, 2.6.32...

Neon Samurai

Not needing authentication at time of connection does enable boot-time mounting for some interesting setups, but how is security managed?

- WorkA with username:username is the client system.
- ServB with username:"users" is the NFS server with share "shared". Group "users" does not exist on WorkA; however, all users on ServB are members of a common "users" group. The share "shared" gives group "users" read/write access.

How is the NFS share going to write files as username:"users", or what default user:group will it use locally for remote client connections?

j-mart

If you are not interested in expanding your technical knowledge there are plenty of sites for sewing, knitting, cooking, or if these are still too technical for you there are plenty of forums that deal with gossip, entertainment news (gossip again). You may find these more suitable for you.

Neon Samurai

If you want NFS share management through a GUI utility, you can certainly go get the app for that. If I remember correctly, Mandriva will have a GUI app included in the Draketools all "control panel" like. Regardless of OS though, typing is so much faster for this stuff. Mount on *nix based systems. Net or win+r on Windows based systems. .. But, don't let any of that get in the way of taking a cheap shot at your own ignorance. ;)

Slayer_

But the whole article looks geared to *nix administrators, not the average Joe.

techrepublic@

... and it works well on my notebook and netbook computers. One down side is that sshfs is slower than NFS. And (re)connecting can take several (5 to 10) seconds.

tbmay

Usually kerberos is added to the mix. There are a number of tricks you can use, though. Honestly, I only use NFS for exports that don't need any authentication. I've tested the protocols pretty extensively, and while Samba does give up some performance when you're using it between unix boxes, it really doesn't give up enough to make adding layers on top of NFS worth fooling with any more. That's just my experience though. I'd love to hear differing opinions.

vdanen

No, you're right... the setup isn't necessarily easy (well, yeah, it kinda is, but that's not what the implication is). _Using_ NFS is easy, really easy. Using it is easy... setting it up... your mileage may vary. But note where the word "easy" appears in the tip title. =)

Neon Samurai

So your best bet is to have no security or have an LDAP server centralizing user accounts. Good to know. I was looking at it for a home NAS box setup shared between three OSes, except that NFS conflicts with the Harden packages in Debian. Still, I'd love to get rid of, or at least significantly reduce, the use of Samba.

Neon Samurai

Given how other packages have handled IPv4 and IPv6 (iptables, for example), I'm guessing the only change in the config file will be IPv6 hex addresses instead of IPv4 dotted quads.

Slayer_

I don't know much about IPv6, especially how to make addresses with it, but v4 was specified in the config files. What if you are v6?

tbmay

...a Windows box in the mix, and a reasonable one even if there's not. There is a free Unix-tools-for-Windows program that will let you use NFS on a Windows box. NFS is faster than Samba, but not by much. NFS is much simpler if you're on a completely trusted network; your house will probably fall into that category. Other examples would be lock-and-key, isolated networks in the datacenter. Of course, iSCSI targets are faster still, assuming we're talking gigabit connections for all the protocols. I digress though.

Neon Samurai

In this case, I'm looking at a family NAS box shared between at least three OS platforms. I was considering going with native protocols: OS X uses its own, Windows uses CIFS/Samba, *nix uses NFS. The outcome seems to be CIFS (I hope Samba4 turns up in Debian 6 soon). On the server side, the NAS would have to account for users, which means authentication, pulling NFS and rsync off the list. On the client side, Debian throwing a warning over NFS conflicting with the security Harden packages keeps it off the list. I don't like adding things into my systems, let alone specifically listed insecure things.

vdanen

A future tip addresses using NFS with kerberos. And for a kerberized environment, it works really well (I wouldn't necessarily go through the hassle of setting up kerberos just for a secure NFS). There are other non-kerberos security options for NFSv4 as well (but I've not played with those yet). And you're right... NFS, out of the box, is about as secure as anonymous FTP. Don't hand out important exports with rw access; make them ro and enjoy the speed and convenience (coupled with automounting, it works awesome).

tbmay

It doesn't help NFS's basic problem. Client.... "I'm user 1000, give me access to the export user 1000 has access to." Server.... "Done." Don't use plain old NFS in insecure environments. There are a lot of ways to fix that but "out-of-the-box" it is an insecure protocol.