Particularly for Windows-less environments where Samba is not needed, NFS is an easy and reliable way to share files between Linux and UNIX systems. Vincent Danen takes you through the setup.
NFS is an excellent way of sharing files between Linux and other UNIX systems. While Samba is a great choice due to its compatibility with Windows, if you're in a Windows-less environment, NFS may be the better choice.
NFS allows machines to mount remote filesystems without authentication, even at boot, which is great if you have a cluster of systems or if you want a centralized home directory system (using an NFS-mounted directory for home directories keeps your configurations and files identical across multiple systems).
NFS is also very easy to set up. To begin, you need to install the NFS package, so on Fedora or Red Hat Enterprise Linux and other similar systems, install the nfs-utils package:
# yum install nfs-utils
Next, you will need to edit /etc/exports, which is where we define which filesystems can be accessed remotely. A sample /etc/exports may look like this:
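A sketch of such an exports file, reconstructed from the setup described below (the hostnames and network are the article's examples):

```
/srv    hosta.domain.com(rw) hostb.domain.com(ro)
/home   192.168.1.0/255.255.255.0(rw)
```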
What this /etc/exports does is export the /srv directory on the server to the hosta.domain.com computer as read/write and to hostb.domain.com as read-only. It also exports /home as read/write to any computer in the 192.168.1.0 network (192.168.1.0 being the network address and 255.255.255.0 being the netmask).
There are other options you can supply on a per-host or per-network basis, including the no_root_squash option, which allows root on a client machine to write files to the server as root; by default, NFS will map any requests from root on the client to the 'nobody' user on the server.
Next, check /etc/hosts.allow and /etc/hosts.deny. NFS will check these files for access controls to the server. This is particularly necessary if you are using wildcards or broad network specifications in /etc/exports; using hosts.allow and hosts.deny you can fine-tune which clients do and don't have access. For instance, you may add in /etc/hosts.deny:
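A common deny-by-default convention (an assumption here, since the original entry is not shown) is to block the portmapper for everyone in hosts.deny and then whitelist specific clients in hosts.allow:

```
portmap: ALL
```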
and then in /etc/hosts.allow:
portmap: 192.168.1.1, 192.168.1.2, 192.168.1.3
This would only allow the hosts specified in /etc/hosts.allow to connect to the portmap service. You can get more fine-grained and also add entries for lockd, rquotad, mountd, and statd — all other NFS-related services.
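A fuller /etc/hosts.allow covering those services might look like this (a sketch, reusing the same example client addresses for every service):

```
portmap: 192.168.1.1, 192.168.1.2, 192.168.1.3
lockd:   192.168.1.1, 192.168.1.2, 192.168.1.3
rquotad: 192.168.1.1, 192.168.1.2, 192.168.1.3
mountd:  192.168.1.1, 192.168.1.2, 192.168.1.3
statd:   192.168.1.1, 192.168.1.2, 192.168.1.3
```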
Finally, to start NFS sharing, on the server you need to start a few services:
# service portmap start
# service nfs start
# service nfslock start
# service rpcbind start
# service rpcidmapd start
On newer systems, portmap has been replaced by rpcbind (which is why both appear in the list above); start whichever service your distribution provides.
To see what filesystems are exported, use the exportfs command; if you've made changes to /etc/exports, use exportfs -ra to force NFS to re-read the configuration. To make sure that NFS is running, use the rpcinfo command; if it returns a list of services and addresses being listened to, you know it is running.
Finally, if you are running iptables on the server as a firewall, you will need to pin down which ports the NFS services listen on. By default, several of them use random unused ports, with the portmapper (portmap or rpcbind) telling clients which ports to connect to. This is a major difference from NFSv4, which uses only TCP port 2049, so how much of this matters depends on which version of NFS you plan to use or enforce. On Fedora or Red Hat Enterprise Linux, static ports can be assigned by editing /etc/sysconfig/nfs. By default, everything there is commented out, so the following is what we want to uncomment and define:
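These are the stock variable names from the RHEL/Fedora /etc/sysconfig/nfs file; the port numbers below are the ones the firewall rules that follow open:

```
RQUOTAD_PORT=875
LOCKD_TCPPORT=32803
LOCKD_UDPPORT=32769
MOUNTD_PORT=892
STATD_PORT=662
```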
This will force static ports for the above services. The next step is to open the firewall on these ports, which can be done by editing /etc/sysconfig/iptables (again keeping in mind this is on a RHEL system):
# the following are for NFS
-A RH-Firewall-1-INPUT -s 192.168.1.0/24 -m state --state NEW -p udp --dport 111 -j ACCEPT
-A RH-Firewall-1-INPUT -s 192.168.1.0/24 -m state --state NEW -p tcp --dport 111 -j ACCEPT
-A RH-Firewall-1-INPUT -s 192.168.1.0/24 -m state --state NEW -p tcp --dport 2049 -j ACCEPT
-A RH-Firewall-1-INPUT -s 192.168.1.0/24 -m state --state NEW -p tcp --dport 32803 -j ACCEPT
-A RH-Firewall-1-INPUT -s 192.168.1.0/24 -m state --state NEW -p udp --dport 32769 -j ACCEPT
-A RH-Firewall-1-INPUT -s 192.168.1.0/24 -m state --state NEW -p tcp --dport 892 -j ACCEPT
-A RH-Firewall-1-INPUT -s 192.168.1.0/24 -m state --state NEW -p udp --dport 892 -j ACCEPT
-A RH-Firewall-1-INPUT -s 192.168.1.0/24 -m state --state NEW -p tcp --dport 875 -j ACCEPT
-A RH-Firewall-1-INPUT -s 192.168.1.0/24 -m state --state NEW -p udp --dport 875 -j ACCEPT
-A RH-Firewall-1-INPUT -s 192.168.1.0/24 -m state --state NEW -p tcp --dport 662 -j ACCEPT
-A RH-Firewall-1-INPUT -s 192.168.1.0/24 -m state --state NEW -p udp --dport 662 -j ACCEPT
After these changes are made, restart the firewall and the NFS services:
# for i in iptables portmap nfs; do service $i restart; done
At this point, your NFS server is set up and ready to accept connections from remote clients, which can be tested by mounting one of the exported filesystems on the client:
# mkdir -p /server/srv
# mount -t nfs server.domain.com:/srv /server/srv
If mount does in fact mount the remote filesystem, everything is working as it should.
NFS is really easy to use, and it works well. Being able to mount NFS filesystems at boot is a great boon; your users can work with NFS-mounted filesystems without even being aware that they are there, and without any direct intervention on their part, which is handy.
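To get such a boot-time mount, an /etc/fstab entry on the client along these lines (a sketch, reusing the example server and mount point from the mount command above) would do it:

```
server.domain.com:/srv  /server/srv  nfs  defaults  0 0
```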