Way back in the early 1980s, the single biggest problem with computers was getting your data from one machine to another. Ethernet was all the rage, and yet we still didn’t have a really elegant way to share files. Worse yet, if you went from one PC to the next, everything was different on each machine. For the most part, if you wanted to maintain a common working environment, you carried it around with you on floppies!

In 1984, Sun Microsystems introduced the Network File System (NFS) and virtually eliminated the problems associated with sharing files across a network. Certainly, there were other ways to share files, but none was nearly as clean and intuitive as the NFS model. NFS consists of a network file server and a network file I/O client. The server runs on the machine that shares its files, while the client runs on the machine that is given access to them.

The network file I/O client presents the user with interfaces identical to those of local file systems. The operation of the NFS client and server is transparent to the user application, and this is really the supreme benefit of using NFS. The user, for all practical purposes, accesses remote file systems exactly as if they were local and might never realize that the files are not. Later, I will explain why this can also be undesirable. In this Daily Drill Down, I will discuss both performance and security issues surrounding NFS and show you some simple solutions for getting around them.

The NFS client operation
When an application attempts to write to a remote file, the kernel on the application’s local machine invokes the NFS biod (block Input/Output daemon). The biod batches several NFS write requests until it fills an entire block. The block size of a file system is specified when the file system is created (with mkfs or newfs or some similar utility). The NFS transfer size ideally should match the file system block size, and it can be adjusted via the wsize parameter for writes (and rsize for reads).
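As a concrete sketch of setting those transfer sizes, here is what a manual mount might look like. The host name and paths are hypothetical, and the exact option syntax varies somewhat from one UNIX flavor to another:

```shell
# Mount a remote export with explicit NFS transfer sizes.
# "fileserver" and both paths are hypothetical; substitute your own.
# rsize/wsize are given in bytes and should ideally match the
# server file system's block size (8 KB in this example).
mount -o rsize=8192,wsize=8192 fileserver:/export/home /home
```

Check your own mount or mount_nfs manual page for the option spelling your system expects.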

The NFS client machine caches blocks of data that it reads from remote machines. A request to read a remote file causes the kernel to first check to see if the requested data has been cached locally. If so, then it just returns the data. Otherwise, it sends a request to the server machine. Ideally, the client machine will read enough blocks ahead to prevent the applications that are running on the client machine from ever having to wait for the remote server.
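You can get a rough sense of how well the client-side caching is working from the client RPC statistics. This is a sketch of the checks I mean; the output format of nfsstat varies by platform:

```shell
# Per-call NFS client RPC statistics: if the number of read RPCs is
# low relative to what the applications are actually reading, the
# local cache and read-ahead are absorbing most of the requests.
nfsstat -c

# How many block I/O daemons are running on this client
# (the brackets keep grep from matching its own process entry)?
ps -e | grep '[b]iod'
```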

The NFS server operation
When a remote client attempts to write data to an NFS server, the server accepts the UDP (User Datagram Protocol) message from the network, extracts the data from it, and writes it to the local file system’s I/O buffer RAM. It is eventually written to local disk.

The remote read request is very similar. The server reads the requested data from the local machine and then sends it back across the network to the client.

I don’t know how the average user feels about their virtual work environment, but I like it to be identical on every machine I use. NFS makes this possible. Wherever I go, my home directory, /home/edg, is right there with all my usual files. My .cshrc and .xinitrc files are present, no matter which machine I happen to be using at the moment. Also, if I change one file, I change them all!

Performance issues
NFS has a number of potential bottlenecks that will reduce performance. First is the network connection itself. A typical Ultra Wide SCSI interface can deliver 40 MB per second, and EIDE can deliver 16.7 MB per second. Even if you had a traffic-free 100BaseT network, you could expect no better than 12.5 MB per second from your network connection. So already you can see that the network itself is a limitation.
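That 12.5 MB per second figure comes straight from dividing the 100BaseT wire rate by 8 bits per byte, a quick check you can do anywhere awk is installed:

```shell
# 100BaseT carries 100 megabits per second; at 8 bits per byte that
# is at most 12.5 MB/s -- well under Ultra Wide SCSI's 40 MB/s.
awk 'BEGIN { printf "%.1f\n", 100 / 8 }'
```

And that is the theoretical ceiling, before any protocol overhead or competing traffic.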

Worse yet, what if your client system doesn’t run enough of the biod daemons or your server doesn’t run enough nfsd daemon tasks? Well, if you have too few biods, your client machine will not cache data efficiently, and your applications will have to wait on I/O. You want to avoid this situation whenever possible, especially when you are doing very large tasks, such as billing. Likewise, if there are not enough nfsd tasks running on the server machine, the UDP requests are queued up by the server machine’s kernel. Again, the application program must wait for data.

Fortunately, you can set both machines to run as many of either daemon as you would like. Running too many biod processes doesn’t really hurt anything; however, they do take up swap space and process table space. Running too many nfsd tasks will cause unnecessary context switching, but this is a far better condition than having too few of them. Hewlett-Packard recommends the following test (for HPUX-9) to determine a reasonable number of nfsds:
Type: netstat -s | grep overflow

This command will return the number of socket overflows. A number in the tens or hundreds is okay, but a number in the thousands or higher says that you need to run more nfsds. By watching the CPU utilization of various nfsd processes (using the top or ps command), you can determine whether you have started too many of them.
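Putting those two checks together, a short diagnostic session on the server might look like this (BSD-style ps flags shown; your system's flags may differ):

```shell
# Total socket overflows since boot: a count in the thousands or
# higher suggests the server needs more nfsd tasks.
netstat -s | grep overflow

# Count the running nfsd tasks; the bracket trick keeps grep
# from matching its own process entry in the listing.
ps -e | grep -c '[n]fsd'
```

Run the overflow check again after raising the nfsd count to confirm the number has stopped climbing.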

Unlike most of the Microsoft-type operating systems, UNIX allows the file system to be tailored to the user’s needs. A user who has many small files might want a smaller block size. A user who needs to read large amounts of data as rapidly as possible might prefer a larger block size. The advantages of the latter are significant, since the majority of performance loss can be attributed to seek times. By reading larger blocks with each read, more data is moved with fewer seek operations. A larger chunk of the disk is allocated to each file, but this consequence is becoming less important as drive density continues to increase.

After choosing an appropriate disk block size, make sure that your NFS biod read/write block size is set to match. If the sizes differ, each read or write request can trigger an excessive number of disk reads or writes. Operating systems differ slightly in how these parameters are set, but each typically has some kind of command, such as mount_nfs. If you study the manual pages for this command, you can usually figure out which configuration files are used when the system mounts remote file systems. For example, under HPUX-9, /etc/mnttab is where the rsize and wsize parameters are specified.
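To make the matched sizes survive a reboot, you would record them in the table your system consults at mount time. Here is a hypothetical entry in Linux /etc/fstab form; the file name, field order, and host/path names will differ on other systems, so treat this purely as an illustration:

```shell
# Hypothetical /etc/fstab line: the rsize/wsize options match the
# NFS transfer size to an 8 KB file system block size.
fileserver:/export/home  /home  nfs  rsize=8192,wsize=8192  0  0
```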

A few years ago, I was trying to optimize a very data-intense processing job. Every month, the company needed to process several million records from an input file. Fortunately, each record could be processed independently of all the other records, so there was no restriction on separating the file into multiple pieces and sending each piece to a different CPU for processing. Each machine would read a record from this file, retrieve a data item from that file, use that data to look up an entry in another file, and then write out a record. In the end, the various output files would be combined.

Initially, I used NFS to allow each of the machines to access the lookup file, which had about 10 million entries in it. This turned out to be a huge mistake! Each time a machine read a record from its data file, it needed to do at least 23 seek operations on the lookup table file, using a binary search algorithm to identify the record of interest. All these random reads across NFS were absolutely killing the throughput. Instead, I copied the lookup table to a local disk on each machine. The copy operation took less than 15 minutes, and the main processing job then ran in two hours instead of the 40 hours it would have taken over NFS, a 20-fold improvement.
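That seek count follows directly from the size of the table: a binary search over n sorted records halves the search space on each probe, so the worst case touches about log2(n) records. A quick check for the roughly 10 million entries mentioned above:

```shell
# Worst-case probes for a binary search over 10 million records:
# keep halving until one record remains, i.e. ceil(log2(n)).
awk 'BEGIN { n = 10000000; s = 0; while (2^s < n) s++; print s }'
```

That yields 24 probes in the worst case (23 on average is about right), and over NFS every one of them is a round trip across the network rather than a local seek.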

Security issues
Unfortunately, NFS presents a huge security risk to both the client and server machines. NFS must perform authentication on two levels. First, it needs to somehow authenticate that the user making a request is authorized to do so. Second, it needs to authenticate that the machine making the request is entitled to make it. NFS systems can use a combination of private key and public key cryptography to provide a reasonable level of security. Unfortunately, NFS can also be enabled using minimal authentication, which presents no challenge to hackers. If the network is not well protected, it’s very easy for an outside machine to impersonate either the client or the server. The same is true if you have to worry about internal threats, where a potential hacker has access to the network. For example, if an employee on the network disconnects the LAN connection from his desktop and places it on his Linux laptop, will it fool either an NFS client or an NFS server?
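On the server side, the first line of defense is to export file systems only to named hosts and with the minimum privileges they need. The syntax below is the Linux /etc/exports style, and the host names and paths are hypothetical; other systems use a different exports file format, so consult your local manual page:

```shell
# Hypothetical /etc/exports: name each permitted client explicitly,
# export read-only where possible, and map remote root to an
# unprivileged user (root_squash) so a compromised client's root
# account gains nothing on the server.
/export/home   client1.example.com(rw,root_squash)
/export/tools  client1.example.com(ro) client2.example.com(ro)
```

An unrestricted export (one with no host list) is exactly the minimal-authentication situation described above.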

Normally, the server and the client authenticate themselves in order to be trusted by one another. If this traffic is visible to outsiders, it can be studied and possibly exploited. A good firewall can at least eliminate all of the outside threats. Unless security is of no concern, NFS should never be run except within the confines of a good firewall. This means that both the client and the server must be inside the firewall, so that nothing outside can see their traffic.
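If the firewall is a Linux box, blocking NFS at the outside interface is a matter of dropping the portmapper and NFS ports. This is only a sketch: the interface name is assumed, the rules must run as root, and a real firewall would default-deny rather than block ports piecemeal. Ports 111 (portmapper) and 2049 (nfsd) are the standard ones:

```shell
# Drop NFS-related traffic arriving on the outside interface
# (assumed to be eth0 here) so nothing beyond the firewall can
# reach the portmapper (111) or nfsd (2049).
iptables -A INPUT -i eth0 -p udp --dport 111  -j DROP
iptables -A INPUT -i eth0 -p tcp --dport 111  -j DROP
iptables -A INPUT -i eth0 -p udp --dport 2049 -j DROP
iptables -A INPUT -i eth0 -p tcp --dport 2049 -j DROP
```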

NFS provides some significant benefits in terms of user friendliness, but it presents some huge security risks and it offers considerably lower performance than local disk access. Remote file systems provide users with a convenient mechanism to access their files in a uniform manner regardless of which machine they use. Processes that require a great deal of disk access should be organized to maximize the number of local disk accesses rather than remote accesses. Most important, NFS should never be run outside a firewall. In this Daily Drill Down, I discussed the performance and security issues surrounding NFS and showed you some simple solutions to get around them.