By Bill O'Brien
Some feel that the convergence of Storage Area Network (SAN) and Network Attached Storage (NAS) architecture would be much like the mating of Big Ben and the Leaning Tower of Pisa (with the ultimate outcome something akin to A Clockwork Orange). Yet the movement to marry the two is growing. If you'd like to avoid the necessity of taking the kid who works in the closet down the hall out to lunch for a briefing, let's take a few minutes to track down the twists and turns of these palindromic technologies.
Solving the storage dilemma
It doesn't take a genius to realize that IT storage requirements are increasing by leaps and bounds. In fact, IT storage capacity is currently growing at an average of 52 percent per year (the Forrester Report, March 2001). Keeping pace with storage needs means not only adding new physical hardware but also creating new infrastructure to deal with it. The combination is often a killer blow to the bottom line at a time when IT budgets have been shrinking. Thankfully, we've already started to move away from the Direct Attached Storage (DAS) model. That hyper-expensive method of adding hard drives to each individual server offered no real network-wide amortization plan. All it did was increase infrastructure costs.
NAS is one accepted alternative. Effectively, NAS is a server that handles data on the file level. It's attached to your existing network, typically via a dedicated Ethernet connection. Unless you start mixing NAS components from different manufacturers, NAS is relatively easy to install and manage.
SANs are more complex beasts, typically treating data as blocks and hauling loads across faster Fibre Channel connections and all the infrastructure that comes with that technology. Not only are they inherently speedy by design and implementation, but SANs are also networks unto themselves. They allow data traffic to be offloaded from the main network, with a resultant improvement in response time (or, perhaps better put, a dramatic reduction in main network crawl time, especially during backups).
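The file-versus-block distinction is easier to see in code than in prose. The sketch below uses a local temp directory to stand in for a mounted NAS share and a scratch file to stand in for a raw SAN volume; both paths, the block size, and the "LUN" size are invented for illustration, not drawn from any particular product.

```python
import os
import tempfile

# File-level access (the NAS model): the client names a file and the
# storage server worries about where the bytes live. A local temp
# directory stands in here for a mounted NFS/CIFS share.
share = tempfile.mkdtemp()
path = os.path.join(share, "report.txt")
with open(path, "w") as f:
    f.write("quarterly numbers")
with open(path) as f:
    contents = f.read()

# Block-level access (the SAN model): the client reads and writes raw
# fixed-size blocks at byte offsets and brings its own file system.
# A scratch file stands in for a raw device such as /dev/sdb.
BLOCK = 512
dev = os.path.join(share, "scratch.img")
with open(dev, "wb") as f:
    f.write(b"\x00" * BLOCK * 8)  # a tiny eight-block "LUN"
with open(dev, "r+b") as f:
    f.seek(3 * BLOCK)  # address block 3 directly by offset
    f.write(b"raw block payload".ljust(BLOCK, b"\x00"))
    f.seek(3 * BLOCK)
    block3 = f.read(BLOCK)

print(contents)                 # file-level read
print(block3.rstrip(b"\x00"))   # block-level read
```

The point of the contrast: the NAS client never sees offsets or block sizes, while the SAN client sees nothing *but* offsets and block sizes, which is why a SAN needs a file system layered on top before applications can share files.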
To put it a bit simplistically, a NAS would be your choice for a server farm needing a common file system (for instance, one running an e-mail application), while the speed and expandability of a SAN lend themselves more toward e-commerce applications, where small bits of data might be requested or shared among a large number of endpoints.
The reality of convergence
As with all things that can be explained in relatively simple terms, give someone enough time and the slightest incentive, and they'll try to muck things up with complexity. Such is the case here, as we approach a forced march to a convergence of NAS and SAN technologies. That may be too cynical. There is some validity behind the trend toward blenderization.
The speed of SAN and the interoperability offered by the file-handling capability of NAS would be a desirable combination. As well, the 10-km, point-to-point Fibre Channel distance limitation imposed on SANs can be a pain, and it's something that may be overcome by an IP connection. That just means sending Fibre Channel frames over the IP network (FCIP); even if you're dedicated to the SCSI protocol, iSCSI does the same for SCSI commands. And thanks to the impressive work on 10-Gigabit Ethernet, there is actually reality at the end of that tunnel.
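The encapsulation trick at the heart of iSCSI is conceptually simple: take a SCSI command descriptor block (CDB) and wrap it in a header that can ride over TCP/IP. The sketch below builds a standard 10-byte READ(10) CDB, but the transport header around it is a toy invented for illustration; real iSCSI PDUs (defined in RFC 3720) carry a 48-byte basic header segment and considerably more state.

```python
import struct

READ_10 = 0x28  # standard SCSI opcode for the 10-byte READ command

def build_cdb(lba, num_blocks):
    """Build a minimal READ(10) CDB: opcode, flags, 4-byte logical
    block address, group field, 2-byte transfer length, control."""
    return struct.pack(">BBIBHB", READ_10, 0, lba, 0, num_blocks, 0)

def encapsulate(cdb, task_tag):
    """Prefix the CDB with a toy transport header (tag + length).
    This header layout is hypothetical, not the iSCSI PDU format."""
    return struct.pack(">IH", task_tag, len(cdb)) + cdb

# Initiator side: ask for 16 blocks starting at LBA 2048 and ship it.
packet = encapsulate(build_cdb(lba=2048, num_blocks=16), task_tag=7)

# Target side: peel the header back off and recover the original CDB,
# exactly as if it had arrived over a Fibre Channel link.
tag, length = struct.unpack(">IH", packet[:6])
cdb = packet[6:6 + length]
print(tag, length, hex(cdb[0]))
```

Because the payload that comes out the far end is byte-for-byte the command that went in, the storage device never needs to know the command crossed an IP network instead of a Fibre Channel fabric, which is what lets an IP hop sidestep the distance limit.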
But there are a few disclaimers: No matter how much you might hear about NAS/SAN convergence during the Intel Developer Forum, it is still probably a year and a half to two years away—and may well be tied to the successful implementation of Intel's 3GIO bus architecture. 3GIO proposes a bus that's six times faster than PCI-X and that will go far to break up a huge data bottleneck in the box, whether it's a server or a workstation. In the meantime, as you consider either NAS or SAN additions, look for vendors that are already preparing for their eventual convergence. Hitachi's Freedom NAS architecture, for one example, is quite interesting. Otherwise, you'll be carting all your late 2002 purchases down to the curb as your 2004 upgrades arrive.