For the last 10 months, I’ve worked as the IT Director for Elmira College,
a small liberal-arts college in Elmira, NY, in the heart of the Finger Lakes region.
One of the areas that immediately caught my eye was the storage situation in
our data center. With 35 servers, we have no central storage and no good way to
implement highly available solutions such as server clusters. Moreover, each
time we buy a server, we need to project its possible data storage needs over
the next few years. Sure, it’s doable, but we end up with a ton of wasted
space, little ability to manage disk space at a granular level, and in the rare
event that we underestimate our storage needs, we have to scramble to correct
the situation. That’s all about to change.

Thinking through the iSCSI vs. Fibre Channel decision

By this point, many of you reading this article are probably
thinking that it’s time we invested in a Storage Area Network (SAN), and you’re
right. We recently completed the selection and purchase of a new SAN. However,
instead of choosing a “traditional” Fibre Channel SAN complete with
FC switches and host-bus adapters, I decided early in the process to
focus on iSCSI-based SANs. Here’s a summary of my reasoning:

  • First,
    I knew I wouldn’t be able to present enough justification to the
    President’s Cabinet to get an expensive Fibre Channel solution in house. Even
    with ROI and savings estimates—both cash and labor savings—it would have
    been a no-win for me, and I knew that.
  • Second,
    all of my staff are well-versed in Ethernet and TCP/IP, but no one, myself
    included, has had any exposure to Fibre Channel, making the training side of
    the equation much more difficult. We’re a small school with a relatively
    small IT staff, so adding major new technology to the portfolio can
    sometimes be difficult.
  • Third,
    I needed the SAN in short order, so extended training was not really an
    option. We’re migrating to Microsoft Exchange over the next two months, as
    well as moving a number of vendor-supported databases to Microsoft SQL
    Server, and I wanted all new projects on centralized storage in order to
    provide highly available, clustered solutions (Exchange, for example) as
    well as to have the ability to take regular point-in-time
    “snapshots” of our production databases.

I can’t begin to describe the amount of research that I did
prior to taking the iSCSI plunge. While I love new technology and like to see
it in action, for my production environment, I remain somewhat risk-averse, and
iSCSI is a fairly new technology of which I was very skeptical. The first
hurdle I had to leap was the speed issue. FC runs at 2 Gbps, whereas iSCSI runs
on top of Gigabit Ethernet links. Dividing each line rate by 8 bits per byte,
that caps theoretical transmission speeds at 125 MBps for iSCSI and 250 MBps
for Fibre Channel. However, with iSCSI’s support for multipath I/O (MPIO), we
can use multiple network adapters to access the storage array and aggregate
their bandwidth, as the quick calculation below shows. And the simple fact is
this: we have a small environment. We don’t need massive storage bandwidth.
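
To put rough numbers on that, here’s a back-of-the-envelope sketch in Python. The link speeds are the only inputs, and the two-NIC MPIO count is an assumption for illustration, not our actual configuration:

    # Theoretical throughput: iSCSI over Gigabit Ethernet (with and
    # without MPIO) versus 2 Gbps Fibre Channel. These are line rates;
    # real-world numbers will be lower due to protocol overhead.

    def line_rate_mbps(gbps: float) -> float:
        """Convert a link speed in Gbps to theoretical MBps (8 bits per byte)."""
        return gbps * 1000 / 8

    GIGE_GBPS = 1.0   # one standard Gigabit Ethernet link
    FC_GBPS = 2.0     # 2 Gbps Fibre Channel
    MPIO_NICS = 2     # assumed NIC count in an MPIO configuration

    print(f"iSCSI, single GigE NIC: {line_rate_mbps(GIGE_GBPS):.0f} MBps")
    print(f"iSCSI, {MPIO_NICS} NICs via MPIO: {MPIO_NICS * line_rate_mbps(GIGE_GBPS):.0f} MBps")
    print(f"Fibre Channel, 2 Gbps: {line_rate_mbps(FC_GBPS):.0f} MBps")

With just two Gigabit NICs, the theoretical ceiling matches 2 Gbps FC, and for a shop our size, even a single link is rarely the bottleneck.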

With iSCSI, we also have the option to use (but don’t have to
use) dedicated iSCSI adapters that offload TCP/IP and iSCSI processing from the
host CPU to improve throughput. To start with, though, we can use simple,
standard Gigabit network adapters and determine from there whether we need
more. Next, we don’t need special, really expensive switches. A standard
Gigabit Ethernet switch is all that’s required. All in all, as long as you
understand its potential bandwidth limitations, iSCSI delivers massive cost
savings because it runs on standard, inexpensive Ethernet hardware.
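
That last point is easy to demonstrate: an iSCSI target portal is nothing more than a TCP service, listening by default on port 3260, so any IP-connected host can reach it. Here’s a minimal Python sketch that checks whether a portal answers over an ordinary Ethernet network; the portal address is a made-up example, and actual discovery and login are handled by the operating system’s iSCSI initiator:

    import socket

    # An iSCSI target portal is a plain TCP listener (default port 3260).
    # This only verifies TCP reachability across the Ethernet network;
    # it does not perform iSCSI discovery or login.
    PORTAL_IP = "192.168.10.20"   # hypothetical SAN portal address
    ISCSI_PORT = 3260             # IANA-registered default iSCSI port

    try:
        with socket.create_connection((PORTAL_IP, ISCSI_PORT), timeout=3):
            print(f"iSCSI portal {PORTAL_IP}:{ISCSI_PORT} is reachable")
    except OSError as exc:
        print(f"Cannot reach {PORTAL_IP}:{ISCSI_PORT}: {exc}")

If that connection succeeds through a commodity Gigabit switch, the SAN traffic needs no more network plumbing than the rest of the LAN already uses; no FC fabric, zoning, or host-bus adapters required.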

In my next articles, I’ll go over our iSCSI vendor-selection
process and detail our initial installation and production-use experience.