In the first three parts of this series, I outlined my
team’s decision-making
process in choosing an iSCSI solution; the task of narrowing down the field of vendors;
and finally, the selection of
EqualLogic’s PS200E storage array for our network. In this article, I’ll walk through the installation process we followed to get the array up and running and serving our data storage needs.
Preparing the network
As with any storage system, the network that carries the storage traffic is a primary design consideration. With iSCSI, you can use typical, off-the-shelf gigabit Ethernet switches, but you still need to make sure the storage traffic is secure and can be served at full speed. To meet these needs, we opted for a completely separate storage network for the array, using an IP address range that is completely separate from the rest of our network and isn’t routable anywhere.
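Before cabling anything, it helps to sanity-check the address plan. Below is a minimal Python sketch using the standard ipaddress module and purely hypothetical subnets (not our actual ranges) that confirms the storage range is private and doesn’t overlap the production LAN.

```python
import ipaddress

# Hypothetical subnets for illustration only -- substitute your own ranges.
storage_net = ipaddress.ip_network("192.168.50.0/24")   # isolated SAN subnet
production_net = ipaddress.ip_network("10.0.0.0/16")    # production LAN subnet

print("Storage subnet is private:", storage_net.is_private)
print("Overlaps production LAN:", storage_net.overlaps(production_net))
```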
For this implementation, we are using two HP ProCurve 2848
gigabit Ethernet switches, connected with a crossover cable, but not stacked
with the units’ stacking features. We installed two additional gigabit Ethernet
NICs in each server and cabled each NIC to a separate HP 2848 switch. We
selected PCI Express
NICs and risers (for the slots) in our Dell servers because PCI Express gives the server the bus bandwidth to sustain full gigabit throughput.
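To put rough numbers on that, here’s a back-of-the-envelope comparison in Python using nominal spec figures rather than measured results: a conventional 32-bit/33 MHz PCI bus is shared and tops out near what a single full-duplex gigabit link can demand, while even one PCI Express lane gets dedicated bandwidth in each direction.

```python
# Nominal spec figures; real-world throughput varies with protocol overhead.
gigabit_each_way_mb_s = 1000 / 8     # ~125 MB/s per direction, ~250 MB/s full duplex
pci_33mhz_shared_mb_s = 133          # 32-bit/33 MHz PCI, shared by every card on the bus
pcie_x1_each_way_mb_s = 250          # one PCIe 1.x lane, dedicated, per direction

print("Full-duplex gigabit needs roughly %d MB/s" % (2 * gigabit_each_way_mb_s))
print("Legacy PCI offers about %d MB/s, shared across the bus" % pci_33mhz_shared_mb_s)
print("A single PCIe 1.x lane offers about %d MB/s each way, dedicated" % pcie_x1_each_way_mb_s)
```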
With the switches and servers installed and cabled together
in a fully redundant fashion, the next step was cabling the PS200E to the
network. The back of the PS200E has six gigabit Ethernet connections—three on
each controller. One controller is active and one is used for backup. To
provide the maximum redundancy in the event of a controller, switch, or server
NIC failure, ports 1 and 3 from the first controller and port 2 from the second
controller are cabled to the first HP 2848, while the remaining ports (1 and 3
from the second controller and port 2 from the first controller) are cabled to
the second HP 2848.
This cabling scheme provides maximum redundancy while also allowing for improved throughput through multipath I/O (MPIO), which benefits from the direct connection between the two switches that I mentioned earlier. Any one part of our system can fail and we’re still up.
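To illustrate the failure logic, here’s a small Python sketch (not an EqualLogic or HP tool) that models the port-to-switch mapping described above, with illustrative switch and port labels, and checks that losing either switch still leaves every controller with at least one live port.

```python
# Hypothetical port-to-switch map mirroring the cabling layout described above.
cabling = {
    ("controller-0", "port1"): "switch-1",
    ("controller-0", "port3"): "switch-1",
    ("controller-0", "port2"): "switch-2",
    ("controller-1", "port1"): "switch-2",
    ("controller-1", "port3"): "switch-2",
    ("controller-1", "port2"): "switch-1",
}

def survives(failed_switch):
    """True if every controller keeps at least one live port when a switch dies."""
    controllers = {ctrl for ctrl, _ in cabling}
    return all(
        any(sw != failed_switch for (c, _), sw in cabling.items() if c == ctrl)
        for ctrl in controllers
    )

for switch in ("switch-1", "switch-2"):
    print(switch, "failure survivable:", survives(switch))
```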
Installing the array
You might find this hard to believe, but once the cabling
was in place, our PS200E was configured, running, and ready to serve up storage
space in 15 minutes. I was skeptical when EqualLogic told us it could go from out
of the box to functional in less than a half hour, but they weren’t giving us a
line.
The initial installation is completely wizard-driven. Here are the quick install steps for the PS200E:

- Create a cluster for the unit. Even a single unit needs to be part of a storage cluster. Management of the unit (or multiple units, if more than one is present) is handled through a single cluster IP address; when the management software starts, all cluster members can be managed.
- Give each interface its own IP address. These are used internally by the unit.
- Provide the cluster with a name. We chose the ever-original name of “san”.
- Decide whether to optimize the unit for capacity or performance. The EqualLogic folks we talked to indicated that capacity is a good option, although that makes it sound as if the unit’s performance degrades. It probably does to a point, although we ran no specific tests, and we needed the increased storage. The performance option requires twice as much space for your data because it mirrors everything (see the rough math after this list).
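To make the space trade-off concrete, here’s a rough bit of Python arithmetic using hypothetical drive counts and sizes, not our actual configuration: mirroring every block cuts usable capacity to about half of raw, while a parity-based layout keeps considerably more.

```python
drives = 14                  # hypothetical drive count
drive_size_gb = 250          # hypothetical drive size

raw_gb = drives * drive_size_gb
mirrored_gb = raw_gb // 2    # performance option: every block is written twice

print("Raw capacity:            %d GB" % raw_gb)
print("Performance (mirrored): ~%d GB usable" % mirrored_gb)
# The capacity option protects data with parity instead of mirroring, so usable
# space stays well above 50% of raw, at some cost in write performance.
```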
Next in this series, I’ll go over establishing volumes on
the array and connecting servers to new volumes.