A look at an iSCSI-based highly available architecture

High availability is an important consideration for any service, and it becomes more important all the time. Scott Lowe shares a look at the highly available storage infrastructure implemented at Westminster College.

As I've mentioned more than once, I've written quite a few postings in this blog about the EMC AX4 iSCSI SAN we installed at Westminster College over the summer.  In my last posting, I shared a positive story about a minor failure of one of the AX4's storage processors, an incident in which our high availability planning paid off big time.

In this posting, I'm going to share with you a snapshot of our highly available storage architecture at Westminster College and what makes it tick.

[Figure: Westminster storage infrastructure]

So, what are you looking at?  It's a look at how a single server is cabled to our storage infrastructure.  At the top of the diagram sits our AX4.  We have two physical arrays; the SATA expansion unit doesn't have any brains of its own.  It's simply cabled to the primary unit, which houses dual controllers/storage processors, each containing dual gigabit Ethernet adapters.  Each of the physical array enclosures is connected to its own UPS, and each UPS sits on a separate electrical circuit.

As I indicated, each storage processor includes a pair of gigabit Ethernet ports used to pass iSCSI traffic between the array and any connected servers.  Each port on each controller is connected to one of two Ethernet switches in a mesh-type configuration.  In our scenario, we currently have the AX4 cabled to a pair of Dell M6220 blade-based Ethernet switches.  Each of the blades has redundant uplinks to our HP ProCurve 5412zl; to keep things uncluttered, those uplinks aren't shown in the diagram.  All iSCSI traffic is confined to its own VLAN, which does not communicate at all with the rest of the network.  At present, we do not use CHAP or any other iSCSI authentication/security mechanism, opting instead to simply segregate all iSCSI traffic.
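
As a quick sanity check on a layout like this, here's a minimal sketch that confirms each of the four SP ports is answering on TCP port 3260, the standard iSCSI target port.  The portal addresses on the storage VLAN are hypothetical; substitute whatever your array actually uses:

    import socket

    # Hypothetical addresses for the four iSCSI portals on the storage VLAN:
    # two gigabit ports on each of the AX4's two storage processors.
    PORTALS = [
        "192.168.50.10",  # SP A, port 0
        "192.168.50.11",  # SP A, port 1
        "192.168.50.12",  # SP B, port 0
        "192.168.50.13",  # SP B, port 1
    ]
    ISCSI_PORT = 3260  # standard iSCSI target port

    for portal in PORTALS:
        try:
            # A successful TCP handshake means the portal is reachable
            # over the storage VLAN.
            with socket.create_connection((portal, ISCSI_PORT), timeout=2):
                print(f"{portal}:{ISCSI_PORT} reachable")
        except OSError as exc:
            print(f"{portal}:{ISCSI_PORT} UNREACHABLE ({exc})")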

Each server, including each of the blade servers connected to the storage infrastructure, has four Ethernet ports: two onboard gigabit Ethernet adapters and another dual-port gigabit Ethernet adapter in an expansion slot.  For storage connections, we've used one onboard gigabit Ethernet port and one port on the expansion adapter, connecting each to a different one of the two Ethernet switches.
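
To see why this wiring matters, here's a small sketch that models the topology just described and checks that no single component failure can sever all paths between a server and the array.  The component names and the exact port-to-switch mapping are my reading of the description, not something lifted from the diagram:

    # Each path from server to array traverses one server NIC, one switch,
    # and one storage-processor port.  With the mesh cabling described
    # above, each switch sees one port from each SP, and each server NIC
    # goes to a different switch.
    PATHS = [
        # (server NIC,      switch,     SP port,     SP)
        ("nic-onboard",   "switch-1", "spa-port0", "sp-a"),
        ("nic-onboard",   "switch-1", "spb-port0", "sp-b"),
        ("nic-expansion", "switch-2", "spa-port1", "sp-a"),
        ("nic-expansion", "switch-2", "spb-port1", "sp-b"),
    ]

    components = {c for path in PATHS for c in path}

    for failed in sorted(components):
        surviving = [p for p in PATHS if failed not in p]
        status = "OK" if surviving else "OUTAGE"
        print(f"fail {failed:14s} -> {len(surviving)} path(s) left [{status}]")

Run it and every single failure, whether a NIC, a switch, an SP port, or a whole storage processor, still leaves at least two live paths.  The one component the model can't route around is the server itself, which is exactly the remaining single point of failure noted below.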

The front-end traffic for each server currently uses just a single gigabit Ethernet adapter; however, we are in the process of converting these connections to bonded channels attached to different blades in our 5412zl, but in the same VLAN, of course.

We've installed EMC's PowerPath multipathing software on each server that uses the AX4; it's what handles failover and load balancing across the redundant iSCSI paths described above.
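
A quick way to eyeball path health from a server is PowerPath's own powermt utility.  The sketch below just shells out to "powermt display dev=all", PowerPath's command for listing every device and its paths, and flags anything that doesn't look healthy.  The exact output format varies by PowerPath version, so treat the "dead" token check as an assumption to adjust for your install:

    import subprocess

    # List every PowerPath-managed device and the state of each path to it.
    result = subprocess.run(
        ["powermt", "display", "dev=all"],
        capture_output=True, text=True, check=True,
    )

    # Assumption: unhealthy path lines carry a "dead" state token; adjust
    # the match for the PowerPath version actually installed.
    for line in result.stdout.splitlines():
        if "dead" in line.lower():
            print("Degraded path:", line.strip())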

The primary single point of failure in this scenario remains the individual servers.  We are working on clustered and VMware-based scenarios to correct this deficiency.
