EMC AX4: Installation experience

I've mentioned in this blog before that I recently purchased an EMC AX4 SAN to complement my existing computing environment, enable more advanced capabilities, and improve overall infrastructure reliability. This past week, my staff and I finally found time to unbox the array, install it in one of our server cabinets, and get it minimally up and running. I wanted to share the experience and let you know whether the AX4 lives up to its ease-of-installation promise.

In short: Yes it does.

We started by unboxing the two arrays we received; it's much easier to rack the units once they're out of the box. We have two physical chassis: the storage processor unit, which includes twelve 400GB 10K RPM SAS drives and redundant storage processors, each with two iSCSI gigabit Ethernet ports; and a single Disk Array Enclosure (DAE), which holds twelve 750GB SATA drives. Our total raw capacity is just over 13TB.
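As a sanity check on the raw capacity, here's the back-of-the-envelope math. The drive counts and sizes are from our configuration above; nothing here is AX4-specific, and note that vendors quote decimal gigabytes, so the OS will report a smaller binary (TiB) figure.

```python
# Raw capacity of our AX4 configuration, using vendor "decimal" units.
SPE_DRIVES = 12   # storage processor enclosure: 400GB 10K RPM SAS
DAE_DRIVES = 12   # disk array enclosure: 750GB SATA

raw_gb = SPE_DRIVES * 400 + DAE_DRIVES * 750   # 4800 + 9000 = 13800 GB
raw_tb = raw_gb / 1000.0                       # decimal terabytes
raw_tib = raw_gb * 10**9 / 2**40               # binary tebibytes, as an OS reports it

print(f"Raw capacity: {raw_gb} GB = {raw_tb:.1f} TB (about {raw_tib:.1f} TiB)")
```

Usable capacity will of course be lower once RAID overhead and hot spares are taken out.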

Our arrays arrived in three boxes -- one for each chassis and one for the standby power supplies, which are basically small UPSes that power the storage processors for a short period in the event of a power failure in order to prevent data loss.

Racking the units was a breeze. The Standby Power Supplies (SPSs) go at the bottom of the stack, followed by the storage processor chassis, and then the DAE. In my configuration, the solution requires a total of four power connections -- two for the DAE and one for each SPS. The processing unit is powered from the pair of SPSs, one SPS for each storage processor, and each SPS is connected to a different UPS for maximum reliability.

The processing unit and the DAE are connected to each other with a pair of uplink cables shipped with the AX4.

For networking, each controller has three network connections: one for management and two for iSCSI. The management ports can be connected either to a dedicated storage network or to your production network; being able to reach them is critical to completing the array setup.

Depending on the level of redundancy you want to achieve, there are a number of ways to connect the iSCSI ports on each storage controller. For my configuration, we've created a VLAN dedicated to storage traffic on our core switch, an HP ProCurve 5412zl. The first iSCSI port on each storage controller is connected to one module in the 5412zl, while the second iSCSI port is connected to a separate module. This configuration reduces our single points of failure. Sure, the core switch itself might fail, which would result in a total loss of communication, but that is an unlikely event and an acceptable risk.
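Once the iSCSI ports are cabled, it's worth confirming that each one actually answers on the storage VLAN before moving on to host configuration. iSCSI targets listen on TCP 3260 by default, so a simple connect test is enough. This is just a sketch -- the addresses below are hypothetical examples for a storage VLAN, not the AX4's defaults; substitute whatever you assign to each storage processor's ports.

```python
"""Quick reachability check for the array's iSCSI ports."""
import socket

# Hypothetical example addresses on the storage VLAN -- replace with your own.
ISCSI_PORTS = {
    "SP-A port 0": "192.168.50.10",
    "SP-A port 1": "192.168.50.11",
    "SP-B port 0": "192.168.50.12",
    "SP-B port 1": "192.168.50.13",
}

def check_port(host: str, port: int = 3260, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example usage from the management server on the storage VLAN:
#   for name, addr in ISCSI_PORTS.items():
#       print(name, addr, "up" if check_port(addr) else "DOWN")
```

If one port on each controller shows up and the other doesn't, that usually points at the cabling between the two switch modules rather than the array itself.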

Once the unit was fully cabled, we connected our management server to the newly created storage VLAN through a secondary network connection. This VLAN has no route to the outside world and is not accessible from elsewhere on the network. From this server, we did the following to get the AX4 software up and running:

  • Upgraded the FLARE Operating Environment to the latest version. This was required in order to upgrade to Navisphere Manager, which we purchased with our unit. Navisphere Manager is an upgrade of the AX4's default Navisphere Express software and allows us to install additional enhancements, including SnapView Manager.
  • Installed EMC's management toolkit, which allowed us to enable Navisphere Manager, SnapView Manager, and the expansion enabler that lets us make use of the attached DAE.

This is as far as we've gotten. I've included some pictures of our setup below. The first picture shows both chassis -- the bottom chassis is the control unit and the top is the expansion array -- and you can see our Dell M1000e chassis there, too. The last picture gives you a look at the back of the setup.