
Dell M1000e: Lessons learned

Previously, I discussed the process I went through to make a decision on some new elements for my network and server infrastructure at Westminster College. I purchased an EMC AX4 SAN to complement a new Dell M1000e blade chassis filled with seven M600 servers. Here's how it's working out.

In a couple of previous posts in this blog, I walked through that decision-making process in detail. I promised I'd report back once I got things up and running, so here you are!

The goals for the project were to:

  • Lower overall costs to operate the data center (if possible).
  • Simplify management of the data center.
  • Provide a measure of availability in the face of server hardware failure, accomplished through a virtualization initiative that includes VMotion.
  • Give us a good, solid base for the future, along with a plan for meeting future needs as they arise.

On the server side of the equation, as I mentioned, I went with Dell's new blade solution: the M1000e chassis and a set of seven M600 servers in a couple of different configurations. All seven servers have dual 73GB 15K RPM drives in RAID 1, dual quad-core 2.33 GHz processors, and four Gigabit Ethernet ports. Five of the seven servers have 8GB of RAM; the other two have 32GB, with 24GB usable and 8GB held in reserve in a redundant configuration that protects against the failure of a memory bank. Think of it as RAID for RAM.

The initial installation of the chassis and one of the servers ended up being done in a hurry. We were at a critical point in the semester, when students were registering for classes, and the software behind that service was not performing properly. The vendor had released some unoptimized code (to put it mildly) in a fix, and the only solution was to throw hardware at the problem, at least until the vendor got everything straightened out (which they still haven't!). At the time of the installation, a couple of things weren't going right. Specifically:

  • The chassis' internal KVM switch wasn't working with a monitor, keyboard, and mouse connected to it. However, I was able to manage the servers through their individual integrated DRACs (iDRACs), which are hardware-based remote consoles.
  • None of the blades would boot. This was a bit of a problem. To get the server I needed up and running, I had to remove the dual Gigabit Ethernet mezzanine card from the blade. Once I did that, the server booted and was operational. At the time, I was able to get by with the two onboard Gigabit Ethernet ports, but once I got the iSCSI SAN in place, I was going to need those two extra ports.

Now, neither problem was a deal breaker, but both needed to be fixed. Fortunately, the fixes were simple. The first problem, the nonworking KVM, was fixed immediately by a firmware update. The second problem, quite honestly, I never figured out. I discovered, however, that opening up each blade and completely removing and reseating the mezzanine card corrected the problem, but only after an initial failed boot attempt. If I reseated the card before ever trying to boot the blade, the next boot would still fail; if I let the blade fail to boot once and then reseated the card, everything worked fine from then on. A tad odd, but not earth shattering, and definitely not the weirdest thing I've seen in IT.

Here's what I wish I had done differently: My M1000e chassis has four I/O modules in the back, and each module maps to one of the Ethernet ports on each blade server. Since I have four Ethernet ports on each blade, I needed four Ethernet modules in the back of the chassis. I chose to go with two 16-port passthrough Ethernet modules and two Dell M6220 Ethernet switches. The passthrough modules map to the onboard Ethernet ports, and the M6220 switches map to the two mezzanine ports. I wish I had simply gone with all M6220 switches and not messed around with passthrough modules at all. A passthrough module is just like connecting a regular server to the network: every port needs its own cable, which means I now have a whole lot of cables connected to my chassis when I could have far fewer. The M6220 switch, which includes four external ports and 16 internal ports (each internal port maps to a port on a blade), supports link aggregation (trunking) and also includes a pair of 10GbE uplinks. So bandwidth will never really be an issue, but I could have reduced the overall cable mess quite a bit.
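
To put the cabling difference in perspective, here's a quick back-of-the-envelope comparison. Treat it as a sketch only: the two-uplinks-per-switch figure and the 16-blade maximum are my own assumptions based on the configuration described above, not numbers from Dell.

```python
# Back-of-the-envelope cable count: passthrough modules vs. M6220 switches.
# Assumption (mine): every blade port behind a passthrough module needs its
# own external cable, while an M6220 only needs its uplink ports cabled.

blades_now = 7          # blades currently installed
blades_max = 16         # half-height slots in an M1000e chassis
uplinks_per_m6220 = 2   # assumed uplink cables per switch (each M6220 has
                        # four external 1GbE ports plus a pair of 10GbE uplinks)

def passthrough_cables(modules: int, blades: int) -> int:
    """One cable per blade port, per passthrough module."""
    return modules * blades

def m6220_cables(modules: int, uplinks: int) -> int:
    """Only the uplink ports on each switch module need cabling."""
    return modules * uplinks

# My current mix: two passthrough modules plus two M6220s
current = passthrough_cables(2, blades_now) + m6220_cables(2, uplinks_per_m6220)

# All-M6220 alternative: four switch modules, uplinks only
all_switches = m6220_cables(4, uplinks_per_m6220)

print(f"Current mix: {current} cables")       # 18 today, 36 with a full chassis
print(f"All M6220s:  {all_switches} cables")  # 8, regardless of blade count
```

Even under these rough assumptions, the all-switch design scales with uplinks rather than with blades, which is exactly where the cable savings come from.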

I am working with Dell to determine the feasibility of swapping the two passthrough modules for a pair of M6220 modules.

The M6220s will work great in conjunction with the EMC AX4 iSCSI SAN. Since most of the SAN traffic will originate from the blades, I've simply connected the SAN's iSCSI ports to the M6220s in a failover fashion.

Overall, the M1000e solution definitely meets the goal of simplifying data center management. The iDRAC is a great improvement over previous generations of Dell's remote management hardware. To install VMware ESX 3.5, for example, I simply used Internet Explorer to connect to the target blade, started the iDRAC viewer, and then pointed the server at my workstation's DVD drive to install ESX. I didn't have to go to the data center at all, and I didn't have to connect a local CD-ROM drive to the blade chassis or anything. The iDRAC viewer allows a blade server to use my workstation's physical optical drives, or it can mount an ISO image that can be used to load a server.

On the cost side of the equation, the overall project will help us spend less in future years as we need to update the infrastructure. Between VMware, the SAN, and an easily and inexpensively expandable blade chassis, I can't see us ever spending $7K, $8K, or $10K on a rack server again. The pricing we got on the M600s was, quite frankly, absolutely astounding, even with a 32GB configuration.

If you have questions or comments, please leave them below! I'd love to hear your thoughts.


