Servers

Dell M1000e: Lessons learned

In a couple of previous posts in this blog, I discussed the process I went through to make a decision on some new elements for my network and server infrastructure at Westminster College. Specifically, I purchased an EMC AX4 SAN to complement a new Dell M1000e blade chassis filled with seven M600 servers. I promised I'd report back once I got things up and running, so here you are!

The goals for the project were to:

  • Lower overall costs to operate the data center (if possible).
  • Simplify management of the data center.
  • Provide a measure of availability with regard to server failure. This will be accomplished through a virtualization initiative that includes VMotion.
  • Give us a good, solid base for the future, with a plan for meeting needs as they grow.

On the server side of the equation, as I mentioned, I went with Dell's new blade solution -- the M1000e chassis and a set of seven M600 servers with a couple of different configurations. All seven of the servers have dual 73GB 15K RPM drives in RAID1, dual quad-core 2.33 GHz processors and four gigabit Ethernet ports. Five of the seven servers have 8GB of RAM and two have 32GB, with 24GB usable and 8GB held back in a sort of redundant configuration that protects against failure of a bank. Think of it as RAID for RAM.

The initial installation for the chassis and one of the servers ended up being done in a hurry. We were at a critical period in the semester when students were registering for classes and the product used for the service was not working properly. The vendor had released some unoptimized code (to put it mildly) in a fix, and the only solution was to throw hardware at the problem, at least until the vendor got everything fixed (which they still haven't!). At the time of the installation, a couple of things weren't going right. Specifically:

  • The internal KVM switch wasn't working with a monitor, keyboard and mouse connected to the blade chassis. However, I was able to manage the servers through their individual integrated DRACs, which are hardware-based remote consoles.
  • None of the blades would boot. This was a bit of a problem. To get the server I needed up and running, I had to remove the dual Gigabit Ethernet mezzanine card from the blade. Once I did that, the server booted and was operational. At the time, I was able to get by with the two onboard Gigabit Ethernet ports, but once I got the iSCSI SAN in place, I was going to need those two extra ports.

Now, neither problem was a deal breaker, but they were things that needed to be fixed. Fortunately, both fixes were very simple. For the first problem -- the KVM not working -- a firmware update fixed the problem immediately. The second problem, quite honestly, I never figured out. I discovered, however, that opening up each blade and completely removing and reseating the mezzanine card corrected the problem, but only after I tried once to boot the blade. So, if I reseated the card before I ever tried to boot the blade, the next boot would fail. But, if I tried to boot the blade first, let it fail, then reseated the card, everything worked fine. A tad odd, but not earth shattering and definitely not the weirdest thing I've seen in IT.

Here's what I wish I had done differently: My M1000e chassis has four I/O modules in the back, and each module maps to one of the Ethernet ports on each blade server. Since I have four Ethernet ports on each blade, I needed four Ethernet modules in the back of the chassis. I chose to go with two 16-port passthrough Ethernet modules and two 4-port Dell M6220 Ethernet switches. The passthrough modules mapped to the onboard Ethernet ports and the M6220 switches mapped to the two mezzanine ports. I wish I had simply gone with all M6220 switches and not messed around with passthrough modules at all. A passthrough module is just like connecting a regular server to a network: every port needs its own cable, which means I now have a whole lot of cables connected to my chassis when I could have far fewer. The M6220 switch, which includes four external ports and 16 internal ports (each internal port maps to a port on a blade), supports link aggregation and also includes a pair of 10GbE uplinks. So bandwidth will never really be an issue, but I could have cut down on the overall cable mess quite a bit.
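Just to put the cabling difference in numbers, here's a rough back-of-the-envelope sketch in Python. The 16-blade chassis capacity and port counts come from the hardware described above; the two-uplinks-per-switch figure is my own assumption about how few cables you could reasonably run.

    # Rough cable-count comparison: pass-through modules vs. M6220 switch
    # modules. Uplinks-per-switch (2) is an assumption; the other numbers
    # come from the configuration described above.

    def passthrough_cables(modules: int, blades: int) -> int:
        # A pass-through module needs one external cable per blade it carries.
        return modules * blades

    def switch_cables(modules: int, uplinks_per_switch: int) -> int:
        # A switch module only needs cables for its uplinks, whatever the blade count.
        return modules * uplinks_per_switch

    for blades in (7, 16):  # 7 blades today, 16 if the chassis fills up
        print(f"{blades} blades: pass-through = {passthrough_cables(2, blades)} cables, "
              f"M6220s = {switch_cables(2, 2)} cables")
    # 7 blades: pass-through = 14 cables, M6220s = 4 cables
    # 16 blades: pass-through = 32 cables, M6220s = 4 cables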

I am working with Dell to determine the feasibility of swapping the two passthrough modules for a pair of M6220 switches.

The M6220s will work great in conjunction with the EMC AX4 iSCSI SAN. Since most of the SAN traffic will originate from the blades, I've simply connected the SAN's iSCSI ports to the M6220s in a failover fashion.
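To make "failover fashion" a little more concrete, here's a small Python sketch that models the idea: each of the AX4's storage processors gets a path through each M6220, so losing one switch doesn't take out either storage processor. The port names and the exact layout are illustrative, not a diagram of my actual cabling.

    # Hypothetical model of the iSCSI failover cabling: each AX4 storage
    # processor (SP) has its iSCSI ports split across the two M6220s, so the
    # loss of one switch still leaves a path to both SPs. Port names and
    # layout are illustrative only.
    cabling = {
        "SPA-port0": "M6220-A",
        "SPA-port1": "M6220-B",
        "SPB-port0": "M6220-A",
        "SPB-port1": "M6220-B",
    }

    def surviving_sps(failed_switch: str) -> set:
        """Return the storage processors still reachable if one switch dies."""
        return {port.split("-")[0] for port, switch in cabling.items()
                if switch != failed_switch}

    for switch in ("M6220-A", "M6220-B"):
        assert surviving_sps(switch) == {"SPA", "SPB"}, f"lost an SP without {switch}"
        print(f"With {switch} down, both SPs are still reachable.")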

Overall, the M1000e solution definitely meets the goal of simplifying data center management. The iDRAC is a great improvement over previous generations of Dell's remote management hardware. To install VMware ESX 3.5, for example, I simply used Internet Explorer to connect to the target blade's iDRAC, started the iDRAC viewer and then pointed the server at my workstation's DVD drive to install ESX. I didn't have to go to the data center at all, and I didn't have to connect a local CD-ROM drive to the blade chassis or anything. The iDRAC viewer will let a blade server use my workstation's physical optical drives, or it will mount an ISO image that can be used to load a server.
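Since the whole install happens through the iDRAC, it's worth confirming that each blade's iDRAC is reachable before you settle in to build servers remotely. Here's a minimal Python sketch; the hostnames are made-up placeholders, and 443 is simply the default HTTPS port the iDRAC web interface listens on.

    # Quick reachability sweep of the blades' iDRACs before a remote install
    # session. Hostnames are hypothetical placeholders; 443 is the default
    # HTTPS port for the iDRAC web interface.
    import socket

    IDRACS = [f"blade{n}-idrac.example.edu" for n in range(1, 8)]  # seven blades

    def reachable(host: str, port: int = 443, timeout: float = 3.0) -> bool:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    for host in IDRACS:
        print(f"{host}: {'up' if reachable(host) else 'NOT reachable'}")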

On the cost side of the equation, the overall project will help us spend less in future years as we need to update the infrastructure. Between VMware, the SAN and an easily and inexpensively expandable blade chassis, I can't see us ever spending $7K, $8K or $10K on a rack server again. The pricing we got on the M600s was, quite frankly, absolutely astounding, even with a 32GB configuration.
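For a rough sense of the incremental math, here's a short sketch with placeholder figures. The rack-server prices are the ones I just mentioned; the per-blade figure is a stand-in, not our actual pricing.

    # Very rough sketch of the incremental cost of adding capacity. The
    # rack-server prices come from the figures above; the blade price is a
    # hypothetical placeholder, NOT our actual pricing.
    rack_prices = [7_000, 8_000, 10_000]          # figures mentioned above
    avg_rack = sum(rack_prices) / len(rack_prices)

    blade_placeholder = 5_000   # hypothetical; the chassis is already bought and paid for
    vm_cost = 0                 # another VM on an existing ESX host needs no new hardware

    print(f"Another rack server (avg of the prices above): ${avg_rack:,.0f}")
    print(f"Another blade in the existing chassis (placeholder): ${blade_placeholder:,}")
    print(f"Another VM on an existing host: ${vm_cost:,}")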

If you have questions or comments, please leave them below! I'd love to hear your thoughts.

About

Since 1994, Scott Lowe has been providing technology solutions to a variety of organizations. After spending 10 years in multiple CIO roles, Scott is now an independent consultant, blogger, author, owner of The 1610 Group, and a Senior IT Executive w...

25 comments
drfredp

Dell says the M600 maxes out at 64GB RAM. You didn't test the max RAM config?

jon911

I'm considering a similar setup, except I'd like 10Gb Ethernet links to my iSCSI. I see the M6220 has 2 (or 4 for fiber) "uplinks". As 1Gb links are plenty of bandwidth to the outside world, I'm wondering if I can use the 10Gb links for my iSCSI? Any thoughts?

salman

Hi, we are facing a similar issue with a Dell M1000e chassis. I have two Cisco Catalyst 3032 I/O modules in the M1000e and I am seeing a very high response time issue. I have checked the response in two ways: first, I disconnected the chassis from my office network, and when I ping the servers internally the response time gradually rises up to 700ms; second, I checked the same with the chassis connected to my office network and the response time is the same, which is also causing network disconnection problems. The chassis also shows some abnormal behavior: when I remove the chassis CMC network cable, the chassis suddenly sounds like a jet engine for a few seconds, the monitor console shows "initializing" in all rows, and the chassis front LCD display shows that two of my server slots are unknown and the chassis is unable to recognize the blades in those two slots. As a workaround I rebooted the chassis properly, and after the reboot it showed all my servers and the network issue seemed to be resolved, but after an hour or so it started again. I am also in contact with the Dell support team. Can you please advise any solution?

oztek

I actually have 2 M6220 switch modules that are almost brand new. Dell couldn't supply the Cisco switch modules in time, so they supplied the Dell modules, which I replaced with the Cisco switch modules when they became available. Our Dell account manager has said they don't want the M6220 switch modules back and they are mine to do with as I please. If anyone is interested in buying them, please contact me. This is legitimate and I will happily supply full details including service tag numbers to anyone interested.

scott_hanson

Scott, great writeup. I'm a firm believer in sharing knowledge, have been ever since my BBS days :-) I'm part of Dell's Enterprise Technology Center (www.delltechcenter.com). Just wanted to point you that direction in case you weren't aware of us as a resource to help. - Scott Hanson Dell TechCenter www.delltechcenter.com

IT Generalist

Scott, thanks for keeping us up to date with your data center upgrade progress and for replying to my questions in your previous posts. I am also planning on upgrading our data center to a SAN and virtualization type of environment. I'd like to know what other SAN solutions you evaluated from other vendors and what you didn't like about them. How much were you able to save over the other vendors? I am looking into the NetApp FAS2000, EMC Celerra NS Series/Integrated and CLARiiON AX4, Dell/EqualLogic PS5000XV and HP StorageWorks 2000i MSA. How do you plan on backing up data to a tape drive from your SAN storage? What made you set up the dual SAN controllers as Active/Active rather than Active/Passive? Thanks.

jkilungya

Hi, I'm a tech at one of the Dell distributors in Kenya; this technology is about to reach our market. Please send me information from your experiences. This post was informative. from jkilungya@yahoo.com

vyassh

Thanks, Scott, for the wonderful note. It was very useful. I was in the process of buying a couple of servers for our data centre. Now we are looking into the possibility of deploying blades; though a bit more expensive, in the long run it would pay off.

rflii

I would like to see the total initial cost, power requirements and rack space for the blade chassis and M6220s compared to the same in 1U rack servers and network gear. When I tried this with IBM BladeCenters a couple of years ago, it was not worth the trouble due to the 220V power needed for the BladeCenter.

jsears

Almost four years ago I went through almost exactly the same experience with a Dell 1855 blade chassis and blades and a Dell/EMC CX300 SAN. A trade-off with Dell blades is that after 5 years, when the Dell support expires, you usually must purchase a whole new blade chassis and new blades, because the old blades typically don't fit the new chassis, and they themselves are out of support if you bought them at the same time as the chassis. Ouch. That is, if you need your setup to be under support. So, make sure to build this huge expense every 5 years into your long-term IT plan and annually remind whomever controls your IT budget of this recurring expense.

gordonmcke

Looking forward to hearing how your ESX servers work with VMotion and iSCSI.

andyhassard

Why did you buy so many physical servers? Do you plan on having tens of VMs on your SAN?

zonemath

Hello, I guess you are going to place all of your VMs on the SAN (to use the VMotion feature), right? Also, is it possible to provide us more details on the SAN you chose? Thanks, Mathieu.

piper88

The M6220 has two 10Gb module slots, each of which supports two ports. Slot 1 supports CX4 copper, CX4 12Gb stacking (to other M6220s only), SFP+ fiber, or XFP fiber modules. Slot 2 is the same, except that it does not support stacking and does support a 10GBase-T module, which cannot be used in slot 1 due to power consumption. I haven't seen any iSCSI arrays with 10Gb ports, but I'm sure they're out there or will be soon enough.

Scott Lowe

I wish we'd talked a week ago! I replaced my two Ethernet passthrough modules with M6220 switches. Honestly, Dell gave me a sweet deal to do so. So... now I have two passthroughs with no home :-) Scott

Scott Lowe

I did do a short comparison of rack-based servers with direct attached storage vs blade servers with a SAN. It's somewhere in the IT Leadership section of TechRepublic and gives you some of the information you're looking for. Over a 5 year period, this solution does bring down our infrastructure costs. I don't have exact numbers, but it's enough that it makes a difference. Scott

Scott Lowe

Power was very easy for this project. We already had two UPSs with appropriate connectors for the blade chassis.

Scott Lowe

I lease most of my hardware, so I have permanently budgeted replacement money. I'm OK with Dell changing things up every 5 years or so. If they did it more quickly, THAT would be of more concern. That said, I think they have a good thing with the M1000e. I hope it lasts longer and gets more attention than the 1855/1955 platform. Scott

oztek

I can tell you that we have two ESX servers on M600 blades using a Dell MD3000i iSCSI storage solution and so far it seems to be working fine.

Scott Lowe

It's a dual purpose. The 32GB configurations are for server consolidation. The 8GB servers will be used for Terminal Services for a bunch of new machines in some of our labs and to begin to replace some of our public computers with terminals.

spicc7

Thanks for posting this. Can you share any info on configuring the M6220s with the M1000e? Do you have the switches configured in a stack? Have you configured port aggregation? If one of your blades were a firewall (e.g., ISA in a DMZ), how would you configure the switches? Thanks for any help you can provide.

cmdux_consult

Hi there, NetApp is really great with ESX. You can use both iSCSI and FC. You also have the option of a lot of LUN types, just to name some: Solaris, Windows, HP-UX, AIX, Linux, NetWare and of course VMware. NetApp's snapshot features work perfectly with VMware and can be used both for (disk) backup and for deploying new servers/clients in a couple of seconds (cloning). Especially in the ESX case, using NFS will result in increased performance and make virtualization administration easier. Have a nice day and a good weekend to all. Gheoghe (A NetApp evangelist, but not a salesman, just an enthusiast and user) :-)

Scott Lowe

I went with an EMC AX4. I'll provide more details in my next post. Scott

oztek

I could've given you a sweeter deal!! :)

IT Generalist

I would also add cost of power and cooling to the comparison. Every major vendor has some sort of ROI/Cost-benefit calculator which I would also check out to get the total savings.
