
Dell M1000e blades: First impressions

Scott Lowe is overhauling his data center and just received a new Dell M1000e blade system. Here are his first impressions of installing and powering up the new servers.

I've posted a couple of times regarding my server room/infrastructure overhaul project. This last week, we received our Dell M1000e blade system and I thought I'd share with you some of my first impressions.

The arrival was something of a surprise, not because we weren't expecting the shipment, but because I didn't really expect the entire chassis, five blades, and all of the various modules to ship in a single box. Yes - the entire system that we ordered came in one box. With enough manpower, it would have been possible to simply unbox the chassis, mount it in a rack, plug it in, and get started!

Our original plan was to stage our installation: remove some old hardware one day, install the chassis the next day, get the unit configured, and then bring servers online one by one. As they say, the best laid plans... well... sometimes don't go quite according to plan. An unrelated event on campus meant that one of our new servers had to be brought online sooner than expected. In short, we needed a new server up pronto -- our last-ditch effort to correct a serious service problem affecting our ability to register students for classes. In a college, course registration is sort of important. The details of that problem aren't important except for the fact that a new server was needed.

We mounted the chassis in a new server cabinet in our server room after installing the rail kit. We simply removed each of the five blades from the chassis to lessen the load, and then two of us lifted the remaining weight and slid it onto the rail kit that came with the shipment. That was it for the physical installation. Besides being a little heavier, it wasn't much more dramatic than installing a single server.

Our blade chassis has six power supplies, two 24-port gigabit Ethernet passthrough modules, and two gigabit Ethernet switch modules on the back. We also have a KVM and two management modules. To get our chassis up and running, we connected one of the management modules and one of the switch modules, and also connected a keyboard, monitor, and mouse to the KVM. With that out of the way, we ran through the initial chassis configuration routine, which was quite simple. The process grabbed DHCP IP addresses and displayed them on the chassis' integrated LCD panel.
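For anyone curious what that initial configuration amounts to, the chassis management network settings can also be checked or changed later from the CMC's racadm command line (reachable over SSH or the serial port on the management module). The following is only a minimal sketch with placeholder addresses, assuming the default root login -- not a record of the exact commands we ran:

    # Show the IP address the CMC pulled from DHCP
    racadm getniccfg

    # Or pin a static address instead of DHCP
    # (the address, netmask, and gateway below are placeholders, not ours)
    racadm setniccfg -s 192.168.0.120 255.255.255.0 192.168.0.1

Once the CMC has an address you can reach, the rest of the chassis configuration can be handled through its web interface, as a couple of commenters point out below.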

Next up: powering up one of the servers. We pushed the power button and... nothing. Each server's power LED blinked a few times and then the server shut down. A call to tech support resulted in the removal of the gigabit Ethernet daughter card from the server, after which the server booted with no problems. We're still left with two onboard gigabit Ethernet ports, which is all this server needs for now. Eventually, however, once we get our iSCSI SAN in place, we'll need those daughter card ports, and we're working with Dell support to determine the cause of the problem.

With the server booted, we were able to get things up and running and return our problem service to operation.

So, here are the good points:

  • One box! Honestly, I was stunned to see this. Sure, it was darn heavy, but it made for a very easy installation.
  • Easy initial setup. The initial setup was quick and painless. We got the chassis to a usable point within minutes.
  • Good servers. They're fast, economical, and well designed.

And now for the cons:

  • Boot much? I'd really love it if the servers booted on the first try with all of the add-ons installed. Although I'm certain that Dell will get this solved, it's just annoying.
  • CD/DVD. Maybe I'm missing something, but I'd like to see a CD/DVD drive connected through the KVM and integrated into the chassis. Sure, the M1000e/M600 combination provides a number of ways to access a CD/DVD drive, including USB ports on the front of each server, but a chassis-integrated DVD drive would make life a little simpler for administrators.

The support forums are still somewhat lacking for the M1000e, but I can't fault Dell for this. I knew going into this that I was buying a brand new product.

At the end of the day, even with the cons I listed, I'm still very happy with the solution. As things develop, I'll continue to report back, particularly once I find out what's wrong with our daughter cards.

About

Since 1994, Scott Lowe has been providing technology solutions to a variety of organizations. After spending 10 years in multiple CIO roles, Scott is now an independent consultant, blogger, author, owner of The 1610 Group, and a Senior IT Executive w...

11 comments
jay_reddy

Scott, I am the lead engineer on the M1000e; I am intrigued by your mezzanine card problem. Could you please email me your contact information? I would like to see if I can assist. You can reach me at jay_reddy@dell.com. Jay

d.g.bunting

We found that removing all the modules from the chassis helped; it took four people to lift one into our datacentre! With all the modules removed, two people could easily lift the chassis.

john_oeffner

Here's a great initial setup document for the chassis and components: http://delltechcenter.com/page/10G+Blades+Cookbook In regards to the CD/DVD, did you try the virtual media built into each blade through the iDRAC?

The 'G-Man.'

This sounds great - any chance of a few photos??

Dusterman

I am impressed!! This is why we [a small... very small computer and laser printer sales and service company] always recommend Dell to our customers :-). One "atta boy" for you, Jay! Mike

d.g.bunting

The KVM on this blade centre is also much better and can cascade from the Dell 2161DS by simply plugging in one CAT5 cable. Once you have the KVM plugged in, you can configure the CMC's (Chassis Management Controller) IP address and then do the rest of the configuration through a webpage.

Scott Lowe

John, We haven't gotten as far as getting the iDRAC up and running yet. The initial iDRAC attempt didn't work and, as I mentioned, we were in a hurry. I hate deploying servers like that, but sometimes, it's necessary, unfortunately. I didn't mention it since I hadn't really looked into it much yet. Scott

Scott Lowe

John, I appreciate the link! Scott

Scott Lowe

G-man, Photos are coming. I didn't have a chance to snap any for this posting, but will for my next one. Scott

sharac1

Hi, I've also bought an M1000e with 4x M600, a Brocade 4424, and an M6220, and I can confirm it's all good :). I only have two problems: the first being a dumb Dell representative who forgot to mention that all six PSUs have C19/C20 high-amp plugs, so you need either special cables, a UPS close by, Dell power distribution units, or marine-type outlets; the second being the iDRACs not forwarding email alerts to my Exchange box. I was also surprised by the lack of a DVD drive on the chassis, but found the iDRAC a much better and easier solution (just plug the chassis into your LAN, let the DRACs get DHCP addresses, connect to the iDRAC and launch the console view, connect your laptop's CD-ROM drive (or mount an image), and voila, server setup is on). I just hope the blades will work as advertised, or that the service support offered by Dell will be comparable to that of IBM or HP (we had IBM equipment previously). BTW: I've set up a Hyper-V cluster on four blades and so far everything is superb.
