
The top four mistakes organizations make when building datacenters

Etienne Guerou, Vice President of Chloride South East Asia, shares some of his 20 years' worth of expertise in building datacenters. Read on to find out more.

I had the opportunity to speak with Etienne Guerou, Vice President of Chloride South East Asia, a world leader in power solutions. Over a cup of coffee, Mr. Guerou, who has 20 years of experience in designing and building datacenters, briefed me on some of the top mistakes that IT professionals and decision-makers make when building their own datacenters.

Here are the main mistakes he outlined:

1. Harboring the wrong appreciation of a datacenter

One typical mistake is that IT professionals and decision-makers don't differentiate between datacenters. Instead, they treat a datacenter as an all-inclusive black box where "many" servers are to be housed. That mindset is typically exposed when they are confronted with the simple question: "What do you intend to use your datacenter for?"

Ask yourself about the scale and anticipated usage of the datacenter, your expansion plans for at least two to three years down the road, whether blade servers or standard rack-mount servers will be used, and so on. Once you answer these questions, you can extrapolate power consumption, as well as current and future capacity requirements for cabling, cooling and power.
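
To make that extrapolation concrete, here is a minimal back-of-the-envelope sketch in Python. Every figure in it (rack counts, kilowatts per rack, the PUE ratio, 2N redundancy) is an assumption chosen purely for illustration, not a number from Mr. Guerou:

    # Back-of-the-envelope capacity sketch. All figures here are hypothetical
    # assumptions for illustration, not numbers from the article.
    RACKS_TODAY = 10          # racks at launch (assumption)
    RACKS_IN_3_YEARS = 18     # racks after the planned expansion (assumption)
    KW_PER_RACK = 6.0         # blade-heavy racks can draw far more (assumption)
    ASSUMED_PUE = 1.8         # covers cooling, UPS losses, lighting (assumption)

    def it_load_kw(racks: int, kw_per_rack: float) -> float:
        """Critical (IT) load drawn by the racks themselves."""
        return racks * kw_per_rack

    def facility_load_kw(it_kw: float, pue: float = ASSUMED_PUE) -> float:
        """Total facility draw, estimated from the IT load and an assumed PUE."""
        return it_kw * pue

    for racks in (RACKS_TODAY, RACKS_IN_3_YEARS):
        it_kw = it_load_kw(racks, KW_PER_RACK)
        print(f"{racks} racks: ~{it_kw:.0f} kW IT load, "
              f"~{facility_load_kw(it_kw):.0f} kW total facility draw, "
              f"~{2 * it_kw:.0f} kW of UPS capacity for a 2N power design")

The point is not the specific numbers but the exercise: once rack counts and server types are pinned down, the cabling, cooling and power capacities follow from them.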

2. Attempting to run a datacenter from improper facilities

It would be a mistake to simply acquire an ad-hoc facility and have it rebadged as a datacenter without a proper appraisal of its suitability, cautions Guerou. He cites an example in which a client, after having signed the lease for a fairly large space, sought out Mr. Guerou's advice on how to proceed. To the client's horror, the answer was that the venue was simply not suitable for a datacenter due to its granite flooring and thick beams across the ceiling, which resulted in an effective height that was simply inadequate for cabling and cooling purposes.

While it might not be possible for most organizations to put up custom-built datacenters on a whim, what this client should have done was to get an experienced consultant involved right from the get-go.

Ideally, in Mr. Guerou's own words: "A datacenter should be a technical building dedicated to a very particular business of processing data."

3. Buying by brands

Another common mistake is that many IT professionals attempt to buy into selected brands. While this strategy might work well when it comes to standardizing on servers or networking gear, an efficient and well-run datacenter has nothing to do with specific hardware brands or models. Rather, you should approach the datacenter from the perspective of a complete solution, where the entire design has to be considered as an integrated whole.

As an unfortunate side-effect of strong marketing by enterprise vendors, many users have, consciously or subconsciously, bought into the idea of designing a datacenter by snapping together disparate pieces of hardware. While this is not wrong in itself, it is imperative that the end result be evaluated as a whole, and not in a piecemeal fashion.

Hardware decisions, such as the types of servers, the positioning of racks, networking equipment and redundant power supplies, should dovetail properly with infrastructure such as cooling, ventilation, wiring, fire-suppression systems and security measures.

4. Rushing onto the "Green IT" bandwagon

The increasing popularity of "Green IT" has vendors unveiling new servers and equipment touted for their superior power efficiency. While the idea is definitely laudable, you should separate the marketing hype from actual operational consumption.

For example, two UPS units from "vendor X", while individually more power-efficient at 90% loading, might actually offer a much poorer showing when deployed in a redundant configuration, where each will end up running at 45% loading. In the absence of proper scrutiny, green IT initiatives could degenerate into a numbers game.
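
As a rough numerical sketch of that effect, the following Python snippet compares the mains power needed to carry the same critical load on one UPS at 90% loading versus a 1+1 redundant pair at 45% loading each. The efficiency curve and the load are assumptions for illustration only, not vendor data:

    # Hypothetical UPS efficiency comparison; the curve below is an assumption,
    # not a datasheet value from any vendor.
    EFFICIENCY_BY_LOAD = {0.45: 0.82, 0.90: 0.90}  # efficiency vs. fraction of rated load

    def input_power_kw(load_kw: float, load_fraction: float) -> float:
        """Mains power drawn to deliver load_kw at the given loading level."""
        return load_kw / EFFICIENCY_BY_LOAD[load_fraction]

    IT_LOAD_KW = 90.0  # critical load to be carried (assumption)

    single = input_power_kw(IT_LOAD_KW, 0.90)             # one UPS at 90% load
    redundant = 2 * input_power_kw(IT_LOAD_KW / 2, 0.45)  # two UPS at 45% load each

    print(f"Single UPS at 90% load: {single:.1f} kW in, {single - IT_LOAD_KW:.1f} kW lost")
    print(f"1+1 pair at 45% load:   {redundant:.1f} kW in, {redundant - IT_LOAD_KW:.1f} kW lost")

Under these assumed figures, the redundant pair roughly doubles the losses, which is exactly the kind of gap that headline efficiency numbers can hide.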

At the end of the day, you should take the overall power efficiency - or power factor - of the entire datacenter as the benchmark, rather than weighing individual vendor claims. After all, power efficiency has always been a yardstick of a well-run datacenter.

In parting, Mr. Guerou has the following advice for organizations thinking of building their own datacenter: "Hire an experienced consultant."

Do you have anything to add on things to bear in mind when building a datacenter?

About

Paul Mah is a writer and blogger who lives in Singapore, where he has worked for a number of years in various capacities within the IT industry. Paul enjoys tinkering with tech gadgets, smartphones, and networking devices.

13 comments
kerry.sisler

If the HVAC is properly supported by generators with reasonably short startup periods (typically 3-5 minutes), then it does not need to be on UPSs. The interesting variation to the discussion is putting the HVAC blowers on UPS, but putting the compressors only on generator. A basic design which pre-plans for no hotspots and keeps the air flow going is the key. Of course, this assumes that the entire continuity-of-operations plan is well written, thoroughly trained, and exercised on a regular basis. There are many subtleties, but that is the gist of it.

Jaqui

[b]Ideally, in Mr. Guerou's own words: "A datacenter should be a technical building dedicated to a very particular business of processing data."[/b] Yet I know of a building designed from the start for that exact purpose that doesn't get used for it. They built it for fiber networking, on-site backup power generation, air conditioning [humidity control as well as heating and cooling], yet it's only really used by small companies for their offices. The entire 4-story structure is a total of around 10,000 square feet, and is not located for convenient access at all. You can only enter the location from the West. [Drive North on a divided North-South road, turn East into the property.] It would be far easier to convert a standard office space in a more convenient location than to use this one building.

jmacsurf

1. Deciding on raised floors or using cable ladders; usually this is not weighed heavily enough in terms of cost, cooling, accessibility, aesthetics and basic floor paneling and rack layout. 2. Using 2-post racks with cable organizers at each end is superior to rack cages for Layer 2 devices. 3. Rack cages for Layer 3 devices are superior to #2: fewer ports = fewer cables. 4. Cold-air aisles and hot-air uptake returns not placed in an about-face position in smaller data centers.

mklinz01

Additionally there is a strong tendency to continue to commit the following 4 mistakes: 1) The physical equipment layout is not a part of the original planning. 2) Improper orientation of the equipment to the airflow in the datacenter. 3) Inadequate planning for power redundancy at the rack PDU level. 4) Placing racks with the exhaust of each row blowing into the air intake of the next row.

tuomo

Good points, even if a little simplified when you consider the whole data-center - no wonder, there are entire books on even individual aspects of building data-centers. They are complicated; small or large doesn't really matter, but the purpose makes all the difference. A 24x7 data-center has definitely different requirements than an 8x5 one. A 24x7 search data-center is very different from a 24x7 transaction data-center. Everyone wants security, but to what level - huge differences (if you know where the center is?). Is it 450 feet under granite and EMP protected, or just EMP protected? Is it a manufacturing data-center for 24x7 operations with six 9's, or one for administrative functions with two 9's? Does the data-center need fully separated external and internal networks, even separated by concrete walls? Does the data-center need a realtime, fully functional backup center? The parameters totally change! And so on.

I would add a fifth point - the external environment. Be it available workforce, maintenance and union contracts, alternative accessibility in case of emergency, disasters, etc. And vendor reliability and/or alternate equipment and resource availability. Easy and legal(!) evacuation plans; self-sustained in case people have to stay there one or two weeks, etc.? Everyone wants toilets and maybe showers - think AZ, NM, CA, etc. Are the employees suited to working in an isolated location or base, in case the data-center is one of those? What are the plans when (not if!) both primary and backup network communications fail? What are the plans when (not if!) both feeding power grids fail and the local UPS / generators start failing? Tests and test execution(!) plans for all the connections - local AND global. Are the premises really toxin-free, with no surprises a year later? What are the state / government plans for water resources nearby? Is it possible to sustain the employees living in that area? Does the remote monitoring / management function fulfill the same rigorous requirements as the data-center itself? And so on - a never-ending and evolving list!

Why mistakes(??) I have seen mistakes in all (REALLY!) of the issues mentioned, and each and every time it has been inexperience! If all you know is a hammer, all the problems look like nails! And this is the largest problem in IT today - or actually it always has been, but today it is even worse! I see bad mistakes today in issues which were known and handled 30+ years ago but are not mentioned in some training, certificate, whatever, today and so are forgotten - or handled by a Cxx who has, for example, only an IT background? Or separate issues are left to "specialists" without any real oversight? And I don't mean task forces or committees with oversight, but someone who is responsible and has the authority to make it happen!

grgoffe

Howdy, The answer is obvious, "Hire an experienced consultant"... What was left out was, "AND LISTEN to him/her". Regards, George...

oldfield

Remember that the UPS has to power the air-conditioning too! Yes, it has been forgotten, and it got really hot really quickly.

mattohare

I don't buy the idea that a building needs to be built for the purpose to be fit for it. Perhaps we need another (shorter) standard rack size that would fit into spaces with less floor-to-ceiling height. Especially in this economy, we should look at reusing buildings when we can. I bet a bit of creative planning could have made good use of the building. I agree that we need to spend more time in the planning stage (in regards to green-friendliness and location-picking), but that should not stop creative thinking. On the contrary, planning should encourage it.

SaulGoode

Generator, yes, but I don't see AC on a UPS as a real solution. You could power it for an hour, maybe 4, but what if your power is out for 4 days? We had a situation where a hurricane (Ike) hit our primary building and took out power for 4 days. Our generator kicked on and kept our primary datacenter online (even though we had transferred function to our Denver, CO location). Without AC we would have had to shut it down.

etienne.guerou

I do agree with your comments. Nevertheless, a clear understanding of the business model is necessary. In South East Asia we face many customers mixing colocation, telco applications and hosting in a kind of "all-in-one data center". Cooling and power availability are very different for each application due to heat density. New rack designs could be a solution to accepting more load varieties, maybe!

vwsportruck

Matt, I'm with you on being able to reuse buildings when appropriate, but I don't think shorter racks would be a solution unless the technicians are shorter as well. In a room with an 8-foot ceiling, adding an elevated floor takes off about a foot and a half, giving someone like myself a close margin to hitting the ceiling. Just my 2c -=Mark

JS3Alternate

I've been building Tier 4 datacenters for a few years now, and with high-density configurations, you almost have to put cooling on its own dedicated UPS. With no cooling, you can lose the room in as little as 3 minutes even though you have 20 minutes of UPS power available for the IT loads. The only solution is two UPS systems: one for IT and a mechanical UPS (either rotary, delta conversion or a double conversion running in forward transfer mode). And it goes without saying, all of that MUST be on generator power.
