When I started my career, I worked as a Junior Programmer/Analyst for a large Fortune 500 company. I was on the job for less than a week when a veteran employee invited me to see the data center. For a 20-year-old engineering geek, the chance to see the “big iron” was fascinating.
As we stepped off the elevator, my guide was quick to note that all the windows in the data center were made of bulletproof glass and that the guards waiting at the main entrance did, in fact, carry firearms.
We walked through about 10 doors and a mile of corridors. As each door opened, it reminded me of the opening to the ‘60s television show “Get Smart.” We entered the home of the System 380, IBM’s largest and fastest mainframe computer. You could barely speak above the roar of the cooling units, whirring of the disk drives, and spinning of the tape reels.
Are data centers like this one dinosaurs lost in the evolutionary shuffle? For a number of years, consultants have told us that the mainframes were going to give way to smaller client/server systems. And in many cases, this has held true. Even organizations still using mainframe-class computers have found that their environmental needs are significantly reduced—the units are physically smaller, use less power, and generate less heat.
In this article, I’ll discuss what the term “data center” means in IT today.
New life for old spaces
In many cities, these fortresses of data are being readapted for use in the client/server age. I was involved in a recent conversion where all of the mainframe paraphernalia was replaced with row upon row of co-location cabinets.
This was a win-win situation, as the original owner was able to get a premium price for the space, and the co-location company was able to get space fitted out for less than 50 percent of what it would have cost to start from scratch.
In another instance, a large corporation that had been a mainframe shop converted its computer room into a client/server data center. It was able to move its Network Operations Center (NOC) and a large test lab into the space while retaining the key components that made the room valuable in the first place.
The need is still there
One of the key maxims to remember is that although individual servers are generally small, most companies that have adopted an Internet-based strategy have found that the sheer number of servers required still demands substantial real estate dedicated specifically to their operation. In addition, PCs and UNIX boxes produce significant heat, much more per unit of volume than their older mainframe brethren.
Physical security also remains a valid issue. The data stored on today’s Internet-centric servers is no less valuable than the information we guarded so doggedly on the big iron. Many who go down this path, however, will spend tens of thousands of dollars on Internet security measures such as firewalls and then leave the physical systems out in the open.
I have worked with many clients who kept strategic databases on small, distributed database servers that sat right out on their desktops. As any security expert will tell you, most data theft and destruction happens on the inside. I could walk into any one of those clients’ offices with a Zip disk, walk away with the entire customer database, and no one would know.
What defines a data center?
First of all, let’s define our nomenclature. The phrase “data center” has specific implications, referring to a location where mission-critical IT systems are maintained. In our business, we have begun to refer to these as critical use facilities, broadening the definition a bit to include other mission-critical systems.
A broader definition is: any facility for which the cost of downtime is high relative to the cost of providing uptime.
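To make that definition concrete, here is a back-of-the-envelope comparison sketched in Python. Every figure is an assumption chosen for illustration, not a benchmark; substitute your own downtime cost, expected outage hours, and facility budget.

```python
# Rough downtime-versus-uptime comparison.
# All figures below are assumptions for illustration only.
downtime_cost_per_hour = 50_000       # lost revenue plus recovery labor (assumed)
expected_outage_hours_per_year = 8    # outages expected without hardening (assumed)
annual_uptime_investment = 150_000    # conditioned power, cooling, monitoring (assumed)

expected_downtime_cost = downtime_cost_per_hour * expected_outage_hours_per_year

print(f"Expected annual downtime cost: ${expected_downtime_cost:,}")
print(f"Annual cost of providing uptime: ${annual_uptime_investment:,}")

if expected_downtime_cost > annual_uptime_investment:
    print("Downtime is the bigger risk: treat the room as a critical use facility.")
else:
    print("A standard office environment may be acceptable.")
```

When the first number dwarfs the second, you are looking at a critical use facility, whatever the sign on the door says.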
Examples include telephone company switch sites, production control centers, generating plant control rooms, and hospital operating rooms. All of these cases share a number of key elements that define a critical use facility:
- Reliable power: The public power grid is not reliable enough for a facility that has low tolerance for downtime, especially since you do not have to lose power completely to cause an outage; a surge or sag can be just as damaging. Reliable power is provided by an engineered combination of power conditioners, batteries, and generators.
- Controlled mechanicals: Newer systems are marginally more tolerant of higher temperatures and humidity variations. However, temperature must still be controlled within a 20- to 30-degree range, and humidity must be held within ±15 percent (a more difficult task).
- Fire suppression: Systems overheat, wires short out, and fires happen. In a typical office, the sprinklers would go off and the power would be cut automatically, neither of which helps keep a system operational. That is not good enough in a mission-critical environment.
- Security: Both network and physical security must be addressed. Hackers are real and are out to wreak havoc. Industrial espionage is on the rise as dot-coms look for ways to compete.
- Overall control: Finally, all of these systems must be monitored and controlled. If any of the fail-safes kick in, someone must be aware of it in order to get the primary systems restored before the backup systems fail as well (see the monitoring sketch below).
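To illustrate the overall-control point, here is a minimal monitoring sketch in Python. The Reading class, the check() helper, the thresholds, and the print-based alerting are all hypothetical, chosen to mirror the tolerances mentioned above; a real facility would rely on a dedicated building-management or network monitoring system.

```python
# Minimal sketch of environmental monitoring: poll readings and flag anything
# outside the assumed tolerances. Thresholds and alerting are illustrative only.
from dataclasses import dataclass


@dataclass
class Reading:
    temperature_f: float    # degrees Fahrenheit
    humidity_pct: float     # relative humidity, percent
    on_utility_power: bool  # False means we have failed over to battery/generator


# Assumed tolerances: a 20-degree temperature window and 45% ±15% humidity.
TEMP_LOW_F, TEMP_HIGH_F = 60.0, 80.0
HUMIDITY_TARGET, HUMIDITY_TOLERANCE = 45.0, 15.0


def check(reading: Reading) -> list:
    """Return alert messages for any out-of-range condition."""
    alerts = []
    if not (TEMP_LOW_F <= reading.temperature_f <= TEMP_HIGH_F):
        alerts.append(f"Temperature out of range: {reading.temperature_f} F")
    if abs(reading.humidity_pct - HUMIDITY_TARGET) > HUMIDITY_TOLERANCE:
        alerts.append(f"Humidity out of range: {reading.humidity_pct}%")
    if not reading.on_utility_power:
        alerts.append("Running on backup power: restore the utility feed")
    return alerts


# Example: a reading taken after a cooling unit failure and a utility outage.
for alert in check(Reading(temperature_f=88.0, humidity_pct=52.0, on_utility_power=False)):
    print(alert)
```

The point is not the code itself but the discipline it represents: every fail-safe in the list above needs something, or someone, watching it around the clock.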