Photos: Inside a new build datacentre
Work in progress
Welcome to Slough, or the “Silicon Valley of Berkshire”.
Slough may be a town of only 122,000 people, but its concentration of fibre links makes it home to datacentres for major hosting companies such as Rackspace and Equinix.
This tier 3+ datacentre belongs to Paragon Internet Group, which operates three companies offering shared web and cloud service hosting. The group hosts more than 200,000 websites for 70,000 customers, ranging from individuals to multinational companies.
It is Paragon’s first datacentre and building it was something of a landmark moment in the life of Paragon’s young co-founder and CEO, Adam Smith.
“We’re infrastructure geeks at heart. Myself and three other founders are server nerds,” he said.
“I don’t have any kids so this is basically like my baby. It’s something that I wanted to do for 10 years. It was definitely on my bucket list and now it’s been done.”
It was Smith who dubbed Slough the “Silicon Valley of Berkshire” at an event to show off the datacentre this week.
It took about six months to complete the datacentre, which officially opened in November last year. This shot shows the 9,300 square foot main hall of the datacentre during construction.
The datacentre will cost about £1.5m once fully kitted out with servers. Compared with the cost of co-locating its servers in third-party datacentres, as the company did previously, it will “pay for itself in the first year”, said Smith.
Server aisle
This aisle holds 40 racks, each holding 30 Dell servers. Paragon plans to add two more of these enclosed aisles to the main hall, making a total of 120 racks.
When the other server aisles have been installed, in about four months’ time, this will be a 1MW datacentre.
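As a rough back-of-the-envelope figure (my own arithmetic from the numbers above, not Paragon’s breakdown), spreading 1MW across 120 racks gives an average of:

\[
\frac{1{,}000\ \text{kW}}{120\ \text{racks}} \approx 8.3\ \text{kW per rack}
\]

In practice the budget for the servers themselves will be somewhat lower, since part of that megawatt is consumed by cooling and other overheads.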
Paragon also operates servers in a nearby facility owned by Equinix and smaller datacentres in Bulgaria and the US. It uses the Equinix facility to provide dual site failover to customers.
The firm is planning to open more datacentres, one in Slough and another outside the UK.
Backup power
Servers in the datacentre will continue running in the event of various forms of power outage, thanks to its 2N power supply.
Each server has two separate power feeds, and each feed can switch from mains to generator power in the event of an outage.
Power to the datacentre is fed through uninterruptible power supply (UPS) units that prevent the datacentre from being affected by a mains outage.
If the mains fails, the UPS provides power to the datacentre while it switches to generator power, which takes about 30 seconds. The UPS units are fed by phase one batteries, which provide about five minutes of power when the datacentre is at full load.
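Put side by side (a simple illustration using the figures quoted above), the batteries only need to bridge the roughly 30-second generator start-up, so five minutes of autonomy leaves a comfortable margin:

\[
\frac{5 \times 60\ \text{s of battery autonomy}}{30\ \text{s to switch to generators}} = 10\times \text{ margin}
\]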
Generators
These generators are fuelled by 13,000 litres of red diesel, which will power the datacentre for nearly eight days at current load and up to 48 hours at full capacity.
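For illustration, those runtimes imply the following approximate burn rates (my arithmetic from the quoted figures, not Paragon’s own):

\[
\frac{13{,}000\ \text{L}}{48\ \text{h}} \approx 270\ \text{L/h at full capacity}
\qquad
\frac{13{,}000\ \text{L}}{8 \times 24\ \text{h}} \approx 68\ \text{L/h at the current load}
\]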
Cooling
Server temperatures are kept low by direct expansion cooling.
The datacentre runs cold aisle containment, where cold, pressurised air is passed into an enclosed corridor at the front of the servers and is then drawn through the server chassis to cool the machines.
The cooling system keeps motherboard temperatures at about 24C and consumes up to 32 amps per rack of servers.
The datacentre has a PUE (power usage effectiveness) of 1.3. PUE is the ratio of the total power delivered to a datacentre to the power that actually reaches the IT equipment, so a lower figure means less energy is lost to cooling and other overheads. The “gold standard” for the industry is 1.5.
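Expressed as a formula:

\[
\text{PUE} = \frac{\text{total facility power}}{\text{IT equipment power}}
\]

So at a PUE of 1.3, each 1kW of IT load (an illustrative figure, not Paragon’s) implies roughly 1.3kW drawn from the mains, with about 0.3kW going on cooling and other overheads.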
Smith attributes its low PUE rating to running cold aisle containment and using PDX cooling units, which adjust the number of compressors running based on air temperature inside the datacentre, “so we’re only cooling what we need to cool”.
Some companies, such as Facebook, have achieved a PUE of 1.07 through methods such as fresh air cooling.
The datacentre chooses not to use outside air because, according to Smith, “it’s full of contaminants and you can’t control the humidity”.
Server rear
Hot air is blown out of the rear of the servers, seen here.
The hot air rises into air handlers where it is cooled, pressurised and piped under the floor back into the enclosed aisle where the servers sit.
The cool air then flows through the servers and out the back where the cycle starts again.
Server front
These are dual-socket Dell servers and are used to run a mix of virtual images and databases.
Because many servers are virtualised and run as one consolidated system, the spare capacity in the datacentre is fairly low, at about 20 percent.
The datacentre uses Hyper-V, KVM and Xen bare metal hypervisors and is also investigating the use of software-defined networking.
The datacentre also offers a Hadoop cluster with 200TB of storage running Cloudera’s distribution of Hadoop.
Dell plans to offer some of its cloud services from the Paragon datacentre, including its Dell Cloud Management tool and Boomi, which links legacy applications to cloud apps.
Cabling
Cabling at the rear of the servers. Neat cabling helps ensure unobstructed airflow for cooling, said Smith.
Network traffic is relayed to the datacentre via a 150km dark fibre ring, which links the centre to the Equinix LD5 and Telehouse Docklands facilities, allowing traffic to be rerouted along another path if there is a break in the connection.
Fire suppression
The fire suppression system pumps in a mix of atmospheric gases – such as nitrogen and carbon dioxide – to reduce the oxygen content in the room to the point that flames are extinguished.
“It’s like being at high altitude but it won’t kill anyone,” said Smith.
“At any hint of fire it will deploy and stop the fire before it’s taken hold.”
CO2 is released into the room to dilate the blood vessels and increase the rate at which oxygen can be absorbed by people inside, he said.
Some other datacentres use inert gases or water-based systems to extinguish fires.