The Open Compute Project is breaking new ground with its goal of developing efficient, low-cost servers and data centers that follow the open source model.
What do you do when you have one of the most widely used platforms on the planet -- and you need to figure out the best way to make that platform efficient? You open source the project. Facebook, in an effort to scale its platform far more efficiently, has brought Open Compute to life. The project's goal is to help the IT and data center industries scale beyond current limitations more efficiently and cost effectively, with better adherence to industry-wide standards.
Lofty goals. Thankfully, Facebook has open sourced the project, so IT pros worldwide can access the documentation and schematics for the hardware and help improve nearly every aspect of it. Here are 10 things that may convince you to participate as a developer or even (eventually) migrate to the Open Compute platform.
1: It was started by Facebook
If you're looking for someone to create a platform based on scalability and open standards, you need look no further than one of the most widely used platforms in existence. The Facebook data center is pummeled by users day in and day out -- and it's still standing. Obviously, the developers and designers of the Facebook platform know a thing or two about scaling and reliability. If I'm going to look for guidance in these areas, Facebook would be one of the first places I would turn. The project has since expanded globally, so engineers around the world are now helping to improve it.
2: Its goal is efficiency
The focus of Open Compute is efficient server, storage, and data center hardware designs for scalable computing. This means saving money on every aspect of the platform while ramping up its scalability factor. The idea of efficiency permeates every aspect of the platform, including hardware used, environmental impact, power usage, and water usage. All this efficiency adds up to a significant cost savings as well as a much smaller carbon footprint.
3: It has big plans for cold storage
Open Compute has focused a lot of energy on the area of cold storage (where data is stored and rarely read -- often used for legal data and backups). The Open Compute cold storage initiative plans to create a completely separate infrastructure. (Download the cold storage specification here.) And the Open Compute storage initiative doesn't end with data that will rarely (if ever) be accessed. Making use of some dramatic new designs (such as the HYVE -- a torpedo design that serves as a 2xOpenU storage server and can accommodate 15 3.5" drives arranged in a 3 x 5 array), Open Compute could possibly reinvent the way data centers store data.
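To put the density of that 15-drive chassis in rough perspective, here is a back-of-the-envelope calculation. The drive size and rack population below are illustrative assumptions, not figures from the Open Compute specification:

```python
# Rough raw-capacity estimate for a 15-drive "torpedo"-style chassis.
# The 4 TB drive size and 20-chassis-per-rack figure are hypothetical
# assumptions for illustration, not numbers from the spec.

DRIVES_PER_CHASSIS = 3 * 5          # 3 x 5 array of 3.5" drives
ASSUMED_DRIVE_TB = 4                # hypothetical drive size
ASSUMED_CHASSIS_PER_RACK = 20       # hypothetical rack population

raw_tb_per_chassis = DRIVES_PER_CHASSIS * ASSUMED_DRIVE_TB
raw_tb_per_rack = raw_tb_per_chassis * ASSUMED_CHASSIS_PER_RACK

print(raw_tb_per_chassis)  # 60 TB raw per chassis
print(raw_tb_per_rack)     # 1200 TB (1.2 PB) raw per rack
```

Even under conservative assumptions like these, packing drives this densely is what makes a dedicated cold storage tier economical.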
4: It's highly efficient and economical
All this efficiency doesn't just help the environment. Imagine a platform this scalable that also uses 38 percent less energy and costs 24 percent less than a comparable traditional data center.
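As a sketch of what those percentages can mean in practice, the following applies the 38/24 figures to a hypothetical baseline facility. The baseline consumption, electricity price, and build cost are invented for illustration only:

```python
# Applies the reported figures (38% less energy, 24% lower cost) to a
# hypothetical baseline facility. The baseline consumption, electricity
# price, and build cost below are illustrative assumptions.

ENERGY_SAVINGS = 0.38
COST_SAVINGS = 0.24

baseline_kwh_per_year = 8_760_000   # ~1 MW average load (hypothetical)
price_per_kwh = 0.07                # hypothetical electricity price, USD
baseline_build_cost = 10_000_000    # hypothetical build cost, USD

kwh_saved = baseline_kwh_per_year * ENERGY_SAVINGS
energy_dollars_saved = kwh_saved * price_per_kwh
build_dollars_saved = baseline_build_cost * COST_SAVINGS

print(round(kwh_saved))             # 3328800 kWh/year saved
print(round(energy_dollars_saved))  # 233016 USD/year saved on power
print(round(build_dollars_saved))   # 2400000 USD saved up front
```

At data center scale, percentage savings like these compound into serious money, which is why efficiency is the project's headline goal.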
5: It promotes tight adherence to standards
The Open Compute platform is helping create a unified standard that runs from top to bottom. Even the way racks are designed and built is governed by the standards Open Compute is setting. You can download the mechanical drawings for the 1.0 single-column rack here. You'll find the 0.6 Triplet and 1.0 PDU on GitHub.
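One concrete difference the rack standard introduces is the "OpenU": per the published Open Rack specification, an OpenU is 48 mm tall, versus the classic EIA rack unit's 44.45 mm. The comparison below simply encodes those two constants; the chassis heights are for illustration:

```python
# Compares Open Rack's "OpenU" (48 mm, per the Open Rack spec) with a
# classic EIA rack unit (44.45 mm, i.e., 1.75 inches).

OPENU_MM = 48.0
EIA_U_MM = 44.45

# Height of a 2xOpenU chassis (like the storage server in item 3)
# versus a conventional 2U box:
print(2 * OPENU_MM)   # 96.0 mm
print(2 * EIA_U_MM)   # 88.9 mm
```

The taller unit leaves more room for airflow and larger, slower (quieter, more efficient) fans, which is in keeping with the project's efficiency goals.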
6: It's fully open source
Everything. Period. From racks to servers to data center designs to software -- every aspect of Open Compute will be open source. And the very nature of open source will lead Open Compute to become one of the biggest communities of developers working to push forward the impact and design of data centers. Anyone involved with data centers (in nearly any capacity) can appreciate the nature and scope of this.
7: Specifications and mechanical designs are available to everyone
Because the approach to Open Compute was to keep everything transparent, anyone can access the specifications and mechanical designs of nearly every aspect of the platform. This means anyone can help implement and improve the system. Open Compute firmly believes that by sharing the intellectual property of the platform, it can grow faster and farther than any other data center project on the market.
8: It's fully standards compliant
Though some scoff at the idea of standards, it is only by adhering to standards that a more universal platform can be developed and expanded. Open Compute's idea of standards goes beyond software and will apply to hardware and design. This should make for a much faster and easier implementation on every level.
9: It supports AMD, ARM, and Intel
You want support? How about a data center that can deliver support for multiple architectures? Most data centers align themselves with one architecture or another. Open Compute even has an ARM project (an ARM-based server that converts the Open Vault JBOD into a full-blown storage server). You can download information and specs on the various motherboards and architecture support from the motherboard page of the Open Compute site.
10: Anyone can get involved
To get involved in Open Compute, all you have to do is download the membership agreement, fill it out, and submit it. Then, just pick a project to be involved in and sign the Open Web Foundation Contributor License Agreement for that project. With the paperwork out of the way, you can work on a project or even try to incubate a new project on the Open Compute GitHub.
There is no reason the data center can't (or shouldn't) be an open platform. By opening the source, the design, and the schematics, it is now possible for engineers across the globe to help make the data center a far more efficient and scalable platform. Businesses that may not have been able to afford a data center might now have a platform within their reach.
- The 21st Century Data Center (ZDNet special report page)
- Executive Guide: The 21st century data center (free ebook)
- Open Compute: Does the data center have an open future?