Learn more about the Linux Ceph project, which seeks to provide a petabyte-scale distributed file system with high performance and solid reliability.
Feeling a little extra geeky, I thought I would share this new paper on the IBM site, "Ceph: A Linux petabyte-scale distributed file system: Exploring the Ceph file system and ecosystem" by author M. Tim Jones. Now, normally this kind of thing is not my light reading, but trying to wrap my head around just how much a petabyte is, I thought I would take a look. Ceph evolved from a Ph.D. research project at the University of California, Santa Cruz. The author states that it is now in the mainline Linux kernel, though for evaluation purposes, not yet for production environments.
The goals for Ceph are to develop a distributed file system capable of scaling to multi-petabyte systems while maintaining high performance and reliability -- a tall order:
Ceph has developed some very interesting concepts (such as dynamic metadata partitioning and data distribution and replication), which this article explores briefly. Ceph's design also incorporates fault-tolerance features to protect against single points of failure, on the assumption that at large scale (petabytes of storage), storage failures will be the norm rather than the exception. Its design does not assume particular workloads but includes the ability to adapt to changing distributed workloads to provide the best performance. It does all of this with the goal of POSIX compatibility, allowing it to be deployed transparently for existing applications that rely on POSIX semantics (through Ceph-proposed enhancements). Finally, Ceph is open source distributed storage and part of the mainline Linux kernel (2.6.34).
Check out the detailed article, which includes cool figures and graphs to illustrate the architecture of the system and the components.
What does a petabyte look like?
Related to this, I found a blog entry from online backup service provider Backblaze, "Petabytes on a budget: How to build cheap cloud storage," which gives a fairly detailed account of how they built their own custom storage pods: 67-terabyte 4U servers for $7,867. They offer a cost-comparison chart of the major vendors, which is what drove them to create their own storage:
Based on the expense, we decided to build our own Backblaze Storage Pods. We had two primary goals: Keep upfront costs low by using consumer-grade drives and readily available commodity components and be as power and space efficient as possible by using green components and squeezing a lot of storage into a small box.
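Those two numbers (a 67 TB pod for $7,867) make it easy to see why they skipped the major vendors. A quick back-of-the-envelope calculation, using only the figures quoted above:

```python
# Back-of-the-envelope math from the Backblaze post's figures:
# one 4U storage pod holds 67 TB and costs $7,867 to build.
POD_COST_USD = 7_867
POD_CAPACITY_TB = 67
TB_PER_PB = 1_000  # decimal units: 1 PB = 1,000 TB

cost_per_tb = POD_COST_USD / POD_CAPACITY_TB          # about $117 per TB
pods_per_pb = TB_PER_PB / POD_CAPACITY_TB             # about 15 pods per PB
cost_per_pb = cost_per_tb * TB_PER_PB                 # well under $120,000 per PB

print(f"${cost_per_tb:,.2f} per TB")
print(f"{pods_per_pb:.1f} pods per PB")
print(f"${cost_per_pb:,.0f} per PB")
```

At roughly 15 pods (60U of rack space) per petabyte, their claim of "just under half a petabyte" in a single rack lines up with this math.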
They do a good job of providing a 3-D model of the design and diagrams that show the components and the power wiring setup. A photo in the post shows Tim Nufire of Backblaze deploying pods in a rack that "contains just under half a petabyte of storage." Pretty cool.

Petabyte tidbits:
- What's bigger than a petabyte? A zettabyte, of course! It's equal to one million petabytes. Whoa!
- The entire rendering of Avatar required over 1 petabyte of storage space, which is pretty crazy when you consider Mozy's estimate that a petabyte would store about 13 years' worth of HDTV video.
- World of Warcraft requires 1.3 petabytes of storage to maintain.
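To keep the prefixes straight, here is a minimal sketch of the decimal (SI) storage units; nothing here comes from the articles above, just the standard definitions where 1 KB = 10^3 bytes:

```python
# Decimal (SI) storage unit prefixes, each step 1,000x the last.
UNITS = ["KB", "MB", "GB", "TB", "PB", "EB", "ZB"]

def to_bytes(value, unit):
    """Convert a value in the given SI unit to a byte count."""
    exponent = 3 * (UNITS.index(unit) + 1)
    return int(value * 10 ** exponent)

one_pb = to_bytes(1, "PB")  # 10**15 bytes
one_zb = to_bytes(1, "ZB")  # 10**21 bytes

# A zettabyte is indeed one million petabytes
# (with an exabyte, 1,000 PB, sitting in between):
print(one_zb // one_pb)  # 1000000
```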