Why Ceph could be the RAID replacement the enterprise needs

Scalable storage platform Ceph reached a major stability milestone this month, and has become an important option for enterprise storage as RAID fails to scale to high-density disks.

Image: iStockphoto/backtasan1

As both traditional and solid-state disks grow increasingly dense, continued reliance on RAID has become a liability: rebuild times have skyrocketed, and the enterprise needs a new storage option.

Ceph, an open source distributed storage platform whose development is sponsored by Red Hat, took a significant step toward being production-ready this month. To be precise, Ceph comprises multiple components, and it is the CephFS filesystem component that was marked stable in this release.

What is Ceph?

Ceph is a distributed storage platform that provides interfaces for object, block, and file storage in a single unified system, and it aims to scale to the exabyte level (10^18 bytes). Ceph replicates data across disks for fault tolerance, entirely in software, which makes it hardware independent. To that end, Ceph can be categorized as "software defined storage," though in contrast to most entries in that category, Ceph is open source software, licensed under the LGPL.
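To make the object interface concrete, here is a minimal sketch using the python-rados bindings. It assumes a running cluster, a readable configuration file at /etc/ceph/ceph.conf, and an existing pool named 'mypool'; the pool name and object name are hypothetical, chosen for illustration.

```python
import rados

# Connect to the cluster via a standard Ceph config file
# (path is an assumption for this example).
cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()

# Open an I/O context on a pool, then store and read back a named object.
ioctx = cluster.open_ioctx('mypool')   # 'mypool' is a hypothetical pool
ioctx.write_full('greeting', b'hello, ceph')
print(ioctx.read('greeting'))          # b'hello, ceph'

ioctx.close()
cluster.shutdown()
```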

SEE: Data storage: Preferred vendors, demands, challenges (Tech Pro Research)

Because of the modular way in which Ceph is designed, there are various milestones that constitute a "stable release." The first of these was Argonaut in July 2012. This month's release, Jewel, is the first release in which CephFS is considered stable and feature-complete.

Jewel brings complete repair and disaster recovery tools to CephFS, along with improved administration and diagnostic tools. Recommendations for the filesystem backing Ceph's storage daemons have also shifted: prior to this release, XFS was recommended for large-scale deployments and ext4 for small-scale deployments, with Btrfs and ZFS recommended only for non-production systems. With this release, ext4 is no longer recommended, because of differences in how it handles long file names compared with other Ceph components.

Why is Ceph important?

The reasons why Ceph is important could likely fill a book. Ceph combines all types of storage with practically any means of interacting with storage. Its components are built on RADOS, the "reliable autonomic distributed object store," which exposes data programmatically through librados, over REST through the RADOS Gateway, as block devices through RBD, and as a POSIX-compatible file system through CephFS. Ceph can adapt to the needs of your organization by providing the access methods required to integrate into your existing public, private, or hybrid cloud deployment, and into the apps that run inside your existing cloud infrastructure.
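As one illustration of the block interface, the python rbd bindings can create and write to a RADOS block device image. The sketch below assumes the same hypothetical cluster as above and a pool named 'rbd'; the image name and size are also assumptions.

```python
import rados
import rbd

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
ioctx = cluster.open_ioctx('rbd')  # pool name is an assumption

# Create a 4 GiB block device image, then write to it like a raw disk.
rbd.RBD().create(ioctx, 'myimage', 4 * 1024**3)
with rbd.Image(ioctx, 'myimage') as image:
    image.write(b'some data', 0)   # write at byte offset 0

ioctx.close()
cluster.shutdown()
```

The same image could then be mapped as a kernel block device or attached to a virtual machine, which is how Ceph commonly backs cloud infrastructure.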

SEE: 10 open source storage solutions that might be perfect for your company (TechRepublic)

The most pressing reason Ceph matters, and the attribute that differentiates it from the technologies that preceded it, is that Ceph nodes and storage clusters run on commodity hardware. While vendor solutions like Fujitsu's ETERNUS CD10000 utilize Ceph, it is entirely possible to build your own Ceph deployment without relying on a specific storage solution vendor.

Compare this to the relatively restrictive requirements of RAID deployments: high-performance RAID controllers are often exceedingly expensive, and how they function is often opaque. Some RAID solutions require all disks to be purchased from the controller vendor, even though the disks themselves are rebadged drives from the actual hard disk manufacturers, sold at several times the retail price of the manufacturer's own version.

What's wrong with RAID?

Aside from the cost concerns of vendor lock-in, the fact remains that RAID has not scaled well to modern storage. Much of the design of RAID 5 and RAID 6 dates to the late 1980s and early 1990s, when hard disk drives did not generally exceed 2 GB.

While drive capacities have increased exponentially, the speed at which data can be transferred from drives has not kept pace. The time needed to rebuild after a single drive failure, combined with degraded performance during the rebuild and the risk of a further drive failing before it completes, has led technology pundits and device vendors to strongly advise against using RAID for mission-critical data.
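Back-of-the-envelope arithmetic shows why. Assuming an illustrative 10 TB drive rebuilt at a sustained 150 MB/s (both figures hypothetical), a single-drive rebuild occupies most of a day, during which the array runs degraded and exposed:

```python
# Hypothetical figures: 10 TB of data, 150 MB/s sustained rebuild throughput.
capacity = 10 * 10**12        # bytes on the failed drive
throughput = 150 * 10**6      # bytes per second of rebuild bandwidth
hours = capacity / throughput / 3600
print(f"RAID rebuild of one drive: ~{hours:.1f} hours")  # ~18.5 hours
```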

Ceph suffers from practically none of these issues, in part because of the sheer number of disks involved: a given Ceph deployment can have thousands of drives. This allows recovery to happen in parallel, as new copies of data are distributed across hundreds of drives. The CRUSH algorithm Ceph uses for data placement does not depend on synchronizing stripes of data across disks or calculating parity, and Ceph does not rely on identical drives the way RAID does, which matters because drives produced in the same batch by the same manufacturer are more likely to fail around the same time.
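Using the same hypothetical figures, the effect of parallel recovery is easy to see: if 100 drives each re-replicate a share of the lost data, the recovery window shrinks from hours to minutes. The drive count and throughput here are assumptions for illustration, not measurements from a real cluster:

```python
# Same hypothetical 10 TB of lost data, re-replicated in parallel.
lost = 10 * 10**12            # bytes to recover
per_drive = 150 * 10**6       # bytes per second per participating drive
drives = 100                  # assumed number of drives taking part
minutes = lost / (per_drive * drives) / 60
print(f"Parallel recovery: ~{minutes:.1f} minutes")  # ~11.1 minutes
```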

What's your view?

Do you plan to adopt Ceph in your organization? Is the rebuild time of RAID arrays a concern for you? Share your thoughts in the comments.
