
The newly announced Fujitsu ETERNUS CD10000 storage system can scale up to 224 storage nodes for a maximum raw capacity of 56 PB, quite a sizable chunk of data. The actual usable capacity depends on how data replicas are configured. The ETERNUS uses replication instead of RAID, which would be impractical on a system that, when fully expanded, contains upwards of 13,000 disk drives.
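To make the replication trade-off concrete, here is a rough back-of-envelope sketch (not Fujitsu's sizing tool): with Ceph-style replication, usable capacity is roughly the raw capacity divided by the number of replicas kept, before accounting for journals and free-space headroom. The replica counts below are illustrative assumptions, not CD10000 defaults.

```python
# Rough sketch: usable capacity of a replicated cluster is approximately
# raw capacity divided by the replica count (overhead ignored).
def usable_capacity(raw: float, replicas: int) -> float:
    """Approximate usable capacity for a given replica count."""
    return raw / replicas

raw_pb = 56.0  # maximum raw capacity of a fully expanded CD10000, in PB
for replicas in (2, 3):  # illustrative replica counts, not product defaults
    print(f"{replicas} replicas: ~{usable_capacity(raw_pb, replicas):.1f} PB usable")
```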
Assembling the nodes
The ETERNUS system connects individual nodes using 40 Gb InfiniBand links, with a 10 GbE front-end interface. Resources can be accessed using KVM, Swift, and S3. Four individual node types can be part of an ETERNUS system; the management node is a required component, and at least four other nodes must be attached to it for a basic installation. For example, an installation could have one management node, two storage performance nodes, and two storage capacity nodes.
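Because the front end speaks the S3 protocol, object access can look like any other S3 client. The sketch below uses the boto library against a hypothetical Ceph Object Gateway endpoint; the host name, bucket name, and credentials are placeholders rather than values documented for the CD10000.

```python
# Minimal sketch of object access over an S3-compatible Ceph gateway using boto.
# All connection details here are illustrative assumptions.
import boto
import boto.s3.connection

conn = boto.connect_s3(
    aws_access_key_id="ACCESS_KEY",          # placeholder credentials
    aws_secret_access_key="SECRET_KEY",
    host="ceph-gateway.example.com",          # hypothetical gateway host
    is_secure=False,
    calling_format=boto.s3.connection.OrdinaryCallingFormat(),
)

bucket = conn.create_bucket("demo-bucket")    # create a bucket on the cluster
key = bucket.new_key("hello.txt")
key.set_contents_from_string("stored via the S3 API")
print(key.get_contents_as_string())           # read the object back
```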
Management node
The management node collects the logs of the rest of the system. It is a required component of an installation, though the system can continue to operate if the management node fails. It contains one Intel Xeon CPU, 64 GB RAM, and four 2.5″ 900 GB SAS 10K drives: two for the OS and two for user data.
Basic storage node
The basic storage node contains two Xeon CPUs and 128 GB RAM. It has 16 2.5″ 900 GB SAS 10K drives, two of which are reserved for the OS, leaving a total raw data capacity of 12.6 TB. It also has a PCI Express SSD for caching.
Storage performance node
The storage performance node uses the same processor and RAM as the basic storage node, but adds a larger 800 GB PCI Express SSD and more SAS disks, for a total of 34.2 TB of raw storage capacity.
Storage capacity node
The storage capacity node uses the same processor and RAM as before, but packs in 14 900 GB SAS drives and 60 3.5″ 4 TB NL-SAS drives spinning at 7200 RPM, for a total raw capacity of 252.6 TB.
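The raw-capacity figures quoted for the three storage node types follow directly from their drive counts and sizes; the quick check below reproduces them (the performance node's drive count is inferred from its quoted total, since the text doesn't state it).

```python
# Quick check of the raw-capacity figures quoted above (decimal TB).
node_types = {
    # node type: list of (data drive count, drive size in TB)
    "basic storage node":       [(14, 0.9)],             # 16 SAS drives minus 2 for the OS
    "storage performance node": [(38, 0.9)],             # count inferred from the quoted 34.2 TB
    "storage capacity node":    [(14, 0.9), (60, 4.0)],  # SAS tier plus NL-SAS tier
}

for name, tiers in node_types.items():
    raw_tb = sum(count * size for count, size in tiers)
    print(f"{name}: {raw_tb:.1f} TB raw")
# prints 12.6 TB, 34.2 TB, and 252.6 TB, matching the figures above
```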
Using Ceph instead of RAID
Because of the sheer scale of the ETERNUS CD10000, the use of RAID for this type of storage device would be exceedingly difficult — issues with member disk failures and performance hits from rebuilds would unnecessarily slow down access to data in a production environment. With the rise of 4 TB and larger disk drives, the feasibility of the continued use of RAID is very much in doubt. Some commenters are calling this the end of the RAID era for server environments.
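To put the rebuild concern in perspective, a rough calculation: rebuilding a single failed 4 TB member at an assumed sustained rate of 100 MB/s keeps the array degraded for roughly half a day, and real-world rates under production load are often lower. The rate is an assumption for illustration, not a measured figure.

```python
# Back-of-envelope rebuild time for one 4 TB RAID member (assumed figures).
drive_bytes = 4 * 10**12      # 4 TB drive, decimal bytes
rebuild_rate = 100 * 10**6    # assumed 100 MB/s sustained rebuild throughput
hours = drive_bytes / rebuild_rate / 3600
print(f"~{hours:.1f} hours to rebuild one 4 TB drive")  # roughly 11 hours
```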
As such, the ETERNUS ships with Ceph, a distributed storage platform licensed under the LGPL. The Ceph project is developed by Inktank Storage, which was purchased by Red Hat in April 2014. Inktank was initially funded by DreamHost and Mark Shuttleworth.
Ceph reached its seventh major release, version 0.87, on October 29, 2014; notably, it isn’t at 1.0 yet, though it is being shipped as the underpinning of Fujitsu’s newest offering. Red Hat’s acquisition of Inktank lends Ceph considerably more credibility, and gives vendors confidence that software provided by a startup (and the developers behind it) won’t simply disappear, though this remains a concern even with open-source software.
Importantly, the release notes on the Ceph website for 0.87 state that “we do not yet recommend CephFS for production deployments”; that caveat applies only to the CephFS file system component, not to the entire software package. Ceph and OpenStack are being deployed increasingly widely, and are considered the new orthodoxy for organizations building out their own clouds. As it stands, the block and object storage components of Ceph are far more mature than the file system layer, and between milestone releases 0.80 and 0.87, CephFS made great strides toward becoming a more feature-complete and robust system for production deployment.
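To illustrate the object-storage layer described above as the more mature part of Ceph, here is a minimal sketch using the python-rados bindings that ship with Ceph. The pool name and configuration path are assumptions about a particular deployment, not CD10000 defaults.

```python
# Minimal sketch: store and read back a single object via librados (python-rados).
import rados

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")  # assumed config path
cluster.connect()
try:
    ioctx = cluster.open_ioctx("rbd")  # pool name is an assumption
    try:
        ioctx.write_full("greeting", b"stored as a RADOS object")  # write the object
        print(ioctx.read("greeting"))                              # read it back
    finally:
        ioctx.close()
finally:
    cluster.shutdown()
```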
What’s your view?
Is your organization migrating to OpenStack? Do you manage enough data to warrant a system as large as the Fujitsu ETERNUS CD10000? Do you have any reservations about Ceph? Let us know in the comments.