As virtualisation technology spreads through the datacentre, the race is on to develop ways of sharing out data to virtual servers and desktops in large numbers.
For years, servers and storage more or less managed to keep pace with each other. As faster processors appeared disks got bigger and quicker, and new ways of sharing storage were developed, such as Network Attached Storage (NAS) and Storage Area Networks (SANs). Then virtualisation arrived, followed by the cloud. This caused a once-amicable relationship to break down as hundreds, if not thousands, of virtual machines began to compete for the same storage resources.
Of course, NAS appliances and SANs have adapted to cope: SSD technology has been introduced to boost performance and reduce power consumption, while fibre-channel and iSCSI enhancements have also helped to ramp up performance. However, neither NAS nor SANs were really designed to share out data to virtual servers and desktops in large numbers. Nor were they built to keep pace with virtual machines that can be provisioned in seconds and moved between host machines at the drop of a hat.
Another issue is the cost of building a SAN, not to mention the sheer complexity of provisioning and managing such solutions, which are typically handled at the LUN rather than the virtual disk level.
The race, therefore, is on to virtualise storage to make it a better fit for the software-defined datacentre, as well as more affordable and easier to manage. All the big names are busy doing something -- including virtualisation leader VMware, which recently released its own Virtual SAN (VSAN) technology, built into the kernel of its ESXi hypervisor. But there are other solutions, and for this feature we opted to examine two quite different approaches to the problem: one from Nutanix, the other from Tintri.
Nutanix Virtual Computing Platform
• Compute and storage in one rack-mount appliance
• Simple scalability at node and appliance level
• 10GbE connectivity
• Rapid deployment
• Cross-platform hypervisor support
• Potentially disruptive upgrade for existing virtualisation users
Based on industry-standard multi-node server hardware from Supermicro, Nutanix adds a mix of SSD and HDD storage to its appliance plus its own clustering software to deliver both a platform for VMs to run on and a fault-tolerant virtual SAN to go with them.
From $74,250 (£43,963) for a 3-node appliance
A relative newcomer to the virtualisation market, Nutanix sells converged infrastructure appliances containing just about everything you need to run virtual machines in terms of both processing and storage. Moreover, despite being based on industry-standard hardware it's a lot more than just the sum of those parts: a nifty layer of clustering software enables direct-attached SATA storage inside the Nutanix boxes to be pooled, virtualised and shared across multiple hypervisors, eliminating the need for a SAN or NAS solution. According to Nutanix, this approach makes for a simpler and more affordable virtualisation platform without sacrificing performance, scalability or availability.
Basic building blocks
Referred to as "blocks", Nutanix appliances are based on commodity Supermicro 2U rackmount hardware capable of accommodating one to four processing nodes, depending on the model. Each of these nodes is, in effect, a self-contained server with its own Intel multicore processors and RAM for hosting a vSphere, Hyper-V or KVM hypervisor and client VMs.
It's worth noting, however, that there is no backplane as such. Indeed, the only available link between servers is via the on-board network interfaces, with two 10GbE network ports per node requiring a suitable top-of-rack switch to provide the connectivity. You also need at least three nodes to make a viable Nutanix cluster, so most customers start out with a four-node NX-1000 or NX-3000 series chassis equipped with either three or, more probably, the maximum four processing nodes.
Opt for the NX-1000 series and each node will have two 6-core Intel Xeon E5-2620 Sandy Bridge processors plus up to 128GB of memory, enabling a single node to handle an estimated 50 VMs. Go for the NX-3000 and the nodes have faster 8-core or 10-core Ivy Bridge Xeons plus up to 512GB of RAM for 115 virtual machines per node (although, as with all things virtual, that's just an estimate).
Storage to go
When the appliance was first launched, Fusion-io adapters were used to accelerate its direct-attached storage, but these have since been dropped in favour of Intel S3700 series SSDs. These simply plug in at the front of the chassis together with conventional SATA hard disks, with a set number of SSDs and HDDs allocated to each node -- all direct-attached without using conventional RAID adapters.
The NX-1000/3000 series employs 1TB hard disks (four per node), with a single 400GB SSD allocated to each node on the NX-1000 and a pair of 400/800GB SSDs per NX-3000 node. Higher-capacity 4TB HDDs are used on the NX-6000 series, to support VMs running SQL databases and other data-heavy applications; however, the trade-off is that this series can only carry two processing nodes.
There's also a single node NX-7000 series aimed expressly at virtual desktop infrastructure (VDI) deployments, with two 400GB SSDs and 6TB of HDD storage plus support for Nvidia Grid GPU adapters and Teradici PCoIP APEX cards.
The software magic
Tying all this otherwise unremarkable hardware together is clustering software implemented in the form of a Nutanix Controller hosted in a preconfigured and imaged VM (the CVM) ready to run on the chosen hypervisor that's installed on each node.
Based on several open-source technologies, the CVM operates independently of the hypervisor, managing all the data I/O through the Nutanix Distributed File System (NDFS) which, in turn, presents the shared storage available across the cluster as either virtual NFS or iSCSI resources depending on the platform involved.
Redundancy is provided by keeping two copies of every block within NDFS, enabling the cluster to carry on working in the event of the failure of one of the nodes and its storage; an even higher level of redundancy (tolerating the simultaneous failure of two nodes) is planned for release soon. SSD capacity is also pooled across all of the nodes in the cluster, with I/O served via the local CVM, creating a fully persistent data tier in flash rather than just using SSD as a simple cache.
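The two-copies-per-block scheme described above amounts to replica placement across distinct nodes, so that losing any single node still leaves a surviving copy. The sketch below is a simplified illustration of that principle, not Nutanix's actual NDFS code; the node names and placement function are invented for the example:

```python
import itertools

def place_replicas(block_id, nodes, rf=2):
    """Pick rf distinct nodes to hold copies of a block, so the loss
    of any single node still leaves at least one surviving replica."""
    if len(nodes) < rf:
        raise ValueError("need at least rf nodes for fault tolerance")
    # Simple deterministic spread: map the block ID onto a node ring
    # and take the next rf nodes around it.
    start = hash(block_id) % len(nodes)
    ring = itertools.islice(itertools.cycle(nodes), start, start + rf)
    return list(ring)

nodes = ["node-a", "node-b", "node-c", "node-d"]
replicas = place_replicas("vm1-disk0-block42", nodes)
assert len(set(replicas)) == 2  # two copies, always on two different nodes
```

This also illustrates why a minimum of three nodes is needed for a viable cluster: with two copies of every block, a third node is required to re-protect data after a failure.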
Deduplication is another standard feature of the NDFS, along with the ability to take snapshots as required, with everything monitored and controlled via Prism Central. The proverbial 'single pane of glass', this web-based management console can be used to manage multiple Nutanix clusters, allowing administrators to allocate storage to their VMs without the need for the usual SAN management skills. Likewise the Prism console lets managers and users see just how well the technology is performing with visibility all the way down to the individual VM level.
Tintri VMstore
• Rapid deployment
• Compact format with multiple levels of redundancy
• 10GbE connectivity
• Cross-platform hypervisor support
• Storage provisioned, monitored and managed at the VM level
• Storage only -- no hosting of VMs
An easy-to-manage virtual SAN appliance, Tintri's VMstore combines SSDs for maximum performance with sophisticated software to deliver enterprise-class storage in a format that's uniquely ready for virtual machine use.
From $74,000 (£44,246) for the VMstore T620
Billed as 'Zero Management Storage', Tintri VMstore is a rack-mount appliance targeting the virtualisation market -- just like the Nutanix Virtual Computing Platform. Unlike the Nutanix appliance, however, VMstore is designed purely to address the storage part of the equation with no compute power to host VMs. This omission may put some customers off, but others will see it as an advantage, enabling the VMstore to be added to an existing setup without having to migrate VMs or rip-and-replace existing host servers. Also, by concentrating on storage, Tintri has come up with a particularly fast, scalable and robust solution that addresses many of the shortcomings of traditional SAN/NAS alternatives, effectively delivering a shared datastore ready to plug into the LAN.
Inside the box
The Tintri VMstore is built from industry-standard hardware components, packaged as a ready-to-use rackmount appliance either 3U or 4U high, depending on the model. We looked at the 4U VMstore T600 series, which currently comprises two models fitted with dual controllers and redundant hot-swappable power supplies as standard for maximum availability.
The controllers are hot-swappable and feature dual Xeon E5 multicore processors (the exact spec depends on the model involved) supported by up to 64GB of RAM and 1GB of NVRAM. Ethernet rather than Fibre Channel is used to connect the appliance to its client VMs, with dual 10GbE ports on the high-end T650, which is aimed at large datacentres (Tintri reckons it can support thousands of VMs), while the T620 comes with Gigabit ports that can be upgraded to 10GbE for mid-size and branch office deployments.
A pair of Gigabit ports are also provided on each appliance for replication, along with two more dedicated to management duties.
Capacity, capacity, capacity
On the storage front, VMstore appliances all come fully populated with a mix of SSD and traditional SATA hard drives; the SSD/HDD split and overall capacity vary by model.
On the T620, for example, six 240GB SSDs are installed along with eighteen 1TB HDDs. This equates to a raw capacity of 19.44TB, although usable capacity is reduced to 13.5TB by the overheads of the RAID 6 protection applied to both the flash drives and hard disks.
Step up to the T650, however, and the SSDs double in capacity to 480GB and go up to nine in number; that leaves fifteen hard disks, which are also larger-capacity 3TB drives. Raw capacity on this model is therefore an impressive 49.32TB, of which 33.5TB is usable.
Finally there's a VMstore T540 model, which delivers the same 13.5TB of usable capacity as the T620 but in a smaller 3U chassis fitted with eight 300GB SSDs, eight 3TB HDDs and 10GbE connectivity as standard.
More software magic
As with Nutanix, it's the software on the VMstore (currently Tintri OS 2.0) which turns this otherwise fairly ordinary storage array into something special. It achieves this by doing away with the need to configure disk arrays or worry about LUNs or volumes. Instead, it simply presents the pooled storage inside the appliance as somewhere for virtual machines to create their virtual disks (vDisks, in virtualisation jargon).
Plug the VMstore appliance into a VMware network and the Tintri OS simply looks like a ready-made NFS datastore which, as of April, can also be used by Red Hat Enterprise Virtualization (RHEV) customers. Microsoft users aren't left out either: Hyper-V support, announced while we were testing the product, uses the SMB 3.0 protocol to make storage available to Hyper-V VMs.
Regardless of the client platform, a protocol manager within the Tintri OS intercepts and processes the I/O calls behind the scenes, logging the source of the request to provide visibility and management down to the individual VM level. The I/O requests are then handled on a flash-first basis by the custom Tintri file system, the aim being to service as many requests as possible - both read and write - using low-latency SSD. Data is also deduped in-line before being stored and compressed, in order to cram as much information into flash as possible. Similarly, replication of VMs only requires compressed, deduplicated data to be exchanged between systems, reducing the network burden - by as much as 95 percent, according to Tintri.
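The inline deduplication step described above rests on the familiar content-hashing principle: identical data blocks are stored once and merely referenced thereafter. The following is a minimal sketch of that idea, not Tintri's implementation; the block size and store layout are invented for illustration:

```python
import hashlib

class DedupeStore:
    """Store fixed-size blocks once each, keyed by content hash."""

    def __init__(self, block_size=4096):
        self.block_size = block_size
        self.blocks = {}    # content hash -> raw block
        self.logical = 0    # bytes written by clients
        self.physical = 0   # bytes actually stored

    def write(self, data):
        refs = []
        for i in range(0, len(data), self.block_size):
            chunk = data[i:i + self.block_size]
            key = hashlib.sha256(chunk).hexdigest()
            if key not in self.blocks:   # new content: store it once
                self.blocks[key] = chunk
                self.physical += len(chunk)
            self.logical += len(chunk)   # client sees the full write
            refs.append(key)
        return refs                      # handles for later reads

store = DedupeStore()
store.write(b"A" * 8192)   # two identical 4KB blocks -> stored once
store.write(b"A" * 4096)   # duplicate of existing content -> nothing new stored
assert store.logical == 12288 and store.physical == 4096
```

The same logical-versus-physical gap is what makes replication cheap: only the deduplicated, compressed blocks need to cross the wire, hence Tintri's claimed reduction of up to 95 percent in network traffic.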
The Tintri OS also handles snapshotting and, because it works at the VM level, this can also be handled on a per-VM basis, rather than having to snapshot an entire LUN as with a traditional SAN. Cloning technology is also built in, enabling clones to be created incredibly quickly.
Another benefit is automatic migration of VMs between flash and HDD, to prevent variations in I/O patterns disrupting other VMs. Individual VMs can also be restricted to flash and dedicated I/O lanes assigned for performance reasons.
Management is via the inevitable web-based console; a recent addition, Tintri Global Center, can manage up to 32 VMstore appliances and their associated VMs as one. Many of the features are also accessible from VirtualCenter and other platform consoles, making for a flexible solution that's quick to assimilate and very easy to live with.
The Nutanix and Tintri appliances both address the shortcomings of traditional SAN/NAS storage, employing a similar mix of SSD and SATA disk technologies assisted by a layer of custom virtualisation software to better integrate storage resources into the virtual machine world. As well as an easy-to-manage virtual SAN, however, the Nutanix appliance also provides the compute power to host virtual machines -- arguably making it a better choice for customers looking for a complete platform for a new application, project or datacentre.
Conversely, customers who already have a preferred hosting platform, and those simply looking to consolidate and virtualise storage, are unlikely to want the disruption involved in also migrating hypervisors and VMs to new hardware. Which is where Tintri's VMstore scores: it takes only minutes to deploy and, because it delivers up storage predigested and ready for immediate VM consumption, adds very little to the management burden in the process.
So, Nutanix for new projects and Tintri for existing ones -- although there are other performance, capacity and functionality differences to factor in before deciding. There are plenty of alternative solutions, of course, and nothing stops you deploying the two products covered here side by side.