Today’s virtual environments are very different from what they were at the beginning of the virtual revolution. In the early days, businesses looked to virtualization for server consolidation; companies wanted to reduce the number of servers in the data center in order to cut hardware acquisition costs and energy bills.

But then a funny thing happened along the way. As hypervisor vendors added more features, virtual environments took on a life of their own and have become the first choice for most workloads, massively changing the data center dynamic. Organizations have steadily virtualized more and more over time, further shifting that dynamic. Early on, “non-critical” workloads were placed on virtual machines to save money, but as time went on, organizations moved more and more to the new environment, eventually trusting it with even their largest, mission-critical workloads.

You may wonder what I mean by the changing data center dynamic. There was a day when almost all servers used local storage, which performed very well but did not enable some of the capabilities provided by virtual environments, such as workload migration. As workloads shifted to virtual environments and administrators discovered those capabilities, shared storage was implemented to enable them.

It’s with shared storage that the dynamics really shifted. After all, a typical x86 server saw utilization of only 5 to 15 percent. When storage was local, that single server and its local applications (often just a single application) got to enjoy the full capacity, performance and throughput of the storage, and didn’t have to share those resources with other servers and other applications.

That’s all changed. Now, storage has to be designed with many different, simultaneous use cases in mind, which has made storage the bottleneck for some emerging use cases, such as VDI and big data. The discussion has most certainly shifted from storage capacity alone to balancing capacity with overall storage performance. Further, organizations can no longer simply throw more spindles at a problem. We’ve moved beyond that!

That was a long introduction to a company I’ve been talking with about one of its solutions. Tegile is, at least publicly, new to the game, but the company has been working behind the scenes for quite some time with a set of beta partners and has created a pretty compelling solution for organizations that need to strike that balance between capacity and performance.

According to the company’s literature, its Zebi arrays boast “5X the performance and up to 75% less capacity required than legacy arrays.” The magic is in the way the Zebi integrates solid state disks (SSDs) into the array. Whereas many other manufacturers offer an SSD performance tier, the Zebi array combines SSDs and traditional storage as a single tier and, behind the scenes, uses a combination of inline compression and deduplication to reduce the amount of space consumed by the data. Because deduplication happens inline rather than as a post-process, the savings are immediately evident.
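
To make the inline approach a little more concrete, here is a minimal Python sketch of how inline block-level deduplication works in general. To be clear, this is an illustration of the technique, not Tegile’s actual implementation; the block size, hashing scheme and data structures are assumptions made purely for demonstration.

```python
import hashlib

BLOCK_SIZE = 4096  # assumed block size, purely for illustration


class InlineDedupStore:
    """Toy inline-dedup store: each unique block is written once;
    duplicates are recorded as references to the existing block."""

    def __init__(self):
        self.blocks = {}      # fingerprint -> block data (stands in for the disk)
        self.file_maps = {}   # name -> ordered list of fingerprints

    def write(self, name, data):
        fingerprints = []
        for i in range(0, len(data), BLOCK_SIZE):
            block = data[i:i + BLOCK_SIZE]
            fp = hashlib.sha256(block).hexdigest()
            # Dedup happens here, on the write path (inline),
            # not in a later cleanup pass (post-process).
            if fp not in self.blocks:
                self.blocks[fp] = block
            fingerprints.append(fp)
        self.file_maps[name] = fingerprints

    def stored_bytes(self):
        return sum(len(b) for b in self.blocks.values())


store = InlineDedupStore()
store.write("vm1.vmdk", b"A" * 8192 + b"B" * 4096)   # 12 KB logical
store.write("vm2.vmdk", b"A" * 8192)                 # 8 KB logical, all duplicate blocks
print(store.stored_bytes())                          # 8192 bytes physically stored
```

The point is simply that duplicate blocks are caught on the write path itself, so the space savings appear the moment the data lands rather than after a later cleanup job.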

I’m going to skip the rest of the technical discussion here because I plan to take a deep dive into the Zebi array in an upcoming article in the Data Center blog here at TechRepublic.  For this blog, I want to focus on why this technology should be of interest to CIOs.

As your virtual environment changes and, in particular, as you begin to investigate virtual desktop infrastructure (VDI), consider the significant storage implications that such initiatives carry. The storage demands are far more variable than one would imagine, and in certain situations, such as boot time and login time, boot or login storms can overwhelm underperforming storage, leaving users unable to get a desktop for minutes or even hours. If that is a regular occurrence, the entire initiative will be seen as a failure. A hybrid storage system such as the Zebi can tackle these kinds of issues with ease, thanks to the very high IOPS that solid state storage delivers.
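
To put rough numbers on why a boot storm hurts, here is a back-of-the-envelope sketch. Every figure in it (desktop count, per-desktop boot IOPS, per-device IOPS) is an assumption I’ve made for illustration, not a published spec from Tegile or anyone else.

```python
# Back-of-the-envelope boot-storm math; every figure below is an assumption.
desktops = 500           # VDI desktops booting in the same window
iops_per_boot = 50       # assumed IOPS each desktop demands while booting
hdd_iops = 150           # rough IOPS from a single 10K RPM spindle
ssd_iops = 20000         # rough IOPS from a single enterprise SSD

demand = desktops * iops_per_boot
print(f"Aggregate demand: {demand:,} IOPS")            # 25,000 IOPS
print(f"Spindles needed:  {demand / hdd_iops:.0f}")    # ~167 disks
print(f"SSDs needed:      {demand / ssd_iops:.1f}")    # ~1.3 drives
```

Even with generous assumptions, spinning disks alone need to be dramatically overprovisioned to absorb that spike, which is exactly where a hybrid array earns its keep.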

Further, as more and more data is stored, it becomes increasingly difficult to back it up within a reasonable window. The Zebi appliance includes integrated snapshot and remote replication features, meaning that the backup “window” can become a thing of the past, since the appliance constantly replicates data to partner units.
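
A quick, hedged calculation shows why that window keeps getting harder to hit; the data size and throughput below are purely illustrative assumptions.

```python
# Why the backup window breaks down as data grows; figures are assumptions.
data_tb = 20              # data to protect, in TB
throughput_mb_s = 400     # sustained backup throughput, in MB/s

seconds = (data_tb * 1024 * 1024) / throughput_mb_s
print(f"Full backup time: {seconds / 3600:.1f} hours")   # ~14.6 hours
```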

As an aside, the Zebi also includes powerful compression and deduplication capabilities that can make it a perfect solution even for those who need a ton of space. The company indicates that compression alone can yield a 50% reduction in the data footprint, and that combining compression with deduplication can deliver a 3x to 5x improvement in effective capacity.
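
Using the company’s quoted ratios, here is a quick sketch of what that means in practice (the raw capacity figure is just an example, not a specific Zebi model):

```python
# Effective capacity implied by the quoted data-reduction ratios
# (raw_tb is an arbitrary example figure, not a specific Zebi model).
raw_tb = 10

compression_only = raw_tb / (1 - 0.50)   # 50% footprint reduction ~= 2x effective
dedup_low, dedup_high = raw_tb * 3, raw_tb * 5

print(f"Compression alone:   ~{compression_only:.0f} TB usable")
print(f"Compression + dedup: ~{dedup_low:.0f} to {dedup_high:.0f} TB usable")
```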

From a cost perspective, the Zebi is surprisingly reasonable, too. The list price for the SS1100, a single-controller array with 14 TB of raw capacity, is only $16,000. For those who like per-TB metrics, that translates to around $1,150 per TB, and remember, this is raw capacity. The company indicates that you can expect 3 to 5 times that in effective capacity when using compression and deduplication (which I will talk about in the follow-up to this piece). As such, this could be a very affordable array!
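
Here’s how that per-TB math shakes out, using the list price and raw capacity above plus the company’s claimed 3x to 5x data reduction:

```python
# Per-TB cost for the SS1100: raw, and effective with the claimed data reduction.
list_price = 16000      # USD list price quoted above
raw_tb = 14             # raw capacity in TB

print(f"Raw capacity:  ${list_price / raw_tb:,.0f} per TB")             # ~$1,143
for ratio in (3, 5):
    effective = raw_tb * ratio
    print(f"{ratio}x reduction: ${list_price / effective:,.0f} per TB")  # ~$381 / ~$229
```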

If you’re looking at storage, don’t overlook Tegile’s Zebi.

Summary

The world of storage is getting very exciting! Tegile is allowing me remote access to one of its lab-based arrays so that I can put one through the wringer. I plan to do so and will follow up in the Data Center section in a few weeks.