Data Centers

Building an SSD strategy for your data center

As SSDs get cheaper and new solutions proliferate, admins are looking for ways to leverage the technology to improve storage performance. Here are some factors to consider.

 

Image: iStockphoto.com/ludinko
 

Like you, I’ve been excited to see solid state drives (SSDs) come down in price while the options for using this high-performance storage resource multiply. How best to use SSDs is a question many of us are facing, and the technology is moving very fast. Plenty of data center administrators and decision makers have seen excellent performance gains from SSDs in end-user computing devices, but increasing storage performance in the data center is a much more involved process.

SSD options

One option is to leverage the hypervisor, with the new vSphere Flash Read Cache feature that ships with vSphere 5.5. Looking deeper into the VMware lineup, the upcoming VSAN feature may also be an option. Much like Flash Read Cache, VSAN uses SSDs as a caching layer (it is not a tiering solution by itself) across nodes, presenting a logical storage volume built from each host’s local storage. VSAN requires some amount of SSD in each host and functions as a shared storage system for VMware VMs. Categorically, VSAN is a hyper-converged infrastructure technology: it combines the compute role of servers (CPU/memory) with the role of shared, logically pooled storage. On the Hyper-V side, Windows Server 2012 R2 introduces a true tiering solution with Storage Spaces.
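To make the caching idea concrete, here is a minimal simulation (my own sketch, not VMware’s implementation) of how a flash read cache behaves: hot blocks are copied to flash on first access and served from there afterward, while the authoritative copy stays on rotating disk. The capacities, latencies, and access skew are illustrative assumptions.

```python
import random
from collections import OrderedDict

CACHE_BLOCKS = 10_000         # assumed flash cache capacity, in blocks
TOTAL_BLOCKS = 100_000        # assumed back-end volume size, in blocks
HDD_MS, SSD_MS = 8.0, 0.1     # illustrative per-read latencies (ms)

cache = OrderedDict()         # LRU order: most recently used at the end
hits = misses = 0
total_ms = 0.0

random.seed(42)
for _ in range(200_000):
    # Assume 90% of reads target the hottest 10% of blocks.
    if random.random() < 0.9:
        block = random.randrange(TOTAL_BLOCKS // 10)
    else:
        block = random.randrange(TOTAL_BLOCKS)

    if block in cache:                    # cache hit: served from flash
        hits += 1
        total_ms += SSD_MS
        cache.move_to_end(block)
    else:                                 # miss: read HDD, copy to cache
        misses += 1
        total_ms += HDD_MS
        cache[block] = True
        if len(cache) > CACHE_BLOCKS:
            cache.popitem(last=False)     # evict least recently used

print(f"hit rate: {hits / 200_000:.1%}")
print(f"avg read latency: {total_ms / 200_000:.2f} ms")
```

The takeaway: a read cache pays off when it is sized to cover the hot working set; shrink CACHE_BLOCKS well below the hot set and the hit rate collapses.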

Aside from the VMware offerings, a slew of storage systems leverage SSDs today. Many mix SSDs with traditional rotating disks and offer varying levels of features and sophistication for accelerating workloads by applying SSDs where they are needed. It is important to note how these implementations differ, because comparisons among them can be confusing. I haven’t named the players in this space because each has its own strengths that differentiate it from the others; what data center administrators need is performance improvement at economics that fit.
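For contrast with caching, here is a rough sketch of what a tiering system does: it tracks access counts per extent over a measurement period, then moves (rather than copies) the hottest extents onto the SSD tier. The extent IDs, tier size, and rebalancing logic are made up for illustration and don’t reflect any particular vendor’s algorithm.

```python
from collections import Counter

SSD_TIER_EXTENTS = 2            # assumed SSD tier capacity, in extents
access_counts = Counter()       # per-extent I/O counts for the period

def record_io(extent_id: int) -> None:
    access_counts[extent_id] += 1

def rebalance(current_ssd: set) -> set:
    """Move (not copy) the hottest extents onto the SSD tier."""
    hottest = {e for e, _ in access_counts.most_common(SSD_TIER_EXTENTS)}
    print("promote to SSD:", sorted(hottest - current_ssd))
    print("demote to HDD: ", sorted(current_ssd - hottest))
    return hottest

# Simulate a measurement period in which extents 7 and 3 run hot.
for extent in [7, 7, 7, 3, 3, 3, 5, 1, 7, 9, 7]:
    record_io(extent)

ssd_extents = rebalance(current_ssd={1, 2})
```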

There are also a few things to avoid. Chief among them: don’t add SSDs blindly. For one thing, there are a number of decision points among SSD types. SSDs come in consumer and enterprise classes, and data center implementations should stay in line with the enterprise-class models. Also consider the disk controllers: they may support SSDs from a bus and interface standpoint yet fail to use the disks intelligently. That can lead to uneven wear and accelerated failure, as well as a wasted investment if the extra performance isn’t used effectively.
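Endurance is one concrete reason the class distinction matters: drives are rated for total terabytes written (TBW), and a quick back-of-the-envelope check shows how fast a heavy write load burns through a low rating. The TBW and write figures below are placeholders, not specs for any particular drive; check actual spec sheets.

```python
# Back-of-the-envelope SSD endurance check with illustrative numbers.
def years_of_life(rated_tbw: float, writes_gb_per_day: float) -> float:
    """Expected lifetime in years given the rated terabytes written."""
    return (rated_tbw * 1000) / writes_gb_per_day / 365

daily_writes_gb = 400           # assumed steady write load per drive

for label, tbw in [("consumer-class", 150), ("enterprise-class", 3500)]:
    print(f"{label}: ~{years_of_life(tbw, daily_writes_gb):.1f} years")
```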

The SSD roadmap

So what’s one to do? Here is the thought process I would go through. The first step is to identify what will benefit from the SSDs. My natural choice today is the virtual infrastructure: the consolidated and portable nature of VMware and Hyper-V environments makes offering a high-performance disk tier to VMs an easy decision.

The next question I’d tackle is whether I want a native hypervisor solution providing the SSD benefits or a storage system. The choice carries big implications for host configuration and, potentially, storage system cost. Outfitting every host with an enterprise-class SSD and leveraging technologies like VSAN or vSphere Flash Read Cache would be a relatively straightforward process. Choosing the storage system route could be a larger investment, but it may introduce true tiering rather than caching alone.
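A quick cost model can frame that decision. The sketch below compares a per-host SSD plus a per-host hypervisor feature license against a one-time hybrid array purchase; every number in it is a placeholder assumption to be replaced with real quotes.

```python
# A rough cost model (not a quote!) for the two routes discussed above.
def host_ssd_route(hosts: int, ssd_cost_per_host: float,
                   license_per_host: float) -> float:
    """Per-host SSD plus a hypervisor feature licensed per host."""
    return hosts * (ssd_cost_per_host + license_per_host)

def array_route(array_base: float, ssd_shelf: float) -> float:
    """A hybrid array purchased once and shared by all hosts."""
    return array_base + ssd_shelf

hosts = 8
print("host-SSD route:", host_ssd_route(hosts, 1200, 500))
print("array route:   ", array_route(25000, 8000))
```

Note the shapes of the two curves: the host route scales linearly with host count, while the array is a step cost shared by all hosts, so the crossover depends heavily on how many hosts you run.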

From there, I’d measure what difference the various options make on the workloads that give me the most grief. We have to try these new technologies before we make a significant investment in them.
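Trying before buying can start as simply as measuring read latency on the storage in question before and after SSD is introduced. Here is a minimal random-read probe; the test file path is a hypothetical example, and the file should be much larger than host RAM (or caches dropped first) so reads actually hit the disk rather than memory.

```python
import os, random, time

TEST_FILE = "/mnt/datastore/testfile.bin"   # hypothetical path
BLOCK = 4096                                # 4 KB random reads
READS = 1000

fd = os.open(TEST_FILE, os.O_RDONLY)
size = os.fstat(fd).st_size
latencies = []

random.seed(1)
for _ in range(READS):
    # Pick a random block-aligned offset and time one read.
    offset = random.randrange(size // BLOCK) * BLOCK
    start = time.perf_counter()
    os.pread(fd, BLOCK, offset)
    latencies.append((time.perf_counter() - start) * 1000)
os.close(fd)

latencies.sort()
print(f"median: {latencies[READS // 2]:.2f} ms, "
      f"p99: {latencies[int(READS * 0.99)]:.2f} ms")
```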

Your take

That is the recommendation today. It’s very possible that in a few months or years, the landscape will materially change (again). How are you addressing the influx of SSDs in your data center? What are you trying to accomplish or avoid? Share your comments below.

 

About

Rick Vanover is a software strategy specialist for Veeam Software, based in Columbus, Ohio. Rick has years of IT experience and focuses on virtualization, Windows-based server administration, and system hardware.

2 comments
sanman4

The big challenge is truly understanding where flash/SSDs can add the most value, as the technology is still expensive relative to HDDs - despite what the vendors claim. If you only have a handful of servers, then adding flash cards to the servers will work fine and likely have good ROI. If you have dozens (or 100s) of servers that need shared access to storage, or you already have SANs/NAS installed, then going the networked storage route will be most cost-effective. Then the question is "how much flash is justified?" To answer this, users need to understand the storage I/O profile of their applications and create a workload model that can be used to test the various hybrid and all-flash products before making deployment decisions. Users should check out workload modeling and performance validation solutions from companies like Load Dynamix or SwiftTest. Having the right performance requirements data for your specific applications can easily cut your storage costs in half.
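To put a number on the sizing question the commenter raises: once monitoring tells you what fraction of your data is hot, "how much flash is justified?" becomes simple arithmetic. The figures below are illustrative only.

```python
# Sketch: size the flash tier to the hot working set plus headroom.
def flash_gb_needed(total_data_gb: float, hot_fraction: float,
                    growth_headroom: float = 1.3) -> float:
    """Flash capacity to cover the hot working set, with headroom."""
    return total_data_gb * hot_fraction * growth_headroom

# e.g., 50 TB of data where monitoring shows ~8% is hot on any day
print(f"{flash_gb_needed(50_000, 0.08):,.0f} GB of flash")
```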