Data Centers

Can you get the measure of the strange, virtual world of datacentre scale-out and scale-up?

Server vendors are changing the way they look at datacentre resources...

To scale up or to scale out datacentre resources? That was the question. But now a bigger shift may be afoot in the way server vendors are tackling the issue, says Clive Longbottom.

In the beginning was the computer, and the computer was a mainframe. It represented a scale-up approach, with workloads being run on a set of highly defined resources. Then came the standard high-volume server.

Initially, a single application ran on each physical high-volume server, but first clustering and then virtualisation created a scale-out approach, where the need for more resources is dealt with simply by throwing more servers, storage and network at the problem.

This approach assumed that workloads needed a standard set of resources - and scale-out's shortcomings became apparent when it came to certain workload types that required more tuning of the available resources, such as CPU, network, storage and input/output (I/O).

[Image: The age of pure scale-up passed some time ago, and the rush for pure scale-out looks to be ending, so what next? Photo: Shutterstock]

There is a place in an IT architecture for both approaches, but a bigger change may be afoot in the way server vendors are tackling the problem.

Let's start with Cisco, which in 2009 launched its Unified Computing System (UCS) architecture - a modularised approach to Intel-based computing.

Combining rack-mounted blades with top-of-rack networking components and storage, along with VMware virtualisation, BMC systems management and a Microsoft Windows operating system, UCS was aimed at highly specific Windows-based workloads.

In other words, UCS was essentially a scale-up architecture - a kind of mainframe built from lower-price, not-quite-commodity hardware building blocks.

Dell, HP, IBM, Oracle and SGI have all done something similar, based on either modular datacentre components or container-based systems. In each case, their systems can be tuned to provide certain characteristics, dealing with different workloads in different ways - again, more of a characteristic of a scale-up system than a high-volume server scale-out approach.

Servers, storage and networking equipment are engineered by rack, row or container to provide a greater range of workload capabilities than can be achieved by simply assembling a collection of individual resources. Enabling virtualisation and private cloud-based elasticity drives up utilisation rates and drives down energy costs.

However, there is a synergy between scale-up and scale-out. End-user organisations need to know that a modularised approach of buying in pre-populated racks does not mean that, if or when they run out of a particular resource - CPU, storage, network or I/O - they will have to buy another full system just to gain incremental capacity.

Again, this is where virtualisation comes in. Additional resources can be applied as stand-alone components that the existing main system can exploit through its management software and use as needed.

This approach is a necessity if the new scale-up is to work. The promise of high-volume servers and virtualisation has been the essential commoditisation of the datacentre, with the components that make up the IT platform being individually low cost. If a component fails, replacement is cheap and easy.

This approach works well in a pure scale-out architecture, but...

About

Clive Longbottom is the founder of user-facing analyst house Quocirca. As an industry analyst, his primary coverage area is business process facilitation.
