To scale up or to scale out datacentre resources? That was the question. But now a bigger shift may be afoot in the way server vendors are tackling the issue, says Clive Longbottom.

In the beginning was the computer, and the computer was a mainframe. It represented a scale-up approach, with workloads being run on a set of highly defined resources. Then came the standard high-volume server.

Initially, a single application ran on each physical high-volume server, but first clustering and then virtualisation created a scale-out approach, where the need for more resources is dealt with simply by throwing more servers, storage and network at the problem.

This approach assumed that all workloads needed a standard set of resources – and scale-out’s shortcomings became apparent with workload types that required more tuning of the available resources, such as CPU, network, storage and input/output (I/O).


There is a place in an IT architecture for both approaches, but a bigger change may be afoot in the way server vendors are tackling the problem.

Let’s start with Cisco. In 2009, Cisco launched its Unified Computing System (UCS) architecture – a modularised approach to Intel-based computing.

Combining rack-mounted blades with top-of-rack networking components and storage, along with VMware virtualisation, BMC systems management and a Microsoft Windows operating system, UCS is aimed at highly specific Windows-based workloads.

In other words, UCS was essentially a scale-up architecture – a kind of mainframe built from lower-price, not-quite-commodity hardware building blocks.

Dell, HP, IBM, Oracle and SGI have all done something similar, based on either modular datacentre components or container-based systems. In each case, their systems can be tuned to provide certain characteristics, dealing with different workloads in different ways – again, more of a characteristic of a scale-up system than a high-volume server scale-out approach.

Servers, storage and networking equipment are engineered by rack, row or container to provide a greater range of workload capabilities than can be offered through just putting together a collection of individual resources. Enabling virtualisation and private cloud-based elasticity drives up utilisation rates and drives down energy costs.

However, there is a synergy between scale-up and scale-out. End-user organisations need to know that buying in pre-populated racks does not mean that, if or when they run out of a particular resource – CPU, storage, network or I/O – they will have to buy another full system just to add incremental capacity.

Again, here is where virtualisation comes in. Additional resources can be applied as stand-alone components that can be exploited by the existing main system through its management software and used as needed.
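As a purely illustrative sketch of that idea – the names and figures below are hypothetical, not any vendor’s management API – a management layer can simply treat a pre-populated rack and any later stand-alone add-ons as one resource pool:

    from dataclasses import dataclass

    @dataclass
    class ResourceUnit:
        """Capacity contributed by a pre-populated rack or a stand-alone add-on."""
        name: str
        cpu_cores: int
        storage_tb: int

    class ResourcePool:
        """Hypothetical management layer that presents mixed units as one pool."""

        def __init__(self) -> None:
            self.units = []

        def add(self, unit: ResourceUnit) -> None:
            self.units.append(unit)

        def totals(self) -> dict:
            return {
                "cpu_cores": sum(u.cpu_cores for u in self.units),
                "storage_tb": sum(u.storage_tb for u in self.units),
            }

    pool = ResourcePool()
    pool.add(ResourceUnit("pre-populated rack", cpu_cores=256, storage_tb=100))
    # Short of storage? Add a stand-alone shelf rather than a second full system.
    pool.add(ResourceUnit("stand-alone storage shelf", cpu_cores=0, storage_tb=50))
    print(pool.totals())  # {'cpu_cores': 256, 'storage_tb': 150}

The point is not the code but the contract: the existing system’s management software sees the extra capacity, so the buyer adds a component rather than a second full system.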

This approach is a necessity if the new scale-up is to work. The promise of high-volume servers and virtualisation has been the essential commoditisation of the datacentre, with the various components making up the IT platform being of individually low cost. If a component fails, replacement is cheap and easy.

This approach works well in a pure scale-out architecture, but can be less easy in a scale-up one, where more proprietary components may be brought in to provide the workload performance required.

Tolerating single and multiple component failure

Some vendors are looking at functional overprovisioning in their modules, so that single and multiple failures of components can be tolerated.

Not only does this tactic work well for providing high availability in a scale-up system, but it also paves the way for ultra-high-temperature systems.

For too long, managers have worked with datacentre temperatures of 21C or thereabouts. Various industry bodies are now advising that 34C will enable large energy savings with little loss of datacentre availability.

Now comes talk of the 50C datacentre – highly contained modular systems whose internals are allowed to run hot, with the resulting higher rate of component failure absorbed through overprovisioning.
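The arithmetic behind that trade-off is simple enough. As a rough sketch – the node count, per-node availability and spare counts below are assumptions for illustration, not vendor figures – a binomial model shows how much each extra spare buys back:

    from math import comb

    def module_availability(total: int, required: int, node_availability: float) -> float:
        """Probability that at least `required` of `total` identical nodes are
        working, assuming independent failures (a simplifying assumption)."""
        return sum(
            comb(total, k)
            * node_availability ** k
            * (1 - node_availability) ** (total - k)
            for k in range(required, total + 1)
        )

    # Assumed figures: a module needs 16 working nodes, and running hot drops
    # each node to 99% availability over the period of interest.
    REQUIRED = 16
    PER_NODE = 0.99
    for spares in range(4):
        a = module_availability(REQUIRED + spares, REQUIRED, PER_NODE)
        print(f"{spares} spare(s): module availability ~ {a:.6f}")

Each additional spare claws back a large slice of availability, which is what makes it feasible to let components fail in place rather than keep the room chilled.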

Another sign of scale-up moving away from commoditised components is the use of next-generation I/O optimisation. Fusion-io and Texas Memory Systems now provide flash-based memory systems that need to reside on the server board to bypass the relatively slow connections between external storage and servers.

As soon as such storage is placed in a server, the server is no longer a commodity. It contains relatively expensive components and has to be managed to provide greater levels of availability.
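A back-of-the-envelope sketch shows why the location of that flash matters; the latency and queue-depth figures below are illustrative assumptions, not measurements of any particular product:

    # Illustrative per-I/O latencies in microseconds (assumed orders of
    # magnitude, not vendor benchmarks).
    LATENCY_US = {
        "flash on the server board": 50,
        "storage over an external fabric": 2_000,
    }

    QUEUE_DEPTH = 32  # concurrent outstanding requests (an assumption)

    for path, latency_us in LATENCY_US.items():
        # By Little's law, throughput is bounded by concurrency divided by
        # latency, so every microsecond added by the path between server and
        # storage caps the I/O operations per second a workload can achieve.
        iops = QUEUE_DEPTH / (latency_us / 1_000_000)
        print(f"{path}: ~{iops:,.0f} IOPS at queue depth {QUEUE_DEPTH}")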

Even here, new approaches are coming to market. NextIO provides a vNET system that uses high-performance PCI Express (PCIe) cabling to enable any PCIe-compliant peripheral to be shared across a rack or racks of servers through its any-to-any connectivity.

For internal flash SSDs, it provides a specialised vSTOR box that, in conjunction with the vNET appliance, removes the need for the storage to sit inside the server itself – again pushing the server back towards being a commodity, and allowing modular systems to be left untouched, often a prerequisite of vendor maintenance agreements.

Multi-workload, scale-up systems

And so back to IBM, which is getting in on the game for multi-workload, scale-up systems and bringing things full circle. For some time, it has been possible to run zLinux on the mainframe through the Integrated Facility for Linux (IFL) or directly under z/VM.

However, most non-mainframe users do not see this approach as attractive, even with the demonstrable benefits. What IBM has brought to market is the zEnterprise – a platform that combines a mainframe with a Power-based system, plus management capabilities to route workloads intelligently to where they will be best served.

As a combined scale-up, scale-out solution, the zEnterprise has no match at the moment. But IBM has made the mistake of placing it within the z family, so prospective customers see it as a mainframe – and for those who regard the mainframe as an old platform, its capabilities as a low-footprint, cost-effective, multi-workload engine go unnoticed.

The age of pure scale-up passed some time ago, and the rush for pure scale-out looks like it is coming to an end. The future will be a hybrid approach, but canny organisations will not need to build such hybrid platforms themselves.

With vendors increasingly providing modular systems, the issue for buyers will be how to get the most out of them – by buying in additional systems, such as those from Fusion-io and NextIO, and by ensuring that suitable management software is in place.

Quocirca is a user-facing analyst house known for its focus on the big picture. Made up of experts in technology and its business implications, the Quocirca team includes Clive Longbottom, Bob Tarzey, Rob Bamforth and Louella Fernandes. Their series of columns for silicon.com seeks to demystify the latest jargon and business thinking.