Server vendors are changing the way they look at datacentre resources...
...can be harder in a scale-up one, where more proprietary components may be brought in to provide the required workload performance.
Tolerating single and multiple component failure
Some vendors are looking at functional overprovisioning in their modules, so that single and multiple failures of components can be tolerated.
Not only does this tactic provide high availability in a scale-up system, it also paves the way for running systems at much higher temperatures.
For too long, managers have worked with datacentre temperatures of 21C or thereabouts. Various industry bodies are now advising that 34C will enable large energy savings with little loss of datacentre availability.
Now comes talk of the 50C datacentre - highly contained modular systems where the internal temperature is allowed to run hot with the associated higher rate of component failure being allowed for through overprovisioning.
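The trade-off behind this idea can be sketched with some simple probability: if a module needs K working components out of N fitted, its chance of staying operational follows a binomial survival calculation. The figures below are purely hypothetical, chosen to illustrate how a couple of spare components can more than offset a heat-driven rise in per-component failure rates.

```python
from math import comb

def survival_probability(n_total, n_required, p_fail):
    """Probability that at least n_required of n_total independent
    components survive, given per-component failure probability p_fail."""
    p_ok = 1.0 - p_fail
    return sum(
        comb(n_total, k) * p_ok**k * p_fail**(n_total - k)
        for k in range(n_required, n_total + 1)
    )

# Hypothetical figures: 10 components needed; assume a 2% annual
# failure probability at 21C rising to 8% in a 50C enclosure.
cool_bare = survival_probability(10, 10, 0.02)  # no spares, cool
hot_bare = survival_probability(10, 10, 0.08)   # no spares, hot
hot_spare = survival_probability(12, 10, 0.08)  # two spares, hot

print(cool_bare, hot_bare, hot_spare)
```

Under these assumed numbers, the hot module with two spare components ends up more likely to survive the year than the cool module with none — which is the overprovisioning argument in miniature.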
Another way in which scale-up moves away from the commoditisation of components can be seen in next-generation input/output optimisation. Fusion-io and Texas Memory Systems now provide flash-based memory systems that must reside on the server board, bypassing the relatively slow connections between servers and external storage.
As soon as such storage is placed in a server, the server is no longer a commodity item: it contains relatively expensive components and has to be managed to provide greater levels of availability.
Even with this concept, there are new approaches coming to market. NextIO provides a vNET system that uses high-performance PCI Express (PCIe) cabling to enable any PCIe-compliant peripheral to be shared across a rack, or racks, of servers through its any-to-any connectivity capability.
For internal flash SSD drives, it provides a specialised vSTOR box that, in conjunction with the vNET appliance, removes the need for the storage to sit inside the server. This again pushes the server back towards being a commodity, and also allows modular systems to be left untouched - often a prerequisite for vendor maintenance agreements.
Multi-workload, scale-up systems
And back to IBM. It is getting in on the game for multi-workload, scale-up systems, bringing things full circle. For some time, it has been possible to run Linux on the mainframe, either through Integrated Facility for Linux (IFL) processors or directly under z/VM.
However, most non-mainframe users do not find this approach attractive, even with its demonstrable benefits. What IBM has brought to market is the zEnterprise - a platform combining a mainframe with a Power-based system, plus management capabilities that intelligently route workloads to where they will be best served.
As a combined scale-up, scale-out solution, the zEnterprise has no match at the moment. But IBM has made the mistake of positioning it within the z family, so prospective customers see it as a mainframe - and those who regard the mainframe as an old platform miss its capabilities as a low-footprint, cost-effective, multi-workload engine.
The age of pure scale-up passed some time ago, and the rush for pure scale-out looks like it is coming to an end. The future will be a hybrid approach but canny organisations will not need to build such hybrid platforms themselves.
With the increasing move by vendors to provide modular systems, the issue for buyers will be how to get the most out of them - through buying additional systems from vendors such as Fusion-io and NextIO, and through ensuring that suitable management software is in place.
Quocirca is a user-facing analyst house known for its focus on the big picture. Made up of experts in technology and its business implications, the Quocirca team includes Clive Longbottom, Bob Tarzey, Rob Bamforth and Louella Fernandes. Their series of columns for silicon.com seeks to demystify the latest jargon and business thinking.