By Martin Banks
Consolidation of Intel-based servers is a well-established strategy for reducing the running and management costs of enterprise IT systems. But if IBM has its way, more and more organizations will begin to explore the potential benefits of consolidating servers on the mainframe.
Many enterprises continue to have mainframe systems at the heart of the IT function, running established mission-critical applications. And much of the needed server consolidation can best be applied to the growing army of Intel-based systems that sit around them. These systems run a variety of applications, from routine file and print services to the more complex applications and Web-based services needed for increasingly important e-business front-ends.
Many of these applications can run under the Linux operating system as well as under Microsoft’s Windows. Indeed, for some applications Linux is arguably the better option. This opens up an opportunity for IBM, the dominant mainframe supplier, to exploit the capability of its zSeries machines to run Linux as a native operating system, as well as to run partitions in which multiple virtual Linux servers can be mixed with existing mainframe applications on the same machine.
Tackling the costing process
IBM’s argument for consolidating around the zSeries is straightforward: Most enterprises already have at least one zSeries machine, and though such machines are undoubtedly more expensive as an initial purchase, they deliver far better results in terms of total cost of ownership (TCO) over a lengthy period.
Estimating the total cost of consolidating on the zSeries, therefore, is a matter of some importance and demands that IT managers collect some important evidence. IBM’s approach is to compare the costs of running a specific application on each platform. The first job, then, is to take an application that’s running on Intel-based servers and size it. There are some important variables to consider, depending on the platform used. People costs are obviously important and can vary dramatically, as can the cost of the software licenses required.
How many heads?
To determine people costs, the object is to arrive at a figure for the number of servers per head of staff directly involved in the IT function. In the Intel-based server arena, this will vary according to the tasks being run. For example, where there are many “clone” servers all running the same simple tasks such as file and print, it is possible to have up to 30 servers per head. At the other end of the scale, where servers are running more complex tasks, such as Web or applications services, this figure drops to 10 servers per head or less.
A general rule of thumb suggests that an Intel-based server environment running a typical mix of tasks under Microsoft Windows is considered excellent if it averages 15 servers per head. In a Risc/UNIX environment, this figure drops to between two and eight servers per head, which appears significantly worse than that for an Intel environment, but there are other factors to consider. For example, the applications are usually more complex and critical to the business.
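As a rough sketch of how these ratios translate into headcount, the calculation below uses the servers-per-head figures quoted above; the 300-server estate is an assumed figure for illustration only.

```python
import math

def admins_needed(server_count, servers_per_head):
    """Staff implied by a servers-per-head ratio, rounded up."""
    return math.ceil(server_count / servers_per_head)

# Hypothetical estate of 300 servers, using the ratios quoted above:
print(admins_needed(300, 15))  # Windows at the "excellent" 15 per head -> 20
print(admins_needed(300, 5))   # Risc/UNIX at 5 per head -> 60
```

The same estate needs three times the staff at the Risc/UNIX ratio, which is why the complexity of the workload has to be weighed alongside the raw ratio.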
Cost of downtime and process utilization
With a mainframe, the servers-per-head ratio is inverted, and several staff members may be required per server. But this ratio has to be balanced against the mission-critical nature of the applications being run. Two other factors have to play a part in this equation. One is the reliability of the servers in a production environment. The other is processor utilization.
When it comes to reliability, mainframes have a well-established track record. There are also cost factors that are not always immediately obvious when reliability is considered, since more than just the cost of maintenance is involved. In a 24/7 operation, even planned maintenance downtime carries a cost, and that cost multiplies across the many Intel-based servers that must be set against a single mainframe. The total cost of downtime has to include the cost to the business of not having the server(s) available in the production environment. To estimate it, take the number of staff affected and multiply by their average salary, including benefits; this yields a real, direct cost per minute of downtime for the business. In addition, you must factor in the cost to the enterprise of business potentially lost while servers (and services) are unavailable.
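The downtime arithmetic above can be sketched as follows. The working-minutes figure and all the inputs in the usage example are assumptions for illustration, not figures from the article.

```python
def downtime_cost(minutes_down, staff_affected, avg_annual_salary,
                  lost_business_per_minute=0.0,
                  working_minutes_per_year=220 * 8 * 60):
    """Direct labour cost of an outage plus estimated lost business.

    avg_annual_salary should include benefits; working_minutes_per_year
    assumes 220 eight-hour working days -- adjust for your site.
    """
    salary_per_minute = avg_annual_salary / working_minutes_per_year
    labour = minutes_down * staff_affected * salary_per_minute
    return labour + minutes_down * lost_business_per_minute

# A one-hour outage affecting 500 staff on a $60,000 average package,
# with an assumed $1,000 per minute of lost business:
cost = downtime_cost(60, 500, 60_000, lost_business_per_minute=1_000)
print(round(cost, 2))  # 77045.45
```

Note that even in this modest scenario the lost-business term dominates, which is the article's point: the maintenance bill is the smallest part of the cost of downtime.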
Processor utilization is the amount of time the server processor actually spends performing useful work. With Intel-based servers, IBM estimates this to be normally between three and seven percent, although it can rise to as much as 30 percent for considerable periods of time with some applications. This means, of course, that even with compute-intensive applications, the processor spends more time inactive than actually working. With a mainframe, processor utilization is significantly higher.
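To see why low utilization matters for consolidation, consider the idealized arithmetic below. The 85 percent mainframe utilization target is an assumption, and the calculation ignores virtualization overhead and coincident peaks.

```python
def consolidation_ratio(current_utilization, target_utilization):
    """How many lightly loaded servers' useful work fits, in principle,
    into one unit of capacity run at the target utilization."""
    return target_utilization / current_utilization

# Intel servers at IBM's typical 5% estimate vs. an assumed 85% target:
print(round(consolidation_ratio(0.05, 0.85)))  # 17
```

In other words, the useful work of roughly seventeen such servers could in principle fit into one equivalent unit of well-utilized capacity.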
The price of software
Software pricing is also an important issue, particularly where users have, or are considering, Intel as a server platform throughout the enterprise. This is also an area where mainframe systems have traditionally been at a disadvantage. However, the pricing models now available to applications vendors in the Intel server market could actually make software more costly for Intel systems than for mainframes.
For example, some applications in the Intel sector are priced per server, while others are priced per processor. This is an increasingly important difference, which can be demonstrated by the following example. Take a typical Intel-based enterprise environment of 100 servers, each using four processors. Applications priced on the per-processor model could cost four times as much as the same applications priced per server. And with the move by Intel’s OEMs toward eight-processor systems, this will obviously become even more significant. If an application is important to the business process, software licenses could easily grow to be both a cost burden and a complex management problem.
Linux applications are often priced on a per-processor-engine basis. So coupled with the mainframe’s ability to run virtual servers on a single physical processor engine, dramatic cost savings are possible.
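The pricing-model difference can be made concrete with a small sketch. The four-engine figure for the consolidated mainframe is an assumption for illustration; the Intel figures are the article's own example.

```python
def intel_licences(servers, processors_per_server, per_processor):
    """Licences needed for an Intel estate under per-server or
    per-processor pricing."""
    return servers * (processors_per_server if per_processor else 1)

def mainframe_licences(engines):
    """Per-processor-engine pricing: one licence per physical engine,
    no matter how many virtual servers run on it."""
    return engines

# The article's 100 four-way Intel servers:
print(intel_licences(100, 4, per_processor=True))   # 400 licences
print(intel_licences(100, 4, per_processor=False))  # 100 licences
# The same workloads as virtual Linux servers on 4 mainframe engines:
print(mainframe_licences(4))                        # 4 licences
```

The licence count, not the hardware, is where the consolidation argument bites hardest under per-engine pricing.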
Taking this thought a little further: 15 servers per head is considered a leading-edge environment for Intel-based systems, so the per-head figure for a mainframe looks poor by comparison. But in practice, the system design allows some significant cost advantages to be gained over Intel servers running either Linux or Windows. For example, under z/VM it is possible to double, and sometimes triple, the best Intel-based figure for the number of servers per head by creating multiple virtual servers within the mainframe.
Where the servers are running relatively simple tasks such as file and print, even greater long-term savings are possible, because cloning a new virtual Linux server on a zSeries machine takes only a few seconds. Such virtual servers run on just one engine, so additional software license costs can be saved. In addition, these can be true clones that can be managed in exactly the same way. If another processor is added to the mainframe, the whole process can be replicated without requiring additional staff, thus extending the servers-per-head figure even further.
This availability of a uniform environment for virtual servers also overcomes a practical problem with supposedly “standardized” Intel servers, which are available from a wide range of sources. The common management approach is to select the lowest-cost servers that can run a specific application. In the short term this can appear to be a significant cost saving, but in the long term it has several drawbacks. If the servers come from different suppliers, there will be inevitable variations in core software as vendors seek differentiation in the marketplace. These will rarely be “clone” systems and will therefore demand individual management. The alternative, selecting a single server type for every application, means buying the biggest and most powerful model across the board.
Using these issues as arguments in a consolidation equation is, IBM believes, the way to identify the TCO issues over the lifetime of a production environment. As Web-based services proliferate around the core business functions, it is inevitable that the number of servers needed to run them will grow around the core mainframe. But if they can be run on the mainframe, often alongside mission-critical applications, costs of procurement, management, and licensing can be controlled and reduced.