Evaluate the true cost of consolidating servers on IBM zSeries mainframes
By Martin Banks
Consolidation of Intel-based servers is a well-established strategy for reducing the running and management costs of enterprise IT systems. But if IBM has its way, more and more organizations will begin to explore the potential benefits of consolidating servers on the mainframe.
Many enterprises continue to have mainframe systems at the heart of the IT function, running established mission-critical applications. And much of the needed server consolidation can best be applied to the growing army of Intel-based systems that sit around them. These systems run a variety of applications, from routine file and print services to the more complex applications and Web-based services needed for increasingly important e-business front-ends.
Many of these applications can run under the Linux operating system as well as under Microsoft's Windows. Indeed, for some applications Linux is arguably the better option. This opens up an opportunity for IBM, the dominant mainframe supplier, to exploit the capability of its zSeries machines to run Linux as a native operating system, as well as to run partitions in which multiple virtual Linux servers can be mixed with existing mainframe applications on the same machine.
Tackling the costing process
IBM's argument for consolidating around the zSeries is straightforward: Most enterprises already have at least one zSeries machine, and though undoubtedly more expensive as an initial purchase, they provide far better results in terms of total cost of ownership (TCO) over a lengthy period.
Estimating the total cost of consolidating on the zSeries, therefore, is a matter of some importance and demands that IT managers collect some important evidence. IBM’s approach is to compare the ways of running a specific application. So to start the costing process, the first job is to look at an application that's running on Intel-based servers and size it. There are some important variables to consider, depending on the platform used. People costs are obviously important and can vary dramatically, as can the cost of the software licenses required.
How many heads?
To determine people costs, the object is to arrive at a figure for the number of servers per head of staff directly involved in the IT function. In the Intel-based server arena, this will vary according to the tasks being run. For example, where there are many “clone” servers all running the same simple tasks such as file and print, it is possible to have up to 30 servers per head. At the other end of the scale, where servers are running more complex tasks, such as Web or applications services, this figure drops to 10 servers per head or less.
A general rule of thumb suggests that an Intel-based server environment running a typical mix of tasks under Microsoft Windows is considered excellent if it averages 15 servers per head. In a RISC/UNIX environment, this figure drops to between two and eight servers per head, which appears significantly worse than that for an Intel environment, but there are other factors to consider. For example, the applications are usually more complex and critical to the business.
Cost of downtime and process utilization
With a mainframe, the servers-per-head ratio is inverted, and several staff members may be required per server. But this ratio has to be balanced against the mission-critical nature of the applications being run. Two other factors have to play a part in this equation. One is the reliability of the servers in a production environment. The other is processor utilization.
When it comes to reliability, mainframes have a well-established track record. There are also some cost factors that are not always immediately obvious when reliability is considered, since more than just the cost of maintenance is involved. These days, even planned maintenance downtime affects an enterprise operating a 24/7 environment, where multiple Intel-based servers must be set against one mainframe. The total cost of downtime has to include the cost to the business of not having the server(s) available in the production environment. This means estimating the number of staff affected, multiplied by their average salary (including benefits) prorated over the outage. This leads to a real, direct cost per minute of downtime for the business. In addition, you must factor in the cost to the enterprise of possible lost business while servers (and services) are unavailable.
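The direct-cost part of this calculation can be sketched in a few lines. All figures here are assumed for illustration only; substitute your own headcount, salaries, and revenue estimates.

```python
def downtime_cost_per_minute(staff_affected, avg_annual_salary_with_benefits,
                             working_minutes_per_year=60 * 40 * 48):
    """Direct cost of one minute of downtime: idled staff salary, prorated.

    Assumes a 40-hour week over 48 working weeks; adjust to match your
    organization's actual working calendar.
    """
    per_minute_rate = avg_annual_salary_with_benefits / working_minutes_per_year
    return staff_affected * per_minute_rate


# Hypothetical example: 200 staff idled, $60,000 fully loaded salary each
direct = downtime_cost_per_minute(200, 60_000)

# Lost business while services are down is added on top of the direct cost.
total = direct + 500  # assumed $500/minute of revenue at risk
```

Even with modest assumptions, a multi-hour outage quickly reaches six figures once lost business is included, which is why the article weighs downtime so heavily in the TCO comparison.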
Processor utilization is the amount of time the server processor actually spends performing useful work. With Intel-based servers, IBM estimates this to be normally between three and seven percent, although it can rise to as much as 30 percent for considerable periods of time with some applications. This means, of course, that even with compute-intensive applications, the processor spends more time inactive than actually working. With a mainframe, processor utilization is significantly higher.
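The utilization gap translates directly into wasted capacity, which a one-line calculation makes concrete. The server count and utilization figure below are assumptions chosen to match IBM's quoted 3-to-7-percent range:

```python
servers = 100
avg_utilization = 0.05  # midpoint of IBM's 3-7% estimate for Intel servers

# Equivalent number of fully busy machines doing the same useful work
useful_capacity = servers * avg_utilization
# 100 lightly loaded Intel servers do the useful work of about 5 busy ones.
```

By this arithmetic, a mainframe running at high utilization can absorb the useful workload of a much larger Intel server farm, which is the core of IBM's consolidation pitch.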
The price of software
Software pricing is also an important issue, particularly where users have, or are considering, Intel as a server platform throughout the enterprise. This is also an area where mainframe systems have traditionally been at a disadvantage. However, the pricing models now available to applications vendors in the Intel server market could actually make software more costly for Intel systems than for mainframes.
For example, some applications in the Intel sector are priced per server, while others are priced per processor. This is an increasingly important difference, which can be demonstrated by the following example. Take a typical Intel-based enterprise environment of 100 servers, each using four processors. Applications priced on the per-processor model could cost four times as much as those priced per server. And with the move by Intel's OEMs toward eight-processor systems, this will obviously become even more significant. If an application is important to the business process, software licenses could easily grow to be both a cost burden and a complex management problem.
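The four-fold difference in the example can be verified with a quick sketch. The unit price below is a hypothetical figure, and a flat per-server license is assumed as the baseline for comparison:

```python
servers = 100
cpus_per_server = 4
unit_price = 5_000  # hypothetical license price per unit (server or CPU)

per_server_total = servers * unit_price                       # 100 licenses
per_processor_total = servers * cpus_per_server * unit_price  # 400 licenses
# Per-processor pricing costs four times the per-server model here,
# and the multiplier grows with the processor count per server.
```

Moving to eight-processor servers doubles the per-processor figure again while leaving the per-server total unchanged, which is why the pricing model matters more as server hardware scales up.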
Linux applications are often priced on a per-processor-engine basis. So coupled with the mainframe's ability to run virtual servers on a single physical processor engine, dramatic cost savings are possible.
Taking this thought a little further, 15 servers per head is considered to be a leading-edge environment for Intel-based systems, so the per-head figure for a mainframe looks poor by comparison. But in practice, the system design allows some significant cost advantages to be gained over Intel servers running either Linux or Windows. For example, under z/VM, it is possible to double—and sometimes triple—the best Intel-based figure for the number of servers per head by creating multiple virtual servers within the mainframe.
Where the servers are running relatively simple tasks such as file and print, even greater long-term costs can be saved. This is because it takes a matter of a few seconds to clone a new virtual Linux server on the zSeries machines. Such virtual servers will be running on just one engine, so additional software license costs can be saved. In addition, these can be true clones that can be managed in exactly the same way. If another processor is added to the mainframe, the whole process can be replicated without requiring additional staff, thus extending the server-per-head figure even further.
This availability of a uniform environment for virtual servers also overcomes a practical problem when using supposedly “standardized” Intel servers, which are available from a wide range of sources. The common management approach will be to select servers that are the lowest cost needed to run a specific application. In the short term, this can appear to be a significant cost saving, but in the long term has several drawbacks. If the servers come from different suppliers, there will be the inevitable variations in core software from vendors seeking differentiation in the marketplace. These will rarely be “clone” systems and will therefore demand individual management. The alternative—to select one server type for all applications—will demand that the biggest and most powerful is selected for all applications.
Using these issues as arguments in a consolidation equation is, IBM believes, the way to identify the TCO issues over the lifetime of a production environment. As Web-based services proliferate around the core business functions, it is inevitable that the number of servers needed to run them will grow around the core mainframe. But if they can be run on the mainframe, often alongside mission-critical applications, costs of procurement, management, and licensing can be controlled and reduced.
Not so fast....
1. Performance: Users running Linux applications on the M/F report performance in the x486 range. IBM isn't talking much about this and has not performed any benchmarks on the frame. IBM responds to this concern by pointing out they can run "thousands and thousands" of Linux instances on a single platform. However, if you take a close look at their tests, you find that each instance was simply a shell and did not do any useful work or perform any outside communication.
2. Cost: IBM has special programs to make it "inexpensive" to run Linux on the mainframe. However, it is still quite a bit costlier than Linux on an x86 platform. I have heard that the mainframe version of the Linux O/S can cost over $200,000 - not sure if this is true, would appreciate feedback if I am wrong.
3. Application availability: The mainframe, not being x86 based, needs a special version of Linux that is being supplied by (I think) Suse. Linux applications for the mainframe also need modification to run on the 'frame. They will, at best, need to be recompiled. In some cases, the apps might need to be extensively tweaked to ensure reliable performance.
The author discusses the support personnel ratios for a variety of system architectures. In my opinion, he mis-states the case and makes the mainframe sound more advantageous than it really is. My experience in a variety of end-user organizations indicates that many organizations run at a ratio of 30+ x86-based systems per support staffer and 10+ Unix systems per support staffer. The discussion of support staff per system can be useful, but it is only part of the story and part of the Total Cost of Ownership.
IBM mainframes are expensive too...
Mainframes require a handful of highly specialized experts to administer them. Especially now, when the number of IT people who know anything about mainframe development and maintenance is continuously dwindling.
I used to work for a software house that supported, among other platforms, mainframe development with a headcount of 1,500 staff. When we failed to renew our major (and only) mainframe contract, we had to cut down to 300 staff. 1,200 staff, from developers to SAs and project managers, for maintaining one company's mainframe system is quite an army.
Even now, in the current economy, you can still find the need for mainframe contract developers, SAs, and even project managers here in Hong Kong.
Stable? Yes, mainframes are very stable. Cheap? No way. You pay very, very good money to make them stable.
IBM has no shame
So IBM has a huge sunk cost in its mainframe development and is just trying to sell more of them because no one else has them. But its sales pitch falls flat on a couple of points.
Too chunky - mainframe capacity is bought in chunks that are too big, leading to obvious waste and a tough sell.
Too centralized - applications and the people who run them in a corporation have decentralized too much to go to the centralized process that mainframes require.
Too few skilled practitioners - most of the IT people today have never dealt with a mainframe and are going to totally resist it.