Erik Eckel recommends a Mac server replacement plan that takes into account the cost of retaining machines that are past their prime.
The question of how often enterprise administrators should replace Mac servers is easy to answer. Justification is the tricky part; finance and accounting types might not like the answer.
A few weeks ago I presented four questions to ask to determine how often Mac workstations should be replaced. Mac servers possess vastly different lifecycle requirements, however.
Mac server vs. workstation replacement cycles
There are at least three reasons Mac servers require replacement at different intervals than do desktop workstations:
- Mac servers typically fulfill more demanding video production and/or audio editing tasks than do Windows servers. Over time, new Mac server models boast improved architectures that introduce new performance capabilities.
- Mac servers, by nature of the video, audio and graphics work they frequently empower, typically host vast amounts of data. As a server progresses through its service life, it's not uncommon for disk space and storage issues to arise.
- Like Windows and Linux servers, Mac servers must deliver near-uninterrupted uptime. When a server crashes, business operations and production grind to a halt.
So just how often should enterprise administrators replace Mac servers? I say every five years. Organizations should build their budgets with server replacement planned twice a decade.
It makes no sense to try stretching life cycles longer, especially as accounting data suggest systems cost more to maintain the older they get. Hard drives fail. Power supplies fail. Motherboards fail.
The more hard disks a server contains, the higher the likelihood that one of them fails. Given the demanding tasks Mac servers power, the machines likely run multiple hard disks. Even the Mac mini with Snow Leopard Server ships with two 500GB SATA drives, while Xserves can be configured with up to 16 SATA or SAS disks using optional Promise VTrak E-Class RAID controllers.
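The disk-count argument above is simple probability: the more drives in a chassis, the greater the chance at least one fails in a given year. A minimal sketch, assuming independent disks sharing a single illustrative annual failure rate (real rates vary by drive model, age, and workload):

```python
def any_disk_failure_probability(num_disks: int, annual_failure_rate: float) -> float:
    """P(at least one disk fails in a year) = 1 - P(no disk fails).

    Assumes independent, identically behaving disks -- an
    illustrative simplification, not a vendor-published figure.
    """
    return 1 - (1 - annual_failure_rate) ** num_disks

# Compare a 2-disk Mac mini server to a 16-disk Xserve array
# at an assumed 3% annual failure rate per disk:
print(round(any_disk_failure_probability(2, 0.03), 3))   # ~0.059
print(round(any_disk_failure_probability(16, 0.03), 3))  # ~0.386
```

At the same per-disk rate, the fully loaded Xserve is roughly six times as likely to see a drive failure in a given year, which is why RAID controllers and planned replacement matter more as disk counts climb.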
Wait too long, and enterprise administrators will find themselves wasting IT budget dollars on repairs to systems that should already be rotated out of the organization. That's not a solution; that's a temporary fix. And it's expensive. By cycling Mac servers every five years, organizations do all they can to eliminate unplanned downtime while also benefiting from faster architectures, greater storage capacity, and improved reliability.
The true cost of downtime, especially unplanned outages, is simply staggering. The formula is surprisingly simple: divide annual revenue by the number of workdays in a year to get the revenue at risk each day, then multiply by the number of days the organization is down.
For example, say an organization with just $10 million in annual sales suffers a server issue that requires a single business day to repair. Even that is optimistic: diagnosis and obtaining replacement parts typically take at least one business day, with a second day needed to actually perform and test the repair. While the server is unavailable, employees likely can't work, complete projects, process orders, send invoices, or respond to email. That means the organization just lost $40,000.
Skeptical? Check my math: $10,000,000 in annual sales divided by 250 workdays (260 weekdays minus 10 federal holidays) equals $40,000 lost per day of downtime. Adding insult to injury, the organization must then invest precious IT dollars band-aiding a server that's only going to experience more difficulty as it continues aging.
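The arithmetic above can be sketched in a few lines, using the article's example figures ($10 million in sales, 260 weekdays, 10 federal holidays):

```python
# Downtime cost: daily revenue at risk times days down.
ANNUAL_SALES = 10_000_000   # $10M annual sales, per the example
WEEKDAYS = 260              # weekdays in a year
FEDERAL_HOLIDAYS = 10       # non-working federal holidays

workdays = WEEKDAYS - FEDERAL_HOLIDAYS      # 250 workdays
daily_revenue = ANNUAL_SALES / workdays     # $40,000 per workday

def downtime_cost(days_down: float) -> float:
    """Revenue at risk for a given stretch of downtime."""
    return daily_revenue * days_down

print(downtime_cost(1))  # 40000.0 for a single day down
print(downtime_cost(2))  # 80000.0 if repair and testing stretch to two days
```

The two-day figure is worth noting: since diagnosis and parts often consume the first day, the realistic exposure for a single failure is closer to $80,000 than $40,000.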
Those figures should send a sinister shiver up the spine of the accounting and finance officers. Enterprise administrators can help ensure organizations avoid such catastrophes by ensuring IT budgets call for replacing server chassis every five years.