When it comes to the number of servers still running old versions of Windows, estimates (or should we say guesses) vary hugely. Depending on who you want to believe, for example, there could be around eleven million servers around the world still running Windows Server 2003. This statistic wouldn’t, ordinarily, raise many eyebrows, were it not for the fact that, as of 14 July this year, that version of Windows Server officially reaches the end of its life (‘EOL’ in Microsoft-speak) and all updates, including critical security patches, stop. Forever.

That’s a stark fact, and it raises the question: if you’re still running good old Win2K3, what do you do come Bastille Day? Stick your head in the sand and carry on regardless, fingers crossed that no new vulnerabilities are found to compromise the security of your servers and the applications they host? That’s what a lot of customers may well do — just as many still run Windows XP, support for which ended in April last year, on their desktops. However, it’s an incredibly risky strategy. Far better, it would seem, to follow Microsoft’s recommendation and upgrade to the latest Windows Server 2012 R2 platform, which is not only fully supported but also quicker and, if Microsoft is to be believed, better too.

That said, a lot has changed in the intervening years and upgrading to the latest Windows Server OS will be both complicated and disruptive. If you’re going down the upgrade route, this might be just the opportunity to take a good look at what you’ve got, and perhaps consider alternatives to conventional in-house servers that were unheard-of when Windows Server 2003 made its debut.

Out with the old, in with the 64-bit

It’s important not to underestimate the amount of disruption that upgrading from Windows 2003 is likely to cause. Don’t assume, for instance, that you can get away with simply buying new software and installing it on your existing hardware. Very few sites will be able to do that — not least because system requirements have changed enormously in the decade or so since Win2K3 was first released. For many, this will mean updating hardware just to get Windows Server 2012 R2 to run.

When Windows Server 2003 first appeared, for example, it could be hosted on a single-core processor supported by just 128MB of memory. Furthermore, the processor didn’t even have to be 64-bit, so companies that bought their servers then (yes, they do still exist) will have little option but to replace their hardware before even thinking about migrating to the current 64-bit-only product.
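To put that gap in concrete terms, here’s a minimal Python sketch that checks a server inventory against Microsoft’s published minimums for Windows Server 2012 R2 (a 64-bit CPU, 512MB of RAM and 32GB of disk). The inventory itself, and the server names in it, are entirely made up for illustration:

```python
# Minimum requirements for Windows Server 2012 R2, per Microsoft's
# published specifications: 64-bit processor, 512MB RAM, 32GB disk.
MIN_2012R2 = {"arch": "x64", "ram_mb": 512, "disk_gb": 32}

def needs_new_hardware(server):
    """Return True if this box cannot host Windows Server 2012 R2."""
    return (
        server["arch"] != MIN_2012R2["arch"]        # 32-bit CPUs are out
        or server["ram_mb"] < MIN_2012R2["ram_mb"]
        or server["disk_gb"] < MIN_2012R2["disk_gb"]
    )

# A hypothetical inventory: one vintage 2003-era 32-bit box with the
# bare-minimum 128MB of RAM, and one later 64-bit refresh.
inventory = [
    {"name": "dc01",  "arch": "x86", "ram_mb": 128,  "disk_gb": 36},
    {"name": "app02", "arch": "x64", "ram_mb": 4096, "disk_gb": 500},
]

to_replace = [s["name"] for s in inventory if needs_new_hardware(s)]
print(to_replace)  # only the 32-bit box is flagged for replacement
```

In a real audit the inventory would, of course, come from your asset-management system rather than a hard-coded list, but the arithmetic is the same: anything 32-bit fails immediately, regardless of how much memory or disk it carries.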

On the plus side, such 32-bit laggards are likely to be thin on the ground. The majority will, at the very least, have already upgraded to the 64-bit R2 version of Win2K3 (released at the end of 2005, just as the first dual-core Xeons were arriving) and refreshed their hardware to suit. Many will also have kept pace with further upgrades and advances.

But not all. A good proportion of companies looking to move on from Win2K3 will have to upgrade their hardware, and not just by adding extra RAM or a couple of bigger disks. Large numbers will need to get rid of what they’ve got and buy afresh, which, according to yet more estimates, could result in anything from 1,000 to an incredible 25,000 servers a day having to be replaced in the coming months!

Virtualisation takes up the slack?

“But wait,” I hear you say, “those numbers can’t be right. They mostly come from server vendors with a vested interest in pushing new kit out the door. Besides, everything that matters these days is run on a hypervisor, so a lot of Win2K3 upgrades will involve virtual machines hosted on much more recent hardware which, at the very least, can be easily scaled to cope with the additional workloads.”

These are reasonable enough assumptions but, unfortunately, not necessarily the case. Physical servers are still very much in use — especially in small to medium-sized organisations. Also, larger companies will often use virtualisation to take consolidation to the max, running server hosts as close to capacity as possible. In which case the same considerations apply, as VMs upgraded to the newer OS will make bigger demands on compute, memory and storage resources which, in turn, will require physical hardware upgrades or bigger servers to cope.

Then there’s the little matter of server applications which have, similarly, moved on, putting even greater pressure on the supporting infrastructure. Exchange, SQL Server and other staples remain but, these days, data centre servers also have to cope with private clouds, vastly increased user mobility, big data analytics, virtual desktop deployments and much, much more.

Most of these were little more than pipe dreams when Win2K3 was conceived and, although Windows Server 2012 R2 has them covered, that capability comes at a price. Especially when allied to a Storage Area Network (SAN), the technology now almost universally employed to handle the storage side of the equation, and one reason why you’ll find some hardware vendors guiding Win2K3 EOL customers in the direction of convergence rather than upgrading their conventional server stacks.

Server-storage convergence

There are many reasons why convergence has been getting a lot of attention recently, most centred on the relationship between servers and SANs. Common pairing though it is, mixing servers and SANs is far from cheap and, despite the industry’s best attempts to make the technology user-friendly, it can still take time and expertise to provision and manage the storage that servers require. That’s a real pain point in today’s world, where companies want to bring new systems online and respond to changes in demand as quickly as possible.

At its most basic, convergence addresses these issues by delivering compute and storage resources together in a modular yet scalable format. Need more capacity? Simply slip another converged appliance into the rack to get both additional processors and storage, with platforms able to do this now readily available from all the leading server vendors. Not all dispense with the SAN, however, and where they do, they tend to rely on the likes of VMware and its Virtual SAN platform, or open-source alternatives, to provide the virtualisation needed to join up the storage dots and make it all work.

Differentiating converged infrastructure from their more traditional server platforms isn’t easy for the big-name vendors either, which has enabled specialist startups to get a foot in the door and target customers looking to cope with Win2K3 EOL. Leading examples of such companies are Nutanix and SimpliVity, which deliver convergence using commodity server and storage hardware, packaged in an appliance format complete with integrated storage virtualisation for scalability and ease of management.

Converged infrastructure is a rapidly growing segment of the market, and companies faced with having to upgrade from Windows Server 2003 could do worse than consider migration to it as part of that process. It won’t be to everyone’s taste, but there are lots of benefits in terms of cost of ownership and ease of management, and it’s certainly one way of turning a potential problem into an opportunity for improvement.

Goodbye on-premises, hello cloud

Convergence isn’t the only option, however. Another would be to give up on in-house servers and storage altogether. After all, why bother with the expense and hassle of hosting applications in a costly and complex local data centre when you can simply rent the infrastructure you need in the cloud? And what better time to investigate what the cloud has to offer than when Win2K3 EOL forces you to consider a change? This possibility is not lost on service providers, who have been busy marketing their wares to potential Windows Server 2003 upgraders — and not without success. Buyer beware, however, as it’s far from a ‘one size fits all’ solution.

One commonly suggested approach is to switch to renting infrastructure in the cloud, using Amazon Web Services (AWS), for example, or Microsoft Azure, and migrate existing server workloads to virtual machines hosted by those providers. That, however, can be a big leap for any organisation to take and, although potentially a lot cheaper, is not necessarily any less disruptive than simply upgrading an existing server setup. It also doesn’t suit every application, and the end result is often just as complex to manage and maintain.

Another alternative would be to outsource servers or, possibly, a complete converged infrastructure to a third party to run for you and use that to deliver a private cloud. Or, better still, to target individual servers running Windows Server 2003 and look at replacing the applications they host with public cloud services.

Email and collaboration platforms running on Windows Server 2003 are prime candidates here, as hosted Exchange Server services, for example, are widely available both from Microsoft and others. It needn’t be that disruptive either, and there’s the added advantage that it makes life a lot simpler when it comes to supporting remote and mobile users. You don’t have to stick with Exchange Server either, as there are plenty of alternative email and collaboration services that are equally effective.

Other applications can, similarly, be migrated to the cloud rather than updating the Win2K3 servers that host them, and you can do that at your own pace. Or you could just bite the bullet and migrate to the latest Windows Server on a brand-new stack of servers. Or you could do nothing at all — which, judging by yet more of those estimates and a wealth of good intentions, is what a lot of Windows Server customers will end up (not) doing.