
My vision for the future of computing: One big system shared among many... then broken apart

In a rare aside, Scott Lowe discusses the current state of computing, how today's model fits with the past and the future, and what it means for the CIO.

I've been in the IT field for just over 16 years. To many of the loyal readers at TechRepublic, I'm still in my IT infancy. You remember the days when reel-to-reel was the backup method (and the data load method!) and when punch cards were the only way to program. Like me, you also remember the days of the single-purpose server -- one server handled file shares, another acted as the print server, yet another was the database server, while a fourth system did e-mail. To keep workloads from conflicting with one another, IT continually added new servers to meet new needs -- this add-and-extend process became known as "server sprawl."

In the past few years, things have changed a lot. Server virtualization has taken the data center by storm and has delivered significant benefits to the organizations that adopted the technology. On the heels of server virtualization have come storage virtualization and, more recently, desktop virtualization/virtual desktop infrastructure (VDI).

A clear trend is emerging.

More and more raw computing power is being centralized and "right-sized" at the same time that organizations scale up their use of technology. What do I mean by "right-sized"? Let's use Westminster College (where I am CIO) as an example. A few weeks ago, I wrote a blog post discussing our successful (and, admittedly, modest in scope) server virtualization project. Under that infrastructure, we're running 28 virtual machines -- 28 separate workloads -- on three physical hosts. Under the "physical server sprawl" model of yesteryear, this would have required 28 physical servers if we were to maintain a 1:1 relationship between workload and computing resource sandbox.

Now, with VMware building us virtual sandboxes, we get a major benefit: We get to use these centralized resources -- three servers' worth -- without huge wasted processing cycles. In fact, even with the "sprawl" of 28 virtual machines, we're using, on average, about 10% of the combined resources of the three physical hosts. That makes sense, since a big part of our effort was focused on availability: we can lose one of the three hosts and still run all the workloads. As we need more processing resources, we'll simply add another ESX server to the cluster.
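For anyone who wants to check the back-of-the-envelope math, here's a minimal sketch. The host count, VM count, and utilization figure come from our environment as described above; the assumption that load redistributes evenly across the surviving hosts is a simplification for illustration, not how VMware actually balances things.

```python
# Back-of-the-envelope capacity math for the cluster described above.
# The even-redistribution assumption is a simplification for illustration.

HOSTS = 3               # physical ESX hosts in the cluster
VMS = 28                # virtual machines (separate workloads)
AVG_UTILIZATION = 0.10  # ~10% of the combined resources of the three hosts

def utilization_after_host_failure(hosts: int, avg_utilization: float) -> float:
    """Estimate utilization of the surviving hosts if one host fails,
    assuming the load spreads evenly across what's left."""
    total_load = hosts * avg_utilization  # cluster load, in "host units"
    return total_load / (hosts - 1)

print(f"Consolidation ratio: {VMS / HOSTS:.1f} VMs per host")
print(f"Utilization if one host fails: "
      f"{utilization_after_host_failure(HOSTS, AVG_UTILIZATION):.0%}")
# With these numbers, the two surviving hosts would sit at roughly 15%,
# which is why losing a host doesn't threaten any of the 28 workloads.
```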

As I indicated, this whole concept of abstraction -- which is virtualization -- is happening in other areas, too. Storage virtualization aims to make more efficient use of storage pools in much the same way that virtual servers make better use of server resources.

Another area being affected by the continual move to centralization is the desktop computer. Many organizations -- Westminster College included -- are studying the potentially major benefits of desktop virtualization. I, for one, abhor the endless desktop replacement cycle. In most cases, people's jobs and the tools they use haven't changed in significant ways in the past three years, but it has become accepted practice to simply remove the old computer in favor of a new one that runs a new operating system and runs faster. This is done for a variety of reasons, including the difficulty of supporting older computers, perceived speed issues, and, sometimes, a user's new duties and need for newer hardware. Changing work styles (read: mobility) are another factor driving organizations to end the desktop replacement cycle in favor of a service that can both stabilize replacement costs and aid mobility efforts.

I'm not going to extol the virtues of desktop virtualization -- whether you do it with newer VDI tools or with more traditional products like Citrix -- in this article except to say that I believe that, done right, it has promise. Moving down this path further centralizes computing resources in the data center.

So, we've now fully centralized all our servers and desktops across a bunch of virtual hosts. Let's take it to the next level.

Let's simply buy one huge computer that has fully redundant parts; we'll make sure it has massive amounts of memory, processors galore, and all the storage needed for the whole enterprise. We'll lock the big machine in one big room in the company. Now, we have to maintain only a single -- albeit complex -- computer that happens to run all the workloads upon which our organization's business runs!

Brilliant!

(As an aside, when I talk to my colleagues about virtualization, I like to say that "I envision a day when everything we do will run on a really big, single computer in the data center, and people will just access these services via terminals" just to see if people are paying attention.)

Just for the record, I wrote the last paragraph fingers-in-cheek (since I don't type with my tongue, I can't say tongue-in-cheek), as it illustrates the current state of the centralization/decentralization pendulum. We started with mainframes -- big centralized computers that both ran the company processes (workloads/servers) and connected to terminals for user access (desktop computers). Later, we moved to distributed computing, brought on by the advent of the affordable IBM PC and networking software from the likes of Novell, Banyan, and Microsoft.

Now, we're back on the path to centralization. Of course, what's happening today isn't exactly the same thing, but I think a clear case can be made that we've quickly moved back to centralizing resources, including both servers and desktops.

Moreover, there are clear signs that the pendulum is already starting to swing back to decentralized services in the form of cloud-based services (aka hosted services, aka Software as a Service (SaaS)). Individual organizations are centralizing computing resources to drive out inefficiency and reduce costs, while cloud services become more viable all the time. Just as the mainframe guys watched the move to LANs and distributed computing, today's IT groups are starting to see cloud services chip away at the neatly centralized computing environment they have built. Where CRM software was once run in-house, moved to the virtualized, centralized infrastructure, and accessed through Terminal Services or a VDI connection, many salespeople now use services like Salesforce.com. Even Microsoft has seen the light on this trend and is making online/hosted versions of many of its products available, including Exchange and SharePoint.

So, what does all of this mean for you, the CIO? Well, it means that the constant change to which you've become accustomed shows no signs of slowing down. If anything, the pace of change is increasing. It also means that you have many markets to watch. Even as cloud-based services whittle away (sometimes rightly so) at the services traditionally hosted by internal IT groups, there are still significant benefits to be had by centralizing your computing infrastructure. After all, even if people are accessing that cloud-based Salesforce.com service you provide, there may still be benefit in having them do so from a VDI-provisioned desktop image that you provide.

To paraphrase the line Robert Jordan used at the beginning of every book of the Wheel of Time series: ages come and ages go, but the Wheel keeps turning.

He must have been talking about IT.

About

Since 1994, Scott Lowe has been providing technology solutions to a variety of organizations. After spending 10 years in multiple CIO roles, Scott is now an independent consultant, blogger, author, owner of The 1610 Group, and a Senior IT Executive w...

Comments
TheProfessorDan

" I, for one, abhor the endless desktop replacement cycle." We recently went virtual with 95% of our desktops. We use WYSE terminals and VMWare. I was giving some thought to this a few weeks back when it occurred to me that we are theoretically done with desktop migrations for pretty much forever. The thought is mind boggling. I also agree with Scott that I will not shed a tear to see this go away. It does make you wonder how things will go in the future with operating systems changing.

Deadly Ernest

BTW: Science fiction is full of stories of systems like that where they go wrong or get misused. May I suggest you read Robert Heinlein's 'The Moon is a Harsh Mistress'?

oldbaritone

"Single Point Failure" First we had mainframes with centralized processing and timesharing. Then we had desktop computers in a standalone environment. Then came the internet and everything was connected together. Distributed processing with centralized storage. Then came Windows Terminal Server. And then we found out that TS needed GOBS of processor power, and that when "the box" went down, all of the work stopped. No matter how many "redundant parts", there always seems to be some one-of-a-kind thing that breaks. It's called Murphy's Law - "The Contributions of Edsel Murphy to the Understanding of the Behavior of Inanimate Objects." If anything can go wrong, it will go wrong, at the most inconvenient time or place. So we went back to desktops and distributed processing, with data replication. Local copies of the data keeps the users working continuously, and data warehousing protects the data and provides backup against system failure. Clustered servers provide redundancy at the Network Operations level. And still, the data carrier loses the backbone every now-and-then, like today after the Blizzard of 2010 in the Northeast.

tom.foale

Nice article. Have you ever considered what happens to computer architectures when we have effectively infinite bandwidth, huge amounts of memory, and processing power for pence? The current client/server paradigm is an economic trade-off - it made sense to use central servers when bandwidth, processing power, and storage were expensive. Take away the constraints and data and processing will go where they are cheapest, which for many purposes will be at the edge of the network. Which is just what Skype did to the expensive central telephony switch.

boxfiddler

it'll be further than mainframes.

Tony Hopkinson

Would that be like the 640K, "you'll never use that up" type of infinite? Not that long ago the internet was working quite happily on little modems. Now, in the days of gigabit networks and broadband, pages on this site still take seconds to load...

tom.foale

Unfortunately bandwidth is about round-trip delay, not line speed. A 10Mbps download speed requires a maximum RTD of 64ms - that's uplink and downlink, because the received packets have to be acknowledged. I've provisioned 10Mbps fixed wireless circuits that downloaded and uploaded faster than a 100Mbps fibre circuit to the same site - and it was all due to the quality of the interconnect between ISPs and backbone providers.
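The rule of thumb behind those figures: without window scaling, a TCP sender can have at most one receive window of unacknowledged data in flight per round trip, so a single stream's throughput is capped at roughly window / RTT regardless of line speed. A minimal sketch, assuming the classic un-scaled 64 KB window (the exact numbers in the comment above are approximate):

```python
# Rough single-stream TCP throughput ceiling: window / round-trip time.
# Assumes the classic un-scaled 64 KB receive window; modern stacks with
# window scaling can keep far more data in flight.

WINDOW_BYTES = 65_535  # maximum un-scaled TCP receive window

def throughput_ceiling_mbps(rtt_ms: float, window_bytes: int = WINDOW_BYTES) -> float:
    """Upper bound on one TCP stream's throughput for a given round-trip delay."""
    return (window_bytes * 8) / (rtt_ms / 1000) / 1_000_000

for rtt in (10, 64, 200):
    print(f"{rtt:>3} ms RTT -> about {throughput_ceiling_mbps(rtt):.1f} Mbps per stream")
# 10 ms -> ~52 Mbps; 64 ms -> ~8 Mbps; 200 ms -> ~2.6 Mbps
```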

Tony Hopkinson

You make more resources available and, as a programmer, I'll use them up before you've finished patting yourself on the back....