Hybrid computing, not cloud computing, is the future of technology

TechRepublic member dcolbert believes that cloud computing will complement -- rather than replace -- traditional local computing. Do you agree? Is hybrid computing the future of technology?

One of the few remaining print technology magazines with a viable market recently did a series on turning failure into success. Among these articles was a piece on Larry Ellison's vision of thin-client network computing.

In 1995, there was a sudden uptick in interest among executive management across the world in secure thin-client computing devices. The concept was that a single piece of "big iron" in the background would house all data and applications, and that small, lightweight, and inexpensive thin-client network devices would sit on user desktops and access the server-side data on the back end.

Oracle's Larry Ellison was one of the driving forces evangelizing this paradigm shift in how we approached the end-user computing experience. Supposed benefits included lower TCO due to reduced administration, plus less expensive equipment, longer life cycles, and increased security.

In reality, these machines turned out to be stripped-down PCs with very limited processors, no local internal storage, and often no optical or magnetic drives, not even the floppy disk drives that were still viable at the time. Otherwise, they hooked up to the same industry-standard 17" CRTs, PC 101-layout keyboards, and two- or three-button mice.

The improved security was always a dubious claim. The lack of any kind of disk input/output (we were still some years away from inexpensive flash media thumb drives) was supposed to prevent malicious employees from transferring data onto removable media in corporate espionage schemes. This limited the portability of data for other, legitimate reasons as well — but if the model was a centralized server with thin clients hooked into it, why would you want to have your data portable outside of your network anyhow, right?

At the time, I was working for MCI VANSIS/SGUS (Value Added Networking System Integration Services/State Government University Systems). A high-level executive asked me to perform an analysis on thin-client computing. I'm not sure that what I came up with was exactly what he had in mind, but I do know that MCI never instituted a large-scale thin-client computing initiative in that group.

Within my report, I stated that ultimately, these systems were closed-architecture machines with a very limited life span. I didn't yet know Moore's Law by name at that point in my career, but I had worked with PCs and related technology long enough to realize that things change quite a bit every couple of years.

I claimed that users want to be able to store their data locally, copy it, take it to other machines, and even work on it from home. I felt that the lack of a local hard drive was a serious negative, and although I had not yet begun my work as an early adopter and expert on high-availability solutions, I realized intuitively that the centralized server model introduced a single point of failure.

With the traditional model of PC computing, even today, if you have a copy of your data on removable media and your PC or your connection to the network or back-end server goes down, you can find another machine and keep going. With the thin-client model that was proposed in 1995, if any of these components failed, you weren't doing any work until the issue was resolved.

I presented all of these opinions to this executive, but I never heard back from him. I often wonder if he watched network computing devices arrive and then fail for most of the reasons I outlined in my presentation.

Some of the early adopters struggled with these proprietary network devices, which required special keyboards, mice, monitors, and non-standard power supplies — devices that locked them into small vendors who charged far more for these components than the same commodity PC equivalent.

Other shops ended up replacing these devices with full PCs a couple of years later. After all, PCs had quadruple the processing power, huge (for the time) hard drives, and the ability to inexpensively write optical media, all at the same price they had paid for the dead-end, dated, non-upgradable network computing devices.

Thin-client network computing devices (which, 20 years earlier, were called "dumb-terminal/mainframe" computing devices) quietly died a second death, for the same reasons they had been replaced by the IBM-compatible PC in business applications during the PC revolution of the '80s. I thought to myself, "Well, I nailed it, and I'll certainly never find myself concerned with that model of computing again."

But then something happened. The Internet became the single most important driver of personal computing. Everyone ended up on their PCs, hooked to the Internet, with broadband connections, on machines so incredibly powerful that companies like Sun gave up on what had always been considered "powerhouse, industrial, RISC computing platforms."

Coincidentally, people started suggesting that "cloud computing" was the new wave of how people would use their PCs. Over the last few years, that quiet buzz has turned into a crescendo of incessant chatter about how the future of computing is "in the cloud." Again, Larry Ellison is one of the most vocal proponents of this model, but it still has many of the same inherent risks as before.

The difference this time is that low-cost "disposable" machines are not the key selling point; instead, it's the convenience of centralized computing. And while apps and data are stored on a centralized machine, it isn't quite "thin-client" computing we're talking about. That's an important distinction that people seem to miss when they claim that the arrival of the cloud is the model that Larry Ellison and other network computing device advocates proposed in the mid-90s.

Thin-client computing runs an application and its data on a central server. All of the heavy lifting occurs on a machine across the network. The network computing device, the thin client, only handles screen refreshes and input/output. Citrix and Windows Terminal Server are two common examples of this paradigm of computing.
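To make the division of labor concrete, here is a minimal sketch of that model in TypeScript. It assumes a hypothetical WebSocket-based remote-display service (the endpoint and message format are illustrative inventions, not Citrix's or Terminal Server's actual protocols): the client relays raw input to the server and paints the frames the server renders. No application logic runs locally.

```typescript
// A minimal sketch (not a real protocol) of the thin-client split:
// every keystroke goes to the server, and the client only paints the
// bitmap frames the server sends back. The endpoint is hypothetical.

const canvas = document.createElement("canvas");
document.body.appendChild(canvas);
const ctx = canvas.getContext("2d")!;

// Assumed remote-display endpoint, for illustration only.
const session = new WebSocket("wss://bigiron.example.com/display");
session.binaryType = "arraybuffer";

// Relay input upstream; no application logic executes on this machine.
window.addEventListener("keydown", (e) => {
  session.send(JSON.stringify({ kind: "key", key: e.key }));
});

// Paint whatever the server renders; the client never touches app data.
session.onmessage = async (msg) => {
  const frame = await createImageBitmap(new Blob([msg.data]));
  ctx.drawImage(frame, 0, 0);
};
```

If the server or the network link fails, this client has nothing to show and nothing to compute, which is exactly the single point of failure described above.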

In contrast, when you load a web app, it loads and executes in a native engine, the browser's, on your PC. A slower PC will perform worse than a faster machine. The same basic principles of local computing apply with cloud computing, but you're still dependent on that remote back-end server: if it's down or unreachable, you can't get to your applications or your data.
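The sketch below illustrates that dependency, again with assumed names: the API URL and document IDs are hypothetical. The code itself runs locally in the browser's engine, but the data it needs lives behind a remote server, so a network or back-end outage stops work even though the client is perfectly healthy.

```typescript
// A minimal sketch of the cloud-app dependency described above. The
// JavaScript executes locally, but the document lives behind a remote
// API; the URL is an assumption for illustration only.

async function openDocument(id: string): Promise<string> {
  try {
    const res = await fetch(`https://api.example.com/docs/${id}`);
    if (!res.ok) throw new Error(`server returned ${res.status}`);
    return await res.text(); // your local CPU renders what the server owns
  } catch (err) {
    // The client works fine, but without the back end there is no work:
    // the 1995 single point of failure, reached over HTTPS this time.
    throw new Error(`document unreachable: ${String(err)}`);
  }
}
```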

Many cloud solutions, such as Google Docs, promise to deliver "local, offline" access to both apps and data. Again, this is not the same thing as thin-client computing. In fact, it is just a web delivery of the same model of local computing that has been popular since the PC revolution of the '80s. A copy of the app and a copy of the data sit on your local machine. Instead of using the web to execute the app and load the data, you access it locally, but through your browser (an added layer of complexity) instead of your desktop.
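One common way browsers implement this pattern today is a service worker that keeps a local copy of the app and serves it when the network is down. The sketch below is a generic illustration of that idea, not Google Docs' actual implementation; the cache name and file list are assumptions.

```typescript
// A minimal service-worker sketch of the "local, offline" pattern: a
// copy of the app shell is cached on the local machine, so the browser
// keeps working without the network. Names are illustrative only.
// (Runs in a service-worker context; event types loosened for brevity.)

const CACHE = "offline-docs-v1";
const APP_SHELL = ["/", "/app.js", "/editor.css"];

self.addEventListener("install", (event: any) => {
  // Store a local copy of the application itself.
  event.waitUntil(caches.open(CACHE).then((c) => c.addAll(APP_SHELL)));
});

self.addEventListener("fetch", (event: any) => {
  // Serve from the local copy first; fall back to the network.
  event.respondWith(
    caches.match(event.request).then((hit) => hit ?? fetch(event.request))
  );
});
```

Note how close this is to the traditional PC model: the app and the data are on your machine, and the server is only consulted when it happens to be reachable.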

You might find it ironic, but I'm writing this with Google Docs, which means that I actually use the cloud-based model that I seem to be writing against in this piece. But do not misunderstand me. I'm not saying that the cloud model will fail. Cloud computing will complement — rather than replace — traditional local computing. In my honest opinion, a hybrid approach to computing is the future of technology.


Donovan Colbert has over 16 years of experience in the IT industry. He's worked in help desk, enterprise software support, systems administration and engineering, and IT management, and he is a regular contributor for TechRepublic.