For some reason, a huge number of people on ZDNet, both in TalkBacks and in articles (Mr. Berlind, are you reading?) confuse a central file server with “thin computing”.

Mr. Berlind’s example situation (a really bad one, at that) is simply using a web browser as a thick client application to access a central file server. Especially in the AJAX world, clients get thicker, not thinner! For example, I have been doing some writing on this website right here. Their blogging software put such a heavy demand on my system at home that it was taking up to thirty seconds for text to appear on the screen. Each keystroke turned my cursor into the “arrow + hourglass” cursor. Granted, my computer at home is no screamer (Athlon 1900+, 256 MB RAM), but that is what AJAX does: it requires an extraordinarily thick client to run. If I compare that AJAXed system to, say, using MS Word (a “thick client” piece of software) or SSHing to a BSD box and running vi (a true “thin client” situation), the AJAX system comes out dead last in terms of CPU usage.

And after my system does all of the processing for formatting and whatnot, what happens? It sends the data (via an HTTP POST) to a central server, which performs a bit of error checking and then makes a few entries in a database table somewhere. If I compare the CPU my system burns running the AJAX interface to the CPU the server spends filing my entry, it sure doesn’t look like a “thin client/thick server” situation to me! It looks a heck of a lot closer to a “thick client/dumb file server” story.
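To make the point concrete, here is a minimal sketch of roughly what the server’s half of that transaction looks like. This is my own illustration, not their actual code, and the table and field names are made up: accept the POST, do a bit of error checking, insert a row. That is the entire “thick server” workload.

```python
# Rough sketch of the server side of posting a blog entry: accept the HTTP
# POST, check the fields, make an entry in a database table. The database,
# table, and field names here are invented for illustration.
import sqlite3
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import parse_qs

class BlogEntryHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        fields = parse_qs(self.rfile.read(length).decode("utf-8"))
        title = fields.get("title", [""])[0]
        body = fields.get("body", [""])[0]
        if not title or not body:                     # "a bit of error checking"
            self.send_error(400, "Missing title or body")
            return
        conn = sqlite3.connect("blog.db")
        conn.execute("CREATE TABLE IF NOT EXISTS entries (title TEXT, body TEXT)")
        conn.execute("INSERT INTO entries (title, body) VALUES (?, ?)", (title, body))
        conn.commit()
        conn.close()
        self.send_response(201)                       # done; the client did the hard part
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("", 8000), BlogEntryHandler).serve_forever()
```

Meanwhile, the browser on my end is re-rendering the editor, running the formatting logic, and firing off background requests on every keystroke. Guess which side is sweating.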

Some of the comments to this story are already making this mistake. “Oh, I like the idea of having all of my data on a central server.” So do I. This is how everyone except small businesses and home users has been doing it for decades. The fact that most business people stick all of their documents onto their local hard drives is a failure of the IT departments to do their jobs properly, for which they should be fired. Why are people in marketing sticking their data on the local drive instead of on the server? Anyone who saves data locally and loses it should be fired too, because if their IT department did its job properly, this would not be possible without some hacking going on.

The correct setup (at least in a Windows network, which is what most people are using, even if it’s *Nix with Samba on the backend) should be that there is a policy in place which points “My Documents” at a network drive. That network drive gets backed up every night, blah blah blah. The only things that go onto the local drive should be the operating system, software that needs to be stored locally (or is too large to carry over the network in a timely fashion), and local settings. That’s it. And if someone’s PC blows up, you can just flash a new image onto the drive and they’re up and running again in a few minutes.
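In practice you would push this with a Group Policy folder-redirection setting, but just to show how little magic is involved, here is a sketch of the same idea as a script. I am assuming a Windows client where the user’s home share is already mapped to H:; the drive letter and path are my own placeholders.

```python
# Minimal sketch: point the "My Documents" shell folder at a network drive.
# Assumes H: is already mapped to the user's home share; real deployments
# would do this via Group Policy folder redirection rather than a script.
import winreg

SHELL_FOLDERS = r"Software\Microsoft\Windows\CurrentVersion\Explorer\User Shell Folders"

with winreg.OpenKey(winreg.HKEY_CURRENT_USER, SHELL_FOLDERS, 0, winreg.KEY_SET_VALUE) as key:
    # "Personal" is the registry value name behind the My Documents folder.
    winreg.SetValueEx(key, "Personal", 0, winreg.REG_EXPAND_SZ, r"H:\My Documents")
```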

Now, setting aside standard IT best practices, what is “thin client computing”? We’re already storing our data on a dumb central server but doing all of the processing locally. Is AJAX “thin computing”? Not really, since the client needs to be a lot thicker than the server. Is installing Office on a central server but running it locally “thin computing”? Not at all. Yet people (Mr. Berlind, for starters) seem to think that storing a Java JAR file or two on a central server, downloading it “on demand”, and running it locally is thin computing.

Thin computing does not occur until the vast majority of the processing happens on the central server. That is it. Even browsing the web is not “thin computing”. Note that a dinky little Linux or BSD server can dole out hundreds or thousands of requests per minute. I challenge you to have your PC render hundreds or thousands of web pages per minute. Indeed, even a Windows 2003 server can process a hundred complex ASP.Net requests in the amount of time it takes one of its clients to render just one of those requests on the screen. I don’t call that “thin computing”.

Citrix is “thin computing”. Windows Terminal Services/Remote Desktop is “thin computing”. Wyse green screens are “thin computing”. X terminals (if the “client” {remember, X Windows has “client” and “server” backwards} is not local) are “thin computing”. Note what each one of these systems has in common: the display is rendered remotely, then transferred bit-by-bit to the client, which merely replicates what the server rendered. All the client does is pass the user’s input directly to the server, which then sends the results of that input back to the client, which draws the feedback as a bitmap or text. None of these systems requires any branching logic or computing or number crunching or whatever from the client. That is thin computing.
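Strip away the specific products and the client’s entire job looks something like this toy sketch. It is my own illustration, with a made-up protocol on port 5900 and placeholder input/paint routines: read input, ship it to the server, paint whatever rendered screen data comes back.

```python
# Conceptual sketch of a thin client's whole job: forward input, paint output.
# The port number and wire format are invented; the stubs stand in for real
# keyboard capture and screen drawing.
import socket

def read_keyboard() -> bytes:
    # Placeholder for grabbing raw local keystrokes/mouse events.
    return input("key> ").encode("utf-8")

def blit_to_screen(pixels: bytes) -> None:
    # Placeholder for painting a server-rendered bitmap; here we just report it.
    print(f"received {len(pixels)} bytes of rendered screen")

def thin_client(host: str) -> None:
    with socket.create_connection((host, 5900)) as conn:
        while True:
            conn.sendall(read_keyboard())      # send the user's input, untouched
            blit_to_screen(conn.recv(65536))   # draw whatever the server rendered
```

Notice there is no application logic anywhere in that loop. That is the whole point.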

Stop confusing a client/server network with “thin computing”. Just stop. Too many articles and comments that I have seen do this. They talk about the wonders of thin computing, like being able to have all of the data in a central repository to be backed up, or having all of my user settings in a central repository that follows me wherever I go, or whatever. I really don’t see anyone describing anything as “thin computing” that doesn’t already exist in the current client/server model. The only thing that seems to be changing is that people are leaving protocols like NFS and SMB for protocols like HTTP. It’s a network protocol, folks. Wake up. It’s pretty irrelevant how the data gets transferred over the wire, or what metadata is attached, or whatever. It does not matter. All that matters is what portion of the processing occurs on the client versus the server, and in all of these situations people are listing, the client is still doing the heavy lifting.