Seeking a programming middle ground

Justin James feels as though the programmers who do the more logic-intensive work that he's attracted to are either uninterested in or unable to turn it into a useful, usable application. So where does he go from here?

The coincidental events in my life keep pushing me to think that the programming paradigm needs an update. Over the course of the last week, I stumbled upon confirmation that my suspicions regarding hardware are correct.

It crossed my mind that current hardware models are lousy at expressing parallel code. It turns out that programmers have been using Graphics Processing Units (GPUs), which are typically found on video cards, to write parallel code for non-graphics purposes for quite some time. Also, many of the CPU instruction sets aimed at gaming, such as MMX, SSE, and 3DNow!, are designed for parallelism.

(I am sure that at least a few of you are chuckling because you've been wondering how long it would take me to notice this. Thanks for not cluing me in because I really enjoyed this learning process.)

Unfortunately, GPUs are optimized for working with graphics, which means performing mathematical calculations and not much else. The GPU is designed to perform the exact same calculation on a lot of numbers in tandem but not cooperatively; in other words, the operations do not interact with each other at all. Another problem with this approach is that the languages used are much lower-level than what most programmers work in. Most programmers work in Java and .NET for pretty good reasons, and (as far as I know) neither of these systems allows working directly at the hardware level (at least not without some fancy footwork). Both run on their own VMs, which abstract the underlying hardware away completely.
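To make that "in tandem but not cooperatively" distinction concrete, here is a minimal sketch in plain Java (my own illustration, not any particular GPU API; the class and method names are hypothetical). The same "kernel" calculation runs independently over every element of an array, split across threads, with no element's result depending on any other:

public class DataParallelSketch {
    // The "kernel": one independent calculation per element.
    static float kernel(float x) {
        return x * x + 1.0f;
    }

    public static void main(String[] args) throws InterruptedException {
        final float[] data = new float[1_000_000];
        for (int i = 0; i < data.length; i++) data[i] = i;

        int threadCount = Runtime.getRuntime().availableProcessors();
        Thread[] workers = new Thread[threadCount];
        int chunk = data.length / threadCount;

        for (int t = 0; t < threadCount; t++) {
            final int start = t * chunk;
            final int end = (t == threadCount - 1) ? data.length : start + chunk;
            workers[t] = new Thread(() -> {
                // Each worker runs the same kernel over its own slice;
                // the slices never communicate, mirroring the GPU's
                // "in tandem but not cooperatively" model.
                for (int i = start; i < end; i++) {
                    data[i] = kernel(data[i]);
                }
            });
            workers[t].start();
        }
        for (Thread w : workers) w.join();

        System.out.println("data[10] = " + data[10]); // prints 101.0
    }
}

The restriction is also what makes the model work: because no iteration reads another iteration's output, the slices can run in any order on any number of processors, which is exactly why this style parallelizes so easily and why it fits cooperative logic so poorly.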

Where are we going with this? Well, I like the concept: it suggests that arrays of processing pipelines can make parallel computing relatively easy.

Let's pretend that we have a language that emulates such an environment on a standard, general-purpose CPU like the x64 architecture. This probably wouldn't get us much. Outside of multimedia content creation (the days of upgrading your PC to have enough horsepower to decode an MPEG are long past), the vast majority of applications simply are not CPU bound; they are often not I/O bound either. After all, they are not reading and writing huge amounts of data at once; at most, they stream data from a source at relatively low speeds. I will go so far as to say that the only real performance bottleneck for the majority of users out there is physical RAM: hitting the swap file is the big slowdown.

Let's imagine another example where the command line is still king, but everything else is the same as our current environment. We use Lynx or something similar for Web access; vi, Emacs, or WordPerfect 5.1 to handle text input; and Pine (with all of Outlook's functionality) as the e-mail client of choice. These apps would barely touch the CPU, because the fancy graphics are what put the hurt on our PCs, not what the applications themselves are doing (except in very rare cases).

Even on the server side, the story is much the same. The CPU gets creamed mainly because the OS has one or two physical cores timeslicing across one thread per request. It is the cost of context switching that hurts; this is the server scalability problem. No individual request really takes more than a small percentage of CPU time, but once you have more than a few simultaneous requests, the overhead of juggling them all artificially inflates the CPU load.
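Here is a minimal sketch of the thread-per-request pattern I am describing (again in Java; the class name, port, and response are my own hypothetical choices). Every accepted connection gets its own thread, so a thousand concurrent requests mean a thousand threads for the OS to timeslice across one or two cores:

import java.io.IOException;
import java.io.OutputStream;
import java.net.ServerSocket;
import java.net.Socket;

public class ThreadPerRequestSketch {
    public static void main(String[] args) throws IOException {
        ServerSocket listener = new ServerSocket(8080);
        while (true) {
            final Socket client = listener.accept();
            // One thread per request: each connection costs the OS a thread,
            // and every additional thread adds context-switch overhead.
            new Thread(() -> handle(client)).start();
            // Bounded alternative: size a pool to the hardware instead, e.g.
            //   java.util.concurrent.Executors.newFixedThreadPool(
            //       Runtime.getRuntime().availableProcessors())
            // and submit(() -> handle(client)), so the core count rather than
            // the request count determines how many threads compete.
        }
    }

    static void handle(Socket client) {
        try (Socket c = client; OutputStream out = c.getOutputStream()) {
            // A trivial canned response; the real per-request work goes here.
            out.write("HTTP/1.0 200 OK\r\n\r\nhello\r\n".getBytes());
        } catch (IOException e) {
            // Ignore per-connection errors in this sketch.
        }
    }
}

The pooled variant in the comments is the usual mitigation: it does not make any single request faster, but it stops the scheduler overhead from growing with the request count.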

I feel very alone right now in the programming world. I feel like the programmers who do the more logic-intensive stuff that I am attracted to are either uninterested in or unable to turn it into a useful, usable application. The programmers trying to build applications have this bland "me too!" style of processing under the hood. The technologies that work great for logic do not seem capable of being used in a system that real users can use, and the systems that work great for real users cannot seem to handle tricky logic. Where is the middle ground? I am trying to get a little bit closer to the envelope, and I am really stuck between the hammer and the anvil.

So where do I go from here?



Justin James is the Lead Architect for Conigent.
