
Examine the computing evolution's effect on programming

Justin James explores a number of points from Gordon Bell's essay, "Bell's Law for the birth and death of computer classes: A theory of the computer's evolution." He also describes what programmers will need to know if the smartphone class of computers comes to fruition.

Gordon Bell has a long and storied history in the computer field, and he certainly knows what he is talking about regarding the industry. Bell's Law is like a large-scale version of Moore's Law: instead of dealing with processing power, it addresses the long-term jumps between computer "classes" such as the minicomputer or the supercomputer. (He is also quite singular -- for some time, he has been wearing cameras and microphones so he can archive his entire life.)

Today I took the time to read his essay Bell's Law for the birth and death of computer classes: A theory of the computer's evolution. (The essay seems somewhat rough in a couple of paragraphs and graphics -- it is definitely not fully polished yet.) There are a handful of points in it that I feel are extremely crucial for forward-looking programmers to understand, and that touch on topics I have been thinking about a lot lately. I examine each of these points in some detail below.

The evolutionary characteristics of disks, networks, display, and other user interface technologies will not be discussed. However for classes to form and evolve, all technologies need to evolve in scale, size, and performance, (Gray, 2000) though at comparable, but their own rates!

(Bell's Law for the birth and death of computer classes: A theory of the computer's evolution, pp. 2 - 3)

In other words, a sudden leap in CPU, RAM, disk, network, display, or other capability alone will not produce a new class of computers; the technologies need to evolve in tandem with each other. It is equally foolish for developers to focus on just one aspect of the programming side of things when trying to write software for new classes of computers.

Bell believes that smartphones and similar devices are the next class of computers. I am inclined to agree, from the standpoint of consumer-grade computing and a number of business purposes. Programmers targeting these devices are not looking at the big picture; they are treating the devices as slow PCs with small amounts of RAM, little disk space, and slow networks. In reality, they are much more than that. Until programmers targeting these devices recognize this (as well as the reasons why people will be choosing them over PCs), the software for the devices will continue to be fairly lousy.

Nathan's Law, also attributed to Bill Gates, explains software's increasing demand for resources:
  1. Software is a gas. It expands to fill the container it is in.
  2. Software grows until it becomes limited by Moore's Law.
  3. Software growth makes Moore's Law possible through the demand it creates; and
  4. Software is only limited by human ambition and expectation.

(Bell's Law for the birth and death of computer classes: A theory of the computer's evolution, p. 16)

Bell is talking about the fundamental principle underlying "bloatware." We use general-purpose PCs to do a lot of processing; if we shift to smartphones with limited RAM, CPU, and disk resources, we are looking at keeping all of that processing on the server and using the smartphone as an input/output device. But how many desktop applications can legitimately run on an application server at a reasonable cost? It is one thing to slap a $600 PC on 10 desks; it takes a lot more than $6,000 to buy a server that can handle the same load those 10 PCs can handle. Those economies of scale get even worse when the users are doing intensive work, such as multimedia content editing, and smartphones do not have the power to do these things themselves. It's a quandary, and developers are going to need to deal with it. Users and customers will understand that some functionality will be given up along the way, but how much are they willing to sacrifice?
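To make the trade-off concrete, here is a minimal sketch of the thin-client pattern that shift implies: the handset collects input, ships the heavy lifting to a server, and just renders the result. This is my illustration, not anything from Bell's essay; the endpoint URL and the plain-text "command" protocol are hypothetical.

```java
import java.io.*;
import java.net.*;

// Minimal thin-client sketch: the device sends raw input to a server,
// which does the heavy processing; the device only pays for I/O and display.
// The host and protocol here are hypothetical, for illustration only.
public class ThinClient {
    public static void main(String[] args) throws IOException {
        URL endpoint = new URL("http://appserver.example.com/render");
        HttpURLConnection conn = (HttpURLConnection) endpoint.openConnection();
        conn.setDoOutput(true);
        conn.setRequestMethod("POST");

        // Ship the user's input (say, an edit command) upstream.
        try (OutputStream out = conn.getOutputStream()) {
            out.write("rotate image 90".getBytes("UTF-8"));
        }

        // Read back the server-computed result and "display" it.
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), "UTF-8"))) {
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line);
            }
        }
    }
}
```

The economics question in the paragraph above is exactly about this design: every cycle the sketch pushes to the server is a cycle someone has to buy on server-class hardware.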

In 2007, the degree of parallelism for personal computing in current desktop systems such as Linux and Vista is nil either reflecting the impossibility of the task or the inadequacy of our creativity. Several approaches for very large transistor count i.e. 10 billion transistor chips could be:
  1. system with primary memory on a chip for reduced substantially lower priced systems and greater demands that either require or imply proportionally lower cost software
  2. dedicated functions for networking and improve user interface including speech processing for text to speech and spoken commands
  3. graphics processing, currently handled by specialized chips is perhaps the only well-defined application that is clearly able to exploit or absorb unlimited parallelism in a scalable fashion for the most expensive PCs e.g. gaming, graphical design
  4. multi-core and multi-threaded processor evolution for large systems
  5. FPGAs that are programmed using inherently parallel hardware design languages like parallel C or Verilog that could provide universality that we have never before seen, and
  6. inter-connected computers treated as software objects, requiring new application architectures.
Independent of how the chips are programmed, the biggest question is whether the high volume personal computer market can exploit anything other than the first path.

(Bell's Law for the birth and death of computer classes: A theory of the computer's evolution, pp. 22 - 23)

I have echoed many of these sentiments, particularly the one about "the degree of parallelism for personal computing." Number 5 is where some folks are going now, but instead of having actual field-programmable gate arrays (FPGAs), they are using the shaders on GPUs, and the possibilities are rather limited as a result. Number 1 is scary to a lot of people who make a living programming. After all, it's more difficult to turn a profit on lower-priced software. When you combine that with the fact that the trend towards ad-supported software must end if the smartphone era becomes a reality (where will the ads appear on those screens and still leave a usable UI?), I suspect that we will see a resurgence in the paid-for software market, but with a few twists that we have never seen. I won't even try to predict what those changes might be.

I know I probably sound like Chicken Little, but this issue is real. Everyone who knows what they are talking about is talking about this (I'm not elevating myself to their ranks, by the way).

We went to multi-core CPU architecture on the desktop because current manufacturing techniques could not continue increasing clock speeds within the reasonable cooling and power capabilities of a standard desktop PC. The only way to maintain Moore's Law is to keep the cores relatively slow and increase the number of cores. On the mobile front, we are hitting the limits of battery life and cooling. Thus, we should be seeing multi-core architecture on the smartphone platform fairly soon. On a standard PC, software developers can get away with being lazy and not doing parallel computations, because even the cheapest desktop CPU can run a single thread very quickly. That luxury does not exist on the mobile platforms due to the resource constraints, especially if we expect them to pick up more and more of the work currently done on overpowered desktops. Programmers who do not learn parallelism will not be able to write software that meets market needs in the (somewhat) near future.
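As a concrete illustration (mine, not Bell's), here is a minimal Java sketch of the shift that paragraph describes: splitting a computation across however many cores the device reports, instead of assuming one fast core. The workload is a toy array sum; the pattern is what matters.

```java
import java.util.*;
import java.util.concurrent.*;

// Minimal sketch: sum a large array in parallel across all available
// cores rather than relying on a single fast core. Illustrative only.
public class ParallelSum {
    public static void main(String[] args) throws Exception {
        long[] data = new long[10_000_000];
        Arrays.fill(data, 1L);

        int cores = Runtime.getRuntime().availableProcessors();
        ExecutorService pool = Executors.newFixedThreadPool(cores);
        List<Future<Long>> parts = new ArrayList<>();

        int chunk = data.length / cores;
        for (int i = 0; i < cores; i++) {
            final int from = i * chunk;
            // The last worker also picks up any remainder.
            final int to = (i == cores - 1) ? data.length : from + chunk;
            parts.add(pool.submit(() -> {
                long sum = 0;
                for (int j = from; j < to; j++) sum += data[j];
                return sum;
            }));
        }

        long total = 0;
        for (Future<Long> part : parts) total += part.get();
        pool.shutdown();
        System.out.println("Sum: " + total);
    }
}
```

Nothing here is exotic; the point is that on a four-core handset, the serial version leaves three cores idle.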

Since the majority of PC use is for communication and web access, evolving a small form factor device as a single communicator for voice, email, and web access is quite natural. Two things will happen to accelerate the development of the class: people who have never used or are without PCs will use the smaller, simpler devices and avoid the PC's complexity; and existing PC users will adopt them for simplicity, mobility, and functionality e.g. wallet for cash, GPS, single device. We clearly see these small personal devices with annual volumes of several hundred million units becoming the single universal device evolving from the phone, PDA, camera, personal audio/video device, web browser, GPS and map, wallet, personal identification, and surrogate memory.

(Bell's Law for the birth and death of computer classes: A theory of the computer's evolution, p. 23)

He is absolutely right -- most people do not do much with their PCs most of the time, but almost everyone goes beyond communication and Web browsing at least some of the time. I predict that we will end up with a universal docking station type of system, with a single cable connecting to the device. There is no reason why it cannot happen now. Simply plug a USB-type cable into your smartphone at one end and a powered hub at the other end, and you can have video, keyboard, mouse, optical storage, sound, and even wired networking -- all for under $400 per docking station at today's prices. This is compelling, especially if those applications are usable with limited functionality on just the phone itself. For example, a BlackBerry-style messaging system becomes a client equivalent to Outlook when docked. Why sync when the device could replace the computer? This will not be a perfect solution for many workers, just as client-server computing never fully replaced mainframes. In the future, I suspect that PCs will be considered a legacy class of computers.

Finally, Bell's grand slam statement:

New applications will be needed for wireless sensor nets to become a true class versus just unwiring the world.

(Bell's Law for the birth and death of computer classes: A theory of the computer's evolution, p. 24)

He has summarized the entire problem with the current state of affairs in the mobile computing mindset. Programmers are not looking at a new class of computing -- they just want to do the same tired tasks but without wires.

Folks, no one is happy with the way they compute currently. That is why people are fleeing like rats from a sinking ship to the Web, and they are quickly discovering that the Web is a burning house, not a safe haven. The Web as an application platform is not a different computing paradigm; it is a gussied-up version of "green screen" systems, a paradigm that proved inadequate in the late 1970s.

If the smartphone class of computers comes to fruition, programmers will need to know how to do the following:

  • Distributed clients, single-server computing
  • Offline operation with no interruption and minimal reduction in functionality, with instant, transparent syncing and concurrency resolution, while keeping "data at rest" on remote devices and "data in transit" to and from the device safe (see the sketch after this list)
  • Efficient multi-core, low RAM computing
  • Elastic/liquid UIs that adapt on the fly to changes in display and input technologies; in other words, UIs that can transition from a 240 x 320 display with a thumb keyboard to 1650 x 1250 with a full keyboard and mouse over a single cable connection, with no warning (and back again!), scaling capability up and down appropriately
  • Monetize software without advertisements
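On the offline-operation point, here is a minimal sketch of what transparent syncing might look like. The local change queue and the last-write-wins conflict rule are my illustrative assumptions, not a prescription from Bell or from this article, and a production system would also need the protection for data at rest and in transit mentioned above.

```java
import java.util.*;

// Minimal offline-sync sketch: edits made while disconnected are queued
// locally, then replayed against the server state when connectivity
// returns. Conflicts are resolved by last-write-wins on a timestamp;
// real applications need something smarter. All names are hypothetical.
public class OfflineStore {
    static class Change {
        final String key, value;
        final long timestamp;
        Change(String key, String value, long timestamp) {
            this.key = key; this.value = value; this.timestamp = timestamp;
        }
    }

    private final Map<String, Change> local = new HashMap<>();
    private final Deque<Change> pending = new ArrayDeque<>();

    // Writes always land locally first, so the UI never blocks on the network.
    public void put(String key, String value) {
        Change c = new Change(key, value, System.currentTimeMillis());
        local.put(key, c);
        pending.add(c);
    }

    // Called when connectivity returns: replay queued changes, resolving
    // conflicts against whatever the server now holds.
    public void sync(Map<String, Change> server) {
        while (!pending.isEmpty()) {
            Change mine = pending.poll();
            Change theirs = server.get(mine.key);
            if (theirs == null || theirs.timestamp <= mine.timestamp) {
                server.put(mine.key, mine);   // our edit wins
            } else {
                local.put(mine.key, theirs);  // the server's edit wins locally
            }
        }
    }

    // Tiny demo: an offline edit, then a sync against an empty "server."
    public static void main(String[] args) {
        OfflineStore store = new OfflineStore();
        store.put("note1", "edited on the train");
        Map<String, Change> server = new HashMap<>();
        store.sync(server);
        System.out.println("Server now has: " + server.get("note1").value);
    }
}
```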

It is going to be a very different world if Bell turns out to be correct, and I feel that he probably is. In many Asian countries, the cell phone is already the computing platform of choice. In developing countries, cell phones trump PCs because they are simpler, more reliable, and require less infrastructure.

The use of the PC as a primary work tool forces users to act in completely unnatural ways, like being tied down to a desk with minimal face-to-face interaction with peers and colleagues. For all of the supposed advances in communications that the PC has given us, most people communicate worse with PCs than in person due to the one-dimensional nature of the PC; most people use them as text editors with a variety of data storage and transmission options. The smartphone class makes video, still images, and sound infinitely easier to create and consume. I am fairly certain that this is where the market is headed in the future, so now is the time to prepare.

J.Ja

About

Justin James is the Lead Architect for Conigent.

18 comments
Joules Ampere

Predictions like these must be made in the most careful and tentative way. It is quite easy to use one's feelings and ideas to judge how things will be in the future, while in the meantime human habits and traits, along with the physics of silicon, take center stage and may hold us in a rut indefinitely. It is one thing to be an overly enthusiastic geek; practicality is another. Now, I am not saying you are such a person. I too believe the mobile and wireless technologies are going to change things significantly, but be careful: there are some strange and not-so-strange variables in the world affecting the way we humans operate and progress. There are constraints; books and cars have the same problems, and no more evidence is necessary. Do not read too much into the sale of mobile phones in developing countries. Many are dumb little phones that are purchased just to stay in touch; when people want to do real computing, they turn to Internet cafes or visit a relative. Many still cannot buy smartphones, which very often carry desktop PC prices. The people who buy smartphones usually have a desktop PC.

mikifin

Rather than write good, compact code, more resources allow bad programmers to write bad, bloated code.

sboverie

I started with mainframes in the late 70's, moved to minicomputers in the 80's, and on to PCs, networking, and the Internet in the 90's. Mainframes were huge, but expensive enough that only large companies could have them. Minicomputer systems brought the cost down so that smaller companies could afford them. PCs and networking made jumping from one function to another easy. I had not thought of smartphones as the next step in computer evolution. I do have a feeling that the current state of the art for PCs (including Mac and Linux machines) is a dead end. I hope that the different capabilities of smartphones and similar devices will force better standards of code that work within the limitations of mobile devices. Perhaps the new devices will set a better standard for the next generation of computers that do more than sit on a desk. I also hope that the next generation of computers is not designed to be so trusting that it automatically downloads crapware that promises to be helpful but kills performance. I am looking forward to dealing with a computer system that is as intelligent as a dog; right now, my best computer has less intelligence than an earthworm.

Tony Hopkinson

I think people will want more and more out of mobile devices. Us old chaps remember how to squeeze more out of less, but there was always a limit; it was simply beyond consumer expectation. This is no longer true. I want my mobile device to have all the functionality I require, and I require much more than simply email, voice, and SMS web alerts. The best thing current tech can offer me at the moment is a portable hard drive with a status display and a USB port. Or I could use my portable.... All this waffle is yet another attempt to create a market; 3G revisited....

Tony Hopkinson

Us boneheads cost more than machine time. The scary thing is that this has been going on for so long, a lot of those programming today would be completely knackered if you took their cookie cutters off them. I'd have to start dredging deep in the memory banks myself :p

rclark

Why should mobiles be any different than any other device type that has come out in the last 40 years? My personal vision is a computer whose parts you wear, but which, combined, make a gestalt machine that rivals current minicomputers. Perhaps power generators in the soles of your shoes. Disk drives and I/O (smartphone) on your belt. DVD reader in your shirt pocket. Front and back panels to pull in solar power. A propeller on your cap to catch wind power. Wii-activated gloves. Video in your glasses, with audio in the earpieces. Most of the devices would not even have to be connected by wires. Heat could be recaptured as power or, in cold climates, as a body warmer. Since the parts are physically separate, heat generation/dissipation wouldn't be as large a problem anyway. The video glasses instead of a 19" flat screen, plus a laser-generated optical keyboard, are possible today. Economy of scale is all that really stops them from being mainstream. At that point, your upgrade path is very simple and much cheaper. Need more disk? Take your old one in and swap it for a new one. They'll even move your data from the old one to the new one. Need more horsepower? Unclip your CPU, swap it, and boot with your new xxGHz CPU. We've come a long way from the days when BatBelts were uncool. Geek is cool now.

Justin James

You make a great point about limits and expectations. From what I've heard from a lot of people in your generation, a lot of programming was working around hardware limitations, and because the user pool was so small, there were limited expectations about what could be done. I know that with today's users, particularly on consumer-grade stuff, the expectation has been set that anything is possible, which of course is impossible to meet. J.Ja

Tony Hopkinson

but fairly correct. We've been spoilt rather badly. Something like available RAM (and its speed) has rocketed since I started. I never managed to use up a whole 640K when it was a megabyte. :D After that, things just got silly :p Maybe I should have put some more bloat in my code, but that's a habit I never developed. I really want to see some of these puddings who use invisible listbox controls to store aggregates in have a go at mobile devices.

Jaqui

The client wanted to get away from Windows/IIS to Apache, but didn't want to have to rewrite the web app. asp2php doesn't actually convert a functional ASP site to a functional PHP site, so I recommended they either stay with what they have or redo the site entirely.

Tony Hopkinson

Run ASP on your Linux box, huh. ASP, classic or .NET, is far too tied in to IIS for it to work well away from it. PHP is less proprietary, but I wouldn't do a fresh start for PHP under IIS either. It's alright talking about cross-platform, but only a muppet would use a mix of Apache and IIS to provide the same functionality. More trouble than it's worth; pick one and be done with it.

Jaqui

Nope, it isn't as complete as it needs to be for it to be used in a corporate environment. Perl or PHP or Ruby have more robust functionality and less resource consumption than implementing a Mono/.NET solution. I looked at using Mono to run an ASP-scripted website, not ASP.NET, just ASP, and it couldn't handle it. I had to tell the client that the support for ASP isn't there on a Linux-based server. asp2php is also seriously lacking in functionality.

Tony Hopkinson

I can't think of a single business case for using it, though.

Jaqui

Since it is one Perl interpreter for every OS other than Windows, instead of multiple interpreters for every OS. The ActiveState Perl interpreter on Windows is 100% compatible with the "official" Perl interpreter.

Justin James

Jaqui - Your point is quite valid. My comment about "Microsoft's bloat is my leanness" only applies to a server that is already going to be running .Net (i.e., a Windows server), and not a server that is capable of running .Net but is not doing so by default (i.e., a Linux machine with Mono). That being said, much of the usefulness in the .Net Framework is either not implemented on Mono or is Windows specific. Rick Brewster of the Paint.Net project had some great stuff to say about porting that project to Mono. In a nutshell, going from the .Net CLR proper to Mono is indeed a "port," and not one of the "recompile and test" variety, either. It is sad, to be honest. I think that if Microsoft were really serious about capturing developer mindshare, they would work very hard to get .Net working on non-Microsoft platforms, just like Sun worked hard to get the JVM working on a huge number of platforms and has done a fairly good job at reducing cross-platform problems. J.Ja

Jaqui

If you don't have an MS server, then you have to add that bloat to a Linux/BSD system, making the bloat a part of your app. The actual app code may be lean, but if you have to implement the framework on a non-MS platform, the entire framework's bloat really needs to be considered part of the app.

Tony Hopkinson

so you aren't adding to the bloat, well, given you can code lean yourself. To be quite honest, choosing another solution on a Windows box that is already running .Net code (most are) is, well, wasteful. :D It's good enough to only load the bits you want, though this could be more than you need. That, however, has always been true of OOP-based environments; it's dead easy to end up with a swathe of code loaded in order to use a small percentage of it.

Jaqui

Actually, if you have to implement the .NET Framework on a server, that bloat is yours. Even a LAMP stack can support the framework, and would have the bloat on it [which kills the performance, so it is an excellent reason not to use .NET].

Justin James

One thing I really like about .Net is that the Framework itself is so ridiculously bloated. As a result, it is pretty easy to keep a .Net program in and of itself quite trim. Someone else (if not the OS itself) is going to have the CLR and the Framework loaded into memory and sucking up the bytes, so you might as well make use of it. Or to rephrase it: Microsoft's bloat is my leanness. :) J.Ja