By Larry Seltzer

Remember the early ’90s, when we were transitioning from 16-bit to 32-bit operating systems? Some people were unimpressed, but I think most of us could see that 32-bit systems were going to solve an awful lot of problems. The market was quickly dominated by 32-bit processors, even when they ran 16-bit operating systems propped up by 32-bit hacks, such as Windows for Workgroups 3.11’s VFAT file access and memory managers like QEMM.

Rearchitecting the operating system and applications was a major endeavor, but 32-bit operating systems quickly displaced their inferior ancestors. Gone were segmented programming, the extended-vs.-expanded-memory mess, and any practical limit on physical memory. Certain programming problems went away too, such as the contortions needed to work around 16-bit integers. All in all, it was a big improvement.

And now it’s time for 64-bit
Why isn’t there even a trace of the same effect for 64-bit computing? 64-bit processors have been available on certain RISC architectures for many years, and many of them even have 64-bit-savvy operating systems. But neither Intel nor Microsoft seems to be in a major hurry to move the world to 64-bit software. In fact, unless Windows for 64-bit systems turns out to be completely compatible with Win32 code, I have a hard time seeing a large-scale migration to it. The benefits aren’t compelling enough, and there’s far too much 32-bit code out there right now.

The benefits of 64-bit architecture relative to 32-bit aren’t as obvious as those of 32-bit relative to 16-bit. The first consideration that gets mentioned is the limit on physical memory. A 32-bit pointer can address 4 GB of memory; a 64-bit pointer can address 16 exabytes, which might as well be 16 gazillion foofoobytes. It’s going to be a long time before that limit shows up in real systems. (Personally, I think limitations in semiconductor manufacturing processes will slow things down before it does.) If you want a picture of how large it is, first imagine the maximum 4,294,967,296 bytes in a 32-bit address space. Now imagine 4,294,967,296 of those 32-bit address spaces; that’s one 64-bit address space.
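Here’s that arithmetic as a trivial C sketch; note that 2^64 itself won’t even fit in a 64-bit integer, which is rather the point:

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        /* One 32-bit address space: 2^32 bytes. */
        uint64_t space32 = (uint64_t)1 << 32;

        printf("32-bit space: %llu bytes (4 GB)\n",
               (unsigned long long)space32);

        /* 2^64 overflows even a 64-bit counter by one, so express the
           64-bit space as 2^32 copies of the 32-bit space. */
        printf("64-bit space: %llu x 4 GB = 16 exabytes\n",
               (unsigned long long)space32);
        return 0;
    }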

Intel long ago found temporary hardware hacks to put off the 4-GB limit: the PAE and, later, PSE-36 modes allow 36-bit physical addressing and therefore support for up to 64 GB of RAM. And the highest-end server versions of Windows have supported the extended addressing since Windows NT 4.0 Enterprise Edition. Even today, 64 GB is a lot of memory for all but the largest server clusters. It will become an issue before too long, but on the average desktop I think even 32-bit addresses will be good enough for at least several more years, as long as we continue to use memory the way we do now.
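The arithmetic behind that 64-GB figure, for the record:

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        /* 36 physical address lines reach 2^36 bytes. */
        uint64_t pae_limit = (uint64_t)1 << 36;
        printf("%llu bytes = 64 GB\n", (unsigned long long)pae_limit);
        return 0;
    }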

Finding uses for 64-bit processors
The trick to making 64-bit processors desirable is to design new types of applications that take advantage of them. Consider that obscenely large address space I described earlier: What if we were to design memory-mapped file systems? Imagine that you didn’t have to open, read, write, and close files, but instead manipulated them directly through data structures, with the operating system paging the data into and out of memory as needed. A 64-bit address space is large enough to map any file system we’re likely to build, and it would make programming a lot easier.
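Today’s POSIX mmap() call already offers a per-file taste of this style; a 64-bit address space would simply let the same idea scale up to whole file systems. A minimal sketch along those lines, assuming a preexisting file named data.bin:

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("data.bin", O_RDWR);   /* hypothetical existing file */
        if (fd < 0) { perror("open"); return 1; }

        struct stat st;
        if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }

        /* Map the file; from here on it behaves like an ordinary array,
           and the OS pages data in and out as needed. */
        char *bytes = mmap(NULL, st.st_size, PROT_READ | PROT_WRITE,
                           MAP_SHARED, fd, 0);
        if (bytes == MAP_FAILED) { perror("mmap"); return 1; }

        bytes[0] = 'X';                      /* "writes" to the file */

        munmap(bytes, st.st_size);
        close(fd);
        return 0;
    }

On a 32-bit system, a file much bigger than a couple of gigabytes can’t even be mapped in one piece; with 64-bit pointers, that ceiling effectively disappears.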

Or imagine using integers for many of the math problems that call for floating point today. The range of a 64-bit integer (or perhaps a 128-bit integer spanning two registers) is very large, and fixed-point arithmetic in that range should improve the performance of such applications a great deal.
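A sketch of the idea, using fixed-point arithmetic on 64-bit integers; the scale factor and the dollar figures are invented for illustration:

    #include <stdio.h>
    #include <stdint.h>

    /* Fixed-point: store dollar amounts as 64-bit counts of 1/10,000
       of a dollar, so plain integer math replaces floating point. */
    #define SCALE 10000

    int main(void)
    {
        int64_t price = 19995000;       /* $1,999.50 in 1/10,000ths */
        int64_t qty   = 250000;         /* 250,000 units            */
        int64_t total = price * qty;    /* still scaled by 1/10,000 */

        printf("total: $%lld.%04lld\n",
               (long long)(total / SCALE),
               (long long)(total % SCALE));   /* $499875000.0000 */
        return 0;
    }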

However, Microsoft seems more interested in a simple, smooth introduction of Win64 than in using it to introduce radical new programming techniques, and it’s probably right. As I said before, there’s an awful lot of 32-bit code out there, and when people start buying 64-bit servers, for whatever reason, they’re going to be most concerned at first with running their existing code on those servers and porting it to the new architectures.

The initial versions of 64-bit Windows are actually designed to support 32-bit programs and essentially carry over the existing API set. An emulation layer called WOW64 runs Win32 programs, and while new 64-bit data types aren’t the priority right now, 64-bit Windows does add them. Applications, by default, get a stingy 8-terabyte address space.
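For source code that has to compile for both Win32 and Win64, Microsoft’s headers define pointer-precision integer types such as UINT_PTR. A minimal sketch (the function name here is made up for illustration):

    #include <stdio.h>
    #include <windows.h>   /* pulls in basetsd.h and its *_PTR types */

    /* Win64 keeps int and long at 32 bits and widens only pointers,
       so an integer that must hold a pointer needs a *_PTR type. */
    void remember_pointer(void *p)
    {
        UINT_PTR cookie = (UINT_PTR)p;  /* 32 bits on Win32, 64 on Win64 */
        printf("stored %p as a %u-byte integer\n",
               p, (unsigned)sizeof cookie);
    }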

For at least the next few years, there’s almost no reason for mainstream businesses to consider, let alone adopt, 64-bit systems. Eventually the migration will become easy enough, and the hardware cheap enough, that people will do it because there will be little reason not to. Extremely computing-intensive applications, like weather prediction, will always migrate to the fastest platform available. But in the interim, all but a few business applications have a greater need for improvements in existing 32-bit systems: better security, more streamlined administration, and other things that doubling the width of our registers and address lines does nothing to address.

End sum
When looking into your crystal ball at the future of the PC industry, it’s usually a good idea to predict the most conservative amount of change. There may have been a time when 64-bit computing looked like it would solve big problems, but it has turned out to be just another evolutionary increment.