
Why 64 bit is the 'new' catchword

With chip makers chomping at the bit to update systems to create a 64-bit world, CIOs need to ask the tough question, "why?" This article provides compelling arguments for the switch.


The catchword on the technology horizon is “64-bit computing.” Intel has spent quite some time wooing the high-end data center with its IA-64 processors running various flavors of UNIX. AMD is prepping its x86-64 processors, which run 32-bit and 64-bit code natively, for a fall launch. Microsoft has seen the potential of a new market it has so far been unable to compete in and is in the early beta stages of releasing a 64-bit version of Windows.

The question you should be asking now is: Why? Contrary to the current hullabaloo, 64-bit computers aren't new; many companies were using 64-bit systems on their mainframes years ago and still are. Most flavors of UNIX support 64-bit processors, if they weren't outright designed for them. Even Linux is no stranger to 64-bit processors, since it started running on the then-DEC Alpha processor in 1996. However, the majority of corporate IT was developed on 32-bit Windows machines. A number of midsize and larger businesses have never had anything other than Windows, or possibly Linux, running on x86 processors in the data center.

So what has changed now that 64 bit is a watchword for the Wintel world? Well, it isn't cost. The 64-bit processors from Intel and AMD will be significantly more expensive than those available in the past from vendors such as DEC and Sun. And applications for these new processors will be quite thin for some time to come, or limited to the same applications already available on UNIX machines running on other 64-bit processors. The short answer to the 64-bit question is need. The 64-bit processors fill a need—one that was so minimal in the general commercial market back in 1999 that Microsoft stopped developing Windows on the 64-bit Alpha with the end of Windows NT. That said, let's look at just when 64-bit processors are a necessity.

Math and encryption
The blatant advantage of a 64-bit processor is large-number math. Of course, large is a relative term. The integer range a 32-bit processor can handle natively is roughly -2.1 billion to 2.1 billion; put another way, it can natively handle numbers of about nine significant figures. Larger numbers can be handled with tricks that split each value across multiple memory addresses and lean on helper routines supplied by the programming compilers. Tricks are useful, but not fast.
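
To make the trick concrete, here is a minimal sketch in C (the function name and the sample values are illustrative) of how a 32-bit machine handles an addition that won't fit in one register, compared with the single native add a 64-bit processor would use:

    #include <stdint.h>
    #include <stdio.h>

    /* On a 32-bit CPU, adding two values larger than 32 bits takes two
       adds plus carry handling; a 64-bit CPU does it in one native add. */
    static void add64_as_32bit_words(uint32_t a_lo, uint32_t a_hi,
                                     uint32_t b_lo, uint32_t b_hi,
                                     uint32_t *sum_lo, uint32_t *sum_hi)
    {
        *sum_lo = a_lo + b_lo;              /* add the low words first   */
        uint32_t carry = (*sum_lo < a_lo);  /* did the low add overflow? */
        *sum_hi = a_hi + b_hi + carry;      /* propagate the carry       */
    }

    int main(void)
    {
        uint32_t lo, hi;
        /* 3,000,000,000 + 3,000,000,000 overflows a 32-bit integer but
           is trivial for a 64-bit one. */
        add64_as_32bit_words(3000000000u, 0, 3000000000u, 0, &lo, &hi);
        printf("sum = %llu\n", ((unsigned long long)hi << 32) | lo);
        return 0;
    }

Every addition, comparison, or shift on such a split value pays this kind of tax, which is exactly the overhead a native 64-bit processor removes.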

Advanced math rears its head in large financial systems, computer simulations, CAD/CAM workstations, graphics rendering, and more importantly, encryption. This article is not a primer on encryption, but understand that in the networked world we live in, encryption is commonplace and rapidly expanding. Half of a digital security system is the algorithm used to encrypt the data; the other half is the size of the keys used to lock and unlock it. A strong algorithm with a weak key can be defeated by raw brute force in a short time, so large keys are a necessity. Today, with 32-bit processors, a strong key is 256 bits, requiring eight addresses per value (8 x 32 bits = 256 bits) and lots of math tricks. A 64-bit processor needs only four addresses per value for the same key, significantly speeding up the encryption process.
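
As a rough illustration only (the type names and the XOR step are stand-ins, not a real cipher), here is how a 256-bit key might be laid out in C; a typical bitwise operation over it takes half as many steps with 64-bit words:

    #include <stdint.h>

    /* A 256-bit key split into native machine words. */
    typedef struct { uint32_t word[8]; } key256_w32;  /* 8 x 32 bits */
    typedef struct { uint64_t word[4]; } key256_w64;  /* 4 x 64 bits */

    /* XOR-ing two keys, a common primitive inside ciphers, takes eight
       operations with 32-bit words but only four with 64-bit words. */
    void xor_keys_w32(key256_w32 *out, const key256_w32 *a,
                      const key256_w32 *b)
    {
        for (int i = 0; i < 8; i++)
            out->word[i] = a->word[i] ^ b->word[i];
    }

    void xor_keys_w64(key256_w64 *out, const key256_w64 *a,
                      const key256_w64 *b)
    {
        for (int i = 0; i < 4; i++)
            out->word[i] = a->word[i] ^ b->word[i];
    }

Multiplication and modular arithmetic, which dominate public-key algorithms, benefit even more than this simple XOR does.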

Memory
Memory is the most often discussed aspect of a 64-bit computer, since so many 32-bit servers run out of memory even with their motherboards maxed out. A processor keeps track of data by recording the address in memory where that data resides. A 32-bit computer can natively address only 4 GB of memory (2^32 bytes). While 4 GB seems like a lot, many corporate databases have indexes that are larger. Modern application development can easily require several gigabytes of memory to handle the libraries. CAD workstations, with their myriad linked components, can quickly eat up RAM, just as multimedia and video editing will use every scrap of memory they can acquire. Even simple things like Web servers can gain significant performance boosts by loading static content into memory rather than waiting on the slower drives.
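
The arithmetic behind those limits is easy to check. This quick C sketch (the variable names are mine) simply prints the two address-space sizes:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        /* A pointer can distinguish only as many addresses as its bit
           width allows. */
        uint64_t space32 = (uint64_t)1 << 32;            /* 2^32 bytes */
        double   space64 = 4294967296.0 * 4294967296.0;  /* 2^64 bytes */

        printf("32-bit address space: %llu bytes (4 GB)\n",
               (unsigned long long)space32);
        printf("64-bit address space: about %.0f million TB\n",
               space64 / 1e12 / 1e6);
        return 0;
    }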

However, there are already a number of ways to allow a 32-bit computer to address more than 4 GB of memory using memory windowing. Windowing is the trick of using multiple sets of memory address tables, kind of like having a table of contents for each chapter in a book. The trouble is that windowing can significantly slow down the computer as it adds extra steps. For example, if a nonpaged processor and a paged processor wanted to add the value A and the value B, and store the value C, it would look something like Figure A. Naturally, a windowed processor tries to minimize these steps, possibly by looking up A and B at the same time, but it still reduces efficiency and requires extra circuitry on the processor to make up for it.
Figure A
Non-paged processor     Paged processor
Look up A               Look up A window
Look up B               Look up A
Add A + B               Look up B window
Store C                 Look up B
                        Add A + B
                        Store C window
                        Store C
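
The step counts in Figure A can be mimicked with a toy model. The C sketch below is purely illustrative, its window size, names, and step counting are assumptions rather than any real chipset or operating system interface, but it shows why each value living in a different 4 GB window costs an extra lookup:

    #include <stdint.h>
    #include <stdio.h>

    #define WINDOW_SIZE (1ULL << 32)     /* one 4 GB window */

    static uint64_t current_window = UINT64_MAX;
    static int steps = 0;

    static void access_address(uint64_t addr)
    {
        uint64_t window = addr / WINDOW_SIZE;
        if (window != current_window) {  /* the "look up window" rows */
            current_window = window;
            steps++;
        }
        steps++;                         /* the actual look up / store */
    }

    int main(void)
    {
        uint64_t a = 1ULL << 33;         /* A lives above 4 GB  */
        uint64_t b = 100;                /* B lives below 4 GB  */
        uint64_t c = 1ULL << 34;         /* C lives above 8 GB  */

        access_address(a);               /* look up A */
        access_address(b);               /* look up B */
        steps++;                         /* add A + B */
        access_address(c);               /* store C   */

        printf("windowed steps: %d (a non-paged processor needs only 4)\n",
               steps);
        return 0;
    }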

But don’t expect the new crop of 64-bit processors to handle the 18 million TB theoretically possible with 64-bit memory addressing. While there is a need for more memory, the chipsets limit the number of memory modules that can be installed, and the modules themselves have capacity limits. So it's unlikely that you'll see Windows-capable 64-bit processors that can access more than 64 GB of memory in the next five years.

Drive space and efficiency
Addressing applies to disks just as it does to memory. Taking advantage of it requires moving to a different file system, but a file system with more address bits can use smaller data clusters, access larger disks, and handle more files. In the short term, more efficient use of current disk space would be the goal. As you'll recall, Windows 95 moved from the 16-bit FAT16 file system, limited to about 2 GB per partition, to a 32-bit file system with a theoretical limit of 8 TB per partition, although that requires 32-KB data clusters. Since a cluster is the smallest increment on a disk, any file smaller than 32 KB wastes the rest of its cluster, and any file that does not fit evenly into 32-KB increments wastes part of its last cluster. Smaller clusters mean more files on the same disk; the flip side is that the same cluster size can address larger disks.
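
A quick back-of-the-envelope calculation in C shows the effect of cluster size (the 10-KB file used here is just an example):

    #include <stdint.h>
    #include <stdio.h>

    /* Every file is rounded up to a whole number of clusters, so smaller
       clusters waste less space per file. */
    int main(void)
    {
        uint64_t file_size = 10 * 1024;                 /* a 10 KB file */
        uint64_t clusters[] = { 4 * 1024, 32 * 1024 };  /* 4 KB vs 32 KB */

        for (int i = 0; i < 2; i++) {
            uint64_t c = clusters[i];
            uint64_t used = ((file_size + c - 1) / c) * c;  /* round up */
            printf("%2llu KB clusters: %llu KB on disk, %llu KB wasted\n",
                   (unsigned long long)(c / 1024),
                   (unsigned long long)(used / 1024),
                   (unsigned long long)((used - file_size) / 1024));
        }
        return 0;
    }

With 4-KB clusters the 10-KB file wastes 2 KB; with 32-KB clusters it wastes 22 KB, and that slack is multiplied across every small file on the volume.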

The number of files is also an issue with large volumes, because a 32-bit file system is limited to about 4.3 billion files. Again, that seems like a lot, but when a desktop workstation may already have a RAID array of nearly half a TB, how long will the limit hold for servers or a storage area network?

64 bit: "Husky-sized" data
Bigger is not always better. Because the system works with 64-bit values instead of 32-bit values, memory use increases: simply storing the value 0 takes a 64-bit block (8 bytes) of RAM. The exact increase varies with the application; memory needs don't simply double, since executable binary code is handled differently, but expect at least a 10 percent increase in used memory.
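
One common source of that growth is pointers, which double from 4 to 8 bytes. The small C example below (the structure itself is hypothetical) shows a pointer-heavy data structure swelling when built for a 64-bit target:

    #include <stdio.h>

    /* A doubly linked list node: typically 12 bytes on a 32-bit build,
       but 24 bytes on a 64-bit build once alignment padding is added. */
    struct node {
        struct node *next;   /* 4 bytes on 32-bit, 8 bytes on 64-bit */
        struct node *prev;
        int          value;  /* stays 4 bytes either way             */
    };

    int main(void)
    {
        printf("sizeof(struct node) = %zu bytes\n", sizeof(struct node));
        return 0;
    }

Structures that are mostly plain data grow far less, which is why the overall increase lands well below a full doubling.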

On a less visible front, the processor’s cache memory takes quite a hit. As in main memory, executable code doesn't double in size, but code makes up a smaller share of what the cache holds. The 64-bit registers and 64-bit values kept in the cache will make a notable dent, probably reducing the cache's effective capacity by around 30 percent.

This will increase the cost of 64-bit processors in two ways. The first is that cache memory is far more expensive because the processor cache runs at processor speeds rated in GHz rather than the more sedate MHz of RAM. The second is that memory requires transistors, and the more transistors you have on a chip, the greater the odds that some of them will be bad, increasing the percentage of dud processors being churned out. Both combine to increase the cost of a 64-bit processor.

Is 64 bit for your organization?
Naturally, the only one who can answer the 64-bit question is the informed CIO. For now, it can be ruled out on the desktop, but the data center and high-end workstations are likely candidates. Also, Web servers full of static content, VPN gateways, and database servers are all good choices. The truly hard choice is how to make the leap.

If you need Windows on your 64-bit processor, you'll first have to wait until the fall, and then you can use either the Intel Itanium or possibly the AMD Opteron. If you don't absolutely require Windows, the choice is much easier. There are several 64-bit processors, at roughly the same price, with years of testing under a variety of UNIX and VMS environments and a sizable arsenal of proven applications. No one said being an informed CIO was easy.
