RAM. Your PC has to have it. And the better it is, the better your PC works. It used to be that all you had to worry about was how much you had, but now there’s a fight over what kind of memory you’ll be using.
Intel says, “Buy Rambus!” However, no one else seems very enthused about Rambus. And what’s with these new acronyms, VC-SDRAM and DDR? Good question. Too bad the answers are a bit clouded by marketing. Let’s strip away the haze around these somewhat rare technologies as best we can.
Intel and Rambus: The plot thickens
First, a quick lesson in marketing and market strategy. Intel wants to break into the server market, but it can’t match the high-speed (and very, very patented) memories used by Sun and SGI. So, it teamed with Rambus Inc. to produce high-bandwidth memory that is ideal for use in servers. The cost was high, and mass production was required to make it competitive; thus, Intel decided to make it a new standard for all high-end systems.
The rest of the industry was less than pleased. There was a license fee, paid to Rambus, for every Rambus memory module (RIMM) produced, unlike royalty-free SDRAM. The companies making competing chipsets weren’t too keen on paying the Rambus license either, especially since it was a no-brainer that Intel’s partnership with Rambus would give it an edge in producing better motherboards. The PC manufacturers were more than a little miffed, as their research indicated that other, less expensive memory types could perform as well as Rambus (and maybe even better).
The muscle behind the speed
Of course, now we must define “performance.” Two things define memory performance in a computer: latency and bandwidth. Latency is how long the memory takes to begin transmitting data, and it’s measured in nanoseconds. Bandwidth is how much data it can move each second, measured in gigabytes per second. Here’s a little analogy to put it in perspective: A hyperactive six-year-old will begin digging a hole within three seconds of being asked but will only dig out about 10 cubic feet of dirt per day. A 17-year-old won’t begin digging the hole for three days but will move about 100 cubic feet each day. If you only need to dig a small hole, the child will do the job; to dig a trench, you’ll have to nag a teenager.
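The digging analogy boils down to a simple model: the time to satisfy a request is the latency before data starts moving plus the transfer time once it does. Here’s a toy sketch (my own illustration, with round numbers in the spirit of the figures quoted later in this article) showing why latency dominates small requests and bandwidth dominates big ones:

```python
# Toy model: total time for one memory request is the latency before
# data starts moving plus the transfer time once it does.
# (1 GB/s works out to 1 byte per nanosecond.)

def transfer_time_ns(latency_ns, request_bytes, bandwidth_gb_s):
    """Nanoseconds to complete one request of request_bytes."""
    bytes_per_ns = bandwidth_gb_s  # 1 GB/s == 1 byte/ns
    return latency_ns + request_bytes / bytes_per_ns

# A tiny 8-byte request: latency dominates, so the lower-latency,
# lower-bandwidth memory (the "six-year-old") wins.
small_sdram = transfer_time_ns(40, 8, 1.0)    # SDRAM-like: 40 ns, 1 GB/s
small_rambus = transfer_time_ns(50, 8, 1.6)   # Rambus-like: 50 ns, 1.6 GB/s

# A 1 MB streaming request: bandwidth dominates, so the higher-bandwidth
# memory (the "teenager") pulls ahead.
big_sdram = transfer_time_ns(40, 1_000_000, 1.0)
big_rambus = transfer_time_ns(50, 1_000_000, 1.6)
```

Run the numbers and the small request finishes sooner on the low-latency part while the big one finishes sooner on the high-bandwidth part, which is the whole hole-versus-trench argument in two lines of arithmetic.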
So how much dirt do you need to move? We can answer part of the question by looking at the biggest dirt shaker in your PC, the video card. Even at its AGP 4x worst, it won’t need more than about 1 GB/s of bandwidth. Naturally you want some overhead, but there’s no real point in going too far over the top. Latency is really more of a performance killer than bandwidth. Given that your typical PC has at least two or three programs or devices all trying to get to the memory, performance drops the longer each request takes to be filled. Even when a single massive program hogs the system, latency is still an issue, because that program is probably using memory for a lot of different things. Your average PC game uses a lot of memory, but part of it holds the map, some the player’s and opponents’ inventories, and the rest graphics, sound effects, and so on. The only type of program that really needs a lot of memory for a single thing is a database server. Oh, wait, that’s where Intel wants to go, isn’t it?
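That “about 1 GB/s” ceiling is easy to check on the back of an envelope. The bus figures below come from the AGP specification rather than this article: a 32-bit bus at 66 MHz making four transfers per clock in 4x mode.

```python
# Back-of-the-envelope check of the AGP 4x bandwidth ceiling.
# Bus parameters are from the AGP spec, not this article.

bus_bytes = 32 // 8          # 32-bit AGP bus -> 4 bytes per transfer
clock_hz = 66_000_000        # 66 MHz base clock
transfers_per_clock = 4      # the "4x" in AGP 4x

agp4x_gb_s = bus_bytes * clock_hz * transfers_per_clock / 1e9
# Roughly 1.06 GB/s -- the "about 1 GB/s" worst case cited above.
```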
Types of memory
SDRAM has been our friend for a few years now and is the accepted standard in its 168-pin DIMM form. It relies on a 64-bit memory bus running at either 66 MHz (PC66) with 0.5 GB/s of bandwidth or 100 MHz (PC100) with 0.8 GB/s. While 133 MHz SDRAM (PC133) with roughly 1 GB/s of bandwidth is available, it is only now becoming widely accepted. SDRAM is not exceptionally responsive, as its latency is about 40 nanoseconds.
All the major manufacturers make the three flavors of SDRAM. PC133, previously used only by overclockers, is now supported by the Via Apollo Pro 133 chipset, with Intel chipsets supporting it scheduled for release in 2000.
Rambus uses its own special 16-bit memory bus, the Direct Rambus Channel, to interface with the processor. The memory is double-clocked to operate on both the rising and falling edges of the bus clock, doubling its 400 MHz to an effective 800 MHz with 1.6 GB/s of bandwidth. Less expensive 600 MHz (1.2 GB/s) and 711 MHz (1.4 GB/s) memory can also be used. RIMMs have similar physical properties to DIMMs but are not compatible with them. Their latency is about 50 nanoseconds.
Because of the problems with Intel’s i820, virtually all the major players have stopped manufacturing Rambus. What’s even more telling is the rumor that all Rambus memory will have to be paid for cash on delivery, a radical departure from the normal corporate invoice system. PC manufacturers are suitably upset over this turn of events, and Intel is considering bundling RIMMs with its boards to ease OEM fears.
Double Data Rate memory (DDR) is double-clocked SDRAM that also works on the rising and falling edges of a 64-bit memory bus. Running at an effective 266 MHz (double the 133 MHz bus speed), DDR delivers 2.1 GB/s of bandwidth. DDR has latency timings similar to SDRAM’s. Specialty devices and some peripherals, like video cards, currently use DDR.
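All of the bandwidth figures quoted so far fall out of one formula (my framing, using the clock rates and bus widths given above): bus width times clock rate times transfers per clock cycle.

```python
# One formula explains every bandwidth figure in this article:
# bus width (bytes) x clock rate x transfers per clock cycle.
# Clock rates and bus widths are the ones quoted in the text.

def bandwidth_gb_s(bus_bits, clock_mhz, transfers_per_clock=1):
    return bus_bits / 8 * clock_mhz * 1e6 * transfers_per_clock / 1e9

pc100  = bandwidth_gb_s(64, 100)       # 0.8 GB/s
pc133  = bandwidth_gb_s(64, 133)       # ~1.1 GB/s
rambus = bandwidth_gb_s(16, 400, 2)    # 1.6 GB/s (16-bit, double-clocked)
ddr    = bandwidth_gb_s(64, 133, 2)    # ~2.1 GB/s (double-clocked PC133)
```

Note how Rambus gets its numbers: a narrow 16-bit channel compensating with a very high double-clocked rate, while DDR keeps the wide 64-bit bus and simply doubles the transfers per cycle.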
DDR probably made its way into a computer near you this Christmas, although not in the system memory. The newest uber-graphics card from nVidia uses DDR memory to claim the title of “fastest video card on the market.” Sometime in late 2000 there will be Athlon workstations and servers pairing the double-clocked EV6 bus with DDR, more than likely in multiprocessor configurations. Three of the seven largest memory manufacturers have signed on to produce DDR.
Virtual Channel memory (VCM) is designed to be low-latency memory. It runs at 133 MHz and has the same 1 GB/s of bandwidth as PC133 SDRAM but shaves off about 10 nanoseconds of latency. It does this by using special “fast” registers that keep track of memory pages. These registers provide a fast link, or channel, to the memory used by an application. VCM actually works better for complex applications like games and databases whose memory spans multiple memory banks.
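As a rough sketch of why those channel registers help (my own simplification, not NEC’s actual design; the channel count and latency numbers are illustrative), think of the channels as a small most-recently-used set of open pages: hit a page already held by a channel and you skip part of the access, miss and you pay the full latency.

```python
# Simplified model of VCM channel registers (illustrative only):
# a small MRU set of open memory pages. Channel hits are cheaper.

FULL_LATENCY_NS = 40     # plain SDRAM access, per the article
CHANNEL_LATENCY_NS = 30  # ~10 ns shaved off on a channel hit

class VirtualChannels:
    def __init__(self, n_channels=16):
        self.n_channels = n_channels
        self.open_pages = []  # most-recently-used pages, newest last

    def access(self, page):
        """Return the latency of touching one memory page."""
        if page in self.open_pages:
            self.open_pages.remove(page)
            self.open_pages.append(page)   # refresh MRU position
            return CHANNEL_LATENCY_NS
        if len(self.open_pages) >= self.n_channels:
            self.open_pages.pop(0)         # evict the oldest channel
        self.open_pages.append(page)
        return FULL_LATENCY_NS

# An application bouncing between a handful of pages -- a game touching
# map, inventory, and sound buffers -- sees mostly channel hits.
vcm = VirtualChannels()
latencies = [vcm.access(p) for p in [1, 2, 1, 2, 1, 2]]
```

After the first touch of each page, every repeat access in this toy run comes back at the reduced latency, which is the intuition behind VCM’s edge on workloads that juggle many regions of memory.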
PCs using the new Via Apollo Pro 133+ chipset, such as Micron’s Pentium III 733, have VCM support built in. Unfortunately, NEC is currently the only manufacturer publicly pushing VCM. The fact that VCM and PC133 SDRAM can be used interchangeably in Apollo Pro motherboards should encourage other manufacturers to jump on the bandwagon.
The cost of performance
Tests done by various labs have verified that, under normal PC applications, Rambus offers about a 3-5 percent performance boost over PC133 SDRAM. VCM and DDR, on the other hand, provide 3-25 percent increases, depending on the type of application. VCM proves superior with many small operations, while DDR wins out where bandwidth is needed. Because VCM and DDR modify different aspects of SDRAM, they could theoretically be combined in the same components, resulting in high-bandwidth, low-latency, double data rate memory with virtual channel registers. It’s unlikely that VCM could be used with Rambus without massive redesigns, because of the special bus, memory controllers, and exclusionary patents.
Performance does include another factor: cost. In the table below, the prices for PC133 SDRAM are from various Web sites a few weeks ago, and Rambus prices are from Kingston Technology. DDR and VCM are still specialty parts, so the prices are based on cost differentials against components made with normal SDRAM; actual PC-ready memory modules may be more or less expensive. You should also remember that the price of all types of memory is likely to drop as production increases. (If you happen to know the cost of DDR or VCM memory modules, please post it in the “Post a Comment” section below along with the source.)
| Type | Approximate cost per MB |
| --- | --- |
| 800 MHz Rambus | $8.60 |
Weighing cost against performance and availability, you can see that there is still life left in SDRAM, particularly PC133. Until the market chooses a clear successor, the cost of SDRAM will likely stay at half that of its closest competitor. And doubling the memory in a system will often provide performance increases similar to those of the superior memory types. But I really don’t think it will take long for the new memory standard to appear and the costs to drop significantly. After all, we can already pick the loser, can’t we?
James McPherson is a systems administrator for a nationwide ISP. Having built more than his fair share of boxes, he knows a thing or two about memory. We’re still awaiting word on how much dirt he can displace in a day, though.