Hardware

Motherboard chipsets—the good, the bad, and the ugly

What makes your computer lightning fast? Not just your CPU, but your chipset. In this Daily Drill Down, James McPherson explains chipset history and what makes these chips good, bad, or just plain ugly.


Pop quiz: What are the components of a typical computer? Odds are you said motherboard, processor, memory, hard drive, video card, monitor, CD-ROM, and a handful of other components. These days, almost anyone can explain the benefits of PC133 memory over PC66 memory and why a 600-MHz Celeron doesn’t compare to a 600-MHz Pentium III. Tyros will debate the merits of SCSI versus IDE while the self-styled elite compare Rambus to DDR-RAM.

For some reason, the single component holding it all together is all too often neglected. Everything depends on the humble motherboard: what processors you can use, how fast the memory is, how many expansion ports you have, and, essentially, the maximum potential of your computer. At some point, the motherboard came to be seen as the computer's limitation, as in, "the motherboard will ONLY let you run a 600-MHz Pentium II processor," and so it was exiled from the limelight.

Ignore the motherboard at your own risk. If the CPU is the brains of the computer, then the motherboard is the nervous system, arms, legs, and back, carrying all the other devices. The evils of buying an outdated motherboard are well known, but it is no less a waste to have a tricked-out motherboard running underpowered components. It would be like dropping a lawn mower engine into an Indy car.

Because the motherboard is such a versatile device, I’ll focus on the motherboard chipset: the nervous system of the motherboard.

What’s a chipset?
Once upon a time, this was an easy question to answer. There simply weren't any. Motherboards provided a bus (ISA or EISA) that everything plugged into: memory, controller cards, video cards, processor, keyboard, etc. Naturally, all these devices, including the CPU, ran at the same speed: 2 MHz, 4 MHz, whatever. Since the bus ran at processor speed, every new processor generation forced everything on the bus to run faster, which meant completely different components, memory, and devices. Sure, some manufacturers tried to make multispeed devices that could work with different available bus speeds, but quality was always an issue. Can you say "incredibly incompatible," boys and girls? I knew you could.

To reduce compatibility problems (and to save some cash), in 1985, Compaq separated the bus speed from the processor speed. Now a new processor no longer required totally new versions of every expansion device. This was fine, but it required a special interface so the memory could continue to run at the same speed as the processor. As more time passed, processor speeds exceeded the cost-effective speed of memory. Again as a cost-saving measure, the CPU was made to run at a different speed than both the memory and the system bus.

Motherboard manufacturers began providing more chips for specialized components rather than using more expansion cards. Keyboard controllers were the first, but floppy drives, serial ports, and hard drive controllers all began moving to the motherboard around 1994. Say "how do" to the first integrated-component motherboards.

After suffering through very, very slow buses with many limitations, Intel introduced the PCI bus in 1994. PCI was intended as a general expansion bus rather than a processor-specific system bus. Since PCI spoke its own unique language, it came with a Bridge chip that translated the PCI commands into a language the CPU and RAM could understand. Get a Bridge chip configured for your flavor of processor and memory interface, and you, too, can use the PCI bus. Hey, it worked for Macintosh, Sun, and DEC/Alpha.

So, now you've got a Bridge at one end of the PCI bus that connects the CPU and RAM. For kicks, I'll call this the North end of the PCI bus. But what about the old computer bus, EISA, and the now-integrated components, such as com ports and the keyboard and mouse controllers, that are accustomed to talking to it? Build another Bridge, a South Bridge, that talks PCI to ISA. Give it a difficult-to-remember title like 810e or 440BX, and you have a chipset.

A closer look at the North Bridge
The original purpose of the North Bridge chip was to provide an interface between the CPU, system memory, and the rest of the computer, embodied in the PCI bus and the South Bridge. It still serves that purpose, but over time it has become less CPU-focused and more focused on managing memory access. Today, even the most basic computers sold have multiple processors needing immediate access to a shared memory bank.

Though it was a big deal when it was introduced, most people forget that the lowly graphics card's Accelerated Graphics Port (AGP) interface can directly access main memory, and at a rapid pace. AGP 4x, the current incarnation, just exceeds 1 GB/s worth of bandwidth. Then there's the fact that on low-cost integrated systems, the North Bridge will take a portion of the system memory and dedicate it to the video card. Thus, even the cheapest computers have a form of multiprocessor support built in.
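If you want to check that 1 GB/s figure yourself, the back-of-envelope arithmetic is simple. This is a sketch based on AGP's published 32-bit width and roughly 66-MHz clock, not a benchmark:

    # Rough peak-bandwidth arithmetic for AGP 4x
    bus_width_bytes = 32 / 8          # AGP is a 32-bit port, so 4 bytes per transfer
    clock_hz = 66_666_667             # ~66.7-MHz base clock
    transfers_per_clock = 4           # the "4x" in AGP 4x
    peak = bus_width_bytes * clock_hz * transfers_per_clock
    print(f"AGP 4x peak: {peak / 1_000_000:.0f} MB/s")   # ~1067 MB/s, just over 1 GB/s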

But what about "true" multiprocessor computers with more than one CPU? Naturally, the North Bridge has to have the correct control logic to support the particular commands needed to deliver data to and from more than one processor, but it is the interface between the North Bridge and the CPU, a.k.a. the Front Side Bus (FSB), that dictates what those commands look like. Intel uses the GTL+ bus, which shares a single connection to the chipset among all processors. AMD uses a version of the EV-6 bus licensed from Alpha that gives each processor a dedicated link to the chipset, much like a network switch.
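The practical difference shows up as processors are added. Here's a toy model of how much chipset bandwidth each CPU can count on under the two designs; the link figure is an illustrative assumption, not a measured number:

    # Toy model: shared bus (GTL+) vs. dedicated per-CPU links (EV-6)
    def per_cpu_bandwidth(link_mb_s, cpus, shared):
        """Peak chipset bandwidth each CPU can count on, in MB/s."""
        return link_mb_s / cpus if shared else link_mb_s

    link = 1064.0   # assume a 64-bit, 133-MHz front side bus (~1 GB/s)
    for cpus in (1, 2, 4):
        print(cpus, "CPU(s) - shared:", per_cpu_bandwidth(link, cpus, True),
              "MB/s, dedicated:", per_cpu_bandwidth(link, cpus, False), "MB/s")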

Details on the South Bridge
The South Bridge acted as the interface between the PCI bus and the legacy EISA bus. That also made the South Bridge the gateway for keyboard, mouse, serial ports, and anything else that previously had been an EISA expansion card or integrated component. This was a very simple job; the EISA bus only required 32 MB/s of bandwidth, a far cry from the 133 MB/s the PCI bus was capable of channeling.

Every time a new component was introduced on the computer, it was linked to the South Bridge. IDE and floppy controllers were among the first, adding about 17 MB/s of bandwidth. Then came serial, parallel, and PS/2 ports; their bandwidth was negligible, but they required a number of additional circuits. USB was next, adding its 1.5 MB/s to the mix. No big deal, though: the total demand was still less than 65 MB/s.

Then IDE drives started improving rapidly. UDMA/33 doubled the hard drive's bandwidth to 33 MB/s in 1996. In 1999, UDMA/66 doubled it again to 66 MB/s, and in 2000, UDMA/100 pushed it to 100 MB/s. The total load on the South Bridge climbed to about 145 MB/s.
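To see how that total piles up, here's a rough tally using the figures quoted above. The exact split between IDE channels, floppy, and legacy ports is an assumption; the point is the ceiling it reaches:

    # Rough peak-load tally for the South Bridge, in MB/s
    south_bridge_load = {
        "ISA/EISA bus":          32.0,   # legacy expansion bus
        "UDMA/100 IDE channel": 100.0,   # one busy hard drive channel
        "second IDE + floppy":   10.0,   # assumed light load
        "USB 1.1":                1.5,   # 12-Mb/s full-speed USB
        "serial/parallel/PS2":    1.5,   # effectively negligible
    }
    total = sum(south_bridge_load.values())
    print(f"Peak South Bridge demand: ~{total:.0f} MB/s")   # ~145 MB/s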

Of course, there is a snag. Notice that in the incredibly rare situation when all the devices attached to the South Bridge hit maximum load at once, they require 145 MB/s, more than the 133 MB/s provided by the PCI bus. Ahh, no big deal. That won't be an issue under real conditions, right? The PCI bus has plenty of bandwidth for the South Bridge. Doesn't it?

Sure it does, as long as you don’t want to USE any of the PCI bus’ bandwidth. Surely with the AGP card on the North Bridge, there’s not much hanging off the bus. Surely.

Except that 100-Mb Ethernet card. Or the digital sound card. Or an MPEG capture/decoder card. Or a SCSI card. Hmmm, this is a dilemma. We've got about 50–100 MB/s of demand dangling off the PCI bus. Oh well, what else could we possibly want to add? After all, no one could want to add the 50 MB/s of hot-swappable FireWire, could they? Or perhaps the 60 MB/s of USB 2?
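For the curious, here's a quick tally of what those example cards might ask for at peak. The individual figures are rough assumptions; only the Ethernet number follows directly from its 100-Mb/s rating:

    # Approximate peak appetites of the PCI cards mentioned above, in MB/s
    pci_cards = {
        "100-Mb Ethernet":     12.5,   # 100 Mb/s divided by 8
        "digital sound card":   2.0,   # multichannel audio streams (rough guess)
        "MPEG capture card":   25.0,   # video capture stream (rough guess)
        "SCSI host adapter":   40.0,   # e.g., an Ultra Wide SCSI chain (rough guess)
    }
    cards = sum(pci_cards.values())
    print(f"PCI cards alone: ~{cards:.0f} MB/s")
    print("...on top of the South Bridge's ~145 MB/s, all fighting for 133 MB/s of PCI")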

Houston, we have a problem…
And, naturally it is up to chipset designers to solve it. Intel and AMD have both chosen to remove the PCI bus as the connection between the North and South Bridges. Great minds think alike, but not exactly.

Intel
Intel's solution, marketed in the 810, 820, and 840 chipsets, also separates the controller logic from the bridging circuits, albeit still within two chips. The interconnects operate in a way similar to a network hub, hence Intel's decision to rename the chipset components hubs. The North Bridge is replaced by the Memory Controller Hub (MCH), and the South Bridge is succeeded by the I/O Controller Hub (ICH). Even the BIOS was replaced with a Firmware Hub (FWH).

The MCH is very much a North Bridge, but with a 1.6 GB/s Rambus port instead of a 1 GB/s SDRAM interface. The MCH is designed to work only with the very expensive Rambus memory, and Intel's attempt to provide a low-cost Rambus-to-SDRAM adapter blew up when the Memory Translator Hub (MTH) turned out to have stability and performance problems.
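For reference, here is the raw arithmetic behind those two memory figures; these are peak numbers on paper, not sustained throughput:

    # Peak memory bandwidth: PC800 Rambus vs. PC133 SDRAM
    rdram = (16 / 8) * 800_000_000    # 16-bit Rambus channel at 800 MHz -> 1.6 GB/s
    sdram = (64 / 8) * 133_000_000    # 64-bit SDRAM module at 133 MHz  -> ~1.06 GB/s
    print(f"RDRAM: {rdram / 1e9:.1f} GB/s   SDRAM: {sdram / 1e9:.2f} GB/s")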

In addition to the changes to the MCH, the 8xx-series Hub-based chipsets also include a new high-speed interconnect to the ICH running at 266 MB/s, twice as fast as the PCI bus previously used. The PCI bus is now nothing more than a subcomponent of the ICH, sharing the bandwidth of that interconnect with the other components. Oh, and the FWH added a random number generator in addition to storing the BIOS data, a minor rearrangement as far as such things go.

It would be nice to say there was a significant performance increase between an Intel Bridge and an Intel Hub. However, there aren't really comparable Intel Bridges and Hubs: a fair test would mean either overclocking an older Bridge while avoiding the Hub's extra features, or ignoring the fact that Hubs drive much faster devices. Neither approach produces an acceptable apples-to-apples comparison. The best that can be done is to say that the performance gained from the Hub's new features is comparable to previous generations of feature increases implemented on Bridges.

AMD
AMD took a different route. The EV-6 bus between the North Bridge and the CPU acts like a network switch, with each processor having its own connection. Building on that switched foundation, AMD's Lightning Data Transport (LDT) system dynamically assigns bandwidth between devices to keep the available bandwidth fully used. Latency is reduced because multiple devices can use the bus simultaneously rather than having to take turns.

LDT is also scalable. At full capacity it can provide 6.4 GB/s in each direction. Ouch! That could prove troublesome for Intel. And "scalable" means the links can be dialed down as well as up, so expect to see "lite" LDT connections between the less demanding components. After all, there's no reason to have a 6.4 GB/s connection to the PCI bus.
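Here's a sketch of how a width-and-clock-scalable link adds up. The 32-bit, 800-MHz double-pumped configuration is an assumption consistent with the 6.4 GB/s figure above, and the narrower example is a hypothetical "lite" link:

    # Link bandwidth for a scalable, double-pumped interconnect
    def link_gb_s(width_bits, clock_mhz, double_pumped=True):
        """Peak bandwidth per direction, in GB/s."""
        transfers_per_sec = clock_mhz * 1e6 * (2 if double_pumped else 1)
        return (width_bits / 8) * transfers_per_sec / 1e9

    print(link_gb_s(32, 800))   # 6.4 GB/s per direction: the full-capacity link
    print(link_gb_s(8, 200))    # 0.4 GB/s: a plausible "lite" link for slower devices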

LDT is still three to six months from reaching the market, so there is no performance data outside of marketing materials. However, since AMD is licensing the LDT system to various companies, I hope that by spring we will see equivalent products available both with and without LDT.

Conclusion
The world of chipsets never stagnates. There is no doubt in my mind that this article will be horribly behind the times in six months. When that time comes, you will understand what changes time has wrought, and you will be prepared to recognize the good, the bad, and the just plain ugly. This knowledge will give you an edge over other IT admins, friends, and family, ensuring your alpha status within the techno-tribe. And, of course, your computer will run faster than theirs, which is even better.
