A new device can increase its data transfer speed only so much before it is bottlenecked by the bus interface. Some devices are phased out slowly, like ESDI hard drives; others come and go in a flash, much like the VL Bus. Regardless of the time frame, as these devices reach obsolescence, new bus standards are being developed (or have already been developed) to keep pace with the demand for bandwidth. This Daily Drill Down will explain which bus standards are on the horizon so you can judge which approaching technologies are viable and which should be abandoned.
Bus overview
A bus allows multiple devices to communicate, as opposed to an interface, which is the connection from one device to another device or bus. Most buses include a standardized set of interfaces you can use to attach devices to the bus. There may be multiple interfaces to a given bus, however, reflecting different performance levels or generations. For example, the nearly obsolete ISA bus supports either the short 8-bit interface or the longer 16-bit interface.
A bus’s performance and capabilities can be measured by four features: data width, cycle rate, device management, and type. The data width and cycle rate together determine the bandwidth, or the total amount of data the bus can transmit per second. For example, an 8-bit bus (1-byte data width) that operates at a cycle rate of 1 MHz (1,000,000 cycles per second) can transfer 8 Mbps (1 MBps).
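To make the arithmetic concrete, here is a minimal sketch (in Python, using the example figures above) of how data width and cycle rate combine into bandwidth:

```python
def bandwidth_bps(data_width_bits, cycle_rate_hz):
    """Peak bandwidth in bits per second: data width x cycle rate."""
    return data_width_bits * cycle_rate_hz

# The example bus above: 8 bits wide, 1,000,000 cycles per second.
bits_per_second = bandwidth_bps(8, 1_000_000)
print(bits_per_second)        # 8000000 bps, i.e., 8 Mbps
print(bits_per_second // 8)   # 1000000 Bps, i.e., 1 MBps
```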
The device-management specification indicates the maximum number of supported devices, how they connect, and how difficult they are to configure.

There are two types of bus communication: serial and parallel. On a parallel bus, each device has its own interface to the bus, which until recently was the norm. On a serial bus, devices are tied together in a chain; the last one has to talk “through” the ones before it. This can obviously cause performance problems, but it allows more devices to be connected to the system, up to the limit of available serial addresses.
Serial ATA
The darling of Intel, Serial ATA is the company’s response to Apple’s FireWire (IEEE 1394) and its intended replacement for parallel ATA, the interface used by EIDE devices. Serial ATA is a serial bus that connects drives in a way that is software-compatible with the current ATA standard. Despite being a serial interface, it is designed to provide two point-to-point connections, one for each drive, eliminating the master/slave issue. Furthermore, it uses the same connector for both desktop and notebook drives, removing another point of confusion.
First-generation Serial ATA, which may be available later this year, provides 150 MBps of bandwidth per drive. Second-generation Serial ATA (Serial ATA II) will allow up to 300 MBps and will be backward-compatible with Serial ATA I. Contrast this with the 100 MBps per controller of ATA/100 and the not-yet-standardized 133 MBps of ATA/133, and you can see how little immediate impact first-generation Serial ATA will have. Admittedly, Serial ATA will let both drives that formerly shared a channel operate simultaneously, effectively doubling the available bandwidth. However, the current ATA bus is rarely saturated, given the small number of drives per controller and the sustained transfer rates of today’s IDE hard drives. This limit on drives per controller, along with Serial ATA’s one-meter maximum cable length, suggests that the technology is more of an incremental upgrade.
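To put the per-drive numbers in perspective, here is a rough sketch using only the figures quoted above (real-world throughput depends on the drives themselves, which rarely saturate either bus):

```python
# All figures in MBps, taken from the text.
ata100_per_controller = 100   # parallel ATA/100: shared by both drives on the cable
sata1_per_drive = 150         # Serial ATA I: a dedicated link per drive

# Worst case with two busy drives on one parallel channel:
print(ata100_per_controller / 2)   # 50.0 MBps each
# Serial ATA gives each drive its full link:
print(sata1_per_drive)             # 150 MBps each
```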
Nonetheless, Serial ATA controllers will be cheaper than SCSI controllers, and the smaller cables are easier to route than parallel ATA ribbon cables while allowing more airflow in the case. However, until ATA hard drives match the performance and durability of SCSI drives, Serial ATA will remain a desktop technology not much different from parallel ATA.
HyperTransport
This new bus standard from AMD was pioneered to replace the EV6 bus on motherboards. Since then, it has been adopted by a number of companies for a variety of roles. At its core, HyperTransport is a scalable, variable-bandwidth bus that uses prioritized data packets. Most buses share bandwidth with a time-slicing technique: if you have a 100-Mbps Ethernet card and a 56K modem on the same bus, the Ethernet card simply gets the full bandwidth for more of the time, and any lag is covered by data caches. Other buses use master/slave configurations in which multiple groups of devices can communicate, but the master can supersede the slave within each pairing, and no single device can use the full bandwidth of the bus.
HyperTransport replaces this master/slave arrangement with the ability for any one device to use the full available bandwidth, or for multiple devices to each use a fraction of it. The allocation can be reassigned dynamically based on a priority system, ensuring that key components receive the bandwidth they need.
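HyperTransport’s actual arbitration is defined by its specification; the following is only a toy sketch (with hypothetical device names) of the general idea of priority-weighted allocation, where one device may claim the whole bus or several may split it:

```python
def allocate_bandwidth(total_mbps, requests):
    """Grant bandwidth by priority.

    requests maps device name -> (priority, requested_mbps).
    Higher-priority devices are served first; a single device may
    consume the entire bus if nothing else is asking for it.
    """
    remaining = total_mbps
    grants = {}
    for name, (priority, want) in sorted(
            requests.items(), key=lambda kv: kv[1][0], reverse=True):
        grants[name] = min(want, remaining)
        remaining -= grants[name]
    return grants

# Hypothetical devices: the Ethernet card outranks the modem.
print(allocate_bandwidth(800, {"ethernet": (2, 100), "modem": (1, 0.056)}))
# {'ethernet': 100, 'modem': 0.056}
```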
HyperTransport has no single specified bandwidth, because the data width can be varied at manufacture. In practice, HyperTransport is a high-level bus that will typically connect other buses or systems. Motherboard manufacturers see great advantage in HyperTransport because it cost-effectively removes the PCI bus as the primary link in the I/O system. Expect to see HyperTransport appearing in a variety of multi-I/O devices. This should help decrease the cost of PDAs, PCs, and laptops as the industry standardizes on HyperTransport.
VLink
Developed by VIA Technologies, VLink is a dedicated 266-MBps motherboard bus that connects the memory controller and CPU with the other peripherals. This fixed-configuration bus competes with HyperTransport on the motherboard. Available only with VIA chipsets, it is an in-house solution to the bandwidth problems motherboards face. While VLink has performed admirably in its chosen role, it will likely require revision in the coming months as more high-bandwidth devices become standard.
A-Link
ATI’s response to VIA’s VLink is A-Link. It, too, is a dedicated 266-MBps motherboard bus that connects the memory controller and CPU with the other peripherals. Like VLink, it has so far performed its job as expected, but A-Link will need revision to keep up with the coming proliferation of high-bandwidth devices.
FireWire 2/IEEE 1394b
FireWire has become a favorite of the digital video crowd and of users who need fast external storage, thanks to its 400 Mbps of bandwidth and lengthy 4.5-m cables. FireWire 2 doubles the baseline speed to 800 Mbps and, in proposed optional variations, can reach over 1 Gbps at lengths of up to 100 meters over a variety of cabling. The new version is intended to compete with Fibre Channel and SCSI over IP in storage area networks (SANs) or to serve as an inexpensive short-hop gigabit network. Desktop versions of FireWire 2 will be backward-compatible with FireWire 1 and will carry forward the nonpowered iLink variant. Multiple wiring formats will be supported, ranging from CAT 5 to optical fiber.
As of this writing, it is not clear whether the new version has been approved as an IEEE standard. The IEEE Web site indicates a vote recommending acceptance of the standard, but there is no official notice of a final version on the FireWire site. If all goes well, we should see FireWire 2 devices on the market within six months.
The memory factor
As processor clock speeds have increased, memory speed has become the main limitation on computer performance. I’ll examine some of the newer memory technologies and how they affect bus development.
DDR II
DDR, or DDR I as it will soon be known, provided a cost-effective way to increase bandwidth over SDRAM. DDR is currently available at effective speeds of up to 333 MHz. A 400-MHz version of DDR I is in development, but it requires such precise tolerances that it is not expected to be cost-effective.
DDR II will be a revamped implementation of DDR based on a smaller 0.13-micron process. Basic operating commands will be the same, but DDR II will have double-size data prefetch, multiple burst lengths, and read/write latency settings. Also, memory latency will no longer include half-cycle latencies, which will simplify timing issues. PC versions of DDR II will initially be available at 400 MHz (3.2 GBps), with 533 MHz (4.3 GBps) and 667 MHz (5.4 GBps) to follow. Higher-speed DDR II for video and switching devices will be available at speeds of up to 1 GHz.
DDR II is expected to provide a noticeable performance boost, even at the same bandwidth, because asynchronous memory and front-side bus speeds have so far proven troublesome. 533-MHz DDR II has no more bandwidth than dual-channel 266-MHz DDR I, but it should be far more efficient on the new Pentium 4’s 533-MHz interface than the current 333-MHz DDR I. It is uncertain how soon DDR II will reach the market; however, it will be needed within the next year to keep from bottlenecking the processors that will be available by then.
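All the bandwidth figures in this section come from the same arithmetic: effective transfer rate times the width of the module’s data path. A minimal sketch, assuming the standard 8-byte (64-bit) DIMM data path:

```python
def module_bandwidth_gbps(effective_mhz, bus_bytes=8, channels=1):
    """Peak bandwidth: transfers/sec x bytes per transfer x channels."""
    return effective_mhz * 1e6 * bus_bytes * channels / 1e9

print(module_bandwidth_gbps(400))               # DDR II 400: 3.2 GBps
print(module_bandwidth_gbps(533))               # DDR II 533: ~4.3 GBps
print(module_bandwidth_gbps(667))               # DDR II 667: ~5.3 GBps (often rounded to 5.4)
print(module_bandwidth_gbps(266, channels=2))   # dual-channel DDR I 266: ~4.3 GBps
```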
RDRAM
Current-generation RDRAM operates at speeds of 800 MHz (1.6 GBps). The upcoming increment of RDRAM operates at 1066 MHz to provide 4.2 GBps in a fixed dual-channel configuration (2x 2.1 GBps). The 1066-MHz speed will provide a synchronous connection to the 533-MHz Pentium 4 front-side bus, which is far superior to using the 800-MHz variant. How well it will compare to 533-MHz DDR II will be of great interest, because DDR II will require fewer components and have better latency. RDRAM’s advantage is that 1066 MHz will be reaching the market soon, with 1200 MHz (2.4 GBps per channel) on the far horizon.
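RDRAM channels are 2 bytes wide, so the quoted numbers fall out of the same formula; a quick sketch:

```python
def rdram_gbps(effective_mhz, channels=1):
    """Peak bandwidth for 2-byte-wide RDRAM channels."""
    return effective_mhz * 1e6 * 2 * channels / 1e9

print(rdram_gbps(800))                # PC800: 1.6 GBps per channel
print(rdram_gbps(1066, channels=2))   # dual-channel 1066 MHz: ~4.26 GBps total (2 x ~2.1)
print(rdram_gbps(1200))               # 1200 MHz: 2.4 GBps per channel
```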
Rambus II
While Rambus II is not the exact moniker of the new Yellowstone technology to be introduced by Rambus, it will do for our purposes here. Yellowstone is a technique for increasing the clock speed of RDRAM. Rather than the double data rate format of two transfers per clock used by current RDRAM and DDR, Rambus II will use an octal data rate (ODR) of eight transfers per clock. This allows an effective 3.2 GHz from a mere 400-MHz clock, and Yellowstone will scale to an effective 6.4 GHz. It is intended to use only a 1-byte data path, but at these speeds that still provides an initial 3.2 GBps per channel, far superior to current RDRAM, with final speeds of 6.4 GBps per channel.
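The octal-rate arithmetic works out as follows (a sketch; the 800-MHz figure is simply the clock implied by Yellowstone’s stated 6.4-GHz effective ceiling):

```python
def odr_gbps(clock_mhz, transfers_per_clock=8, path_bytes=1):
    """Effective bandwidth when each clock cycle carries multiple transfers."""
    return clock_mhz * 1e6 * transfers_per_clock * path_bytes / 1e9

print(odr_gbps(400))   # 400-MHz clock x 8 transfers x 1 byte = 3.2 GBps
print(odr_gbps(800))   # scaling the clock to 800 MHz yields 6.4 GBps
```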
High bandwidth will be welcome, but latency may be a problem. Cost will also be a concern for this new technology, as an octal clock will require extremely tight tolerances. While manufacturing will be somewhat easier (given the slower core clock), the intended low voltages will make it difficult to sense the minute signal shifts an octal clock requires. Only time will tell how well Rambus II will perform in the market.