The Small Computer System Interface (SCSI) is a high-performance device interface that has undergone many changes since it was introduced in the mid-1980s. In this Daily Drill Down, I will go over the different SCSI specifications, compare the format to IDE and FireWire devices, and offer some Web resources for those of you interested in learning more about SCSI.
Originally, SCSI was targeted toward PCs, Apple Macintosh computers, UNIX workstations, and minicomputers but not mainframes, hence the “small” in its name. SCSI was designed, from the beginning, to support hard drives, scanners, optical drives, and other high-capacity/high-bandwidth devices not necessarily mounted inside the computer.
SCSI was the first widely accepted format to put the controller on the device itself, requiring only a host adapter to transfer data between the computer and the hardware. The large number of pins allowed many devices to communicate reliably over a long distance. This design made SCSI a natural fit for RAID, where many drives share a single bus.
Because SCSI is a high-performance product, SCSI components are often manufactured to a high tolerance and possess a long operating life. Western Digital SCSI drives have warranties up to five years long; no IDE drive I’ve seen has more than a three-year warranty. The high performance of SCSI devices can also be seen in drive speeds. While IDE devices have a maximum rotation speed of 7,200 rpm with sustained transfer rates around 35 MB/second, SCSI drives are readily available at 10,000 rpm with sustained transfer rates of 45 MB/second. Even the on-board cache is faster on SCSI devices; IDE peaks at 100 MB/second while SCSI reaches up to 320 MB/second.
Performance does not come cheap, however. Apple adopted SCSI with the Macintosh computer to provide a flexible, high-performance bus, earning it a reputation for performance well ahead of the times. It also earned a reputation for being painfully expensive, as SCSI components were, and are, much more costly than their alternatives. Apple executives were criticized for using such expensive parts but responded by saying that the Macintosh was designed to be the best, and nothing else did the job better. If you need performance and have the money, SCSI is still the fastest out there.
SCSI is also one of the easiest technologies to implement. Install a host adapter and plug the devices into the chain. The last device in the chain must be terminated; newer devices are self-terminating, while earlier devices required you to plug in a terminator. If you want a device to be bootable, use a jumper to set it to ID 0 and go.
SCSI has gone through seven distinct generations over the years as additional features were included to support new technology. The number of variants and nicknames causes confusion. Here’s a description of the various SCSI types.
SCSI-1
SCSI-1 (also known as “narrow” SCSI) was based on an 8-bit bus and relied on asynchronous transfers to transmit data and commands. It used a 5-MHz clock to reach a maximum transfer rate of 5 MB/second. While the transfer rate may seem especially slow today, up to eight devices (including the host adapter) could be linked on a single chain. Those devices could fit a wide variety of needs, making it an excellent choice for the times. You should think of SCSI-1 as a predecessor to USB, not just a hard drive interface.
This format relied on a low-density 50-pin internal connector and a low-density 50-pin external connector. (It’s also known as the Centronics connector. To see a Centronics connector, take a look at your printer cable.) SCSI-1 is now considered obsolete, but it was used with many devices: hard drives, scanners, tape drives, and the predecessor to the CD-R, the write-once-read-many (WORM) drive.
SCSI-1 had two ways to drive the electrical signal through the bus: single ended (SE) and differential. SE devices could use up to a six-meter cable, while differential devices (now called High Voltage Differential, or HVD) could use up to a 25-meter cable because differential signaling is less sensitive to noise. Because they used different pinouts, SE and HVD devices could not be interchanged.
SCSI-2
SCSI-2, or “fast” SCSI, was something of a hodgepodge that first showed up around 1990. At its core was a “fast” system using a 10-MHz bus to enable higher throughput. It also allowed for both the “narrow” 8-bit bus and a new “wide” 16-bit bus. It allowed asynchronous commands with synchronous data transfers, another new feature. SCSI-2 supported a 10-MB/second transfer rate for narrow systems and 20 MB/second for wide systems. The downside of all these new features was that it took until 1994 for SCSI-2 to be fully standardized.
SCSI-2 used the low-density 50-pin internal connector, low-density 50-pin external connector (non-Centronics), and a high-density 50-pin connector.
As for typical devices, SCSI-2 was commonly used for low-speed devices, at least by today’s standards. CD-ROMs, scanners, tape drives, Zip drives, Jaz drives, and other external non-hard drive devices were, and still are, common. The bandwidth of SCSI-2 is more than sufficient for CD-R and CD-RW drives.
SCSI-2 was the only type of host adapter available in an ISA format. SCSI-2 devices on an ISA controller wouldn’t reach the full potential of the SCSI bus, however, as the ISA bus has a maximum of 8 MB/second, well below SCSI-2’s 10 MB/second and wide SCSI-2’s 20 MB/second.
SCSI-3 (Fast Wide SCSI)
Out of the chaos of SCSI-2 came SCSI-3. The fragmented and very slow process that created SCSI-2 inspired a fast and highly organized development for the next generation. As a result, SCSI-3 became a standard in 1996, a mere two years after SCSI-2 was finalized. The new process separated the bus into different components: the physical/electrical properties of the interface, a primary command set, an overall architectural model, and specific protocols. The specific protocols allowed for improvements in various technologies without a total rewrite of the SCSI standard. For example, in this specification, hard drives are segmented into blocks and use the Block Command protocol, while tape drives are linear devices that use the Stream Command protocol.
To try to eliminate the confusion of the different SCSI architectures, the generation number was removed and the official standard became simply “SCSI.” The differentiation was supposed to be made by the designation preceding “SCSI.” Unfortunately for consumers, this did not stop vendors from referring to it as SCSI-3. To add to the confusion, consumers were bombarded with devices called SCSI-3, Ultra SCSI, and Ultra SCSI-3.
The significant difference between Fast Wide SCSI and previous versions was the addition of eight more data bits to the bus, now referred to as a 16-bit bus. This change doubled the data rate to 20 MB/second; the bus speed, however, was the same as before. Fast Wide SCSI was available in SE and HVD versions. Because of the wide bus, it needed more conductors. This led to the development of a 68-pin very-high-density connector (VHDC), sometimes incorrectly called a SCSI-4 or SCSI-5 connector, and an 80-pin single connector attachment (SCA) that includes a power feed, enabling hot swapping on wide-bus devices.
Ultra SCSI
The core technology addition to Ultra SCSI, also known as Fast-20, was a 20-MHz bus speed. Like Fast Wide SCSI, Ultra SCSI supported both a narrow 8-bit and a wide 16-bit bus with transfer rates of 20 MB/second and 40 MB/second, respectively.
All types of devices can take advantage of Ultra SCSI. While it is a bit behind the times, its performance is on par with current ATA/33 devices, with less impact on the processor.
Ultra2 SCSI
Yet another extension of the “fast” SCSI core, Ultra2 SCSI added a 40-MHz bus. Because data at the increased transfer rates became vulnerable to signal degradation over long cables, a new signaling process was developed, called Low Voltage Differential (LVD). LVD allowed the bus speed to double again.
In keeping with previous SCSI generations, Ultra2 is available in both narrow (40 MB/second) and wide (80 MB/second) buses. It was standardized in 1999 as part of a new timetable intended to rapidly increase the performance of SCSI devices to compete with the low-cost ATA/66 devices that were entering the market. The majority of Ultra2 devices are hard drives or other forms of fast storage.
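The transfer rates quoted for the parallel-SCSI generations above all follow from the same arithmetic: peak MB/second equals the bus clock in MHz times the bus width in bytes. Here is a back-of-the-envelope sketch using the clocks and widths from this article (the generation labels are informal shorthand, not official standard names):

```python
# Peak parallel-SCSI throughput: MB/s = clock (MHz) x bus width (bytes).
# Clocks and widths are the figures quoted in the article; this is an
# illustrative model, not a spec table.
generations = {
    "SCSI-1":      (5, 8),    # 5-MHz clock, 8-bit ("narrow") bus
    "Fast":        (10, 8),
    "Fast Wide":   (10, 16),
    "Ultra":       (20, 8),
    "Wide Ultra":  (20, 16),
    "Ultra2":      (40, 8),
    "Wide Ultra2": (40, 16),
}

def throughput_mb_s(clock_mhz, bus_bits):
    """Peak transfer rate in MB/s for a parallel bus."""
    return clock_mhz * (bus_bits // 8)

for name, (clock, bits) in generations.items():
    print(f"{name}: {throughput_mb_s(clock, bits)} MB/s")
```

Running this reproduces the rates in the text: 5, 10, 20, 20, 40, 40, and 80 MB/second. (Ultra 160 and Ultra 320 break the simple pattern by transferring data on both edges of the clock, so they are omitted here.)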
Ultra3 SCSI (Ultra 160)
Ultra 160 is the first SCSI generation to indicate bandwidth in the name. Ultra 160 is only available as a 160-MB/second wide bus, and no High Voltage Differential (HVD) devices are supported. Like Ultra2, it is natively an LVD system and also supports single-ended devices. It introduces commands in a packet format, and data integrity is protected by the introduction of cyclic redundancy checks (CRC) to verify the data.
Ultra 160 targets high-speed devices or systems. Normal desktop computers will not need the performance of Ultra 160 for some time, but high-end workstations performing video editing benefit from the high speed. Internet servers are also able to take advantage of Ultra 160 disk arrays to provide an uninterrupted stream of data to clients. A Gigabit Ethernet segment can transfer a maximum of 125 MB/second, allowing an Ultra 160 system with a RAID array to take full advantage of a single gigabit connection. There’s even enough performance left over to operate the server.
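The gigabit figure above is easy to verify: a Gigabit Ethernet link carries at most 1,000 Mbit/second, and dividing by 8 bits per byte gives 125 MB/second. This quick sketch uses the raw line rate and ignores protocol overhead:

```python
# Gigabit Ethernet line rate vs. Ultra 160 bandwidth (raw figures,
# ignoring Ethernet/TCP overhead).
GIGABIT_MBIT_S = 1000          # Gigabit Ethernet line rate in Mbit/s
BITS_PER_BYTE = 8

gigabit_mb_s = GIGABIT_MBIT_S / BITS_PER_BYTE   # 125.0 MB/s
ultra160_mb_s = 160

headroom = ultra160_mb_s - gigabit_mb_s
print(f"Gigabit Ethernet peak: {gigabit_mb_s} MB/s")
print(f"Ultra 160 headroom beyond one gigabit link: {headroom} MB/s")
```

The 35 MB/second of headroom is the “performance left over to operate the server” mentioned above.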
Ultra 320
Ultra 320 continues along the path set out by Ultra 160 as an LVD architecture available only in a wide bus format with no support for HVD devices. Ultra 320 has a 320-MB/second bandwidth and uses the same packet-based command system and CRC method of ensuring data integrity as Ultra3 SCSI.
Ultra 320 is aimed at systems that need to move as much data as possible. On the commercial front, this fills the niche for file servers on multiple gigabit segments. It is of particular value to high-speed backup systems that can safely back up or restore multiple gigabit-capable servers at once, minimizing downtime or performance hits. Ultra 320 devices are slowly trickling onto the market as part of enterprise-level systems.
Comparison to IDE
IDE stands for Integrated Drive Electronics, which in plain English means the device controller is mounted on the drive rather than on a separate controller card. It was the result of a 1986 collaboration between Compaq and Western Digital to develop a cheap drive with good performance. They decided to limit the number of pins and the cable length, as it was intended for lower-end systems that would not need a large number of internal devices.
Because each device has its own controller, only two devices can share a chain without excess interference. Newer IDE host adapters can operate two chains, each with a master and a slave. The master can interrupt the slave device at any time, making it inappropriate for the primary system drive or sensitive devices like CD-R, CD-RW, and tape drives to be slaves. The maximum of four devices per controller (two chains with two devices each) limits the number of devices an IDE system can handle.
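The device-count gap described above is worth spelling out. IDE tops out at two chains of two devices; a SCSI chain offers one device per bus ID, minus the ID the host adapter itself occupies (the article notes the eight-device SCSI-1 limit includes the adapter). A small sketch of that arithmetic:

```python
# Maximum attachable devices per controller, per the limits in the text.
ide_max = 2 * 2             # two channels x (master + slave)
narrow_scsi_max = 8 - 1     # 8 IDs on an 8-bit bus, minus the host adapter
wide_scsi_max = 16 - 1      # 16 IDs on a 16-bit bus, minus the host adapter

print(f"IDE: {ide_max}, narrow SCSI: {narrow_scsi_max}, wide SCSI: {wide_scsi_max}")
```

So even narrow SCSI nearly doubles IDE’s device count, and a wide bus almost quadruples it.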
For a long time, CD-ROM drives were only available with SCSI interfaces. This was because CD-ROMs were high-end components, and also because IDE simply did not support the commands needed. It took the creation of ATAPI (the ATA Packet Interface) to enable CD-ROM drives on IDE. Non-ATAPI CD-ROMs required special software drivers or custom interface cards to operate. If you check older OEM sound cards, you may find a SCSI CD-ROM controller port.
The current generation of integrated IDE host adapters requires some management by the computer’s processor, creating a load on the system. Early CD-Rs were unreliable on EIDE interfaces because they would often run out of data mid-write when the processor got hung up on other tasks. The various implementations of direct memory access (DMA) protocols helped transfer data to the computer’s RAM from devices with less management by the processor. Transfer rates increased from 11.1 MB/second to 16.66 MB/second, a significant increase, but still a bottleneck for PCs.
In 1996, the ATA/33 specification utilized newer DMA techniques to reach 33 MB/second transfer rates. Also known as ultra DMA or UDMA/33, it was completely backward-compatible with previous devices and became the standard for PC hard drives.
1999 saw the introduction of the improved ATA/66 format. The 66-MB/second system relies on 40-pin connectors similar to previous IDE formats but uses an 80-conductor cable to ensure signal integrity. To operate at 66 MB/second, only ATA/66 devices can exist on a channel. The controller can still operate earlier IDE devices, but the existence of non-ATA/66 devices forces a channel to drop down to ATA/33 speeds. ATA/66 has been widely adopted and is standard on many new computers.
In 2000 ATA/100 was introduced. Early products were sometimes called ATA/66+. It continues the use of the 80-conductor cable and provides transfer rates up to 100 MB/second. IDE devices are quite inexpensive, and due to the improved DMA functions, do not impact system performance as much as in the past. IDE drives can be used in a RAID configuration but with a four-drive limit.
FireWire
Developed by Apple and ratified by the IEEE as IEEE 1394, FireWire is a high-bandwidth, hot-swappable interface that supports up to 63 devices at a transfer rate of 50 MB/second. It has not been widely accepted, as it competes with SCSI, a highly established interface. Because it is also a high-performance design and suffers from low initial sales volume, it is unable to compete on price the way IDE can. FireWire could theoretically be used for a RAID device, but with its relatively low bandwidth compared to ATA/100 and any of the recent SCSI generations, such an array would benefit only from the redundancy, not the speed.
FireWire is targeted at external hard drives and video editing systems. A number of FireWire video cameras and digital video editing boards are available.
Fibre Channel
I’ve included Fibre Channel (FC) for completeness. FC carries the SCSI command set over a gigabit-speed serial network rather than a parallel cable. It scales from 33 MB/second to 500 MB/second across ranges up to 10 km. As a networked system, it can use frames of varying sizes, effectively changing the bus “width.” FC is more a data distribution network than an interface, as it relies on SCSI for device operation. Because the FC network can support a large number of FC device controllers, it supports an essentially unlimited number of devices. FC was introduced in 1988, well before Gigabit Ethernet, and was standardized in 1994. The point to remember is that an FC device is really a SCSI device with an optical FC connector. You can find Ultra, Ultra2, and Ultra 160 SCSI FC devices.
For more information
- Organization that defines SCSI standards under ANSI
- SCSI Trade Association
- Organization that defines ATA/IDE standards under ANSI
- Apple’s FireWire site
- IEEE site with IEEE 1394/FireWire information
- Fibre Channel Industry Trade Association
- For a comparison of SCSI types, see the SCSI Trade Association overview of standards.
James McPherson has served his time in the trenches of technical support, honed his skills as a network administrator, and still managed to complete a B.S. in Engineering. After working for four different companies without changing offices, he is currently a freelance consultant and the bane of computer salesmen everywhere.