
How fast is your network client, really? The facts behind the numbers

In this Daily Drill Down, James McPherson walks you through networking arcana: peak and total bandwidth, packet collision avoidance and detection, and more. When you've finished reading, you'll know how fast your network client really is—and why.


Chances are, in the next few years, you’ll be connecting client machines to a wireless LAN (WLAN). As hardware prices drop, performance is on the rise. But just what is the real performance of wireless or even traditional wired networks? If you’re new to networking, you may be surprised. In this Daily Drill Down, I’ll cover the real network performance of LANs and WLANs compared to their advertised bandwidth.

Latency and bandwidth
Computer network performance is measured with two values: latency and bandwidth. Latency—which refers to the time it takes for a particular piece of data, called a packet, to get from point A to point B—is usually measured in milliseconds (ms). Think of latency as response time; any request you make to a computer takes that much time before you get a response. The most common tool used to test latency is the ping command, which sends packets out to a specific machine and times the response. Therefore, latency is sometimes referred to as the ping time.
For more information about ping, read the TechProGuild Daily Feature “New to networking? Introducing the ping command.”
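If you'd like to see the idea in code, here is a minimal Python sketch that times a round trip to a host. It isn't a true ICMP ping—it simply times a TCP handshake—but it illustrates how latency is measured in milliseconds. The host and port are placeholders, not anything specific to your network.

```python
# Rough latency check: time a TCP handshake to a host.
# Not a true ICMP ping, just an illustration of measuring
# round-trip response time in milliseconds.
import socket
import time

def rough_latency_ms(host="www.example.com", port=80, timeout=2.0):
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass  # connection established; we only care about the elapsed time
    return (time.perf_counter() - start) * 1000.0

if __name__ == "__main__":
    print(f"Round trip: {rough_latency_ms():.1f} ms")
```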
While latency refers to response time, bandwidth refers to data speed. The bandwidth of a network indicates how much data, in bits, flows from point A to point B in a given time frame. Because networks are fast, the data rate is measured in millions of bits (megabits) per second (Mbps). Today’s ultrahigh-speed connections work at billions of bits (gigabits) per second (Gbps). Notice that these values are in bits, not bytes. As there are eight bits to the byte, 8 Mbps of bandwidth can transfer 1 MB every second.
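As a quick sanity check on that arithmetic, here is the bits-to-bytes conversion in a couple of lines of Python:

```python
# Bandwidth is quoted in bits per second; file sizes are in bytes.
# Eight bits make a byte, so divide by 8 to convert.
def mbps_to_mb_per_second(mbps):
    return mbps / 8.0

print(mbps_to_mb_per_second(8))    # 1.0 MB per second
print(mbps_to_mb_per_second(100))  # 12.5 MB per second on Fast Ethernet
```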

How duplexing affects peak and total bandwidth
The most common short-distance (under a kilometer) type of wired connection is Ethernet. Ethernet runs at 10 Mbps, Fast Ethernet at 100 Mbps, and Gigabit Ethernet at—you guessed it—1 Gbps. So far, your network speed seems very cut-and-dried. Now let’s add some complexity.

Most network communication is done over wires, using what is called a carrier signal. Basically, a carrier signal is a timing signal that lets both ends know when to talk. In some cases, your network hardware can handle bidirectional communication, or sending and receiving information at the same time (called full-duplex). The trouble is, sometimes your network hardware is configured to handle only one direction at a time (called half-duplex). On a half-duplex 10-Mb Ethernet link, each signal still moves at 10 Mbps, but because traffic flows in only one direction at a time, overall performance drops and certain kinds of latency issues crop up. Note that duplexing differences affect total, not peak, bandwidth.

Peak bandwidth is the most data you can send in one direction. Total bandwidth is the most data you can send in both directions. A 10-Mb half-duplex device has a peak bandwidth of 10 Mb and a total bandwidth of 10 Mb. A full-duplex card also has a 10-Mb peak bandwidth but can reach 20 Mb of total bandwidth when sending and receiving at the same time.
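Here is a tiny sketch of those two definitions, assuming the simple model described above (a half-duplex device uses one direction at a time, a full-duplex device drives both at once):

```python
# Peak bandwidth: the most data the device can move in one direction.
# Total bandwidth: the most it can move in both directions combined.
def total_bandwidth_mbps(peak_mbps, full_duplex):
    # A full-duplex device can send and receive at peak simultaneously;
    # a half-duplex device uses only one direction at a time.
    return peak_mbps * 2 if full_duplex else peak_mbps

print(total_bandwidth_mbps(10, full_duplex=False))  # 10 -- half-duplex
print(total_bandwidth_mbps(10, full_duplex=True))   # 20 -- full-duplex
```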

Therefore, the performance hit of half-duplex doesn’t affect bandwidth or latency when only a single operation is running. After all, that operation transmits in only one direction at a time anyway. It’s when you perform multiple network functions at once that the half-duplex penalty becomes noticeable. The latency generated by switching from a “send” state to a “receive” state doesn’t affect ping times much—those are round-trip measurements—but it does affect packets that care about how long it takes them to get from the moment they are created to the moment they are delivered.

Examples
Let’s say you’re using a half-duplex 10-Mb Ethernet device to ping a server. When ping is the only application or service using the network, your ping times will be similar to a full-duplex device, which we’ll call 20ms. You then download a file from the server, and again, your bandwidth and transfer times will be similar to a full-duplex device, which will be 9.5 Mbps for this example.

Now you decide to ping while transferring another file. Even on a full-duplex device, we expect the bandwidth of the file transfer to decrease by the bandwidth required by the returning ping packets, plus a bit of additional latency (1-5ms) from the extra work the computer does to keep track of the two transfers. For this example, the most bandwidth ping could use will be 2 Mbps, giving us a peak file transfer bandwidth of 7.5 Mbps (9.5 Mbps incoming file minus 2 Mbps incoming ping) on a full-duplex device. (See Table A for a summary.)
Table A: How network operations affect file transfer rates

Type of network      Operation                Actual peak bandwidth   File transfer rate
10-Mb half-duplex    File transfer            9.5 Mbps                9.5 Mbps
10-Mb full-duplex    Ping and file transfer   9.5 Mbps                7.5 Mbps
10-Mb half-duplex    Ping and file transfer   9.5 Mbps                5.5 Mbps
The half-duplex device gets hit twice by ping; it has to send out 2 Mbps of data and receive 2 Mbps of data, and it can’t do both at the same time. That leaves only 5.5 Mbps (9.5 Mbps incoming file minus 2 Mbps incoming ping minus 2 Mbps outgoing ping = 5.5 Mbps) of bandwidth for the file transfer.
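The arithmetic behind Table A can be reproduced in a few lines. The 9.5-Mbps and 2-Mbps figures are the example values used above, not measurements:

```python
# Example figures from the scenario above (illustrative, not measured).
incoming_file_mbps = 9.5  # actual peak bandwidth of the file transfer
ping_mbps = 2.0           # bandwidth consumed by ping in each direction

# Full-duplex: only the incoming ping replies compete with the incoming file.
full_duplex_rate = incoming_file_mbps - ping_mbps              # 7.5 Mbps

# Half-duplex: outgoing requests and incoming replies both share the one
# active direction with the file transfer, so ping is charged twice.
half_duplex_rate = incoming_file_mbps - ping_mbps - ping_mbps  # 5.5 Mbps

print(full_duplex_rate, half_duplex_rate)
```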

Ping times, however, remain unaffected. A simple half-duplex process would be to pause for 1-3ms after sending a packet to see if the other device needs to talk. Because the device alternates evenly between sending and receiving, ping times don’t suffer: the server sends the ping packet straight back as soon as it receives it, and the 1-3ms pause while the devices switch direction happens at roughly the same time the pinged server is turning the packet around, so it isn’t perceptible.

Let’s say that after reading your file you move on to a videoconference with a coworker in another state. The connection between your offices is a standard leased line with 1.5 Mb of bandwidth. That bandwidth shouldn’t be a problem because your videoconferencing tool works adequately on 512 Kb—adequately if you are on a full-duplex connection, that is.

Let’s look at the situation: You’re sending and receiving video and audio data. With a half-duplex device, you can send or receive but not both. Your picture becomes jerky and the sound gets choppy from all the switching back and forth. Buffering packets and sending them at less than real time doesn’t help much, unless you don’t mind a lag between the time you speak and the time the other person hears you.

Three-fourths duplex wireless
Most wireless devices would qualify as half-duplex—sort of. Wireless networks using Direct Sequence Spread Spectrum (DSSS) technology broadcast their signals simultaneously over different frequencies, called channels. On any one channel, wireless network hardware can either send or receive, but it cannot do both. Notice the single-channel disclaimer. DSSS wireless devices use multiple channels, which lets them communicate bidirectionally using one or more carrier signals for each direction. Therefore, the device can send and receive simultaneously, but only at the expense of peak bandwidth.

As with normal half-duplex cards, total bandwidth and peak bandwidth are identical. However, peak bandwidth can be reduced to enable a kind of full-duplex operation. For 802.11b wireless—which has a theoretical peak and total bandwidth of 11 Mbps—you could either send or receive 11 Mbps in half-duplex mode or switch to a full-duplex operation that lets you send at 5.5 Mbps and receive at 5.5 Mbps simultaneously, or any mixture that totals 11 Mb. If you put several half-duplex network cards in a single computer, you would have the wired equivalent of DSSS.
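In other words, whatever you allocate to sending comes out of what you can receive. A small sketch of that budget, using 802.11b’s theoretical 11 Mbps:

```python
# 802.11b's theoretical 11 Mbps can be split between directions,
# but send + receive can never exceed the channel total.
CHANNEL_TOTAL_MBPS = 11.0

def receive_budget(send_mbps, total=CHANNEL_TOTAL_MBPS):
    if not 0 <= send_mbps <= total:
        raise ValueError("send rate must be between 0 and the channel total")
    return total - send_mbps

print(receive_budget(11.0))  # 0.0 -- pure half-duplex sending
print(receive_budget(5.5))   # 5.5 -- the symmetric "full-duplex" split
```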

Bandwidth theory versus reality
In addition to the effect of duplexing on network speed, there is the difference between theoretical bandwidth and actual bandwidth. Theoretical bandwidth applies to a perfect world, where everything works exactly as conceptualized, from the voltage of devices down to the spin of electrons. In reality, you can expect some loss in bandwidth due to a circuit not running at full speed or because of lost packets resulting from packet collisions.

Packet loss occurs because few network interfaces can talk to more than one other device at a time. Because these interfaces talk really, really fast, this inability to hold many simultaneous conversations isn’t normally an issue. However, when a lot of devices (and two is a lot) try to talk at once, packets get ignored, forgotten, or lost.

Typically, packet collision is dealt with by collision-detection routines. Collision detection is a simple process in which network devices keep track of packets entering and leaving and determine when they go missing. When a device senses that collisions are occurring on the network, it responds by slowing down the timing signal. This slowdown does reduce bandwidth below the maximum, but the loss is far smaller than the bandwidth that would be wasted retransmitting the same packets over and over.
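The “slow down after a collision” idea looks roughly like the following sketch, loosely modeled on Ethernet’s binary exponential backoff. Real collision detection lives in the network hardware; send_frame and collision_detected here are hypothetical stand-ins.

```python
import random
import time

def send_with_backoff(frame, send_frame, collision_detected, max_attempts=10):
    """Toy model of collision detection with exponential backoff."""
    for attempt in range(max_attempts):
        send_frame(frame)
        if not collision_detected():
            return True
        # Wait a random number of slots; the range doubles after each
        # collision, so a busy network backs off harder and repeat
        # collisions become less likely.
        slots = random.randint(0, 2 ** min(attempt + 1, 10) - 1)
        time.sleep(slots * 0.0000512)  # 51.2-microsecond slot on 10-Mb Ethernet
    return False  # give up after too many collisions
```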

Wireless reality
In a wired world, every device has its own link to the hub. However, the wireless world has to share signaling channels if there are more clients than channels. (The 802.11b standard for DSSS divides the 2.4-GHz band into fourteen 22-MHz channels; the U.S. standard uses 11 channels, while Europe and Asia use 13.) Packet collision will occur on a wireless network in the same way it would on a wired network up to the point that the number of clients equals the number of channels. Once that number is exceeded, packet collision at the hub level ramps up rapidly as the data signals become mixed and mangled.

A normal hub has between six and 24 ports to connect network devices that it manages with collision detection. Wireless networks use base stations that, like wired hubs, connect multiple network devices. The big difference is that in the case of 802.11b wireless, the specification allows up to 63 clients per base station. Compared to a normal hub, the wireless base station is at 2.5 times capacity, adding probably a fivefold increase in collisions. In these stressed situations, packet-collision detection would not be very effective because the overlapping messages would block the “not so fast” message used to manage the connections.

That is where packet-collision avoidance comes in. Whereas packet-collision detection waits for an accident and then puts out a yellow flag, packet-collision avoidance lowers the speed limit. This technique can be compared to a highway speed limit that drops automatically during rush hour to prevent accidents, rather than waiting for accidents to occur before slowing down the road. However, if collision avoidance is always on, it holds peak bandwidth below the theoretical maximum at all times, maintaining a lower rate that stays functional under virtually any conditions.
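As a rough sketch, collision avoidance means listening first and deferring before every transmission, rather than reacting after a crash. The channel_is_busy and transmit callables below are hypothetical stand-ins for the radio layer, and the slot and window sizes are illustrative.

```python
import random
import time

def send_with_avoidance(frame, channel_is_busy, transmit, slot_s=0.00002):
    """Toy model of collision avoidance: listen, back off, then talk."""
    # Defer while someone else is already talking.
    while channel_is_busy():
        time.sleep(slot_s)
    # Even on an idle channel, wait a random number of slots so two stations
    # that both saw "idle" are unlikely to start at the same instant.
    time.sleep(random.randint(0, 31) * slot_s)
    transmit(frame)
```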

Effect of packet-collision avoidance
Because of this lowering of throughput, packet-collision avoidance is used only in particular cases. Wired networks use collision avoidance only where there are shared wires, like in some loop-based networks, and collision detection isn’t feasible. Most wireless networks use collision avoidance; the most notable example is the 802.11b protocol.

The collision-avoidance routines take 802.11b’s theoretical 11 Mbps of bandwidth and reduce it to a functional maximum bandwidth of 6 Mbps. The performance hit seems very harsh when there aren’t very many clients, but compared to what would happen when 63 clients share channels to communicate with a single hub, it is well worth it. Even collision detection would have problems at that level because the device would almost always be in collision mode.

Evolutionary, not revolutionary
Overall, wireless won’t change how we do what we do, but it may change where we do it. Ubiquitous wireless connections will be a wonderful convenience as long as no one has unreasonable expectations or unreasonable fears. Don’t expect wireless, or any network, to live up to the glossy product brochures. Instead, understand how the different aspects of networking interact so that you can make an informed judgment about the effectiveness of your network and perform effective troubleshooting.
