There are a few tools available to measure network performance metrics such as throughput, packet loss and jitter (also known as delay variation). One tool I have used from time to time is iperf, an open source utility. It is simple to run, yet provides enough options and information to help you troubleshoot network issues.
Before starting, a quick review of the transport layer: an application can use either TCP or UDP, depending on its requirements. In general, if the application requires error-free, in-order delivery of packets, TCP is the best fit; if it can tolerate some packet loss, UDP will probably suffice. The trade-off is that TCP's reliability mechanisms can make delivery slower, whereas UDP may be faster but makes no guarantee that the packets will arrive at the destination at all. Email and web browsing use TCP. VoIP can tolerate some packet loss (usually around 5%) and therefore uses UDP.
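The contrast between the two transports is easy to see with a few lines of Python's standard socket API (a loopback sketch, nothing to do with iperf itself; the ports and messages are arbitrary):

```python
import socket
import threading

# TCP echo server: bind and listen up front, then accept in a background thread.
tcp_srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
tcp_srv.bind(("127.0.0.1", 0))        # port 0 lets the OS pick a free port
tcp_srv.listen(1)
tcp_port = tcp_srv.getsockname()[1]

def tcp_echo():
    conn, _ = tcp_srv.accept()
    conn.sendall(conn.recv(1024))     # echo one message back over the stream
    conn.close()

threading.Thread(target=tcp_echo, daemon=True).start()

# TCP client: must connect (three-way handshake) before any data moves.
c = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
c.connect(("127.0.0.1", tcp_port))
c.sendall(b"hello tcp")
tcp_reply = c.recv(1024)
c.close()

# UDP echo server: no listen/accept; each datagram stands alone.
udp_srv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp_srv.bind(("127.0.0.1", 0))
udp_port = udp_srv.getsockname()[1]

def udp_echo():
    data, addr = udp_srv.recvfrom(1024)
    udp_srv.sendto(data, addr)

threading.Thread(target=udp_echo, daemon=True).start()

# UDP client: no handshake, and nothing guarantees the datagram arrives
# (on loopback it normally will).
u = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
u.sendto(b"hello udp", ("127.0.0.1", udp_port))
udp_reply, _ = u.recvfrom(1024)
u.close()

print(tcp_reply, udp_reply)
```

Note that the TCP side cannot send a byte until the connection is established, while the UDP side simply fires a datagram at an address and hopes.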
The basics of using iperf are simple. Install it on a server and a client, then run iperf -s on the server and iperf -c <IP address of server> on the client. The -s option signifies server, whilst the -c option indicates client and takes the IP address of the server to connect to. Running iperf with -h prints the full list of options. The -i option defines the interval, in seconds, at which iperf reports the metrics. By default the iperf client runs for 10 seconds, but this can be changed using the -t option. The iperf server listens on port 5001 by default, but this too can be changed if required.
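For instance, changing the listening port looks like the following (assuming iperf version 2, where -p sets the port; the port number and server address here are just examples):

```shell
# Server: listen on a non-default port
iperf -s -p 5002

# Client: must specify the same port
iperf -c 192.168.0.2 -p 5002
```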
The first example uses TCP. To start iperf on the server use
iperf -s -i 1
As mentioned earlier, the -i option defines the interval that metrics are printed out. In this case, we have specified an interval of 1 second. The command string to use on the client is
iperf -c <server IP address> -i 1 -t 60
For TCP, the metric of interest is bandwidth. In this example we have two laptops on the same subnet: one connected via wired Ethernet, the other via an 802.11g wireless router. The theoretical bandwidth (or throughput) of 802.11g is 54 Mbits/sec. The output from the server side is shown below.
root@gudaring:~# iperf -s -i 1
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
[ 4] local 192.168.0.2 port 5001 connected with 192.168.0.4 port 2790
[ ID] Interval Transfer Bandwidth
[ 4] 0.0- 1.0 sec 1.77 MBytes 14.9 Mbits/sec
[ 4] 1.0- 2.0 sec 1.87 MBytes 15.7 Mbits/sec
[ 4] 2.0- 3.0 sec 1.77 MBytes 14.9 Mbits/sec
[ 4] 3.0- 4.0 sec 1.85 MBytes 15.5 Mbits/sec
[ 4] 4.0- 5.0 sec 1.92 MBytes 16.1 Mbits/sec
[ 4] 5.0- 6.0 sec 1.92 MBytes 16.1 Mbits/sec
[ 4] 6.0- 7.0 sec 1.88 MBytes 15.8 Mbits/sec
[ 4] 7.0- 8.0 sec 1.89 MBytes 15.8 Mbits/sec
[ 4] 8.0- 9.0 sec 1.91 MBytes 16.0 Mbits/sec
[ 4] 9.0-10.0 sec 1.90 MBytes 15.9 Mbits/sec
From this, we can see that the actual bandwidth from the client to the server is around 16 Mbits/sec. Given that one laptop is on wireless, we would expect the actual throughput to be well below the theoretical 54 Mbits/sec, since 802.11 protocol overhead and radio conditions eat into the headline rate.
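A detail worth knowing when reading these lines: the Transfer column appears to be in binary MBytes (2^20 bytes) while the Bandwidth column is in decimal Mbits (10^6 bits), so the two figures do not convert by a simple factor of 8. A quick sketch to check the numbers against the 9.0-10.0 sec line above:

```python
# iperf reports transfer in MBytes (2**20 bytes) but bandwidth in Mbits/sec
# (10**6 bits), so the two columns use different unit bases.
def mbytes_to_mbits_per_sec(mbytes, seconds=1.0):
    bits = mbytes * 2**20 * 8      # MBytes -> bytes -> bits
    return bits / 1e6 / seconds    # bits -> Mbits, per second

# 1.90 MBytes transferred in one second, as in the last interval above.
rate = mbytes_to_mbits_per_sec(1.90)
print(round(rate, 1))  # 15.9, matching the reported bandwidth
```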
Now we take a look at UDP. The command string to use on the server side is
iperf -su -i 1
The -u option specifies UDP. On the client side, we use
iperf -c <server IP address> -b 36M -i 1 -t 20
Note that the -b option automatically causes iperf to use UDP. If -b is not specified, then you need -u to select UDP as the transport protocol. In this case, we are asking for a transmission rate of 36 Mbits/sec.
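Since the server banner below reports 1470-byte datagrams, the requested rate implies a definite sending rate in datagrams per second, which is worth computing as a sanity check against the Lost/Total column:

```python
# At a target rate of 36 Mbits/sec with iperf's default 1470-byte UDP payload,
# the client must send roughly this many datagrams per second:
target_bps = 36e6
payload_bits = 1470 * 8
datagrams_per_sec = target_bps / payload_bits
print(round(datagrams_per_sec))  # about 3061
```

This matches the per-second totals of around 3065 datagrams that appear once the sender ramps up to the full 36 Mbits/sec.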
root@gudaring:~# iperf -su -i 1
------------------------------------------------------------
Server listening on UDP port 5001
Receiving 1470 byte datagrams
UDP buffer size: 112 KByte (default)
------------------------------------------------------------
[ 3] local 192.168.0.2 port 5001 connected with 192.168.0.4 port 2818
[ ID] Interval Transfer Bandwidth Jitter Lost/Total Datagrams
[ 3] 0.0- 1.0 sec 3.36 MBytes 28.2 Mbits/sec 1.357 ms 0/ 2397 (0%)
[ 3] 1.0- 2.0 sec 3.41 MBytes 28.6 Mbits/sec 0.516 ms 0/ 2430 (0%)
[ 3] 2.0- 3.0 sec 3.41 MBytes 28.6 Mbits/sec 0.669 ms 0/ 2431 (0%)
[ 3] 3.0- 4.0 sec 3.42 MBytes 28.7 Mbits/sec 0.849 ms 0/ 2438 (0%)
[ 3] 4.0- 5.0 sec 3.43 MBytes 28.8 Mbits/sec 0.736 ms 0/ 2448 (0%)
[ 3] 5.0- 6.0 sec 3.38 MBytes 28.3 Mbits/sec 0.553 ms 0/ 2408 (0%)
[ 3] 6.0- 7.0 sec 3.39 MBytes 28.5 Mbits/sec 1.106 ms 0/ 2421 (0%)
[ 3] 7.0- 8.0 sec 3.37 MBytes 28.3 Mbits/sec 0.892 ms 0/ 2404 (0%)
[ 3] 8.0- 9.0 sec 3.40 MBytes 28.5 Mbits/sec 0.457 ms 0/ 2425 (0%)
[ 3] 9.0-10.0 sec 3.42 MBytes 28.7 Mbits/sec 0.992 ms 256/ 2698 (9.5%)
[ 3] 10.0-11.0 sec 3.36 MBytes 28.2 Mbits/sec 1.183 ms 715/ 3112 (23%)
[ 3] 11.0-12.0 sec 3.38 MBytes 28.4 Mbits/sec 1.091 ms 655/ 3069 (21%)
[ 3] 12.0-13.0 sec 3.38 MBytes 28.4 Mbits/sec 1.256 ms 654/ 3065 (21%)
[ 3] 13.0-14.0 sec 3.38 MBytes 28.3 Mbits/sec 0.604 ms 632/ 3040 (21%)
[ 3] 14.0-15.0 sec 3.35 MBytes 28.1 Mbits/sec 0.659 ms 675/ 3065 (22%)
[ 3] 15.0-16.0 sec 3.35 MBytes 28.1 Mbits/sec 0.675 ms 674/ 3067 (22%)
[ 3] 16.0-17.0 sec 3.37 MBytes 28.3 Mbits/sec 0.763 ms 660/ 3063 (22%)
[ 3] 17.0-18.0 sec 3.37 MBytes 28.2 Mbits/sec 0.713 ms 668/ 3069 (22%)
[ 3] 18.0-19.0 sec 3.35 MBytes 28.1 Mbits/sec 0.512 ms 692/ 3085 (22%)
[ 3] 19.0-20.0 sec 3.37 MBytes 28.3 Mbits/sec 0.865 ms 639/ 3044 (21%)
[ 3] 20.0-21.0 sec 3.35 MBytes 28.1 Mbits/sec 0.574 ms 697/ 3087 (23%)
[ 3] 21.0-22.0 sec 3.35 MBytes 28.1 Mbits/sec 0.543 ms 678/ 3067 (22%)
[ 3] 0.0-22.0 sec 74.4 MBytes 28.4 Mbits/sec 1.336 ms 8295/61351 (14%)
The output shows the throughput, the jitter and the packet loss. In this case, we are simulating applications that demand more bandwidth than is actually available: we asked for 36 Mbits/sec, but the link delivers only around 28 Mbits/sec, which drives the packet loss rate up to around 22%. In contrast, if we specify 28M for the bandwidth, we get the following output on the server side.
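The loss rate follows directly from the mismatch between the offered rate and the link's capacity, since the excess datagrams have nowhere to go but the floor:

```python
# Loss expected when offering more UDP traffic than the link can carry.
offered = 36.0    # Mbits/sec requested with -b
delivered = 28.4  # Mbits/sec actually received (from the summary line)
loss = (offered - delivered) / offered
print(f"{loss:.0%}")  # about 21%, in line with the per-interval Lost/Total figures
```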
------------------------------------------------------------
[ 3] local 192.168.0.2 port 5001 connected with 192.168.0.4 port 2949
[ ID] Interval Transfer Bandwidth Jitter Lost/Total Datagrams
[ 3] 0.0- 1.0 sec 3.32 MBytes 27.8 Mbits/sec 0.648 ms 0/ 2367 (0%)
[ 3] 1.0- 2.0 sec 3.30 MBytes 27.7 Mbits/sec 0.560 ms 0/ 2355 (0%)
[ 3] 2.0- 3.0 sec 3.37 MBytes 28.3 Mbits/sec 0.657 ms 0/ 2405 (0%)
[ 3] 3.0- 4.0 sec 3.34 MBytes 28.0 Mbits/sec 0.592 ms 0/ 2385 (0%)
[ 3] 4.0- 5.0 sec 3.33 MBytes 28.0 Mbits/sec 0.641 ms 0/ 2378 (0%)
[ 3] 5.0- 6.0 sec 3.34 MBytes 28.0 Mbits/sec 0.677 ms 0/ 2379 (0%)
[ 3] 6.0- 7.0 sec 3.34 MBytes 28.0 Mbits/sec 0.661 ms 0/ 2384 (0%)
[ 3] 7.0- 8.0 sec 3.33 MBytes 28.0 Mbits/sec 0.700 ms 0/ 2377 (0%)
[ 3] 8.0- 9.0 sec 3.34 MBytes 28.0 Mbits/sec 0.628 ms 0/ 2385 (0%)
[ 3] 9.0-10.0 sec 3.34 MBytes 28.0 Mbits/sec 0.638 ms 0/ 2381 (0%)
[ 3] 10.0-11.0 sec 3.34 MBytes 28.0 Mbits/sec 0.673 ms 0/ 2380 (0%)
[ 3] 11.0-12.0 sec 3.15 MBytes 26.4 Mbits/sec 1.492 ms 0/ 2248 (0%)
[ 3] 12.0-13.0 sec 3.33 MBytes 28.0 Mbits/sec 0.506 ms 0/ 2378 (0%)
[ 3] 13.0-14.0 sec 3.43 MBytes 28.8 Mbits/sec 0.572 ms 0/ 2447 (0%)
[ 3] 14.0-15.0 sec 3.40 MBytes 28.5 Mbits/sec 0.530 ms 0/ 2424 (0%)
[ 3] 15.0-16.0 sec 3.36 MBytes 28.2 Mbits/sec 0.821 ms 0/ 2400 (0%)
[ 3] 16.0-17.0 sec 3.34 MBytes 28.1 Mbits/sec 0.695 ms 0/ 2386 (0%)
[ 3] 17.0-18.0 sec 3.34 MBytes 28.1 Mbits/sec 0.593 ms 0/ 2386 (0%)
[ 3] 18.0-19.0 sec 3.33 MBytes 27.9 Mbits/sec 0.732 ms 0/ 2376 (0%)
[ 3] 19.0-20.0 sec 3.31 MBytes 27.8 Mbits/sec 1.300 ms 0/ 2363 (0%)
[ 3] 0.0-20.0 sec 66.8 MBytes 28.0 Mbits/sec 1.427 ms 0/47621 (0%)
In this case, there is no packet loss. For both runs, the jitter is mostly less than 1 ms, which is within the tolerance of VoIP.
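As an aside, the jitter figure iperf prints is (to my understanding) the smoothed inter-arrival jitter estimator from RFC 1889/3550 rather than a raw maximum: for each packet, take the difference D between consecutive transit times and update J = J + (|D| - J) / 16. A sketch of that calculation, with made-up transit times showing roughly 1 ms of variation:

```python
# RFC 1889/3550-style smoothed inter-arrival jitter estimator.
def smoothed_jitter(transit_times_ms):
    j = 0.0
    prev = None
    for t in transit_times_ms:
        if prev is not None:
            d = abs(t - prev)       # change in transit time between packets
            j += (d - j) / 16.0     # exponentially smoothed update
        prev = t
    return j

# Synthetic one-way transit times (ms) with about 1 ms of variation.
times = [10.0, 11.0, 10.5, 11.5, 10.2, 11.0, 10.8]
j = smoothed_jitter(times)
print(round(j, 3))  # stays well under 1 ms thanks to the 1/16 smoothing
```

The 1/16 gain means a single late packet barely moves the figure, which is why the reported jitter stays small even when individual intervals vary.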
In summary, iperf is a simple but useful tool. It can be used to check throughput, packet loss and jitter. It provides enough basic information to troubleshoot UDP and TCP issues.