Noise and jitter: Nemesis of digital communications

As time-sensitive applications like VoIP and telepresence become more commonplace, there will be increased pressure on network professionals to understand signal integrity and how to maximize it.

Noise and jitter detrimentally affect any and all digital technology; how and why is what I'd like to take a look at. First, to make sure we're all on the same page, let's define some basic yet crucial concepts. A simple way to understand digital systems, whether they're wireless or wired, is to consider them as communication networks, with each device in the network having a transmitter and a receiver. The next concept we need to understand is how digital information (bits and bytes, or logic zeros and ones) gets from the transmitter to the receiver.

Digital signals

To start, digital signals are represented by trapezoidal waveforms, as shown in Graph A (courtesy of Wikipedia). Please notice the individual components of the waveform.

Graph A

In order to mimic binary logic (ones and zeros), digital information is transmitted as either logic one (high level) or logic zero (low level), each represented by a specific amplitude range on the voltage waveform. For example, Graph B is a representation of an ideal waveform (courtesy of cs.umbc.edu), with any voltage above Vih considered logic one (high level) and any voltage below Vil considered logic zero (low level). The need to use voltage ranges instead of a single specific voltage becomes apparent when we look at signal noise.

Graph B
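
To make the threshold idea concrete, here is a minimal Python sketch of the decoding rule; the Vih and Vil values are illustrative assumptions, not figures from the article:

```python
# Illustrative logic-level thresholds -- example values, not from a real spec
VIH = 2.0  # volts: anything at or above this reads as logic one
VIL = 0.8  # volts: anything at or below this reads as logic zero

def decode_level(voltage):
    """Map a sampled voltage to a logic level using the Vih/Vil ranges."""
    if voltage >= VIH:
        return 1     # high level: logic one
    if voltage <= VIL:
        return 0     # low level: logic zero
    return None      # between Vil and Vih: indeterminate, a potential bit error

print(decode_level(3.1))  # 1
print(decode_level(0.2))  # 0
print(decode_level(1.4))  # None -- neither logic one nor logic zero
```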

Signal sampling process

Digital receivers use a sampling process to determine whether the waveform is logic one or zero. That makes sense, but how does the receiver know when to sample the voltage waveform? The receiver could very well be sampling at the wrong time, possibly reading logic zero instead of logic one. A timing signal (the clock) shared by the transmitting device and receiving device takes care of the synchronization. Looking at Graph C below (courtesy of cs.umbc.edu), one can see how the rising edge of the clock signal triggers the data sampling process at the receiver.

Graph C
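
Here is a toy Python model of the edge-triggered sampling Graph C illustrates; the clock and data sequences are made up for demonstration:

```python
# Toy model of edge-triggered sampling: the receiver reads the data line
# only when the clock transitions from low (0) to high (1).
clock = [0, 1, 0, 1, 0, 1, 0, 1]  # hypothetical clock samples
data  = [1, 1, 0, 0, 1, 1, 0, 0]  # hypothetical data line at the same instants

def sample_on_rising_edges(clock, data):
    """Return the data values captured at each rising clock edge."""
    return [data[i] for i in range(1, len(clock))
            if clock[i - 1] == 0 and clock[i] == 1]

print(sample_on_rising_edges(clock, data))  # [1, 0, 1, 0]
```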

To review, we know what the graph of an ideal waveform looks like, where the receiver needs to sample the waveform, and when to sample it. The next step is to inject some realism into our ideal world and see why we need signal integrity.

Signal integrity

The term signal integrity (SI) refers to how well an actual voltage waveform compares to the ideal voltage waveform when looking at the following characteristics:

Quality: Did the signal arrive in good shape?
Timing: Was synchronization maintained?

In an ideal world, a digital signal wouldn't lose any integrity on its journey to the receiving device, making every network engineer very happy. We all know that's not going to happen because there are many processes that degrade electronic communications. When talking about digital communications, noise and jitter are two good examples of such disruptive processes.

Noise in the digital world

Electronic noise is a fact of life and has numerous causes. Entire doctoral dissertations have been written about it by people far more intelligent than I am. For our purposes, it will be sufficient to define noise as anything that causes a digital signal to deviate from the ideal trapezoidal waveform. Fortunately, digital circuits are more immune to noise-related problems than analog circuits. Even so, noise still affects digital signals and can cause errors in data sampling by distorting the voltage amplitude (the vertical axis of the graph). Graph D depicts a realistic voltage waveform encountered by the receiving device.

Graph D

The departure from an ideal waveform is quite apparent and typically caused by one or more of the following noise processes:

Reflection noise: a common problem caused by impedance mismatch
Crosstalk noise: caused by unwanted electromagnetic coupling between signals
Power and ground noise: caused by the generation of parasitic frequencies

Hopefully everyone isn't bored to tears by now, as the concepts are finally starting to fall into place. For example, assume Graph D is a digital signal trace from a system we're troubleshooting. Also assume that Vih (logic one or high level, remember Graph B) has a value of one. It becomes apparent that portions of each waveform meant to be at the high level fall below the Vih of one. That would be a problem if data sampling occurred at that moment, since the receiver wouldn't know what value (logic one or logic zero) to assign to that particular sample. It may sound minor, but if the error happens with sufficient regularity, the performance of that circuit will be noticeably degraded. A rough simulation of this effect appears below.
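
To see how amplitude noise turns into sampling errors, here's a rough Python simulation; the ideal high level, the Vih threshold, and the Gaussian noise amplitude are all assumptions chosen for illustration:

```python
import random

IDEAL_HIGH = 2.5   # assumed ideal high-level voltage (volts)
VIH = 2.0          # assumed logic-one threshold (volts)
NOISE_SIGMA = 0.4  # assumed noise standard deviation (volts)

random.seed(42)  # fixed seed so the run is repeatable

# Sample an ideal logic-one level many times with additive Gaussian noise
# and count how often the received voltage dips below Vih -- each such
# sample is indeterminate at the receiver.
samples = 10_000
bad = sum(random.gauss(IDEAL_HIGH, NOISE_SIGMA) < VIH for _ in range(samples))

print(f"{bad} of {samples} samples fell below Vih "
      f"({bad / samples:.1%} potential bit errors)")
```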

Jitter and altered timing

Jitter is another characteristic that's hard to define. Many confuse it with latency, but in reality they're very different; jitter has more to do with variations in wave periods. One official definition of jitter is:

"The short-term variation of a significant instant of a digital signal from its ideal position in time."

Alrighty then. In my world, jitter alters the trapezoidal waveform of a digital signal with reference to time (the horizontal axis of the graph), whereas noise affects the voltage amplitude (the vertical axis of the graph). Looking at Graph E (courtesy of TEK.com) will make everything a whole lot clearer.

jitter.JPG

Graph E

If you remember, noise can interfere with the data sampling process enough to make it impossible for the receiver to differentiate between logic one and zero. The same applies to jitter. If the timing misalignment of the Actual pattern in Graph E (compared to the Desired pattern) is large enough, sampling could occur on the rising or falling edge of the waveform instead of the upper or lower plateaus, again preventing the receiver from distinguishing between logic one and zero.
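
One common way to quantify jitter is time interval error (TIE): the deviation of each actual edge from its ideal position in time, which maps directly onto the official definition quoted above. A minimal Python sketch with made-up edge timestamps:

```python
# Time interval error (TIE): how far each actual signal edge lands from its
# ideal position in time. Edge timestamps (in unit intervals) are made up.
ideal_edges  = [0.0, 1.0, 2.0, 3.0, 4.0]
actual_edges = [0.0, 1.07, 1.96, 3.12, 3.94]  # jittered arrivals

tie = [round(a - i, 3) for a, i in zip(actual_edges, ideal_edges)]
peak_to_peak = max(tie) - min(tie)

print("TIE per edge:", tie)                    # short-term deviations
print(f"Peak-to-peak jitter: {peak_to_peak:.2f} UI")
```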

What's it all mean?

To recap: if the digital signal is noise and jitter free, the receiver will sample the waveform at the appropriate location and amplitude. If noise, jitter, or both alter the sampling location or amplitude, the result is a digital bit error. Throughout the article we've been concerned with just one waveform or bit; in reality, there are billions of logical bits circulating in a typical digital system, which means it wouldn't take very long to accumulate a significant number of bit errors if noise or jitter is affecting the network. This is important enough to warrant its own metric, called Bit Error Rate (or Ratio), abbreviated BER. BER is the ratio of bit errors to the total number of bits transmitted, and network engineers use it to quickly ascertain network performance.

To get a better handle on what this means to end users, consider a marginal network running a VoIP application. Introduce sufficient jitter and the calls will become choppy due to the timing variations. If the jitter is severe enough, there will be lengthy delays in the conversation and even the possibility of the call being dropped.
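
BER itself is a simple ratio; here's a minimal sketch comparing a made-up transmitted bit stream against what the receiver decoded:

```python
def bit_error_rate(sent, received):
    """BER = number of mismatched bits / total bits transmitted."""
    errors = sum(s != r for s, r in zip(sent, received))
    return errors / len(sent)

sent     = [1, 0, 1, 1, 0, 1, 0, 0]
received = [1, 0, 0, 1, 0, 1, 1, 0]  # two bits flipped in transit
print(f"BER = {bit_error_rate(sent, received):.3f}")  # 0.250
```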

Final thoughts

Digital signal integrity has always been important to a network's health, but it hasn't been a real concern for end users. That has all changed with the increased use of interactive, time-sensitive applications like VoIP and telepresence. End users can tell right away when there's a problem, and we all know what that means.

—————————————————————————————————————————————————————————

Michael Kassner has been involved with wireless communications for 40-plus years, starting with amateur radio (K0PBX) and now as a network field engineer and independent wireless consultant. Current certifications include Cisco ESTQ Field Engineer, CWNA, and CWSP.
