Why Total Jitter cannot be measured on an oscilloscope

January 15, 2014

Jitter is composed of a combination of random and deterministic signal impairments. Since RJ (random jitter) follows an unbounded distribution, the combined total jitter distribution is also unbounded.

Since trying to measure the peak-to-peak value of an unbounded quantity is silly at best and ill-defined and unrepeatable at worst, the presiding poo-bahs of high speed serial technology have handed down a well defined peak-to-peak-like quantity called "total jitter defined at a bit error ratio." I call it TJ(BER), and so should you, because TJ depends on your choice of BER (bit error ratio).

Here (see figure below) is a drawing of a bathtub plot that includes the definition of TJ(BER). A bathtub plot is the graph of BER, the ratio of the number of bit-errors to the total number of bits transmitted, as a function of the time-delay, x, at which each bit is sampled, BER(x). BER is high near the crossing points of the eye diagram and drops as the sampling point moves toward the eye center—as you'd expect. TJ(BER) is defined as the amount of horizontal eye-closure at a given BER.

Bathtub plot, BER(x), showing the definition of TJ(BER) (graphic courtesy of Anritsu Corp.).

The graphic shows TJ(1E-12), which is the industry standard.
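
Reading TJ(BER) off a bathtub curve is just a matter of finding where each slope crosses the target BER. Here is a minimal sketch in Python; the pure-Gaussian bathtub it builds (with an assumed RJ sigma) is only a stand-in for measured data, and it takes TJ as the unit interval minus the eye opening at the target BER, per the definition above.

```python
import numpy as np
from scipy.stats import norm

UI = 1.0            # one unit interval, normalized
TARGET_BER = 1e-12  # the industry-standard BER

# Toy bathtub: pure-Gaussian tails at each crossing point (a stand-in for real data).
x = np.linspace(0.0, UI, 2001)       # sampling delay across the eye
rj_sigma = 0.05 * UI                 # assumed RMS random jitter at each edge
ber = 0.5 * (norm.sf(x / rj_sigma) + norm.sf((UI - x) / rj_sigma))

# Walk in from each crossing point to find where BER(x) drops to the target.
mid = len(x) // 2
x_left = x[:mid][ber[:mid] <= TARGET_BER][0]    # left edge of the open eye
x_right = x[mid:][ber[mid:] <= TARGET_BER][-1]  # right edge of the open eye

eye_opening = x_right - x_left
tj = UI - eye_opening  # TJ(BER): horizontal eye closure at the target BER
print(f"x_left = {x_left:.3f} UI, x_right = {x_right:.3f} UI")
print(f"eye opening = {eye_opening:.3f} UI, TJ(1e-12) = {tj:.3f} UI")
```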

If you perform jitter analysis, you probably use an oscilloscope (see "Setting up your oscilloscope to measure jitter"). Jitter can be analyzed on either a real-time or equivalent-time sampling oscilloscope, but TJ(BER) cannot be measured on an oscilloscope.

You might be scratching your head and thinking, "But I read TJ(1E-12) off of my scope all the time."

Oscilloscopes estimate TJ(BER), but are incapable of measuring it. In fact, not only do scopes estimate TJ(BER), but they do so by extrapolating their measurements by a factor of about a million.

Where interpolation consists of estimating what happens between two contiguous measurements, extrapolation is guessing what happens beyond your last measurement. The uncertainty of an interpolation is roughly the difference in the two measurements surrounding the interpolated point. The uncertainty of an extrapolation is...well, there's no rule of thumb for the uncertainty of an extrapolation.

TJ(BER) can only be measured by a BERT (bit error ratio tester).

The reason that neither real-time nor equivalent-time oscilloscopes can measure TJ(BER) has to do with the tiny and huge numbers involved. To measure BER as low as 1E-12, you must analyze at least 1E13 bits.

It works like this. Say the true BER is exactly 1E-12. If you transmit 1E12 bits, you expect exactly one error, but there's a 37% chance you won't see any errors at all. If you transmit 1E13 bits, you expect about ten errors, which only pins the BER down to about 30%. To get 10% uncertainty in the BER, you need about a hundred errors, which means transmitting 1E14 bits. That's a lot of bits!
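
You can check those numbers yourself by treating error counting as a Poisson process, which is a reasonable assumption for independent bit errors:

```python
import math

TRUE_BER = 1e-12  # suppose this is the actual bit error ratio

# Assume errors arrive as a Poisson process: lambda = bits transmitted x true BER.
for bits in (1e12, 1e13, 1e14):
    lam = bits * TRUE_BER                   # expected number of errors
    p_no_errors = math.exp(-lam)            # chance of seeing zero errors
    rel_uncertainty = 1.0 / math.sqrt(lam)  # ~1/sqrt(errors) counting statistics
    print(f"{bits:.0e} bits: expect {lam:.0f} error(s), "
          f"P(no errors) = {p_no_errors:.1%}, "
          f"relative BER uncertainty ~ {rel_uncertainty:.0%}")
```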

For a real-time oscilloscope to analyze the jitter of a hundred-trillion bits, you'll need at least ten samples for each bit to find the point where the edge crosses the slice threshold. To measure TJ(1E-12), a real-time oscilloscope would require a memory depth of about 1E15 bytes, a petabyte - a factor of about a million larger than is currently available.

The situation is worse for an equivalent-time oscilloscope. ET scopes under-sample signals but aren't limited by memory depth. They acquire and acquire and just keep acquiring as long as they run, but they do it slowly. So slowly that a typical 100-ksample/s ET scope would take years, even decades, to acquire enough data to measure TJ(1E-12).
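
Here is the back-of-envelope arithmetic behind both claims. The figures below follow the assumptions in the text; bytes-per-sample is an extra assumption implied by the petabyte figure, and the ET estimate optimistically credits every acquired sample with one bit's worth of timing information.

```python
BITS_NEEDED = 1e14        # bits for ~10% uncertainty at BER = 1e-12
SAMPLES_PER_BIT = 10      # real-time scope: samples needed to locate each edge
BYTES_PER_SAMPLE = 1      # assume one byte of memory per acquired sample
ET_SAMPLE_RATE = 100e3    # samples/s, a typical equivalent-time scope

rt_memory_bytes = BITS_NEEDED * SAMPLES_PER_BIT * BYTES_PER_SAMPLE
print(f"Real-time scope memory: {rt_memory_bytes:.0e} bytes (about a petabyte)")

# Optimistic: every equivalent-time sample contributes one bit's worth of data.
et_seconds = BITS_NEEDED / ET_SAMPLE_RATE
print(f"Equivalent-time acquisition: {et_seconds:.0e} s "
      f"= about {et_seconds / 3.15e7:.0f} years")
```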

You might be thinking that 1E14 bits is more than necessary. And, if you're lucky, you're right.

We really only need a few billion more than six trillion bits to measure TJ(1E-12) with 10%-ish accuracy (which is the best you can possibly hope for): 3E12 error-free bits at each of the two edges, plus a few billion for the initial scan. The idea is to bracket, rather than measure, x-left and x-right, the sampling delays where the left and right slopes of the bathtub cross the target BER. Since typical bathtub plots have such steep slopes below BER = 1E-9 or so, it's pretty easy to set the sampling point just inside x-left and let it run for 3E12 bits; if no error occurs, we're assured to a 95% confidence level that the BER is less than 1E-12 at that x position.

Repeat the process on the right edge to constrain x-right. To get the outer constraint, use a standard fast BERT scan down to about BER = 1E-9. This is called the bracketing technique. Marcus Mueller and I wrote a paper about it several years ago and Agilent implemented it in software - the technique is not patented, by the way, so feel free to implement it yourself.
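
The statistics behind the bracketing step are just the zero-error Poisson bound, the so-called rule of 3: if you count no errors in N bits, then at 95% confidence the BER is below about 3/N. A quick sketch:

```python
import math

CONFIDENCE = 0.95
TARGET_BER = 1e-12

def ber_upper_bound(error_free_bits, confidence=CONFIDENCE):
    """Upper confidence bound on BER after observing zero errors in N bits."""
    return -math.log(1.0 - confidence) / error_free_bits

# Bits you must run error-free at one sampling delay to claim BER < target:
bits_needed = -math.log(1.0 - CONFIDENCE) / TARGET_BER
print(f"Error-free bits needed per edge: {bits_needed:.1e}")        # ~3e12
print(f"Bound after 3e12 clean bits: {ber_upper_bound(3e12):.2e}")  # ~1e-12
```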

A scope still can't acquire six trillion bits - at ten samples per bit period that would mean 60 trillion bytes of memory - but 60 terabytes is at least closer to reality than a petabyte.

The accuracy of any TJ(BER) measurement is limited by two factors. First, the quality of your BERT. Look for excellent error detector sensitivity, the minimum voltage swing that the error detector can decipher - 10 mV is state of the art - and excellent timing linearity with quick pattern synchronization. You'll need to be careful about whether you need a recovered clock to drive the error detector, too.

The second factor I can't help you with. This is where the whole RJ, DJ, TJ business might unravel.

We typically measure TJ(BER) on signals with several jitter impairments, using a repeating test pattern. Jitter sources that correlate with the test pattern, like ISI (inter-symbol interference) due to the channel response, and those that don't, like PJ (periodic jitter) from pickup of periodic noise and large RJ fluctuations, combine to cause timing excursions of different amplitudes on different pattern transitions.

To have a chance of sampling the combined effects of ISI, RJ, and PJ, we should analyze many repetitions of the pattern. If the test pattern is the standard PRBS31 (a pseudo-random binary sequence, 2^31 - 1 bits long, that contains every 31-bit pattern except all zeros) and we want to repeat it 100 times, we need a lot of bits; scads and oodles of bits, more than it's polite to write down on the Internet.
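
Just to put a floor under "scads and oodles": a single repetition of PRBS31 is already over two billion bits, so 100 repetitions at one sampling position is a couple hundred billion bits, before you multiply by the number of sampling positions and the 3E12 error-free bits each edge needs.

```python
PATTERN_LENGTH = 2**31 - 1   # bits in one repetition of PRBS31
REPETITIONS = 100            # repetitions per sampling position (from the text)

bits_per_position = PATTERN_LENGTH * REPETITIONS
print(f"One PRBS31 repetition: {PATTERN_LENGTH:,} bits")
print(f"{REPETITIONS} repetitions: about {bits_per_position:.1e} bits "
      f"at every sampling position")
```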
