Test Pattern Redemption!

January 15, 2013

I feel like Bob Marley. No, not because of the election results in Washington and Colorado, but because I have been redeemed! Bask in my schadenfreude as I share the ugly details.

Marty Miller, house genius at LeCroy, will present a paper this year at DesignCon that proves the folly of long test patterns. The problem exposed by Dr. Miller is easy to understand in the context of the Central Limit Theorem.



The Central Limit Theorem:  A combination of a large number of small, independent effects follows a Gaussian distribution.


Figure 1: The Central Limit Theorem at work.

The pulse response of even a tiny interconnect spreads over a certain number of bit periods. If you double the data rate, it will spread over twice as many bits.
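
Here's a back-of-the-envelope way to see that scaling, as a little Python sketch. The single-pole channel and its 200 ps time constant are made up; the sketch just counts how many bit periods pass before the leftover tail drops below 0.1% of the swing, first at 10 Gb/s and then at 20 Gb/s.

    import math

    # Toy channel: a single-pole (exponential) pulse-response tail with a made-up
    # 200 ps time constant. The absolute numbers don't matter, only the scaling.
    tau = 200e-12          # channel time constant, in seconds
    threshold = 1e-3       # call the tail "ignorable" once it falls below 0.1% of the swing

    for rate in (10e9, 20e9):                  # 10 Gb/s, then double the data rate
        ui = 1.0 / rate                        # one bit period, in seconds
        n = 0
        while math.exp(-(n * ui) / tau) > threshold:
            n += 1
        print(f"{rate / 1e9:.0f} Gb/s: tail stays above 0.1% of the swing for about {n} bit periods")

Same physical tail, twice the data rate, twice as many bits underneath it.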

Consider the word “small” in the Central Limit Theorem. As we count up the number of bits over which the pulse response spreads, we have to decide when the pulse response is “small” enough to ignore. But in the world of the Central Limit Theorem, “small” means nonzero. In theory (and in reality, it turns out), the pulse response decays toward zero asymptotically but never actually gets there.

Those wicked-tiny nonzero divergences from zero are exactly what the Central Limit Theorem is talking about.
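
Don't take my word for it, pile some up. This sketch (all the sizes are arbitrary) adds a few hundred tiny two-valued contributions, which is roughly what each far-away bit's leftover ISI looks like, and compares the tails of the sum to a true Gaussian.

    import numpy as np
    from math import erf, sqrt

    rng = np.random.default_rng(0)

    n_effects = 500          # "large" in the CLT sense: one tiny effect per far-away bit
    n_trials = 200_000       # how many times we roll up the sum
    eps = 1e-3               # each effect is +eps or -eps, like one distant bit's leftover ISI

    # The sum of n_effects fair +/-eps coin flips, drawn n_trials times.
    heads = rng.binomial(n_effects, 0.5, size=n_trials)
    total = eps * (2 * heads - n_effects)

    sigma = total.std()
    for k in (1, 2, 3):
        measured = np.mean(np.abs(total) > k * sigma)
        gaussian = 1.0 - erf(k / sqrt(2))      # two-sided Gaussian tail probability
        print(f"|sum| > {k}*sigma: measured {measured:.4f}, Gaussian predicts {gaussian:.4f}")

    # The sum can never exceed n_effects * eps, so the distribution is truncated --
    # but out where we can see it, it looks Gaussian for all practical purposes.

Keep the word “truncated” in mind.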

Let’s get back to test patterns. The PRBS31 (pseudo-random binary sequence, 2³¹-1) pattern includes every permutation of 31 bits. That means it has about a billion edges, each preceded by a unique bit sequence: a billion different jitter displacements, each induced by ISI (inter-symbol interference), that almost combine into a Gaussian distribution. By “large,” the Central Limit Theorem doesn’t mean a billion, or even billions of billions. Large, in the context of the theorem, means arbitrarily large: less than infinity, but bigger than any number that comes to mind.
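
You can check the “every permutation” claim directly at a size that fits on a laptop. This sketch builds one period of PRBS7 with the usual recurrence (each new bit is the XOR of the bits seven and six places back) and slides a 7-bit window around the repeating pattern: every non-zero 7-bit history shows up exactly once. PRBS31 has the same property for 31-bit histories, all 2³¹-1 of them.

    import numpy as np

    def prbs(order, taps):
        """One full period of a PRBS: each new bit is the XOR of the bits
        `taps` places back (7 and 6 back for PRBS7). Period is 2**order - 1."""
        n = (1 << order) - 1
        seq = [1] * order                  # any non-zero run of `order` bits seeds the recurrence
        while len(seq) < n:
            b = 0
            for t in taps:
                b ^= seq[-t]
            seq.append(b)
        return np.array(seq, dtype=np.uint8)

    bits = prbs(7, (7, 6))                 # PRBS7: 127 bits per period

    # Slide a 7-bit window around the repeating pattern and collect every history seen.
    n = bits.size
    histories = set()
    for i in range(n):
        word = 0
        for j in range(7):
            word = (word << 1) | int(bits[(i + j) % n])
        histories.add(word)

    print(f"distinct 7-bit histories: {len(histories)} of {2**7 - 1} possible non-zero values")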

All those tiny ISI effects combine into a truncated Gaussian. It’s Gaussian enough to look like RJ (random jitter) to every jitter analyzer: real-time and equivalent-time sampling oscilloscopes, time interval analyzers, and BERTs (bit error ratio testers), whether they fit tails or measure spectra, all see the combined ISI as RJ. But it’s actually DJ (deterministic jitter).

The effect is easy to observe. Connect a nice cable to the output of a high data rate pattern generator; 10 Gb/s is good, 25 Gb/s is better. Generate a short PRBS (pseudo-random binary sequence), like PRBS7, the 127-bit 2⁷-1 pattern, measure RJ, and call the result RJ7. Now generate a really long pattern like PRBS23 or PRBS31 (it’s better to use PRBS23 because your jitter analyzer makes more accurate measurements on shorter patterns). Measure RJ and call the result RJ23. You’ll find that RJ23 > RJ7.
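
No 25 Gb/s generator handy? A toy simulation shows the same trend. Everything below is made up: a 30-bit exponential tail stands in for a real channel's memory, and PRBS15 stands in for the long pattern so the script finishes in seconds. The longer pattern exercises more bit histories, so its histogram of ISI displacements reaches further into the tails, and that extra reach is exactly what a tail-fitting jitter analyzer reports as extra RJ.

    import numpy as np

    def prbs(order, taps):
        """One full period of a PRBS: each new bit is the XOR of the bits `taps` places back."""
        n = (1 << order) - 1
        seq = [1] * order                  # any non-zero seed
        while len(seq) < n:
            b = 0
            for t in taps:
                b ^= seq[-t]
            seq.append(b)
        return np.array(seq, dtype=np.int8)

    # Toy channel memory: how much of each earlier bit's pulse response is still
    # hanging around. The 0.2 and 0.7 are arbitrary; all that matters is that the
    # memory stretches well past 7 bits.
    tail = 0.2 * 0.7 ** np.arange(1, 31)

    def isi_at_edges(symbols):
        """ISI displacement at each edge: a tail-weighted sum of the bits before it.
        The pattern wraps around, because a generator repeats it forever."""
        n = symbols.size
        out = []
        for i in range(n):
            if symbols[i] == symbols[i - 1]:
                continue                   # no edge in front of bit i
            out.append(sum(tail[j] * symbols[(i - 1 - j) % n] for j in range(tail.size)))
        return np.array(out)

    for name, order, taps in (("PRBS7 ", 7, (7, 6)), ("PRBS15", 15, (15, 14))):
        symbols = 2.0 * prbs(order, taps) - 1.0        # 0/1 bits -> +/-1 symbols
        d = isi_at_edges(symbols)
        print(f"{name}: {d.size:5d} edges, sigma = {d.std():.4f}, pk-pk = {d.max() - d.min():.4f}")

On the bench, RJ23 > RJ7 is the real-channel version of the same thing.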

Remember, we only care about jitter if it causes errors. A given level of DJ causes less than a tenth the trouble that the same level of RJ causes. When you use a long test pattern, your analyzer thinks all that tiny ISI is RJ and over-estimates its contribution to TJ(BER) (total jitter defined at a bit error ratio) by that same factor of ten.
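
To put a number on “trouble,” use the standard dual-Dirac bookkeeping, TJ(BER) = DJ(δδ) + 2·Q(BER)·RJ, with RJ in rms. The only assumption in this sketch is the dual-Dirac model itself; the 1 ps figures are just convenient units.

    from math import erfc, sqrt

    def q_ber(ber):
        """Solve 0.5 * erfc(q / sqrt(2)) = ber for q by bisection."""
        lo, hi = 0.0, 20.0
        for _ in range(80):
            mid = 0.5 * (lo + hi)
            if 0.5 * erfc(mid / sqrt(2)) > ber:
                lo = mid
            else:
                hi = mid
        return 0.5 * (lo + hi)

    q = q_ber(1e-12)
    print(f"Q at BER = 1e-12: {q:.2f}")        # about 7.03

    # Dual-Dirac bookkeeping: TJ(BER) = DJ(dual-Dirac) + 2 * Q(BER) * RJ_rms.
    # Bounded DJ costs you roughly its own width; anything labeled RJ gets
    # multiplied by 2 * Q(BER) before it lands in the total.
    print("1 ps of bounded DJ -> about  1.0 ps of TJ(1e-12)")
    print(f"1 ps rms of RJ     -> about {2 * q:.1f} ps of TJ(1e-12)")

Every picosecond of ISI that gets mislabeled as RJ lands in TJ at a BER of 10⁻¹² roughly fourteen times over; booked correctly as bounded DJ, it costs its own width and no more.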

If you use a BERT and measure TJ(BER), you can compare it with the TJ(BER) estimated from RJ and DJ and see how much ISI has been confused for RJ. The measurement doesn’t take nearly as long as most people think if you use the bracketing method I came up with, along with a couple of buddies, back when I toiled for Agilent.

So not only is PRBS31 worse, costlier, slower—it's also soul crushing!
