Trading signal complexity for bandwidth
In days of yore, PLLs (phase-locked loops) recovered clocks from data signals. Since the clock is embedded in the logic transitions, PLL clock-recovery circuits couldn't handle waveforms with long strings of identical bits. To ensure that the clock-recovery circuit would never face more than five identical bits in a row, we introduced 8B/10B encoding, which dedicated an extra pair of bits to every eight bits of data. The 20% overhead seemed necessary. But technology moves forward.
PLL-based clock-recovery circuits take up too much chip space and use too much power, and engineering innovation waits for nothing. Phase interpolators came along and reproduced PLL performance, more or less, at lower power and in less area. Then the phase interpolator evolved into a DLL (delay-locked loop). DLLs can maintain a synchronized data-rate clock even across dozens of bits without a transition. Of course, even DLLs need some transitions, so we implemented 64B/66B (100 GbE and FibreChannel) or 128B/130B (PCIe) encoding and chanted, "Hallelujah!" at our gain of nearly 20% bandwidth.
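To put numbers on that overhead, here's a minimal sketch in plain Python. The ratios come straight from the encoding names; the "nearly 20%" is the relative payload gain at a fixed line rate.

```python
# Payload efficiency of each line code: the fraction of bits on the
# wire that actually carry data.

def efficiency(data_bits, total_bits):
    """Fraction of the line rate that carries payload."""
    return data_bits / total_bits

eff_8b10b = efficiency(8, 10)        # 0.80
eff_64b66b = efficiency(64, 66)      # ~0.97
eff_128b130b = efficiency(128, 130)  # ~0.98

gain = eff_64b66b / eff_8b10b - 1    # ~0.21
print(f"8B/10B:    {eff_8b10b:.1%}")
print(f"64B/66B:   {eff_64b66b:.2%}")
print(f"128B/130B: {eff_128b130b:.2%}")
print(f"relative payload gain, 8B/10B -> 64B/66B: {gain:.1%}")
```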
In going from 8B/10B to 64B/66B encoding, FibreChannel kept its rate-doubling naming scheme but dropped the coding overhead from the name. While 1GFC runs at 1 Gbit/s, 2GFC at 2 Gbits/s, and so on up to 8GFC at 8 Gbits/s, the switch to 64B/66B doubled the payload data rate without doubling the line rate: 16GFC runs at 14 Gbits/s and 32GFC at 28 Gbits/s. I guess it makes more sense to confuse engineers than analysts and users.
The nearly 20% bandwidth reclaimed by switching from 8B/10B to 64B/66B or 128B/130B wasn’t free.
The waveform of an 8B/10B signal uses bandwidth from 1/10th the data rate to about 3/2 the data rate; for a 10 Gbits/s signal, that's 1 GHz to 15 GHz. The waveform of a 128B/130B signal covers a bandwidth from not quite 1/64 to 3/2 the data rate; for 10 Gbits/s, that's about 160 MHz to 15 GHz.
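Those band edges follow from two simple rules: a run of N identical bits puts its fundamental at fd/(2N), and the rise-time limit caps useful content at the third harmonic of the fastest toggle, 1.5fd. A sketch, where the run length of 32 for the scrambled code is an assumption chosen to reproduce the numbers above:

```python
# Band edges of a serial data signal, assuming the low edge is set by
# the longest run of identical bits (fundamental of an N-bit run is
# fd / (2N)) and the high edge by the third harmonic at 1.5 * fd.

def band_edges(data_rate_hz, max_run):
    f_low = data_rate_hz / (2 * max_run)
    f_high = 1.5 * data_rate_hz
    return f_low, f_high

fd = 10e9
print(band_edges(fd, 5))   # 8B/10B caps runs at 5: (1 GHz, 15 GHz)
print(band_edges(fd, 32))  # long scrambled runs: (~156 MHz, 15 GHz)
```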
To see where those numbers come from, let's start with a square wave.
At the high end, 3/2 the data rate, or 1.5fd, comes from the fact that high-speed serial standards typically set minimum required rise and fall times, which limits the square wave's Fourier components to just the first and third harmonics, though elements of the fifth can show up, too. At the low end, you can see how CIDs (consecutive identical bits) introduce low-frequency content by picturing the digital waveform and drawing in the fundamental for each sub-sequence, as I did in this graphic.
A digital signal with frequency content indicated for different subsequences (Copyright Ransom Stephens).
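You can check the high-end claim numerically. Here's a small DFT of an ideal 1010... pattern in plain Python (no signal-processing library): all of its energy sits at odd multiples of half the data rate (fd/2, 3fd/2, 5fd/2, ...), with amplitude falling as the harmonic number grows. Band-limit the rise time and only the first and third survive in force.

```python
import cmath

# Ideal 101010... waveform: 16 bits, 8 samples per bit, so bin k of a
# 128-point DFT sits at frequency k * fd / 16 (sample rate = 8 * fd).
samples_per_bit = 8
bits = [1, 0] * 8
x = [float(bit) for bit in bits for _ in range(samples_per_bit)]
N = len(x)

def dft_mag(k):
    """Magnitude of DFT bin k, normalized by N."""
    return abs(sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N)
                   for n in range(N))) / N

# Energy appears only at odd harmonics of fd/2: bins 8, 24, 40, ...
print(dft_mag(8))   # fundamental, fd/2
print(dft_mag(24))  # third harmonic, 3fd/2
print(dft_mag(40))  # fifth harmonic, 5fd/2
print(dft_mag(16))  # even harmonic, fd: zero
```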
Limiting the number of CIDs, as 8B/10B does at five, puts a floor under the low-frequency content. Remove that limit and the floor falls into the basement.
Who cares about extra low frequency content?
Your equalizer cares, and if it cares, then so does your BER (bit error ratio). The equalizer's job, whether at the transmitter in the form of pre- or de-emphasis or at the receiver in a combination of CTLE (continuous time linear equalizer) and DFE (decision feedback equalizer), is to correct ISI (inter-symbol interference). ISI is caused by the frequency response of the channel: the wider the signal bandwidth, the greater the ISI.
A time-averaged eye diagram, so you can see all the ISI.
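To see how decision feedback cancels ISI without boosting anything, here's a toy one-tap DFE. The channel model (a single post-cursor at 0.4) and the tap value are illustrative assumptions, not anyone's spec:

```python
# A toy one-tap DFE, assuming a channel whose only impairment is one
# post-cursor: each received sample is the current symbol plus 0.4
# times the previous one. The DFE subtracts its own past decision,
# scaled by the tap, so the correction uses the sliced bit rather than
# the noisy sample: nonlinear feedback, no high-frequency gain.

def dfe(samples, tap=0.4):
    decisions = []
    prev = 0.0                      # last decided symbol, as +/-1
    for s in samples:
        corrected = s - tap * prev  # cancel the post-cursor ISI
        prev = 1.0 if corrected > 0 else -1.0
        decisions.append(1 if prev > 0 else 0)
    return decisions

# Symbols +/-1 through the toy channel: y[n] = x[n] + 0.4 * x[n-1]
tx_bits = [1, 0, 0, 1, 1, 0, 1, 0]
x = [1.0 if b else -1.0 for b in tx_bits]
y = [x[n] + (0.4 * x[n - 1] if n else 0.0) for n in range(len(x))]

print(dfe(y))  # recovers the transmitted bits
```

Because the feedback operates on decided bits instead of amplifying the received waveform, it leaves neighboring aggressor traces alone, which is the point of the comparison that follows.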
The tradeoff in going from 8B/10B to 64B/66B or 128B/130B to reclaim that 20% payload bandwidth is that more sophisticated equalization schemes are required to maintain a low BER.
Once the equalizer is implemented in the serdes, it’s cheap.
Well, not so fast. Don't forget your looming crosstalk problems. Increased pre-/de-emphasis blasts high-frequency noise through the circuit board, making aggressors ever more aggressive. CTLEs selectively amplify high-frequency content, which makes victims ever more vulnerable. DFE, on the other hand, with its magical, nonlinear logic feedback, doesn't affect crosstalk, but will DFE be enough? And we're back at making the receiver ever more complicated.
Once you have four lanes at 28 Gbits/s or eight at 50 Gbits/s, that extra bandwidth might not look so cheap.