Wideband error correction elevates time-interleaved ADCs
Time-Interleaved ADCs and Mismatch Errors
Achievable resolution and spurious performance of an ADC are tightly connected to the maximum sampling frequency of the device. Today, in mid-2013, sampling rates of commercially available 16-bit monolithic, single-core (non-interleaved) ADCs are limited to 250 MS/s, while 14-bit ADCs can be found up to 400 MS/s. The corresponding figure for single-core 12-bit ADC designs is 1500 MS/s.
There are, however, many applications that need denser sampling grids and higher instantaneous bandwidths than a single-core ADC can support at a given resolution. The remedy to this problem is time-interleaving, a well-known and widely used technique in which the sampling frequency is increased by using an array of ADCs, each clocked with a unique sampling clock that is phase-skewed relative to the others. The principle of two time-interleaved ADCs is shown in Figure 1.
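The interleaving principle can be sketched in a few lines of Python. This is a minimal simulation, not hardware code: the aggregate rate, test-tone frequency, and the `adc_sample` model are illustrative assumptions. Two cores each sample at half the aggregate rate, with core B clocked one aggregate sample period later, and the digital back end merges the two streams.

```python
import math

FS = 800e6          # aggregate sample rate (illustrative value)
TS = 1.0 / FS       # aggregate sample period
F_IN = 70e6         # test-tone frequency (arbitrary choice)
N = 8               # number of aggregate samples to produce

def adc_sample(t, gain=1.0, offset=0.0):
    """Idealized sampling of a sine test tone; gain/offset model one ADC core."""
    return gain * math.sin(2 * math.pi * F_IN * t) + offset

# Each core runs at FS/2; core B is clocked half an aggregate period later.
samples_a = [adc_sample(2 * n * TS) for n in range(N // 2)]
samples_b = [adc_sample((2 * n + 1) * TS) for n in range(N // 2)]

# The digital back end interleaves the two streams into one FS-rate stream.
interleaved = [s for pair in zip(samples_a, samples_b) for s in pair]

# With identical cores, the result equals direct sampling at the full rate FS.
direct = [adc_sample(n * TS) for n in range(N)]
```

The point of the sketch is the final comparison: as long as the two cores behave identically, interleaving is indistinguishable from a single ADC running at the aggregate rate. The sections below deal with what happens when they do not.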
For resolutions of some 10 bits or more, time-interleaved sampling alone is not enough to solve the sampling-frequency problem in practice, because the time-interleaving principle requires the ADCs to behave identically from an input-output perspective. If they do not, differences in gain response, phase-delay response, and DC offset between the ADCs in the array create a distortion effect called aliasing: new frequency components, not present in the original input signal, that are frequency-shifted versions of the desired input signal spectrum.
The plot in Figure 2 illustrates how aliasing appears as a new frequency component as a result of a (large) gain mismatch between two ADCs. The total signal (dashed blue curve) is the sum of two components: one at the desired frequency (solid black curve) and one undesired aliased component (solid red curve) at a different frequency. In the frequency domain, the aliasing of a two-way time-interleaved system appears as an attenuated image of the input signal spectrum, mirrored about one quarter of the aggregate sampling frequency.
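The image mechanism can be reproduced numerically. The sketch below, with deliberately exaggerated illustrative values (a 20% gain error, a 64-point capture, a tone placed exactly on a DFT bin), samples a sine through two cores with mismatched gain and inspects the spectrum with a direct DFT. The image lands at bin N/2 − K, i.e., the input mirrored about a quarter of the aggregate rate, exactly as described above.

```python
import cmath
import math

N = 64                   # capture length (illustrative)
K_IN = 5                 # input tone placed exactly on DFT bin 5
GAIN_B = 1.2             # exaggerated 20% gain error in the second ADC

# Interleaved capture: even samples from ADC A (gain 1), odd from ADC B.
x = []
for n in range(N):
    s = math.sin(2 * math.pi * K_IN * n / N)
    x.append(s if n % 2 == 0 else GAIN_B * s)

def dft_mag(seq, k):
    """Magnitude of DFT bin k, computed directly."""
    return abs(sum(seq[n] * cmath.exp(-2j * math.pi * k * n / len(seq))
                   for n in range(len(seq))))

# Desired tone at bin K_IN; image mirrored about N/4, i.e., at bin N/2 - K_IN.
signal_bin = dft_mag(x, K_IN)
image_bin = dft_mag(x, N // 2 - K_IN)
```

Working through the math, the average gain (1 + GAIN_B)/2 scales the desired tone, while half the gain difference, (GAIN_B − 1)/2, sets the image amplitude; with these values the image sits roughly 21 dB below the signal, which is why even small gain mismatches matter at 14-bit levels.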
The net effect of time-interleaved ADC mismatch is degraded effective resolution and spurious response. For resolutions of 10 bits or more, calibration and/or digital post-processing is necessary to remove the mismatch errors, effectively emulating an array of identically behaving ADCs and preserving the resolution of the individual ADCs in the array.
Frequency-Dependent Mismatch Errors and Compensation
Having reviewed the basics of time-interleaving and mismatch effects, we will now discuss typical ADC spurious performance measured with state-of-the-art time-interleaved 14-bit single-core ADCs, as well as commercially available digital post-processing IP-cores that compensate ADC mismatch over frequency.
The gain mismatch shown in Figure 2 was simulated for a specific input signal frequency. With an input signal at another frequency, a different mismatch is likely to be observed. An example of typical measured gain and phase-delay mismatch when time-interleaving two 400 MS/s, 14-bit ADCs is shown in Figure 3 (curves labeled “Uncalibrated”). It is evident that the mismatches vary over frequency, so the relative gain and phase delay cannot be fully corrected with a static (frequency-independent) gain and delay compensation. A static compensation would shift the mismatch curves vertically in Figure 3, but it would not correct the shape of the gain and phase-delay mismatch curves, and thus would not fully remove the mismatch errors. A frequency-dependent error correction, however, can alter the shape of the mismatch and thus, in principle, completely remove aliasing distortion in arrays of time-interleaved ADCs. Today, such frequency-dependent error correction techniques are commercially available as digital IP-cores, and the topic of mismatch error correction has attracted considerable interest in the research literature, which also highlights academic work and future directions for error correction in time-interleaved ADCs.
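Why a static correction falls short can be illustrated with a toy model. Here the relative response of ADC B is assumed, purely for illustration, to be a two-tap FIR filter, so its gain varies over frequency. A single gain factor calibrated at one frequency removes the mismatch there but leaves a residual at another frequency; an equalizer that tracks the inverse response over frequency (in a real IP-core, a digital correction filter) leaves none.

```python
import cmath
import math

# Hypothetical frequency-dependent mismatch of ADC B relative to ADC A,
# modeled as a two-tap FIR: H(z) = h0 + h1*z^-1 (tap values are illustrative).
h0, h1 = 1.00, 0.05

def gain_b(f_norm):
    """Magnitude response of ADC B at normalized frequency f (0..0.5 = Nyquist)."""
    w = 2 * math.pi * f_norm
    return abs(h0 + h1 * cmath.exp(-1j * w))

f_cal, f_test = 0.05, 0.45           # calibration and test frequencies

# Static correction: one gain factor, measured at the calibration frequency.
static_corr = 1.0 / gain_b(f_cal)
residual_cal = static_corr * gain_b(f_cal) - 1.0     # zero by construction
residual_test = static_corr * gain_b(f_test) - 1.0   # nonzero: shape uncorrected

# Frequency-dependent correction: an equalizer tracking 1/|H| over frequency
# (here evaluated per frequency) cancels the mismatch at every frequency.
residual_fd = (1.0 / gain_b(f_test)) * gain_b(f_test) - 1.0
```

With these assumed tap values the static correction leaves roughly a 9% gain error at the far frequency, which is the "vertical shift only" limitation described above.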
The resulting effective mismatch after compensation over frequency is shown as the curves labeled “Calibrated” in Figure 3. The worst-case uncompensated spurious-free dynamic range (SFDR) measured here was 53 dBc. A static compensation improved the SFDR to 62 dBc, while mismatch error compensation over frequency pushed the image spurs down towards the noise floor.
Example amplitude spectra of the time-interleaved 14-bit, 800 MS/s ADC array with applied digital mismatch error correction are shown in Figure 4. The test signal is a widely separated two-tone signal, for which the digital post-processing provides well over 40 dB of suppression of the aliasing spurs. Such a performance improvement can only be achieved using frequency-dependent mismatch error correction, in particular for widely separated input signal frequencies; static (frequency-independent) correction usually fails such tests. Although a static gain and sampling-time adjustment could be tuned to either tone frequency on its own, a single static setting cannot compensate frequency-dependent errors that differ between the two frequencies.
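Where the spurs of such a two-tone test land follows directly from the mirroring rule: each tone produces an image mirrored about a quarter of the aggregate rate. The sketch below uses the article's 800 MS/s aggregate rate; the two tone frequencies are illustrative choices, not the frequencies used in Figure 4.

```python
FS = 800.0              # aggregate sample rate in MS/s (two 400 MS/s cores)
F1, F2 = 70.0, 310.0    # widely separated test tones in MHz (illustrative)

def image_freq(f, fs):
    """Two-way interleaving image: the input tone mirrored about fs/4."""
    return fs / 2 - f

# Gain/timing mismatch puts images at FS/2 - F1 and FS/2 - F2;
# offset mismatch would additionally place a fixed tone at FS/2.
spurs = sorted(image_freq(f, FS) for f in (F1, F2))
```

For these assumed tones the images fall at 90 MHz and 330 MHz, far from both inputs, which is why widely separated tones make an effective stress test: the correction must be accurate at four well-spread frequencies at once.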
The instantaneous bandwidth over which mismatch errors are corrected is in this case 360 MHz, i.e., 90% of the first Nyquist band.