
ADC programmable digital gain allows tradeoffs in SNR and distortion performance

Matthew Guibord - January 14, 2013

The achievable signal-to-noise ratio (SNR) and distortion performance of high-speed analog-to-digital converters (ADCs) is directly related to the full scale voltage swing of the analog input. A larger input voltage swing allows a higher SNR, whereas a smaller swing improves the overall distortion performance. Of course, improved performance in one aspect comes with decreased performance in the other.

Traditionally this tradeoff has been left to the ADC designer, where a fixed full scale input voltage swing is chosen to meet certain SNR and distortion targets, with 2 Vpp being the de facto industry standard. Now the introduction of programmable digital gain in ADCs leaves the SNR and distortion tradeoff up to the system designer, who can tune it to meet specific system requirements. Further, using digital variable gain amplifiers can simplify system design and increase flexibility.

Digital gain

The digital gain feature in modern high-speed ADCs is simply a multiplication of the captured digital signal by a constant. If a 2 Vpp analog signal fills the full digital range and the constant multiplier is greater than 1, then the full scale signal must be smaller than 2 Vpp to avoid clipping in the digital domain. Thus, the digital gain can be seen as a digital adjustment of the analog full scale input range. Although digital gain in ADCs has been around for a few years, newer implementations include attenuation, in addition to amplification of the digital signal, which allows for both an increase and decrease in the full scale voltage.
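To make the relationship concrete, the short Python sketch below (not from the article; the 2 Vpp nominal full scale and the gain settings are illustrative) converts a digital gain setting in decibels into the usable analog input swing before digital clipping occurs.

def adjusted_full_scale(v_fs_nom_pp=2.0, gain_db=0.0):
    # Usable analog input swing (Vpp) once a digital gain of gain_db is applied.
    # Positive gain shrinks the allowed analog swing; negative gain (attenuation)
    # enlarges it.
    return v_fs_nom_pp * 10 ** (-gain_db / 20)

# Illustrative settings: -2 dB, 0 dB, +3.5 dB, and +6 dB of digital gain.
for g in (-2.0, 0.0, 3.5, 6.0):
    print(f"{g:+.1f} dB digital gain -> {adjusted_full_scale(gain_db=g):.2f} Vpp usable swing")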

The benefit of using the digital gain feature to modify the input full scale range, rather than an analog method, is that the analog portion of the signal path remains unchanged. This leads to predictable variations in SNR and distortion performance over the various digital gain levels. The SNR and distortion trends over digital gain are shown in Figure 1.

 

Figure 1: SNR and distortion performance of dual-channel ADC over various digital gain levels

 

Noise performance

To predict the ADC noise performance, the total ADC noise needs to be broken down into its individual sources and analyzed with the digital gain in mind. The three main sources of noise in a pipelined ADC are thermal, quantization, and clock noise. The total noise performance can be estimated by calculating each contributor separately and then combining them using Equation 1, where VFS_NOM is the nominal peak-to-peak full scale voltage, GD is the digital gain in decibels, and the noise terms are given in volts (RMS).

 

$$\mathrm{SNR} = 20\log_{10}\!\left(\frac{V_{FS\_NOM}\cdot 10^{-G_D/20}}{2\sqrt{2}\cdot\sqrt{V_{THERMAL}^2 + V_{QUANT}^2 + V_{CLOCK}^2}}\right) \qquad \text{(Equation 1)}$$
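As a sanity check on Equation 1, the Python sketch below (illustrative only; all noise terms are supplied as RMS voltages) combines the three noise contributors and returns the estimated SNR for a given digital gain setting.

from math import log10, sqrt

def total_snr_db(v_fs_nom_pp, gain_db, v_thermal, v_quant, v_clock):
    # Full-scale RMS signal after the digital gain adjustment (Equation 1 numerator).
    v_signal_rms = (v_fs_nom_pp * 10 ** (-gain_db / 20)) / (2 * sqrt(2))
    # Root-sum-square combination of the individual RMS noise voltages.
    v_noise_rms = sqrt(v_thermal ** 2 + v_quant ** 2 + v_clock ** 2)
    return 20 * log10(v_signal_rms / v_noise_rms)

# Illustrative numbers: 2 Vpp nominal full scale, +3.5 dB digital gain,
# 200 uVrms thermal, 100 uVrms quantization, and 120 uVrms clock noise.
print(total_snr_db(2.0, 3.5, 200e-6, 100e-6, 120e-6))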

Thermal and quantization noise

Thermal noise can enter at any point in the signal path from the input pins to the final pipeline stage, and is the dominant source of noise in high-speed ADCs at low frequencies. Since digital gain does not change the analog path, thermal noise voltage stays constant over all digital gain levels. It is fairly intuitive that increasing or decreasing the allowed signal swing also increases or decreases SNR by the same amount, respectively.

Quantization noise enters the signal at the conversion from an analog signal to a digital signal, due to the limited number of digital codes. Since the digital gain does not affect the analog components, the error voltage associated with a least-significant bit (LSB) stays constant. Thus, the maximum quantization error stays constant. Therefore, quantization noise follows the same trend as thermal noise.
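For reference, the ideal quantization noise of an N-bit converter can be estimated from the LSB size as V_LSB/√12. This is a textbook relationship rather than one given in the article, and the 14-bit resolution and 2 Vpp full scale below are illustrative.

from math import sqrt

def quantization_noise_rms(v_fs_pp, n_bits):
    # Ideal quantization noise of a uniform quantizer: a quantization error
    # uniformly distributed over one LSB has an RMS value of V_LSB / sqrt(12).
    v_lsb = v_fs_pp / (2 ** n_bits)
    return v_lsb / sqrt(12)

# Illustrative: a 14-bit, 2 Vpp converter has an LSB of ~122 uV and
# roughly 35 uVrms of quantization noise.
print(quantization_noise_rms(2.0, 14))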

In order to estimate the change in SNR due to the digital gain, the thermal noise level needs to be known. The ADC's combined thermal and quantization noise can be extracted from the SNR specified at the lowest listed frequency in the datasheet, where clock noise is negligible. The total noise voltage at low frequencies can be calculated using Equation 2, where SNRLOW_FREQ is the low-frequency SNR from the datasheet.

 

$$V_{THERMAL+QUANT} = \frac{V_{FS\_NOM}}{2\sqrt{2}}\cdot 10^{-SNR_{LOW\_FREQ}/20} \qquad \text{(Equation 2)}$$
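A minimal Python sketch of Equation 2 follows; the 2 Vpp full scale and 70 dBFS low-frequency SNR are illustrative values, not taken from a specific datasheet.

from math import sqrt

def thermal_plus_quant_noise(v_fs_nom_pp, snr_low_freq_db):
    # Back out the combined thermal + quantization RMS noise voltage from the
    # low-frequency SNR quoted in the datasheet (Equation 2).
    v_fs_rms = v_fs_nom_pp / (2 * sqrt(2))
    return v_fs_rms * 10 ** (-snr_low_freq_db / 20)

# Illustrative: a 2 Vpp full scale and 70 dBFS low-frequency SNR imply
# roughly 224 uVrms of combined thermal and quantization noise.
print(thermal_plus_quant_noise(2.0, 70.0))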

Clock noise

Noise on the sampling clock causes the sampling period to deviate from cycle to cycle, which is called clock jitter. This deviation from the ideal sampling point causes the wrong input voltage to be sampled and therefore degrades the SNR. Clock jitter becomes more important at higher input frequencies and higher input voltages because the larger slew rates result in larger possible voltage deviations. Equation 3 estimates the SNR for a given input frequency in the presence of a known amount of ADC aperture jitter, taken from the datasheet, and external clock jitter, both given in seconds.

 

$$SNR_{JITTER} = -20\log_{10}\!\left(2\pi f_{IN}\sqrt{t_{APERTURE}^2 + t_{CLOCK}^2}\right) \qquad \text{(Equation 3)}$$
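The sketch below evaluates Equation 3; the 100 fs aperture jitter, 150 fs external clock jitter, and 250 MHz input frequency are illustrative values only.

from math import log10, pi, sqrt

def jitter_snr_dbc(f_in_hz, t_aperture_s, t_clock_s):
    # SNR limit (dBc) set by the combined aperture and external clock jitter
    # for a full-scale sine wave at f_in_hz (Equation 3).
    t_total = sqrt(t_aperture_s ** 2 + t_clock_s ** 2)
    return -20 * log10(2 * pi * f_in_hz * t_total)

# Illustrative: 100 fs of aperture jitter plus 150 fs of clock jitter at a
# 250 MHz input limits the SNR to roughly 71 dBc.
print(jitter_snr_dbc(250e6, 100e-15, 150e-15))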

Notice that this is a dBc quantity, referenced to the input signal level. Therefore, the maximum noise voltage due to clock jitter varies with the full scale input voltage. The noise voltage due to clock jitter in the presence of digital gain is given by Equation 4, where VINPUT_dBFS is the expected analog input voltage in terms of decibels referenced to the full scale voltage (dBFS).

 

$$V_{CLOCK} = \frac{V_{FS\_NOM}\cdot 10^{-G_D/20}}{2\sqrt{2}}\cdot 10^{V_{INPUT\_dBFS}/20}\cdot 10^{-SNR_{JITTER}/20} \qquad \text{(Equation 4)}$$
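Equation 4 can be sketched the same way; the input level, digital gain, and jitter SNR used below are illustrative and simply reuse the numbers from the earlier examples.

from math import sqrt

def clock_noise_voltage(v_fs_nom_pp, gain_db, v_input_dbfs, snr_jitter_dbc):
    # RMS noise voltage due to clock jitter (Equation 4). Because jitter noise is
    # referenced to the signal level, it scales with the digitally adjusted full
    # scale and with the input backoff in dBFS.
    v_fs_rms = (v_fs_nom_pp * 10 ** (-gain_db / 20)) / (2 * sqrt(2))
    v_signal_rms = v_fs_rms * 10 ** (v_input_dbfs / 20)
    return v_signal_rms * 10 ** (-snr_jitter_dbc / 20)

# Illustrative: 2 Vpp nominal full scale, +3.5 dB digital gain, a -1 dBFS input,
# and the ~71 dBc jitter SNR from the Equation 3 example give roughly 119 uVrms.
print(clock_noise_voltage(2.0, 3.5, -1.0, 71.0))

Feeding this value, together with the thermal and quantization noise from Equation 2, back into Equation 1 shows the expected behavior: the clock-jitter contribution tracks the signal level and is unaffected by digital gain, while the fixed thermal and quantization noise erode the SNR decibel for decibel as the digital gain increases.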

