20-bit DAC demonstrates the art of digitizing 1 ppm, part 1: exploring design options
If you design and use precision instruments, a 1-ppm measurement is a nearly impossible dream. This level of performance was once attainable only from large, slow, and expensive instruments requiring extreme care in handling and use. Now, however, a DAC design that uses a simple, powerful feedback loop can reach the magic 1-ppm level using approximately $100 worth of components (Figure 1).
High-precision, instrumentation-grade digital-to-analog conversion has undergone significant progress (see sidebar 'A history of digital-to-analog conversion'). Ten years ago, 12-bit DACs were premium devices. Today, 16-bit DACs are available and increasingly common in system design. These DACs are true precision devices with less than 1-LSB linearity error and 1-ppm/°C drift. Nonetheless, some DAC applications require even higher performance. Automatic test equipment, instruments, calibration apparatus, laser trimmers, medical electronics, and other applications often require DAC accuracy beyond 16 bits. You can find 18-bit DACs in circuit-assembly form, although they are expensive and require frequent calibration. Designs that use manually switched resistor ratios can achieve DAC resolutions of 20 bits and beyond. The most accurate of these resistor-based DACs, Lord Kelvin's Kelvin-Varley divider, can achieve ratio accuracies of 0.1 ppm (23-plus bits). These devices, although amazingly accurate, are large, slow, and extremely costly; standards laboratories are typically the only places where they are still in use. (Part 2 discusses this type of DAC in more detail.)
Thus, a practical, 20-bit (1-ppm) DAC that is easy to construct and does not require frequent calibration is a useful development. The scheme in Figure 1 is based on the performance of a true 1-ppm ADC with scale and zero drifts less than 0.02 ppm/°C. The DAC uses this device, the 24-bit LTC2400, as a feedback element in a digitally corrected loop to realize 20-bit performance. The ADC has an integrated oscillator, 4-ppm nonlinearity, and 0.3-ppm rms noise. It uses delta-sigma technology to provide extremely high stability.
ADCs and DACs reverse roles
In practice, the 20-bit DAC's overall output is the 'slave' of this monitoring 'master' ADC, which feeds digital information to a code comparator. The code comparator determines the difference between the user-input word and the master ADC's output, and it presents a corrected code to the slave DAC. In this fashion, the loop continuously corrects the slave DAC's drifts and nonlinearity to an accuracy that the ADC and VREF determine.
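The correction mechanism can be sketched in a few lines. The following is a hypothetical model, not the article's firmware: `slave_dac_output` stands in for the analog slave DAC, deliberately given a gain and offset error, and the master ADC is assumed ideal.

```python
def correction_loop_step(user_code, adc_code, dac_code):
    """One loop iteration: nudge the slave DAC toward the user's target."""
    error = user_code - adc_code          # user input vs. master ADC reading
    return dac_code + error               # corrected code for the slave DAC

# Toy model of the slave DAC: 2% gain error plus a 300-count offset.
def slave_dac_output(code):
    return 0.98 * code + 300

user_code = 500_000                       # requested value, in ADC counts
dac_code = user_code                      # initial guess: pass code through
for _ in range(50):
    adc_code = slave_dac_output(dac_code)  # master ADC monitors the output
    dac_code = correction_loop_step(user_code, adc_code, dac_code)
```

After a few iterations the output the ADC reads equals the requested code, despite the slave DAC's errors, which is exactly the property the text describes: accuracy is set by the ADC and reference, not by the slave DAC.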
This arrangement represents a turnabout for DACs and ADCs. For years, designers have used DACs as feedback elements in loops that form ADCs. Here, the roles reverse: the ADC serves as a circuit component inside a feedback loop that forms a DAC, rather than playing its traditional stand-alone role.
This loop has a number of desirable attributes. The ADC and its reference set accuracy limitations. The sole DAC requirement is that it must be monotonic. No other components in the loop need to be stable. Additionally, loop behavior averages low-order bit indexing and jitter, obviating the loop's inherent small-signal instability. You can use classical remote sensing or digitally based sensing by placing the ADC at the load. The ADC's SO-8 package and lack of external components make this digitally incarnated Kelvin-sensing scheme practical.
The circuit realization of the scheme in Figure 2 has a resolution of 1 ppm, with software correction, and full-scale error drift of 0.1 ppm/°C (Table 1). In Figure 3, the circuit's linearity-versus-output-code data shows that overall linearity is within 1 ppm. Output noise, measured in a 0.1-to-10-Hz bandpass, is approximately 0.2 LSB (Figure 4). Equipment limitations, which set a noise floor of approximately 0.2 µV, somewhat corrupt this measurement. The ADC's conversion rate combines with the loop's sampled data characteristic and slow amplifiers to dictate a relatively slow DAC response. The full-scale slew response requires approximately 150 µsec. Full-scale DAC settling time to within 1 ppm (±5 µV) requires approximately 1400 msec (Figure 5a). A smaller step of 500 µV needs only 100 msec to settle within 1 ppm (Figure 5b).
Establishing and maintaining confidence in a 1-ppm linearity measurement is uncomfortably close to the state of the art. And measuring settling time is not straightforward, even at the relatively slow speeds involved in this scheme. Part 2 discusses linearity, settling time, and noise-measurement techniques in detail.
Bit overlap ensures loop capture
The slave DAC actually comprises two DACs (Figure 2). The circuit feeds the upper 16 bits of the code comparator's output to a 16-bit DAC, the MSB DAC, while a separate DAC, the LSB DAC, converts the lower bits. Although the circuit presents a total of 32 bits to the two DACs, 8 bits of overlap assure loop capture under all conditions. The composite DAC's resultant 24-bit resolution provides 4 bits of indexing range below the 20th bit, ensuring a stable LSB at 1 ppm of scale.
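A minimal sketch of how a 24-bit composite code might split across the two DACs. The function names and the policy of biasing the LSB DAC toward mid-scale are illustrative assumptions, not the comparator's actual algorithm; the point is that the 8-bit overlap leaves the LSB DAC headroom to index in either direction.

```python
OVERLAP_BITS = 8                     # overlap between the two 16-bit DACs
LSB_MID = 1 << 15                    # LSB DAC mid-scale

def split_code(target):
    """Split a 24-bit composite code, parking the LSB DAC near mid-scale."""
    msb = max(0, target - LSB_MID) >> OVERLAP_BITS
    msb = min(msb, (1 << 16) - 1)    # clamp to the MSB DAC's 16-bit range
    lsb = target - (msb << OVERLAP_BITS)
    return msb, lsb

def composite(msb, lsb):
    """Analog summation, modeled in composite 24-bit counts."""
    return (msb << OVERLAP_BITS) + lsb
```

With the LSB DAC sitting near mid-scale, the loop can add or subtract hundreds of MSB-DAC steps' worth of correction without running out of range, which is how the overlap guarantees capture.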
IC1 and IC2 transform the LSB and MSB DAC output currents into voltages, and IC3 adds the two voltages. The circuit arranges IC3's scaling so that the correction loop can always capture and correct any combination of zero- and full-scale errors. IC3's output, which is the circuit output, drives the ADC through IC4, which provides buffering to drive loads and cables.
The code comparator takes the difference between the input word and the ADC's digital output and produces a corrected code. The circuit applies this corrected code to the MSB and LSB DACs, closing a feedback loop. The ADC and voltage-reference errors determine the loop's integrity. The resistor and diodes at the 5V-powered ADC protect it from inadvertent IC4 outputs, such as power-up spikes, transients, and a lost supply. IC6 is a reference inverter, and IC5 provides a clean ground potential to both DACs.
The code comparator enforces the loop by setting the slave DAC inputs to the code that equalizes the user input and the ADC's output. The code comparator's digital hardware consists of three input data latches and a PIC-16C55A processor (Figure 6). Inputs include user data, such as the DAC inputs, curvature correction for linearity via DIP switches, a convert command, and a selectable filter-time constant. The DAC RDY output indicates when the DAC output has settled to the user's input value. Additional outputs and an input control and monitor the analog section in Figure 2 to effect loop closure. Comparator code by Florin Opresch drives the processor.
ADC sets the linearity
The ADC's linearity determines overall DAC linearity. The LTC2400 ADC has approximately ±2-ppm nonlinearity. In applications that can tolerate this error, no correction is necessary. Figure 7's lower curve shows the ADC's inherent nonlinearity, along with the first-order correction (upper curve) needed to bring nonlinearity within 1 ppm (center curve). If you need true 1-ppm performance, as in Figure 3, you can use software-based correction, which is part of the previously mentioned comparator code. The software generates the desired 'inverted bowl' correction characteristic. You can set the correction to complement the residual nonlinearity characteristic of any individual LTC2400 using DIP switches at the code comparator.
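A minimal sketch of such an inverted-bowl correction, assuming a simple parabola that is zero at the range endpoints. The peak amplitude `k_ppm` plays the role of the DIP-switch setting matched to the individual LTC2400; the actual comparator code's correction may differ in form.

```python
FULL_SCALE = 1 << 24                  # 24-bit ADC output range

def bowl_correction(code, k_ppm=2.0):
    """Parabolic correction: zero at the endpoints, k_ppm ppm at mid-scale."""
    x = code / FULL_SCALE             # 0..1 position along the transfer curve
    return k_ppm * 1e-6 * FULL_SCALE * 4 * x * (1 - x)

def corrected(code, k_ppm=2.0):
    """Add the inverted bowl to cancel the ADC's bow-shaped nonlinearity."""
    return code + bowl_correction(code, k_ppm)
```

Because the correction vanishes at zero and full scale, it alters only the mid-range bow and leaves the endpoint calibration untouched.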
The LTC2410 offers another approach to improved linearity. This LTC2400 variant has improved linearity but specifies a maximum input range of 2.5V. Figure 8a divides the DAC output with a precision-resistor ratio set, which allows you to use the LTC2410 while maintaining the 5V full-scale output. The disadvantage of this approach is the ratio set's additional 0.1-ppm/°C and 5-ppm/year error contribution. Figure 8b is similar to Figure 8a, although the ratio set's new value in Figure 8b permits a 10V full-scale output.
Modifying the output range
Some applications may require outputs other than the text circuit's 0-to-5V range. The simplest variation is a bipolar output (Figure 9). This circuit, a summing inverter, subtracts the DAC output from a reference to obtain a bipolar output. You can vary resistor and reference values to obtain different output excursions. The LT1010 output buffer provides drive capability, and the chopper-stabilized amplifier maintains 0.05-µV/°C stability. The resistors introduce a 0.3-ppm/°C error contribution.
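The bipolar stage amounts to simple arithmetic. Below is a hedged model of such a summing inverter; the gain values are illustrative choices that map a 0-to-5V DAC output onto a ±5V swing, not the article's actual resistor values.

```python
def bipolar_out(v_dac, v_ref=-5.0, gain_dac=2.0, gain_ref=1.0):
    """Summing inverter: Vout = -(Gd*Vdac + Gr*Vref).

    With a -5V reference at unity gain and the DAC summed at a gain of 2,
    a 0-to-5V DAC input produces a +5V-to--5V output.
    """
    return -(gain_dac * v_dac + gain_ref * v_ref)
```

Varying the two gains (i.e., the resistor ratios) and the reference value yields other output excursions, as the text notes.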
Another approach for achieving voltage gain divides the DAC output before feedback to the ADC (Figure 10). In this case, the 1-to-1 divider ratio sets a 10V output, assuming an ADC reference of 5V. As in Figure 9, the resistors add a slight temperature error of approximately 0.1 ppm/°C for the specified ratio set.
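The divider-feedback scaling works out as follows. The loop forces the divided-down output to equal the requested value, so the output is multiplied by the reciprocal of the divider ratio; the function is a sketch of that arithmetic, with no circuit values assumed beyond the 1-to-1 example in the text.

```python
def full_scale_out(adc_ref, r_top, r_bottom):
    """Full-scale output when the ADC sees Vout * r_bottom/(r_top + r_bottom).

    The loop servos the divider tap to adc_ref, so the output is
    adc_ref scaled up by the inverse divider ratio.
    """
    return adc_ref * (r_top + r_bottom) / r_bottom

# 1-to-1 divider (equal resistors) with a 5V ADC reference: 10V full scale.
```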
Figure 11 uses active devices for voltage outputs as high as ±100V. A chopper-stabilized amplifier drives the discrete high-voltage stage in a closed-loop fashion. Q1 and Q2 furnish voltage gain and feed the Q3 and Q4 emitter-follower outputs. Q5 and Q6 set the current limit at 25 mA by diverting the output drive when voltages across the 27Ω shunt resistors become too high. The local 1-MΩ/50-kΩ feedback pairs set stage gain at 20, allowing the drive from the LTC1150 to cause a full ±120V output swing. The local feedback reduces stage gain-bandwidth, easing dynamic control. Frequency compensation for this stage is relatively simple because only Q1 and Q2 contribute voltage gain. Additionally, the high-voltage transistors have large junctions, which result in low fTs, and special high-frequency roll-off precautions are unnecessary. Because the stage inverts the input, feedback returns to the amplifier's positive input. Frequency compensation consists of rolling off the amplifier with the local 0.005-µF/10-kΩ pair. Using four individual resistors minimizes heating and voltage-coefficient errors in the feedback term. Trimming involves selecting the indicated resistor for an exactly 100.0000V output with the DAC at full-scale.
A fourth approach increases output-current capability with a current-gain stage inside the DAC output amplifier's feedback loop (Figure 12). This stage replaces buffer IC4 in Figure 2. Two options that differ in output capacity are possible. Note that as output current rises, wiring resistance becomes a potentially large error term. For example, at an output of only 10 mA, a wiring resistance of 0.001Ω introduces a 10-µV drop, which is a 2-ppm error. Because of this wiring-resistance-induced error, the circuit should supply heavy loads using short, highly conductive paths and use remote sensing.
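The wiring-drop arithmetic above generalizes easily, and makes clear why even milliohms matter at ppm levels:

```python
def wiring_error_ppm(i_load_a, r_wire_ohm, full_scale_v=5.0):
    """Error, in ppm of full scale, from an IR drop in the output wiring."""
    drop_v = i_load_a * r_wire_ohm
    return drop_v / full_scale_v * 1e6

# Text's example: 10 mA through 1 mOhm drops 10 uV, 2 ppm of a 5V scale.
```

Doubling the load current or the wiring resistance doubles the error, which is why heavy loads call for short, heavy conductors and remote (Kelvin) sensing.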
Choose a voltage reference
The 5VREF input to the ADC in Figure 2 requires some attention. You can use self-contained references, which are convenient and easy to apply. Some references, such as the LM199A and the LTZ1000A, require external circuitry but offer high performance. In general, select a reference that offers the lowest time and temperature drifts. Regardless of the reference you choose, you must trim the reference to establish absolute DAC accuracy.
A circuit built around the LTZ1000A offers high stability (Figure 13). IC1 senses the LTZ1000A die temperature and accordingly controls the IC heater via the 2N3904. IC2 controls reference current. Kelvin connections sense the Zener reference level, minimizing voltage-drop effects. A single-point ground eliminates return-current mixing and the attendant errors that this mixing produces.
Other choices for reference buffering employ chopper-stabilized amplifiers augmented with buffer-output stages (Figure 14). Buffer error is extremely low. Figure 14a shows a simple unity-gain stage that transmits the input to the output with low error and minimal reference loading. Figure 14b takes moderate gain, allowing a 7V-reference input to produce 10V at the output. Figure 14c offers two ways to get 5V from the nominal 7V input. A precision divider lightly loads the reference in one case. In the optional case, the circuit avoids loading the reference by placing the divider at the output and driving the ADC's reference input from the divider output.
A history of digital-to-analog conversion
People have been converting D/A quantities for a long time. Probably among the earliest uses was the summing of calibrated weights in weighing applications (Figure A, left center). Early electrical D/A conversion inevitably involved switches and resistors of different values, usually arranged in decades. The application was often the calibrated balancing of a bridge or reading an unknown voltage, via null detection. The most accurate resistor-based DAC of this type is Lord Kelvin's Kelvin-Varley divider (figure, large box). Based on switched resistor ratios, it can achieve a ratio accuracy of 0.1 ppm (23 bits or more), and standards laboratories still largely employ it. See part 2 (April 26, 2001) of this series for the details of Kelvin-Varley dividers. High-speed D/A conversion resorts to electronically switching the resistor network. Early electronic DACs were built at the board level using discrete precision resistors and germanium transistors (figure, center foreground, a 12-bit DAC from a Minuteman missile D-17B inertial navigation system, circa 1962). In the mid-1960s, Pastoriza Electronics probably produced the first electronically switched DACs available as standard products. Other manufacturers followed, and discrete and monolithically based modular DACs (figure, right and left) became popular by the 1970s. The units were often potted (figure, left) for ruggedness, performance, or preservation of proprietary knowledge. Hybrid technology produced smaller packages (figure, left foreground). The development of Si-Chrome resistors permitted precision monolithic DACs, such as the LTC1595 (figure, immediate foreground). In keeping with all things monolithic, the cost-performance trade-off of modern high-resolution IC DACs is a bargain. Think of it! A 16-bit DAC in an eight-pin IC package. What Lord Kelvin would have given for a credit card and LTC's phone number!
Jim Williams is a staff scientist at Linear Technology Corp (Milpitas, CA, www.linear-tech.com), where he specializes in analog-circuit and instrumentation design. He has served in similar capacities at National Semiconductor, Arthur D Little, and the Instrumentation Laboratory at the Massachusetts Institute of Technology (Cambridge, MA), where he first encountered serious 1-ppm measurement using the Kelvin-Varley divider. A former student at Wayne State University (Detroit), Williams enjoys art, collecting antique scientific instruments, and restoring old Tektronix oscilloscopes.
Thanks to Patrick Copley, Jim Brubaker, Florin Opresch, Josh Guerrero, and VJ Sun for their contributions.