Characterizing mixed-signal ICs for production

February 20, 2013

Silver samples are often used to show the stability of the test solution over temperature. They can also be stored as reference parts for later verification of the test solution. The testing procedure for these devices is to test three parts at three temperatures, hundreds of times each.

Using the gathered data, you can verify the stability of every device with the help of a statistical report tool. A double distribution or an instability, for example, can be seen immediately. You can keep the test results for later reference, comparing future measurements with the stored data.
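As a rough illustration of what such a statistical check might look like, the Python sketch below summarizes repeated measurements of one test parameter and flags a suspiciously wide or double-peaked distribution. The data, the threshold, and the double-distribution heuristic are all invented for illustration; this is not the author's report tool.

```python
import numpy as np

def stability_report(runs, limit_cv=0.01):
    """Summarize repeated results of one test parameter for one part.

    limit_cv is a hypothetical threshold on the coefficient of variation.
    """
    mean = runs.mean()
    std = runs.std(ddof=1)
    cv = std / abs(mean)
    # Crude double-distribution check: two separated peaks leave the
    # central histogram bins nearly empty.
    hist, _ = np.histogram(runs, bins=20)
    bimodal = hist[8:12].sum() < 0.02 * len(runs)
    return {"mean": mean, "std": std, "stable": cv <= limit_cv and not bimodal}

# Example: one part measured 500 times at each of the three temperatures
rng = np.random.default_rng(0)
for temp in (-40, 25, 125):
    runs = rng.normal(loc=1.20 + 0.0002 * temp, scale=0.002, size=500)
    print(temp, stability_report(runs))
```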

Assume the device to be tested has the following operating conditions: a minimum operating temperature of −40°C, a typical operating temperature of +25°C, and a maximum operating temperature of +125°C. In the graph in Figure 3, the black bar shows the value at room temperature, the green bar shows the value at high temperature (+125°C), and the blue bar shows the value at cold temperature (−40°C). The data clearly shows that the value increases at high temperature and decreases at low temperature.


Figure 3 This silver-sample report shows the drift between temperatures.
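To put numbers on such a drift, each corner can be expressed relative to the room-temperature value. The values below are made up purely to demonstrate the arithmetic:

```python
# Hypothetical mean values of one parameter at the three corners
room, hot, cold = 1.225, 1.250, 1.205    # +25 degC, +125 degC, -40 degC

drift_hot = (hot - room) / room * 100    # percent drift at +125 degC
drift_cold = (cold - room) / room * 100  # percent drift at -40 degC
print(f"hot: {drift_hot:+.2f} %  cold: {drift_cold:+.2f} %")
```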

GR&R for plastic parts

In addition to testing at the wafer level, engineers must test packaged parts to verify that no damage occurred during the packaging process. To check repeatability and reproducibility across testers, test a minimum of 50 plastic-packaged parts twice on one tester, in the same device order, then repeat the procedure on another tester and compare the results. An optimal result would be a 100% overlay of both sets of data. If the results are not closely matched, you must find the root cause of the discrepancy.
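A minimal sketch of this comparison, with synthetic data standing in for the two testers, might look as follows. The array shapes mirror the procedure (50 parts, two runs per tester, same device order); the noise levels and the 2 mV tester shift are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(1)
true_vals = rng.normal(1.0, 0.01, size=50)            # 50 packaged parts
tester_a = true_vals + rng.normal(0, 0.001, (2, 50))  # 2 runs on tester A
tester_b = true_vals + 0.002 + rng.normal(0, 0.001, (2, 50))  # shifted tester B

repeatability_a = tester_a.std(axis=0).mean()  # within-tester spread, per device
repeatability_b = tester_b.std(axis=0).mean()
shift = tester_b.mean() - tester_a.mean()      # reproducibility: tester offset

print(f"repeatability A: {repeatability_a:.4f}")
print(f"repeatability B: {repeatability_b:.4f}")
print(f"tester-to-tester shift: {shift:.4f}")
```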

The tester-to-tester comparison in Figure 4 shows a shift between two testers of the same type. The test results are not entirely consistent, yet results between testers need to match closely. The difference is usually traceable to something simple, such as the measurement range of the instrument used. Though the two testers in this example are of the same type, the GR&R process can also be used for tester transfers; that is, between two different tester types.


Figure 4 This tester-to-tester comparison shows a shift between the testers for which the root cause must be determined.

GR&R for wafer sort

An alternative to GR&R testing on packaged parts is the bin-flip wafer technique (Figure 5). Rather than testing plastic parts, the technique tests a complete wafer on tester 1 and then again on tester 2. The difference in the bin results (bin 1 through bin 7) between the two testers should not exceed a predefined limit. If a measurement result is not repeatable on the other tester, review the failing tests to determine the problem.


Figure 5 A report for a bin-flip wafer test compares results from two testers. Several designations and a color code on the wafer map indicate pass/fail results: Pass Pass (the site passed on tester 1 and tester 2); Fail Fail (the site failed on both testers); Pass Fail Flip (the site failed on tester 2); Fail Pass Flip (the site failed on tester 1); and Fail Flip (the site failed another test on tester 2).
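Under the assumption that bin 1 is the passing bin, the designations in Figure 5 can be reproduced with a small classification routine like the sketch below; the bin numbers in the example are invented.

```python
def classify(bin1, bin2):
    """Map a die's bins from tester 1 and tester 2 to a Figure 5 designation.

    Assumes bin 1 is the passing bin; all other bins are fail bins.
    """
    p1, p2 = bin1 == 1, bin2 == 1
    if p1 and p2:
        return "Pass Pass"
    if p1 and not p2:
        return "Pass Fail Flip"   # failed only on tester 2
    if not p1 and p2:
        return "Fail Pass Flip"   # failed only on tester 1
    if bin1 != bin2:
        return "Fail Flip"        # failed on both, but a different test
    return "Fail Fail"

# Example: bins from tester 1 and tester 2 for five sites
tester1 = [1, 1, 3, 5, 1]
tester2 = [1, 4, 1, 5, 1]
flips = [classify(b1, b2) for b1, b2 in zip(tester1, tester2)]
print(flips)
flip_rate = sum("Flip" in f for f in flips) / len(flips)
print(f"flip rate: {flip_rate:.0%}")  # compare against the predefined limit
```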

Board versus board

To ensure that multiple test boards have the same electrical characteristics, a measurement-system-analysis (MSA) report must be generated. The goal of this report is to verify that two or more load boards show the same electrical behavior. The example test flow described below assumes that two load boards must be released at the same time.

Fifty parts are tested in order, twice on one board and then twice on the other. It is important to test the devices in the same order so that results are compared for the same device. Figure 6 reveals an offset between the boards: Figure 6a shows a histogram of the measurement results, in which the two distributions represent the two boards; Figure 6b shows the measurements in sequence. In this example, the trend line has a small offset to the left, which indicates a difference between devices 1 to 39 and devices 40 to 50.
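A sketch of the underlying comparison, with synthetic measurements, could look like this. The per-device pairing is the important part, since the same device's results on the two boards are compared; the board offset and noise levels are assumed.

```python
import numpy as np

rng = np.random.default_rng(2)
parts = rng.normal(0.50, 0.005, size=50)              # 50 devices
board1 = parts + rng.normal(0, 0.0005, (2, 50))       # 2 runs on board 1
board2 = parts - 0.001 + rng.normal(0, 0.0005, (2, 50))  # small board offset

# Compare each device against itself: mean of its runs per board
per_device_offset = board2.mean(axis=0) - board1.mean(axis=0)
print(f"mean board offset: {per_device_offset.mean():.5f}")
print(f"device-to-device spread of offset: {per_device_offset.std():.5f}")
```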


Figure 6 Board-to-board comparison data indicates a difference between devices 1 to 39 and 40 to 50. Shown are a histogram of the measurement results (a) and measurements in sequence (b).

Board ID

Board identification ensures that the board being tested is the correct load board. Implement a board ID by writing a unique ID to the EEPROM on the load board. An error will then be indicated if the wrong board is selected for the production interface. Since each board has a unique ID, every test performed with it can be traced. In a worst-case situation, production lots can be recalled if tested with a defective board. To improve the measurement correlation between ATE and bench testing, offset calibration factors can be used and loaded automatically, depending on the board used.
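How such a check might be wired into a test program is sketched below. Everything here is hypothetical: read_eeprom_id() stands in for whatever mechanism the ATE provides to read the load board's EEPROM, and the board IDs and offset values are invented.

```python
# Released boards and their per-board offset calibration factors (invented)
RELEASED_BOARDS = {
    "LB-0001": {"offset_v": +0.0012},
    "LB-0002": {"offset_v": -0.0007},
}

def check_board(read_eeprom_id):
    """Verify the load board at program load and fetch its calibration."""
    board_id = read_eeprom_id()
    if board_id not in RELEASED_BOARDS:
        raise RuntimeError(f"unknown or wrong load board: {board_id!r}")
    # Load the offsets for this specific board so that ATE results
    # correlate with bench measurements.
    cal = RELEASED_BOARDS[board_id]
    print(f"board {board_id}: applying offset {cal['offset_v']:+.4f} V")
    return cal

cal = check_board(lambda: "LB-0001")  # stand-in for the real EEPROM read
```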

Quality screening

After all the required data is collected, a quality-assurance (QA) screening procedure can commence. A minimum of 1,000 good parts must be tested against the guard-banded limits. At completion of that testing, all parts must be tested at the QA temperatures against the specification limits. No failures are allowed at this point; if failures appear, the guard bands and the test-program stability must be reverified.
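The screening logic itself reduces to two limit checks, sketched below with synthetic data; the spec limits, the guard band, and the modeled temperature drift are all assumptions for the example.

```python
import numpy as np

SPEC_LO, SPEC_HI = 1.180, 1.220   # hypothetical datasheet limits (V)
GUARD = 0.002                     # assumed guard band

rng = np.random.default_rng(3)
true_vals = rng.normal(1.200, 0.005, size=1500)

# Production screen against the guard-banded (tightened) limits
prod = true_vals + rng.normal(0, 0.0005, 1500)
good = true_vals[(prod >= SPEC_LO + GUARD) & (prod <= SPEC_HI - GUARD)]
assert len(good) >= 1000, "need at least 1,000 good parts for QA screening"

# QA re-test at the QA temperatures against the full spec limits;
# temperature drift makes the re-measurement differ from production.
qa = good + rng.normal(0, 0.0005, len(good)) + 0.0002
qa_fail = int(((qa < SPEC_LO) | (qa > SPEC_HI)).sum())
print(f"good parts: {len(good)}, QA failures: {qa_fail}")  # must be zero
```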

Verifying that all the data results match and that no irregularities were found during the release phase of a test program minimizes the possibility of problems at a later stage. A stable and verified test solution can also help you avert product-yield problems and QA failures down the road.



Author’s biography

Robert Seitz is a test development engineer in the Full Service Foundry business unit at AMS (formerly austriamicrosystems). He has worked in various areas of automated test engineering for seven years at the company.
