
Optimizing compression in scan-based ATPG DFT implementations

By Chris Allsup

March 01, 2007



In the 1990s, Carnegie Mellon researchers created a comprehensive scan-test cost model that demonstrated how design for test (DFT) contributes to profitability (Ref. 1). With scan compression in wide use, it is time for a new economic model, consistent with the earlier framework but focused on scan compression, not just scan. To that end, I have developed an economic theory (Ref. 2) that unifies test data reduction and time reduction concepts and considers the impact of test pattern inflation.

I have written in these pages previously (Ref. 3) on test application time reduction (TATR), which along with test data volume reduction (TDVR) has diminishing returns as compression increases (Ref. 4). TATR is an asymptotic function. For a scan-compression factor of x (assuming the scan chains are well balanced with no pattern-inflation issues), TATR can be expressed as:

TATR = 100% • (1 - 1/x)


Figure 1. An optimal compression level, λ, minimizes test cost. xC is the compression level needed to fit a complete scan ATPG pattern set, PC, into the fixed amount of tester memory. The TATR phase denotes the economic benefit of increasing compression beyond xC, up to λ.


Therefore, at 50X compression, you can expect at most a 98% test time reduction: 100% • (1 – 1/50) = 98%. And at 100X compression, you can expect at most a 99% test time reduction: 100% • (1 – 1/100) = 99%.
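To see these diminishing returns numerically, here is a minimal Python sketch; the function name and the sample compression factors are mine, not the article's:

```python
def tatr(x):
    """Maximum test application time reduction (percent) at compression
    factor x, assuming balanced scan chains and no pattern inflation."""
    return 100.0 * (1.0 - 1.0 / x)

for x in (2, 4, 10, 50, 100):
    print(f"{x:>3}X compression -> at most {tatr(x):.1f}% test time reduction")
```

Doubling compression from 50X to 100X buys only one more percentage point of time reduction, which is why the area cost eventually dominates.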

An optimal compression level, λ, minimizes the total costs of test and, consequently, maximizes profits. Above this optimal compression level, the total costs of test gradually increase again. In Figure 1, the x-axis represents the ratio of the number of internal scan chains to the number of external scan chains.

In this article, I address the cost impact, in terms of dollars per good die, on digital-scan tests that employ automatic test pattern generation (ATPG), with respect to the following major cost components of test:

  • Cost of field escapes (CESC) is the cost of defective ICs that are incorrectly identified as good parts, so they “escape” to the field. (You can also consider the cost of escapes going from, say, wafer probe to system-level test). If your pattern set doesn’t fit within tester memory, you can use compression to significantly reduce this cost.

  • Test execution cost (CEXEC) is the cost directly related to the test cycle count and the tester time spent running ATPG patterns. Compression can decrease this cost, although not always.

  • Silicon area overhead cost (CSILICON) consists of the cost of the scan-compression circuits and interconnects in your design. This cost is proportional to the amount of compression in a design.

In addition, there are two costs that are less affected by compression:

  • Cost of test preparation (CPREP) is the engineering, compute-resource, and tool costs related to preparing ATPG test patterns. Establishing a meaningful relationship between the scan-compression level and CPREP is difficult. For automated scan compression, there is not a large impact on CPREP in going from, say, 10X compression to 50X compression.

  • Cost to diagnose failed parts (CDIAG) can be significant, especially for highly compressed designs that you debug manually. But as with CPREP, CDIAG is difficult to quantify in terms of compression level. Because automated tools are getting better at diagnosing compressed designs, this cost is relatively insensitive to compression level.

Consequently, the only reasons to implement scan compression are to reduce CESC and CEXEC, but you must keep in mind that any increase in compression also increases CSILICON.

TDVR and TATR revisited

TDVR can only reduce the cost of escapes, and TATR can only reduce test execution cost. To understand why, consider a complete high-fault-coverage ATPG pattern set. Call the number of patterns PC, in which “C” denotes “complete.” Also assume that you can’t fit the entire pattern set into the tester memory, M, allocated for ATPG stimulus and response patterns, so you must use compression.

To load a subset of these patterns into tester memory, you must compress by exactly the test data volume at pattern level P divided by M. This is just the ratio of P to P0, the number of patterns you can load into memory without compression:

x = P/P0 = (3 • F • P)/M    (Equation 1)

where F is the number of scan flops in the design, so that P0 = M/(3 • F). The coefficient 3 represents one scan stimulus bit, one response bit, and one mask, or measure, bit that determines whether the response bit should be compared. For each unit increase in compression, you can load P0 additional patterns from the complete set into memory.
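A short sketch of equation 1 as reconstructed above; the design numbers (flop count, memory size, pattern count) are hypothetical:

```python
def patterns_without_compression(m_bits, f_flops):
    """P0 = M / (3F): each pattern consumes one stimulus, one response,
    and one mask bit per scan flop."""
    return m_bits / (3 * f_flops)

def required_compression(p, m_bits, f_flops):
    """Equation 1: x = P / P0 = 3*F*P / M."""
    return 3 * f_flops * p / m_bits

F = 2_000_000        # scan flops (hypothetical design)
M = 16 * 2**30       # 16 Gbit of tester vector memory (hypothetical)
P = 12_000           # complete ATPG pattern set (hypothetical)
print(f"P0 = {patterns_without_compression(M, F):,.0f} patterns fit uncompressed")
print(f"x  = {required_compression(P, M, F):.1f}X loads all {P:,} patterns")
```

With these numbers, about 2,863 patterns fit uncompressed, so roughly 4.2X compression is needed to load the full set.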

Somewhere between “no compression” and a high level of compression lies xC, which represents the level needed to fit the complete scan ATPG pattern set, PC, into the fixed amount of tester memory. The “TDVR phase” is defined as the range of compression that extends up to this compression level, xC = PC/P0. Increasing compression above this level has no economic benefit in terms of reducing the cost of escapes, because there are no additional patterns being loaded to improve quality.

Above xC, however, you do have potential cost savings from TATR, because the scan-chain lengths continue to decrease with higher compression. You can define compression higher than xC as the “TATR phase.”

In this phase, you can continue increasing compression up to λ, above which the area overhead cost of compression exceeds the benefit of decreasing test application time. The maximum TATR that is economically viable is the ratio of λ to xC.

In Figure 1, the TDVR phase extends to 4X compression. The TATR phase starts at 4X, and the maximum cost-effective compression is 20X, but this doesn't imply a 20X, or even a 16X, test time reduction. Instead, you get a 5X, or 80%, reduction in test time, which is exactly the ratio of the test cycle count at 4X compression to the count at 20X. The graph indicates that compression decreases the total costs of test by about 95%, with all but 4% of this reduction occurring in the TDVR phase.

To understand the disparity in cost reduction between TDVR and TATR, I’ll now examine the impact that compression has on the three component costs of test that are highly sensitive to compression level.

Impact of compression on cost of escapes

The following exponential approximates the convergence of fault coverage vs. pattern count using an ATPG tool, especially in the high-coverage region:

f(P) = (fC + Δ) • (1 − e^(−η • P))    (Equation 2)

where fC represents the maximum measured fault coverage of the complete pattern set, PC; Δ is the difference between fC and the maximum predicted fault coverage (the asymptote fC + Δ); and η is the exponential constant. The smaller Δ is, the larger η, and the faster the convergence.
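A sketch of this convergence model, using the form reconstructed above; the coverage, Δ, and pattern-count values are illustrative only:

```python
import math

def fault_coverage(p, f_c=0.99, delta=0.002, p_c=10_000):
    """f(P) = (f_c + delta) * (1 - exp(-eta * P)), with eta derived from
    the constraint that the complete set of p_c patterns reaches f_c."""
    eta = math.log((f_c + delta) / delta) / p_c
    return (f_c + delta) * (1.0 - math.exp(-eta * p))

for p in (1_000, 5_000, 10_000):
    print(f"{p:>6} patterns -> coverage {fault_coverage(p):.4%}")
```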

Figure 2. a) If λ is greater than xC, then you know there are cost savings to be gained by increasing compression to reduce tester time. b) Alternatively, if you find that λ is less than xC, you know that the most cost-effective compression strategy is to simply truncate the patterns.

Recall that compression in the TDVR phase is just the ratio of the pattern count, P, to the number of patterns you can load into tester memory without compression, P0. Therefore, if you substitute xP0 for P in the exponential formula, you can describe the fault coverage as a function of compression, and the exponential constant is simply scaled by P0.

Once you have fault coverage as a function of compression, you can describe the escape rate using the Williams and Brown formula (Ref. 5) that expresses escape rate as a function of fault coverage and yield. The cost of escapes, at the least, will be the cost to manufacture and test these escapes.
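The published Williams and Brown relation gives the defect (escape) level as DL = 1 − Y^(1−f); a minimal sketch with illustrative yield and coverage values:

```python
def escape_rate(y0, coverage):
    """Williams-Brown defect level: DL = 1 - Y0**(1 - f) (Ref. 5)."""
    return 1.0 - y0 ** (1.0 - coverage)

Y0 = 0.80   # illustrative manufacturing yield
for f in (0.90, 0.95, 0.99):
    print(f"coverage {f:.0%} -> escape rate {escape_rate(Y0, f):.4%}")
```

At 80% yield, raising coverage from 90% to 99% cuts the escape rate by roughly an order of magnitude, which is why loading more patterns pays off so strongly in the TDVR phase.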

Keep in mind, however, that the cost of escapes can actually be higher. The Carnegie Mellon cost model uses a parameter αESC to account for this escape-multiplier effect at any given stage of the test process. The higher the multiplier, the greater the cost of escapes across all compression levels.

As you increase compression, the cost of escapes drops off logarithmically as the additional patterns loaded into tester memory detect more faults, until all the patterns are loaded at xC. Above xC, the TDVR phase ends, and there is no further decrease in the cost of escapes; the curve is flat, as illustrated in Figure 2a.

Impact of compression on cost of test execution

Test execution time is constant during the TDVR phase, because every unit increase in compression loads another P0 patterns that must be tested, which exactly offsets any potential test time reduction. This test time is approximately T0, the time it takes to execute all P0 patterns on the tester without compression:

T0 = (P0 • F)/(C • fTEST)    (Equation 3)

where C is the number of scan I/O channels (or external scan chains) and fTEST is the tester scan-shift frequency. Above xC, tester time declines by a factor of xC/x, as shown in Figure 2a.
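Combining equation 3 as reconstructed above with this two-phase behavior, the following sketch (which continues the hypothetical numbers from the earlier example) shows tester time staying flat through the TDVR phase and then falling as xC/x:

```python
def t0_seconds(p0, f_flops, channels, f_test_hz):
    """Equation 3 as reconstructed: each uncompressed pattern shifts
    F/C cycles at the tester scan-shift frequency."""
    return p0 * f_flops / (channels * f_test_hz)

def test_time(x, x_c, t0):
    """Approximately T0 in the TDVR phase (x <= x_c); above x_c it
    declines by the factor x_c / x."""
    return t0 if x <= x_c else t0 * x_c / x

T0 = t0_seconds(2_863, 2_000_000, 8, 50e6)   # hypothetical: 8 channels, 50 MHz
for x in (2, 4, 10, 20):
    print(f"x = {x:>2} -> ~{test_time(x, x_c=4, t0=T0):.2f} s of scan time")
```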

Impact of compression on the silicon area overhead cost

Independent of the compression phase, the silicon area cost will continue to increase as you increase the number of scan chains for compression. A simple linear formula that represents the circuit die size as a function of compression (Ref. 3) can model this increase. You can measure the slope, γ, of this area increase empirically or use a ratio-of-gates approach, given by:

γ = (g • C)/G0    (Equation 4)

where g is the number of compression-circuit gates added per scan chain, G0 is the total number of gates in the design without compression, and C is again the number of scan I/O channels.

It’s useful to introduce a second-order area-scaling factor, ζ, to model the nonlinear area increase that can occur with higher compression. Even if the number of gates added per scan chain remains constant, wire-routing congestion at higher compression levels can increase the silicon area more than the linear formula predicts.
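A sketch of this area model; γ comes from the equation-4 estimate, while the gate counts and the ζ value are hypothetical:

```python
def area_overhead(x, gamma, zeta=0.0):
    """Fractional die-area increase at compression level x: the linear
    term gamma*x plus an optional second-order congestion term zeta*x**2."""
    return gamma * x + zeta * x * x

g, C, G0 = 60, 8, 5_000_000        # gates/chain, channels, total gates (hypothetical)
gamma = g * C / G0                 # equation 4: gamma = g*C/G0
for x in (4, 20, 100):
    print(f"x = {x:>3} -> area overhead {area_overhead(x, gamma, zeta=1e-5):.2%}")
```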

Note that higher values of γ and ζ tend to work against compression, lowering λ. The silicon cost is displayed in Figure 2a. As you approach 100X compression, the cost increases by about two orders of magnitude, as expected.

Finding the optimal compression level

You can apply this methodology and these equations to find the optimal compression level. Start by running ATPG to measure fault coverage as a function of pattern count. Next, derive the exponential constant, η, by curve-fitting f(P) in equation 2 to your data using an appropriate value of Δ. Finally, calculate the optimal compression level, λ, that minimizes the total cost:

CTEST = CESC + CEXEC + CSILICON

Once you know the optimal compression level, λ, you have insight into implementing the most cost-effective compression strategy. If you calculate λ and it is identical to xC, then you compress all the ATPG patterns knowing you have minimized the costs of test.
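Because the shapes of the three component curves are known, you can also locate λ with a simple numerical scan. The sketch below uses simplified placeholder cost curves and made-up coefficients; it illustrates the minimization only and does not reproduce the article's exact cost equations:

```python
import math

def total_test_cost(x, x_c, esc0, exec0, gamma, zeta):
    """Qualitative composite of the three compression-sensitive costs:
    escapes fall until x_c then flatten, execution is flat until x_c
    then declines as x_c/x, and silicon area rises with x."""
    c_esc = esc0 * math.exp(-min(x, x_c))      # falls during TDVR, flat after
    c_exec = exec0 if x <= x_c else exec0 * x_c / x
    c_sil = gamma * x + zeta * x * x
    return c_esc + c_exec + c_sil

levels = [n / 10 for n in range(10, 1001)]     # scan 1.0X .. 100.0X
costs = {x: total_test_cost(x, x_c=4, esc0=50.0, exec0=2.0,
                            gamma=0.01, zeta=0.0005) for x in levels}
lam = min(costs, key=costs.get)
print(f"optimal compression lambda ~ {lam:.1f}X at {costs[lam]:.3f} $/good die")
```

With these made-up coefficients the scan lands near 17X, well into the TATR phase; swapping in your own fitted component costs gives your design's λ.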

If you estimate that λ is greater than xC, then you know there are cost savings to be gained by increasing compression to reduce tester time. In this case, λ occurs where the cost derivatives for the silicon area overhead and the test execution time are equal, and you can calculate it exactly from equation 5 (Table 1 defines the parameters).

Say you need a compression level of xC = 4 to fit the complete pattern set into tester memory, and your calculated value of λ is 20, greater than xC. You'll want to increase compression to 20X to take full advantage of the cost savings from test time reduction. Note that the fault coverage and the escape multiplier don't factor in at all, because the cost of escapes is flat during the TATR phase and has no impact on λ when λ is greater than xC.

Alternatively, if you estimate that λ is less than xC, the most cost-effective strategy is to truncate the pattern set; here, λ occurs where the cost derivatives for the silicon area overhead and escapes are equal, and you can calculate it from equation 6. In the situation shown in Figure 2b, all the patterns, PC, could be loaded into tester memory by compressing to 103X, but compressing beyond λ = 47 just increases costs, so there is no economic benefit in compressing any higher.

Although the cost of escapes is sensitive to the maximum fault coverage, fC, the cost derivative always falls in the high-coverage region, so as long as fault coverage is high, λ is nearly independent of fC.

Pattern inflation


Figure 3. With pattern inflation, the compression level required to load any given number of patterns increases relative to the baseline condition without inflation.

So far, I have ignored the phenomenon of pattern inflation caused by compression. But you really can't disregard it, because the higher the compression level, the more patterns ATPG produces. Although linearity of pattern inflation is not a strict requirement for this economic model, in-house experiments on many designs have revealed that pattern inflation behaves linearly across a wide range of compression levels. You can describe the effects of pattern inflation using a parameter, ε, that represents the fractional increase in pattern count per unit increase in compression, so that the inflated pattern count at compression level x is PC • (1 + ε • x).

First, examine the effect pattern inflation has on TDVR. Remember that in the TDVR phase, the compression level is just the ratio of the number of patterns, P, that fit in memory at this level to the number of patterns, P0, that fit in memory without compression. It follows that for any pattern level, the compression level increases relative to the baseline condition without pattern inflation, as Figure 3 displays. Essentially, more compression is needed to load the same number of patterns into tester memory than before.

This implies that xC also increases with pattern inflation. For a design with λ in the TATR phase, a sufficiently high ε can push xC above the value of λ predicted by equation 5. Test time reduction is then no longer cost effective, and you will find it more beneficial to compress to the level xC or less. To determine how much to compress, solve equation 7 for x; its right-hand side is equation 6, the formula for λ ≤ xC that assumes zero pattern inflation. The magnitude of λ in the TDVR phase will be higher when you account for pattern inflation than equation 6 predicts. Note that it's easy to solve equation 7 by finding the intersection of the curves corresponding to each term.

Now, examine the effect pattern inflation has on TATR. Pattern inflation extends the TDVR phase. The effect of the increase in xC, however, is completely offset by a corresponding decrease in the rate of test time reduction (per unit increase in compression) due to the pattern-inflation term, 1 + εx. Therefore, while pattern inflation increases the execution cost, there is no net change in the magnitude of λ.

You can thus use a simple criterion to decide whether you even need to account for pattern inflation when determining the optimal level of compression: if λ computed with no pattern inflation is much larger than xC, then it will most likely remain larger than xC even after pattern inflation is factored in, and you need not measure ε. Otherwise, you need to extract the value of ε to find λ using equation 7. To do this, run ATPG on your design at several different compression levels. Figure 4 plots pattern count vs. compression for an industrial design implemented at five different compression levels, where each point corresponds to the same fault-coverage level.


Figure 4. This graph plots pattern count vs. compression for an industrial design implemented at five different compression levels, where each point corresponds to the same fault coverage level. Once you calculate the least-squares curve fit, ε is just the ratio of the slope of the line to its y-intercept.

Once you calculate the least-squares curve fit, ε is the ratio of the line’s slope to its y-intercept. Subsequently, use the y-intercept in the formulas instead of PC.
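A minimal least-squares sketch of this extraction; the five data points are hypothetical but follow the linear inflation model P(x) = PC • (1 + ε • x):

```python
def inflation_factor(xs, pattern_counts):
    """Fit pattern count vs. compression to a line; epsilon is the
    slope divided by the y-intercept, and the intercept estimates PC."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(pattern_counts) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, pattern_counts))
             / sum((x - mx) ** 2 for x in xs))
    intercept = my - slope * mx
    return slope / intercept, intercept

eps, p_c = inflation_factor([5, 10, 25, 50, 100],
                            [10_400, 10_800, 12_000, 14_000, 18_000])
print(f"epsilon ~ {eps:.4f} per unit compression; intercept ~ {p_c:,.0f} (use as PC)")
```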

In summary, most savings from compression arise from either TDVR or TATR. If you must compress your patterns to load them into memory, then TDVR will be the dominant factor contributing to savings and you can use even more compression to derive incremental savings from test time reduction. Conversely, if most of your patterns fit into memory without compression, then TATR savings can be significant, especially if you have few scan I/O channels.

But the silicon-area-overhead cost places a limit on the full benefits of compression. For any design, an optimal compression level, λ, maximizes profits by minimizing test costs. The optimal compression level is unique for each product and is a function of process, design, DFT and compression architecture, tester configuration, and cost infrastructure. Table 1 summarizes how these parameters influence xC and λ. You can use this table as a convenient guide on your way to discovering the most cost-effective compression strategy.


Table 1. Parameters affecting optimal compression level

                      xC          λ ≤ xC                     λ > xC
Process                           Y0                         Y0
Design                F           F, A0                      F, A0
DFT                   PC, ε       PC, ε, γ, ζ, η, fC         PC, γ, ζ, C
Tester                M           M                          fTEST
Cost infrastructure               αESC                       CS, RACT
Strategy                          Truncate patterns          Reduce test time

(Parameters in red have a positive correlation; those in blue have a negative correlation.)
αESC Escape-rate multiplier
γ Scan compression area-scaling factor
ε Pattern inflation factor for compression
ζ Second-order area scaling coefficient
η Exponential constant affecting fault-coverage convergence
A0 Die area without compression
C Number of external scan chains or scan I/O pairs
CS Silicon-area cost multiplier
F Number of scan flip flops
fC Fault coverage of complete pattern set
fTEST Tester scan-clock frequency
M Memory allocated for stimulus/response patterns
PC Scan ATPG pattern count before inflation
RACT Cost of active tester
Y0 Manufacturing yield





REFERENCES
  1. Wei, S., et al., “To DFT or not to DFT?” Proceedings of the International Test Conference 1997. pp. 557–566. www.ieee.org.

  2. Allsup, C., “The Economics of Implementing Scan Compression to Reduce Test Data Volume and Application Time,” Proceedings of the International Test Conference 2006, Lecture 2.2. www.ieee.org.

  3. Allsup, C., “Limits of test time reduction,” Test & Measurement World, June 2006. www.tmworld.com/2006_06.

  4. Allsup, C., “How much test compression is enough?” EE Times, February 20, 2006. www.eetimes.com.

  5. Williams, T.W. and N.C. Brown, “Defect Level as a Function of Fault Coverage,” IEEE Transactions on Computers, Vol. 30, No. 12, December 1981. pp. 987–988.
