Move ICs from defects per million to defects per billion

December 15, 2016

Defective Parts Per Million (DPPM) is one of the key metrics used to measure quality in many semiconductor segments. With electronics becoming an ever-larger part of everyday life, from wearable electronics to semi-autonomous vehicles, there's increasing pressure to improve quality across all semiconductor market segments. For mission-critical segments such as automotive and medical, market forces are driving improvements in quality into the Defective Parts Per Billion (DPPB) range.

The math behind the drive to DPPB levels follows from the number of semiconductor devices that reside in modern electronics. Consider a premium vehicle that contains more than 7,000 semiconductor devices across its various electronic systems. If you assume a DPPM rate of 1 for all the semiconductor devices in that vehicle, it equates to seven failures for every 1,000 cars.

This may not seem like a large number, but for a car manufacturer that sells two million premium vehicles a year, it represents a failure rate of more than one per hour, every day of the year. With the increasing amount of vehicle electronics, one of these semiconductor-related failures could occur in anything ranging from an infotainment system not displaying a navigation map to the anti-lock braking system failing. Some would be irritating; others could be life-threatening. That's the reason behind the desire to improve quality levels by driving the defect rate below 1 part per million.
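The arithmetic above can be checked with a quick back-of-the-envelope calculation (the figures are the article's; the variable names are my own):

```python
# Illustrative model of the DPPM math: 7,000 devices per vehicle at
# 1 DPPM, scaled to a fleet of two million vehicles per year.
DEVICES_PER_VEHICLE = 7_000
DPPM = 1                      # 1 defective part per million
VEHICLES_PER_YEAR = 2_000_000
HOURS_PER_YEAR = 365 * 24

defects_per_vehicle = DEVICES_PER_VEHICLE * DPPM / 1_000_000
failures_per_1000_cars = defects_per_vehicle * 1_000
failures_per_year = defects_per_vehicle * VEHICLES_PER_YEAR
failures_per_hour = failures_per_year / HOURS_PER_YEAR

print(failures_per_1000_cars)        # 7.0 failures per 1,000 cars
print(round(failures_per_hour, 2))   # 1.6 failures per hour
```

At 14,000 field failures per year, that works out to roughly 1.6 per hour, matching the "more than one per hour" figure.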

How big is "big data?"

For a complex electronic system such as an automotive ECU (electronic control unit), a multi-chip package (MCP) containing multiple semiconductor dice may be used. Consider a typical MCP with five dice. With each die going through ~1.2 wafer-sort operations and ~1.2 iterations per operation, each collecting ~3,000 parametric measurements, plus ~1,000 WAT or eTest measurements and ~3,000 final-test measurements, that MCP could generate ~25,000 data points. For a device with an annual run rate of one million units, that results in 25 billion data points. Performing a bi-variate or multi-variate analysis across that many data points for a single device type requires significant data-analytics performance.

For the semiconductor industry, achieving this next level of device quality will require the automotive value chain to put in place a large-scale analytics flow that can automatically analyze manufacturing data in real time to detect quality issues on the manufacturing floor (see "How big is big data?"). Manufacturing data will also need to be integrated with all Return Material Authorizations (RMAs) so that known failures can be correlated back to the source material, letting manufacturers screen out future devices with similar defective characteristics.

Currently, there are several methods used to improve quality in the overall manufacturing test process: Data feed forward, bi-variate and multi-variate analysis, and quality indexing. All of these methods are currently in place at semiconductor companies that deliver products to quality and safety-sensitive market segments such as automotive, medical, and data servers. Not only do these methods lower DPPM rates, they can reduce the number of expensive test steps such as burn-in and system-level test.

Data Feed Forward
Data feed forward takes data collected at any step in the manufacturing flow and makes it available to any downstream test insertion, helping managers make informed decisions. By analyzing manufacturing data, manufacturing and quality engineers can recall results from any test insertion, such as wafer sort, final test, or system-level test. Operations can compare those results in real time against the device currently being tested to check for variations in test results that indicate the chip should be dispositioned differently.
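A minimal sketch of such a feed-forward check might look like the following. The data structure, parameter, and threshold are hypothetical; real flows recall results from a manufacturing database keyed by unit or die ID:

```python
# Hypothetical feed-forward check: per-unit wafer-sort readings are
# recalled at final test and compared against the live measurement.
WAFER_SORT_RESULTS = {            # unit_id -> leakage current at sort (mA)
    "U001": 1.02, "U002": 0.98, "U003": 1.05,
}
MAX_DRIFT = 0.10  # allowed |final - sort| delta (assumed limit)

def disposition(unit_id: str, final_test_value: float) -> str:
    """Flag units whose final-test reading drifts from the sort reading."""
    sort_value = WAFER_SORT_RESULTS[unit_id]
    drift = abs(final_test_value - sort_value)
    return "review" if drift > MAX_DRIFT else "pass"

print(disposition("U001", 1.04))  # small drift -> "pass"
print(disposition("U003", 1.30))  # large drift -> "review"
```

A unit that passes both insertions in isolation can still be flagged here, because the rule looks at the *change* between insertions rather than each reading alone.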

Bi-variate and multi-variate Analysis
Bi-variate and multi-variate analysis looks at two or more test results to help engineers find the empirical relationships between those tests. Tests that correlate strongly can then be used to identify "outliers" within the population. This capability is critical to improving quality for many market segments because bi-variate and multi-variate outliers are typically devices that can't be screened out by normal test programs. These outliers correlate strongly with devices that fail prematurely in the field and come back as RMAs.

Once you identify and confirm a bi-variate/multi-variate correlation, you can evaluate it for every device in production using automated rules. When an outlier is found, the part can be re-binned accordingly. Figure 1 shows an outlier in a data set.

Figure 1. Data analysis can identify outliers based on two or more test parameters.
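A simple way to sketch this kind of bi-variate screen is to fit the empirical relationship between two correlated tests and flag units whose residual sits far outside the population spread. The data, tests, and threshold below are illustrative, not from the article:

```python
import numpy as np

# Synthetic population: two strongly correlated parametric tests, plus
# one injected unit whose individual readings are in-spec but whose
# *pair* of readings violates the empirical relationship.
rng = np.random.default_rng(0)
test_a = rng.normal(100.0, 2.0, 200)               # e.g. supply current
test_b = 0.5 * test_a + rng.normal(0.0, 0.3, 200)  # correlated test
test_a = np.append(test_a, 100.0)                  # outlier unit
test_b = np.append(test_b, 56.0)                   # expected ~50 here

# Fit the relationship, then flag large standardized residuals.
slope, intercept = np.polyfit(test_a, test_b, 1)
residuals = test_b - (slope * test_a + intercept)
z = (residuals - residuals.mean()) / residuals.std()
outliers = np.flatnonzero(np.abs(z) > 4.0)         # re-bin these units
print(outliers)
```

Note that a single-parameter limit on either test would pass this unit; only the joint view catches it, which is the point of the technique.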

Multivariate analysis can be performed using a principal component analysis (PCA) approach. PCA reduces the number of parametric tests by aggregating them into Principal Components (PCs), with the top PCs typically capturing a large percentage of the variation. The plots in Figure 2 show the variance of each PC and the cumulative contribution of those PCs to the total variance. Using PCA, engineers can readily identify the major contributors to multivariate outliers within an otherwise good population.

Figure 2. Data variance by Principal Component (PC), and the cumulative effects of those PCs to the total variance.
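The variance-by-PC view in Figure 2 can be sketched with a few lines of numpy. The synthetic data here assumes eight parametric tests driven by two underlying physical factors, which is my illustration rather than anything from the article:

```python
import numpy as np

# PCA via SVD: reduce correlated parametric tests to principal
# components and report each PC's share of the total variance.
rng = np.random.default_rng(1)
latent = rng.normal(0.0, 1.0, (500, 2))    # two underlying factors (assumed)
mixing = rng.normal(0.0, 1.0, (2, 8))
tests = latent @ mixing + rng.normal(0.0, 0.05, (500, 8))  # 8 correlated tests

centered = tests - tests.mean(axis=0)
_, singular_values, _ = np.linalg.svd(centered, full_matrices=False)
explained = singular_values**2 / np.sum(singular_values**2)

print(np.round(explained, 3))   # variance share per PC, largest first
print(explained[:2].sum())      # top 2 PCs carry nearly all the variance
```

Because only two factors drive the eight tests, the first two PCs dominate the cumulative-variance curve, which is the pattern Figure 2 illustrates.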
