Design-centric yield management
By Sagar A. Kekare, Synopsys - March 12, 2009
The rapid price erosion of consumer electronics devices is a dominant market force shaping today’s IC industry. Equally important is the progressively detrimental impact from the design-process interactions during IC production at sub-130 nm technology nodes. Combined, these forces have caused IC vendors to re-evaluate their yield learning strategies.
In the race to market, IC vendors have few remaining avenues for claiming a first-to-market advantage. Besides reaching the market early, they must also meet demand with a reasonably stable supply chain. In the early stages of a production ramp, the relationship between parametric data from wafer acceptance tests (WAT) and yield is not well understood. Because wafers are accepted from the foundry as good on the basis of WAT results alone, it is critical to understand which test programs to use and how to set appropriate pass-fail specifications for die-level testing. Typically, IC vendors start the ramp with a wide set of test programs and look for ways to remove redundancy as yield learning progresses.
For technology nodes above 130 nm, functional testing combined with in-line parametric data was an adequate way to track yield learning. Today, each functional failure bin correlates to many physical failure mechanisms, which are in turn triggered by interactions between design and process marginalities. As a result, localizing a defect using functional testing alone has become nearly impossible. Hence, the sub-130 nm nodes show a steady rise in the adoption of structural testing using Design for Test (DFT) and Automatic Test Pattern Generation (ATPG) methodologies. Diagnosis of structural test data promised to localize a defect to a specific cell instance or net. The expected outcome was rapid identification of critical segments of logic in the circuit, enabling equally rapid physical localization of the failure and subsequent correction, either through process tuning or design re-spins.
The subtle yet cumulative interaction of design and process marginalities means that each fault mechanism targeted by ATPG techniques can be activated by several different patterning- or material-related non-idealities on the wafer. Capturing structural failures in the IC logic therefore does not necessarily point to a single physical failure mechanism. In practice, the diagnostic methods return a relatively large set of candidates, each of which may be responsible for the failure with varying probability. Deploying this methodology, while better than functional testing-based methods, still does not pinpoint a specific root cause when isolating the physical failure mechanism for corrective action; rather, it points to several probable causes.
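A common way to make such candidate sets actionable is to aggregate the per-die probabilities across the failing population: a candidate that keeps appearing with high probability on many dies is a stronger hint of a systematic mechanism than any single diagnosis report. The sketch below assumes a simplified diagnosis output format; net names, probabilities, and the scoring scheme are all hypothetical.

```python
from collections import defaultdict

# Hypothetical diagnosis output: for each failing die, a list of
# (candidate_net, probability) pairs reported by the diagnosis tool.
diagnosis_reports = {
    "die_01": [("net_A", 0.6), ("net_B", 0.3), ("net_C", 0.1)],
    "die_02": [("net_A", 0.5), ("net_D", 0.5)],
    "die_03": [("net_B", 0.7), ("net_A", 0.3)],
}

def rank_candidates(reports):
    """Sum candidate probabilities across all failing dies to
    prioritize likely systematic offenders for physical failure
    analysis. Returns (net, score) pairs, highest score first."""
    scores = defaultdict(float)
    for candidates in reports.values():
        for net, prob in candidates:
            scores[net] += prob
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

ranked = rank_candidates(diagnosis_reports)
```

Here `net_A` accumulates the most evidence across dies, so it would be sent to physical failure analysis first even though no single die's diagnosis was conclusive on its own.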
As IC vendors increasingly converge on DFT and ATPG methodologies, the very differentiation promised by structural testing becomes an equalizer. Differentiation is now shifting to the vendors that can drive rapid, prioritized analysis of large diagnosis datasets toward corrective action, extracting the maximum usable information from their first-silicon test output. This would allow them to address more issues with fewer re-spins, and thus claim the early market position. They would also benefit from a chip supply line fully capable of meeting market demand at the top of the pricing curve. As a side benefit, they would be able to reduce test costs by removing the redundant test portions.
Leading IC vendors have recognized this change in the nature of yield learning. Some have started building the basic infrastructure internally to help them navigate it. But the challenges are significant. The design-process interactions that manifest themselves as test fails are highly complex and subtle, and the analysis required to distill them into corrective action is a hugely complex undertaking in itself. First, it requires seamless exchange of information among the domains of design, manufacturing, and test. Second, it must enable users not only to connect and integrate data from these domains, but also to cross-correlate them and analyze the results in a statistically valid manner. Visualizing the logical failures in relation to the electrical circuitry and then relating them to the physical patterns on the wafer is a formidable challenge. Hence, an extremely wide choice of frames of reference for failure visualization, and the ability to move among them with complete back-tracking, become necessary components of such an analysis.
A typical use case for this type of analysis system would be first-silicon characterization. With diagnosis data from several wafers' worth of dies, the first task would be to separate the failing dies into systematic and random failures. Failing cells and nets would need to be examined for systematic failure mechanisms. Here, the failing cell locations need to be correlated to known sensitivities from signoff steps in the design flow, such as near-critical timing sensitivity, a possible bridging fault, or a lithographic marginality in a cell identified by model-based physical verification. The correlations found would then need to be statistically confirmed and prioritized for physical failure analysis. The remaining failing dies would be treated as affected by random mechanisms. A composite die-stack of all the failing cell and net candidates would be correlated to critical area analysis (CAA) data as well as minimum design-rule flags. The failures spatially correlated to these known hotspots would then be separated out for further analysis of random-defect-based failures. Being able to exercise such a complex analysis flow without hopping across several different point tools would be a tremendous time saver. This contiguous and continuous set of analytical and visualization capabilities would then become the ultimate means for IC vendors to differentiate themselves in an increasingly challenging industry environment.
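The triage steps above can be sketched as a simple classification pass. This is a minimal illustration under heavy assumptions: in a real flow, the inputs would come from diagnosis, timing-signoff, litho-verification, and CAA databases, and the matching would be statistical rather than exact lookups. All cell names, coordinates, and hotspot tables here are hypothetical.

```python
# Per failing die: the top diagnosis candidate cell and its (x, y) location.
failing_dies = [
    {"die": "d1", "cell": "NAND2_17", "xy": (12, 40)},
    {"die": "d2", "cell": "NAND2_17", "xy": (12, 40)},
    {"die": "d3", "cell": "INV_03",   "xy": (55, 8)},
    {"die": "d4", "cell": "DFF_91",   "xy": (3, 77)},
]

# Known design sensitivities from signoff (e.g. litho-marginality or
# near-critical-timing flags), keyed by cell instance.
design_hotspots = {"NAND2_17": "litho_marginality", "MUX_22": "timing"}

# CAA-predicted defect-prone locations, for random-defect correlation.
caa_hotspots = {(3, 77)}

def triage(dies, hotspots, caa):
    """Split failing dies into suspected-systematic (matching a design
    sensitivity), random-defect (matching a CAA hotspot), and
    unexplained buckets for further analysis."""
    systematic, random_caa, unexplained = [], [], []
    for d in dies:
        if d["cell"] in hotspots:
            systematic.append((d["die"], hotspots[d["cell"]]))
        elif d["xy"] in caa:
            random_caa.append(d["die"])
        else:
            unexplained.append(d["die"])
    return systematic, random_caa, unexplained

systematic, random_caa, unexplained = triage(
    failing_dies, design_hotspots, caa_hotspots)
```

In this toy run, the two `NAND2_17` failures land in the systematic bucket for prioritized physical failure analysis, the die at a CAA hotspot is set aside as a likely random defect, and the remainder stays unexplained, pending further correlation.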