
Design for manufacturing and yield

May 06, 2013

DFM at advanced nodes and its impact on design flows: a reality check
Manoj Chacko, Product Marketing Director, Custom IC and Sign Off, Cadence Design Systems

Manufacturing improvements via novel materials, processes, and new technologies aren’t keeping up with market demand for ever-shrinking feature dimensions, higher performance, and lower power. Software is now, and will remain, the key enabler as long as the gap between design and manufacturing continues to grow.

At 28 nm, the impact of manufacturing variability on performance, power consumption, and yield has become disproportionately larger and more complex. Software analysis is critical for effectively quantifying and mitigating the impact on both the physical integrity and parametric performance of the designs.

Physical DFM checks, especially lithography process check analysis, are run as the final step after DRC; litho analysis can also be run on blocks and post-route within the design flow. The value of litho checks is clearly evident at the beginning of a process technology ramp-up. At 28 nm, for better predictability of physical and parametric yield, lithography complexity has moved upstream into parasitic extraction, with changes in the range of multi-order effects, and into physical verification in the form of recommended DFM rules or litho yield-detractor patterns in the design rule manuals (Figure 2). At 20 nm, double-patterning technology adds yet another dimension to the impact on silicon printability and connectivity.

Figure 2 Lithography complexity has moved upstream to parasitic extraction.
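
To make the double-patterning connectivity issue concrete, mask decomposition is essentially a two-coloring problem: shapes closer than the same-mask spacing limit must land on different masks, and an odd cycle of such conflicts cannot be legally decomposed. The Python sketch below is only a minimal illustration of that check, working on a hypothetical conflict-graph input rather than any production decomposition engine:

from collections import deque

def decompose_two_masks(num_shapes, conflict_edges):
    """Assign each shape to mask 0 or 1 so that no conflict edge joins two
    shapes on the same mask; return None if no legal decomposition exists
    (an odd cycle in the conflict graph)."""
    adjacency = [[] for _ in range(num_shapes)]
    for a, b in conflict_edges:
        adjacency[a].append(b)
        adjacency[b].append(a)

    mask = [None] * num_shapes
    for start in range(num_shapes):
        if mask[start] is not None:
            continue
        mask[start] = 0
        queue = deque([start])
        while queue:
            shape = queue.popleft()
            for neighbor in adjacency[shape]:
                if mask[neighbor] is None:
                    mask[neighbor] = 1 - mask[shape]
                    queue.append(neighbor)
                elif mask[neighbor] == mask[shape]:
                    return None  # odd-cycle conflict: needs a layout change
    return mask

# Example: a ring of four spacing conflicts decomposes cleanly; a triangle
# of conflicts (odd cycle) does not.
print(decompose_two_masks(4, [(0, 1), (1, 2), (2, 3), (3, 0)]))
print(decompose_two_masks(3, [(0, 1), (1, 2), (2, 0)]))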

The increase in design density and the use of third-party IP bring additional challenges associated with CMP-induced metal thickness variation. Model-based, rather than rule-based, CMP analysis is key to identifying thickness variations across the complete metal stack. Also, as more design teams integrate third-party IP, metal-fill thickness variations around the border of the IP are increasing. The IP designer follows the design rules and density requirements, but that alone does not yield blocks that can be integrated into different SoC environments without iterations to address CMP density issues.
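
As a point of reference for the rule-based versus model-based distinction above, a rule-based density check reduces to flagging layout windows whose metal density falls outside a target range; model-based CMP analysis goes further and simulates thickness across the full metal stack. The sketch below covers only the simple windowed-density step, assuming the layout has been abstracted into a hypothetical grid of per-tile metal density values:

def density_hotspots(tile_density, window=4, low=0.2, high=0.8):
    """Slide a window x window region over a 2-D grid of per-tile metal
    density (0.0-1.0) and report windows whose average density falls
    outside the [low, high] range that fill rules try to maintain."""
    rows, cols = len(tile_density), len(tile_density[0])
    hotspots = []
    for r in range(rows - window + 1):
        for c in range(cols - window + 1):
            total = sum(tile_density[r + i][c + j]
                        for i in range(window) for j in range(window))
            avg = total / (window * window)
            if avg < low or avg > high:
                hotspots.append((r, c, round(avg, 3)))
    return hotspots

# Example: a sparse corner of the grid shows up as low-density windows.
grid = [[0.05] * 8 for _ in range(8)]
for r in range(4, 8):
    for c in range(4, 8):
        grid[r][c] = 0.55
print(density_hotspots(grid))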

The impact of layout-dependent effects (LDE) variability on the design is well acknowledged. LDE variability comes primarily from manufacturing challenges, lithography effects, CMP, and stress, which significantly affect device behavior. LDE cannot be analyzed by considering devices in isolation, and because designers have been unable to qualify and quantify the variability impact at specific transistors, various methods are used to mitigate LDE issues. A common method is to over-margin the transistors with dummies to minimize the impact of context problems on device performance. Designers need software to help quantify delay and leakage due to LDE, improve their traditional methods, and locally optimize the devices deviating from the specifications (Figure 3). Timing and power variability are becoming more significant at each new process node, affecting margins, silicon utilization, silicon failure rates, and timing closure.

Figure 3 Software helps quantify problems due to LDE and to locally optimize the devices deviating from the specifications.
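
As a rough illustration of the kind of quantification Figure 3 describes, the sketch below estimates a per-device delay shift from context parameters such as well-proximity distance and diffusion lengths, then flags devices whose shift exceeds the margin budgeted for context effects. The sensitivity model and coefficients are invented placeholders, not a foundry LDE model:

def lde_delay_shift(well_proximity_um, sa_um, sb_um, k_wpe=0.02, k_lod=0.01):
    """Toy first-order model: the delay shift (as a fraction of nominal)
    grows as the device sits closer to the well edge (WPE) and as the
    diffusion lengths SA/SB shrink (LOD stress). Coefficients are
    illustrative placeholders only."""
    wpe_term = k_wpe / max(well_proximity_um, 0.1)
    lod_term = k_lod * (1.0 / max(sa_um, 0.1) + 1.0 / max(sb_um, 0.1))
    return wpe_term + lod_term

def flag_devices(devices, margin=0.05):
    """Return devices whose estimated LDE delay shift exceeds the margin
    budgeted for context effects, i.e. candidates for local optimization."""
    return [name for name, ctx in devices.items()
            if lde_delay_shift(**ctx) > margin]

devices = {
    "M1": {"well_proximity_um": 2.0, "sa_um": 0.5, "sb_um": 0.5},
    "M2": {"well_proximity_um": 0.2, "sa_um": 0.15, "sb_um": 0.15},
}
print(flag_devices(devices))  # M2 sits near the well edge and is flagged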

Consequently, advanced-node designers must optimize chip manufacturability along with area, speed, and power. This trend will increase exponentially as technology advances to 14 nm.

Identify critical design features using diagnosis-driven yield analysis
Geir Eide, Product Marketing Manager, Silicon Test Solutions Group, Mentor Graphics

During the transition to the 28-nm node, several leading semiconductor companies struggled with supply: They couldn’t ship enough of their products. Part of the problem was lower-than-expected yield. This situation illustrates how traditional yield learning methods are running out of steam, largely because of the dramatic increase in the number and complexity of design-sensitive defects and longer failure analysis cycle times. These factors have forced fabless semiconductor companies to arm themselves with new technologies such as diagnosis-driven yield analysis (DDYA), which can rapidly identify the root cause of yield loss and effectively separate design- and process-oriented yield loss.

Software-based diagnosis of test failures is an established method for localizing defects during failure analysis for digital semiconductor devices. Diagnosis software determines the defect type and location for each failing device based on the design description, scan test patterns, and tester fail data. Using statistical analysis, diagnosis results from a number of failing devices can be used to effectively determine the underlying root causes.

The primary challenge for yield analysis is dealing with the ambiguity in the results. For example, more than one location could explain the defective behavior, and each suspect location often has multiple possible root causes associated with it. To better derive the underlying root causes represented in a population of failing devices from test data alone, you need to apply machine learning and design statistics, such as the tested critical area for each layer and the total number of gates of any given type tested [1].
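
A minimal sketch of this statistical aggregation is shown below. It assumes only that each failing device yields a list of suspect root causes and that a design statistic (such as tested critical area) is available per root cause; the even splitting and simple normalization here are deliberately simplified stand-ins for the machine-learning methods referenced in the text:

from collections import defaultdict

def estimate_root_causes(diagnoses, critical_area):
    """diagnoses: one list of suspect root causes per failing device,
    e.g. [["M2_open", "M3_bridge"], ["M2_open"]].
    critical_area: tested critical area (or gate count) per root cause,
    used to normalize raw suspect counts into a defect-density-like score."""
    counts = defaultdict(float)
    for suspects in diagnoses:
        weight = 1.0 / len(suspects)      # split ambiguity evenly
        for cause in suspects:
            counts[cause] += weight
    scores = {cause: counts[cause] / critical_area[cause] for cause in counts}
    total = sum(scores.values())
    return {cause: score / total for cause, score in scores.items()}

diagnoses = [["M2_open", "M3_bridge"], ["M2_open"], ["M3_bridge", "via2_open"]]
critical_area = {"M2_open": 4.0, "M3_bridge": 8.0, "via2_open": 1.0}
print(estimate_root_causes(diagnoses, critical_area))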

Another way to expand the scope of DDYA is to include data from DFM analysis (Figure 4). One key motivation behind this approach is to prove that a defect found in failure analysis is a systematic critical feature, and then to learn what about that feature relates to the defect’s rate of occurrence. Without a DDYA methodology that can automatically incorporate DFM information, you would need a team of experts and a lot of experimentation to accomplish this. However, by first identifying all locations in a design with a suspected feature through DFM analysis, any diagnosis results (that is, actual silicon defects) that overlap these locations can easily be identified and analyzed to determine whether the correlation also implies causation. A second motivation behind this approach is to determine whether a potential design fix could cure the problem. By identifying design locations that contain the planned fix, a similar correlation can be performed before actually implementing the fix, and the failure rates can be combined [2].

Figure 4 DFM analysis meets diagnosis-driven yield analysis in new methodologies and software tools such as these from Mentor Graphics.
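
A hedged sketch of the overlap step described above: given the set of layout locations a DFM tool flags for a suspected feature and the set of locations where diagnosis places actual defects, comparing the failure rate inside the flagged set with the rate elsewhere gives a first read on whether the feature is systematic. The location identifiers and counts below are hypothetical:

def feature_fail_rates(flagged_locations, defect_locations, total_locations):
    """Compare the defect rate at DFM-flagged locations against the rate
    everywhere else. flagged_locations and defect_locations are sets of
    layout-location identifiers; total_locations is the number of
    comparable locations in the design."""
    flagged_hits = len(flagged_locations & defect_locations)
    other_hits = len(defect_locations - flagged_locations)
    flagged_rate = flagged_hits / max(len(flagged_locations), 1)
    other_rate = other_hits / max(total_locations - len(flagged_locations), 1)
    return flagged_rate, other_rate

flagged = {"loc_017", "loc_042", "loc_101", "loc_230"}
defects = {"loc_042", "loc_230", "loc_555"}
flagged_rate, other_rate = feature_fail_rates(flagged, defects, 10000)
print(flagged_rate, other_rate)  # a much higher flagged rate points to a systematic feature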

Diagnosis-driven yield analysis appears particularly promising for the 20- and 16-nm nodes in spite of the inherent limitations of immersion lithography.

