Design for test or suffer the consequences
By Gabe Moretti, Technical Editor - January 23, 2003
Two of the most costly functions that engineering teams developing electronic products perform are verification and test. On the surface, the two functions seem the same, but dig a little deeper, and you'll quickly find that they are related but distinct. Verification aims to debug the design and ensure that it meets its specification. Testing detects faults that result from manufacturing problems. You could verify a completed design with the same vectors developed to test it, but not during development. Engineers must develop a number of ad hoc test platforms and vectors that are unsuitable for final test but indispensable for detecting design errors.
A test suite detecting a design error indicates that verification was done improperly. Fixing the error at this stage is costly and could even lead to canceling the product introduction. Although the degree of difficulty varies, you can always debug your design. Engineers have at their disposal a vast array of tools allowing them to simulate the design at various levels of abstraction, hardware accelerators and emulators, and even languages for testbench development. During development, you can always add temporary structures to your design that simplify verification. Formal-verification techniques are also becoming powerful tools for design verification (Reference 1). Yet, despite this impressive array of tools, Collett International Research Inc has found that 51% of all chip designs fail the first time they are fabricated. And 74% of the failures are attributable to functional errors (Reference 2).
Test engineers must deal with the topology of the finished product to identify faults. This process is difficult, because the engineers can take advantage of only the pins available on the device and must deal with the operating-speed and memory-size limitations of the ATE (automatic-test equipment) used in manufacturing. (For an up-to-date report on the state of the ATE industry, see Reference 3.) Modern fabrication processes make millions of transistors available to designers to implement a large number of functions on one IC, but mechanical constraints limit the number of pins in each package. It is therefore impossible for test engineers to identify every fault in a device without some added structures. Fortunately, the number of available transistors is such that adding these structures is rarely a problem. Engineers need to address the test issue through prevention: anticipating the problems and building supporting structures that allow access to internal signals. The EDA industry has developed two classes of products to address the test issue: One is ATPG (automatic-test-pattern generation), which provides test vectors by looking at the circuit network and functions; the other is DFT (design for test), which inserts logic structures within the chip to support manufacturing test of the device. The sidebar "Scan testing and logic BIST: a short comparison" provides an overview of DFT methods. Designers often employ both methods in the same chip, because the characteristics of each method are complementary.
The evolution of testing
Until a few years ago, most ICs were tested using functional tests. Engineers developed test vectors to exercise all of the functional characteristics of the specific IC and used those tests in manufacturing to validate each unit. But ICs are now too complex, and the collection of functional test vectors is much too large for even the most up-to-date ATE. Engineers realized that an IC comprises a collection of independent or quasi-independent functional blocks and that it is more efficient to test each block separately. The method is called structural testing, because it divides the device into its functional components and independently tests each structure. To accomplish this task, you must be able to input the appropriate vectors to the block and obtain the resulting output. Because the block may not have external inputs and outputs, you must insert new structures into the IC.
Structural tests try to verify that all structures—usually gates and storage elements, such as flip-flops and latches—and most of the interconnects between them are working properly. Using a gate-level representation of the design netlist, ATPG algorithmically calculates stimulus for controllable nodes so that you can test internal design structures. The growing complexity and size of designs and the large number of memory cells have made "controlling" and "observing" each structure more difficult, leading to the development of two parallel methodologies: "scan" and "random testing."
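To make the notions of controllability and observability concrete, here is a minimal Python sketch that searches for a pattern detecting a single stuck-at fault on a toy two-gate circuit. The netlist, node name, and brute-force search are purely illustrative; production ATPG tools rely on path-sensitization algorithms such as the D-algorithm or PODEM rather than exhaustive enumeration.

```python
# A toy stuck-at-fault "ATPG" example: find an input vector whose fault-free
# output differs from the output with internal node n1 stuck at 0.
from itertools import product

def evaluate(a, b, c, stuck_n1=None):
    """Evaluate y = (a AND b) OR c, optionally forcing node n1 = a AND b to a stuck value."""
    n1 = a & b
    if stuck_n1 is not None:
        n1 = stuck_n1                 # inject the stuck-at fault on node n1
    return n1 | c

for a, b, c in product([0, 1], repeat=3):
    if evaluate(a, b, c) != evaluate(a, b, c, stuck_n1=0):
        print(f"a={a} b={b} c={c} detects n1 stuck-at-0")
        break
```

The pattern the search finds (a=1, b=1, c=0) does two things at once: it controls node n1 to the value opposite the fault and propagates the resulting difference to an observable output, which is precisely what controlling and observing a structure mean.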
Most of today's structural-logic-test methodologies are based on a full-scan scheme. In full scan, all storage elements connect together into one or more scan chains. In test mode, the ATE serially scans data into and out of these storage elements, providing the test engineers with full controllability and observability. Scan test consists of scanning in the pattern data (loading each scan chain), applying one or more functional clock cycles, and then scanning out the captured response data. Full-scan methodology essentially transforms any sequential design into a combinational one. Test engineers store the ATPG patterns in the tester memory. The ATE inputs the vectors into the circuit using a number of parallel scan chains. Factors such as availability of chip I/O pins, available tester channels, and on-chip routing congestion caused by chaining storage elements in test mode limit the number of scan chains.
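As a rough illustration of the scan-in, capture, scan-out sequence, the following Python sketch models one scan-test cycle on a toy design with a single four-flip-flop chain. The chain length, pattern, and combinational-logic function are assumptions made for the example, not details of any particular flow.

```python
# One scan-test cycle on a toy four-flip-flop design.
def scan_cycle(chain, pattern, comb_logic):
    """Shift a pattern into the scan chain, apply one capture clock, shift out the response."""
    n = len(chain)
    for bit in pattern:                      # 1. scan in: serially load the pattern
        chain = [bit] + chain[:-1]
    chain = comb_logic(chain)                # 2. capture: one functional clock loads the D inputs
    response = []
    for _ in range(n):                       # 3. scan out the captured response
        response.append(chain[-1])
        chain = [0] + chain[:-1]
    return response

def comb_logic(ffs):
    """Stand-in for the design's combinational logic feeding the flip-flop D inputs."""
    a, b, c, d = ffs
    return [a ^ b, b & c, c | d, a & d]

print(scan_cycle([0, 0, 0, 0], [1, 0, 1, 1], comb_logic))
```

Because every flip-flop is directly loadable and observable through the chain, pattern generation reduces to a problem on the combinational logic alone, which is why full scan effectively turns a sequential design into a combinational one for test purposes.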
Random-test methodology applies random-data values to the design-input nodes, aiming to exercise each structure in the design (in effect, randomly controlling and observing them). This method works well for regular structures, such as memories. You can improve the quality of pseudorandom patterns with the custom design of LFSRs (linear-feedback shift registers) and phase shifters. You can generate desired segments of the total available random-pattern space (2^n patterns for an n-bit LFSR) by loading the LFSR with different seeds to start pattern generation. Test engineers use internal register banks to break large designs into smaller pieces in test mode. Some designs are unsuitable for testing using random patterns. For these designs, you need to insert test points to improve controllability and observability of internal structures. Engineers use an MISR (multiple-input signature register) to capture the responses to the random patterns. The BIST (built-in self-test) methodology for logic was born when the LFSR and MISR became part of the chip itself. Commercially available logic-BIST options today go one step further and use a full-scan implementation to apply random pattern data to each logic structure. This architecture allows for many more parallel scan chains, because you no longer have to route them to chip pins.
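The following Python sketch shows the LFSR/MISR mechanism in miniature: a 16-bit LFSR generates pseudorandom stimulus, and a MISR folds the circuit responses into a compact signature. The tap positions, seed, and stand-in response logic are assumptions made for the example and are not parameters of any commercial logic-BIST product.

```python
# Pseudorandom pattern generation (LFSR) and response compaction (MISR), sketched.
def lfsr_step(state, taps=(16, 14, 13, 11), width=16):
    """Advance a Fibonacci-style LFSR one step; returns (new_state, output_bit)."""
    fb = 0
    for t in taps:
        fb ^= (state >> (t - 1)) & 1        # feedback is the XOR of the tapped bits
    out = state & 1                         # bit shifted out drives the circuit under test
    return (state >> 1) | (fb << (width - 1)), out

def misr_step(sig, response_bits, taps=(16, 14, 13, 11), width=16):
    """Fold one word of circuit responses into the running MISR signature."""
    fb = 0
    for t in taps:
        fb ^= (sig >> (t - 1)) & 1
    sig = (sig >> 1) | (fb << (width - 1))
    for i, b in enumerate(response_bits):   # the "multiple-input" part: XOR responses into stages
        sig ^= (b & 1) << (i % width)
    return sig

state, signature = 0xACE1, 0                # the seed selects a segment of the pattern space
for _ in range(1000):
    state, bit = lfsr_step(state)
    response = [bit ^ 1, bit, bit]          # stand-in for the circuit-under-test outputs
    signature = misr_step(signature, response)
print(hex(signature))                        # compare against the known-good signature
```

In a real logic-BIST implementation, the final signature is compared against the known-good value computed by simulating the fault-free design; any mismatch flags the device as faulty.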
Products that support DFT
Most tools that insert either scan or BIST structures in a design work from a gate-level netlist. Engineers design the functional circuit at RTL (register-transfer level). They then use a synthesis product to generate the equivalent gate-level representation. Using this netlist, DFT tools add the required logic to support either scan or BIST, depending on the nature of the design. It is important that test engineers and logic designers work together to achieve DFT closure: the ability to meet all test requirements during every phase of the design and thus avoid costly rework when a portion of the design turns out to be untestable. To support such a methodology, DFT tools must work well within the synthesis flow, so that logic designers can readily evaluate the impact of DFT on the design.
You can use Synopsys' latest offering, DFT Compiler, in conjunction with the company's TetraMAX ATPG tool. DFT Compiler works with Design Compiler and performs a one-pass scan synthesis. Mentor Graphics has played a leading role in DFT for many years. It markets the FastScan tool suite for testability analysis, which supports a number of fault models to help designers ensure that their product meets manufacturing test requirements. Engineers can use BSDArchitect to generate IEEE 1149.1-compliant boundary-scan circuitry, MBISTArchitect to generate BIST structures for embedded memories, or LBISTArchitect for BIST structures that embed test vectors within the device, thus shortening test time and reducing the amount of memory that the ATE requires. Cadence Design Systems has chosen to integrate Mentor's FastScan product in its Envisia PKS flow instead of developing its own DFT product. Because the companies' strategies are complementary, this integration makes sense. Mentor aims to provide best-of-class point tools, and Cadence offers an entire design environment that includes its own products as well as some from third-party EDA companies. Magma Design Automation has recently added DFT capabilities to its Blast Chip RTL-to-GDSII design system. It integrates DFT-analysis and -repair capabilities with logic synthesis and eliminates the timing-closure iterations that are sometimes necessary when you add the test structures independently.
Syntest Technologies has been in business for more than 10 years, always focusing on the problems associated with testing circuits. It offers a number of products for DFT. TurboCheck is a testability analyzer for sequential circuits. Given an RTL netlist, it assists designers in developing test options. TurboScan and TurboBSD perform synthesis of scan and boundary-scan circuits, respectively, and also generate the related test patterns. TurboBIST synthesizes the BIST logic surrounding either functional logic or memory blocks, including IP (intellectual-property) cores from third parties. If your design contains DFT cores, you can use TurboDFT to stitch them together with or without a boundary-scan controller. Syntest's latest product, VirtualScan, allows engineers to access a large number of test patterns inside the chip using fewer pins than would otherwise be required, reducing the time that the test pass takes. Logic Vision has also focused its efforts on improving testability. It markets a number of products, including both embedded and externally programmable memory BIST blocks for both DRAM and SRAM and embedded-BIST circuitry for RAM and ROM blocks. Its Chip Test Assemble product creates test infrastructures on a chip, including an IEEE 1149.1-compliant test-access port and a boundary-scan register. Logic Vision also offers its PLL BIST product, which provides embedded test-circuit elements for PLLs.
Most of the efforts in the area of DFT have targeted digital design. Yet current SOC (system-on-chip) designs often contain one or more analog blocks. The designers of DFT technology developed it with digital logic in mind, so adapting it to analog applications is difficult. Credence Systems is pioneering work in the analog-test area. The company began as an ATE provider and has branched out into EDA. Its products adapt BIST technology to such devices as voltage-controlled regulators, as well as ADCs and DACs. Unfortunately, the company's Web site is not user-friendly: it requires you to register and reveal your identity before you can obtain any details about the products, an approach that runs contrary to established EDA marketing practice and possibly indicates that Credence has yet to complete its transformation from ATE provider to system provider.
Most of today's SOCs contain a number of memory blocks dispersed in various portions of the device. Including a BIST structure with every individual block requires more area than anyone would like to dedicate. Yet, the cost of having a nonfunctional chip due to a memory fault is also high. Virage Logic offers an option in the form of self-testing and -repairing memory blocks, the Star memory system. The system includes one or more Star SRAM blocks, a Star processor, and a Star fuse box. You can choose either single-port SRAM blocks as large as 4 Mbits or dual-port SRAM as large as 512 kbits. The semiconductor industry could extend the approach of self-repairing memories to logic by providing redundant structures that the device can use during certain failure modes. A technologically and financially desirable design strategy may be to use the significant number of additional transistors that each new process step provides to build redundant logic. The strategy would increase the chip's reliability by duplicating structures as small as registers or as large as functional blocks. By using statistical analysis of failure modes in a process, EDA companies could develop models to predict the vulnerability of logic blocks. A new type of tester could automatically enable a new topology when the system detects a failure during manufacturing test, using a derivative of the algorithms that placement and routing employ to energize redundant circuitry and internal test structures. The redundant block, functionally equivalent to the failed one, becomes active, and the fabrication yield improves. Of course, the probability that the redundant circuit is also bad would always exist, but multiple failures on the same die are less frequent.
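To illustrate the self-test-and-repair principle in the abstract, the following Python sketch runs a simple write/read-back pass over a small memory, finds a defective row, and remaps it to a spare row through an address-translation table standing in for the fuse box. The array sizes, test data, and class names are hypothetical and are not modeled on Virage Logic's implementation.

```python
# A toy self-testing, self-repairing memory: BIST finds a bad row and remaps it to a spare.
class RepairableRAM:
    def __init__(self, rows=8, spares=1, bad_rows=()):
        self.data = [0] * (rows + spares)
        self.bad = set(bad_rows)           # physically defective rows (for simulation only)
        self.remap = {}                    # fuse-box stand-in: logical row -> spare row
        self.rows, self.next_spare = rows, rows

    def write(self, row, value):
        row = self.remap.get(row, row)
        if row not in self.bad:            # a defective row silently fails to hold data
            self.data[row] = value

    def read(self, row):
        row = self.remap.get(row, row)
        return 0 if row in self.bad else self.data[row]

    def bist_and_repair(self):
        """Write and read back each row; remap any failing row to a spare."""
        for row in range(self.rows):
            self.write(row, 0xA5)
            if self.read(row) != 0xA5:
                if self.next_spare < len(self.data):
                    self.remap[row] = self.next_spare
                    self.next_spare += 1
                else:
                    return False           # no spares left: the device must be rejected
        return True

ram = RepairableRAM(bad_rows=(3,))
print(ram.bist_and_repair())               # True: row 3 has been remapped to the spare row
```

The same remapping idea underlies the speculation about redundant logic: detect the failed structure during manufacturing test, then steer accesses around it to a functionally equivalent spare.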