Risk reduction in a verification upgrade
Product managers are under enormous pressure. Designs are getting larger and more complex, and at the same time managers have been barraged by a massive upgrade cycle on the verification side of the team. Vendors have released new methodologies (VMM, RVM, eRM), languages (PSL, e, OVL, SystemVerilog, SystemC), and tools, each claiming to increase quality and reduce the risk associated with getting the chip out the door on time and fully working.
Any methodology change carries inherent short- and long-term risks as the team learns new techniques. Managers need a process by which they can evaluate possible changes, decide which ones are right for them to adopt, introduce those changes into the team, and measure their effectiveness. This article examines that process, along with specific examples related to the adoption of the recently released Verification Methodology Manual for SystemVerilog.
Designs today often include multiple heterogeneous processors with complex hardware/software interfaces, increased levels of concurrency, and large quantities of reuse. This growth in design complexity leads to an even greater increase in verification complexity, and with verification now consuming on average 70% of total project time, designers must find ways to make the verification process more efficient.
One way to accomplish this is through verification reuse. But for reuse to be effective, and to enable teams to incorporate external verification IP, the industry needs a methodology for assembling verification components into a complete verification flow. A recent example, the VMM (Verification Methodology Manual for SystemVerilog), attempts to define rules for such a methodology so that industry-wide verification reuse becomes possible, because every component is built in a standardized way.