The verification revolution

March 09, 2017

The Accellera Portable Stimulus (PS) standard is expected to be released early this year, and that will set the stage for the abstraction of verification to be raised to the system level. While attempts have been made to raise the abstraction used for design, the industry decided to stay at the Register Transfer Level (RTL) for most blocks and instead use a block-assembly approach to create systems from internally developed or third-party IP. However, verification costs have been rising, and existing verification languages, such as SystemVerilog, and methodologies, including the Universal Verification Methodology (UVM), fail to address the issues associated with system-level verification. Thus, the stage was set for the development of a new system-level verification methodology.

Portable Stimulus is the first true verification language, in that it does not focus on the direct creation of stimulus like previous generations. It is a description of verification intent, encapsulated in a mixture of C++ and graphs. Tools transform that verification intent model into fully self-checking scenarios run on the design in a simulator or emulator. This process is very similar to the notions of synthesis being applied to a design model, except the output from the tool is a verification test case. While test case synthesis is the first application under consideration for PS, many other applications will be possible. The verification intent model is also a natural place to collect coverage data, but first the industry has to decide what coverage means at the system level.

The purpose of verification is to give designers enough confidence that no major problems exist in the design to cause a respin. The way most companies measure the level of confidence is through coverage level. The industry has become competent in the use of functional coverage, as described in SystemVerilog, but this is not usable at the system level. What needs to replace it, and how can that be tied back into existing coverage tools? This article will outline the concepts of system-level coverage that can be defined using a graph-based approach to verification.

Let’s start with what’s available today. Constrained random test pattern generation, as is being used for block-level verification at the register transfer level, is all about stimulus generation. This is what is encapsulated into SystemVerilog and UVM testbenches. They focus on the creation of a vector of legal stimulus. Constraints, sequences, and virtual sequences are used in an attempt to make the stimulus generation more useful. However, SystemVerilog has no concept of sequentiality or design purpose, so a vector knows nothing about the past, or what might usefully follow. It just has to satisfy all of the constraints and the predefined snippets of directed test encapsulated in the sequences.
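The statelessness of this approach can be seen in a minimal Python sketch of constrained-random generation. The transaction fields and the 4 KB-boundary constraint are invented for illustration, in the spirit of a randomized SystemVerilog transaction; the point is that each vector is legal in isolation but carries no knowledge of what came before or what should follow.

```python
import random

def gen_bus_transaction():
    """Generate one legal stimulus vector under simple, invented constraints."""
    while True:
        txn = {
            "addr": random.randrange(0, 2**16),
            "burst_len": random.choice([1, 2, 4, 8]),
            "write": random.random() < 0.5,
        }
        # Constraint: a burst of 4-byte beats must not cross a 4 KB boundary.
        if txn["addr"] % 4096 + txn["burst_len"] * 4 <= 4096:
            return txn

# Each call is independent: no sequencing or design intent is captured.
history = [gen_bus_transaction() for _ in range(1000)]
```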

To measure confidence in this method of verification, a coverage model has to be created. While it has been given the name functional coverage, the title is a misnomer; a better name would have been observation coverage, because that is what it actually measures. A SystemVerilog cover point defines an observation that, if seen during the act of verification, indirectly implies that some functionality has been executed. It assumes that a scoreboard or reference model has been run in parallel and that a checker has determined that the observation does in fact correspond to correct behavior. These are very loose connections, and many teams have been surprised to find out that this correlation was not valid. Because of that, bugs have slipped through. For companies that require proof that this connection is valid, it is necessary to use a tool such as Certitude, now owned by Synopsys through a chain of acquisitions.

But that is the past. What about coverage at the system level and Portable Stimulus? There is some good news, and some not-so-good news, but nothing as bad as functional coverage.

The good news: Portable Stimulus solves all of the problems associated with functional coverage because it is a model of verification intent. This model defines the functionality of the design and can directly be used to measure coverage. No more having to create a coverage model. No more having to worry about coverage holes. No more uncertainty. Sound too good to be true? Well, it is that good, but only for part of the problem, and it does create a different kind of problem.

The graph that is at the heart of Portable Stimulus defines every possible path of operation through the design. Those paths are limited by constraints placed on the graph, which narrow the legal paths to those the hardware is intended to support. Here's where mathematics comes in. This corresponds to what the computer-science world calls path coverage, and it is inherently what formal verification uses. When a formal proof is made, it is made for every possible path through the design. This is why some designs are beyond the limits of formal verification: the problem becomes too big. That statement defines the first of the new problems.
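The idea of path coverage over a scenario graph can be sketched in a few lines of Python. This is not PS syntax; the graph, its actions, and the constraint are all invented for illustration. The scenario model is a small DAG of actions, every root-to-leaf path is enumerated, and a constraint then narrows the set to the legal paths the hardware is meant to support.

```python
# Hypothetical scenario graph: action -> possible next actions.
GRAPH = {
    "config": ["dma_read", "cpu_read"],
    "dma_read": ["decode"],
    "cpu_read": ["decode"],
    "decode": ["display", "store"],
    "display": [],
    "store": [],
}

def all_paths(node, path=()):
    """Yield every root-to-leaf path through the graph."""
    path = path + (node,)
    if not GRAPH[node]:
        yield path
    for nxt in GRAPH[node]:
        yield from all_paths(nxt, path)

def legal(path):
    # Invented constraint: only DMA-sourced data may go to the display.
    return not ("cpu_read" in path and "display" in path)

legal_paths = [p for p in all_paths("config") if legal(p)]
# Path coverage = fraction of legal paths exercised by generated test cases.
```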

Graph coverage defines what it would mean to verify every path through the design. A design team therefore needs a strategy to narrow that down to the paths that matter most and to prioritize among them. Figure 1 shows how two coverage goals could be added to a graph and a cross coverage defined.


Figure 1  Setting coverage targets on a graph

A couple of terms need to be defined. First, a scenario model is a graph of all use case scenarios that should be supported by the design. Scenario coverage is the coverage of the paths through the scenario model. Unfortunately, scenario coverage is only one part of what coverage means at the system level.

As stated earlier, verifying every path through a graph is an impossible task. Unfortunately, even if that could be done, it would only be the beginning of what needs to be verified. The next level required is what, for want of a better term, can be called concurrency coverage. Multiple graphs may be executed in parallel, which means that total coverage is more like the cross between the graph and itself for every independent thread that can be executed by the hardware. For example, did scenario A, which displayed a video read from the SD card and decompressed on the fly, run at the same time as scenario B, which took in an emergency-alert message through the radio? And were two particular operations of those scenarios executed at the same time such that they both required use of a shared resource? An example of this is shown in Figure 2. This kind of coverage cannot be captured on the graph directly, but tools can be developed to make it possible.


Figure 2  Multiple tasks can be run on available processors.
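A minimal Python sketch can make the cross-product nature of concurrency coverage concrete. The scenario names follow the video-playback and emergency-alert example above, but the path lists and the shared resource are invented: the coverage space is every pair of paths run in parallel, and the pairs of particular interest are those where both paths touch a shared resource.

```python
from itertools import product

# Invented paths through two independent scenario models.
scenario_a = [  # video playback
    ("sd_read", "decompress", "display"),
    ("sd_read", "display"),
]
scenario_b = [  # emergency alert
    ("radio_rx", "decode_alert", "display"),
]

SHARED = {"display"}  # a resource both scenarios may contend for

# Concurrency coverage space: every pair of paths executed in parallel.
cross = list(product(scenario_a, scenario_b))

# The pairs that matter most: both paths require a shared resource.
contended = [(a, b) for a, b in cross
             if SHARED & set(a) and SHARED & set(b)]
```

With n legal paths per scenario and k concurrent threads, this space grows as n to the power k, which is why tool support for prioritizing it is essential.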

Is this done yet? Sorry, but there are other kinds of coverage required. The next one is temporal concurrency coverage, and while the above example could be considered an instance of it, it is somewhat simplistic: it asked only that two processes happen at the same time, or that a synchronization occur between them. A more complex example might be to create a test case that powers up as many independent power domains as possible, so that the designer can use it as a scenario for power integrity, thermal, electromigration (EM), and other power-grid-related verification. In this example, temporal relationships should be placed across concurrent actions, and these can be annotated as temporal constraints on paths through the graph.
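The power-domain example can be sketched as a temporal coverage check. The domain names and their on/off windows are invented; the sketch measures the peak number of domains on simultaneously, which is the quantity a temporal constraint on the graph would try to maximize for a worst-case power-grid scenario.

```python
# Invented power-up/power-down windows per domain: domain -> (on, off).
power_windows = {
    "cpu":   (0, 100),
    "gpu":   (20, 80),
    "dsp":   (50, 120),
    "modem": (90, 150),
}

def domains_on_at(t):
    """Return the set of domains powered on at time t."""
    return {d for d, (s, e) in power_windows.items() if s <= t < e}

# Peak concurrency can only change at a window boundary, so it is enough
# to sample at every on/off event time.
events = sorted(t for w in power_windows.values() for t in w)
peak = max(len(domains_on_at(t)) for t in events)
# A temporal constraint would push a generator toward a schedule where
# peak equals the total number of domains.
```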

While companies are working on tools in all of these coverage areas today, the problem is not fully solved. Instead, it is a continuing dialog between users and tool developers. It requires listening to users about the problems they face so that the best long-term solutions can be found without locking down a solution based on what is available today. One way to achieve this flexibility is by mapping notions of system-level coverage into existing cover points and cover groups that can be used with existing coverage tools, as shown in Figure 3.


Figure 3  Migrating coverage data between tools
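One way such a mapping could work is sketched below in Python; the bin-naming scheme and paths are invented, not a defined PS mechanism. Each legal path through the graph becomes one named bin, in the way a SystemVerilog covergroup holds named coverpoint bins, so an existing coverage tool can consume hit counts without understanding the graph itself.

```python
# Invented legal paths through a scenario graph.
legal_paths = [
    ("config", "dma_read", "decode", "display"),
    ("config", "dma_read", "decode", "store"),
    ("config", "cpu_read", "decode", "store"),
]

# Map each legal path to one named bin, covergroup-style.
bins = {"path_" + "_".join(p): 0 for p in legal_paths}

def record(executed_path):
    """Record one executed test-case path against its bin."""
    key = "path_" + "_".join(executed_path)
    if key in bins:
        bins[key] += 1

record(("config", "dma_read", "decode", "display"))

# Coverage score in the form existing tools expect: bins hit / bins total.
coverage = sum(1 for hits in bins.values() if hits) / len(bins)
```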

However, there are also advantages in keeping coverage data on the graph. This will allow PS tools to control and prioritize aspects of the graphs such that the first test cases generated will be those that address the most important design targets. This makes the verification-intent model an integral part of a verification plan rather than it being loosely connected as it is today. It also enables the newly emerging tools and methodologies to coexist with existing flows and with other tools and verification performed at the block level.

The creation of the Portable Stimulus standard is a turning point for the industry, and the user community has recognized this. With many standards, users take a back seat and, while they monitor the development of the standards, they do not get actively involved. That has not been the case with Portable Stimulus, with users exerting pressure to ensure the standard goes in the directions they want, rather than taking the easy path for EDA companies based on the tools and technologies they already have. This will create a lot of innovation within the industry.

The dialog between users, tool developers, and standards bodies will continue long after the release of the PS standard. My company, Breker, has been working hard to develop definitions for system-level coverage that meet the needs of the user community, and as with much of the technology at the heart of Portable Stimulus, will share its experience with the community and continue to donate the necessary technology to drive the methodology forward.

 

Adnan Hamid is the founder and CEO of Breker Verification Systems, and inventor of its core technology. He has more than 20 years of experience in functional verification automation. Hamid managed AMD’s System Logic Division and led its verification group to create the first full test case generator for an x86-class microprocessor.

 
