Don’t over-constrain in formal property verification (FPV) flows
Formal property verification (FPV) is increasingly being used to complement simulation for system-on-chip (SoC) verification. Adding FPV to your verification flow can greatly accelerate verification closure and find tough corner-case bugs, but it is important to understand the differences between the technologies. The main difference is that FPV uses properties, i.e., assertions and constraints, instead of a testbench. Assertions are used in simulation as well, but the role of constraints is different. An understanding of constraints is necessary for successful use of FPV.
Constraints play a central role in FPV. They define what is legal stimulus to the design under test, i.e., what state space can be reached. Assertions define the desired behavior of the DUT for the legal stimulus.
Constraints describe how inputs to the DUT are allowed to behave, what values they may take, and the temporal relationships between inputs. In simulation terms, constraints play the role of the stimulus. In constrained-random simulation, the constraint solver generates an input vector for the next cycle that satisfies all constraints, and it continues to generate cycle after cycle of stimulus until the end of simulation, or until it reaches a situation where no legal stimulus can be generated.
In contrast, constraints for formal verification are declarative: they can describe, for example, how to legally communicate within a given protocol.
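As a sketch of what such declarative constraints look like, consider a simple request/acknowledge handshake. The signal names (`req`, `ack`, `busy`) are illustrative, not taken from any particular protocol specification:

```systemverilog
// Hypothetical request/ack handshake constraints (signal names are
// illustrative, not from a specific protocol spec).

// Value constraint: a request may only be raised when the bus is idle.
req_when_idle: assume property (@(posedge clk) $rose(req) |-> !busy);

// Temporal constraint: once asserted, req must hold until ack arrives.
req_stable:    assume property (@(posedge clk) req && !ack |=> req);
```

Note that neither assumption dictates cycle-by-cycle values; together they carve out the set of all legal input sequences, and the formal tool explores every behavior within that set.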
Over- and under-constraining
Writing constraints that exactly describe all legal stimuli is difficult and often undesirable. As a result, the formal environment is either under- or over-constrained. Under-constrained means there are fewer constraints than required to exactly model the stimulus, so some potentially illegal inputs will be driven to the DUT. Over-constrained means there are more constraints than required, so not all legal behaviors are allowed.
Having a slightly under-constrained environment is usually the best approach. Many designs can handle inputs and behaviors not defined in the specification, and a larger state space in the design will be verified if fewer constraints are used. An under-constrained environment may lead to failing assertions, and if this is the case, additional constraints need to be added. For example, let’s say we have a 4-bit multiplier to verify:
The specification says it can multiply positive integers, i.e., A > 0 and B > 0, but the verification engineer assumes A >= 0 and B >= 0. The constraints and the assertion to check the multiplier are simply:
assume property (@(posedge clk) A >= 0);
assume property (@(posedge clk) B >= 0);
assert property (@(posedge clk) C == A * B);
If the property is proven in this case, covering A and/or B equal to zero as well as all positive values, then it obviously also holds for the narrower case where A and B are strictly greater than zero. The constraints allowed additional behaviors, which means the environment is under-constrained. Having fewer constraints usually also improves the run time of formal tools. If the properties pass, we don't have to worry about the under-constrained case anymore.
Over-constraining the formal environment is a much bigger problem, as it may hide bugs in the design. In effect, you are not verifying as much as you think you are. For example, assume the case of a multiplier that can multiply both positive and negative numbers, but the verification engineer misinterprets the specification and writes constraints to restrict A and B to >= 0. Assuming the multiplier works for non-negative operands, the property above will pass, and because all properties pass, verification looks complete even though negative operands were never checked.
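To make this concrete, here is a sketch of the over-constrained environment for the signed multiplier. The signal names follow the earlier example; the operand width is an assumption for illustration:

```systemverilog
// Signed 4-bit multiplier: legal inputs include negative values, but
// the erroneous assumptions below restrict proofs to non-negative ones.
assume property (@(posedge clk) A >= 0);  // over-constraint: excludes A < 0
assume property (@(posedge clk) B >= 0);  // over-constraint: excludes B < 0
assert property (@(posedge clk) C == A * B);
// A bug in the negative-operand path (e.g., incorrect sign extension)
// can never be exercised, yet the assertion is reported as proven.
```

The proof result is genuinely correct for the constrained input space; the danger is that the constrained space is silently smaller than the legal one.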
Over-constraining is only a problem when it is unintentional. Intentional over-constraining is a useful method to break up verification of a design into cases. One example is verification of a memory controller. First constrain the stimulus to do only write transactions, and then constrain it to do only read transactions. Each of these cases is clearly over-constrained.
In the first case, read transactions, which are legal transactions, are not allowed, and in the second case, write transactions are not allowed. This is not a problem as long as the two cases together cover all legal stimulus. The danger is when only one of the cases is exercised and the other is forgotten, while the verification engineer believes verification is done. Another risk with intentional over-constraining is that legal input sequences spanning the cases, such as a read followed by a write (in the case of the memory controller), are never verified.
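A sketch of such an intentional case split, assuming a hypothetical command interface with `cmd_valid` and a `cmd` input encoding `WRITE` and `READ` (names are illustrative):

```systemverilog
// Run 1: verify write behavior only (intentionally over-constrained).
only_writes: assume property (@(posedge clk) cmd_valid |-> cmd == WRITE);

// Run 2: in a separate proof run, verify read behavior only.
// only_reads: assume property (@(posedge clk) cmd_valid |-> cmd == READ);

// Together the two runs cover all single-transaction stimulus, but mixed
// sequences such as read-after-write still require an unconstrained run.
```

Tracking which case-split runs have actually been executed is what prevents the "forgotten case" problem described above.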
Constraints limit the set of inputs and the state space explored in formal property verification. If the verification environment has constraints that conflict with each other or with statements in the design, no legal inputs may be possible, and none of the state space in the design will be reachable. For example, the two constraints below can be satisfied individually, but together they produce a conflict:
equal: assume property (@(posedge clk) wb_stb_i == wb_cyc_i);
not_equal: assume property (@(posedge clk) wb_stb_i != wb_cyc_i);
Conflicting constraints can be seen as the most extreme form of over-constraining: the environment is so constrained that there are no legal inputs. No assertion can then fail, because in effect no checking is done. It is analogous to claiming that none of your test cases fail in simulation when the reason is that you have not executed any test cases. The statement is true, but misleading in terms of verification completeness.
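A common sanity check against a dead or severely over-constrained environment is to add cover properties alongside the assertions; if even a trivial cover is unreachable, no real checking is happening. The second cover below reuses the Wishbone-style signal names from the example above:

```systemverilog
// If this trivial cover cannot be reached, the constraints conflict and
// every "proven" assertion is vacuous.
alive:    cover property (@(posedge clk) 1'b1);

// More useful: confirm that interesting stimulus is still reachable
// under the current set of constraints.
stb_seen: cover property (@(posedge clk) wb_stb_i && wb_cyc_i);
```

Reviewing cover results is a lightweight way to measure how much of the intended stimulus space survives the constraints.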
Most formal tools detect conflicting constraints before attempting to run proofs of properties. Manually determining which properties conflict is difficult, and it is better to rely on tools to report it. For example:
[Error] CONSTRAINT_CONFLICT: Found conflicting (contradictory) constraints.
[Info] CONFLICT_DBG: Conflict occurs at depth 0
[Info] CONFLICT_DBG: List of assumptions involved in conflict