FPGA debugging techniques to speed up pre-silicon validation

February 07, 2013

The increasing complexity of designs being prototyped on FPGAs has led to a growing need for better debugging techniques. The design being prototyped on the FPGA may be used for validation, early software development, proof of concept, and so on. It is therefore important that the focus remain on performing these tasks rather than on figuring out whether an issue is caused by a prototyping error.

Different debugging techniques may be required depending on the available design or the type of task at hand. Adopting the right debugging techniques can also reduce the cycle time for validating a design on an FPGA.

This paper discusses some FPGA debugging techniques that can speed up the validation process, while highlighting some of their constraints. These techniques can be applied to the various challenges and issues faced during pre-silicon validation, as discussed below.

RTL Simulation
One of the major requirements for prototyped designs, when the RTL is not yet completely stable, is the ability to access and monitor the behavior of internal signals. This helps in finding the root cause of an issue, whether it stems from a prototyping error or from an anomaly in the RTL itself.

RTL simulation requires building a full-fledged verification environment around the module under test, one capable of driving stimulus, meeting any memory requirements, monitoring errors in the design, and so on.

Such a model for RTL simulation of a design is shown below in Figure 1.

Figure 1: RTL Simulation Model
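The stimulus/monitor/checker structure of such an environment can be sketched in plain Python as a toy scoreboard: a reference model predicts the expected result for each stimulus, and a checker records mismatches against the DUT response. Here `dut_add` is a hypothetical stand-in for the prototyped design; in a real flow the response would be sampled from the simulator rather than computed in software.

```python
import random

def dut_add(a, b):
    """Hypothetical stand-in for the DUT response; in a real environment
    this value would be sampled from the simulated design."""
    return (a + b) & 0xFF  # models an 8-bit adder

def reference_model(a, b):
    """Golden model used by the monitor to predict the expected output."""
    return (a + b) % 256

def run_scoreboard(num_stimuli=100, seed=0):
    """Drive random stimulus, compare the DUT output with the reference
    model, and collect any mismatches for later debug."""
    rng = random.Random(seed)
    errors = []
    for _ in range(num_stimuli):
        a, b = rng.randrange(256), rng.randrange(256)
        expected = reference_model(a, b)
        actual = dut_add(a, b)
        if actual != expected:
            errors.append((a, b, expected, actual))
    return errors

if __name__ == "__main__":
    mismatches = run_scoreboard()
    print(f"{len(mismatches)} mismatches found")
```

The same pattern scales to a real testbench: the reference model and checker stay in the environment, and only the source of the DUT response changes.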

Assumptions / Advantages

  • This method of debugging is suitable when the design is of a size that today's verification tools can handle comfortably, provided simulation times are not so long that the return on investment (ROI) of building an elaborate testbench becomes negligible.
  • For an already verified design, it makes sense to re-use the existing verification testbench rather than building one from scratch, saving both time and effort.
  • Effort should instead go into tweaking the existing testbench so that it can run validation test suites without any significant change.
  • You can reduce the dependence on simulation by first narrowing down the issue and then running only the failing scenario in simulation, examining the waveforms of the signals that might be causing it.
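The "run only the failing scenario" idea above can be sketched as a small helper that filters a regression result log down to its failing tests, so that only those are re-run in simulation. The log format here (one `<test_name> PASS|FAIL` entry per line) is an assumption for illustration, not a real tool's output:

```python
def failing_tests(result_lines):
    """Return the names of tests marked FAIL in a regression log.
    Assumed format: '<test_name> <PASS|FAIL>' per line."""
    failures = []
    for line in result_lines:
        parts = line.split()
        if len(parts) == 2 and parts[1] == "FAIL":
            failures.append(parts[0])
    return failures

# Hypothetical regression results
log = [
    "smoke_basic PASS",
    "dma_burst FAIL",
    "irq_storm PASS",
]
print(failing_tests(log))  # → ['dma_burst']
```

Only the single failing scenario then needs full-signal waveform dumping, keeping simulation turnaround short.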


Constraints / Disadvantages

  • If the design is large, it is nearly impossible to examine and monitor the behavior of every signal in it.
  • One prerequisite for this approach is that the validation or software engineer has good insight into the design as well as its internal signals.
  • It is time consuming and demands extra effort from the software developer.
  • RTL simulation may not be helpful when the design is partitioned. Since the goal is mostly to re-use the top-level verification environment, with little effort, for the prototype version of the DUT, the additional hierarchies introduced by design partitioning require extra work to modify the internal signal probes set up in the standalone verification environment.
  • Designs that produce large volumes of fast-streaming output data through complex processing can be challenging to debug, as the issue may exist at various levels of the design hierarchy.

For complex designs, better and faster debugging methods must be considered; these are discussed in the coming sections.
