The verification plan from hell
I had been working day and night on the post-silicon validation of the UHX1000. The design had nearly as many bugs in it as it had lines of code. After five spins and several clever workarounds, it seemed it was finally reaching a quality level that would allow us to deliver samples to our customers, and hope they might still buy despite our endless project delays.
My boss, Mitchell, called me into his office. The discussion went something like this:
"Jasmine, I have an exciting assignment for you. I know you have been working hard on the post-silicon, and I have been hearing good reports about results from your work, so I'd like to give you an opportunity to lead the verification on the UHX2000. This time we really want to get the verification right - the first time around."
"A few things you should know to get started: We should be integrating the new blocks of RTL we are currently designing sometime early next month, so it's a good time to start planning the verification effort. In order to save time we have decided to skip the documentation of the new blocks and delve right into coding. So the blocks should be ready for integration in a few weeks."
I cringed, unable to speak; another round of design done without documentation, another project with verification as an afterthought. I knew this was not going well. But it got worse.
"I'm assigning Alex and Roth to the job along with you; Alex has experience in debug since, between you and me, his first ASIC was a failure, and Roth is a recent college graduate we're not sure what to do with, so we've told him that if he does a good job on this, we may transfer him to the design team."
"Also, I want to clear up all this talk about hiring an expert verification consultant to help us set things up. We all know how to do verification, so we don't need someone telling us how we should do our job."
"I also heard there was some talk about upgrading our test-bench language. To be honest, I don't think you need any of those fancy tools, which are supposed to do all kinds of random testing. I suggest you stick to good old Verilog for the new blocks, and you should be reusing components from the old design which are written in VHDL, C, Perl, and Tcl, so you won't have that much to develop anyway."
It was about at this point that I started to smile to myself. It all started to make sense: let's entrust our most critical and time-consuming problem, the quality of our chips, to the least experienced people in the company. Then we can "save" by reinventing the wheel, implementing functions already well established in commercial tools, and conveniently ignoring the acquired knowledge of experts who have already done similar projects many times. But now the story got even better.
"There was also talk about using functional coverage; that's the last thing I want to hear about. What a waste! Instead of just writing plain old tests, you would have to both define and code functional coverage points, and then "hope" that the random testing somehow creates them. We all know that we can accomplish all of our goals with these good old tests. But, just to be sure, we'll run an automatic tool for toggle coverage at the end of the project to see that we didn't miss anything."
'Absolutely', I thought to myself, 'keep on working with the Gantt charts and the test checklists'; they have proven so effective on all of our previous projects. How could I be so callous as to think that an objective measure of progress, based on what the tests are actually doing, could replace the age-old wisdom which has brought so many companies to produce sand? But again, the story kept getting better.
"We want to make sure that we can use the verification environment for Post-Silicon testing, so I will need you to make sure all your checkers are completely external to the design, and adhere closely to the requirements of the post-silicon folks. I know this might slow you down a bit, and take away a lot of the visibility advantages you have in simulation, but we can probably save a lot down the road, and that's the type of savings our management wants to hear about."
"For your development, I want you to use the UHX2000 specification. We're just now piecing it together from a bunch of documents. It's still a work in progress, but it should give you an idea of everything that needs to get verified. Since design time is critical, I will have to ask you to keep your team from asking the architects questions during this period, at least until the final spec is released."
Our company website states that 'We aim to provide a challenging work environment.' I can't think of anything more challenging than putting blinders on and working with your hands tied behind your back and absolutely no vision. So I guess he was living up to the company's expectations.
"The last project spent over a year developing the verification environment for the UHX1000, and 18 people in all wrote the 2000 tests. My estimate is that the legacy tests will be the basis for most of the verification, so it should be a simple task just to port them to the new interfaces and protocols. If you have concerns about the tests continuing to function correctly, you can use diffs to make sure they're still on track. After the 4th spin those tests proved to be quite efficient at getting the silicon out and relatively bug-free. So what I'm trying to say is that we have already invested years and years in these tests, so we can't risk losing them."
Oh, great! Legacy tests, piece of cake, I thought to myself. I just hoped my cynical grin didn't show too brightly as he continued.
"To improve quality, we've decided to measure each designer's performance by how many bugs are found in their block. I think this will have a positive influence on both the design engineers and the verification team, since it ties each engineer's success to the quality of the design he or she delivers. A colleague of mine told me how this method decreased the number of reported bugs in his project by over 53%! That's an amazing savings!"
"Since we know verification is mainly a resource problem, if you fall behind we will assign 5 to 10 people to it during the critical time. You need to plan to support these engineers in test writing with the verification environment. In addition, the logic design team will be completely freed up to write tests while they work part time on debug of their blocks as well as run synthesis and work on the backend. Also, the PTY department has some interns they say they can spare for a few weeks of testing, and I bet we can hire short-term contractors to write some tests. It shouldn't be any problem finding good verification engineers; all they need to know is how to write some Verilog, and it doesn't even have to be synthesizable, so how hard could that be?"
It never ceases to amaze me how managers see every problem in terms of resources, and have a knack for tracking and emphasizing the wrong project indicators instead of quantifying the goals and tracking progress against those same goals. I was exhausted now, but looked attentive as he continued.
"This might seem like a lot of work, but I want you to take into account that we'll do most of the hard verification work on an FPGA, so your job should be simple. You don't need to waste any time on random, since that will be accomplished in the FPGA. But keep in mind that you will need to support recreating the bugs in simulation."
"The schedule is pretty tight on this; since we plan to finish the coding so quickly, I expect you can be done with the verification in the span of 6 to 8 weeks. In short, all you need to do is write some interfaces, port some tests, and write a few tests for the new protocols."
"As you know, we've always placed the utmost importance on verification. To keep morale high, your goal should be 0 bugs in first silicon. You have a key role in the success of our product, Jasmine; I know you can do it!”
I found this great plan's combination of baseless time estimates, over-reliance on emulation alone, and unachievable goals disheartening. I took a minute to absorb it all and contemplate my response, and then said, "You know, this sounds really exciting, but I really don't think I'm right for this job."
As I left the room, I glanced back to see the stunned look on his face.
*This article is fiction and is meant to illustrate trends the author has seen in the industry.*
Akiva Michelson is chief technology officer for Ace Verification, a verification methodology training and consulting firm.