The Conservative State of RTL Functional Verification

In the many years I have worked as an RTL design and verification engineer, I have witnessed the conservatism of people in this industry.  Don’t get me wrong, the engineers I worked with were mostly keen to use new technology and new tools.  Yet the rate at which we adopt and discover new ways of doing things is much slower than in the software industry.  I suppose there are three main reasons behind this inherently conservative mindset.

  1. Ever heard of this?  “We can’t use that because the synthesis tool does not support it”, where you can substitute “synthesis” with simulator, formal verification, DFT, and so on.  Basically, we are at the mercy of the EDA giants.  If they say no, we don’t use it.  When unsure, we take the conservative choice, which is to do whatever was done before.
  2. Ever heard of this?  “Has it been proven in silicon yet?”  Yes, the cost of a failed chip is so high that it thwarts innovation for ordinary engineers.  It is often too risky to try something new unless it is directly related to new functionality or directed by our bosses.
  3. Producing new tools takes strong software skills, and we have a very limited pool of people who can do that.  That’s why the software industry keeps producing new open-source tools while we don’t.  One of the best open-source EDA tools I have used is Icarus Verilog, which was actually developed by a software engineer.

New language additions and even new verification libraries have to be adopted as a standard and supported widely by EDA vendors, or people will not dare to use them.  That’s crazy.  We used to lead in verification and testing methodology, but the software world has now gone at least a step ahead of us.  We are stuck with a language and methodology inherited pretty much from the last century.  Yes, Vera and Specman-e were invented just before the 21st century and became popular in the early 2000s.  SystemVerilog was a direct descendant of Vera, unfortunately the inferior of the two.  The solution was half-baked, and SystemVerilog had to be augmented with a very thick UVM layer to fulfill its verification promises.

Anyway, the whole solution, UVM + SystemVerilog, only answers the need for advanced functionality in chip verification.  It fails terribly at delivering a productivity improvement.  The complex APIs and the lack of flexibility undercut the promised benefits of the methodology.  The net effect, in my opinion, is negative in many cases, especially when you first adopt the methodology (which should really be called a “framework”, not a “methodology”).  Many will disagree with me on this, and that’s understandable.  I’ll write in later posts about how I have used and adapted UVM in my recent projects to get around the negatives.

To argue the point, just look at the evidence around you.  A lot of the resources about UVM on the internet focus on explaining the API and its internals, i.e. how to use it, how it actually works, and why it works that way.  Many have also come up with ways to simplify it, notably Easier UVM, UVM-Light, etc.  Few focus on actual applications and advanced use cases.  The same thing happened when I attended a 3-day UVM training class taught by Synopsys.  The majority of the class time was spent explaining the UVM API and its internals!  The labs were fragmented, not designed to build up a whole testbench environment.  The style was totally different from the Specman training class I attended many years ago, which went through a complete testbench example and built it up as the class covered each feature of the language/tool.

Why do we need to care how it works internally?  Usually when we use a library, we only need to know its interface, not its implementation.  That’s not the case for UVM, because:

  1. UVM doesn’t just use SystemVerilog; it puts bandages on it all over the place through macros and the like.  To debug problems or interpret error messages, we cannot avoid understanding the internals.
  2. Sometimes we need to do something more advanced.  To go beyond the basic examples presented in the literature, I found that I couldn’t just read the API reference manual; I had to dig deeper and sometimes even read the source code.
  3. We have to write a lot of code that means nothing.  It is there only because it is required to make the thing work, and good engineers want to know why.

Basically, to be successful with UVM you need to understand its internals and how it actually works: not every little detail, but at least the whole picture.
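To give a concrete picture of points 1 and 3 above, here is a minimal sketch of a do-nothing UVM component (the class name and message tag are made up purely for illustration).  Even this empty shell needs the factory-registration macro and a fixed-form constructor, and when an error message points into the macro-generated code, reading the library internals is the only way to make sense of it.

```systemverilog
// A minimal, do-nothing UVM component, shown only to illustrate the
// boilerplate involved.  The class name and message tag are arbitrary.
import uvm_pkg::*;
`include "uvm_macros.svh"

class my_component extends uvm_component;
  // Registers the class with the UVM factory; the macro expands into a
  // block of generated code that you never wrote but may have to read.
  `uvm_component_utils(my_component)

  // The constructor signature is dictated by the base class and must be
  // written out even though it adds nothing of our own.
  function new(string name, uvm_component parent);
    super.new(name, parent);
  endfunction

  // A phase callback; `uvm_info routes the message through the UVM
  // report server instead of a plain $display.
  virtual function void build_phase(uvm_phase phase);
    super.build_phase(phase);
    `uvm_info("MYCOMP", "build_phase called", UVM_LOW)
  endfunction
endclass
```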

The study by Wilson Research Group showed that verification resources increased steadily from 2007 to 2016.  Before 2012, the industry seemed to employ more design engineers than verification engineers.  It’s not 100% conclusive whether this increase was due to the designs becoming more complex or the verification process itself becoming more complex.  Chip size does not directly translate to complexity.  The study treats gate count as the indicator of design complexity, which I disagree with; the number of design engineers should be more directly correlated with complexity.

[Figure: Trend of chip design vs. verification resources]

There are a few more interesting points to note from the study:

  1. The UVM adoption rate in the study grew from 40% in 2012 to 63% and then 75% by 2016.  What I actually wanted to know was the usage rate of the xVM (UVM + OVM + VMM) methodologies, but I couldn’t extract that from the study.  It does have the numbers for each of the methodologies, but we can’t tell the exact combined usage rate (or adoption rate) because most people who adopted UVM were doing so in transition and were using more than one methodology, including traditional and other methodologies.  Having said that, it’s safe to say that the adoption rate of xVM has been steadily rising from 2007 to the present.
  2. The quality of work, as indicated by design completion time and the number of IC re-spins, did not improve much (or almost at all) from 2012 to 2016.  The same study done in 2012 also did not show any significant sign of improvement in work quality from 2007 to 2012.

I have a feeling that there are a number of shoot-yourself-in-the-foot cases out there, as I have seen some myself.  In my opinion, the quality of the work also depends very much on the quality of the design, the management, and the discipline of the team.  Test plans, design reviews, code reviews, a release/regression methodology, open communication, good documentation, and good project management are the things that directly improve work quality, and good teams have done all of that all along.  Adopting an advanced verification framework alone is not going to fix these problems.
