Verification’s Inflection Point

What will the future of verification look like? New demands being placed on verification teams are causing the industry to take a deep look at the possibilities.

Functional verification is nearing an inflection point, brought on by rising complexity and the many tentacles that intertwine it with other disciplines. New abstractions, or different ways to approach the problems, are needed.

Being a verification engineer is no longer enough, except for those whose sole concern is block-level verification. Most of the time and effort spent in verification is now focused on the chip, the system, performance, power, safety, and security, along with an increasing array of other concerns, the latest of which are associated with machine learning.

Verification always has been playing catch-up. Design had a 20-year head start. In the early days, designs were tested, not verified. It is no accident that this term is still used today, because simulation was introduced to virtualize the physical test process. Testers were big and expensive, so companies could not afford to develop their test vector suites on those machines. Being able to do this on the design, while it was still being developed, produced better test suites in a more cost-effective manner.

Even the first languages we associate with verification were developed to improve the test development process. Some claim the birth of verification happened with Verilog, a language that could both describe the hardware and be used to define a testbench. Others believe verification started only when the first constrained-random test pattern generator was developed. Either way, verification has been playing catch-up ever since, and it is not clear that it is gaining ground.

At the most simplistic level, it can be looked at mathematically. “If you double the gate count, you square the state space,” says Paul Cunningham, corporate vice president and general manager at Cadence. “It is existential that verification is on a different trajectory to anything else. As we keep putting more compute into silicon, the verification problems will grow out of bounds. If you double the gate count every 18 months, that is already exponential. But if you square the complexity every 18 months, you have a double exponential.”
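
At face value that squaring follows from simple arithmetic. As a rough sketch, assuming the number of state bits grows in proportion to gate count:

```latex
% Illustrative only: state bits n assumed proportional to gate count.
% Doubling the gate count doubles n, which squares the reachable state space:
S = 2^{n} \qquad \Rightarrow \qquad 2^{2n} = \left(2^{n}\right)^{2} = S^{2}
% With gates doubling every 18 months, n(t) = n_0 \cdot 2^{t/18} (t in months),
% so the state space grows as a double exponential:
S(t) = 2^{\,n_0 \cdot 2^{t/18}}
```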

Exhaustive verification has not been possible for a long time. “You have to look back to the distinction between simulation and verification,” says Neil Hand, director of marketing for design verification technology at Siemens EDA. “Verification today utilizes a whole suite of tools, including data analytics, where you are looking at how to take this big sea of data and make it actionable for the verification engineer. It just keeps expanding, and it will keep expanding. For any significant system, you are never going to check everything. It will have a state space that you cannot completely cover. What you need are tools and technologies that allow you to focus on where you should be looking.”

Balance is necessary. “People are running formal, simulations on a farm, emulators, FPGAs, they’ll be doing stuff by hand, they’ll be doing stuff automatically, they’ll be doing continuous integration, all with the remit of trying to get something hugely complicated, with several dimensions, out the door in a way that doesn’t come back to cost the company money or lose the company time,” says Colin McKellar, vice president of verification platforms for Imagination Technologies. “Because of the rapidly increasing complexity, but not necessarily backed up with a commensurate increase in revenue or investment, the verification and validation teams are becoming more challenged, and require more automation and more ability to traverse the data quickly, and more assistance across the flow.”

Increasing scope
The added complexity impacts functional verification, but that is not the only task placed on the shoulders of the verification team today. “It’s no longer just functional verification,” says Imagination’s McKellar. “In fact, it hasn’t been for some time. Power, performance, driver stability, driver quality, security implications, functional safety are all massive challenges. Functional verification has morphed quite quickly into something that is much bigger, without necessarily having the tools keeping up, and an increasing bunch of experts are involved in the process.”

In some cases these tasks are getting pushed out to other groups. “There are people doing safety verification and they are not the ones doing functional verification,” says Siemens’ Hand. “They are not the same as the ones doing formal verification. They are not the same ones doing power verification. We see many of our users where this split happened. The challenge for smaller companies is they cannot afford different groups. Smaller companies require their verification engineers to take on way more than they did in the past.”

The scope of these problems is not well defined. “Whether it’s a security-related requirement, or a safety-related requirement, or reliability-related requirements, you have a waterfall process that comes from the original requirements,” says Cadence’s Cunningham. “How much are you trying to validate and verify those requirements at the software level? How much are they decomposed into chip-level requirements? You see that duality with functional safety, where you have safety mechanisms that try to ensure you meet your safety requirements at a software test level, or you can try to decompose them into unit-level requirements and check those at the chip level.”

Functional safety requires a concerted effort. “For safety certification it is not enough to verify that this device works or not,” says Darko Tomusilovic, verification director for Vtool. “We also have to prove that we respected the procedure, that we can reproduce all bugs, that we documented all the reviews of all the processes. That adds another layer of complexity to an already quite difficult profession. Verification must test the feature set, and it also should try to break the feature set.”

Security extends some of those notions even more. “You have to verify the cases that are necessary to support the required functionality, but you also have to verify the cases that do not have the required privilege,” says Olivera Stojanovic, senior verification manager for Vtool. “Your work has more than doubled. You need to check all the combinations both in a positive and negative sense.”
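
As a trivial sketch of what checking all the combinations “in a positive and negative sense” can mean in practice, consider the hypothetical Python example below. The registers, privilege levels, and access policy are invented for illustration:

```python
from itertools import product

# Hypothetical example: every (register, privilege) combination needs a check,
# whether the expected outcome is acceptance (positive) or rejection (negative).
REGISTERS  = ["ctrl", "status", "key"]
PRIVILEGES = ["user", "supervisor", "secure"]

def access_allowed(reg, priv):
    # Invented policy: only secure mode may touch the key register.
    return not (reg == "key" and priv != "secure")

test_plan = [
    ("positive" if access_allowed(reg, priv) else "negative", reg, priv)
    for reg, priv in product(REGISTERS, PRIVILEGES)
]

# 3 registers x 3 privilege levels = 9 cases; the negative ones are verification
# work that simply did not exist when only the required functionality was checked.
for case in test_plan:
    print(case)
```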

It can become even more complicated when side-channel attacks are considered, because this requires intimate knowledge about physical aspects of the implementation. “If you look at it purely from a hardware point of view, it is very different than when you look at it from an application viewpoint,” says McKellar. “That means going through the various software layers, and to various hardware blocks as well. That’s a hugely challenging thing to validate and verify.”

It is also an interesting challenge when integrating IP blocks. “How do you prove to a customer that if you take all these components and bolt them together this way, you can guarantee security?” adds McKellar. “Is that even possible? Given the nature of the industry and the desire to get things out quickly and cost effectively, it raises all kinds of challenges.”

Verification no longer ends when chips or products are shipped. “We must take a holistic approach to verification when we talk about continuous verification throughout the product lifecycle,” says Rob van Blommestein, head of marketing for OneSpin Solutions. “We need to look at solutions that work to verify not only functional behavior but also safety and security aspects, and simulation alone cannot be the answer. Formal should be an integral part of the verification puzzle if proper verification closure is to be met. It is incumbent that design teams plan early for verification so that specific bugs, including corner-case bugs, can be detected before it is too late, and delays occur. Design and verification teams ought to work closely with safety engineers and security engineers to make this happen.”

Domain encroachment
Another factor is the expanding overlap between software and verification. “Almost every chip and every device you verify will have at least one, if not more, processors,” says Vtool’s Tomusilovic. “In the pre-silicon phase of verification, you need to run some basic software and system scenarios to make sure the device will work as expected. This involves software drivers, which we developed to run in our simulation environments, and are almost the same as the ones which embedded software developers developed for the end product.”

Responsibilities are shifting along with these changes, as well. “Teams are taking on more verification engineers every year, but there is also increasing software content,” says Cunningham. “They are finding the software and the verification teams have more blurring of challenges and who takes on which responsibility. If something is pure software, then you have a contract at the handoff point. You have to divide somewhere, and that is a fairly reasonable way in which you come up with that contract between the two teams. Who owns what and who is accountable for it?”

Developments such as RISC-V make this more difficult. “When you look at things like RISC-V, where people can change the instruction set, you’re not just looking at ‘is my software working,’” says Hand. “You now ask, ‘Is my software working and the processor doing what I intended to do with my constraints?’ It is not always clear whose responsibility that is.”

This is changing the dynamic in some teams. “People from the software team are coming to the verification engineers, asking them questions instead of the designers,” says Vtool’s Stojanovic. “This is because verification engineers are more familiar with how the DUT can be used, how to configure it, and the procedures necessary to enable certain functions. They are being considered as functional architects, knowing the whole system.”

Inflection point
Some people within the industry see that we are reaching an inflection point. “Running increasing amounts of verification to be more confident is the opposite of what we need,” says McKellar. “Instead we should be asking, ‘What should we stop running? How do we improve the efficiency of generating that data point?’ We need to manage this from a cost point of view, and manage it from an environment point of view. We need to get a good balance on that.”

It needs structural changes. “Verification doesn’t stand alone anymore,” says Hand. “Verification is part of this system design process. We need to link verification into the system’s design, and we are doing that through requirements management. It does not necessarily mean that system design people need to understand verification, or that verification people need to understand systems design, but they need to have a way to exchange data.”

And it needs new technologies. “The techniques used for traditional functional verification do not scale to the SoC level,” says Cunningham. “Verification has to change to be at different levels of abstraction. You can’t just do signal-level, constraint random UVM testbenches, you need to run real software workloads on the chip and try to verify that it is okay. Unit level methods are not possible. It’s not tractable.”

Raising the abstraction level
In the past, the industry has looked to raising the level of abstraction to solve problems of complexity, but that path has not always been the one adopted. For design, encapsulation through integrated IP blocks won out over high-level synthesis for many functions.

Within verification, UVM has added increasing notions of abstraction. “We have UVM frameworks, which allow us to do more abstraction,” says Hand. “More recently, Portable Stimulus gives us another level of abstraction. It’s always moving to a higher level of abstraction. Change appears to happen really slowly, but in hindsight, it looks like a revolution. We are in one of those right now, we are going to move to higher levels of abstraction. We’ve done it with UVM, we will do it with Portable Stimulus.”

Many are still not familiar with Portable Stimulus. “It is trying to create a new language for coding tests that is very targeted at the system level,” says Cunningham. “It targets the middle zone between running pure software and writing a traditional SystemVerilog/UVM testbench. It is transaction-level randomization. You describe scenarios, which is essentially a graph or state machine of behaviors. These behaviors can be as abstract as you like. An arc in the graph could be sending a packet or transacting a piece of information. The language allows you to describe these abstract scenarios as a collection of behaviors. Then you compile them into software programs that run on the processors in the SoC and execute different parts of the scenarios. Or it can compile into signal-level stimulus that you will inject onto a bus or through a peripheral. It can be a layer that wraps around and feeds into a UVM or signal-level testbench.”
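
To make that idea concrete, here is a toy sketch of a scenario graph and two of the back-ends it might be retargeted to. This is plain Python rather than Portable Stimulus syntax, and every behavior name and back-end shown is invented for illustration:

```python
import random

# Toy illustration: a scenario is a graph of abstract behaviors, and one
# randomized walk through it can be "compiled" into different targets.
# NOT Portable Stimulus syntax; names and back-ends are invented.
SCENARIO = {
    "reset":        ["config_dma", "config_mac"],
    "config_dma":   ["send_packet"],
    "config_mac":   ["send_packet"],
    "send_packet":  ["send_packet", "check_status"],
    "check_status": [],
}

def random_walk(graph, start="reset", max_steps=6, seed=None):
    """Pick one legal path through the scenario graph (transaction-level randomization)."""
    rng = random.Random(seed)
    path, node = [start], start
    while graph[node] and len(path) < max_steps:
        node = rng.choice(graph[node])
        path.append(node)
    return path

def to_c_test(path):
    """Render the abstract scenario as a bare-metal C test to run on the SoC's processor."""
    body = "\n".join(f"    do_{step}();" for step in path)
    return "int main(void) {\n" + body + "\n    return 0;\n}"

def to_bus_stimulus(path):
    """Render the same scenario as bus-level transactions for a UVM/signal-level testbench."""
    return [f"bus_txn('{step}')" for step in path]

if __name__ == "__main__":
    scenario = random_walk(SCENARIO, seed=1)
    print(to_c_test(scenario))
    print(to_bus_stimulus(scenario))
```

The point of the sketch is that a single abstract scenario can be randomized once and then rendered either as software running on the SoC’s own processors or as signal-level stimulus fed into an existing testbench.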

Thinking about this as a system-level model can enable even more structural changes. “It could be the Holy Grail,” says Stojanovic. “If we can somehow use verification code to create the models, you are implementing something similar to the design. If the verification environment is ready before RTL, you can start using these simulation models to enable software testing.”

That model will enable future changes. “It can be a model of what you’re trying to achieve, or a model of the hardware at some high level, or a combination of both,” says McKellar. “Customers are increasingly asking for accurate, fast models at the start of the project. They need them for making hardware architectural decisions, and software architectural decisions. If we can generate models early, then we can hand them to our customers’ customers, who are writing applications and trying to work out which of these compute cores to use. Should I run this on a graphics core? Should I run this on a neural network accelerator? Should I run it on a CPU or offload it to a DSP? If they get that wrong at the start of the process, they may have software that is never going to be performant, because they spent all of their time writing based upon the wrong assumption. All because they didn’t have the right models.”

But we need to keep an open mind. “With care, functional verification can be extended to provide confidence about other areas such as performance and security,” says Daniel Schostak, architect and fellow, central engineering group at Arm. “However, this is not always the best approach, because using a different kind of abstraction may provide better results more quickly. For example, security may be more easily verified in terms of information flow rather than trying to model the details of the implementation. Furthermore, verifying features such as functional safety or power requires additional functionality to that used for functional verification. Consequently, verifying IP requires multiple flows and tools to ensure all areas are addressed properly.”

McKellar is thinking along similar lines. “For functional safety, we are trying to change how you analyze the data, by thinking outside the box and asking how you generate the data.”

Conclusion
Verification has been under pressure for a long time. Miraculously, chip failure rates have not increased significantly over time. This means the tools and flows that exist today, while not perfect, are doing an adequate job.

However, we should never stop looking for better solutions. Continuing in the same direction that we have been going is raising new environmental concerns, and additional demands being placed on the verification team are beginning to muddle the responsibilities between teams. The industry is increasingly looking at ways we could be doing it better, and it is not yet clear what the future looks like.
