As integrated circuits have become progressively more complex, constrained-random stimulus has become ubiquitous as a means of stimulating a design's functionality and ensuring it fully meets expectations. In theory, random stimulus allows all possible combinations to be exercised given enough time, but in practice, with highly complex designs, a purely random approach will have difficulty exercising all possible combinations in a timely fashion. As a result, it is often necessary to steer the Design Verification (DV) environment to generate hard-to-hit combinations. The resulting constrained-random approach is powerful but often relies on extensive human expertise to guide the DV environment in order to fully exercise the design. As designs become more complex, this guidance becomes progressively more challenging and time-consuming, often resulting in design schedules in which the verification time needed to hit all design coverage points is the dominant schedule limitation. This paper describes an approach that leverages existing constrained-random DV environment tools but further enhances them using supervised learning and reinforcement learning techniques. The approach provides better-than-random results in a highly automated fashion, ensuring that DV objectives of full design coverage can be achieved on an accelerated timescale and with fewer resources. Two hardware verification examples are presented: one of a Cache Controller design and one using the open-source RISCV-Ariane design and Google's RISCV Random Instruction Generator. We demonstrate that a machine-learning-based approach can perform significantly better than a random or constrained-random approach on functional coverage and on reaching complex hard-to-hit states.
Deep Dive into Optimizing Design Verification using Machine Learning: Doing better than Random.
The first part of this paper presents an overview of the challenges in verifying complex ICs such as a microprocessor, GPU, or SoC. The second part demonstrates real-world examples of using machine learning to improve functional coverage, thereby achieving results better than constrained-random techniques.
Two hardware verification examples are presented: one of a Cache Controller design and one using the open-source RISCV-Ariane [5] design and Google's RISCV Random Instruction Generator [6] [7]. We demonstrate that a machine-learning-based approach can perform significantly better than a random or constrained-random approach on functional coverage and on reaching complex hard-to-hit states.
Software and hardware systems are playing an increasingly greater role in our daily life. Verifying these complex systems and ensuring their safe behaviour is therefore becoming significantly more important. According to a recently published DARPA report [4], the cost of verifying an IC is approaching more than half of the total cost of design. The cost of verification in the software segment is also increasing exponentially.
We conclude the paper with future possibilities for applying our technology to other domains, such as software verification of mobile apps.
Design verification (DV) of integrated circuits typically involves generating stimulus on input signals and then evaluating the resulting output signals against expected values. This allows design simulations to be conducted to determine whether the design is operating correctly. Simulation failures indicate that one or more bugs are present in the design; the design must then be modified to fix the bug, and the simulation(s) re-run to verify the fix and uncover further bugs.
Writing specific tests by hand is no longer sufficient to verify all possible functionality of today's complex chip designs. In order to fully exercise all possible combinations, a constrained-random approach is typically employed, whereby input stimulus combinations and sequences are generated randomly but with an extensive set of environment controls that allow the simulation to be steered toward a rich and diverse set of input stimulus. However, passing some fixed set of simulations is insufficient to demonstrate that a design is free from bugs. It is also necessary to determine whether the set of simulations run on the design is sufficient to fully cover the entirety of the functionality required to satisfy the design objectives. The extent to which a set of simulations covers desired design functionality is termed coverage, and there are a number of different methods by which coverage can be measured and tracked.
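The idea of constrained-random stimulus can be illustrated with a minimal sketch. The field names, weights, and the alignment constraint below are illustrative assumptions, not details from any real DV environment: fields are drawn at random, but weights and constraints steer the distribution toward interesting cases.

```python
import random

def gen_transaction(rng):
    """Generate one constrained-random transaction (illustrative sketch)."""
    # Weighted choice: bias toward reads/writes, make snoops rarer.
    op = rng.choices(["read", "write", "snoop"], weights=[45, 45, 10])[0]
    # Constraint (assumed for this sketch): snoops target only even,
    # cache-line-aligned addresses.
    if op == "snoop":
        addr = rng.randrange(0, 256, 2)
    else:
        addr = rng.randrange(0, 256)
    return {"op": op, "addr": addr}

rng = random.Random(0)
txns = [gen_transaction(rng) for _ in range(1000)]
# Every generated transaction satisfies the constraint by construction.
assert all(t["addr"] % 2 == 0 for t in txns if t["op"] == "snoop")
```

In a real DV flow the same role is played by the testbench's constraint solver; the point of the sketch is only that randomness is shaped, not unrestricted.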
A commonly used simple method for tracking coverage involves determining whether each line of code in a design evaluates both to a '1' and a '0' at least once during the set of simulations. However, this method can be insufficient to guarantee full coverage, as there may be conditions the design must handle for which no single line of code exactly encapsulates the condition. Common examples are cases where two or more conditions coincide in the same cycle: each condition may have a corresponding line of code, but there may be no line of code that fully expresses all conditions occurring simultaneously. It may also be necessary to generate certain sequences over time which, again, may have no corresponding design code that captures the objective.
For example, a cache memory in a microprocessor often has a limit on the number of read/write operations it can handle simultaneously, and the number of potential read/write requests will often exceed this limit. The cache may be able to process two accesses in a single cycle, whereas read/write requests may arrive from one or more load instructions, a store buffer, a victim buffer, and a snoop (cache coherence) queue. In this case there is often arbitration logic to determine which accesses proceed to the cache and which are stalled to be handled in subsequent clock cycles. To verify the design, all combinations of simultaneous cache access requests must be tested, including the case where all requesters are active in the same clock cycle.
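The difficulty of hitting such combinations with purely random stimulus can be sketched numerically. The requester names and per-cycle activation probabilities below are illustrative assumptions, not taken from the paper's design; the sketch tracks which combinations of simultaneously active requesters are observed.

```python
import random

# Illustrative requesters and per-cycle activation probabilities (assumed).
requesters = ["load", "store_buf", "victim_buf", "snoop"]
probs = {"load": 0.5, "store_buf": 0.3, "victim_buf": 0.05, "snoop": 0.05}

def run(cycles, rng):
    """Simulate random stimulus; return the set of requester combinations seen."""
    hit = set()
    for _ in range(cycles):
        active = frozenset(r for r in requesters if rng.random() < probs[r])
        hit.add(active)  # each distinct combination is one coverage bin
    return hit

hit = run(1000, random.Random(1))
total = 2 ** len(requesters)  # 16 possible requester combinations
print(f"{len(hit)}/{total} combinations covered")
# Under these assumed probabilities the all-active combination occurs with
# probability only 0.5 * 0.3 * 0.05 * 0.05 = 0.000375 per cycle, so purely
# random stimulus needs many cycles, or deliberate steering, to hit it reliably.
```

This is exactly the kind of rare cross-combination that motivates steering the DV environment rather than relying on unbiased randomness.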
In order to handle these cases, it has become increasingly common to encapsulate a functional condition involving a number of existing signals in the design in a functional coverage statement. Each functional coverage statement typically represents a condition which the design must be able to handle functionally but for which no signal is already present in the design. Many such functional coverage statements may be written in order to capture cases which the design must be able to handle. The set of functional simulations run on the design must then ensure that all functional coverage statements are hit at least once.
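A functional coverage statement can be thought of as a named predicate over design signals, sampled every cycle. The following minimal sketch models that idea in plain Python; the class, signal names, and the sampled condition are all illustrative assumptions, not a real coverage API.

```python
class CoverPoint:
    """A named predicate over sampled design state (illustrative sketch)."""

    def __init__(self, name, predicate):
        self.name = name
        self.predicate = predicate
        self.hits = 0

    def sample(self, state):
        # Count the cycle as a hit if the condition holds.
        if self.predicate(state):
            self.hits += 1

# A condition with no single corresponding line of RTL: two independent
# events landing in the same cycle (signal names are assumed).
cp = CoverPoint("snoop_during_fill",
                lambda s: s["snoop_valid"] and s["fill_valid"])

for state in [{"snoop_valid": True, "fill_valid": False},
              {"snoop_valid": True, "fill_valid": True}]:
    cp.sample(state)

assert cp.hits == 1  # only the second cycle satisfies the predicate
```

In practice such statements are written in a hardware verification language (e.g. as SystemVerilog covergroups) and the simulator accumulates hit counts across the whole regression; the sketch shows only the underlying sampling idea.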
…(Full text truncated)…