Avi Ziv
IBM
Publications
Featured research published by Avi Ziv.
design automation conference | 2003
Shai Fine; Avi Ziv
Functional verification is widely acknowledged as the bottleneck in the hardware design cycle. This paper addresses one of the main challenges of simulation-based verification (or dynamic verification) by providing a new approach for coverage directed test generation (CDG). The approach is based on Bayesian networks and computer learning techniques. It provides an efficient way to close the feedback loop from the coverage domain back to a generator that produces new stimuli for the tested design. In this paper, we show how to apply Bayesian networks to the CDG problem. The application of Bayesian networks to the CDG framework was tested in several experiments, which exhibited encouraging results and indicate that the suggested approach can be used to achieve CDG goals.
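For intuition, here is a minimal sketch of the coverage-feedback loop described above. It replaces the paper's Bayesian network with simple smoothed per-directive hit-rate estimates, and all names and the directive/task structure are illustrative, not taken from the paper:

```python
from collections import defaultdict

# Toy coverage-directed generation (CDG) loop. A real implementation
# would learn a Bayesian network over directive and coverage variables;
# here that model is replaced by smoothed per-directive hit rates.

class CoverageDirectedGenerator:
    def __init__(self, directives, coverage_tasks):
        self.directives = list(directives)
        self.tasks = set(coverage_tasks)
        self.covered = set()
        self.hits = {d: defaultdict(int) for d in self.directives}
        self.runs = {d: 0 for d in self.directives}

    def p_hit(self, d, t):
        # Laplace-smoothed estimate of P(task t is hit | directive d).
        return (self.hits[d][t] + 1) / (self.runs[d] + 2)

    def pick_directive(self):
        # Greedily pick the directive expected to hit the most
        # not-yet-covered tasks (closing the coverage feedback loop).
        uncovered = self.tasks - self.covered
        return max(self.directives,
                   key=lambda d: sum(self.p_hit(d, t) for t in uncovered))

    def report(self, directive, tasks_hit):
        # Feed coverage results from simulation back into the model.
        self.runs[directive] += 1
        for t in tasks_hit:
            self.hits[directive][t] += 1
            self.covered.add(t)
```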
design automation conference | 1998
Raanan Grinwald; Eran Harel; Michael Orgad; Shmuel Ur; Avi Ziv
This paper describes a new coverage methodology developed at IBM's Haifa Research Lab. The main idea behind the methodology is the separation of the coverage model definition from the coverage analysis tool. This enables users to define the coverage models that best fit the points of significance in their design, while still enjoying the benefits of a coverage tool. To support this methodology, we developed a new coverage measurement tool called Comet. The tool is currently used in many domains, such as system verification and micro-architecture verification, and in many types of designs, ranging from full systems to microprocessors and ASICs.
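The separation the paper advocates can be pictured as a declarative model definition fed to a generic measurement engine. The sketch below is hypothetical (Comet's actual model language is not shown in the abstract); the attributes, restriction, and field names are invented:

```python
from itertools import product

# Hypothetical split between a user-defined coverage model and a generic
# measurement engine; the model language and fields are invented.

# Model definition: attributes, legal values, and restrictions.
model = {
    "opcode": ["add", "sub", "load", "store"],
    "mode":   ["user", "supervisor"],
    "result": ["ok", "exception"],
}

def legal(task):
    # Example restriction supplied by the user with the model.
    return not (task["opcode"] == "store" and task["result"] == "exception")

# Generic engine: enumerate the legal task space and count hits in traces.
space = [dict(zip(model, values)) for values in product(*model.values())]
space = [t for t in space if legal(t)]
hit_counts = {tuple(t.values()): 0 for t in space}

def record(event):
    key = tuple(event[a] for a in model)
    if key in hit_counts:
        hit_counts[key] += 1

def coverage():
    return sum(1 for c in hit_counts.values() if c > 0) / len(hit_counts)
```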
design automation conference | 2002
Oded Lachish; Eitan Marcus; Shmuel Ur; Avi Ziv
One of the main goals of coverage tools is to provide the user with an informative presentation of coverage information. In particular, information about large, cohesive sets of uncovered tasks with common properties is very useful. This paper describes methods for discovering and reporting large uncovered spaces (holes) in cross-product functional coverage models. Hole analysis is a presentation method for coverage data that is both succinct and informative. Using case studies, we show how hole analysis was used to detect large uncovered spaces and improve the quality of verification.
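As a rough illustration of hole analysis, the sketch below reports only one-dimensional projected holes: attribute values for which every containing task is uncovered. The paper's methods find more general aggregated holes; the model and coverage data here are invented:

```python
from itertools import product

# Invented cross-product model and coverage data.
attrs = {"opcode": ["add", "sub", "mul"],
         "src": ["reg", "mem"],
         "dst": ["reg", "mem"]}
covered = {("add", "reg", "reg"), ("sub", "reg", "mem"), ("add", "mem", "reg")}

names = list(attrs)
space = set(product(*attrs.values()))
uncovered = space - covered

def projected_holes():
    # Report attribute values whose entire subspace is uncovered; each
    # such value summarizes many individual uncovered tasks at once.
    holes = []
    for i, name in enumerate(names):
        for value in attrs[name]:
            subspace = {t for t in space if t[i] == value}
            if subspace <= uncovered:
                holes.append((name, value, len(subspace)))
    return holes

print(projected_holes())   # -> [('opcode', 'mul', 4)]
```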
design, automation, and test in europe | 2011
Allon Adir; Shady Copty; Shimon Landa; Amir Nahir; Gil Shurek; Avi Ziv; Charles Meissner; John Schumann
The growing importance of post-silicon validation in ensuring functional correctness of high-end designs increases the need for synergy between the pre-silicon verification and post-silicon validation. We propose a unified functional verification methodology for the pre- and post-silicon domains. This methodology is based on a common verification plan and similar languages for test-templates and coverage models. Implementation of the methodology requires a user-directable stimuli generation tool for the post-silicon domain. We analyze the requirements for such a tool and the differences between it and its pre-silicon counterpart. Based on these requirements, we implemented a tool called Threadmill and used it in the verification of the IBM POWER7 processor chip with encouraging results.
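A shared test-template front end is one way to picture the common language the methodology calls for. The format and fields below are invented for illustration; the abstract does not specify the actual template language:

```python
# An invented test-template format that a pre-silicon generator and an
# on-platform exerciser could share; only the generation back end differs.
template = {
    "name": "load_store_collisions",
    "threads": 4,
    "instructions": 200,
    "biases": {"load": 0.35, "store": 0.35, "branch": 0.10, "alu": 0.20},
    # e.g. force addresses into one page to provoke collisions
    "constraints": ["addresses drawn from a shared 4KB page"],
}

def validate(template):
    # A shared front end could validate templates for both tool flavors.
    assert abs(sum(template["biases"].values()) - 1.0) < 1e-9
    assert template["threads"] >= 1 and template["instructions"] > 0
```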
IEEE Transactions on Computers | 1997
Avi Ziv; Jehoshua Bruck
This thesis deals with fault-tolerant schemes that include checkpointing to shorten recovery time after failures, and task duplication for fault detection. Until now, there was no known analytical method to analyze these schemes, and simulation was used to evaluate their performance. The thesis introduces a new analysis technique for checkpointing schemes with task duplication. The technique gives an easy-to-use method to analyze and study the performance of such schemes. Several applications of the analysis tool are given, such as finding the optimal interval between checkpoints and comparing different aspects of the performance of existing schemes. One of the conclusions we reached from studying the performance of existing schemes is that the system on which a scheme is implemented can have a major effect on its performance.

The thesis then describes new checkpointing schemes that consist of two types of checkpoints: compare checkpoints and store checkpoints. The two types of checkpoints can be used to tune the schemes to the system they run on and enable efficient use of system resources. Analysis results show that using two types of checkpoints can lead to a significant improvement in the performance of checkpointing schemes. Experimental results, obtained on the Intel Paragon parallel computer and a cluster of workstations, confirm that tuning checkpointing schemes to the specific systems they run on can significantly improve their performance.

Another way to improve the performance of checkpointing schemes is to use changes in the checkpointing cost to improve the checkpoint placement strategy. A new on-line algorithm that uses past and present knowledge when deciding whether or not to place a checkpoint is presented. Analysis of the new scheme shows that the total execution-time overhead when the proposed algorithm is used is significantly smaller than the overhead when fixed intervals are used. Although the proposed on-line algorithm uses only knowledge about the past and present, its behavior is close to that of the off-line optimal algorithm, which uses complete knowledge of the checkpointing cost at all possible locations.
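For a feel of the checkpoint-interval trade-off the thesis analyzes exactly, here is the classic first-order approximation (often attributed to Young), which balances checkpoint overhead against expected recomputation after a failure. This is a generic baseline, not the thesis's analysis of duplication-based schemes, and the parameters are arbitrary:

```python
import math

# Classic first-order trade-off: checkpoint overhead C/T per unit of work
# versus expected rework T/2 after a failure, giving Young's rule
# T_opt = sqrt(2 * C * MTBF).

def young_interval(ckpt_cost, mtbf):
    return math.sqrt(2.0 * ckpt_cost * mtbf)

def overhead(interval, ckpt_cost, mtbf):
    # Fraction of time spent checkpointing plus expected recomputation.
    return ckpt_cost / interval + interval / (2.0 * mtbf)

T = young_interval(ckpt_cost=5.0, mtbf=3600.0)          # seconds
print(f"interval ~ {T:.0f}s, overhead ~ {overhead(T, 5.0, 3600.0):.1%}")
# -> interval ~ 190s, overhead ~ 5.3%
```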
design automation conference | 2004
Shai Fine; Shmuel Ur; Avi Ziv
Random test generators are often used to create regression suites on the fly. Regression suites are commonly generated by choosing several specifications and generating a number of tests from each one, without reasoning about which specifications should be used and how many tests should be generated from each specification. This paper describes a technique for building high-quality random regression suites. The proposed technique uses information about the probability of each test specification covering each coverage task. This probability is used, in turn, to determine which test specifications should be included in the regression suite and how many tests should be generated from each specification. Experimental results show that this practical technique can be used to improve the quality, and reduce the cost, of regression suites. Moreover, it enables better-informed decisions regarding the size and distribution of the regression suites and the risk involved.
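A greedy construction conveys the flavor of the technique: given estimated probabilities that a test from each specification covers each task, repeatedly pick the specification with the highest expected number of newly covered tasks. The probabilities and the selection rule below are illustrative, not the paper's exact algorithm:

```python
# Invented per-specification coverage probabilities:
# p[s][t] = probability that one test generated from spec s covers task t.
p = {
    "spec_a": {"t1": 0.9, "t2": 0.1, "t3": 0.0},
    "spec_b": {"t1": 0.2, "t2": 0.5, "t3": 0.4},
}
tasks = ["t1", "t2", "t3"]

def expected_gain(miss, spec):
    # Expected number of still-uncovered tasks one more test would cover.
    return sum(miss[t] * p[spec][t] for t in tasks)

def build_suite(budget):
    miss = {t: 1.0 for t in tasks}     # P(task still uncovered)
    suite = []
    for _ in range(budget):
        best = max(p, key=lambda s: expected_gain(miss, s))
        suite.append(best)
        for t in tasks:
            miss[t] *= 1.0 - p[best][t]
    return suite

print(build_suite(5))   # how many tests to draw from each specification
```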
IEEE Transactions on Computers | 1998
Avi Ziv; Jehoshua Bruck
The paper suggests a technique for analyzing the performance of checkpointing schemes with task duplication. We show how this technique can be used to derive the average execution time of a task and other important parameters related to the performance of checkpointing schemes. The analysis results are used to study and compare the performance of four existing checkpointing schemes. Our comparison shows that, in general, the number of processors used, not the complexity of the scheme, has the greatest effect on a scheme's performance.
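Before such analytical techniques, schemes like these were evaluated by simulation; the sketch below shows what a Monte Carlo baseline for the average execution time might look like. The failure model and parameters are illustrative only, not the paper's:

```python
import math
import random

def simulate_run(work, interval, ckpt_cost, fail_rate, rng):
    done, elapsed = 0.0, 0.0
    while done < work:
        seg = min(interval, work - done)
        elapsed += seg + ckpt_cost
        # Both duplicated executions must be fault-free over the segment;
        # with independent exponential faults: P = exp(-2 * lambda * seg).
        if rng.random() < math.exp(-2.0 * fail_rate * seg):
            done += seg        # results matched, checkpoint committed
        # otherwise: roll back to the last checkpoint and redo the segment
    return elapsed

def average_time(n=10_000, **params):
    rng = random.Random(1)
    return sum(simulate_run(rng=rng, **params) for _ in range(n)) / n

print(average_time(work=100.0, interval=10.0, ckpt_cost=0.5, fail_rate=0.001))
```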
design automation conference | 2011
Allon Adir; Maxim Golubev; Shimon Landa; Amir Nahir; Gil Shurek; Vitali Sokhin; Avi Ziv
Post-silicon validation poses unique challenges that bring-up tools must face, such as the lack of observability into the design, the typical instability of silicon bring-up platforms, and the absence of supporting software (such as an OS or debuggers). These challenges, and the need to achieve optimal utilization of the expensive but very fast silicon platforms, lead to unique design considerations, like the need to keep the tool simple and to perform most of its operation on the platform without interaction with the environment. In this paper, we describe a variety of novel techniques optimized for the unique characteristics of the silicon platform. These techniques are implemented in Threadmill, a bare-metal exerciser targeting multi-threaded processors. Threadmill was used in the verification of the POWER7 processor with encouraging results.
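The control flow of such an exerciser can be sketched schematically: generate a test on the platform from a seed, execute it, and check it without a reference model, for example by re-executing and comparing end states (multi-pass consistency checking). Everything below is an invented illustration, not Threadmill's implementation:

```python
import random

OPS = ["add", "xor", "load", "store", "branch"]   # invented mini "ISA"

def generate_test(seed, length=32):
    rng = random.Random(seed)               # cheap on-platform generation
    return [rng.choice(OPS) for _ in range(length)]

def execute(test):
    # Stand-in for running the generated instructions on silicon.
    state = 0
    for op in test:
        state = (state * 31 + OPS.index(op)) & 0xFFFFFFFF
    return state

def exercise(base_seed, iterations):
    for i in range(iterations):
        test = generate_test(base_seed + i)
        # Multi-pass consistency check: the same test should reach the
        # same end state every time it is run.
        if execute(test) != execute(test):
            print(f"mismatch in iteration {i}")   # minimal failure report
```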
design automation conference | 2010
Amir Nahir; Avi Ziv; Miron Abramovici; Albert Camilleri; Rajesh Galivanche; Bob Bentley; Harry Foster; Alan J. Hu; Valeria Bertacco; Shakti Kapoor
Post-silicon validation is a necessary step in a design's verification process. Pre-silicon techniques such as simulation and emulation are limited in scope and volume compared to what can be achieved on the silicon itself. Some parts of the verification effort, such as full-system functional verification, cannot be practically covered with current pre-silicon technologies. This panel brings together experts from industry, academia, and EDA to review the differences and similarities between pre- and post-silicon verification, discuss how the fundamental aspects of verification are affected by these differences, and explore how the gaps between the two worlds can be bridged.
high level design validation and test | 2005
Shai Fine; Ari Freund; Itai Jaeger; Yishay Mansour; Yehuda Naveh; Avi Ziv
The initial state of a design under verification has a major impact on the ability of stimuli generators to successfully generate the requested stimuli. For complexity reasons, most stimuli generators use sequential solutions without planning ahead. Therefore, in many cases, they fail to produce consistent stimuli due to an inadequate selection of the initial state. We propose a new method, based on machine learning techniques, to improve generation success by learning the relationship between the initial state vector and generation success. We applied the proposed method in two different settings, with the objective of improving generation success and coverage in processor-level and system-level generation. In both settings, the proposed method significantly reduced generation failures and enabled faster coverage.
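A minimal version of the idea: train a classifier on (initial state, success) observations and use it to rank candidate initial states. The data below is synthetic and logistic regression is one plausible model choice, not necessarily the paper's:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in for the learning step: observations pair an
# initial-state bit vector with whether generation succeeded.
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(500, 16))                        # initial states
y = ((X[:, 0] == 1) & (X[:, 3] == 0) | (X[:, 7] == 1)).astype(int)  # invented rule

clf = LogisticRegression(max_iter=1000).fit(X, y)

def choose_initial_state(candidates):
    # Bias state selection toward predicted generation success.
    success_prob = clf.predict_proba(candidates)[:, 1]
    return candidates[int(np.argmax(success_prob))]

print(choose_initial_state(rng.integers(0, 2, size=(20, 16))))
```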