Charles Meissner
IBM
Publications
Featured research published by Charles Meissner.
Design, Automation, and Test in Europe | 2011
Allon Adir; Shady Copty; Shimon Landa; Amir Nahir; Gil Shurek; Avi Ziv; Charles Meissner; John Schumann
The growing importance of post-silicon validation in ensuring functional correctness of high-end designs increases the need for synergy between the pre-silicon verification and post-silicon validation. We propose a unified functional verification methodology for the pre- and post-silicon domains. This methodology is based on a common verification plan and similar languages for test-templates and coverage models. Implementation of the methodology requires a user-directable stimuli generation tool for the post-silicon domain. We analyze the requirements for such a tool and the differences between it and its pre-silicon counterpart. Based on these requirements, we implemented a tool called Threadmill and used it in the verification of the IBM POWER7 processor chip with encouraging results.
Haifa Verification Conference | 2010
Allon Adir; Amir Nahir; Avi Ziv; Charles Meissner; John Schumann
Obtaining coverage information in post-silicon validation is a difficult task. Adding coverage monitors to the silicon is costly in terms of timing, power, and area, and thus even if feasible, is limited to a small number of coverage monitors. We propose a new method for reaching coverage closure in post-silicon validation. The method is based on executing the post-silicon exercisers on a pre-silicon acceleration platform, collecting coverage information from these runs, and harvesting important test templates based on their coverage. This method was used in the verification of IBM's POWER7 processor. It contributed to the overall high-quality verification of the processor, and specifically to the post-silicon validation and bring-up.
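The harvesting step described above can be pictured as a greedy selection over per-template coverage data. The sketch below is only an illustration under that assumption: the paper does not specify the selection algorithm, and the template and event names are hypothetical.

```python
# Greedy harvesting of test templates by coverage (illustrative sketch).
# Each template maps to the set of coverage events its accelerator runs hit;
# template names and event names here are hypothetical.

def harvest_templates(coverage):
    """Pick templates until every observed coverage event is covered."""
    remaining = set().union(*coverage.values())
    harvested = []
    while remaining:
        # Choose the template hitting the most still-uncovered events.
        best = max(coverage, key=lambda t: len(coverage[t] & remaining))
        gained = coverage[best] & remaining
        if not gained:
            break  # no template adds new coverage
        harvested.append(best)
        remaining -= gained
    return harvested

coverage = {
    "ldst_stress": {"l1_miss", "store_fwd", "tlb_reload"},
    "branch_mix":  {"br_mispredict", "l1_miss"},
    "smt_contend": {"store_fwd", "thread_switch"},
}
print(harvest_templates(coverage))  # every observed event is covered by the result
```

The greedy choice favors templates that add the most new coverage, so a small harvested set can stand in for a much larger pool of accelerator runs.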
Design Automation Conference | 2011
Allon Adir; Amir Nahir; Gil Shurek; Avi Ziv; Charles Meissner; John Schumann
The growing importance of post-silicon validation in ensuring functional correctness of high-end designs has increased the need for synergy between pre-silicon verification and post-silicon validation. This synergy starts with a common verification plan. It continues with common verification goals and shared tools and techniques. This paper describes our experience in improving this synergy in the pre- and post-silicon verification of IBM's POWER7 processor chip, and in leveraging pre-silicon methodologies and techniques in the post-silicon validation of the chip.
IBM Journal of Research and Development | 2011
Klaus-Dieter Schubert; Wolfgang Roesner; John M. Ludden; Jonathan R. Jackson; Jacob Buchert; Viresh Paruthi; Michael L. Behm; Avi Ziv; John Schumann; Charles Meissner; Johannes Koesters; James P. Hsu; Bishop Brock
This paper describes the methods and techniques used to verify the POWER7® microprocessor and systems. A simple linear extension of the methodology used for POWER4®, POWER5®, and POWER6® was not possible given the aggressive design point and schedule of the POWER7 project. In addition to the sheer complexity of verifying an eight-core processor chip with scalability to 32 sockets, central challenges came from the four-way simultaneous multithreading processor core, a modular implementation structure with heavy use of asynchronous interfaces, aggressive memory subsystem design with numerous new reliability, availability, and serviceability (RAS) advances, and new power management and RAS mechanisms across the chip and the system. Key aspects of the successful verification project include a systematic application of IBM's random-constrained unit verification, unprecedented use of formal verification, thread-scaling support in core verification, and a consistent use of functional coverage across all verification disciplines. Functional coverage instrumentation, combined with the use of the newest IBM hardware simulation accelerator platform, enabled coverage-driven development of postsilicon exercisers in preparation for bring-up, a foundation for the desired systematic linkage of presilicon and postsilicon verification. RAS and power management verification also required new approaches, extending these disciplines to span all the way from the unit level to end-to-end scenarios using the hardware accelerators.
Design Automation Conference | 2014
Allon Adir; Dave Goodman; Daniel Hershcovich; Oz Hershkovitz; Bryan G. Hickerson; Karen Holtz; Wisam Kadry; Anatoly Koyfman; John M. Ludden; Charles Meissner; Amir Nahir; Randall R. Pratt; Mike Schiffli; Brett Adam St. Onge; Brian W. Thompto; Elena Tsanko; Avi Ziv
Transactional memory is a promising mechanism for synchronizing concurrent programs that eliminates locks at the expense of hardware complexity. Transactional memory is a hard feature to verify. First, transactions comprise several instructions that must be observed as a single global atomic operation. In addition, there are many reasons a transaction can fail. This results in a high level of non-determinism, which must be tamed by the verification methodology. This paper describes the innovation applied to tools and methodology in pre-silicon simulation, acceleration, and post-silicon validation in order to verify transactional memory in the IBM POWER8 processor core.
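The atomicity requirement above, that a transaction's instructions be observed as a single global atomic operation, can be illustrated with a toy checker. The trace format below is invented for illustration and is far simpler than what a real POWER8 verification environment would record.

```python
# Toy atomicity check over a serialized trace of memory operations.
# Each entry is (txn_id, op); a txn_id of None marks a non-transactional op.
# The trace format is invented for illustration.

def transactions_atomic(trace):
    """True iff each transaction's ops form one contiguous block."""
    closed = set()
    current = None
    for txn, _op in trace:
        if txn != current and current is not None:
            closed.add(current)       # previous transaction's block has ended
        if txn is not None and txn != current and txn in closed:
            return False              # transaction resumed after interleaving
        current = txn
    return True

ok_trace  = [(1, "st A"), (1, "st B"), (None, "ld C"), (2, "st A")]
bad_trace = [(1, "st A"), (None, "st B"), (1, "st C")]  # txn 1 is split
print(transactions_atomic(ok_trace), transactions_atomic(bad_trace))  # True False
```

A checker of this shape only examines one serialized order; the hard part in practice is that failed and retried transactions make the observed order itself non-deterministic.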
IBM Journal of Research and Development | 2015
Klaus-Dieter Schubert; John M. Ludden; S. Ayub; J. Behrend; Bishop Brock; Fady Copty; S. M. German; Oz Hershkovitz; Holger Horbach; Jonathan R. Jackson; Klaus Keuerleber; Johannes Koesters; Larry Scott Leitner; G. B. Meil; Charles Meissner; Ronny Morad; Amir Nahir; Viresh Paruthi; Richard D. Peterson; Randall R. Pratt; Michal Rimon; John Schumann
This paper describes methods and techniques used to verify the POWER8™ microprocessor. The base concepts for the functional verification are those already used in POWER7® processor verification. However, the POWER8 design point provided multiple new challenges that required innovative solutions. With approximately three times the number of transistors available compared to the POWER7 processor chip, functionality was added by putting additional enhanced cores on-chip and by developing new features that intrinsically require more software interaction. The examples given in this paper demonstrate how new tools and the continuous improvement of existing methods addressed these verification challenges.
VLSI Test Symposium | 2013
Nagib Hakim; Charles Meissner
In the processor functional verification field, pre-silicon verification and post-silicon validation have traditionally been divided into separate disciplines. With the growing use of high-speed hardware emulation, there is an opportunity to join a significant portion of each into a continuous workflow [2], [1]. Three elements of functional verification rely on random code generation (RCG) as a primary test stimulus: processor core-level simulation, hardware emulation, and early hardware validation. Each of these environments becomes the primary focus of the functional verification effort at a different phase of the project. With random-code-based test generation as the central point of commonality between these environments, the advantages of a unified workflow include people versatility, test tooling efficiency, and continuity of test technology between design phases. Related common features include some of the debugging techniques, e.g., software-trace-based debugging and instruction flow analysis, and some of the instrumentation, for example counters that are built into the final hardware. Three key use cases show the value of continuity in a pre-/post-silicon workflow. First, the functional test coverage of a common test can be evaluated in a pre-silicon environment, where more observability for functional test coverage is available by way of simulation/emulation-only tracing capabilities and simulation/emulation model instrumentation not built into actual hardware [3]. Second, the last test program run on the emulator the day before early hardware arrives becomes the first validation test program on the new hardware. This allows processor bring-up to proceed with protection against simple logic bugs and test code issues, leaving only the more subtle logic bugs, circuit bugs, and manufacturing defects as concerns.
The last use case is taking an early hardware lab observation and dropping it seamlessly into both the simulation and emulation environments. Essential differences exist between the three environments and create a challenge for a common workflow. These differences fall in three areas. The first is observability and controllability, which touches on checking, instrumentation and coverage evaluation, and debugging facilities and techniques. For observability, a simulator may leverage instruction-by-instruction results checking, bus trace analysis and protocol verification, and many more error-condition detectors in the model than exist in actual hardware. For hardware, a fail scenario must be defined by considering how incorrect behavior would propagate to a checking point. For example, “how do I know if this store wrote the wrong value to memory?” On hardware, an explicit check in code, a load and compare, would be required. Reduced controllability also means that early hardware tests require more elaborate test case and test harness code, since fewer simulator crutches are available to help create desired scenarios. Where a simulator test may specify “let an asynchronous interrupt happen on this instruction”, a hardware test may have to run repeatedly with frequent interrupts until an interrupt hits the desired instruction. The second difference is in speed of execution, which typically spans a 10,000x-100,000x gap between each of the environments. This affects both the wall-clock time needed to create a condition, whether or not the condition can be observed or debugged, and also the “scale” of software that can be run in a given environment, from 1000-instruction segments up to a full operating system. The final difference is that much larger systems are built than are ever simulated, which is an issue when going from the pre-silicon environments to early hardware, especially in test scenarios that involve large numbers of caches and memories.
These methodology issues, both in taking advantage of the commonalities among the pre-/post-silicon environments and in contending with their differences, have shaped several generations of IBM POWER server processors [4].
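The load-and-compare idea in the abstract above can be sketched as a self-checking test. This is only an illustration: the memory model, addresses, and values are invented here, and a real bare-metal validation test would perform the same check in machine code rather than Python.

```python
# Self-checking store test (illustrative): with no simulator watching memory,
# the test itself must load each stored value back and compare.
# The flat-dict memory model and the addresses/values are invented here.

memory = {}

def store(addr, value):
    memory[addr] = value

def load(addr):
    return memory.get(addr, 0)

def run_self_checking_test(patterns):
    """Write each (addr, value) pattern, then load back and compare."""
    for addr, value in patterns:
        store(addr, value)
    # An empty failure list means the test passed.
    return [(a, expect, load(a)) for a, expect in patterns if load(a) != expect]

print(run_self_checking_test([(0x1000, 0xDEAD), (0x2000, 0xBEEF)]))  # []
```

On a simulator, the same stores could be checked instruction by instruction against a reference model; on hardware, the explicit load-and-compare is the only observation point the test has.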
Archive | 2002
Pedro Martin-de-Nicolas; Charles Meissner; Michael Timothy Saunders
Archive | 2007
Matthew Edward King; Charles Meissner; Todd Swanson; Michael E. Weissinger
Archive | 2001
Robert W. Berry; Michael Criscolo; Pedro Martin-de-Nicolas; Charles Meissner; Michael Timothy Saunders