
Publications


Featured research published by Klaus-Dieter Schubert.


IBM Journal of Research and Development | 2002

Hyper-acceleration and HW/SW co-verification as an essential part of IBM eServer z900 verification

Jörg Kayser; Stefan Koerner; Klaus-Dieter Schubert

Hardware/software (HW/SW) co-verification can considerably shorten the time required for system integration and bring-up. However, co-verification is limited by the simulation speed achievable whenever hardware models are required to verify hardware and software interactions. Although the use of a general-purpose hardware accelerator as an extremely fast simulator solves the performance problem, it creates a new set of handling, efficiency, and serviceability demands. This paper describes a means of addressing those demands through the use of one of the largest hyper-acceleration systems created thus far, and describes many new associated features that have been implemented in the operating software.
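
To make the co-verification interaction concrete, here is a minimal, purely illustrative Python sketch of a HW/SW co-simulation loop: a firmware routine programs a device model and polls it for completion while the hardware side advances cycle by cycle. The register names, the 5-cycle latency, and the class structure are invented for illustration; in the paper's setting, the hardware side runs on a hardware accelerator, not in a scripting language.

```python
# Toy sketch of HW/SW co-simulation (illustrative only): a firmware
# routine is executed against a cycle-based model of the hardware it
# programs. All register names and timings are hypothetical.
class HardwareModel:
    """Cycle-based stand-in for a simulated (or accelerated) design."""
    def __init__(self):
        self.regs = {"CTRL": 0, "STATUS": 0}
        self._countdown = 0

    def write(self, reg, value):
        self.regs[reg] = value
        if reg == "CTRL" and value & 1:   # start bit kicks off an operation
            self._countdown = 5           # operation finishes after 5 cycles

    def clock(self):
        """Advance the model by one cycle."""
        if self._countdown > 0:
            self._countdown -= 1
            if self._countdown == 0:
                self.regs["STATUS"] = 1   # done bit

def firmware(hw):
    """Software side: program the device, then poll for completion."""
    hw.write("CTRL", 1)
    cycles = 0
    while hw.regs["STATUS"] != 1:         # polling loop exercises HW/SW timing
        hw.clock()
        cycles += 1
    return cycles

print(f"operation completed after {firmware(HardwareModel())} cycles")
```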


IBM Journal of Research and Development | 2011

Functional verification of the IBM POWER7 microprocessor and POWER7 multiprocessor systems

Klaus-Dieter Schubert; Wolfgang Roesner; John M. Ludden; Jonathan R. Jackson; Jacob Buchert; Viresh Paruthi; Michael L. Behm; Avi Ziv; John Schumann; Charles Meissner; Johannes Koesters; James P. Hsu; Bishop Brock

This paper describes the methods and techniques used to verify the POWER7® microprocessor and systems. A simple linear extension of the methodology used for POWER4®, POWER5®, and POWER6® was not possible given the aggressive design point and schedule of the POWER7 project. In addition to the sheer complexity of verifying an eight-core processor chip with scalability to 32 sockets, central challenges came from the four-way simultaneous multithreading processor core, a modular implementation structure with heavy use of asynchronous interfaces, an aggressive memory subsystem design with numerous new reliability, availability, and serviceability (RAS) advances, and new power management and RAS mechanisms across the chip and the system. Key aspects of the successful verification project include a systematic application of IBM's constrained-random unit verification, unprecedented use of formal verification, thread-scaling support in core verification, and a consistent use of functional coverage across all verification disciplines. Functional coverage instrumentation, combined with the use of the newest IBM hardware simulation accelerator platform, enabled coverage-driven development of postsilicon exercisers in preparation for bring-up, a foundation for the desired systematic linkage of presilicon and postsilicon verification. RAS and power management verification also required new approaches, extending these disciplines to span all the way from the unit level to end-to-end scenarios on the hardware accelerators.
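
As an illustration of the coverage-driven, constrained-random style the abstract refers to, the following Python sketch biases randomly generated stimuli toward coverage holes and tracks functional-coverage bins. The opcode list, thread modes, and coverage model are assumptions made for illustration; IBM's actual test generators and coverage tooling are far more elaborate and are not described at this level in the paper.

```python
# Hypothetical sketch of constrained-random stimulus generation with
# functional coverage tracking. All names are illustrative.
import random

OPCODES = ["load", "store", "add", "branch"]
THREAD_MODES = [1, 2, 4]  # toy stand-in for SMT thread scaling

# Coverage model: which (opcode, thread_count) bins have been hit.
coverage = {(op, t): 0 for op in OPCODES for t in THREAD_MODES}

def generate_test(bias_uncovered=True):
    """Pick a stimulus, biasing toward coverage holes (constrained-random)."""
    if bias_uncovered:
        holes = [k for k, hits in coverage.items() if hits == 0]
        if holes:
            return random.choice(holes)
    return (random.choice(OPCODES), random.choice(THREAD_MODES))

def run_test(op, threads):
    """Stand-in for running a simulation; here we only record coverage."""
    coverage[(op, threads)] += 1

for _ in range(50):
    run_test(*generate_test())

hit = sum(1 for h in coverage.values() if h > 0)
print(f"functional coverage: {hit}/{len(coverage)} bins hit")
```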


Design Automation Conference | 2014

Post-Silicon Validation of the IBM POWER8 Processor

Amir Nahir; Manoj Dusanapudi; Shakti Kapoor; Kevin Franklin Reick; Wolfgang Roesner; Klaus-Dieter Schubert; Keith Sharp; Greg Wetli

The post-silicon validation phase in a processor's design life cycle is geared towards finding all remaining bugs in the system. It is, in fact, our last opportunity to find functional and electrical bugs in the design before shipping it to customers. In this paper, we provide a high-level overview of the methodology and technologies put into use as part of the POWER8 post-silicon functional validation phase. We describe the results and list the primary factors that contributed to this highly successful bring-up.


IBM Journal of Research and Development | 2015

Solutions to IBM POWER8 verification challenges

Klaus-Dieter Schubert; John M. Ludden; S. Ayub; J. Behrend; Bishop Brock; Fady Copty; S. M. German; Oz Hershkovitz; Holger Horbach; Jonathan R. Jackson; Klaus Keuerleber; Johannes Koesters; Larry Scott Leitner; G. B. Meil; Charles Meissner; Ronny Morad; Amir Nahir; Viresh Paruthi; Richard D. Peterson; Randall R. Pratt; Michal Rimon; John Schumann

This paper describes methods and techniques used to verify the POWER8™ microprocessor. The base concepts for the functional verification are those already used in POWER7® processor verification. However, the POWER8 design point provided multiple new challenges that required innovative solutions. With approximately three times as many transistors available as on the POWER7 processor chip, functionality was added by putting additional enhanced cores on-chip and by developing new features that intrinsically require more software interaction. The examples given in this paper demonstrate how new tools and the continuous improvement of existing methods addressed these verification challenges.


International Conference on Computer-Aided Design | 2009

POWER7: verification challenge of a multi-core processor

Klaus-Dieter Schubert

Over the years, functional hardware verification has made significant progress in the areas of traditional simulation techniques, hardware accelerator usage, and, last but not least, formal verification approaches. This has been sufficient to deal with the additional design content and complexity growth that occurred over the same period. For POWER7, IBM's first high-end 8-core microprocessor, these incremental improvements in verification were deemed not to be enough by themselves, because the chip was not just a remap of an existing design with more cores. The infrastructure on the chip had to be changed significantly, while at the same time the business side requested a shorter development cycle with perfect quality but without growing the team. Given these constraints, a two-phase approach seemed to be the only solution. This paper commences with the highlights of the first phase, in which improvements to the existing process were identified. This includes topics ranging from enhanced test case generation, through advancements in structural checking, to the extension of the formal verification scope in both property checking and sequential equivalence checking. The paper then describes the second phase, which targeted the exploitation of synergy across the various verification activities. The active interlock between simulation, formal verification, and the design helped to reduce workload and improved the project schedule. Furthermore, the holistic use of coverage, from unit-level simulation to acceleration, led to new innovations and new insights, which improved the overall verification process. Finally, an outlook on future challenges and trends is given.
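
The abstract mentions sequential equivalence checking as one of the extended formal techniques. The following toy Python sketch, entirely illustrative and unrelated to IBM's actual tools, shows the core idea: explore the reachable product states of two sequential machines and confirm their outputs agree everywhere.

```python
# Toy illustration of sequential equivalence checking: explore the
# product of two state machines and verify that their outputs agree
# on every reachable state for every input. Both machines are invented
# examples (mod-3 counters with a wrap flag as output).
def fsm_a(state, inp):            # reference implementation
    nxt = (state + inp) % 3
    return nxt, nxt == 0          # (next state, output)

def fsm_b(state, inp):            # re-implemented version under check
    nxt = state + inp
    if nxt >= 3:
        nxt -= 3
    return nxt, nxt == 0

def seq_equiv(init_a=0, init_b=0, inputs=(0, 1)):
    seen, frontier = set(), [(init_a, init_b)]
    while frontier:
        sa, sb = frontier.pop()
        if (sa, sb) in seen:
            continue
        seen.add((sa, sb))
        for i in inputs:
            na, oa = fsm_a(sa, i)
            nb, ob = fsm_b(sb, i)
            if oa != ob:
                return False      # counterexample found
            frontier.append((na, nb))
    return True                   # outputs agree on all reachable states

print(seq_equiv())  # True: the two designs are sequentially equivalent
```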


IBM Journal of Research and Development | 2004

Configurable system simulation model build comprising packaging design data

Hans-Werner Anderson; Hans Kriese; Wolfgang Roesner; Klaus-Dieter Schubert

A high-end eServer™ consists of multiple microprocessor chips packaged with additional chips on a multichip module. In conjunction with memory and various I/O cards, this module is mounted on a card called a processor book, and several of these cards on a board finally represent a major part of the system. Before the first hardware is built, simulations must be performed to verify that all of these components work together. But before we can build the simulation models, we need to find answers to many questions and to specify constraints, such as the scope of the simulation, the representation of the packaging data, the handling of cross-hierarchical connections such as cables, and the handling of passive components such as resistors and capacitors. This system model build should be as flexible as possible. System verification must be done for different system configurations (both single-processor and multiprocessor systems, one-processor-book systems, and multiprocessor-book systems), with or without I/O. Therefore, a configurable model build should not only downsize the model structure but also provide the capability to add logic. The requirement to include special logic, such as clock macros or checker logic, is driven by the use of emulation and acceleration technology, and by other speed-related elements. This paper discusses these new concepts in eServer development: a configurable simulation model build, the automatic derivation of structural model data from packaging design, and the addition of specific logic without affecting the model structure generated by the previous step.
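
A minimal Python sketch of what such a configurable model build could look like, with all structure and names assumed for illustration rather than taken from IBM's tooling: a configuration selects how many processor books and chips to instantiate, and special logic such as checkers is added as a separate step without disturbing the structure derived from the packaging data.

```python
# Hypothetical sketch of a configurable system-model build. The
# component hierarchy and configuration keys are invented examples.
from dataclasses import dataclass, field

@dataclass
class Component:
    name: str
    children: list = field(default_factory=list)

def build_model(config):
    """Assemble a system model from a configuration description."""
    system = Component("system")
    for book_id in range(config["processor_books"]):
        book = Component(f"book{book_id}")
        for chip_id in range(config["chips_per_book"]):
            book.children.append(Component(f"book{book_id}.cpu{chip_id}"))
        if config.get("with_io"):
            book.children.append(Component(f"book{book_id}.io"))
        system.children.append(book)
    # Final step of the flow: add special logic (e.g., checker logic)
    # on top of the structure derived from the packaging data.
    for extra in config.get("extra_logic", []):
        system.children.append(Component(extra))
    return system

# Single-book configuration with I/O and a checker added for acceleration.
model = build_model({"processor_books": 1, "chips_per_book": 4,
                     "with_io": True, "extra_logic": ["clock_checker"]})
print([c.name for c in model.children])             # ['book0', 'clock_checker']
print([c.name for c in model.children[0].children])  # chips and I/O in book0
```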


European Conference on Service-Oriented and Cloud Computing | 2012

Simplified authentication and authorization for RESTful services in trusted environments

Eric Brachmann; Gero Dittmann; Klaus-Dieter Schubert

In some trusted environments, such as an organization's intranet, local web services may be assumed to be trustworthy. This property can be exploited to simplify authentication and authorization protocols between resource providers and consumers, lowering the threshold for developing services and clients. Existing security solutions for RESTful services, in contrast, support untrusted services, a complexity-increasing capability that is not needed on an intranet with only trusted services. We propose a central security service with a lean API that handles both authentication and authorization for trusted RESTful services. A user trades credentials for a token that facilitates access to services. The services may query the security service for token authenticity and the roles granted to a user. The system provides fine-grained access control at the level of resources, following the role-based access control (RBAC) model. Resources are identified by their URLs, making the authorization system generic. The mapping of roles to users resides with the central security service and depends on the resource to be accessed. The mapping of permissions to roles is implemented individually by the services. We rely on secure channels and the trusted intermediaries characteristic of intranets to simplify the protocols involved and to make the security features easy to use, cutting the number of required API calls in half.
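
The following in-process Python sketch illustrates the flow the abstract describes: credentials are traded for a token, a single call to the central security service answers both token validity and the roles granted on a resource, and the role-to-permission mapping stays local to the individual service. The users, roles, and resource URLs are hypothetical, and real services would communicate over secure HTTP channels rather than direct method calls.

```python
# Minimal sketch of the token-based RBAC flow. All names are invented.
import secrets

class SecurityService:
    """Central service: authentication plus per-resource role lookup."""
    def __init__(self):
        self._users = {"alice": "s3cret"}                   # credential store
        self._roles = {("alice", "/reports"): {"reader"}}   # roles-to-users map
        self._tokens = {}                                   # token -> user

    def login(self, user, password):
        """Trade credentials for a token."""
        if self._users.get(user) != password:
            raise PermissionError("bad credentials")
        token = secrets.token_urlsafe(16)
        self._tokens[token] = user
        return token

    def roles(self, token, resource_url):
        """One call answers both 'is the token valid?' and 'which roles
        does its owner hold on this resource?'."""
        user = self._tokens.get(token)
        if user is None:
            raise PermissionError("invalid token")
        return self._roles.get((user, resource_url), set())

class ReportService:
    """A trusted RESTful service; the permissions-to-roles mapping is local."""
    def __init__(self, security):
        self.security = security

    def get_report(self, token):
        if "reader" not in self.security.roles(token, "/reports"):
            raise PermissionError("forbidden")
        return "quarterly report"

sec = SecurityService()
svc = ReportService(sec)
print(svc.get_report(sec.login("alice", "s3cret")))
```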


design automation conference | 2003

Improvements in functional simulation addressing challenges in large, distributed industry projects

Klaus-Dieter Schubert

The development of large servers faces multiple challenges. The system combines a mix of design styles, from custom VLSI chips to ASIC and SoC designs. The integration of hardware and firmware adds further challenges to the functional simulation effort. As more and more specialized verification solutions are added, additional constraints are created and the amount of required resources grows. To gain efficiency and to keep staffing requirements reasonable, improvements have to be put in place to integrate and standardize the different environments and tools. This paper discusses some of the enhancements that have been introduced for IBM's server technology.


Haifa Verification Conference | 2015

The Verification Cockpit – Creating the Dream Playground for Data Analytics over the Verification Process

Moab Arar; Michael L. Behm; Odellia Boni; Raviv Gal; Alex Goldin; Maxim Ilyaev; Einat Kermany; John R. Reysa; Bilal Saleh; Klaus-Dieter Schubert; Gil Shurek; Avi Ziv

The Verification Cockpit (VC) is a consolidated platform for planning, tracking, analysis, and optimization of large-scale verification projects. Its prime role is to provide decision support from planning through the ongoing operation of the verification process. The heart of the VC is a holistic, centralized data model for the arsenal of verification tools used in modern verification processes. This model connects the verification tools and provides rich reporting capabilities as well as hooks into advanced data analytics engines. This paper describes the concept of the Verification Cockpit, its architecture, and its implementation. We also include examples of its use in the verification of a high-end processor, highlighting the capabilities of the platform and the benefits of its use.
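
As a rough illustration of the centralized data model idea, and not the actual VC implementation, the Python sketch below normalizes records from several verification tools into one schema so that a simple cross-tool report can be computed. The schema fields and tool names are assumptions for illustration.

```python
# Illustrative sketch of a holistic data model: results from different
# verification tools are normalized into one schema, enabling reports
# and analytics that span tools.
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Record:
    tool: str       # e.g., "simulation", "formal", "acceleration"
    unit: str       # design unit the result belongs to
    passed: bool

class VerificationStore:
    def __init__(self):
        self.records = []

    def ingest(self, tool, raw):
        """Normalize a tool-specific result into the central schema."""
        self.records.append(Record(tool, raw["unit"], raw["status"] == "pass"))

    def pass_rate_by_unit(self):
        """Cross-tool report: pass rate per design unit."""
        totals, passes = defaultdict(int), defaultdict(int)
        for r in self.records:
            totals[r.unit] += 1
            passes[r.unit] += r.passed
        return {u: passes[u] / totals[u] for u in totals}

store = VerificationStore()
store.ingest("simulation", {"unit": "core", "status": "pass"})
store.ingest("formal", {"unit": "core", "status": "fail"})
store.ingest("acceleration", {"unit": "nest", "status": "pass"})
print(store.pass_rate_by_unit())  # {'core': 0.5, 'nest': 1.0}
```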


High-Level Design Validation and Test | 2002

Practical experiences in functional simulation. An integrated method from unit to co-simulation

Klaus-Dieter Schubert

IBM, like other major companies, develops large compute servers. These servers typically run various applications on a variety of different operating systems. To support all the user scenarios, the systems consist of a hardware layer and a system software layer that is hidden from the application or operating system. The hardware consists of a set of chips, some of which are unique to a given series of compute servers. The system software, also called firmware, can be viewed as an extension of the hardware that enables additional features and manages these complex systems. From a verification point of view, the task is to make sure, first of all, that the hardware chips work according to the specification, that all these chips also work together, and that the firmware code works seamlessly with the hardware. To complicate the task, the chip designs do not always follow the same methodology, driven by the fact that we have a mix of custom-designed VLSI chips, standard ASIC designs, and some SoC-type chips. With teams distributed globally, the verification challenge is to integrate and coordinate all efforts to finally ensure that the overall system works. The presentation describes the problems and possible solutions, touching on topics such as standardization of interfaces, designs, and languages; reusability of specifications, documentation, and software; and project management aspects and their implications for the process.
