Thomas Bartenstein
Cadence Design Systems
Publications
Featured research published by Thomas Bartenstein.
international test conference | 2001
Thomas Bartenstein; Douglas C. Heaberlin; Leendert M. Huisman; David Sliwinski
A new way of diagnosing ICs that fail logic tests is described. It can handle bridging faults, opens, transition faults, and many more complex defects as easily and as accurately as regular stuck-at faults.
international test conference | 2000
Thomas Bartenstein
Test generation for VLSI circuits suffers from two competing goals: to reduce the cost of test by minimizing the number of tests, and to be able to diagnose errors when failures occur. This paper outlines a methodology for generating diagnostic test patterns as they are needed using standard ATPG tools. These diagnostic patterns are guaranteed to provide better diagnostic resolution than traditional manufacturing test patterns, and the use of standard ATPG tools enables generation of diagnostic patterns only when these patterns are needed.
international test conference | 2004
Brion L. Keller; Mick Tegethoff; Thomas Bartenstein; Vivek Chickermane
This work describes an economic and return-on-investment (RoI) model for a test methodology that ensures product quality for logic devices that are in the 130 nm technology node and below. We describe the key components of the nanometer test methodology (NTM) and how it drives the model. In addition to ensuring product quality we address the cost of test and time to volume and how both factors can be improved. Examples from realistic scenarios are provided to illustrate the net savings from the proposed NTM using this model.
international test conference | 2005
Brion L. Keller; Thomas Bartenstein
This paper describes a simple means for diagnosing failures by observing a compacted MISR output stream. While MISRs have been used in the industry for response compression, their use has often been seen as an impediment to diagnosis of failures. This paper shows how it is possible to use MISRs to perform a go/no-go failure test with very little data volume, and also to use a compacted continuous stream of MISR output states to aid diagnosis.
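The compaction scheme the abstract refers to can be illustrated with a toy multiple-input signature register. The sketch below is a minimal model, assuming a 4-bit register with feedback taps at bits 0 and 3 (the width, polynomial, and input encoding are illustrative, not the configuration used in the paper): each cycle, the register shifts with polynomial feedback and XORs in one parallel scan-out vector, so a whole response stream collapses into one signature.

```python
# Toy MISR model: illustrative 4-bit register, taps at bits 0 and 3
# (these parameters are assumptions, not the paper's configuration).

def misr_step(state, inputs, width=4, taps=(0, 3)):
    """Advance the MISR one cycle: compute the feedback XOR of the tap
    bits, shift, then XOR in the parallel input vector."""
    feedback = 0
    for t in taps:
        feedback ^= (state >> t) & 1
    state = ((state << 1) | feedback) & ((1 << width) - 1)
    return state ^ inputs

def compact(responses, width=4):
    """Compact a stream of scan-out vectors into a single signature."""
    state = 0
    for r in responses:
        state = misr_step(state, r, width)
    return state

good = [0b1010, 0b0111, 0b0001, 0b1100]
bad  = [0b1010, 0b0101, 0b0001, 0b1100]  # one flipped bit in cycle 1

sig_good = compact(good)
sig_bad  = compact(bad)
assert sig_good != sig_bad  # the single-bit error perturbs the signature
```

The go/no-go use compares only the final signature against the expected value; the diagnostic use described in the paper observes the intermediate MISR states as a continuous stream, which this model produces one `misr_step` at a time.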
international conference on software engineering | 2013
Thomas Bartenstein; Yu David Liu
This paper introduces Green Streams, a novel solution to address a critical but often overlooked property of data-intensive software: energy efficiency. Green Streams is built around two key insights into data-intensive software. First, energy consumption of data-intensive software is strongly correlated with data volume and data processing, both of which are naturally abstracted in the stream programming paradigm. Second, energy efficiency can be improved if the data processing components of a stream program coordinate in a "balanced" way, much like an assembly line that runs most efficiently when participating workers coordinate their pace. Green Streams adopts a standard stream programming model and applies Dynamic Voltage and Frequency Scaling (DVFS) to coordinate the pace of data processing among components, ultimately achieving energy efficiency without degrading performance in a parallel processing environment. At the core of Green Streams is a novel constraint-based inference that abstracts the intrinsic relationships of data flow rates inside a stream program and uses linear programming to minimize the frequencies, and hence the energy consumption, of processing components while still maintaining the maximum output data flow rate. The core algorithm of Green Streams is formalized, and its optimality is established. The effectiveness of Green Streams is evaluated on top of the StreamIt framework, and preliminary results show the approach can save CPU energy by an average of 28% with a 7% performance improvement.
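The "assembly line" pacing intuition can be sketched with simple arithmetic (the numbers and the linear-pipeline shape are illustrative; the paper's actual constraint-based linear program over stream graphs is not reproduced here). A stage that costs `c` cycles per item needs only `c * rate` cycles per second to sustain a target output rate, so clocking lighter stages at the bottleneck's frequency wastes energy without producing items any faster:

```python
# Illustrative pacing sketch: per-stage minimum frequencies for a linear
# pipeline (not the paper's LP formulation; numbers are made up).

def min_frequencies(cycles_per_item, target_rate):
    """Lowest clock frequency (Hz) each pipeline stage needs to sustain
    `target_rate` items per second."""
    return [c * target_rate for c in cycles_per_item]

# Three filters with different per-item costs, in cycles.
pipeline = [2000, 8000, 4000]
target_rate = 1000  # items per second

freqs = min_frequencies(pipeline, target_rate)
# The bottleneck stage (8000 cycles/item) needs 8 MHz; the lighter
# stages can be clocked at 2 MHz and 4 MHz instead of idling at 8 MHz.
assert freqs == [2_000_000, 8_000_000, 4_000_000]
```

Real stream graphs have splits, joins, and rate-changing filters, which is why the paper needs constraint inference and linear programming rather than this per-stage division; the energy win, however, comes from exactly this gap between each stage's required frequency and the bottleneck's.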
conference on object-oriented programming systems, languages, and applications | 2014
Thomas Bartenstein; Yu David Liu
We introduce RATE TYPES, a novel type system to reason about and optimize data-intensive programs. Built around stream languages, RATE TYPES performs static quantitative reasoning about stream rates: the frequency of data items in a stream being consumed, processed, and produced. Despite the fact that streams are fundamentally dynamic, we find two essential concepts of stream rate control, throughput ratio and natural rate, are intimately related to the program structure itself and can be effectively reasoned about by a type system. RATE TYPES is proven to correspond with a time-aware and parallelism-aware operational semantics. The strong correspondence result tolerates arbitrary schedules and does not require any synchronization between stream filters. We further implement RATE TYPES, demonstrating its effectiveness in predicting stream data rates in real-world stream programs.
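The two quantities the abstract names can be given a rough operational reading (the composition rule below is an assumption based on the abstract, not the paper's formal type system): each filter has a throughput ratio (items produced per item consumed) and a natural rate (items per second it can process when never starved or blocked), and an upper bound on a linear pipeline's output rate falls out of folding those two numbers along the pipeline:

```python
# Rough sketch of throughput ratio and natural rate composition for a
# linear pipeline. The fold below is an assumed reading of the abstract,
# not the paper's formal typing rules.

def pipeline_output_rate(filters, input_rate):
    """Upper bound on output rate (items/sec) for a linear pipeline of
    (throughput_ratio, natural_rate) filters fed at `input_rate`."""
    rate = input_rate
    for ratio, natural in filters:
        # A filter consumes at most its natural rate, then scales the
        # stream by its throughput ratio.
        rate = min(rate, natural) * ratio
    return rate

# A 1:1 filter, a 2:1 decimator that bottlenecks at 30 items/sec,
# and a 1:2 expander.
filters = [(1.0, 100.0), (0.5, 30.0), (2.0, 80.0)]
out = pipeline_output_rate(filters, input_rate=200.0)
assert out == 30.0  # the decimator caps the pipeline despite the expander
```

The point of the static analysis is that both quantities are tied to program structure, so a bound like this can be computed without running the stream program.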
international test conference | 2004
Thomas Bartenstein
This work discusses the opportunity for diagnostic tools and physical failure analysis (PFA). It considers the failure of a chip, the cause of the failure, and its diagnosis. Diagnostic tools attempt to isolate the cause of the failure to a small enough area to enable identification of the physical defect that caused the chip to fail, and they typically work in a logic-model environment. Existing ATPG technology is used to generate a test pattern that exercises a specific net repeatedly and quickly to enable data collection by a photon emission tool. This work also discusses virtual failure analysis, which has the capability to identify defects on failing die without the PFA lab, through the use of inline defect data and whatever other means are possible.
asian test symposium | 2008
Brion L. Keller; Sandeep Bhatia; Thomas Bartenstein; Brian Foutz; Anis Uzzaman
This paper describes a simple means to enable direct diagnosis by bypassing MISRs on a small set of tests while achieving ultimate output compression using MISRs for the majority of tests. By combining two compression schemes, XOR and MISRs in the same device, it becomes possible to have high compression and still support volume diagnostics.
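The two compression modes being combined can be contrasted with a toy model (widths, taps, and chain counts below are illustrative, not the device's actual design): an XOR spatial compactor preserves one observed bit per shift cycle, so failing cycles remain localizable for direct diagnosis, while a MISR folds the whole response into a single end-of-test signature for maximum compression.

```python
# Toy contrast of the two schemes: per-cycle XOR compaction (kept when
# the MISR is bypassed for diagnosis) vs. a MISR signature (kept for
# high-compression go/no-go tests). All parameters are illustrative.

def xor_compact(chain_bits):
    """XOR the scan-out bits of all chains in one shift cycle into a
    single observed bit; per-cycle data survives for diagnosis."""
    out = 0
    for b in chain_bits:
        out ^= b
    return out

def misr_signature(cycles, width=8, taps=(0, 1, 5)):
    """Fold every cycle's compacted value into one end-of-test signature."""
    state = 0
    mask = (1 << width) - 1
    for v in cycles:
        fb = 0
        for t in taps:
            fb ^= (state >> t) & 1
        state = (((state << 1) | fb) & mask) ^ v
    return state

# Scan-out of four chains over three shift cycles.
response = [[1, 0, 1, 1], [0, 0, 1, 0], [1, 1, 1, 0]]

per_cycle = [xor_compact(c) for c in response]  # bypass mode: 1 bit/cycle
signature = misr_signature(per_cycle)           # MISR mode: 1 value/test
assert per_cycle == [1, 1, 1]
```

Routing most patterns through the MISR and a small diagnostic subset through the XOR path is what lets the scheme keep both high compression and volume-diagnostics capability.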
international test conference | 1997
Thomas Bartenstein; Gilbert Vandling
This paper describes an extension of the standard, stuck-at fault model typically used for diagnostics. By defining stuck-at faults at all levels of a design hierarchy, diagnostic simulation has been able to succinctly identify a number of custom circuit design and modeling errors. Approximately half of these errors were not well identified by conventional diagnostics.
Archive | 2001
Thomas Bartenstein; L. Owen Farnsworth; Douglas C. Heaberlin; Edward E. Horton; Leendert M. Huisman; Leah M. P. Pastel; Glen E. Richard; Raymond J. Rosner; Francis Woytowich