
Publications


Featured research published by Thomas S. Barnett.


VLSI Test Symposium | 2002

Yield-reliability modeling: experimental verification and application to burn-in reduction

Thomas S. Barnett; Adit D. Singh; Matt Grady; Kathleen G. Purdy

An integrated yield-reliability model is verified using burn-in data from 77,000 microprocessor units manufactured by IBM Microelectronics. The model is based on the fact that defects over semiconductor wafers are not randomly distributed, but have a tendency to cluster. It is shown that this fact can be exploited to produce die of varying reliability by sorting die into bins based on how many of their neighbors test faulty. Die that test good at wafer probe, yet come from regions with many faulty die, have a higher incidence of infant mortality failure than die from regions with few faulty die. The yield-reliability model is used to predict the fraction of good die in each bin following wafer probing as well as the fraction of failures in each bin following stress testing (e.g. burn-in). Results show excellent agreement between model predictions and observed data.
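
As a rough illustration of the neighborhood-sorting step described above, the sketch below (not the authors' code; the wafer map, pass/fail labels, and 8-neighbor rule are illustrative assumptions) bins probe-good die by how many of their immediate neighbors failed wafer probe.

# Illustrative sketch only -- not the paper's implementation. It bins good die by how
# many of their 8 wafer-map neighbors failed wafer probe; the toy wafer map below is
# a made-up example.

def faulty_neighbor_count(wafer, x, y):
    """Count failing die among the 8 neighbors of position (x, y)."""
    count = 0
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            if (dx, dy) == (0, 0):
                continue
            if wafer.get((x + dx, y + dy)) == "fail":
                count += 1
    return count

def bin_good_die(wafer):
    """Group die that passed probe into bins keyed by faulty-neighbor count."""
    bins = {}
    for (x, y), result in wafer.items():
        if result == "pass":
            bins.setdefault(faulty_neighbor_count(wafer, x, y), []).append((x, y))
    return bins

# Toy 3x3 wafer map: the die at (1,1) passes probe but sits in a defective neighborhood.
wafer = {(0, 0): "fail", (1, 0): "pass", (2, 0): "fail",
         (0, 1): "pass", (1, 1): "pass", (2, 1): "fail",
         (0, 2): "pass", (1, 2): "pass", (2, 2): "pass"}
print(bin_good_die(wafer))   # e.g. {3: [(1, 0), (1, 1)], 1: [...], 0: [...]}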


International Test Conference | 2003

Relating yield models to burn-in fall-out in time

Thomas S. Barnett; Adit D. Singh

An early-life reliability model is presented that allows wafer test information to be used to predict not only the total number of burn-in failures that occur for a given product, but also the time at which they occur during burn-in testing. The model is a novel extension of an experimentally verified yield-reliability model based on the fact that defects that cause early-life reliability (burn-in) failures are “smaller”, more subtle versions of the defects that cause failures at wafer test. Consequently, knowledge of defect densities following wafer test (inferred from wafer probe failures) provides knowledge of the relative magnitude of early-life reliability defect densities. It is shown that this fact can be exploited to produce die with varying burn-in duration requirements. This is accomplished by sorting die into “bins” based on known reliability indicators. Presently, two such indicators are known: the local region yield of the die in question, and the number of repairs performed on the die in question. The early-life reliability model presented in this work demonstrates that chips sorted based on these criteria have different fall-out, or failure rate, curves during burn-in. This information can be used to select optimal burn-in durations while maintaining outgoing product reliability.
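
A minimal sketch of the kind of per-bin fall-out-versus-time curve the abstract describes, assuming a Weibull-style early-life hazard; the bin labels, defect rates, and shape parameter are placeholders, not the paper's fitted values.

# Hedged sketch of per-bin burn-in fall-out curves, not the paper's model. Each bin
# gets its own latent-defect rate lam (assumed values); fall-out versus burn-in time
# t is modeled here with a Weibull-style early-life hazard.
import math

def fallout(t_hours, lam, beta=0.4, t0=1.0):
    """Cumulative fraction failed by burn-in time t for latent-defect rate lam."""
    return 1.0 - math.exp(-lam * (t_hours / t0) ** beta)

bins = {"high local yield, no repairs": 0.002,
        "low local yield or repaired": 0.02}   # assumed rates, for illustration only
for label, lam in bins.items():
    curve = [round(fallout(t, lam), 4) for t in (1, 4, 12, 48)]
    print(label, curve)   # the low-yield/repaired bin needs a longer burn-in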


VLSI Test Symposium | 2001

Burn-in failures and local region yield: an integrated field-reliability model

Thomas S. Barnett; Adit D. Singh; Victor P. Nelson

Defects have long been known to cluster on semiconductor wafers. Recent research has shown that this fact may be exploited to produce die of high reliability (i.e., decreased infant mortality) by sorting die into bins based on how many of their neighbors test faulty. Die that test good at wafer probe, yet come from neighborhoods with many faulty die, have a higher incidence of infant mortality failure than die from neighborhoods with few faulty die. Analysis of burn-in results from the SEMATECH test methods experiment suggests that such a binning approach has the potential to isolate a high-quality bin that displays very few burn-in failures. This paper presents the first analytical model that quantifies the reliability improvement one might expect when binning die based on local region yield.
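
One way to see why local-region yield predicts infant mortality is the Gamma-Poisson (negative binomial) clustering argument sketched below. This is a generic illustration under assumed parameters (alpha, lam_mean, gamma) and a crude "one failing neighbor = one killer defect" simplification, not the paper's analytical model.

# Illustrative sketch of why low local-region yield signals higher infant-mortality
# risk; a generic Gamma-Poisson (negative binomial) clustering argument with assumed
# parameter values, not the paper's exact derivation.

def latent_defect_prob(failing_neighbors, n_neighbors=8,
                       alpha=0.5, lam_mean=0.1, gamma=0.05):
    """P(center die holds a latent defect) after observing its neighborhood.

    Prior on the per-die killer-defect rate lam: Gamma(alpha, rate=alpha/lam_mean).
    Approximation: each failing neighbor ~ one killer defect, so the posterior is
    Gamma(alpha + k, rate + n). Latent defects arrive at gamma * lam per die, so
    P(latent) = 1 - E[exp(-gamma * lam)] under the posterior (Gamma Laplace transform).
    """
    shape = alpha + failing_neighbors
    rate = alpha / lam_mean + n_neighbors
    return 1.0 - (rate / (rate + gamma)) ** shape

print(latent_defect_prob(0))  # quiet neighborhood -> low latent-defect risk
print(latent_defect_prob(6))  # mostly faulty neighborhood -> roughly an order of magnitude higher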


International Test Conference | 2001

Estimating burn-in fall-out for redundant memory

Thomas S. Barnett; Adit D. Singh; Victor P. Nelson

Integrated circuits can exhibit significant early life or infant mortality failures. Methods to estimate and/or reduce the number of such failures are therefore of great interest to industry. Applications employing multi-chip modules (MCMs), where several die must be independently reliable, are particularly vulnerable to early life failures. Maximizing the reliability of each die is therefore of significant importance. This paper presents an integrated yield-reliability model that allows one to estimate the number of burn-in failures for repairable memory chips, a common component in many MCMs. Since defects in integrated circuits tend to cluster, memory chips that have been repaired have a greater chance of containing a latent defect than chips with no repairs. The result is a higher incidence of infant mortality failure among memory chips that have been repaired.
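
The MCM angle can be illustrated with a toy calculation, shown below with assumed per-die failure probabilities: a module fails early if any of its die fails, so even small differences between repaired and unrepaired memory die compound across the module.

# Toy illustration with assumed numbers, not figures from the paper: an MCM fails
# early if any of its die does, so per-die infant-mortality differences compound.

def module_early_fail_prob(per_die_prob, n_die):
    """P(at least one early-life failure among n_die independent die)."""
    return 1.0 - (1.0 - per_die_prob) ** n_die

for label, p in [("unrepaired memory die", 0.001), ("repaired memory die", 0.008)]:
    print(label, round(module_early_fail_prob(p, n_die=4), 4))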


Defect and Fault Tolerance in VLSI and Nanotechnology Systems | 2001

Yield-reliability modeling for fault tolerant integrated circuits

Thomas S. Barnett; Adit D. Singh; Victor P. Nelson

An integrated yield-reliability model for defect tolerant integrated circuits is presented that allows one to estimate the yield following both wafer probe and burn-in testing. The model is based on the long observed clustering of defects and the experimentally verified relation between defects causing wafer probe failures and defects causing infant mortality failures. The two-parameter negative binomial distribution is used to describe the distribution of defects over a semiconductor wafer. The clustering parameter α, while known to play a key role in accurately determining wafer probe yields of defect tolerant chips, is shown, for the first time, to play a similar role in determining burn-in fall-out. Numerical results indicate that the number of infant mortality failures predicted by the clustering model can differ significantly from calculations that ignore clustering.
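
The negative binomial defect model named in the abstract can be written down directly. The sketch below uses example values for the defect density and the clustering parameter α, and a simplified "up to r repairable defects" notion of redundancy; it is not the paper's full yield-reliability model.

# Sketch of the two-parameter negative binomial (clustered) defect model; lam, alpha,
# and the repair limit are example values, and "tolerate up to r defects" is a
# simplification of real redundancy schemes.
from math import lgamma, exp, log

def nb_pmf(k, lam, alpha):
    """P(k defects on a die) under the negative binomial model with mean lam."""
    log_p = (lgamma(alpha + k) - lgamma(k + 1) - lgamma(alpha)
             + k * log(lam / alpha) - (alpha + k) * log(1.0 + lam / alpha))
    return exp(log_p)

def yield_with_redundancy(lam, alpha, repairable=0):
    """Fraction of die with at most `repairable` defects (all assumed repairable)."""
    return sum(nb_pmf(k, lam, alpha) for k in range(repairable + 1))

lam, alpha = 1.0, 0.5
print(yield_with_redundancy(lam, alpha, repairable=0))  # plain yield, (1 + lam/alpha)**-alpha
print(yield_with_redundancy(lam, alpha, repairable=2))  # defect-tolerant yield is much higher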


International Conference on Multimedia Information Networking and Security | 1999

Simulants (decoys) for low-metallic-content mines: theory and experimental results

Lloyd S. Riggs; Larry T. Lowe; Jon E. Mooney; Thomas S. Barnett; Richard Ess; Frank Paca

Two sets of metallic objects are created to provide a standard set of metallic test targets to facilitate an objective comparison and evaluation of metal detectors. The first set of metallic objects is chosen from combinations of small metal parts common to many low-metallic-content landmines. The collections of small metal parts are chosen based on an average detection distance measured with five sensitive metal detectors. A second set of metal objects is created using short-circuited coils of wire, INSCOILS. A development of the theory describing the interaction of an INSCOIL with a metal detector's transmit and receive coils shows that the coupling and response function of an INSCOIL can be independently controlled. By varying the wire gauge, wire material, and loop size, an INSCOIL can be made to approximate the response of an arbitrary metallic object. A pulse-induction measurement system is used to measure the response of different metallic objects. The pulse-induction measurement system is used to match the response of an INSCOIL to that of the collection of small metal parts. Surrogate landmines are also constructed by matching the response of a coil of wire to that of a specific landmine.
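
The INSCOIL idea rests on the fact that a shorted loop decays exponentially with time constant τ = L/R, so wire gauge and material set R while loop size mainly sets L. The back-of-the-envelope sketch below uses standard textbook approximations for a circular loop; the dimensions are made-up examples, not the paper's coil designs.

# Back-of-the-envelope sketch of the INSCOIL idea: a shorted wire loop decays like
# exp(-t / tau) with tau = L / R. Textbook approximations for a circular loop are
# used; the dimensions below are made-up examples.
from math import pi, log

MU0 = 4e-7 * pi          # vacuum permeability, H/m
RHO_CU = 1.68e-8         # copper resistivity, ohm*m

def loop_time_constant(loop_radius_m, wire_radius_m, resistivity=RHO_CU):
    """tau = L / R for a single-turn shorted circular loop."""
    inductance = MU0 * loop_radius_m * (log(8 * loop_radius_m / wire_radius_m) - 2)
    resistance = resistivity * (2 * pi * loop_radius_m) / (pi * wire_radius_m ** 2)
    return inductance / resistance

# A 5 cm radius loop of roughly 20 AWG copper wire.
print(loop_time_constant(0.05, 0.0004))   # tau in seconds, on the order of tens of microseconds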


International Conference on Multimedia Information Networking and Security | 2000

Discrimination between buried metallic mines and metallic clutter using signal energy and exponential decay rates

Lloyd S. Riggs; Larry T. Lowe; James Elkins; Thomas S. Barnett; Richard Weaver

An algorithm based on Bayesian probability theory is developed to discriminate buried metallic landmines from buried metallic clutter. A binary hypothesis problem is formed using the two hypotheses that the buried object is either a mine-like object or a clutter-like object. The received signal under both hypotheses is modeled as a target function, which is a delayed decaying exponential, plus Gaussian noise. The target functions contain the target's decay rate and coupling strength information. The coupling strength manifests itself as the point where the buried target's response comes out of amplifier saturation. A target with a large coupling strength will fall out of saturation much later in time than a target with a low coupling strength. The decay rate for each buried object is extracted using a differential-corrections routine. The decay rate and fallout time are considered random variables with known distributions under each hypothesis. The distributions of mine decay rates and fallout times are calculated from four separate measurements taken in a calibration area. The distribution of decay rates and fallout times for all objects in a blind grid is also estimated.
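
A hedged sketch of the binary-hypothesis decision described above: compare the likelihood of a measured (decay rate, fallout time) pair under a mine model versus a clutter model and threshold the log-likelihood ratio. Gaussian feature distributions and every numeric value here are assumptions, not the distributions estimated in the paper.

# Hedged sketch of the binary-hypothesis test: Gaussian feature models and all
# parameter values below are assumptions for illustration, not the paper's estimates.
from math import pi, sqrt, log

def gauss_logpdf(x, mean, std):
    """Log density of a Gaussian with the given mean and standard deviation."""
    return -0.5 * ((x - mean) / std) ** 2 - log(std * sqrt(2 * pi))

def log_likelihood_ratio(decay_rate, fallout_time, mine, clutter):
    """log P(features | mine) - log P(features | clutter), features assumed independent."""
    lm = gauss_logpdf(decay_rate, *mine["decay"]) + gauss_logpdf(fallout_time, *mine["fallout"])
    lc = gauss_logpdf(decay_rate, *clutter["decay"]) + gauss_logpdf(fallout_time, *clutter["fallout"])
    return lm - lc

mine = {"decay": (8e3, 2e3), "fallout": (120e-6, 30e-6)}      # (mean, std), assumed
clutter = {"decay": (25e3, 10e3), "fallout": (40e-6, 20e-6)}  # (mean, std), assumed
llr = log_likelihood_ratio(decay_rate=9e3, fallout_time=100e-6, mine=mine, clutter=clutter)
print("declare mine" if llr > 0 else "declare clutter", round(llr, 2))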


Archive | 2008

System and method for estimating reliability of components for testing and quality optimization

Adit D. Singh; Thomas S. Barnett


Archive | 2002

Method for burn-in testing

Thomas S. Barnett; Matthew S. Grady; Kathleen G. Purdy


Archive | 2002

System and method for evaluating the reliability of components to optimize testing and quality

Thomas S. Barnett; Adit D. Singh
