Peter C. Maxwell
Hewlett-Packard
Publications
Featured research published by Peter C. Maxwell.
International Test Conference | 1991
Peter C. Maxwell; Robert C. Aitken; Vic Johansen; Inshen Chiang
This paper discusses the use of stuck-at fault coverage as a means of determining quality levels. Data from a part tested with both functional and scan tests are analyzed and compared to three existing theories. It is shown that reasonable predictions of quality level are possible for the functional tests, but that scan tests produce significantly worse quality levels than predicted. Apparent clustering of defects resulted in very good quality levels for fault coverages less than 99%.
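The coverage-to-quality relation the abstract refers to can be sketched with the classic Williams-Brown model, DL = 1 - Y^(1-T), relating defect level DL to yield Y and fault coverage T. This is an assumption for illustration; the abstract does not name the three theories it compares.

```python
# Williams-Brown defect-level model (illustrative assumption, not
# necessarily one of the three theories compared in the paper):
# DL = 1 - Y**(1 - T), with process yield Y and fault coverage T.

def defect_level(yield_fraction: float, fault_coverage: float) -> float:
    """Estimated fraction of shipped parts that are defective (0..1)."""
    return 1.0 - yield_fraction ** (1.0 - fault_coverage)

# Example: 50% yield, 95% stuck-at coverage -> roughly 3.4% escapes
dl = defect_level(0.50, 0.95)
print(f"Predicted defect level: {dl:.4%}")
```

Note how even high coverage leaves a noticeable escape rate at moderate yield, which is why studies like this compare predicted and measured quality levels.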
International Test Conference | 1996
Peter C. Maxwell; Robert C. Aitken; Kathleen R. Kollitz; Allen C. Brown
This paper investigates the relative effectiveness of scan-based AC tests, IDDQ tests and functional tests for the detection of defective chips, particularly those exhibiting delay faults. Data are presented from an experiment in which a production ASIC was tested with a number of scan and functional tests, together with IDDQ. Results show that all tests detect unique failures, indicating the presence of additional unmodelled faults. The effectiveness of the AC tests shows that targeting additional faults produces better quality than relying on peripheral coverage of existing tests.
International Test Conference | 1993
Peter C. Maxwell; Robert C. Aitken
In order to simulate the effects of bridging faults correctly it is necessary to take into account the fact that not all gate inputs have the same logic threshold. This paper presents a general technique which can be used to determine if a particular structure of transistors gives rise to a bridge voltage which is higher or lower than a given threshold, in most cases without requiring circuit simulation. If desired, the technique can also be used to predict actual voltages, which agree well with SPICE simulations. The approach is substantially faster than previous approaches for accurately simulating bridging faults.
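The core idea, that a bridged node settles to an intermediate voltage which different gate inputs may interpret differently, can be sketched with a crude resistive-divider model. The drive strengths and thresholds below are invented for illustration and are not the paper's actual technique.

```python
# Hypothetical sketch: two bridged nodes, one driven high and one low,
# settle to a voltage set by the relative drive strengths. Each
# downstream gate input then interprets that voltage against its own
# logic threshold, so the same bridge can read as 1 at one input and
# 0 at another. All numbers here are illustrative assumptions.

def bridge_voltage(vdd: float, pullup_g: float, pulldown_g: float) -> float:
    """Crude divider estimate (conductances in arbitrary units)."""
    return vdd * pullup_g / (pullup_g + pulldown_g)

def interpreted_value(v: float, input_threshold: float) -> int:
    """Logic value seen by a gate input with the given threshold."""
    return 1 if v > input_threshold else 0

v = bridge_voltage(5.0, pullup_g=2.0, pulldown_g=3.0)  # 2.0 V
# An input with a 2.5 V threshold reads 0; one at 1.8 V reads 1:
print(interpreted_value(v, 2.5), interpreted_value(v, 1.8))
```

This divergence between inputs is exactly why a single fixed threshold gives wrong bridging-fault behavior.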
Journal of Electronic Testing | 1992
Peter C. Maxwell; Robert C. Aitken
This article is concerned with the role of IDDQ testing, in conjunction with other types of tests, in achieving high quality. In particular, the argument is made that rather than use a single fault coverage, it is better to obtain a number of different coverages, for different types of faults. To demonstrate the need for increasingly stringent fault coverage requirements, an analysis is given of the relationship between quality, fault coverage and chip area. This analysis shows that as chip area increases, fault coverage must also increase to maintain constant quality levels. Data are then presented from a production part tested with IDDQ, scan, timing and functional tests. To realistically fault grade IDDQ tests, three different coverage metrics are considered. The data show differences in tester failures compared to these coverage metrics, depending on whether one uses total IDDQ failures (parts which fail IDDQ regardless of whether they fail other tests as well) or unique IDDQ failures (parts which fail only IDDQ). The relative effectiveness of the different components of the full test suite are analyzed and it is demonstrated that no component can be removed without suffering a reduction in quality.
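The area/coverage/quality relationship described above can be illustrated by combining a Poisson yield model Y = exp(-A*D0) with the defect-level relation DL = 1 - Y^(1-T). Solving for the coverage T that holds DL constant shows T must rise with area A. The models and numbers are illustrative assumptions, not the article's exact analysis.

```python
import math

# Sketch of the area-vs-coverage argument: assume Poisson yield
# Y = exp(-A * D0) and defect level DL = 1 - Y**(1 - T). Holding DL
# fixed and solving for T shows required coverage grows with area A.
# A (cm^2) and D0 (defects/cm^2) are illustrative assumptions.

def required_coverage(area: float, d0: float, target_dl: float) -> float:
    y = math.exp(-area * d0)                    # yield for this area
    return 1.0 - math.log(1.0 - target_dl) / math.log(y)

for area in (0.5, 1.0, 2.0):
    t = required_coverage(area, d0=1.0, target_dl=0.001)
    print(f"area {area} cm^2 -> required coverage {t:.4f}")
```

Doubling the area roughly halves the allowable undetected-fault fraction (1 - T) for the same quality target.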
IEEE Design & Test of Computers | 1993
Peter C. Maxwell; Robert C. Aitken
The use of stuck-at-fault coverage for estimating overall quality levels is examined. Data from a part tested with both functional and scan tests are analyzed and compared with quality predictions generated by three existing theoretical models. It is shown that reasonable predictions are possible for functional tests, but that scan tests, due to misuse of theoretical equations, produce significantly worse quality levels than predicted.
International Test Conference | 1998
Alan Righter; Charles F. Hawkins; Jerry M. Soden; Peter C. Maxwell
Sensitive IDDQ and LVMF (low VDD, maximum frequency) tests were done to examine reliability indicators and burn-in economics for CMOS ICs. These experiments used 3,495 CMOS 1 Mb SRAMs for special IDDQ tests, LVMF tests, burn-in and life tests, and failure analysis. IDDQ was measured at the maximum VDD tolerated by the IC, ranging from 40% to 60% above the nominal VDD. These data indicate that elevated-voltage IDDQ screening can replace burn-in for CMOS ICs, including ICs from poor-quality (rogue or maverick) lots. A low-reliability-risk group (27% of the population) that failed only IDDQ tests was identified by IDDQ signatures and life tests. Two other defect classes were examined for their yield-reclamation potential for ICs that failed only IDDQ tests.
International Test Conference | 1998
Peter C. Maxwell; Jeff Rearick
This paper presents a switch-level simulation-based method for estimating quiescent current values. The simulator identifies transistors that are in the proper state to experience leakage mechanisms. This information is combined with data about both the size of these transistors and various process parameters in order to calculate the actual IDDQ value. SPICE simulation results are presented on a variety of circuits, both to calibrate the simulator and to demonstrate state, time and sequence dependencies of circuits. Some preliminary results are also given for an actual production chip.
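The estimation idea can be sketched as: flag transistors whose state exposes a leakage path for the current vector, scale a per-width process leakage figure by each transistor's size, and sum. The data structure and numbers below are illustrative assumptions, not the paper's actual simulator.

```python
# Hedged sketch of switch-level IDDQ estimation for one vector state:
# only transistors flagged as off-with-a-leakage-path contribute, and
# each contribution scales with device width. The leakage figure and
# device list are invented for illustration.

# (transistor width in um, exposes-leakage-path flag) for one state
transistors = [
    (2.0, True),   # off device with full drain-source bias: leaks
    (4.0, False),  # on device: no quiescent leakage contribution
    (1.5, True),
]
LEAKAGE_PER_UM = 0.3e-9  # amps per um of width, assumed process figure

iddq = sum(w * LEAKAGE_PER_UM for w, leaking in transistors if leaking)
print(f"Estimated IDDQ for this state: {iddq * 1e9:.2f} nA")
```

Because the contributing set changes with the applied vector, the estimate is state-dependent, matching the dependencies the paper demonstrates.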
IEEE Design & Test of Computers | 2003
Peter C. Maxwell
For years, it has been common to run a test at the wafer level and then exactly the same test again at the package level. This article shows how one company took a detailed look at the wafer/package test mix and adjusted it to reduce cost while retaining quality.
International Test Conference | 1994
Peter C. Maxwell; Robert C. Aitken; Leendert M. Huisman
This paper addresses problems associated with the production and interpretation of traditional fault coverage numbers. The first part addresses the issue of non-uniform distribution of detected faults. It is shown that there is a large difference in final quality between covering the chip all over and leaving parts relatively untested, even if the coverage is the same in both cases. The second part deals with the use of weighted, rather than unweighted fault coverages and investigates the use of readily-available extracted capacitance information to produce a weighted fault coverage which is more useful for producing quality estimates, without having to perform a full defect analysis. Results show significant differences in weighted versus unweighted coverages, and also that these differences can be in either direction.
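The weighted-coverage idea can be illustrated in a few lines: give each fault a weight (for example, proportional to extracted node capacitance as a proxy for the chance a defect lands there) and compare weighted against unweighted coverage. The fault list and weights below are invented for illustration.

```python
# Illustrative sketch of weighted vs unweighted fault coverage.
# Weights stand in for extracted-capacitance-based likelihoods; the
# values are assumptions, not data from the paper.

faults = [
    # (detected by the test set, weight from extracted capacitance)
    (True,  5.0),
    (True,  1.0),
    (False, 0.5),
    (False, 3.5),
]

unweighted = sum(d for d, _ in faults) / len(faults)
weighted = sum(w for d, w in faults if d) / sum(w for _, w in faults)
print(f"unweighted {unweighted:.2f}, weighted {weighted:.2f}")
```

Here the weighted figure is higher than the unweighted one; swapping which faults carry the large weights reverses the direction, matching the paper's observation that the difference can go either way.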
International Test Conference | 2006
Peter C. Maxwell
This paper discusses aspects which must be considered in order to conduct a successful test experiment. Emphasis is on using data collected from testing large numbers of parts. Statistical arguments are presented relating to drawing meaningful conclusions, including the effects of sample size. Several pitfalls that can corrupt or skew data are discussed, and the importance of unambiguous presentation of data and results is stressed.
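One standard sample-size argument of the kind the abstract alludes to is the "rule of three": if zero failures are seen in n tested parts, the 95% upper confidence bound on the failure rate is approximately 3/n. This particular rule is an illustrative choice, not necessarily the one used in the paper.

```python
# "Rule of three" sketch: with zero observed failures in n parts, the
# 95% upper confidence bound on the true defect rate is about 3/n.
# Small samples therefore say very little about low defect rates.

def upper_bound_95(n_tested: int) -> float:
    """Approximate 95% upper bound on defect rate given zero failures."""
    return 3.0 / n_tested

for n in (100, 1_000, 10_000):
    print(f"n={n}: defect rate below {upper_bound_95(n):.4%} at 95% conf.")
```

Testing 100 parts with no failures only bounds the escape rate below 3%, which is why experiments targeting ppm-level quality claims need very large sample sizes.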