Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where M.R. Mercer is active.

Publication


Featured research published by M.R. Mercer.


International Test Conference | 1996

Iddq test: sensitivity analysis of scaling

Thomas W. Williams; R.H. Dennard; Rohit Kapur; M.R. Mercer; M. Maly

While technology is changing the face of the world, it is itself changing by leaps and bounds; there is a continuing trend to put more functionality on the same piece of silicon. It has been shown that, without major changes in CMOS technology, device scaling has a significant impact on the effectiveness of Iddq testing. The sensitivity of Iddq testing to individual device parameters is studied. It is explained how Iddq testing becomes increasingly ineffective in scaled products with respect to most parameters, and how it can be improved with respect to others.
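As a rough illustration of the scaling effect described above (a sketch using the standard subthreshold-leakage approximation, with purely illustrative numbers rather than figures from the paper):

# Why background leakage erodes the Iddq defect signature as devices scale.
# Per-device off-state current follows roughly I_off ~ I0 * 10^(-Vt / S);
# all parameter values below are assumptions for illustration only.

def chip_background_iddq(n_transistors, vt_volts, i0_amps=1e-7, s_mv_per_dec=90.0):
    """Approximate per-chip off-state current: per-device subthreshold
    leakage times the transistor count."""
    i_off = i0_amps * 10 ** (-vt_volts * 1000.0 / s_mv_per_dec)
    return n_transistors * i_off

defect_current = 100e-6  # assume a bridging defect draws about 100 uA

for n, vt in [(1e6, 0.70), (10e6, 0.50), (100e6, 0.35)]:  # scaling trend
    background = chip_background_iddq(n, vt)
    print(f"{n:.0e} devices, Vt={vt:.2f} V: background {background*1e6:10.4f} uA, "
          f"defect/background = {defect_current / background:10.3f}")

# As Vt drops and device count grows, the defect current becomes a small
# fraction of the fault-free Iddq, so a single pass/fail current threshold
# loses discriminating power.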


VLSI Test Symposium | 1999

REDO-random excitation and deterministic observation-first commercial experiment

Michael R. Grimaila; Sooryong Lee; Jennifer Dworak; Kenneth M. Butler; B. Stewart; Hari Balachandran; B. Houchins; V. Mathur; Jaehong Park; Li-C. Wang; M.R. Mercer

For many years, non-target detection experiments have been simulated by using AND/OR bridges or gross delay faults as surrogates. For example, the defective part level can be estimated based upon surrogate detection when test patterns target stuck-at faults in the circuit. For the first time, test pattern generation techniques that attempt to maximize non-target defect detection have been used to test a real, 100% scanned, commercial chip consisting of 75K logic gates. In this experiment, the defective part level for REDO-based patterns was 1,288 parts per million lower than that achieved by DC stuck-at-based patterns generated using today's state-of-the-art tools and techniques.
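A minimal sketch of the surrogate-based estimation idea mentioned above, assuming a simple escaped-defect model (the scaling constant and counts are hypothetical, not the paper's data):

# Fault-simulate the stuck-at-targeted patterns against surrogate defects
# (e.g. AND/OR bridges) and scale the undetected fraction by an assumed
# per-chip defect probability to obtain parts per million.

def surrogate_dpl(n_surrogates, n_detected, defect_prob_per_chip=0.02):
    """Escaped-defect estimate in parts per million (illustrative model)."""
    undetected_fraction = (n_surrogates - n_detected) / n_surrogates
    return 1e6 * defect_prob_per_chip * undetected_fraction

# e.g. 98.5% of 40,000 surrogate bridges detected by the stuck-at pattern set
print(surrogate_dpl(40_000, 39_400))   # -> 300.0 ppm under these assumptions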


IEEE Design & Test of Computers | 2001

Defect-oriented testing and defective-part-level prediction

Jennifer Dworak; J.D. Wicker; Sooryong Lee; Michael R. Grimaila; M.R. Mercer; Kenneth M. Butler; B. Stewart; Li-C. Wang

After an integrated circuit (IC) design is complete, but before first silicon arrives from the manufacturing facility, the design team prepares a set of test patterns to isolate defective parts. Applying this test pattern set to every manufactured part reduces the fraction of defective parts erroneously sold to customers as defect-free parts. This fraction is referred to as the defect level (DL). However, many IC manufacturers quote defective part level, which is obtained by multiplying the defect level by one million to give the number of defective parts per million. Ideally, we could accurately estimate the defective part level by analyzing the circuit structure, the applied test-pattern set, and the manufacturing yield. If the expected defective part level exceeded some specified value, then either the test pattern set or (in extreme cases) the design could be modified to achieve adequate quality. Although the IC industry widely accepts stuck-at fault detection as a key test-quality figure of merit, it is nevertheless necessary to detect other defect types seen in real manufacturing environments. A defective-part-level model combined with a method for choosing test patterns that use site observation can predict defect levels in submicron ICs more accurately than simple stuck-at fault analysis.
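For background, the classic Williams-Brown relationship is the simplest way to turn yield and fault coverage into a defect level; the sketch below uses it with made-up numbers, whereas the article argues that a defect-oriented model predicts submicron defective part levels more accurately than such stuck-at-based figures:

# Williams-Brown estimate: DL = 1 - Y**(1 - T), with Y the manufacturing
# yield and T the fault coverage of the applied pattern set.

def defect_level_dpm(yield_fraction, fault_coverage):
    """Defect level expressed in defective parts per million (DL * 10**6)."""
    dl = 1.0 - yield_fraction ** (1.0 - fault_coverage)
    return dl * 1e6

for cov in (0.90, 0.99, 0.999):
    print(f"Y=0.60, T={cov:.3f} -> {defect_level_dpm(0.60, cov):9.1f} DPM")

# Even 99.9% stuck-at coverage leaves roughly 500 DPM at 60% yield, and the
# real figure can be worse when defects that are not stuck-at-like escape.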


Design, Automation and Test in Europe | 2002

A New ATPG Algorithm to Limit Test Set Size and Achieve Multiple Detections of All Faults

Sooryong Lee; B. Cobb; Jennifer Dworak; Michael R. Grimaila; M.R. Mercer

Deterministic observation and random excitation of fault sites during the ATPG process dramatically reduce the overall defective part level. However, multiple observations of each fault site lead to increased test set size and require more tester memory. In this paper, we propose a new ATPG algorithm to find a near-minimal test pattern set that detects faults multiple times and achieves an excellent defective part level. This greedy approach uses 3-value fault simulation to estimate the potential value of each vector candidate at each stage of ATPG. The results show that, in most cases, a close-to-minimal vector set can be generated using only dynamic compaction techniques. Finally, a systematic method to trade off defective part level against test set size is also presented.
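A minimal sketch of the greedy idea (assumed data structures, not the paper's implementation): score each candidate vector by how many still-needed detections it supplies and keep the best until every fault has been detected n times. A real flow would obtain the detection sets from 3-value fault simulation.

def greedy_n_detect(candidates, faults, n=3):
    """candidates: dict vector_id -> set of fault ids that vector detects."""
    remaining = {f: n for f in faults}            # detections still required
    pool = dict(candidates)                       # unused candidate vectors
    selected = []
    while pool and any(v > 0 for v in remaining.values()):
        # score = number of outstanding detections this vector would supply
        best = max(pool, key=lambda v: sum(remaining.get(f, 0) > 0
                                           for f in pool[v]))
        if sum(remaining.get(f, 0) > 0 for f in pool[best]) == 0:
            break                                 # nothing useful is left
        for f in pool.pop(best):
            if remaining.get(f, 0) > 0:
                remaining[f] -= 1
        selected.append(best)
    return selected

# Toy usage with hypothetical detection data:
vectors = {"t0": {"a", "b"}, "t1": {"b", "c"}, "t2": {"a", "c"}, "t3": {"a"}}
print(greedy_n_detect(vectors, faults={"a", "b", "c"}, n=2))   # e.g. ['t0', 't1', 't2']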


VLSI Test Symposium | 1994

Limitations in predicting defect level based on stuck-at fault coverage

J. Park; M. Naivar; Rohit Kapur; M.R. Mercer; Thomas W. Williams

The stuck-at fault model has been used for decades as a guide to the test generation process and as an evaluation mechanism for the quality of the test set. As demands on quality have increased, the use of the stuck-at fault model as a predictor of the defect level has been questioned. This paper provides some insight on the issue and shows the limitations of using stuck-at fault coverage to predict the defect level. The authors demonstrate that as the defect level decreases, the uncertainty of the estimate grows.
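One way to see why small defect levels are hard to estimate precisely (a back-of-the-envelope sketch, not the paper's analysis): if escapes behave roughly like a binomial sample over N parts with escape probability p, the relative error of any estimate of p scales like 1/sqrt(N*p), so it grows as p shrinks.

import math

N = 1_000_000                      # parts observed
for p in (1e-3, 1e-4, 1e-5):       # true defect level (fraction defective)
    rel_err = math.sqrt(p * (1 - p) / N) / p    # std dev of p_hat relative to p
    print(f"DL = {p:g}  ->  relative uncertainty ~ {rel_err:.0%}")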


International Test Conference | 2000

Enhanced DO-RE-ME based defect level prediction using defect site aggregation-MPG-D

Jennifer Dworak; Michael R. Grimaila; Sooryong Lee; Li-C. Wang; M.R. Mercer

Predicting the final value of the defective part level after the application of a set of test vectors is not a simple problem. For the defective part level to decrease, both the excitation and observation of defects must occur. This research shows that the probability of exciting an as-yet-undetected defect does indeed decrease exponentially as the number of observations increases. In addition, a new defective part level model is proposed which accurately predicts the final defective part level (even at high fault coverages) for several benchmark circuits and which continues to provide good predictions even as changes are made to the set of test patterns applied.
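The exact MPG-D formulation is not reproduced here; the sketch below only illustrates the exponential-decay observation from the abstract, using an assumed form p_k = p0 * exp(-k / tau) for the chance that a still-undetected defect at a site is excited after the site's k-th observation, and aggregating that into a projected part level.

import math

def projected_dpl(site_observation_counts, p0=0.01, tau=5.0):
    """Crude projection in parts per million: each site's escape probability
    shrinks exponentially with how often the site has been observed."""
    escape = sum(p0 * math.exp(-k / tau) for k in site_observation_counts)
    return 1e6 * escape / max(len(site_observation_counts), 1)

print(projected_dpl([1] * 1000))   # every site observed once
print(projected_dpl([8] * 1000))   # every site observed eight times -> lower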


International Test Conference | 1996

Using target faults to detect non-target defects

Li-C. Wang; M.R. Mercer; Thomas W. Williams

The traditional ATPG method relies upon target faults to cover all defects. Since faults do not model all possible defects, testing quality depends on the fortuitous detection of non-target defects. By analyzing different ATPG approaches, this paper identifies critical factors that greatly affect fortuitous detection. To enhance the fortuitous detection of non-target defects through target faults, new concepts and novel ATPG methods are proposed.


Design, Automation and Test in Europe | 2004

Balanced excitation and its effect on the fortuitous detection of dynamic defects

Jennifer Dworak; B. Cobb; J. Wingfield; M.R. Mercer

Dynamic defects are less likely to be fortuitously detected than static defects because they have more stringent detection requirements. We show that (in addition to more site observations) balanced excitation is essential for detection of these defects, and we present a metric for estimating this degree of balance. We also show that excitation balance correlates with the parameter τ in the MPG-D defective part level model.
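The paper's balance metric is not reproduced here; the sketch below is one hypothetical way to quantify excitation balance: for each fault site, compare how often the test set drives it to 0 versus 1, so perfectly balanced excitation scores 1.0 and one-sided excitation scores 0.0.

def excitation_balance(zero_counts, one_counts):
    """zero_counts[i], one_counts[i]: times site i was driven to 0 / to 1."""
    scores = []
    for z, o in zip(zero_counts, one_counts):
        total = z + o
        scores.append(0.0 if total == 0 else 2.0 * min(z, o) / total)
    return sum(scores) / len(scores)              # average over all sites

print(excitation_balance([50, 90, 10], [50, 10, 0]))   # sites score 1.0, 0.2, 0.0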


International Test Conference | 2005

An optimal test pattern selection method to improve the defect coverage

Yuxin Tian; Michael R. Grimaila; Weiping Shi; M.R. Mercer

It is well known that n-detection test sets are effective at detecting unmodeled defects and improving defect coverage. However, in these sets each of the n-detection test patterns carries the same weight in the overall test set performance; in other words, the test pattern that detects a fault for the first time plays the same role as the test pattern that detects that fault for the n-th time. In this paper, we propose a linear-programming-based optimal test pattern selection method which aims at reducing the overall defective part level (DPL). Using resistive bridge faults as surrogates, our experimental results on ISCAS85 circuits demonstrate that the proposed test pattern selection method achieves higher defect coverage than the traditional n-detection method.
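A sketch of one plausible linear-programming relaxation for pattern selection (an assumed formulation, not necessarily the one in the paper): give each candidate pattern a fractional weight, require every surrogate defect to be covered at least once, and minimize the total weight; rounding then yields a concrete subset. Requires NumPy and SciPy.

import numpy as np
from scipy.optimize import linprog

# detects[i][j] = 1 if pattern j detects surrogate defect i (toy data)
detects = np.array([[1, 0, 0, 1],
                    [0, 1, 0, 1],
                    [0, 0, 1, 1]])
n_patterns = detects.shape[1]

c = np.ones(n_patterns)              # minimize the number of selected patterns
A_ub = -detects                      # detects @ x >= 1  ->  -detects @ x <= -1
b_ub = -np.ones(detects.shape[0])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 1)] * n_patterns,
              method="highs")
chosen = [j for j, xj in enumerate(res.x) if xj > 0.5]   # naive rounding
print(res.x, chosen)                 # fractional solutions need a proper
                                     # rounding or ILP step in practice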


International Test Conference | 2002

Analysis of delay test effectiveness with a multiple-clock scheme

Jing-Jia Liou; Li-C. Wang; Kwang-Ting Cheng; Jennifer Dworak; M.R. Mercer; Rohit Kapur; Thomas W. Williams

In conventional delay testing, two types of tests, transition tests and path delay tests, are often considered. The test clock frequency is usually set to a single pre-determined value equal to the system clock. This paper discusses the potential of enhancing test effectiveness by using multiple test sets with multiple clock frequencies. Two intuitions motivate our analysis: 1) multiple test sets can deliver higher test quality than a single test set, and 2) for a given set of AC delay patterns, a carefully selected, tighter clock results in higher effectiveness at screening out potentially defective chips. Hence, by using multiple test sets the overall quality of AC delay testing can be enhanced, and by using multiple-clock schemes the cost of adding the additional pattern sets can be minimized. In this paper, we analyze the feasibility of this new delay test methodology with respect to different combinations of pattern sets and different circuit characteristics. We discuss the pros and cons of multiple-clock schemes through analysis and experiments using a statistical delay evaluation and delay-defect injection framework.
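A toy sketch of the tighter-clock intuition (all numbers assumed): a pattern that exercises a path screens a delay defect only when the path delay plus the defect-induced extra delay exceeds the capture clock period, so a faster clock applied to short paths catches smaller delay defects.

def screened(path_delay_ns, defect_delta_ns, clock_period_ns):
    """True if the defective transition misses the capture edge."""
    return path_delay_ns + defect_delta_ns > clock_period_ns

system_clock = 10.0     # ns, the single conventional test clock period
tight_clock = 6.0       # ns, a faster clock reserved for short paths

print(screened(5.0, 2.0, system_clock))   # False: a 2 ns slowdown escapes
print(screened(5.0, 2.0, tight_clock))    # True: caught at the tighter clock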

Collaboration


Dive into M.R. Mercer's collaborations.

Top Co-Authors

Li-C. Wang

University of California

Michael R. Grimaila

Air Force Institute of Technology

Jing-Jia Liou

National Tsing Hua University
