Osei Poku
Carnegie Mellon University
Publication
Featured research published by Osei Poku.
international test conference | 2006
Rao Desineni; Osei Poku; Ronald D. Blanton
DIAGNOSIX is a comprehensive fault diagnosis methodology for characterizing failures in digital ICs. Using limited layout information, DIAGNOSIX automatically extracts a fault model for a failing IC by analyzing the behavior of the physical neighborhood surrounding suspect lines. Results from several simulated failures and over 800 failing ICs reveal a significant improvement in localization. More importantly, the output of DIAGNOSIX is an accurate model of the logic-level defect behavior that provides useful insight into the actual defect mechanism. Experimental results for the failing chips with successful physical failure analysis reveal that the extracted faults accurately describe the actual defects.
vlsi test symposium | 2009
Xiaochun Yu; Yen-Tzu Lin; Wing Chiu Tam; Osei Poku; Ronald D. Blanton
We propose to achieve and maintain ultra-high quality of digital circuits on a per-design basis by (i) monitoring the type of failures that occur through volume diagnosis, and (ii) changing the test patterns to match the current failure population characteristics. As opposed to the current approach, which assumes sufficient quality levels are maintained using the tests developed at design time, the methodology described here presupposes that fallout characteristics can change over time but with a time constant that is sufficiently slow, thereby allowing test content to be altered so as to maximize coverage of the failure types actually occurring. Even if this assumption proves to be false, the test content can still be tuned to match the characteristics of the fallout population when the fallout characteristics are unchanging. Under either scenario, it should then be possible to minimize DPPM for a given constraint on test costs, or alternatively ensure that DPPM does not exceed some pre-determined threshold. Our approach does not have to cope with situations where fallout characteristics change rapidly (e.g., excursions), since there are existing methods to deal with them. Our methodology uses a diagnosis technique that can extract defect activation conditions, a new model for estimating DPPM, and an efficient test selection method for reducing DPPM based on volume diagnosis results. Circuit-level simulation involving various types of defects shows that DPPM could be reduced by 30% using our methodology. In addition, experiments on real silicon chip failures show that DPPM can be significantly reduced, without additional test execution cost, by altering the content (but not the size) of the applied test set.
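The core of the tuning idea above can be illustrated with a back-of-the-envelope calculation (this is not the paper's actual DPPM model): given the failure-type mix observed through volume diagnosis and a test set's per-type coverage, the escape rate can be estimated and the test content re-weighted toward the failure types actually occurring. All failure-type names and numbers below are hypothetical.

```python
# Hypothetical sketch: estimate DPPM as the sum of per-type escapes,
# where an escape rate is (occurrence rate) * (1 - test coverage).
def estimate_dppm(defect_rates, coverage):
    """defect_rates: defective parts per million, by failure type.
    coverage: fraction of each type the test set detects."""
    return sum(rate * (1.0 - coverage[t]) for t, rate in defect_rates.items())

# Hypothetical fallout mix learned from volume diagnosis:
rates = {"bridge": 120.0, "open": 80.0, "cell-internal": 40.0}
# Design-time test set vs. one re-tuned toward cell-internal defects:
original = {"bridge": 0.95, "open": 0.90, "cell-internal": 0.50}
retuned  = {"bridge": 0.93, "open": 0.90, "cell-internal": 0.90}
print(round(estimate_dppm(rates, original), 1))  # 34.0
print(round(estimate_dppm(rates, retuned), 1))   # 20.4
```

Trading a little coverage on a well-covered type for much more on an under-covered one lowers the estimated DPPM without growing the test set, which mirrors the paper's content-not-size result.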
international test conference | 2010
Wing Chiu Tam; Osei Poku; Ronald D. Blanton
Systematic defects due to design-process interactions are a dominant component of integrated circuit (IC) yield loss in nano-scaled technologies. Test structures do not adequately represent the product in terms of feature diversity and feature volume, and therefore are unable to identify all the systematic defects that affect the product. This paper describes a method that uses diagnosis to identify layout features that do not yield as expected. Specifically, clustering techniques are applied to layout snippets of diagnosis-implicated regions from (ideally) a statistically-significant number of IC failures for identifying feature commonalties. Experiments involving an industrial chip demonstrate the identification of possible systematic yield loss due to lithographic hotspots.
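The clustering step above can be sketched in miniature (the feature encoding and distance threshold here are illustrative assumptions, not the paper's actual technique): each diagnosis-implicated layout snippet is encoded as a feature vector, and snippets that are close in feature space are grouped. A cluster far larger than expected by chance would flag a candidate systematic defect.

```python
# Hypothetical sketch: greedy single-pass clustering of snippet feature
# vectors (e.g., counts of vias, min-spacing runs, jogs -- assumed features).
from math import dist

def cluster_snippets(snippets, threshold=1.5):
    """Group feature vectors whose Euclidean distance to a cluster's
    first member is within `threshold`."""
    clusters = []  # each cluster is a list of vectors
    for vec in snippets:
        for cluster in clusters:
            if dist(vec, cluster[0]) <= threshold:
                cluster.append(vec)
                break
        else:
            clusters.append([vec])
    return clusters

# Three snippets share near-identical features; one is an outlier.
snippets = [(4, 2, 1), (4, 2, 0), (5, 2, 1), (9, 7, 3)]
clusters = cluster_snippets(snippets)
print(len(clusters))                   # prints 2
print(max(len(c) for c in clusters))   # dominant cluster has 3 members
```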
design automation conference | 2008
Wing Chiu Tam; Osei Poku; Ronald D. Blanton
Traditional software-based diagnosis of failing chips typically identifies several lines where the failure is believed to reside. However, these lines can span multiple layers and can be very long, which makes physical failure analysis difficult. In contrast, there are emerging diagnosis techniques that identify both the faulty lines and the neighboring conditions under which an affected line becomes faulty. In this paper, an approach is presented to improve failure localization by automatically analyzing the information associated with the outcome of diagnosis. Experimental results show a significant improvement in failure localization when this method is applied to 106 real IC failures.
design automation conference | 2012
Hongfei Wang; Osei Poku; Xiaochun Yu; Sizhe Liu; Ibrahima Komara; Ronald D. Blanton
Test data collection for a failing integrated circuit (IC) can be very expensive and time consuming. Many companies now collect a fixed amount of test data regardless of the failure characteristics. As a result, limited data collection can lead to inaccurate diagnosis, while an excessive amount increases cost, not only for unnecessary test-data collection but also for test execution and data storage. In this work, the objective is to develop a method for predicting the precise amount of test data necessary to produce an accurate diagnosis. By analyzing the failing outputs of an IC during its actual test, the developed method dynamically determines at which failing test pattern to terminate testing, producing an amount of test data that is sufficient for an accurate diagnosis analysis. The method leverages several statistical learning techniques, and is evaluated using actual data from a population of failing chips and five standard benchmarks. Experiments demonstrate that test-data collection can be reduced by >30% (as compared to collecting the full-failure response) while at the same time ensuring >90% diagnosis accuracy. Prematurely terminating test-data collection at fixed levels (e.g., 100 failing bits) is also shown to negatively impact diagnosis accuracy.
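The stopping decision described above can be caricatured with a simple diminishing-returns rule (the paper uses statistical learning; this substitute heuristic and the per-pattern data are purely illustrative): stop once several consecutive failing patterns contribute no failing output bits that have not already been observed.

```python
# Illustrative sketch, NOT the paper's learned predictor: terminate
# test-data collection when `window` consecutive failing patterns each
# add fewer than `min_new_bits` previously unseen failing output bits.
def stop_pattern(fail_bits_per_pattern, window=2, min_new_bits=1):
    """Return the 1-based index of the failing pattern after which
    collection can stop."""
    seen = set()
    quiet = 0  # consecutive patterns with too little new information
    for i, bits in enumerate(fail_bits_per_pattern, start=1):
        new = set(bits) - seen
        seen |= set(bits)
        quiet = quiet + 1 if len(new) < min_new_bits else 0
        if quiet >= window:
            return i
    return len(fail_bits_per_pattern)  # never enough evidence to stop early

# Failing output bits observed per pattern (hypothetical):
log = [{1, 5}, {5, 9}, {1, 9}, {9}, {5}]
print(stop_pattern(log))  # prints 4: the fifth pattern is never collected
```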
design, automation, and test in europe | 2008
Yen-Tzu Lin; Osei Poku; Naresh K. Bhatti; Ronald D. Blanton
N-detect test has been shown to have a higher likelihood of detecting defects. However, traditional definitions of N-detect test do not necessarily exploit the localized characteristics of defects. In physically-aware N-detect test, the objective is to ensure that the N tests establish N different logical states on the signal lines in the physical neighborhood surrounding the targeted fault site. We present a test selection procedure for creating a physically-aware N-detect test set that satisfies a user-provided constraint on test-set size. Results produced for an industrial test chip demonstrate the effectiveness and practicability of our pattern selection approach. Specifically, we show that we detect virtually the same number of faults 10 or more times as a traditional 10-detect test set, and increase the number of neighborhood states and the number of faults with 10 or more states by 18.0% and 4.7%, respectively, without increasing the number of tests over a traditional 10-detect test set.
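A constrained selection loop of the kind described above is commonly built as a greedy set-cover pass; the sketch below is a generic illustration under the assumption that each candidate test has already been annotated with the (fault, neighborhood-state) pairs it establishes, which is not necessarily how the paper's procedure works.

```python
# Hedged sketch of greedy physically-aware test selection:
# repeatedly pick the test that covers the most not-yet-covered
# (fault, neighborhood-state) pairs, until the size budget is met.
def select_tests(candidates, budget):
    """candidates: dict test_name -> set of (fault, state) pairs."""
    covered, chosen = set(), []
    remaining = dict(candidates)
    while remaining and len(chosen) < budget:
        best = max(remaining, key=lambda t: len(remaining[t] - covered))
        if not remaining[best] - covered:
            break  # no candidate adds new coverage
        covered |= remaining.pop(best)
        chosen.append(best)
    return chosen, covered

# Hypothetical candidates: faults f1/f2, 2-bit neighborhood states.
tests = {
    "t1": {("f1", "00"), ("f1", "01")},
    "t2": {("f1", "01"), ("f2", "10")},
    "t3": {("f2", "10")},
}
chosen, covered = select_tests(tests, budget=2)
print(chosen)        # prints ['t1', 't2']
print(len(covered))  # prints 3
```

Note that `t3` is never picked: within the budget, the greedy choice of `t2` already establishes its only neighborhood state.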
IEEE Design & Test of Computers | 2012
Ronald D. Blanton; Wing Chiu Tam; Xiaochun Yu; Jeffrey E. Nelson; Osei Poku
A variety of yield-learning techniques are essential since no single approach can effectively find every manufacturing perturbation that can lead to yield loss. Test structures, for example, can range from being simple in nature (combs and serpentine structures for measuring defect-density and size distributions) to more complex, active structures that include transistors, ring oscillators, and SRAMs. Test structures are designed to provide seamless access to a given failure type: its size, its location, and possibly other pertinent characteristics.
international test conference | 2008
Yen-Tzu Lin; Osei Poku; Ronald D. Blanton; Phil Nigh; Peter Lloyd; Vikram Iyengar
Physically-aware N-detect attempts to improve the detection characteristics of traditional N-detect by exploiting the localized characteristics of defects. Specifically, in addition to detecting each fault N times, we also require that the physical neighborhood surrounding the target change state as well. In this work, the effectiveness of the physically-aware metric is examined using two approaches. First, tester responses from an in-production IBM chip are analyzed to compare the physically-aware N-detect test with other traditional tests that include stuck-at, IDDQ, logic BIST, and delay tests. Second, diagnostic results from LSI chip failures are utilized to directly compare the traditional and physically-aware N-detect metrics. Results from both experiments demonstrate the effectiveness of physically-aware N-detect test in detecting defects in modern industrial designs.
IEEE Design & Test of Computers | 2006
Jeffrey E. Nelson; Thomas Zanon; Jason G. Brown; Osei Poku; Ronald D. Blanton; Wojciech Maly; Brady Benware; Chris Schuermyer
Defect density and size distributions (DDSDs) are important parameters for characterizing spot defects in a process. This article addresses random spot defects, which affect all processes and currently require a heavy silicon investment to characterize, and proposes a new approach for characterizing them. The approach overcomes the silicon-area-overhead obstacle by using available wafer-sort test results to measure critical-area yield-model parameters, requiring no additional silicon area. Results for chips fabricated in silicon confirm the simulation results: DDSDs can be measured for a process from ordinary digital circuits using only slow, structural test results from the product.
design automation conference | 2009
Wing Chiu Tam; Osei Poku; Ronald D. Blanton
Integrated circuit (IC) diagnosis typically analyzes failed chips by reasoning about their responses to test patterns to deduce what has gone wrong. Current trends use diagnosis as the first step in extracting valuable information from a large population of failing ICs, including, for example, design-feature failure rates and defect-occurrence statistics. However, it is difficult to examine the accuracy of these techniques because of the unavailability of sufficient fail data where such information is known. This paper describes an approach for benchmarking and verifying diagnosis techniques through failure population creation that builds on prior work in this area. Specifically, we describe how a population of realistic IC failures is created through circuit-level simulation of extracted layouts. The most novel feature of the work is that the virtual test responses produced are a precise function of both the defect type and its three-dimensional location within the layout. The extended approach is demonstrated using twelve placed-and-routed circuits. An example application of the developed framework is given to illustrate the utility of having a failure population where the location and type of defect are known a priori.