M. Enamul Amyeen
Intel
Publications
Featured research published by M. Enamul Amyeen.
International Test Conference | 2006
M. Enamul Amyeen; Debashis Nayak; Srikanth Venkataraman
In nanometer fabrication processes, it is critical to narrow down the defect location for successful physical failure analysis. This paper presents a mixed-level diagnosis technique that first performs diagnosis at the logic level and then performs switch-level analysis to locate the defect at the transistor level. An efficient single-pass mixed-mode diagnosis flow is proposed to isolate defects within a cell. Experimental results show a significant improvement in precision over traditional logic diagnosis with only a fractional increase in run time. The proposed mixed-level diagnosis technique was applied to successfully isolate silicon defects.
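The two-pass idea can be pictured with the following sketch. It is only an illustration, not the authors' implementation: the candidate structure, the logic_diagnosis and switch_level_analysis helpers, and the fail-log format are all hypothetical stand-ins.

```python
# Illustrative sketch of a mixed-level diagnosis flow (hypothetical helpers).
from dataclasses import dataclass

@dataclass
class Candidate:
    cell: str          # suspect cell reported by logic-level diagnosis
    score: float       # how well simulated failures match observed failures

def logic_diagnosis(fail_log):
    """Pass 1: logic-level diagnosis returns suspect cells (stubbed)."""
    return [Candidate("U123/NAND2", 0.9), Candidate("U456/AOI21", 0.4)]

def switch_level_analysis(candidate, fail_log):
    """Pass 2: switch-level (transistor) analysis inside a suspect cell (stubbed)."""
    return {"cell": candidate.cell, "transistor": "MN1", "defect": "open"}

def mixed_level_diagnosis(fail_log, threshold=0.5):
    # Single pass over logic-level candidates; only strong suspects are refined
    # at the transistor level, keeping the run-time overhead small.
    suspects = logic_diagnosis(fail_log)
    return [switch_level_analysis(c, fail_log) for c in suspects if c.score >= threshold]

if __name__ == "__main__":
    print(mixed_level_diagnosis(fail_log={"failing_patterns": [7, 12]}))
```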
VLSI Test Symposium | 2003
Xiaoming Yu; M. Enamul Amyeen; Srikanth Venkataraman; Ruifeng Guo; Irith Pomeranz
Effective generation of diagnostic vectors can be assisted by a fast diagnostic fault simulator and an equivalence identification tool. Diagnostic fault simulation can be an expensive process for large circuits, and passing a large number of fault pairs to an equivalence identification tool can also be very time consuming. In this paper, a novel approach is proposed to execute diagnostic fault simulation and equivalence identification concurrently during diagnostic test generation, thereby reducing the overall execution time. Experimental results on industrial and benchmark circuits demonstrate the potential of the proposed method.
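One plausible reading of the interleaving is sketched below: fault simulation prunes the fault pairs test by test, so that only pairs never distinguished by any test reach the expensive equivalence check. The simulate and formally_equivalent helpers are hypothetical stand-ins, not the paper's actual tools or data structures.

```python
# Sketch: interleave diagnostic fault simulation with equivalence identification
# so only pairs never distinguished by any test reach the expensive check.
from itertools import combinations

def simulate(test, fault):
    """Hypothetical fault simulator: deterministic stand-in for a real response."""
    return sum(map(ord, test + fault)) % 4

def formally_equivalent(f1, f2):
    """Hypothetical (expensive) equivalence identification, e.g. SAT-based."""
    return False                             # stand-in result

def diagnostic_flow(tests, faults):
    undistinguished = set(combinations(sorted(faults), 2))
    for test in tests:                       # fault simulation, test by test
        responses = {f: simulate(test, f) for f in faults}
        undistinguished = {(a, b) for (a, b) in undistinguished
                           if responses[a] == responses[b]}
    # Only the surviving pairs are handed to the equivalence checker.
    equivalent = {(a, b) for (a, b) in undistinguished if formally_equivalent(a, b)}
    return undistinguished - equivalent      # pairs still needing diagnostic tests

if __name__ == "__main__":
    print(diagnostic_flow(tests=["t0", "t1"], faults=["f_a", "f_b", "f_c"]))
```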
International Test Conference | 2011
M. Enamul Amyeen; Andal Jayalakshmi; Srikanth Venkataraman; Sundar V. Pathy; Ewe C. Tan
Post-silicon speed-path debug and production-volume diagnosis for yield learning are critical to meeting product time-to-market demands. In this paper, we present a Logic BIST speed-path debug technique and methodology for meeting higher frequency targets. We have developed a methodology for Logic BIST production-fail volume diagnosis and present tester-time and memory-overhead tradeoffs and optimizations for enabling volume diagnosis. Results show successful isolation of silicon speed-paths on Intel® SOCs.
International Test Conference | 2009
M. Enamul Amyeen; Srikanth Venkataraman; Mun Wai Mak
Diagnosis of functional failures can be used to debug design issues, isolate manufacturing defects, and improve manufacturing yield. Automated failure analysis and rapid root-cause isolation are critical for meeting ever-decreasing product time-to-market demands. Conventional debug approaches require in-depth architecture knowledge and debug expertise. In this paper, we present a two-phase approach for isolating microprocessor functional failures. First, failing functional blocks are identified using functional fault simulation. Then, algorithmic diagnosis techniques are applied to accurately identify the failing signals within a functional block. Results show successful isolation of silicon defects on an Intel® Core™ dual-core processor.
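The two-phase flow might look roughly like the sketch below. The block list, scoring values, and diagnosis helpers are hypothetical illustrations, not the flow used on the Intel® Core™ design.

```python
# Sketch of a two-phase functional-failure isolation flow (hypothetical helpers).

def functional_fault_simulation(block, failing_test):
    """Phase 1 stand-in: score how well a block's simulated failures
    match the observed failure signature."""
    return {"alu": 0.95, "decoder": 0.10, "lsu": 0.05}.get(block, 0.0)

def algorithmic_diagnosis(block, failing_test):
    """Phase 2 stand-in: precise diagnosis restricted to the selected block."""
    return [f"{block}/net_1234"]

def isolate_functional_failure(blocks, failing_test):
    # Phase 1: rank functional blocks by how well they explain the failure.
    scores = {b: functional_fault_simulation(b, failing_test) for b in blocks}
    suspect_block = max(scores, key=scores.get)
    # Phase 2: run algorithmic diagnosis only inside the suspect block.
    return suspect_block, algorithmic_diagnosis(suspect_block, failing_test)

if __name__ == "__main__":
    print(isolate_functional_failure(["alu", "decoder", "lsu"], failing_test="func_017"))
```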
VLSI Test Symposium | 2016
Shraddha Bodhe; M. Enamul Amyeen; Clariza Galendez; Houston Mooers; Irith Pomeranz; Srikanth Venkataraman
This paper presents an algorithm for reducing both the test data volume collected by a tester for defect diagnosis of an IC and the tester time. The tester executes the tests and transfers the failing test responses one by one from the tester capture memory to the tester data-logs. While the tester is transferring the fail data, the proposed algorithm analyzes the failing outputs of every test and determines whether the test is a potential contributor to the identification of defects. If not, the test is eliminated from the tester data-logs; otherwise, the test may replace an existing test or be added as a new test. The addition and replacement of tests continue until the algorithm determines that the fail data transferred to the tester data-logs is sufficient for accurate defect diagnosis. Terminating the fail data transfer early reduces the overall tester time. The effectiveness of the method was verified using real defects in industry-fabricated dies. The algorithm was also implemented in a test program library and integrated into a production fail flow for sort data-log optimization. The overhead of the algorithm was minimal, and it yielded a 5x reduction in the test data transfer time.
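The on-tester selection loop could be sketched roughly as follows. The contributes, find_replaceable, and sufficient_for_diagnosis rules are hypothetical placeholders for the paper's actual criteria.

```python
# Sketch of on-the-fly selection of failing tests during fail-data transfer.
# The decision rules below are hypothetical placeholders, not the paper's criteria.

def contributes(test, failing_outputs, datalog):
    """Does this test add new information (e.g. a failing output not yet logged)?"""
    seen = {o for _, outs in datalog for o in outs}
    return not set(failing_outputs) <= seen

def find_replaceable(test, failing_outputs, datalog):
    """Return the index of a logged test made redundant by this one, if any."""
    for i, (_, outs) in enumerate(datalog):
        if set(outs) <= set(failing_outputs):
            return i
    return None

def sufficient_for_diagnosis(datalog, max_tests=16):
    """Stop once enough distinct fail data has been logged (stand-in rule)."""
    return len(datalog) >= max_tests

def collect_fail_data(failing_tests):
    datalog = []                                   # (test, failing outputs) pairs
    for test, outs in failing_tests:               # transfer order from the tester
        if not contributes(test, outs, datalog):
            continue                               # drop a non-contributing test
        idx = find_replaceable(test, outs, datalog)
        if idx is not None:
            datalog[idx] = (test, outs)            # replace an existing test
        else:
            datalog.append((test, outs))           # add as a new test
        if sufficient_for_diagnosis(datalog):
            break                                  # terminate the transfer early
    return datalog

if __name__ == "__main__":
    fails = [("t1", [3, 7]), ("t2", [3]), ("t3", [3, 7, 9])]
    print(collect_fail_data(fails))                # keeps only t3
```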
IEEE Transactions on Very Large Scale Integration (VLSI) Systems | 2016
Shraddha Bodhe; M. Enamul Amyeen; Irith Pomeranz; Srikanth Venkataraman
With the increasing transistor count and design complexity of modern integrated circuits, a large volume of fail data is collected by the tester for a failing die. This fail data is analyzed by a diagnosis procedure to obtain information about the defects in the die that caused it to fail. However, large portions of the fail data are not necessary for diagnosis. As a result, the diagnosis procedure spends time analyzing unnecessary data, decreasing its speed and throughput. We present a methodology to minimize the amount of fail data that is provided to the diagnosis procedure without compromising the diagnosis accuracy (DA). Our methodology evaluates the outputs at which the tests failed in order to eliminate noncontributing failing tests. The efficacy of our algorithm is demonstrated using fail data from industry-fabricated chips. The experimental results show that, on average, our algorithm achieves fail data minimization of 40% while maintaining an average DA of 95%. The speed of the diagnosis procedure is increased by 39%.
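One simple way to realize "evaluating the outputs at which the tests failed to eliminate noncontributing failing tests" is to drop tests whose failing-output signature duplicates that of an already-kept test. This is only an assumed illustration, not the paper's exact criterion.

```python
# Sketch: prune fail data before diagnosis by removing failing tests whose
# failing-output signature duplicates one already kept (assumed criterion).

def minimize_fail_data(fail_data):
    """fail_data: list of (test_name, set_of_failing_outputs) pairs."""
    kept, seen_signatures = [], set()
    for test, outputs in fail_data:
        signature = frozenset(outputs)
        if signature in seen_signatures:
            continue                      # noncontributing: same outputs already covered
        seen_signatures.add(signature)
        kept.append((test, outputs))
    return kept

if __name__ == "__main__":
    data = [("t1", {2, 5}), ("t2", {2, 5}), ("t3", {9}), ("t4", {2, 5})]
    print(minimize_fail_data(data))       # keeps t1 and t3 only
```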
VLSI Test Symposium | 2010
Dongok Kim; Irith Pomeranz; M. Enamul Amyeen; Srikanth Venkataraman
Following design-for-manufacturability (DFM) guidelines during chip design can lower the likelihood of systematic defects. In this paper, we investigate the use of DFM guidelines during the defect diagnosis process with the goal of identifying which DFM guidelines are responsible for the defects present in failing chips. We also introduce a new metric, called the diagnostic coefficient, that allows us to rank the guidelines according to their contribution to hard-to-diagnose defects. DFM guidelines that are ranked high should be applied during chip design in order to obtain chips that are easier to diagnose.
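The abstract does not define the diagnostic coefficient, so the score in the sketch below is a purely hypothetical placeholder; only the ranking step mirrors the described use of the metric.

```python
# Sketch: ranking DFM guidelines by a "diagnostic coefficient".
# compute_diagnostic_coefficient is a hypothetical placeholder definition.

def compute_diagnostic_coefficient(guideline, diagnosed_defects):
    """Placeholder score: fraction of hard-to-diagnose defects that violate
    this guideline (not the paper's actual definition)."""
    hard = [d for d in diagnosed_defects if d["hard_to_diagnose"]]
    if not hard:
        return 0.0
    return sum(guideline in d["violated_guidelines"] for d in hard) / len(hard)

def rank_guidelines(guidelines, diagnosed_defects):
    scores = {g: compute_diagnostic_coefficient(g, diagnosed_defects) for g in guidelines}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

if __name__ == "__main__":
    defects = [
        {"hard_to_diagnose": True,  "violated_guidelines": {"via_doubling"}},
        {"hard_to_diagnose": True,  "violated_guidelines": {"wire_spacing", "via_doubling"}},
        {"hard_to_diagnose": False, "violated_guidelines": {"wire_spacing"}},
    ]
    print(rank_guidelines(["via_doubling", "wire_spacing"], defects))
```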
IEEE Transactions on Very Large Scale Integration (VLSI) Systems | 2017
Shraddha Bodhe; Irith Pomeranz; M. Enamul Amyeen; Srikanth Venkataraman
During fail data collection, a tester collects information that is useful for defect diagnosis. If fail data collection can be terminated early, the tester time as well as the volume of fail data will be reduced. Test reordering can enhance the ability to terminate the process early without affecting the quality of diagnosis. In this paper, test reordering targets logic defects based on information that is derived during defect diagnosis. The defect diagnosis procedure is enhanced to identify tests that are useful for defect diagnosis across a sample of faulty instances of a circuit. Tests that are determined to be useful for more faulty instances of a circuit are placed earlier in the test set based on the expectation that the same tests will be useful for other faulty instances of the circuit. The experimental results for logic defects in benchmark circuits support the effectiveness of this approach and indicate that test reordering helps to terminate fail data collection early without impacting the diagnosis quality.
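The reordering heuristic can be pictured as counting, over a sample of diagnosed faulty instances, how many instances each test was useful for, then sorting tests by that count. The input format below is an assumed illustration.

```python
# Sketch: reorder tests by how many diagnosed faulty instances they were useful
# for, so that broadly useful tests are applied (and fail data collected) first.
from collections import Counter

def reorder_tests(test_order, useful_tests_per_instance):
    """useful_tests_per_instance: for each diagnosed faulty instance, the set of
    tests that diagnosis marked as useful (assumed input format)."""
    usefulness = Counter()
    for useful in useful_tests_per_instance:
        usefulness.update(useful)
    # Stable sort: most broadly useful tests first; original order breaks ties.
    return sorted(test_order, key=lambda t: -usefulness[t])

if __name__ == "__main__":
    order = ["t0", "t1", "t2", "t3"]
    samples = [{"t2", "t3"}, {"t2"}, {"t1", "t2"}]
    print(reorder_tests(order, samples))   # ['t2', 't1', 't3', 't0']
```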
ACM Transactions on Design Automation of Electronic Systems | 2017
Irith Pomeranz; M. Enamul Amyeen; Srikanth Venkataraman
As part of a yield improvement process, fail data is collected from faulty units. Several approaches exist for reducing the tester time and the volume of fail data that needs to be collected based on the observation that a subset of the fail data is sufficient for accurate defect diagnosis. This article addresses the volume of fail data by considering the test set that is used for collecting fail data. It observes that certain faults from a set of target faults produce significantly larger numbers of faulty output values (and therefore significantly larger volumes of fail data) than other faults under a given test set. Based on this observation, it describes a procedure for modifying the test set to reduce the maximum number of faulty output values that a target fault produces. When defects are considered in a simulation experiment, and a defect diagnosis procedure is applied to the fail data that they produce, two effects are observed: the maximum and average numbers of faulty output values per defect are reduced significantly with the modified test set, and the quality of diagnosis is similar or even improved with the modified test set.
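A rough sketch of the modification idea: for each target fault, find the test under which it produces the most faulty output values and try to substitute a candidate test that still detects the fault with fewer failing outputs. The simulation table is a hypothetical stand-in, and coverage preservation (which a real procedure would enforce) is omitted for brevity.

```python
# Sketch: reduce the maximum number of faulty output values any target fault
# produces by swapping in alternative detecting tests with fewer failing outputs.
# 'table' maps (test, fault) to the set of faulty outputs (stand-in for simulation).

def faulty_outputs(test, fault, table):
    return table.get((test, fault), set())

def modify_test_set(tests, candidate_tests, faults, table):
    tests = list(tests)
    for fault in faults:
        # Test currently responsible for this fault's worst-case fail-data volume.
        worst = max(tests, key=lambda t: len(faulty_outputs(t, fault, table)))
        # Candidates that still detect the fault (at least one faulty output).
        options = [t for t in candidate_tests if faulty_outputs(t, fault, table)]
        if not options:
            continue
        best = min(options, key=lambda t: len(faulty_outputs(t, fault, table)))
        if len(faulty_outputs(best, fault, table)) < len(faulty_outputs(worst, fault, table)):
            tests[tests.index(worst)] = best   # NOTE: a real flow would also
            return tests                        # check overall fault coverage
    return tests

if __name__ == "__main__":
    table = {("t1", "f1"): {1, 2, 3, 4}, ("t2", "f1"): {5},
             ("c1", "f1"): {1}, ("t1", "f2"): {6}}
    print(modify_test_set(["t1", "t2"], ["c1"], ["f1", "f2"], table))
```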
International Test Conference | 2016
M. Enamul Amyeen; Dongok Kim; Maheshwar Chandrasekar; Mohammad Noman; Srikanth Venkataraman; Anurag Jain; Neha Goel; Ramesh Sharma
Faster failure isolation is critical for manufacturing yield ramp and product time to market. Higher diagnosis resolution is essential for faster defect isolation and root-cause identification. A detection-oriented test set targets fault coverage and does not provide maximum diagnostic resolution. In this paper, we present the design and architecture of a state-of-the-art diagnostic ATPG tool for industrial-scale designs. We develop a novel diagnostic test generation methodology that first generates deterministic diagnostic test content to distinguish the diagnosis suspects. If tester memory is available, additional N-detect-oriented tests are generated to augment the content. Further, we present techniques to improve the performance of diagnostic fault simulation for industrial-scale designs. Experimental results on Intel® Core™ microprocessor designs indicate a 3X-114X speedup with up to 2X memory overhead. Silicon failure data collected on sort wafer fails showed that the hybrid diagnostic content improves the diagnostic resolution by 2.8X to 3X compared with content generated by an industry-standard diagnostic test generator. Silicon results are evaluated on an Intel® Core™ microprocessor.
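The hybrid generation order, deterministic distinguish patterns first and N-detect fill only while tester memory allows, could be sketched as below. The ATPG helpers and the memory model are hypothetical stand-ins, not the tool's actual interfaces.

```python
# Sketch of a hybrid diagnostic test generation flow: deterministic tests that
# distinguish suspect pairs first, then N-detect oriented fill while tester
# memory allows. Helpers and the memory model are hypothetical stand-ins.

def generate_distinguish_test(fault_a, fault_b):
    """Stand-in for deterministic diagnostic ATPG on one suspect pair."""
    return f"dist_{fault_a}_{fault_b}"

def generate_n_detect_tests(faults, n):
    """Stand-in for N-detect oriented test generation."""
    return [f"ndet_{f}_{i}" for f in faults for i in range(n)]

def hybrid_diagnostic_content(suspect_pairs, faults, memory_limit, n=3):
    tests = [generate_distinguish_test(a, b) for a, b in suspect_pairs]
    # Augment with N-detect content only while tester memory remains.
    for t in generate_n_detect_tests(faults, n):
        if len(tests) >= memory_limit:
            break
        tests.append(t)
    return tests

if __name__ == "__main__":
    pairs = [("f1", "f2"), ("f1", "f3")]
    print(hybrid_diagnostic_content(pairs, ["f1", "f2", "f3"], memory_limit=6))
```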