
Publications


Featured research published by Thomas W. Williams.


International Test Conference | 2001

Design of compactors for signature-analyzers in built-in self-test

Peter Wohl; John A. Waicukauski; Thomas W. Williams

Originally developed decades ago, logic built-in self-test (BIST) evolved and is now increasingly being adopted to cope with rapid growth in design size and complexity. Compared to deterministic pattern test, logic BIST requires many more test patterns, and therefore, increased test time unless many more internal scan chains can be shifted in parallel. To match this large number of scan chains, the width of the signature analyzer would have to be enlarged, which would result in large area overhead and signature storage space. Instead, a combinational space-compactor is inserted between the scan chain outputs and the signature analyzer inputs. However, the compactor may deteriorate the ability to test and diagnose the design. This paper analyzes how compactors affect test and diagnosis and shows that compactors can be designed to actually improve the testability of certain faults, while providing full diagnosis capability. Algorithms that allow automated design of optimal compactors are presented and results are discussed.
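A combinational space compactor of the kind described above can be sketched as a tree of XOR (parity) gates, each merging a disjoint group of scan-chain outputs into one signature-analyzer input. The following is a minimal illustrative sketch, not the paper's compactor-design algorithm; the function name and fixed grouping scheme are assumptions made for the example:

```python
# Illustrative sketch of a combinational XOR space compactor: each
# signature-analyzer input is the parity of a disjoint group of
# scan-chain outputs, reducing width from n chains to n/group_size.

def xor_compact(chain_outputs, group_size):
    """Compact one shift cycle of scan-chain output bits.

    chain_outputs: list of 0/1 values, one per scan chain.
    group_size: number of chains feeding each XOR gate.
    """
    compacted = []
    for i in range(0, len(chain_outputs), group_size):
        parity = 0
        for bit in chain_outputs[i:i + group_size]:
            parity ^= bit
        compacted.append(parity)
    return compacted

# 8 scan chains compacted to 2 signature-analyzer inputs:
print(xor_compact([1, 0, 1, 1, 0, 0, 1, 0], 4))  # [1, 1]
```

The aliasing risk the abstract alludes to is visible here: if an even number of chains in the same group capture an error in the same shift cycle, the parities cancel and the compacted output is unchanged.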


VLSI Test Symposium | 2003

A reconfigurable shared scan-in architecture

Samitha Samaranayake; Emil Gizdarski; Nodari Sitchinava; Frederic Neuveux; Rohit Kapur; Thomas W. Williams

This paper defines an efficient technique for test data volume reduction based on the shared scan-in (Illinois Scan) architecture and the scan chain reconfiguration (Dynamic Scan) architecture. The composite architecture is created through analysis that relies on the compatibility relation of scan chains. Topological analysis and compatibility analysis are used to maximize gains in test data volume and test application time. The goal of the proposed synthesis procedure is to test all detectable faults in broadcast test mode using a minimum number of scan-chain configurations. As a result, more aggressive sharing of scan inputs can be applied to reduce test data volume and test application time. The experimental results demonstrate the efficiency of the proposed architecture on real industrial circuits.
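The compatibility relation at the heart of shared scan-in can be illustrated on test cubes with don't-care bits: two chains may share one scan-in pin for a pattern whenever their care bits never disagree at the same shift position. A minimal sketch, where the function names and the 'x' don't-care encoding are assumptions for illustration rather than the paper's implementation:

```python
# Illustrative sketch of the scan-chain compatibility relation behind
# shared scan-in (Illinois Scan): two chains can share one scan input
# for a test cube if their care bits never conflict at the same shift
# position ('x' marks a don't-care bit).

def compatible(chain_a, chain_b):
    """True if the two chains' load values can be broadcast from one pin."""
    return all(a == b or 'x' in (a, b) for a, b in zip(chain_a, chain_b))

def merged_load(chain_a, chain_b):
    """Shared load data for two compatible chains."""
    return ''.join(b if a == 'x' else a for a, b in zip(chain_a, chain_b))

print(compatible('1x0x', 'xx01'))   # True: no position has conflicting care bits
print(compatible('1x0x', '0xxx'))   # False: conflict at shift position 0
print(merged_load('1x0x', 'xx01'))  # '1x01' (the remaining 'x' stays free)
```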


VLSI Test Symposium | 2004

Changing the scan enable during shift

Nodari Sitchinava; Emil Gizdarski; Samitha Samaranayake; Frederic Neuveux; Rohit Kapur; Thomas W. Williams

This paper extends the reconfigurable shared scan-in architecture (RSSA) to provide the additional ability to change values on the scan configuration signals (scan enable signals) during the scan operation on a per-shift basis. We show that the extra flexibility of reconfiguring the scan chains every shift cycle reduces the number of different configurations required by RSSA while keeping test coverage the same. In addition, a simpler analysis can be used to construct the scan chains. This is the first paper of its kind to treat the scan enable signal as a test data signal during the scan operation of a test pattern. Results are presented on ISCAS benchmark circuits as well as industrial circuits.


VLSI Test Symposium | 2007

Minimizing the Impact of Scan Compression

Peter Wohl; John A. Waicukauski; Rohit Kapur; Sanjay Ramnath; Emil Gizdarski; Thomas W. Williams; P. Jaini

Scan is widely accepted as the basis for reducing test cost and improving quality; however, its effectiveness is compromised by increasingly complex designs and fault models that can result in high scan data volume and long application time. The authors present a scan compression method designed for minimal impact in all aspects: area overhead, timing, and design flow. Easily adopted on top of existing scan designs, the method is fully integrated into the scan synthesis and test generation flows. Data and test time compressions of over 10× were obtained on industrial designs with negligible overhead and no impact on schedule.


International Test Conference | 2005

Efficient compression of deterministic patterns into multiple PRPG seeds

Peter Wohl; John A. Waicukauski; Sanjay Patel; Francisco DaSilva; Thomas W. Williams; Rohit Kapur

Recent test-cost reduction methods are based on controlling the initial state (seed) of a pseudo-random pattern generator (PRPG) so that deterministic values are loaded into selected scan cells. Combined with an unload-data compression technique, PRPG seeding reduces test data volume and application time. This paper presents a method of mapping each scan load to multiple PRPG seeds, computed so that test pattern count, data volume, and, therefore, test cost are minimized. This method also allows smaller and fewer PRPGs, reducing the area overhead of test-compression circuitry. Results on deep-submicron industrial designs show significant test-cost reduction when this method is applied with either X-tolerant or X-free unload-data compression.
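The core idea, that a PRPG seed fully determines the pseudo-random bits shifted into the scan chains, can be illustrated with a small LFSR model. The tap positions, register width, and function name below are illustrative assumptions, not the paper's seed-computation algorithm (which solves for seeds that reproduce deterministic care bits):

```python
# Illustrative sketch: a PRPG modeled as a Fibonacci LFSR. The seed
# (initial state) fully determines the bit stream shifted into a scan
# chain, so deterministic care bits can be encoded as a choice of seed.
# Taps and width here are arbitrary examples, not from the paper.

def lfsr_stream(seed_bits, taps, length):
    """Generate `length` output bits from an LFSR starting at seed_bits."""
    state = list(seed_bits)
    out = []
    for _ in range(length):
        out.append(state[-1])          # output bit = last stage
        fb = 0
        for t in taps:                 # feedback = XOR of tapped stages
            fb ^= state[t]
        state = [fb] + state[:-1]      # shift, inserting the feedback bit
    return out

# Two different seeds load two different scan patterns:
print(lfsr_stream([1, 0, 0, 1], taps=[0, 3], length=8))
print(lfsr_stream([0, 1, 1, 1], taps=[0, 3], length=8))
```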


IEEE Design & Test of Computers | 2008

Historical Perspective on Scan Compression

Rohit Kapur; Subhasish Mitra; Thomas W. Williams

The beginnings of modern-day IC test trace back to the introduction of such fundamental concepts as scan, stuck-at faults, and the D-algorithm. Since then, successive technologies have made significant improvements to the state of the art, and IC test has evolved into a multifaceted industry that supports innovation, even as growing design complexity has driven up test data volume and test application time. Scan compression technology has proven to be a powerful antidote to this problem, catalyzing reductions in test data volume and test application time of up to 100 times. This article sketches a brief history of test technology research, tracking the evolution of compression technology that has led to the success of scan compression. It is not our intent to identify specific inventors on a fine-grained timeline; instead, we present the important concepts at a high level, on a coarse timeline. Starting in 1998 and continuing to the present, numerous scan-compression-related inventions have had a major impact on the test landscape. Nor is this article a survey of the various scan compression methods; rather, we focus on the evolution of the types of constructs used to create breakthrough solutions.


VLSI Test Symposium | 1994

Limitations in predicting defect level based on stuck-at fault coverage

J. Park; M. Naivar; Rohit Kapur; M.R. Mercer; Thomas W. Williams

The stuck-at fault model has been used for decades as a guide to the test generation process and as an evaluation mechanism for the quality of the test set. As demands on quality have increased, the use of the stuck-at fault model as a predictor of the defect level has been questioned. This paper provides some insight on the issue and shows the limitations of using stuck-at fault coverage to predict the defect level. The authors demonstrate that as the defect level decreases, the uncertainty of the estimate grows.
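The quantitative backdrop here is the classic Williams-Brown model relating defect level DL to process yield Y and fault coverage T: DL = 1 − Y^(1−T). A small sweep (with illustrative values only) shows how strongly the prediction moves as coverage approaches 100%, the regime in which the paper's concern about estimate uncertainty bites:

```python
# The classic Williams-Brown model: DL = 1 - Y**(1 - T), where Y is
# process yield and T is (stuck-at) fault coverage. The yield and
# coverage values below are illustrative, not from the paper.

def defect_level(yield_, coverage):
    """Predicted fraction of shipped parts that are defective."""
    return 1.0 - yield_ ** (1.0 - coverage)

for cov in (0.90, 0.99, 0.999):
    dl = defect_level(0.5, cov)          # assume 50% process yield
    print(f"coverage={cov:.3f}  DL={dl * 1e6:9.1f} ppm")
```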


International Test Conference | 1996

Using target faults to detect non-target defects

Li-C. Wang; M.R. Mercer; Thomas W. Williams

The traditional ATPG method relies upon faults to target all defects. Since faults do not model all possible defects, testing quality depends on the fortuitous detection of non-target defects. By analyzing different ATPG approaches, this paper identifies critical factors that can greatly affect fortuitous detection. To enhance the fortuitous detection of non-target defects through target faults, new concepts and novel ATPG methods are proposed.


Design Automation Conference | 2002

Enhancing test efficiency for delay fault testing using multiple-clocked schemes

Jing-Jia Liou; Li-C. Wang; Kwang-Ting Cheng; Jennifer Dworak; M. Ray Mercer; Rohit Kapur; Thomas W. Williams

In conventional delay testing, the test clock is a single pre-defined parameter that is often set to the same frequency as the system clock. This paper discusses the potential of enhancing test efficiency by using multiple clock frequencies. The intuition behind our work is that for a given set of AC delay patterns, a carefully selected, tighter clock is more effective at screening out potentially defective chips. Then, by using a smarter test-clock scheme and combining it with a second set of AC delay patterns, the overall quality of AC delay testing can be enhanced while the cost of including the second pattern set is minimized. We demonstrate these concepts through analysis and experiments using a statistical timing analysis framework with defect-injection simulation.


International Test Conference | 2007

Fundamentals of timing information for test: How simple can we get?

Rohit Kapur; Jindrich Zejda; Thomas W. Williams

Testing for small delay defects requires ATPG-FS tools to understand the timing of the design so that transition delay faults can be detected along longer paths. In this paper, timing information is analyzed for use in test automation tools that target small delay defects. Fundamentals of static timing analysis are examined with regard to test. The paper concludes that signal-integrity information can be ignored by test automation tools when timing information is used to guide ATPG toward longer paths. It also shows that a lack of understanding of clock trees in a long-path ATPG algorithm leads to incorrect results.

Collaboration


Top co-authors of Thomas W. Williams:

Li-C. Wang

University of California


Nodari Sitchinava

Karlsruhe Institute of Technology
