
Publications


Featured research published by Nur A. Touba.


IEEE Design & Test of Computers | 2006

Survey of Test Vector Compression Techniques

Nur A. Touba

Test data compression consists of test vector compression on the input side and response compaction on the output side. Test vector compression has been an active area of research. This article summarizes and categorizes these techniques, focusing on hardware-based test vector compression techniques for scan architectures. Test vector compression schemes fall broadly into three categories: code-based schemes use data compression codes to encode test cubes; linear-decompression-based schemes decompress the data using only linear operations (i.e., LFSRs and XOR networks); and broadcast-scan-based schemes rely on broadcasting the same values to multiple scan chains.
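
As a rough illustration of the first category only, the toy Python sketch below run-length encodes a test cube, filling don't-care (X) bits to lengthen runs. This is the generic trick code-based schemes exploit, not any specific technique from the article.

    # Toy code-based compression sketch: run-length encode a test cube,
    # mapping each don't-care 'X' to the value of the current run so
    # runs get longer and compress better. Illustrative only.
    def run_length_encode(test_cube: str) -> list[tuple[str, int]]:
        runs = []
        current, length = None, 0
        for bit in test_cube:
            if bit == 'X' and current is not None:
                bit = current              # fill don't-care to extend the run
            if bit == current:
                length += 1
            else:
                if current is not None:
                    runs.append((current, length))
                current = bit if bit != 'X' else '0'
                length = 1
        if current is not None:
            runs.append((current, length))
        return runs

    print(run_length_encode("00XX10XX"))   # [('0', 4), ('1', 1), ('0', 3)]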


VLSI Test Symposium | 2000

Static compaction techniques to control scan vector power dissipation

Ranganathan Sankaralingam; Rama Rao Oruganti; Nur A. Touba

Excessive switching activity during scan testing can cause average power dissipation and peak power during test to be much higher than during normal operation. This can cause problems both with heat dissipation and with current spikes. Compacting scan vectors greatly increases the power dissipation for the vectors (generally the power becomes several times greater). The compacted scan vectors often can exceed the power constraints and hence cannot be used. It is shown here that by carefully selecting the order in which pairs of test cubes are merged during static compaction, both average power and peak power for the final test set can be greatly reduced. A static compaction procedure is presented that can be used to find a minimal set of scan vectors that satisfies constraints on both average power and peak power. The proposed approach is simple yet effective and can be easily implemented in the conventional test vector generation flow used in industry today.
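
The following Python sketch is a simplified model of power-aware static compaction in the spirit of this paper: it greedily merges compatible test cubes, preferring merges whose filled vector stays under a peak transition limit. The transition metric and fill rule are illustrative assumptions, not the authors' exact cost function.

    # Sketch: power-aware static compaction of test cubes ('0'/'1'/'X').
    def compatible(a: str, b: str) -> bool:
        return all(x == y or 'X' in (x, y) for x, y in zip(a, b))

    def merge(a: str, b: str) -> str:
        return ''.join(y if x == 'X' else x for x, y in zip(a, b))

    def transitions(v: str) -> int:
        # Count flips along the scan chain, filling each X with its left
        # neighbour (a common low-power fill heuristic; an assumption here).
        filled, prev = [], '0'
        for bit in v:
            prev = prev if bit == 'X' else bit
            filled.append(prev)
        return sum(a != b for a, b in zip(filled, filled[1:]))

    def compact(cubes: list[str], peak_limit: int) -> list[str]:
        cubes = cubes[:]
        while True:
            best = None
            for i in range(len(cubes)):
                for j in range(i + 1, len(cubes)):
                    if compatible(cubes[i], cubes[j]):
                        m = merge(cubes[i], cubes[j])
                        t = transitions(m)
                        if t <= peak_limit and (best is None or t < best[0]):
                            best = (t, i, j, m)
            if best is None:
                return cubes
            _, i, j, m = best
            cubes = [c for k, c in enumerate(cubes) if k not in (i, j)] + [m]

    print(compact(["1XX0", "1X10", "0XX1"], peak_limit=2))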


International Test Conference | 1998

Test vector decompression via cyclical scan chains and its application to testing core-based designs

Abhijit Jas; Nur A. Touba

A novel test vector compression/decompression technique is proposed for reducing the amount of test data that must be stored on a tester and transferred to each core when testing a core-based design. A small amount of on-chip circuitry is used to reduce both the test storage and test time required for testing a core-based design. The fully specified test vectors provided by the core vendor are stored in compressed form in the tester memory and transferred to the chip where they are decompressed and applied to the core (the compression is lossless). Instead of having to transfer each entire test vector from the tester to the core, a smaller amount of compressed data is transferred instead. This reduces the amount of test data that must be stored on the tester and hence reduces the total amount of test time required for transferring the data with a given test data bandwidth.
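
A minimal sketch of the dataflow behind this style of decompression, under the assumption that vector t[i] is reconstructed on chip as t[i-1] XOR d[i]: the tester then only needs to store the difference bits d[i], which are typically sparse and compress well. This models the encoding, not the on-chip cyclical scan chain itself.

    # Difference-vector encoding sketch (lossless round trip).
    def to_differences(vectors: list[str]) -> list[str]:
        prev, diffs = '0' * len(vectors[0]), []
        for v in vectors:
            diffs.append(''.join('1' if a != b else '0'
                                 for a, b in zip(prev, v)))
            prev = v
        return diffs

    def from_differences(diffs: list[str]) -> list[str]:
        # Models the on-chip side: each vector is the previous vector
        # XORed with the incoming (compressed, sparse) difference bits.
        prev, out = '0' * len(diffs[0]), []
        for d in diffs:
            prev = ''.join('1' if a != b else '0' for a, b in zip(prev, d))
            out.append(prev)
        return out

    vecs = ["1010", "1011", "0011"]
    assert from_differences(to_differences(vecs)) == vecs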


International Test Conference | 2003

Cost-effective approach for reducing soft error failure rate in logic circuits

Kartik Mohanram; Nur A. Touba

In this paper, a new paradigm for designing logic circuits with concurrent error detection (CED) is described. The key idea is to exploit the asymmetric soft error susceptibility of nodes in a logic circuit. Rather than targeting all modeled faults, CED is targeted towards the nodes that have the highest soft error susceptibility to achieve cost-effective tradeoffs between overhead and reduction in the soft error failure rate. Under this new paradigm, we present one particular approach that is based on partial duplication and show that it is capable of reducing the soft error failure rate significantly with a fraction of the overhead required for full duplication. A procedure for characterizing the soft error susceptibility of nodes in a logic circuit, and a heuristic procedure for selecting the set of nodes for partial duplication, are described. A full set of experimental results demonstrates the cost-effective tradeoffs that can be achieved.
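
As a hedged sketch of the selection step only, the code below greedily picks the nodes whose duplication buys the most failure-rate reduction per unit area, given hypothetical per-node susceptibility and cost estimates. The paper's actual heuristic is more involved (for instance, logic shared among selected nodes matters); this ignores that.

    # Greedy partial-duplication node selection under an area budget.
    # susceptibility[n] and cost[n] are assumed inputs from a separate
    # characterization step, not computed here.
    def pick_nodes(susceptibility: dict[str, float],
                   cost: dict[str, float],
                   area_budget: float) -> list[str]:
        ranked = sorted(susceptibility,
                        key=lambda n: susceptibility[n] / cost[n],
                        reverse=True)
        chosen, used = [], 0.0
        for n in ranked:
            if used + cost[n] <= area_budget:
                chosen.append(n)
                used += cost[n]
        return chosen

    # duplicate the nodes that dominate the failure rate first
    print(pick_nodes({'n1': 0.40, 'n2': 0.05, 'n3': 0.30},
                     {'n1': 3.0, 'n2': 1.0, 'n3': 2.0}, 5.0))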


VLSI Test Symposium | 1999

Scan vector compression/decompression using statistical coding

Abhijit Jas; Jayabrata Ghosh-Dastidar; Nur A. Touba

A compression/decompression scheme based on statistical coding is presented for reducing the amount of test data that must be stored on a tester and transferred to each core in a core-based design. The test vectors provided by the core vendor are stored in compressed form in the tester memory and transferred to the chip where they are decompressed and applied to the core. Given the set of test vectors for a core, a statistical code is carefully selected so that it satisfies certain properties. These properties guarantee that it can be decoded by a simple pipelined decoder (placed at the serial input of the core's scan chain) which requires very small area. Results indicate that the proposed scheme can use a simple decoder to provide test data compression near that of an optimal Huffman code. The compression results in a two-fold advantage since both test storage and test time are reduced.
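
The core statistical-coding step can be sketched as follows: build a Huffman code over fixed-size blocks of the scan data so that frequent blocks get short codewords. The paper additionally constrains the code so a small pipelined on-chip decoder can decode it; that constraint is omitted in this illustrative Python.

    # Huffman codeword lengths for fixed-size scan blocks (sketch only).
    import heapq
    from collections import Counter
    from itertools import count

    def huffman_lengths(blocks: list[str]) -> dict[str, int]:
        freq = Counter(blocks)
        tiebreak = count()                  # keeps heap entries comparable
        heap = [(f, next(tiebreak), {b: 0}) for b, f in freq.items()]
        heapq.heapify(heap)
        while len(heap) > 1:
            fa, _, a = heapq.heappop(heap)
            fb, _, b = heapq.heappop(heap)
            merged = {s: d + 1 for s, d in (*a.items(), *b.items())}
            heapq.heappush(heap, (fa + fb, next(tiebreak), merged))
        return heap[0][2]                   # block -> codeword length

    data = "0000 0000 0000 1111 1010 0000".split()
    lengths = huffman_lengths(data)
    bits = sum(lengths[b] for b in data)
    print(f"{bits} coded bits vs {4 * len(data)} raw bits")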


IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems | 2003

An efficient test vector compression scheme using selective Huffman coding

Abhijit Jas; Jayabrata Ghosh-Dastidar; Mom Eng Ng; Nur A. Touba

This paper presents a compression/decompression scheme based on selective Huffman coding for reducing the amount of test data that must be stored on a tester and transferred to each core in a system-on-a-chip (SOC) during manufacturing test. The test data bandwidth between the tester and the SOC is a bottleneck that can result in long test times when testing complex SOCs that contain many cores. In the proposed scheme, the test vectors for the SOC are stored in compressed form in the tester memory and transferred to the chip where they are decompressed and applied to the cores. A small amount of on-chip circuitry is used to decompress the test vectors. Given the set of test vectors for a core, a modified Huffman code is carefully selected so that it satisfies certain properties. These properties guarantee that the codewords can be decoded by a simple pipelined decoder (placed at the serial input of the core's scan chain) that requires very small area. Results indicate that the proposed scheme can provide test data compression nearly equal to that of an optimum Huffman code with much less area overhead for the decoder.
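
A hedged sketch of the selective idea: only the k most frequent blocks receive codewords, each sent behind a '1' flag bit, while any other block is sent raw behind a '0' flag. The codeword-length model below is an optimistic placeholder (a real selective Huffman code would be built as in the previous sketch, restricted to the k chosen blocks), and the choice of k is an assumption.

    # Selective coding size estimate: flag bit + codeword for the k most
    # frequent blocks, flag bit + raw bits for everything else.
    from collections import Counter

    def selective_size(blocks: list[str], k: int) -> int:
        freq = Counter(blocks)
        # placeholder lengths: i-th most frequent coded block costs i+1 bits
        cost = {b: i + 1 for i, (b, _) in enumerate(freq.most_common(k))}
        return sum(1 + cost.get(b, len(b)) for b in blocks)

    data = ["0000"] * 6 + ["1111"] * 2 + ["1010", "0110"]
    print(selective_size(data, k=2), "bits vs", 4 * len(data), "raw")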


International Test Conference | 2001

Test vector encoding using partial LFSR reseeding

C. C. Krishna; Abhijit Jas; Nur A. Touba

A new form of LFSR reseeding that provides higher encoding efficiency and hence greater reduction in test data storage requirements is described. Previous forms of LFSR reseeding have been static (i.e. test generation is stopped and the seed is loaded at one time) and have required full reseeding (i.e. n=r bits are used for an r-bit LFSR). The new form of LFSR reseeding proposed here is dynamic (i.e. the seed is incrementally modified while test generation proceeds) and allows partial reseeding (i.e. n<r bits can be used). Full static forms of LFSR reseeding are shown to be a special case of the new partial dynamic form of LFSR reseeding. In addition to providing better encoding efficiency, partial dynamic LFSR reseeding has a simpler hardware implementation than previous schemes based on multiple-polynomial LFSRs, and can generate each test vector in fewer clock cycles. Experimental results demonstrate the advantages of the new partial dynamic LFSR reseeding approach.
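
The following toy model illustrates the dynamic-reseeding dataflow only: a Fibonacci LFSR whose feedback is XORed each cycle with one incoming seed bit from the tester, so the seed is injected incrementally while the pattern is generated. Tap positions and widths are arbitrary assumptions; the paper derives the seed bits by solving linear equations over GF(2), which this sketch does not do.

    # LFSR with per-cycle seed-bit injection into the feedback (sketch).
    def lfsr_with_injection(state: list[int], taps: list[int],
                            seed_bits: list[int]) -> list[int]:
        out = []
        for s in seed_bits:
            fb = s                          # incoming seed bit from tester
            for t in taps:
                fb ^= state[t]              # normal LFSR feedback taps
            out.append(state[-1])           # bit shifted into the scan chain
            state = [fb] + state[:-1]
        return out

    stream = lfsr_with_injection([1, 0, 0, 1], taps=[0, 3],
                                 seed_bits=[0, 1, 1, 0, 0, 1])
    print(stream)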


VLSI Test Symposium | 2001

Reducing power dissipation during test using scan chain disable

Ranganathan Sankaralingam; Bahram Pouya; Nur A. Touba

A novel approach for minimizing power during scan testing is presented. The idea is that given a full scan module or core that has multiple scan chains, the test set is generated and ordered in such a way that some of the scan chains can have their clock disabled for portions of the test set. Disabling the clock prevents flip-flops from transitioning, and hence reduces switching activity in the circuit. Moreover, disabling the clock also reduces power dissipation in the clock tree, which often is a major source of power. The only hardware modification that is required to implement this approach is to add the capability for the tester to gate the clock for one subset of the scan chains in the core. A procedure for generating and ordering the test set to maximize the use of scan disable is described. Experimental results are shown indicating that the proposed approach can significantly reduce both logic and clock power during testing.
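
As a loose sketch of the ordering idea, assume a chain can be clock-gated for a test when every cell it would load is a don't-care for that test; the code then orders tests to group together those that leave the same chains idle, lengthening the gated stretches. Chain partitioning and the tester-side clock gating are assumed, and the paper's actual procedure is more elaborate.

    # Group tests by which scan chains they leave idle (all-'X' loads).
    def disable_schedule(tests: list[list[str]]) -> list[tuple[int, tuple]]:
        def idle_chains(t):
            return tuple(sorted(c for c, load in enumerate(t)
                                if set(load) == {'X'}))
        order = sorted(range(len(tests)), key=lambda i: idle_chains(tests[i]))
        return [(i, idle_chains(tests[i])) for i in order]

    tests = [["10X", "XXX"], ["XXX", "011"], ["110", "XXX"]]
    for i, off in disable_schedule(tests):
        print(f"test {i}: chains safe to disable = {list(off)}")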


IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems | 1997

Logic synthesis of multilevel circuits with concurrent error detection

Nur A. Touba; Edward J. McCluskey

This paper presents a procedure for synthesizing multilevel circuits with concurrent error detection. All errors caused by single stuck-at faults are detected using a parity-check code. The synthesis procedure (implemented in Stanford CRC's TOPS synthesis system) fully automates the design process, and reduces the cost of concurrent error detection compared with previous methods. An algorithm for selecting a good parity-check code for encoding the circuit outputs is described. Once the code has been selected, a new procedure called structure-constrained logic optimization is used to minimize the area of the circuit as much as possible while still using a circuit structure that ensures that single stuck-at faults cannot produce undetected errors. It is proven that the resulting implementation is path fault secure, and when augmented by a checker, forms a self-checking circuit. The actual layout areas required for self-checking implementations of benchmark circuits generated with the techniques described in this paper are compared with implementations using Berger codes, single-bit parity, and duplicate-and-compare. Results indicate that the self-checking multilevel circuits generated with the procedure described here are significantly more economical.
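
A minimal illustration of parity-based CED in general, not of the paper's structure-constrained synthesis: a predictor computes the expected parity of the output group directly from the inputs, and a checker flags any mismatch. The example function is hypothetical.

    # Parity-prediction CED sketch over a hypothetical 2-output function.
    def function_logic(a: int, b: int, c: int) -> tuple[int, int]:
        return (a & b) ^ c, a | c          # circuit under test (example)

    def parity_predictor(a: int, b: int, c: int) -> int:
        # predicts the XOR of the outputs directly from the inputs;
        # synthesized as separate logic so a single stuck-at fault cannot
        # corrupt both the outputs and their predicted parity consistently
        return ((a & b) ^ c) ^ (a | c)

    for a in (0, 1):
        for b in (0, 1):
            for c in (0, 1):
                y0, y1 = function_logic(a, b, c)
                assert (y0 ^ y1) == parity_predictor(a, b, c)  # checker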


VLSI Test Symposium | 1996

Test point insertion based on path tracing

Nur A. Touba; Edward J. McCluskey

This paper presents an innovative method for inserting test points in the circuit-under-test to obtain complete fault coverage for a specified set of test patterns. Rather than using probabilistic techniques for test point placement, a path tracing procedure is used to place both control and observation points. Rather than adding extra scan elements to drive the control points, a few of the existing primary inputs to the circuit are ANDed together to form signals that drive the control points. By selecting which patterns the control point is activated for, the effectiveness of each control point is maximized. A comparison is made with the best previously published results for other test point insertion methods, and it is shown that the proposed method requires fewer test points and less overhead to achieve the same or better fault coverage.
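
Conceptually, such a control point behaves like the hypothetical sketch below: it is driven by the AND of a few existing primary inputs, so it activates only for patterns whose inputs make that AND true, and is transparent otherwise. Signal names and the forced value are illustrative assumptions.

    # Control point driven by ANDed primary inputs (conceptual model).
    def control_point(original_signal: int, enable_inputs: list[int],
                      forced_value: int) -> int:
        activate = all(enable_inputs)       # AND of selected primary inputs
        return forced_value if activate else original_signal

    # the pattern (pi3=1, pi7=1) activates the point, forcing the node to 1
    print(control_point(0, [1, 1], 1))   # -> 1 (point active)
    print(control_point(0, [1, 0], 1))   # -> 0 (transparent)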
