Subhadip Kundu
Indian Institute of Technology Kharagpur
Publications
Featured research published by Subhadip Kundu.
International Conference on VLSI Design | 2011
Subhadip Kundu; Santanu Chattopadhyay; Indranil Sengupta; Rohit Kapur
Fault diagnosis is extremely important for ramping up manufacturing yield and, in some cases, for reducing product debug time as well. In this paper, we propose a novel technique for multiple fault diagnosis based on multiple fault injection. Almost all conventional fault diagnosis methods simulate one fault (among the candidate faults) at a time and rank all candidate faults based on the number of failed patterns each fault can explain. However, a single fault injection cannot manifest the effect of the multiple faults that are present in the actual failing circuit. Thus, in this paper, we inject multiple faults simultaneously and perform an effect-cause analysis to find the possible list of faults. Experimental results validate our approach, showing high diagnosability and resolution, and the proposed method runs within moderate CPU time. We have been able to run simulations to diagnose up to 10 faults in a reasonable time; however, the scheme does not place any restriction on the number of simultaneous faults.
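A minimal sketch of the multi-fault ranking idea described in the abstract, assuming a user-supplied multi-fault simulator passed in as `simulate(pattern, fault_set)`; the exhaustive enumeration of candidate sets shown here is only illustrative, and the paper's actual effect-cause analysis avoids it.

```python
from itertools import combinations

def explained_patterns(simulate, fault_set, failing_patterns, observed):
    """Count failing patterns whose observed response is reproduced when the
    whole fault_set is injected simultaneously (multiple fault injection).
    `simulate(pattern, fault_set)` is a user-supplied multi-fault simulator."""
    return sum(1 for p in failing_patterns
               if simulate(p, fault_set) == observed[p])

def rank_candidate_sets(simulate, candidate_faults, failing_patterns,
                        observed, max_set_size=3):
    """Rank candidate fault sets by how many failing patterns they explain."""
    scored = []
    for size in range(1, max_set_size + 1):
        for fault_set in combinations(candidate_faults, size):
            score = explained_patterns(simulate, fault_set,
                                       failing_patterns, observed)
            scored.append((score, fault_set))
    scored.sort(key=lambda item: item[0], reverse=True)  # best explainers first
    return scored
```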
Advances in Recent Technologies in Communication and Computing | 2009
Subhadip Kundu; Santanu Chattopadhyay
Power consumption during test mode is much higher than in the normal mode of operation. This paper addresses the issue of assigning suitable values to the unspecified bits (don't cares) in the test patterns so that both static and dynamic power consumption during testing are reduced. We use a Genetic Algorithm (GA) based heuristic to fill the don't cares. For test patterns with unspecified bits in the ISCAS'89 benchmark suite, our approach produces average improvements of 31.9%, 37.0%, and 37.7% in dynamic power and 3.0%, 7.4%, and 5.3% in leakage power over the 0-fill, 1-fill, and MT-fill don't-care filling algorithms, respectively.
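A minimal sketch of GA-based don't-care filling, assuming the fitness is simply the number of adjacent bit transitions in the filled pattern (a proxy for shift power); the paper's actual fitness also accounts for leakage, and the GA parameters here are illustrative.

```python
import random

def fill(pattern, bits):
    """Replace each 'X' in pattern with the next bit from `bits`."""
    bits = iter(bits)
    return ''.join(next(bits) if c == 'X' else c for c in pattern)

def transitions(pattern):
    """Number of adjacent 0->1 / 1->0 transitions (proxy for shift power)."""
    return sum(a != b for a, b in zip(pattern, pattern[1:]))

def ga_fill(pattern, pop_size=30, generations=100, mutation_rate=0.05):
    """Genetic-algorithm search for a low-transition filling of the X bits."""
    n_x = pattern.count('X')
    if n_x == 0:
        return pattern
    pop = [[random.choice('01') for _ in range(n_x)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda ind: transitions(fill(pattern, ind)))
        survivors = pop[:pop_size // 2]              # truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            p1, p2 = random.sample(survivors, 2)
            cut = random.randrange(1, n_x) if n_x > 1 else 0
            child = p1[:cut] + p2[cut:]              # one-point crossover
            child = [random.choice('01') if random.random() < mutation_rate else b
                     for b in child]                 # bit-flip mutation
            children.append(child)
        pop = survivors + children
    best = min(pop, key=lambda ind: transitions(fill(pattern, ind)))
    return fill(pattern, best)

# Example: fill a pattern with unspecified bits.
print(ga_fill('1XX0X1XX0'))
```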
Design Automation Conference | 2013
Subhadip Kundu; Santanu Chattopadhyay; Indranil Sengupta; Rohit Kapur
Volume diagnosis is extremely important for ramping up yield during the IC manufacturing process. Limited observability due to test response compaction negatively affects the diagnosis procedure. Hence, in a compaction environment, it is important to implement a Design For Diagnosis (DFD) methodology to restore diagnostic resolution. In this paper, a novel DFD technique is proposed that makes the faulty chains behave as good chains during loading. As a result, the errors introduced into the responses must occur during unloading of the scan chains. Diagnosis can then be performed by directly comparing the actual and expected responses without any fault simulation, leading to a significant reduction in time. Results on benchmark circuits show that the average number of suspected cells for a single chain failure is 1.27 (the ideal value being 1) and the time taken for diagnosis is on the order of milliseconds.
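A minimal sketch of the direct-comparison step, assuming the faulty chain loads correctly (the DFD property described above) so that any mismatch must be introduced during unload: a fault at cell k can only corrupt unload bits that shift through it, so the suspect list is bounded by the earliest mismatch of each failing pattern. The actual diagnosis procedure narrows candidates much further; the cell indexing and bounding rule here are illustrative.

```python
def suspected_cells(expected_unloads, observed_unloads):
    """Compare expected vs. observed unload streams of one scan chain and
    return candidate faulty-cell positions.

    expected_unloads / observed_unloads: lists of equal-length bit strings,
    one per test pattern, ordered from the scan-out end (index 0) onwards.
    """
    candidates = None
    for exp, obs in zip(expected_unloads, observed_unloads):
        mismatches = {i for i, (e, o) in enumerate(zip(exp, obs)) if e != o}
        if not mismatches:
            continue                      # this pattern does not expose the fault
        # Errors appear only at positions that shift through the faulty cell
        # during unload, i.e. positions >= the fault position, so the fault
        # lies at or before the first mismatch of every failing pattern.
        bound = set(range(min(mismatches) + 1))
        candidates = bound if candidates is None else candidates & bound
    return sorted(candidates) if candidates else []

# Example with a single failing pattern: cells 0..3 remain suspects.
print(suspected_cells(['110100'], ['110001']))
```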
International Conference on Industrial and Information Systems | 2008
Subhadip Kundu; Santanu Chattopadhyay; Kanchan Manna
This paper addresses the issue of blocking pattern selection to reduce both leakage and peak power consumption during scan-based circuit testing. The blocking pattern is used to prevent scan-chain transitions from reaching the circuit inputs. Though this reduces dynamic power significantly, it can cause a considerable increase in leakage power and peak power. We present a novel approach to select a blocking pattern that reduces both peak and leakage power. The average improvement in peak power is 31.8% and that in leakage power is 13.5% (the best cases being around 51.2% and 24.9%, respectively) with respect to the all-1s vector.
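A minimal sketch of blocking-pattern selection as a search over candidate input patterns, assuming user-supplied cost estimators `leakage_cost(pattern)` and `peak_cost(pattern)` backed by a gate-level power model; the paper's actual selection procedure and cost models are not reproduced here.

```python
import random

def select_blocking_pattern(n_inputs, leakage_cost, peak_cost,
                            n_candidates=1000, w_leak=0.5, w_peak=0.5, seed=0):
    """Pick the blocking pattern (held at the circuit inputs while the scan
    chain shifts) that minimizes a weighted sum of estimated leakage and peak
    power. `leakage_cost` and `peak_cost` are user-supplied estimators."""
    rng = random.Random(seed)
    best, best_cost = None, float('inf')
    for _ in range(n_candidates):
        pattern = ''.join(rng.choice('01') for _ in range(n_inputs))
        cost = w_leak * leakage_cost(pattern) + w_peak * peak_cost(pattern)
        if cost < best_cost:
            best, best_cost = pattern, cost
    return best
```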
IEEE Transactions on Very Large Scale Integration Systems | 2014
Subhadip Kundu; Aniket Jha; Santanu Chattopadhyay; Indranil Sengupta; Rohit Kapur
This brief proposes a framework to analyze multiple faults based on multiple fault simulation in a particle swarm optimization environment. Experimentation shows that up to ten faults can be diagnosed in a reasonable time. However, the scheme does not put any restriction on the number of simultaneous faults.
International Conference on Computing, Communication and Networking Technologies | 2010
S Krishna Kumar; S. Kaundinya; Subhadip Kundu; Santanu Chattopadhyay
In sub-70 nm technologies, leakage power dominates dynamic power. Most power calculation methods account for dynamic power dissipation and static leakage power dissipation, but the runtime leakage is generally neglected. Recent studies have shown that the contribution of runtime leakage power to the total power dissipation is no longer negligible. The dynamic power dissipation, as well as the runtime leakage power, depends on the sequence in which the test vectors are fed to the circuit. This necessitates a pre-test phase to identify a sequence of test patterns that minimizes the total power. The vector reordering problem is NP-complete, and effective heuristic solutions have been proposed in the past. In this paper, we present an approach based on Particle Swarm Optimization (PSO) for vector reordering. PSO iteratively moves a set of particles, each corresponding to a candidate solution of the optimization problem, through a numerical search space, looking for the optimal position. Experiments on ISCAS'89 benchmark circuits validate the effectiveness of our work. Our approach obtained maximum savings of 69.75% in the total number of transitions, 45.83% in peak transitions, 68.05% in dynamic power, 42.56% in peak dynamic power, 0.38% in leakage power, and 59.58% in total power dissipation over the unordered test set.
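A minimal sketch of PSO-based test-vector reordering, assuming a random-key encoding (each particle holds one real value per vector, and sorting those values yields an ordering) and a cost equal to the total number of bit transitions between consecutive vectors; the paper's particle encoding and its full power model (including runtime leakage) are not reproduced here.

```python
import random

def transition_cost(order, vectors):
    """Total Hamming-distance transitions between consecutive vectors
    (a proxy for the dynamic power of the reordered test sequence)."""
    return sum(sum(a != b for a, b in zip(vectors[order[i]], vectors[order[i + 1]]))
               for i in range(len(order) - 1))

def pso_reorder(vectors, n_particles=20, iterations=200,
                w=0.7, c1=1.5, c2=1.5, seed=0):
    """PSO over random keys: sorting a particle's keys gives a vector
    ordering; the swarm searches for a low-transition ordering."""
    rng = random.Random(seed)
    n = len(vectors)
    decode = lambda keys: sorted(range(n), key=lambda i: keys[i])

    pos = [[rng.random() for _ in range(n)] for _ in range(n_particles)]
    vel = [[0.0] * n for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_cost = [transition_cost(decode(p), vectors) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_cost[i])
    gbest, gbest_cost = pbest[g][:], pbest_cost[g]

    for _ in range(iterations):
        for i in range(n_particles):
            for d in range(n):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            cost = transition_cost(decode(pos[i]), vectors)
            if cost < pbest_cost[i]:
                pbest[i], pbest_cost[i] = pos[i][:], cost
                if cost < gbest_cost:
                    gbest, gbest_cost = pos[i][:], cost
    return decode(gbest), gbest_cost

# Example: reorder a small test set to reduce transitions.
order, cost = pso_reorder(['1100', '0011', '1110', '0001'])
print(order, cost)
```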
Asian Test Symposium | 2009
Subhadip Kundu; S Krishna Kumar; Santanu Chattopadhyay
Test mode power dissipation has been found to be much higher than functional power dissipation. Since dynamic power dissipation contributes a major share of the heat generated, most studies have focused on reducing transitions during testing. But at submicron technologies, the leakage current becomes significantly high, which demands control of the leakage current as well. In this work, we propose techniques to simultaneously reduce the switching activity and keep the leakage current in check. The overall average reduction in switching activity is 70.01% and the reduction in leakage power is about 6.31%, the maximum being 99.33% in switching and 9.92% in leakage.
International Conference on VLSI Design | 2012
Subhadip Kundu; Santanu Chattopadhyay; Indranil Sengupta; Rohit Kapur
Diagnosis is the methodology for identifying the reason behind the failure of manufactured chips, and it is particularly important from the yield enhancement viewpoint. The primary focus of a diagnosis algorithm is to accurately narrow down the list of suspected candidates. But the effectiveness of any diagnosis algorithm depends on the test set in use: if the test set is not good enough to distinguish between fault pairs, the diagnosis algorithm will never be able to distinguish between a large number of faults. This problem leads us to look for a metric that can characterize test sets in terms of their diagnostic power. In the literature, several methods have been proposed for assessing the diagnostic power of a test set. Though these methods are accurate, their bottleneck is space and time complexity. Thus, given a number of test sets (with the same fault coverage) for a circuit, it is very difficult to select one of them for better diagnosis. In this paper, we propose a probability-based approach to derive a metric describing the diagnostic power of a test set. We call this metric the diagnosability of the test set for a given circuit. Our method uses almost 99% less space compared to the existing methods while remaining highly accurate.
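A minimal sketch of what the "diagnostic power of a test set" measures, shown here with the explicit (exact but memory-hungry) approach: faults are grouped by their failing-pattern signatures, and only faults alone in their group are fully distinguished. The paper's contribution is a probability-based metric that approximates this without storing the full signature information; the representation below is illustrative only.

```python
from collections import defaultdict

def diagnosability(fault_signatures):
    """Fraction of faults uniquely distinguished by the test set.

    fault_signatures: dict mapping fault -> frozenset of test-pattern indices
    that detect it (its failing-pattern signature). Faults sharing a signature
    cannot be told apart by this test set.
    """
    groups = defaultdict(list)
    for fault, signature in fault_signatures.items():
        groups[signature].append(fault)
    distinguished = sum(1 for members in groups.values() if len(members) == 1)
    return distinguished / len(fault_signatures)

# Example: f1 and f2 share a signature, so only 1 of 3 faults is distinguished.
sigs = {'f1': frozenset({0, 2}), 'f2': frozenset({0, 2}), 'f3': frozenset({1})}
print(diagnosability(sigs))   # 0.333...
```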
VLSI Design and Test | 2012
Bibhas Ghoshal; Subhadip Kundu; Indranil Sengupta; Santanu Chattopadhyay
Network-on-Chip (NoC) based Built-In Self-Test (BIST) architecture is an acceptable solution for testing embedded memory cores in Systems-on-Chip. Reusing the available on-chip network as the Test Access Mechanism brings down the area overhead as well as the test power. However, reducing the test time remains a problem due to the latency in transporting test instructions from the BIST circuit to the memory cores. We propose a NoC-based test architecture in which a number of BIST controllers are shared by the memory cores. A Particle Swarm Optimization (PSO) based technique is used (i) to place the BIST controllers at fixed locations and (ii) to form clusters of memories sharing the BIST controllers. This reduces the test-instruction transport latency, which in turn reduces the total test time of the memory cores. Experimental results on mesh-based NoCs of different sizes confirm the effectiveness of our PSO-based approach over heuristic techniques reported in the literature as well as those used in industry.
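A minimal sketch of the cost being optimized, assuming a 2D mesh where transport latency is proportional to Manhattan hop distance: given candidate BIST-controller tiles, each memory core is assigned to its nearest controller and the total latency is returned. A PSO, as in the paper, would search over controller placements to minimize this cost; the latency model here is illustrative.

```python
def hops(a, b):
    """Manhattan hop distance between two (x, y) tiles of a mesh NoC."""
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def cluster_cost(controller_tiles, memory_tiles):
    """Assign every memory core to its nearest BIST controller and return the
    total test-instruction transport latency plus the resulting clusters."""
    clusters = {c: [] for c in controller_tiles}
    total = 0
    for m in memory_tiles:
        nearest = min(controller_tiles, key=lambda c: hops(c, m))
        clusters[nearest].append(m)
        total += hops(nearest, m)
    return total, clusters

# Example: two controllers serving four memory cores on a 4x4 mesh.
total, clusters = cluster_cost([(0, 0), (3, 3)],
                               [(0, 1), (1, 0), (2, 3), (3, 2)])
print(total, clusters)
```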
International Journal of Computer Aided Engineering and Technology | 2012
Subhadip Kundu; Santanu Chattopadhyay
This paper presents two methods for reducing power consumption during testing. The first is to assign suitable values to the unspecified bits (don't cares) in the test patterns so that both static and dynamic power are reduced. The second addresses blocking pattern selection for reducing power consumption during scan-based circuit testing. The blocking pattern is used to prevent scan chain transitions from reaching the circuit inputs. Though this reduces dynamic power significantly, it can cause a considerable increase in leakage power. We present a novel approach that selects a blocking pattern using a genetic algorithm and applies it properly so that both dynamic and leakage power are reduced.