
Publication


Featured research published by Kanad Basu.


IEEE Transactions on Very Large Scale Integration (VLSI) Systems | 2010

Test Data Compression Using Efficient Bitmask and Dictionary Selection Methods

Kanad Basu; Prabhat Mishra

Higher circuit densities in system-on-chip (SOC) designs have led to a drastic increase in test data volume. Larger test data size demands not only higher memory requirements, but also an increase in testing time. Test data compression addresses this problem by reducing the test data volume without affecting the overall system performance. This paper proposes a novel test data compression technique using bitmasks, which provides a substantial improvement in compression efficiency without introducing any additional decompression penalty. The major contributions of this paper are as follows: 1) it develops an efficient bitmask selection technique for test data in order to create maximum matching patterns; 2) it develops an efficient dictionary selection method which takes bitmask-based compression into account; and 3) it proposes a test compression technique using efficient dictionary and bitmask selection to significantly reduce the testing time and memory requirements. We have applied our method to various test data sets and compared our results with other existing test compression techniques. Our algorithm outperforms existing dictionary-based approaches by up to 30%, giving a best possible test compression of 92%.
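The bitmask idea can be pictured with a small sketch. The Python fragment below is a minimal illustration, not the paper's implementation: a test word is encoded against a pre-selected dictionary entry whenever all mismatching bits fit inside one small mask window, and stored uncompressed otherwise. The word width, mask width, and encoding format are assumptions.

def bitmask_match(word, entry, mask_width):
    """Return the start position of a single mask_width-bit mask that covers
    every mismatch between word and entry, or None if no such mask exists."""
    mismatches = [i for i, (a, b) in enumerate(zip(word, entry)) if a != b]
    if not mismatches:
        return 0  # exact dictionary hit; the mask simply repeats the entry bits
    span = mismatches[-1] - mismatches[0] + 1
    return mismatches[0] if span <= mask_width else None

def compress(words, dictionary, mask_width=4):
    """Encode each word as a dictionary reference plus mask bits when possible,
    falling back to storing the word uncompressed."""
    encoded = []
    for w in words:
        for idx, entry in enumerate(dictionary):
            pos = bitmask_match(w, entry, mask_width)
            if pos is not None:
                # store the replaced bits; the decompressor overwrites the
                # dictionary entry at position pos with these bits
                encoded.append(("mask", idx, pos, w[pos:pos + mask_width]))
                break
        else:
            encoded.append(("raw", w))
    return encoded

# 8-bit test words against a two-entry dictionary
print(compress(["00001111", "00011111", "11110000"], ["00001111", "10101010"]))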


IEEE Transactions on Very Large Scale Integration (VLSI) Systems | 2013

RATS: Restoration-Aware Trace Signal Selection for Post-Silicon Validation

Kanad Basu; Prabhat Mishra

Post-silicon validation is one of the most important and expensive tasks in modern integrated circuit design methodology. The primary problem governing post-silicon validation is limited observability, since only a small number of signals can be stored in a trace buffer. The signals to be traced should therefore be carefully selected in order to maximize restoration of the remaining signals. Existing approaches have two major drawbacks: they depend on partial restorability computations that are not effective in restoring the maximum number of signal states, and they require long signal selection times due to inefficient computation as well as operating on the gate-level netlist. We propose a signal selection approach based on total restorability at the gate level, which is computationally more efficient (10 times faster) and can restore up to three times more signals compared to existing methods. We have also developed a register-transfer-level signal selection approach, which reduces both memory requirements and signal selection time by several orders of magnitude.
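The restoration that signal selection tries to maximize can be sketched as forward and backward implication over a gate-level netlist. The fragment below is a minimal illustration assuming a tiny AND/OR/NOT netlist and a single time step; the netlist format and implication rules are simplifications, not the RATS algorithm.

def restore(netlist, traced_values):
    """Infer as many signal values as possible from the traced ones by
    repeatedly applying forward and backward gate implications."""
    known = dict(traced_values)
    changed = True
    while changed:
        changed = False
        for out, (op, ins) in netlist.items():
            vals = [known.get(i) for i in ins]
            if out not in known:
                # forward implications: controlling value or fully known inputs
                if op == "NOT" and vals[0] is not None:
                    known[out] = 1 - vals[0]; changed = True
                elif op == "AND" and (0 in vals or all(v == 1 for v in vals)):
                    known[out] = 0 if 0 in vals else 1; changed = True
                elif op == "OR" and (1 in vals or all(v == 0 for v in vals)):
                    known[out] = 1 if 1 in vals else 0; changed = True
            else:
                # backward implications from a known output
                if op == "AND" and known[out] == 1:
                    for i in ins:
                        if i not in known:
                            known[i] = 1; changed = True
                elif op == "OR" and known[out] == 0:
                    for i in ins:
                        if i not in known:
                            known[i] = 0; changed = True
                elif op == "NOT" and ins[0] not in known:
                    known[ins[0]] = 1 - known[out]; changed = True
    return known

# Tracing only c restores a, b, and d when c = 1
netlist = {"c": ("AND", ["a", "b"]), "d": ("NOT", ["c"])}
print(restore(netlist, {"c": 1}))   # {'c': 1, 'a': 1, 'b': 1, 'd': 0}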


International Conference on VLSI Design | 2011

Efficient Trace Signal Selection for Post Silicon Validation and Debug

Kanad Basu; Prabhat Mishra

Post-silicon validation is an essential part of modern integrated circuit design, capturing bugs and design errors that escape the pre-silicon validation phase. A major problem governing post-silicon debug is the observability of internal signals, since the chip has already been manufactured. Storage requirements limit the number of signals that can be traced; a major challenge is therefore how to reconstruct the majority of the remaining signals based on the traced values. Existing approaches focus on selecting signals with an emphasis on partial restorability, which does not guarantee good signal restoration. We propose an approach that efficiently selects a set of signals based on a total restorability criterion. Our experimental results demonstrate that our signal selection algorithm is computationally more efficient and can restore up to three times more signals compared to existing methods.
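A common way to operationalize such a criterion is a greedy loop that repeatedly adds the candidate signal giving the largest estimated restoration. The sketch below is illustrative only: the score callback (for example, one built on the restore() sketch above), the candidate set, and the buffer width are assumptions, not the paper's algorithm.

def select_signals(candidates, buffer_width, score):
    """Greedily pick buffer_width signals, each time adding the candidate
    whose inclusion maximizes the estimated number of restored states.
    `score` maps a list of traced signals to that estimate."""
    traced = []
    remaining = set(candidates)
    for _ in range(min(buffer_width, len(remaining))):
        best = max(remaining, key=lambda s: score(traced + [s]))
        traced.append(best)
        remaining.remove(best)
    return traced

# Usage (hypothetical): select_signals(flip_flops, 32, estimate_restorability)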


VLSI Test Symposium | 2011

Efficient trace data compression using statically selected dictionary

Kanad Basu; Prabhat Mishra

Post-silicon validation and debug have gained importance in recent years for tracking down errors that have escaped the pre-silicon phase. Limited observability of internal signals during post-silicon debug necessitates the storage of signal states in real time, and trace buffers are used to store these states. To increase the debug observation window, it is essential to compress the trace signals so that trace data over a larger number of cycles can be stored in the trace buffer while keeping its size constant. In this paper, we propose several dictionary-based techniques for trace data compression that take into account the fact that the difference between golden and erroneous trace data is small. A static dictionary selected from the golden trace data can therefore provide notably better compression than the dynamically selected dictionaries used in current approaches, and it also significantly reduces the hardware overhead by reducing the dictionary size. Our experimental results demonstrate that our approach can provide up to 60% better compression compared to existing approaches, while reducing the architecture overhead by 84%.
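The core of the static-dictionary idea can be sketched in a few lines: the dictionary is fixed offline from the most frequent golden-trace words, and the observed trace is then coded against it. The word width, dictionary size, and encoding below are illustrative assumptions, not the paper's architecture.

from collections import Counter

def build_static_dictionary(golden_trace, size):
    """Fix the dictionary offline as the most frequent golden-trace words."""
    return [w for w, _ in Counter(golden_trace).most_common(size)]

def encode(trace, dictionary):
    """Emit ('hit', index) for dictionary words, ('miss', word) otherwise."""
    index = {w: i for i, w in enumerate(dictionary)}
    return [("hit", index[w]) if w in index else ("miss", w) for w in trace]

golden = ["1010", "1010", "0110", "1010", "0001"]
observed = ["1010", "0110", "1011", "1010"]   # close to, but not equal to, golden
print(encode(observed, build_static_dictionary(golden, size=2)))
# [('hit', 0), ('hit', 1), ('miss', '1011'), ('hit', 0)]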


Great Lakes Symposium on VLSI | 2008

A novel test-data compression technique using application-aware bitmask and dictionary selection methods

Kanad Basu; Prabhat Mishra

Higher circuit densities in system-on-chip (SOC) designs have led to an increase in test data volume. Larger test data size demands not only greater memory requirements, but also an increase in testing time. Test data compression addresses this problem by reducing the test data volume without affecting the overall system performance. This paper proposes a novel test data compression technique using bitmasks, which provides a significant improvement in compression efficiency without introducing any additional decompression penalty. The major contributions of this paper are as follows: i) it develops an efficient bitmask selection technique for test data in order to create maximum matching patterns; ii) it develops an efficient dictionary selection method which takes into account the speculated results of compressed codes; and iii) it proposes a suitable code compression technique using dictionary- and bitmask-based code compression that can reduce the memory and time requirements. We have applied our algorithm to various test data sets and compared our results with other existing test compression techniques. Our algorithm outperforms the best known existing compression technique by up to 30%, giving a best possible compression of 92.2%.
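Dictionary selection itself can be approximated with a greedy coverage heuristic: score each candidate word by how many other words it can cover through a single small bitmask, pick the best scorer, remove what it covers, and repeat. The sketch below is a simplified illustration; the mask width, dictionary size, and greedy scoring are assumptions rather than the paper's selection algorithm.

def covers(entry, word, mask_width=4):
    """True if all bits where word differs from entry fit in one mask window."""
    mism = [i for i, (a, b) in enumerate(zip(entry, word)) if a != b]
    return not mism or mism[-1] - mism[0] + 1 <= mask_width

def select_dictionary(words, size, mask_width=4):
    """Greedily pick entries that cover the most not-yet-covered words."""
    remaining, dictionary = list(words), []
    while remaining and len(dictionary) < size:
        best = max(set(remaining),
                   key=lambda e: sum(covers(e, w, mask_width) for w in remaining))
        dictionary.append(best)
        remaining = [w for w in remaining if not covers(best, w, mask_width)]
    return dictionary

test_words = ["00001111", "00001011", "11110000", "11110010", "01010101"]
print(select_dictionary(test_words, size=2))   # two entries cover four of five words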


International Conference on VLSI Design | 2013

Observability-aware Directed Test Generation for Soft Errors and Crosstalk Faults

Kanad Basu; Prabhat Mishra; Priyadarsan Patra

Post-silicon validation has emerged as an important component of any chip design methodology to detect both functional and electrical errors that have escaped the pre-silicon validation phase. In order to detect these escaped errors, both controllability and observability factors should be considered. Soft errors and crosstalk faults are two important electrical faults that can adversely affect the correct functionality of the chip. A major bottleneck with existing approaches is that they do not consider the inter-dependence of trace signal selection and test generation. In this paper, we explore the synergy between trace signal selection and observability-aware test generation to enable efficient detection of electrical errors, including soft errors and crosstalk faults. Our experimental results demonstrate that our approach significantly improves error detection compared to existing techniques: on average, by 58% for crosstalk faults and 48% for soft errors.
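One way to picture the interplay is to grade candidate tests only by what the trace buffer can actually observe: a fault counts as detected when the golden and faulty responses differ on at least one traced signal. The fragment below is a hypothetical sketch of that check; the simulator callbacks, fault set, and greedy test grading are assumptions, not the paper's flow.

def observable_at_trace(golden_response, faulty_response, traced_signals):
    """True if the fault effect reaches a signal that is actually traced."""
    return any(golden_response[s] != faulty_response[s] for s in traced_signals)

def grade_tests(tests, faults, simulate, inject_and_simulate, traced_signals):
    """Keep only tests that expose at least one not-yet-detected fault on the
    traced signals; simulate and inject_and_simulate are user-supplied
    callbacks returning {signal: value} response maps."""
    detected, selected = set(), []
    for test in tests:
        golden = simulate(test)
        newly = {f for f in faults if f not in detected and
                 observable_at_trace(golden, inject_and_simulate(test, f),
                                     traced_signals)}
        if newly:
            selected.append(test)
            detected |= newly
    return selected, detected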


Integration | 2013

Bitmask aware compression of NISC control words

Kanad Basu; Chetan Murthy; Prabhat Mishra

It is not always feasible to implement application-specific custom hardware due to cost and time considerations. The no-instruction-set-computer (NISC) architecture is one of the promising directions for designing a custom datapath for each application using its execution characteristics. A major challenge with NISC control words is that they tend to be at least 4-5 times larger than regular instructions, thereby imposing higher memory requirements. A possible solution is to compress these control words to reduce the code size of the application. This paper proposes an efficient bitmask-based compression technique to drastically reduce the control word size while keeping the decompression overhead within an acceptable range. The main contributions of our approach are (i) smart encoding of constant and less frequently changing bits, (ii) efficient don't-care resolution for maximum bitmask coverage using limited dictionary entries, (iii) run-length encoding to significantly reduce repetitive control words, and (iv) design of an efficient decompression engine to reduce the performance penalty. Our experimental results demonstrate that our approach improves compression efficiency by an average of 20% over the best known control word compression, giving a compression ratio of 25-35%. In addition, our technique requires only 1-3 on-chip RAMs, making it suitable for FPGA implementation.
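The run-length component is easy to picture: when consecutive control words repeat (common when a loop issues the same control word for many cycles), they collapse into a word plus a repeat count. The sketch below is a generic illustration of that step only, not the paper's encoding format.

from itertools import groupby

def rle_encode(control_words):
    """Collapse runs of identical control words into (word, count) pairs."""
    return [(word, sum(1 for _ in run)) for word, run in groupby(control_words)]

def rle_decode(pairs):
    """Expand (word, count) pairs back into the original sequence."""
    return [word for word, count in pairs for _ in range(count)]

words = ["1100", "1100", "1100", "0011", "1100"]
packed = rle_encode(words)
print(packed)                           # [('1100', 3), ('0011', 1), ('1100', 1)]
assert rle_decode(packed) == words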


Processor Description Languages: Applications and Methodologies | 2008

HMDES, ISDL, and Other Contemporary ADLs

Nirmalya Bandyopadhyay; Kanad Basu; Prabhat Mishra

This chapter describes various contemporary architecture description languages (ADLs), such as HMDES, ISDL, RADL, Sim-nML, UDL/I, Flexware, Valen-C, and TDL, and their associated methodologies for design automation of embedded processors. The machine description language HMDES captures the processor resources and their usage by the instruction set in a programmer- and compiler-friendly way. It also supports C-like preprocessing capabilities such as file inclusion, macro expansion, and conditional compilation. The language and the machine description (MD) libraries and tools are used to write complex machine descriptions and to capture the design as a hierarchical specification, where each level in the hierarchy can be placed in a separate MD file. The Instruction Set Description Language (ISDL) is a behavioral machine description language in which a compiler front end takes a source program written in C or C++. ISDL is used to enable various design automation tasks such as assembler generation, simulator and hardware generation, and compiler generation for exploration and rapid prototyping. The main contribution of this language is to capture the behavior of pipelines, inter-pipeline control, and data communication with ease and flexibility.


Archive | 2008

Lossless data compression and real-time decompression

Prabhat Mishra; Seok-Won Seong; Kanad Basu; Weixun Wang; Xiaoke Qin; Chetan Murthy


International Test Conference | 2011

Efficient combination of trace and scan signals for post silicon validation and debug

Kanad Basu; Prabhat Mishra; Priyadarsan Patra

Collaboration


Dive into Kanad Basu's collaboration.

Top Co-Authors

Ankit Jindal

Indian Institute of Technology Bombay

Binod Kumar

Indian Institute of Technology Bombay
