Publication


Featured research published by Kamran Rahmani.


International Symposium on Quality Electronic Design | 2014

Efficient trace signal selection using augmentation and ILP techniques

Kamran Rahmani; Prabhat Mishra; Sandip Ray

A key problem in post-silicon validation is to identify a small set of traceable signals that are effective for debug during silicon execution. Most signal selection techniques rely on a metric based on circuit structure. Simulation-based signal selection is promising but has major drawbacks in computation overhead and restoration quality. In this paper, we propose an efficient simulation-based signal selection technique to address these bottlenecks. Our approach uses (1) bounded mock simulations to determine state restoration effectiveness, and (2) an ILP-based algorithm for refining selected signals over different simulation runs. Experimental results demonstrate that our algorithm can provide a significantly better restoration ratio (up to 515%, 51% on average) compared to the state-of-the-art techniques.
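
Below is a minimal sketch of the refinement step, assuming per-signal restoration values have already been estimated from bounded mock simulations; the PuLP formulation, the value table, and the budget k are illustrative assumptions, not the paper's exact model:

```python
# Sketch: ILP-based refinement of trace-signal selection (illustrative).
# Assumes each signal's restoration value was estimated by bounded mock
# simulations; `estimated_value` and the budget `k` are hypothetical.
import pulp

def select_signals(estimated_value, k):
    """Pick at most k signals maximizing total estimated restoration value."""
    prob = pulp.LpProblem("trace_signal_selection", pulp.LpMaximize)
    x = {s: pulp.LpVariable(f"x_{s}", cat="Binary") for s in estimated_value}
    prob += pulp.lpSum(estimated_value[s] * x[s] for s in estimated_value)
    prob += pulp.lpSum(x.values()) <= k  # trace buffer width budget
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    return [s for s in estimated_value if x[s].value() == 1]

# Example: values from one round of mock simulations, buffer of width 2.
print(select_signals({"ff3": 4.1, "ff7": 2.6, "ff9": 3.8, "ff12": 1.2}, k=2))
```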


International Conference on Computer Design | 2013

Scalable trace signal selection using machine learning

Kamran Rahmani; Prabhat Mishra; Sandip Ray

A key problem in post-silicon validation is to identify a small set of traceable signals that are effective for debug during silicon execution. Structural analysis used by traditional signal selection techniques leads to poor restoration quality. In contrast, simulation-based selection techniques provide superior restorability but incur significant computation overhead. In this paper, we propose an efficient signal selection technique using machine learning to take advantage of simulation-based signal selection while significantly reducing the simulation overhead. Our approach uses (1) bounded mock simulations to generate the training vector set for the machine learning technique, and (2) an elimination approach to identify the most profitable set of signals. Experimental results indicate that our approach can improve restorability by up to 63.3% (17.2% on average) with a faster or comparable runtime.
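
A rough sketch of the elimination idea under stated assumptions: a regressor (here scikit-learn's RandomForestRegressor, our choice, not necessarily the paper's model) is trained on (selection vector, restoration ratio) pairs from bounded mock simulations, then used in place of simulation to drop the least profitable signal one at a time:

```python
# Sketch: elimination-based selection guided by a learned predictor.
# X rows are 0/1 signal-selection vectors from mock simulations; y holds
# the observed restoration ratios. Model and data shapes are assumptions.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def eliminate(X, y, n_signals, k):
    """Drop signals one at a time, keeping the k predicted-most-profitable."""
    model = RandomForestRegressor(n_estimators=100).fit(X, y)
    selected = list(range(n_signals))
    while len(selected) > k:
        def predicted_without(sig):
            v = np.zeros((1, n_signals))
            v[0, [s for s in selected if s != sig]] = 1
            return model.predict(v)[0]
        # Remove the signal whose absence hurts predicted restoration least.
        least_profitable = max(selected, key=predicted_without)
        selected.remove(least_profitable)
    return selected
```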


IEEE Transactions on Very Large Scale Integration (VLSI) Systems | 2016

Efficient Selection of Trace and Scan Signals for Post-Silicon Debug

Kamran Rahmani; Sudhi Proch; Prabhat Mishra

Post-silicon validation is a critical part of integrated circuit design methodology. The primary objective is to detect and eliminate the bugs that have escaped the pre-silicon validation phase. One of the key challenges in post-silicon validation is the limited observability of internal signals in manufactured chips. A promising direction to improve observability is to combine trace and scan signals: a small set of trace signals is stored every cycle, whereas a large set of scan signals is dumped across multiple cycles. Existing techniques are not very effective, since they explore a coarse-grained combination of trace and scan signals. In this paper, we propose a fine-grained architecture that addresses this issue using various scan chains with different dumping periods. We also propose efficient algorithms to select beneficial signals based on this architecture. Our experimental results demonstrate that our approach can improve the restoration ratio by up to 127% (36% on average) compared with existing trace-only techniques. Our approach also shows up to 125% improvement (61.7% on average) compared with techniques that allow a combination of trace and scan signals, with minor (<1%) area and power overhead.
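
As a rough illustration of the fine-grained idea (the buffer geometry below is hypothetical, not the paper's exact architecture): a trace signal consumes buffer bandwidth every cycle, while a scan chain dumped once every p cycles consumes only 1/p of that per signal, so the same buffer width can observe many more signals:

```python
# Sketch: per-cycle buffer bandwidth of a fine-grained trace/scan mix.
# Each scan chain dumps its signals once every `period` cycles, so it
# costs len(signals)/period buffer bits per cycle; trace signals cost 1.
def bandwidth(trace_signals, scan_chains):
    cost = len(trace_signals)  # stored every cycle
    for signals, period in scan_chains:
        cost += len(signals) / period
    return cost

# Hypothetical example: 8 traced signals plus two scan chains with
# different dumping periods fill a 16-bit-wide buffer exactly.
chains = [(["s%d" % i for i in range(32)], 8),  # 32 signals / period 8 -> 4 bits/cycle
          (["t%d" % i for i in range(16)], 4)]  # 16 signals / period 4 -> 4 bits/cycle
print(bandwidth(["f%d" % i for i in range(8)], chains))  # 16.0
```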


International Conference on VLSI Design | 2013

Efficient Signal Selection Using Fine-grained Combination of Scan and Trace Buffers

Kamran Rahmani; Prabhat Mishra

Post-silicon validation is a critical part of integrated circuit design methodology. The primary objective is to detect and eliminate the bugs that have escaped the pre-silicon validation phase. One of the key challenges in post-silicon validation is the limited observability of internal signals in manufactured chips. Leveraging on-chip buffers addresses this issue by storing some of the internal signal states during runtime. A promising direction to improve observability is to combine trace and scan signals: a small set of trace signals is stored every cycle, whereas a large set of scan signals is dumped across multiple cycles. Existing techniques are not very effective, since they explore a coarse-grained combination of trace and scan signals. In this paper, we propose a fine-grained architecture that addresses this issue by using various scan chains with different dumping periods. We also propose an efficient algorithm to select beneficial signals based on this architecture. Our experimental results demonstrate that our signal selection algorithm can improve the restoration ratio by up to 91% (32.3% on average) compared to existing trace-only techniques. Our approach also shows up to 116% improvement (54.7% on average) compared to techniques that allow a combination of trace and scan signals.
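
One way to picture the selection side (a hedged sketch, not the paper's algorithm): each candidate signal carries an estimated restoration value and a per-cycle bandwidth cost that depends on whether it is traced every cycle or placed on a scan chain with a longer dumping period; greedily packing by value density fills a fixed buffer width:

```python
# Sketch: greedy value-density packing of signals into a fixed buffer
# width. `candidates` maps signal -> (estimated_value, cost_per_cycle);
# the values and the greedy policy are illustrative assumptions.
def pack(candidates, width):
    chosen, used = [], 0.0
    for sig, (value, cost) in sorted(candidates.items(),
                                     key=lambda kv: kv[1][0] / kv[1][1],
                                     reverse=True):
        if used + cost <= width:
            chosen.append(sig)
            used += cost
    return chosen

# A traced signal costs 1 bit/cycle; a period-8 scan slot costs 1/8.
print(pack({"a": (5.0, 1.0), "b": (1.0, 0.125), "c": (3.0, 1.0)}, width=2))
```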


International Green Computing Conference and Workshops | 2011

Synergistic integration of dynamic cache reconfiguration and code compression in embedded systems

Hadi Hajimiri; Kamran Rahmani; Prabhat Mishra

Optimization techniques are widely used in embedded systems design to improve overall area, performance, and energy requirements. Dynamic cache reconfiguration is very effective in reducing the energy consumption of the cache subsystem, which accounts for about half of the total energy consumption in embedded systems. Various studies have shown that code compression can significantly reduce memory requirements and may improve performance in many scenarios. In this paper, we study the challenges and associated opportunities in integrating dynamic cache reconfiguration with code compression to retain the advantages of both approaches. Experimental results demonstrate that a synergistic combination of cache reconfiguration and code compression can significantly reduce both energy consumption (65% on average) and memory requirements while drastically improving overall performance (up to 75%) compared to dynamic cache reconfiguration alone.
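
A minimal sketch of the synergy under assumed numbers: compressing the code shrinks the instruction footprint, which can let a smaller, lower-energy cache configuration achieve an acceptable miss rate; the loop below picks the configuration minimizing a toy energy model (all profiling values below are hypothetical, not measurements from the paper):

```python
# Sketch: choose a cache configuration with and without code compression.
# Per-access energies and miss rates are made-up profiling data.
CONFIGS = {                    # config -> energy per access (nJ)
    "1KB_direct": 0.10, "2KB_2way": 0.18, "4KB_4way": 0.30,
}
MISS_RATE = {                  # (config, compressed?) -> miss rate
    ("1KB_direct", False): 0.20, ("1KB_direct", True): 0.08,
    ("2KB_2way", False): 0.09, ("2KB_2way", True): 0.03,
    ("4KB_4way", False): 0.02, ("4KB_4way", True): 0.01,
}
MISS_PENALTY_NJ = 2.0          # energy of fetching from the next level

def best(accesses, compressed):
    def energy(cfg):
        return accesses * (CONFIGS[cfg]
                           + MISS_RATE[(cfg, compressed)] * MISS_PENALTY_NJ)
    return min(CONFIGS, key=energy)

print(best(10**6, compressed=False))  # 4KB_4way: the big cache pays off
print(best(10**6, compressed=True))   # 2KB_2way: compression lets a smaller cache win
```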


IEEE Transactions on Very Large Scale Integration (VLSI) Systems | 2017

Postsilicon Trace Signal Selection Using Machine Learning Techniques

Kamran Rahmani; Sandip Ray; Prabhat Mishra

A key problem in postsilicon validation is to identify a small set of traceable signals that are effective for debug during silicon execution. Structural analysis used by traditional signal selection techniques leads to poor restoration quality. In contrast, simulation-based selection techniques provide superior restorability but incur significant computation overhead. In this paper, we propose an efficient signal selection technique using machine learning to take advantage of simulation-based signal selection while significantly reducing the simulation overhead. The basic idea is to train a machine learning framework with a few simulation runs and utilize its effective prediction capability (instead of expensive simulation) to identify beneficial trace signals. Specifically, our approach uses: 1) bounded mock simulations to generate training vectors for the machine learning technique and 2) a compound search-space exploration approach to identify the most profitable signals. Experimental results indicate that our approach can improve restorability by up to 143.1% (29.2% on average) while maintaining or improving runtime compared with the state-of-the-art signal selection techniques.
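
The compound exploration can be pictured roughly as follows (a sketch under assumptions: `predict_ratio` stands in for the trained model, and the swap policy is our illustration, not the paper's exact procedure): after an initial selection, local swaps exchange a selected signal for an unselected one whenever the model predicts an improvement:

```python
# Sketch: local-swap refinement of a signal selection, scored by a
# learned predictor instead of simulation. `predict_ratio(selection)`
# is a hypothetical stand-in for the trained model's prediction.
def refine(selected, others, predict_ratio):
    """Hill-climb over single-signal swaps until no swap helps."""
    improved = True
    while improved:
        improved = False
        base = predict_ratio(selected)
        for s_out in list(selected):
            for s_in in list(others):
                trial = (selected - {s_out}) | {s_in}
                if predict_ratio(trial) > base:
                    selected = trial
                    others = (others - {s_in}) | {s_out}
                    improved = True
                    break
            if improved:
                break
    return selected
```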


International Conference on VLSI Design | 2015

Efficient Peak Power Estimation Using Probabilistic Cost-Benefit Analysis

Hadi Hajimiri; Kamran Rahmani; Prabhat Mishra

Estimation of peak power consumption is an essential task in designing reliable systems. Optimistic design choices can make the circuit unreliable and vulnerable to power attacks, whereas pessimistic design can lead to unacceptable design overhead. The power virus problem is defined as finding input patterns that maximize switching activity (dynamic power dissipation) in digital circuits. In this paper, we present a fast, simple-to-implement power virus generation technique utilizing a probabilistic cost-benefit analysis. To maximize switching activity, our proposed algorithm iteratively enables transitions in high fan-out gates while considering the trade-off between the switching of new gates (benefit) and the blocking of gate transitions in future iterations (cost) due to switching of the currently selected gate. Extensive experiments using both combinational and sequential benchmarks demonstrate that our approach can achieve up to 64% more toggles (30.7% on average) for the zero-delay model and improvements of up to 319% (109% on average) for the unit-delay model compared to the state-of-the-art techniques.
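
A toy rendering of the cost-benefit idea (the netlist representation and scoring are assumptions, not the paper's exact formulation): each step enables a transition on the gate with the best benefit (fan-out toggles gained) minus cost (candidate transitions it would block):

```python
# Sketch: greedy cost-benefit ordering of gates to toggle. `fanout[g]`
# is the number of gates driven by g (benefit proxy); `blocks[g]` lists
# gates whose transitions become impossible once g toggles (cost proxy).
def power_virus_order(fanout, blocks):
    available, order = set(fanout), []
    while available:
        def score(g):
            blocked = sum(1 for b in blocks.get(g, ()) if b in available)
            return fanout[g] - blocked        # benefit minus cost
        g = max(available, key=score)
        order.append(g)
        available.discard(g)
        available -= set(blocks.get(g, ()))   # blocked gates can't toggle later
    return order

# Hypothetical 4-gate netlist where toggling g1 blocks g3.
print(power_virus_order({"g1": 5, "g2": 3, "g3": 2, "g4": 1},
                        {"g1": ["g3"]}))      # ['g1', 'g2', 'g4']
```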


IEEE Transactions on Emerging Topics in Computing | 2017

Feature-based Signal Selection for Post-silicon Debug using Machine Learning

Kamran Rahmani; Prabhat Mishra

A key challenge of post-silicon validation methodology is to select a limited number of trace signals that are effective during post-silicon debug. Structural analysis used by traditional signal selection techniques is fast but leads to poor restoration quality. In contrast, simulation-based selection techniques provide superior restorability but incur significant computation overhead. While early work on machine learning based signal selection is promising [1], it is still not applicable to large industrial designs, since it needs thousands of simulations of large and complex designs. In this paper, we propose a signal selection technique that addresses the scalability issue of simulation-based techniques while maintaining high restoration performance. The basic idea is to train a machine learning framework using a small set of circuits and apply the trained model to the bigger circuit under test, without any need to simulate the large industry-scale design. This paper makes two fundamental contributions: i) it is the first attempt to show that learning from small related circuits can be useful for signal selection, and ii) it is the first automated signal selection approach that is applicable to industrial designs without sacrificing restoration quality. Experimental results indicate that our approach can improve restorability by up to 135.4 percent (8.8 percent on average) while significantly reducing the runtime (up to 37X, 16.6X on average) compared to existing signal selection approaches.
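
A simplified sketch of the cross-design idea, with an assumed feature set (the paper's features are richer): structural features are extracted per flip-flop on small training circuits where restoration value is known from simulation, and the trained model then scores flip-flops of the large target without ever simulating it:

```python
# Sketch: train on small circuits, score flip-flops of a large design.
# The features (fan-in, fan-out, logic depth) and dict layout are
# illustrative assumptions; any regressor could stand in for this one.
from sklearn.ensemble import GradientBoostingRegressor

def features(ff):
    return [ff["fanin"], ff["fanout"], ff["depth"]]

def train(small_circuit_ffs, restoration_values):
    X = [features(ff) for ff in small_circuit_ffs]
    return GradientBoostingRegressor().fit(X, restoration_values)

def top_k(model, target_ffs, k):
    # Predict a restoration score per flip-flop, no target simulation needed.
    scores = model.predict([features(ff) for ff in target_ffs])
    ranked = sorted(zip(scores, target_ffs), key=lambda p: p[0], reverse=True)
    return [ff["name"] for _, ff in ranked[:k]]
```

Replacing thousands of target-design simulations with a single batch of predictions is where the reported runtime savings would come from.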


Great Lakes Symposium on VLSI | 2012

Memory-based computing for performance and energy improvement in multicore architectures

Kamran Rahmani; Prabhat Mishra; Swarup Bhunia

Memory-based computing (MBC) is promising for improving performance and energy efficiency in both data- and compute-intensive applications. In this paper, we propose a novel reconfigurable MBC framework for multicore architectures in which each core uses its cache for computation via lookup tables (LUTs). Experimental results demonstrate that on-demand memory-based computing in each core can significantly improve performance (up to 4.7X, 3.3X on average) as well as reduce energy consumption (up to 4.7X, 2X on average) in multicore architectures.
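
In spirit (a hedged software analogy; the actual framework reconfigures hardware cache ways, which software can only mimic): an expensive function is precomputed into a table small enough to stay cache-resident, so repeated evaluations become single lookups:

```python
# Sketch: software analogy of memory-based computing. A compute kernel
# is tabulated once over its small input domain; later calls are lookups.
def make_lut(func, domain):
    return {x: func(x) for x in domain}

def popcount8(x):                      # example compute kernel
    return bin(x & 0xFF).count("1")

LUT = make_lut(popcount8, range(256))  # 256 entries fit easily in L1 cache

assert LUT[0b10110101] == popcount8(0b10110101)  # lookup replaces computation
```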


Archive | 2019

Post-Silicon Signal Selection Using Machine Learning

Alif Ahmed; Kamran Rahmani; Prabhat Mishra

A key constraint in post-silicon validation and debug is the limited observability of internal signals. Observing only the primary outputs reveals the states of just a few internal signals. The most widely used on-chip testing infrastructure is the scan chain, which can provide reasonable observability. However, scan chains require loading input vectors in test mode and are therefore not suitable for signal tracing during normal execution. Instead, a small trace buffer is commonly used for this purpose. The size of the trace buffer is limited by area and energy constraints, so only a small number of signals can be traced. The goal of signal selection is to maximize observability by selecting the right set of signals (hundreds out of billions) for the trace buffer. Many signal selection techniques have been proposed over the years. Metric-based techniques perform static analysis on the design to select profitable signals. They often use greedy algorithms, which are fast but lead to poor restoration quality. In contrast, simulation-based selection techniques provide superior restorability but incur significant runtime overhead. A hybrid between these two approaches has also been proposed, which trades off some restoration performance to reduce runtime. Recently, machine learning based signal selection techniques have emerged as the most promising. This chapter describes two machine learning based signal selection methods. The first method trains a model to predict restoration quality based on the selected signals. This method improves runtime by performing only a small number of simulations for training. The second method demonstrates how runtime can be further improved by running simulations on small-scale designs with characteristics similar to the actual design.
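
The quality metric used throughout this line of work is the restoration ratio, the number of traced-plus-restored states per traced state. As a quick illustration (the tiny state matrices below are hypothetical):

```python
# Sketch: restoration ratio = (traced + restored states) / traced states.
# `known` marks flip-flop states per cycle that are either traced
# directly or inferred (restored) from traced values via circuit logic.
def restoration_ratio(known, traced):
    restored_or_traced = sum(sum(row) for row in known)
    traced_only = sum(sum(row) for row in traced)
    return restored_or_traced / traced_only

# 3 flip-flops over 4 cycles: one traced signal, two partially restored.
traced = [[1, 1, 1, 1], [0, 0, 0, 0], [0, 0, 0, 0]]
known  = [[1, 1, 1, 1], [1, 0, 1, 0], [1, 1, 0, 0]]
print(restoration_ratio(known, traced))  # 8 / 4 = 2.0
```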
