Michael Orshansky
University of Texas at Austin
Publication
Featured research published by Michael Orshansky.
custom integrated circuits conference | 2000
Yu Cao; Takashi Sato; Michael Orshansky; Dennis Sylvester; Chenming Hu
A new paradigm of predictive MOSFET and interconnect modeling is introduced. This approach is developed specifically to provide SPICE-compatible parameters for future technology generations. For a given technology node, designers can use default values or directly input L_eff, T_ox, V_t, R_dsw, and interconnect dimensions to instantly obtain a customized BSIM3v3 model for early stages of circuit design and research. Models for the 0.18 µm and 0.13 µm technology nodes, with L_eff down to 70 nm, are currently available on the web. Comparisons with published data and 2-D simulations are used to verify this predictive technology model.
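The workflow the abstract describes, overriding a handful of physical inputs on top of per-node defaults to obtain a usable early-design model card, can be sketched in a few lines. The parameter names and default values below are illustrative placeholders, not the published model's values:

```python
# Hypothetical sketch of the predictive-model workflow: start from default
# parameters for a technology node and override the few physical inputs
# (L_eff, T_ox, V_t, R_dsw) to emit a customized BSIM3v3-style model card.
# All names and numbers here are placeholders, not the actual defaults.

DEFAULTS_130NM = {"lint": 2.5e-8, "tox": 3.3e-9, "vth0": 0.30, "rdsw": 250.0}

def custom_model_card(name="nmos_pred", **overrides):
    params = {**DEFAULTS_130NM, **overrides}   # user inputs win over defaults
    card = [f".model {name} nmos level=49"]    # level 49 = BSIM3v3
    card += [f"+ {k}={v:g}" for k, v in sorted(params.items())]
    return "\n".join(card)

# A designer exploring a thinner oxide and a lower threshold:
print(custom_model_card(tox=2.0e-9, vth0=0.25))
```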
european test symposium | 2013
Jie Han; Michael Orshansky
Approximate computing has recently emerged as a promising approach to energy-efficient design of digital systems. Approximate computing relies on the ability of many systems and applications to tolerate some loss of quality or optimality in the computed result. By relaxing the need for fully precise or completely deterministic operations, approximate computing techniques allow substantially improved energy efficiency. This paper reviews recent progress in the area, including design of approximate arithmetic blocks, pertinent error and quality measures, and algorithm-level techniques for approximate computing.
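One classic example of the approximate arithmetic blocks such surveys cover is a lower-part-OR adder, which replaces the low-order carry chain with a bitwise OR, trading bounded error for a shorter critical path and lower energy. The sketch below is a generic textbook construction, not a design from this paper:

```python
# A minimal sketch of a lower-part-OR adder (LOA): the low k bits are ORed
# instead of added, so no carry propagates out of the lower part.

def loa_add(a, b, k=4, width=16):
    mask = (1 << k) - 1
    low = (a | b) & mask                  # approximate: OR replaces addition
    high = ((a >> k) + (b >> k)) << k     # exact add on the upper bits
    return (high | low) & ((1 << width) - 1)

a, b = 0x1234, 0x0FF7
print(hex(a + b), hex(loa_add(a, b)))     # exact vs. approximate sum
```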
design automation conference | 2002
Michael Orshansky; Kurt Keutzer
The traditional approach to worst-case static timing analysis is becoming unacceptably conservative due to an ever-increasing number of circuit and process effects. We propose a fundamentally different framework that aims to significantly improve the accuracy of timing predictions through fully probabilistic analysis of gate and path delays. We describe a bottom-up approach for constructing the joint probability density function of path delays, and present novel analytical and algorithmic methods for finding the full distribution of the maximum of a random path-delay space with arbitrary path correlations.
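The central object of the framework, the distribution of the maximum over correlated random path delays, is easy to visualize with a Monte Carlo stand-in for the paper's analytical methods. The means and covariance below are invented numbers, and numpy is assumed:

```python
# Monte Carlo estimate of the distribution of max over jointly Gaussian,
# correlated path delays. All numbers are made up for illustration.
import numpy as np

rng = np.random.default_rng(0)
mu = np.array([95.0, 100.0, 98.0])            # mean path delays (ps)
cov = np.array([[25.0, 15.0, 10.0],           # off-diagonal terms model
                [15.0, 25.0, 12.0],           # shared gates / process
                [10.0, 12.0, 25.0]])          # variables between paths
delays = rng.multivariate_normal(mu, cov, size=200_000)
circuit_delay = delays.max(axis=1)            # circuit delay = max over paths

print(f"mean of max = {circuit_delay.mean():.1f} ps, "
      f"99th percentile = {np.percentile(circuit_delay, 99):.1f} ps")
```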
IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems | 2002
Michael Orshansky; Linda Milor; Pinhong Chen; Kurt Keutzer; Chenming Hu
In this paper, we address, both empirically and theoretically, the impact of an advanced manufacturing phenomenon on the performance of high-speed digital circuits. Using data collected from an actual state-of-the-art fabrication facility, we conducted a comprehensive characterization of an advanced 0.18-µm CMOS process. The measured data revealed a significant systematic, rather than random, spatial intrachip variability of MOS gate length, leading to large circuit path delay variation. The delay of the critical path of a combinational logic block varies by as much as 17%, and the global skew is increased by 8%. Thus, significant timing error and performance loss take place if variability is not properly addressed. We derive a model that allows estimation of performance degradation for given circuit and process parameters. We demonstrate explicitly that intrachip Lgate variation has a significant detrimental impact on overall circuit performance, shifting the entire distribution of clock frequencies toward slower values. This is in striking contrast to the impact of interchip Lgate variation, traditionally considered in statistical circuit analysis, which leads to variation of chip clock frequencies around the average value. Moreover, analysis shows that the spatial, rather than proximity-dependent, systematic Lgate variability is the main cause of large circuit speed degradation. The degradation is worse for circuits with a larger number of critical paths and a shorter average logic depth. We propose a location-dependent timing analysis methodology that mitigates the detrimental effects of Lgate variability, and we have developed a tool linking layout-dependent spatial information to circuit analysis. We discuss the details of a practical implementation of the methodology and provide guidelines for managing design complexity.
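The paper's key contrast, intra-chip versus inter-chip Lgate variation, can be illustrated with a toy Monte Carlo experiment (all numbers invented): a shared per-die shift leaves the expected maximum path delay centered at nominal, while independent per-path variation systematically pushes every die's maximum toward slower values:

```python
# Toy experiment: inter-chip variation shifts all paths on a die together,
# so the max over N critical paths varies around nominal; independent
# intra-chip variation shifts the mean of the max itself toward slow.
import numpy as np

rng = np.random.default_rng(1)
n_dies, n_paths, nominal, sigma = 20_000, 50, 100.0, 3.0

inter = nominal + sigma * rng.standard_normal((n_dies, 1))        # one shift per die
intra = nominal + sigma * rng.standard_normal((n_dies, n_paths))  # per-path shifts

print("inter-chip: mean max = %.1f ps" % inter.max(axis=1).mean())  # ~100: centered
print("intra-chip: mean max = %.1f ps" % intra.max(axis=1).mean())  # >100: slower
```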
high-performance computer architecture | 2006
Kypros Constantinides; Stephen M. Plaza; Jason A. Blome; Bin Zhang; Valeria Bertacco; Scott A. Mahlke; Todd M. Austin; Michael Orshansky
As silicon technologies move into the nanometer regime, transistor reliability is expected to wane as devices become subject to extreme process variation, particle-induced transient errors, and transistor wear-out. Unless these challenges are addressed, computer vendors can expect low yields and short mean times to failure. In this paper, we examine the challenges of designing complex computing systems in the presence of transient and permanent faults. We select one small aspect of a typical chip multiprocessor (CMP) system to study in detail: a single CMP router switch. To start, we develop a unified model of faults based on the time-tested bathtub curve. Using this convenient abstraction, we analyze the reliability-versus-area tradeoff across a wide spectrum of CMP switch designs, ranging from unprotected designs to fully protected designs with online repair and recovery capabilities. Protection is considered at multiple levels, from the entire system down through arbitrary partitions of the design. To better understand the impact of these faults, we evaluate our CMP switch designs using circuit-level timing on detailed physical layouts. Our experimental results are quite illuminating: using domain-specific techniques such as end-to-end error detection, resource sparing, automatic circuit decomposition, and iterative diagnosis and reconfiguration, designs can tolerate a larger number of defects with less overhead than naive triple modular redundancy.
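A minimal sketch of the bathtub-curve fault abstraction the analysis builds on: the failure rate is the sum of a decreasing infant-mortality term, a constant random-fault term, and an increasing wear-out term. The Weibull-style parameterization and constants are illustrative, not the paper's calibrated values:

```python
# Illustrative bathtub-curve failure-rate model (all constants invented).

def bathtub_rate(t, infant=0.05, beta_i=0.5, rand=0.002,
                 wear=1e-12, beta_w=3.0):
    return (infant * beta_i * t ** (beta_i - 1)   # infant mortality, decreasing
            + rand                                # constant random/transient faults
            + wear * beta_w * t ** (beta_w - 1))  # wear-out, increasing

for t in (1, 100, 10_000, 100_000):               # device hours
    print(f"t = {t:>7} h, lambda = {bathtub_rate(t):.6f} per hour")
```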
international symposium on low power electronics and design | 2003
David N. Nguyen; Abhijit Davare; Michael Orshansky; David Chinnery; Brandon Thompson; Kurt Keutzer
We describe an optimization strategy for minimizing total power consumption using dual-threshold-voltage (Vth) technology. Significant power savings are possible through simultaneous Vth assignment and gate sizing. We propose an efficient algorithm based on linear programming that jointly performs Vth assignment and gate sizing to minimize total power under delay constraints. First, linear programming assigns optimal amounts of slack to gates based on power-delay sensitivity. Then, an optimal gate configuration, in terms of Vth and transistor sizes, is selected by an exhaustive local search. Benchmark results for the algorithm show a 32% reduction in power consumption on average, compared to sizing-only power minimization, with up to a 57% reduction for some circuits. The flow can be extended to dual-supply-voltage libraries to yield further power savings.
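The first phase of the flow, LP-based slack assignment weighted by power-delay sensitivity, can be sketched on a toy instance. The 3-gate, 2-path example and all numbers are invented, and scipy is assumed available:

```python
# Distribute timing slack to gates by LP so slack goes where it saves the
# most power; a downstream local search would then pick Vth/sizes that
# consume the assigned slack. Toy data, not from the paper.
from scipy.optimize import linprog

sens = [5.0, 1.0, 3.0]         # power saved per ps of slack at each gate
paths = [[0, 1], [1, 2]]       # gate indices along each timing path
budget = [10.0, 8.0]           # slack available on each path (ps)

# maximize sum(sens[i] * x[i])  <=>  minimize -sens . x
A = [[1.0 if g in p else 0.0 for g in range(len(sens))] for p in paths]
res = linprog(c=[-s for s in sens], A_ub=A, b_ub=budget,
              bounds=[(0, None)] * len(sens))
print("slack per gate (ps):", res.x)
```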
design automation conference | 2005
Murari Mani; Anirudh Devgan; Michael Orshansky
Power minimization under variability is formulated as a rigorous statistical robust optimization program with a guarantee of power and timing yields. Both power and timing metrics are treated probabilistically. Power reduction is performed by simultaneous sizing and dual threshold voltage assignment. An extremely fast run time is achieved by casting the problem as a second-order conic problem and solving it using efficient interior-point optimization methods. When compared to deterministic optimization, the new algorithm, on average, reduces static power by 31% and total power by 17% without loss of parametric yield. The run time on a variety of public and industrial benchmarks is 30× faster than other known statistical power minimization algorithms.
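To see why the formulation is a second-order cone program: a timing-yield constraint of the form mean delay + k·(delay standard deviation) ≤ Tmax, with the standard deviation expressed as a 2-norm of per-gate sensitivity terms, is exactly an SOC constraint. A toy version with invented data, assuming the cvxpy modeling library, might look like this:

```python
# Toy robust power minimization as an SOCP; all problem data are invented.
import cvxpy as cp
import numpy as np

n = 4
x = cp.Variable(n, nonneg=True)               # continuous sizing variables
power = np.array([2.0, 1.5, 3.0, 2.5])        # power cost per unit size
d0 = 40.0                                     # nominal unsized delay
dslope = np.array([-2.0, -1.0, -1.5, -0.5])   # delay reduction per unit size
sigma = np.array([0.2, 0.15, 0.25, 0.1])      # delay std sensitivities
k, Tmax = 3.0, 30.0                           # ~3-sigma timing yield target

mean_delay = d0 + dslope @ x
constraints = [mean_delay + k * cp.norm(cp.multiply(sigma, x), 2) <= Tmax,
               x <= 5]                        # SOC yield constraint + bounds
prob = cp.Problem(cp.Minimize(power @ x), constraints)
prob.solve()
print("sizes:", x.value, " power:", prob.value)
```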
IEEE Transactions on Semiconductor Manufacturing | 2004
Michael Orshansky; Linda Milor; Chenming Hu
The authors present a comprehensive characterization method applied to the study of a state-of-the-art 0.18-µm CMOS process. Statistical characterization of gate CD reveals a large spatial intrafield component, strongly dependent on the local layout patterns. The authors describe the statistical analysis of this data and demonstrate the need for such comprehensive characterization. They describe the experimental setup of the novel measurement-based characterization approach, which is capable of capturing all the relevant CD variation patterns necessary for accurate circuit modeling and statistical design for increased performance and yield. Characterization is based upon an inexpensive electrical measurement technique. A rigorous statistical analysis of the impact of intrafield variability on circuit performance is undertaken. They show that intrafield CD variation has a significant detrimental effect on overall circuit performance that may be as high as 25%. Moreover, they demonstrate that the spatial component of gate CD variability, rather than the proximity-dependent component, is predominantly responsible for speed degradation. To reduce the degradation of circuit performance and yield, the authors propose a mask-level spatial gate CD correction algorithm that reduces intrafield and overall variability, and they provide an analytical model to evaluate the effectiveness of the correction for variance reduction. They believe that potentially significant benefits can be achieved through implementation of this compensation technique in a production environment.
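The variance-reduction idea behind the proposed correction, estimating the smooth systematic across-field CD surface and biasing it out, can be sketched on synthetic data (everything below is invented for illustration):

```python
# Fit a low-order polynomial surface to a synthetic CD map as the estimate
# of the systematic intrafield component, then subtract it; the residual
# variance is what remains after a mask-level correction of this kind.
import numpy as np

rng = np.random.default_rng(2)
x = np.linspace(-1, 1, 21)
xx, yy = np.meshgrid(x, x)
systematic = 3.0 * xx**2 - 2.0 * yy                         # smooth signature (nm)
measured = 180 + systematic + rng.normal(0, 1.0, xx.shape)  # noisy CD map (nm)

A = np.column_stack([np.ones(xx.size), xx.ravel(), yy.ravel(),
                     (xx**2).ravel(), (yy**2).ravel(), (xx*yy).ravel()])
coef, *_ = np.linalg.lstsq(A, measured.ravel(), rcond=None)
corrected = measured - (A @ coef).reshape(xx.shape) + coef[0]  # keep nominal level

print(f"CD sigma before: {measured.std():.2f} nm, "
      f"after correction: {corrected.std():.2f} nm")
```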
international conference on computer aided design | 2000
Michael Orshansky; Linda Milor; Pinhong Chen; Kurt Keutzer; Chenming Hu
Using data collected from an actual state-of-the-art fabrication facility, we conducted a comprehensive characterization of an advanced 0.18-µm CMOS process. The measured data revealed significant systematic, rather than random, spatial intra-chip variability of MOS gate length, leading to large circuit path delay variation. The critical-path delay of a combinational logic block varies by as much as 17%, and the global skew is increased by 8%. Thus, a significant timing error (~25%) and performance loss take place if variability is not properly addressed. We derive a model that allows estimation of performance degradation for given circuit and process parameters. Analysis shows that the spatial, rather than proximity-dependent, systematic Lgate variability is the main cause of large circuit speed degradation. The degradation is worse for circuits with a larger number of critical paths and a shorter average logic depth. We propose a location-dependent timing analysis methodology that mitigates the detrimental effects of Lgate variability, and we developed a tool linking the layout-dependent spatial information to circuit analysis. We discuss the details of the practical implementation of the methodology and provide guidelines for managing design complexity.
international conference on computer aided design | 2006
Bin Zhang; Ari Arapostathis; Sani R. Nassif; Michael Orshansky
In this paper, for the first time, a theory for evaluating dynamic noise margins of SRAM cells is developed analytically. The results allow predicting the transient-error susceptibility of an SRAM cell using a closed-form expression. The key innovation is the use of methods from nonlinear system theory in developing the model. It is shown that when a transient noise of given magnitude affects a sensitive node of a cell, the bistable, feedback-driven nature of the cell determines whether the noise will be suppressed or will evolve to eventually flip the state. The specific formal and quantitative result is a closed-form expression that can be used to predict whether a cell flip will occur for a noise signal with specific characteristics and a given SRAM cell design. Experiments show an excellent match between the analytical prediction and SPICE simulation results.
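The qualitative mechanism, a bistable cell that either suppresses a transient or crosses its separatrix and flips, can be reproduced numerically with an idealized toy model (not the paper's closed-form theory): two cross-coupled inverters, a current pulse on one storage node, and a check of the final state. All device parameters are arbitrary normalized values, and scipy is assumed:

```python
# Toy dynamic noise-margin experiment: does a current pulse of a given
# magnitude and duration flip an idealized bistable cell?
import numpy as np
from scipy.integrate import solve_ivp

def inv(v):                       # idealized inverter VTC, Vdd = 1
    return 1.0 / (1.0 + np.exp(20.0 * (v - 0.5)))

def cell(t, v, i_noise, t0, dur):
    v1, v2 = v
    pulse = i_noise if t0 <= t <= t0 + dur else 0.0
    return [(inv(v2) - v1) + pulse,   # node 1: driven by inverter 2, plus noise
            (inv(v1) - v2)]           # node 2: driven by inverter 1

for i_noise in (0.3, 1.0):            # weak vs. strong transient
    sol = solve_ivp(cell, (0, 20), [1.0, 0.0], args=(-i_noise, 2.0, 2.0),
                    max_step=0.05)    # small steps so the pulse is sampled
    v1_final = sol.y[0, -1]
    print(f"pulse {i_noise}: v1 = {v1_final:.2f} "
          f"({'flipped' if v1_final < 0.5 else 'recovered'})")
```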