Publication


Featured research published by Peter Lisherness.


International Test Conference | 2008

A Cost Analysis Framework for Multi-core Systems with Spares

Saeed Shamshiri; Peter Lisherness; Sung-Jui Pan; Kwang-Ting Cheng

It is increasingly difficult to achieve a high manufacturing yield for multi-core chips due to larger chip sizes, higher device densities, and greater failure rates. By adding a limited number of spare cores to replace defective cores, either before shipment or in the field, the effective yield of the chip and its overall cost can be significantly improved. In this paper, we propose a yield and cost analysis framework to better understand the dependency of a multi-core chip's cost on key parameters such as the number of cores and spares, core yield, and the defect coverage of manufacturing and in-field testing. Our analysis shows that the burn-in process can be eliminated when spare cores are available for in-field recovery. We demonstrate that high defect coverage for in-field testing, a necessity for supporting in-field recovery, is essential for overall cost reduction. We also show that, with in-field recovery capability, the reliance on high-quality manufacturing testing is significantly reduced.
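As a rough illustration of the kind of model such a framework rests on, chip-level yield with spares follows a simple binomial form, assuming independent core defects. The paper's full cost model also accounts for test coverage and burn-in; this sketch and its numbers are ours, not the authors':

```python
from math import comb

def effective_yield(n_cores: int, n_spares: int, core_yield: float) -> float:
    """Probability that at least n_cores of the (n_cores + n_spares)
    fabricated cores are defect-free, assuming independent core defects."""
    total = n_cores + n_spares
    return sum(
        comb(total, k) * core_yield**k * (1 - core_yield) ** (total - k)
        for k in range(n_cores, total + 1)
    )

# With 8 cores at 90% per-core yield, two spares lift the chip-level yield:
print(effective_yield(8, 0, 0.9))  # ~0.430 without spares
print(effective_yield(8, 2, 0.9))  # ~0.930 with two spares
```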


IEEE/OSA Journal of Optical Communications and Networking | 2012

Power-efficient calibration and reconfiguration for optical network-on-chip

Yan Zheng; Peter Lisherness; Ming Gao; Jock Bovington; Kwang-Ting Cheng; Hong Wang; Shiyuan Yang

Recent advances in nanophotonic fabrication have made the optical network-on-chip an attractive interconnect option for next-generation multi-/many-core systems, providing high bandwidth and power efficiency. Both post-fabrication and runtime calibration of the optical components (ring resonators) are essential to building a robust optical communication system, as they are highly sensitive to process and thermal variation. Existing tuning methods based on bias voltage and temperature adjustment require excessive power to fully compensate for these variations. In this work, we propose a set of complementary techniques that address this challenge and significantly reduce the tuning power consumption: 1) a subchannel remapping scheme that decreases the required tuning range from the free spectral range to less than one channel spacing (typically less than 1 nm); and 2) a transceiver-based network topology that requires far fewer rings to be built and tuned while maintaining the same system throughput. Our results show that the proposed methods can together reduce the tuning power by as much as 99.85%.
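The remapping idea can be illustrated with a toy model: instead of forcing each ring to tune all the way to a fixed channel, the channel assignment is shifted so each ring reaches a nearby channel. The numbers and one-directional (red-shift only) tuning assumption below are ours, not the paper's:

```python
FSR = 10.0                             # free spectral range (arbitrary units)
CHANNELS = [0.0, 2.0, 4.0, 6.0, 8.0]   # target channel wavelengths

def red_shift(target, resonance):
    # rings are tuned in one direction (e.g. heating red-shifts the
    # resonance), so the required shift wraps around the FSR
    return (target - resonance) % FSR

def tuning_fixed(rings):
    # fixed assignment: ring i must reach channel i
    return [red_shift(CHANNELS[i], r) for i, r in enumerate(rings)]

def tuning_remapped(rings):
    # remapping: cyclically shift which channel each ring serves and keep
    # the shift that minimizes the worst-case tuning distance
    n = len(rings)
    candidates = (
        [red_shift(CHANNELS[(i + s) % n], r) for i, r in enumerate(rings)]
        for s in range(n)
    )
    return min(candidates, key=max)

# a global process shift of 7 units: the fixed assignment needs 7 units of
# tuning per ring, while remapping needs at most one channel spacing
rings = [(c - 7.0) % FSR for c in CHANNELS]
print(max(tuning_fixed(rings)), max(tuning_remapped(rings)))  # → 7.0 1.0
```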


IEEE Transactions on Computers | 2011

Time-Multiplexed Online Checking

Ming Gao; Hsiu-Ming Chang; Peter Lisherness; Kwang-Ting Cheng

There is a growing demand for online hardware checking capability to cope with increasing in-field failures resulting from variability and reliability problems. While many online checking schemes have been proposed, their area overhead remains too high for cost-sensitive applications. In this paper, we introduce a Time-Multiplexed Online Checking (TMOC) scheme that uses embedded field-programmable blocks for checker implementation, enabling various parts of a system to be checked dynamically, in the field, in a time-multiplexed fashion. Test quality analyses using a probabilistic model show that TMOC can maintain fault coverage similar to that of traditional dedicated checkers. A case study of an H.264 decoder design demonstrates that the TMOC scheme significantly reduces the chip area and power overhead of online checkers at the cost of increased fault detection latency. We have implemented and demonstrated the proposed scheme on a single Field-Programmable Gate Array (FPGA) chip.
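The area-versus-latency trade described above can be seen in a toy round-robin scheduler: one reconfigurable checker is shared among N blocks, so a fault may go unobserved for up to one full rotation. The block names and latency model are illustrative assumptions, not taken from the paper:

```python
from itertools import cycle, islice

def tmoc_schedule(blocks, n_slots):
    """Round-robin time multiplexing: one embedded field-programmable
    checker is reconfigured each slot to watch a different block, so N
    blocks share a single checker instead of needing N dedicated ones."""
    return list(islice(cycle(blocks), n_slots))

def worst_case_latency(blocks, slot_time):
    # a permanent fault in any block is observed within one full rotation
    return len(blocks) * slot_time

print(tmoc_schedule(["alu", "decode", "mem"], 7))
# → ['alu', 'decode', 'mem', 'alu', 'decode', 'mem', 'alu']
```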


Design, Automation and Test in Europe | 2012

Power-efficient calibration and reconfiguration for on-chip optical communication

Yan Zheng; Peter Lisherness; Ming Gao; Jock Bovington; Shiyuan Yang; Kwang-Ting Cheng

On-chip optical communication infrastructure has been proposed to provide higher bandwidth and lower power consumption for next-generation high-performance multicore systems. Online calibration of the optical components is essential to building a robust optical communication system, as they are highly sensitive to process and thermal variation. However, the power consumption of existing tuning methods needed to properly calibrate the optical devices would be prohibitively high. We propose two calibration and reconfiguration techniques that can significantly reduce the worst-case tuning power of a ring-resonator-based optical modulator: 1) a channel remapping scheme, with sub-channel redundant resonators, which significantly reduces the amount of required tuning, typically to within the capability of voltage-based tuning; and 2) a dynamic feedback calibration mechanism that compensates for both process and thermal variations of the resonators. Simulation results demonstrate that these techniques can achieve a 48X reduction in tuning power: less than 10 W for a network with 1 million ring resonators.
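A dynamic feedback calibration of this kind can be sketched as a hill-climbing loop that maximizes monitored drop-port power on a simulated Lorentzian ring response. The transfer function, step size, and loop structure here are our simplifications, not the paper's circuit:

```python
def lorentzian(detune, fwhm=0.1):
    # idealized drop-port transmission of a ring resonator vs. detuning
    return 1.0 / (1.0 + (2.0 * detune / fwhm) ** 2)

def calibrate(drift, step=0.005, iters=400):
    """Greedy feedback loop: nudge the tuning bias in whichever direction
    increases the monitored drop-port power, compensating an unknown
    process/thermal drift without knowing its magnitude."""
    bias = 0.0
    for _ in range(iters):
        here = lorentzian(drift - bias)
        up = lorentzian(drift - (bias + step))
        down = lorentzian(drift - (bias - step))
        if up >= here and up >= down:
            bias += step
        elif down > here:
            bias -= step
    return bias

print(abs(calibrate(0.3) - 0.3) < 0.01)  # → True: converges near the drift
```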


High-Level Design Validation and Test | 2009

An instrumented observability coverage method for system validation

Peter Lisherness; Kwang-Ting Cheng

To improve the effectiveness and efficiency of post-silicon validation, we present a fault-symbol tracking method and a coverage metric that account for the limited observability in silicon and are thus useful for guiding validation test selection, test development, and design for debug. The coverage points targeted in this study are a set of fault-symbols, or ‘tags’, generated from each expression in a system model. Coverage is measured in simulation by tracking tags alongside dynamic information flows to user-defined or implicit observation points. The metric is computed on high-level (C/C++) functional and behavioral models through compiler-inserted parallel fault-symbol tracking instrumentation, which offers high efficiency as well as compatibility with existing simulation flows. The coverage results from our initial implementation, for a microcontroller instruction set simulator, are compared with statement and mutation coverage. The results show that the new metric is more accurate than statement coverage and can be computed in significantly shorter runtimes than mutation coverage.
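The tag-tracking idea can be sketched in a few lines: each value carries the set of fault-symbol tags that influenced it, and only tags reaching an observation point count as covered. The paper instruments C/C++ models via a compiler pass; this toy Python version only illustrates the propagation semantics:

```python
class Tagged:
    """A value carrying the set of fault-symbol tags that influenced it."""
    def __init__(self, value, tags=frozenset()):
        self.value, self.tags = value, frozenset(tags)
    def __add__(self, other):
        return Tagged(self.value + other.value, self.tags | other.tags)
    def __mul__(self, other):
        return Tagged(self.value * other.value, self.tags | other.tags)

all_tags = set()
def tag(value, name):
    # instrumentation injects a fresh tag at each expression
    all_tags.add(name)
    return Tagged(value, {name})

observed = set()
def observe(t):
    # user-defined observation point: tags reaching here count as covered
    observed.update(t.tags)
    return t.value

# a and b feed the output; c is computed but never observed
a, b, c = tag(2, "t_a"), tag(3, "t_b"), tag(5, "t_c")
_ = c * c                      # dead computation: t_c never propagates out
observe(a + b)                 # t_a and t_b reach the observation point
coverage = len(observed) / len(all_tags)
print(round(coverage, 2))      # → 0.67 (2 of 3 tags observed)
```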


Design, Automation and Test in Europe | 2013

Mutation analysis with coverage discounting

Peter Lisherness; Nicole Lesperance; Kwang-Ting Cheng

Mutation testing is an established technique for evaluating validation thoroughness, but its adoption has been limited by the manual effort required to analyze the results. This paper describes the use of coverage discounting for mutation analysis, in which undetected mutants are explained in terms of functional coverpoints, simplifying their analysis and saving effort. Results on two benchmarks compare this improved flow against regular mutation analysis. We also propose a confidence metric and a simulation ordering algorithm optimized for coverage discounting, potentially reducing overall simulation time.


High-Level Design Validation and Test | 2011

Coverage discounting: A generalized approach for testbench qualification

Peter Lisherness; Kwang-Ting Cheng

In simulation-based validation, the detection of design errors requires both stimulus capable of activating the errors and checkers capable of detecting the behavior as erroneous. Validation coverage metrics tend to address only the sufficiency of a testbench's stimulus component, whereas fault insertion techniques focus on the testbench's checker component. In this paper we introduce “coverage discounting”, an analytical technique that combines the benefits of each approach, overcomes their respective shortcomings, and provides significantly more information than performing both tasks separately. The proposed approach can be used with any functional coverage metric (including, and ideally, user-defined covergroups and bins) and a variety of fault models and insertion mechanisms. We present an experimental case study in which the proposed approach is used to evaluate functional and pseudo-functional tests for a microprocessor. Simulation efficiency is improved through the use of an instruction set simulator, which has been instrumented to record functional coverage information as well as to insert faults according to an ad hoc fault model. The results demonstrate the benefits of coverage discounting: it correctly distinguishes high- and low-quality tests with similar coverage scores and exposes checker insufficiencies.
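A minimal sketch of the discounting rule as we read it: a coverpoint loses credit when an undetected fault changes its hit count, meaning the functionality was exercised and visibly perturbed, yet no checker noticed. The data structures and coverpoint names below are illustrative assumptions:

```python
def discounted_coverage(baseline_hits, mutant_hits, detected):
    """baseline_hits: {coverpoint: hit count} from the fault-free run.
       mutant_hits[m]: the same map with fault/mutant m inserted.
       detected[m]:    whether any checker flagged mutant m.
    A coverpoint is discounted when an undetected mutant changed its hit
    count: the coverage credit was earned without checker evidence."""
    covered = {p for p, n in baseline_hits.items() if n > 0}
    discounted = set()
    for m, hits in mutant_hits.items():
        if detected[m]:
            continue
        discounted |= {p for p in covered if hits.get(p, 0) != baseline_hits[p]}
    return covered - discounted

baseline = {"cp_add": 3, "cp_branch": 2, "cp_load": 0}
mutant_hits = {
    "m1": {"cp_add": 1, "cp_branch": 2, "cp_load": 0},  # perturbed cp_add, never flagged
    "m2": {"cp_add": 3, "cp_branch": 0, "cp_load": 0},  # perturbed cp_branch, caught
}
detected = {"m1": False, "m2": True}
print(discounted_coverage(baseline, mutant_hits, detected))  # → {'cp_branch'}
```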


Asia and South Pacific Design Automation Conference | 2012

Improving validation coverage metrics to account for limited observability

Peter Lisherness; Kwang-Ting Cheng

In both pre-silicon and post-silicon validation, the detection of design errors requires both stimulus capable of activating the errors and checkers capable of detecting the behavior as erroneous. Most functional and code coverage metrics evaluate only the activation component of the testbench and ignore propagation and detection. In this paper, we summarize our recent work on improved metrics that account for the propagation and/or detection of design errors. This work includes tools for observability-enhanced code coverage and mutation analysis of high-level designs, as well as an analytical method, coverage discounting, which adds checker sensitivity to arbitrary functional coverage metrics.


International Test Conference | 2012

Adaptive test selection for post-silicon timing validation: A data mining approach

Ming Gao; Peter Lisherness; Kwang-Ting Cheng

Test failure data produced during post-silicon validation contain accurate design- and process-specific information about the DUD (design under debug). Prior research efforts and industry practice have focused on feeding this information back to the design flow via bug root-cause analysis; however, the value of this silicon data for further improving the post-silicon validation process itself has been largely overlooked. In this paper, we propose an adaptive test selection method that progressively tunes the validation plan using knowledge automatically mined from bug sightings during post-silicon validation. Experimental results demonstrate that the proposed fault-model-free data mining approach can prioritize the tests capable of uncovering more silicon timing errors, resulting in a significant reduction of validation time and effort.
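The adaptive selection loop can be sketched generically: after each bug sighting, untried tests are re-ranked by similarity to the tests that exposed bugs. The feature vectors and cosine-similarity scoring below are our stand-ins, not the authors' mined features:

```python
import math

def cosine(u, v):
    # cosine similarity between two feature vectors
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def rerank(untried, features, bug_revealing):
    """Reorder the untried tests so those most similar in feature space
    to tests that already exposed bugs run first."""
    if not bug_revealing:
        return list(untried)
    def score(t):
        return max(cosine(features[t], features[f]) for f in bug_revealing)
    return sorted(untried, key=score, reverse=True)

# toy per-test feature vectors (e.g. instruction-mix counts; made up here)
features = {
    "t1": (4, 0, 1),   # exposed a timing bug
    "t2": (0, 5, 0),
    "t3": (3, 1, 1),   # similar mix to t1, so it gets promoted
}
print(rerank(["t2", "t3"], features, bug_revealing=["t1"]))  # → ['t3', 't2']
```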


Design Automation Conference | 2010

SCEMIT: A SystemC Error and Mutation Injection Tool

Peter Lisherness; Kwang-Ting Cheng

Collaboration


Peter Lisherness's top co-authors:

Kwang-Ting Cheng
Hong Kong University of Science and Technology

Ming Gao
University of California

Jock Bovington
University of California

Sung-Jui Pan
University of California