
Publication


Featured research published by Kenneth M. Zick.


Field Programmable Gate Arrays | 2010

On-line sensing for healthier FPGA systems

Kenneth M. Zick; John P. Hayes

Electronic systems increasingly suffer from component variation, thermal hotspots, uneven wearout, and other subtle physical phenomena. Systems based on FPGAs have unique opportunities for adapting to such effects. Required, however, is a low-cost, fine-grained method for sensing physical parameters. This paper presents an approach to on-line sensing that includes a compact multi-use sensor implemented in reconfigurable logic, methods for instrumenting an application, and enhanced measurement procedures. The sensor utilizes a highly efficient counter and improved ring oscillator, and requires just 8 LUTs. We describe how to measure variations in delay, static power, dynamic power, and temperature. We demonstrate the proposed approach with an experimental system based on a Virtex-5. The system is instrumented with over 100 sensors with a total overhead of only 1.3%. Results from thermally controlled experiments provide some surprising insights and illustrate the power of the approach. On-line sensing can help open the door to physically adaptive computing, including fine-grained power, reliability, and health management schemes for FPGA-based systems.
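
As a rough illustration of the sensing flow described above, the Python sketch below turns raw ring-oscillator counts into frequency and relative-delay estimates. The window length, sensor names, and counts are invented for the example and are not taken from the paper.

    # Illustrative post-processing of ring-oscillator sensor readings.
    # Values and calibration are hypothetical, not the paper's data.

    def ro_frequency_mhz(count, window_us):
        """Convert a raw oscillation count over a known window into MHz."""
        return count / window_us

    def relative_delay(count, reference_count):
        """A slower oscillator (lower count) implies longer logic and routing
        delay at that location; report the spread relative to a reference."""
        return reference_count / count - 1.0

    readings = {"sensor_a": 21150, "sensor_b": 20480}   # counts over a 100 us window
    for name, count in readings.items():
        freq = ro_frequency_mhz(count, 100.0)
        spread = relative_delay(count, readings["sensor_a"])
        print(f"{name}: {freq:.1f} MHz, {spread:+.2%} delay vs sensor_a")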


ACM Transactions on Reconfigurable Technology and Systems | 2012

Low-cost sensing with ring oscillator arrays for healthier reconfigurable systems

Kenneth M. Zick; John P. Hayes

Electronic systems on a chip increasingly suffer from component variation, voltage noise, thermal hotspots, and other subtle physical phenomena. Systems with reconfigurability have unique opportunities for adapting to such effects. Required, however, are low-cost, fine-grained methods for sensing physical parameters. This article presents powerful, novel approaches to online sensing, including methods for designing compact reconfigurable sensors, low-cost threshold detection, and several enhanced measurement procedures. Together, the approaches help enable systems to autonomously uncover a wealth of physical information. A highly efficient counter and improved ring oscillator are introduced, enabling an entire sensor node in just 8 Virtex-5 LUTs. We describe how variations can be measured in delay, temperature, switching-induced IR drop, and leakage-induced IR drop. We demonstrate the proposed approach with an experimental system based on a Virtex-5, instrumented with over 100 sensors at an overhead of only 1.3%. Results from thermally controlled experiments provide some surprising insights and illustrate the utility of the approach. Online sensing can help open the door to physically adaptive computing, including fine-grained power, reliability, and health management schemes for systems on a chip.
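
The article's low-cost threshold detection can be summarized as comparing a sensor count against a calibrated baseline. In the article the detection is done in fabric; the baseline and tolerance below are placeholders chosen purely for illustration.

    # Minimal sketch of threshold detection over sensor counts.
    def exceeds_threshold(count, baseline, tolerance=0.03):
        """Flag a reading that deviates from its calibrated baseline by more
        than the allowed fraction (e.g., due to IR drop or local heating)."""
        return abs(count - baseline) / baseline > tolerance

    baseline_counts = {"node_07": 20480}
    print(exceeds_threshold(19650, baseline_counts["node_07"]))   # True -> raise an alarm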


Scientific Reports | 2015

Experimental quantum annealing: case study involving the graph isomorphism problem

Kenneth M. Zick; Omar Shehab; Matthew French

Quantum annealing is a proposed combinatorial optimization technique meant to exploit quantum mechanical effects such as tunneling and entanglement. Real-world quantum annealing-based solvers require a combination of annealing and classical pre- and post-processing; at this early stage, little is known about how to partition and optimize the processing. This article presents an experimental case study of quantum annealing and some of the factors involved in real-world solvers, using a 504-qubit D-Wave Two machine and the graph isomorphism problem. To illustrate the role of classical pre-processing, a compact Hamiltonian is presented that enables a reduced Ising model for each problem instance. On random N-vertex graphs, the median number of variables is reduced from N² to fewer than N log₂ N, and solvable graph sizes increase from N = 5 to N = 13. Additionally, error correction via classical post-processing majority voting is evaluated. While the solution times are not competitive with classical approaches to graph isomorphism, the enhanced solver ultimately correctly classified every problem that was mapped to the processor and demonstrated clear advantages over the baseline approach. The results shed some light on the nature of real-world quantum annealing and the associated hybrid classical-quantum solvers.
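
The classical post-processing step can be illustrated with a toy majority vote over repeated annealer reads. This is a generic sketch of per-variable voting, not the exact error-correction scheme evaluated in the study; the variable names and spin values are made up.

    # Toy per-variable majority vote over repeated annealer reads.
    from collections import Counter

    def majority_vote(reads):
        """Combine many noisy spin assignments into one consensus assignment."""
        consensus = {}
        for var in reads[0]:
            votes = Counter(read[var] for read in reads)
            consensus[var] = votes.most_common(1)[0][0]
        return consensus

    reads = [{"s0": +1, "s1": -1},
             {"s0": +1, "s1": +1},
             {"s0": +1, "s1": -1}]
    print(majority_vote(reads))   # {'s0': 1, 's1': -1}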


Field Programmable Gate Arrays | 2013

Sensing nanosecond-scale voltage attacks and natural transients in FPGAs

Kenneth M. Zick; Meeta Srivastav; Wei Zhang; Matthew French

Voltage noise not only detracts from reliability and performance, but has been used to attack system security. Most systems are completely unaware of fluctuations occurring on nanosecond time scales. This paper quantifies the threat to FPGA-based systems and presents a solution approach. Novel measurements of transients on 28nm FPGAs show that extreme activity in the fabric can cause enormous undershoot and overshoot, more than 10× larger than what is allowed by the specification. An existing voltage sensor is evaluated and shown to be insufficient. Lastly, a sensor design using reconfigurable logic is presented; its time-to-digital converter enables sample rates 500× faster than the 28nm Xilinx ADC. This enables quick characterization of transients that would normally go undetected, thereby providing potentially useful data for system optimization and helping to defend against supply voltage attacks.
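
To give a sense of how a carry-chain time-to-digital converter yields a voltage proxy, here is a small decoding sketch. The tap count, nominal depth, and margin are illustrative assumptions; the actual sensor and its calibration are described in the paper.

    # Decode one TDC sample: the further a signal propagates down the carry
    # chain within a clock period, the higher the instantaneous supply voltage.
    def tdc_depth(thermometer_bits):
        depth = 0
        for bit in thermometer_bits:
            if bit != 1:
                break
            depth += 1
        return depth

    def undershoot_detected(depth, nominal_depth, margin=4):
        """Flag samples whose propagation depth drops well below nominal."""
        return depth < nominal_depth - margin

    sample = [1] * 37 + [0] * 27          # one 64-tap sample during a transient
    depth = tdc_depth(sample)
    print(depth, undershoot_detected(depth, nominal_depth=48))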


High-Level Design Validation and Test | 2008

High-level vulnerability over space and time to insidious soft errors

Kenneth M. Zick; John P. Hayes

The integrity of computational results is being increasingly threatened by soft errors, especially for computations that are large-scale or performed under harsh conditions. Existing methods for soft error estimation do not clearly characterize the vulnerability associated with a particular result. 1) We propose a metric which captures the intrinsic vulnerability over space and time (VST) to soft errors that corrupt computational results. The method of VST estimation bridges the gap between the inherently low-level faults and the high-level computational failures that they eventually cause. 2) We define a model of an insidious soft error and try to clear up confusion around the concept of silent data corruption. 3) We present experimental results from three vulnerability studies involving floating-point addition, CORDIC, and FFT computations. The results show that traditional vulnerability metrics can be confounded by seemingly reliable but inefficient implementations which actually incur high vulnerability per computation. The VST method characterizes vulnerability accurately, provides a figure-of-merit for comparing alternative implementations of an algorithm, and in some cases uncovers pronounced and unexpected fluctuations in vulnerability.
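
The core idea of weighting state by both its lifetime and its chance of corrupting the result can be sketched in a few lines. This simplified estimator is only a stand-in for the paper's VST metric; the bit lifetimes and failure probabilities below are invented.

    # Simplified vulnerability-over-space-and-time estimator: each state bit
    # contributes (time it holds live data) x (probability a flip corrupts the result).
    def vst_estimate(bits):
        """bits: iterable of (live_cycles, p_corrupt_if_flipped) pairs."""
        return sum(cycles * p for cycles, p in bits)

    impl_a = [(100, 0.9)] * 2     # few bits, long-lived and critical
    impl_b = [(10, 0.2)] * 40     # many bits, short-lived and mostly masked
    print(vst_estimate(impl_a), vst_estimate(impl_b))   # 180.0 vs 80.0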


IEEE Aerospace Conference | 2012

Applying Radiation Hardening by Software to Fast Lossless compression prediction on FPGAs

Andrew G. Schmidt; John Paul Walters; Kenneth M. Zick; Matthew French; Didier Keymeulen; Nazeeh Aranki; Matthew Klimesh; Aaron Kiely

As scientists endeavor to learn more about the world's ecosystems, engineers are pushed to develop more sophisticated instruments. With these advancements comes an increase in the amount of data generated. For satellite-based instruments, the additional data requires that sufficient bandwidth be available to transmit it. Alternatively, compression algorithms can be employed to reduce the bandwidth requirements. This work is motivated by the proposed HyspIRI mission, which includes two imaging spectrometers measuring from visible to short wave infrared (VSWIR) and thermal infrared (TIR) that saturate the projected bandwidth allocations. We present a novel investigation into the capability of using FPGAs integrated with embedded PowerPC processors to adequately perform the predictor function of the Fast Lossless (FL) compression algorithm for multispectral and hyperspectral imagery. Furthermore, our design includes a multi-PowerPC implementation which incorporates recently developed Radiation Hardening by Software (RHBSW) techniques to provide software-based fault tolerance to commercial FPGA devices. Our results show low performance overhead (4-8%) while achieving a speedup of 1.97× when utilizing both PowerPCs. Finally, the evaluation of the proposed system includes resource utilization, performance metrics, and an analysis of the vulnerability to single event upsets (SEUs) through the use of a hardware-based fault injector.
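
Prediction is what makes the compression effective: residuals cluster near zero and code compactly. The sketch below uses a trivial previous-sample predictor purely for illustration; the actual FL predictor is adaptive and operates across spectral bands.

    # Trivial predictive-compression front end: keep only the prediction residuals.
    def residuals(samples):
        """Predict each sample as the previous one and emit the differences."""
        prev, out = 0, []
        for s in samples:
            out.append(s - prev)
            prev = s
        return out

    band = [1012, 1015, 1013, 1020, 1019]
    print(residuals(band))    # [1012, 3, -2, 7, -1]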


Dependable Systems and Networks | 2013

A practical characterization of a NASA SpaceCube application through fault emulation and laser testing

John Paul Walters; Kenneth M. Zick; Matthew French

Historically, space-based processing systems have lagged behind their terrestrial counterparts by several processor generations due, in part, to the cost and complexity of implementing radiation-hardened processor designs. Efforts such as NASA's SpaceCube seek to change this paradigm, using higher performance commercial hardware wherever possible. This has the potential to revolutionize onboard data processing, but it cannot happen unless the soft error reliability can be characterized and deemed sufficient. A variety of fault injection techniques are used to evaluate system reliability, most commonly fault emulation, fault simulation, laser testing, and particle beam testing. Combining multiple techniques is more complex and less common. In this study we characterize a real-world application that leverages a radiation-hardening by software (RHBSW) solution for the SpaceCube platform, using two fault injection strategies: laser testing and fault emulation. We describe several valuable lessons learned, and show how both validation techniques can be combined to greater effect.
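
A fault-emulation campaign boils down to flipping one state bit per run and classifying the outcome. The loop below is a bare-bones illustration with a toy workload; real campaigns, like the one in this study, inject into processor registers or configuration memory through hardware hooks.

    # Bare-bones fault-emulation loop with a toy workload.
    import random

    def run_campaign(golden, faulty, state_bits, trials=1000):
        outcomes = {"masked": 0, "sdc": 0, "crash": 0}
        expected = golden()
        for _ in range(trials):
            bit = random.randrange(state_bits)
            try:
                result = faulty(bit)                 # one run, one injected flip
            except Exception:
                outcomes["crash"] += 1
                continue
            outcomes["sdc" if result != expected else "masked"] += 1
        return outcomes

    def golden():
        return max(range(100))

    def faulty(bit):
        data = list(range(100))
        data[bit % 100] ^= 1 << (bit // 100)         # flip one bit of one element
        return max(data)

    print(run_campaign(golden, faulty, state_bits=700))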


IEEE Embedded Systems Letters | 2012

Silent Data Corruption and Embedded Processing With NASA's SpaceCube

Kenneth M. Zick; Chien-Chih Yu; John Paul Walters; Matthew French

Dramatic increases in embedded data processing performance are becoming possible using platforms such as the NASA SpaceCube. With a flexible architecture and commercial devices, selected computations can be tuned for the highest performance while giving up perfect data reliability. More needs to be known about the nature of silent data corruption in this paradigm. When it occurs, how pervasive is it? To what extent can it be mitigated while near-optimal performance is maintained? This paper provides new insights into these questions, via a fault emulation-based study of two disparate applications running on a hard-core embedded processor. Two very low-cost methods of data error detection reduce the worst type of silent data corruption (SDC) by 89-97% with a performance overhead of < 1%.
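
Two generic examples of very low-cost data-error checks are sketched below. They convey the spirit of the letter's approach, but the concrete checks and thresholds used for the SpaceCube applications are not reproduced here, and the sample data is invented.

    # Cheap software checks that catch most harmful silent data corruption.
    def checksum_ok(block, expected_sum):
        """Compare a 32-bit additive checksum against a reference value."""
        return (sum(block) & 0xFFFFFFFF) == expected_sum

    def range_ok(block, lo, hi):
        """Reject values that valid sensor data could never produce."""
        return all(lo <= x <= hi for x in block)

    data = [12, 40, 7, 3000000]              # last value corrupted by a bit flip
    print(checksum_ok(data[:3], 59), range_ok(data, 0, 65535))   # True False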


Field-Programmable Logic and Applications | 2010

Self-Test and Adaptation for Random Variations in Reliability

Kenneth M. Zick; John P. Hayes

Random physical variations and noise are growing challenges for advanced electronic systems. Field programmable systems can, in principle, adapt to these phenomena, but two main problems must be addressed: how to efficiently characterize random variations and how to perform subsequent optimization. This paper addresses both of these questions. First, an approach to self-test is presented that uses on-chip noise emulation to quickly characterize some of the hidden variations in latches. Our noise-injection experiments demonstrate that there can be significant spreads in latch reliability even with current 65nm field-programmable gate arrays (FPGAs). We detected coefficients of variation as high as 77%. Second, we propose an approach to self-optimization using local resource swapping. Experiments on two FPGAs show improvements in mean-time-between-failures (MTBF) of up to 60%.
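
The two steps, characterizing the spread and then swapping resources, can be mimicked with made-up numbers. The upset rates and criticality weights below are invented for illustration; in the paper the rates come from on-chip noise emulation and the optimization is performed on real FPGA resources.

    # Step 1: quantify the spread in per-latch upset rates.
    import statistics

    rates = [2.1, 0.9, 3.4, 1.2, 0.7, 2.8]          # upsets per unit time, per latch
    cov = statistics.pstdev(rates) / statistics.fmean(rates)
    print(f"coefficient of variation: {cov:.0%}")

    # Step 2: greedy swap, mapping the most critical signals to the most
    # reliable latches, then compare against the original placement.
    criticality = [5.0, 1.0, 0.5, 3.0, 2.0, 0.2]
    baseline = sum(r * c for r, c in zip(rates, criticality))
    optimized = sum(r * c for r, c in zip(sorted(rates), sorted(criticality, reverse=True)))
    print(f"failure-rate reduction: {1 - optimized / baseline:.0%}")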


International On-Line Testing Symposium | 2009

On-line characterization and reconfiguration for single event upset variations

Kenneth M. Zick; John P. Hayes

The amount of physical variation among electronic components on a die is increasing rapidly. There is a need for a better understanding of variations in transient fault susceptibility, and for methods of on-line adaptation to such variations. We address three key research questions in this area. First, we investigate accelerated characterization of individual latch susceptibilities. We find that on the order of 10 upsets per latch must be observed for variations to be adequately characterized. Second, we propose a method of on-line hardware reconfiguration using incremental place-and-route on FPGAs. Surprisingly, we find that highly localized place-and-route changes (e.g. restricted to groups of 8 flip-flops) are sufficient for realizing most of the possible benefits. Lastly, we quantify potential improvements in system-level soft error rates via Monte Carlo simulation experiments. The study highlights both what is required for and what can be gained by on-line adaptation.
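
The "on the order of 10 upsets per latch" finding has a simple counting-statistics intuition: the relative uncertainty of an estimated upset rate falls roughly as one over the square root of the number of observed upsets. The quick simulation below illustrates that scaling; it is a back-of-the-envelope check, not the methodology used in the paper.

    # Relative error of a per-latch upset-rate estimate vs. upsets observed.
    import random

    def typical_relative_error(expected_upsets, trials=2000, p=0.05):
        """Simulate a counting experiment whose expected count is expected_upsets
        and report the average relative error of the resulting rate estimate."""
        exposures = int(expected_upsets / p)
        errors = []
        for _ in range(trials):
            count = sum(1 for _ in range(exposures) if random.random() < p)
            errors.append(abs(count - expected_upsets) / expected_upsets)
        return sum(errors) / trials

    for n in (1, 10, 100):
        print(n, f"{typical_relative_error(n):.0%}")   # roughly 1/sqrt(n) scaling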

Collaboration


Dive into Kenneth M. Zick's collaborations.

Top Co-Authors

Matthew French, University of Southern California
John Paul Walters, University of Southern California
Aaron Kiely, California Institute of Technology
Andrew G. Schmidt, University of North Carolina at Charlotte
Didier Keymeulen, California Institute of Technology
Matthew Klimesh, California Institute of Technology