Timothy J. Dysart
University of Notre Dame
Publications
Featured research published by Timothy J. Dysart.
IEEE Transactions on Very Large Scale Integration (VLSI) Systems | 2009
Timothy J. Dysart; Peter M. Kogge
As computing technology delves deeper into the nanoscale regime, reliability is becoming a significant concern, and in response, Teramac-like systems will be the model for many early non-CMOS nanosystems. Engineering systems of this type requires understanding the inherent reliability of both the functional cells and the interconnect used to build the system, and which components are most critical. One particular nanodevice, quantum-dot cellular automata (QCA), offers unique challenges in understanding the reliability of its basic circuits since the device used for logic is also used for interconnect. In this paper, we analyze the reliability properties of two classes of QCA devices: molecular electrostatic-based and magnetic-domain-based. We use an analytic model, probabilistic transfer matrices (PTMs), to compute the inherent reliability of various nontrivial circuits. Additionally, linear regression is used to determine which components are most critical and to estimate the reliability gains that may be achieved by improving the reliability of just a critical component. The results show the critical importance of different structures, especially interconnect, as used by the two classes of QCA.
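The PTM idea referenced above can be illustrated with a minimal sketch: each component is represented by a small matrix mapping input correctness probabilities to output correctness probabilities, and serial composition is matrix multiplication. The wire model, error rate, and segment count below are illustrative assumptions, not values or tooling from the paper.

```python
import numpy as np

def wire_ptm(e):
    """PTM of a single wire segment that flips its value with probability e."""
    return np.array([[1 - e, e],
                     [e, 1 - e]])

def chain(*ptms):
    """Compose serially connected components: the chain's PTM is the product."""
    out = np.eye(ptms[0].shape[0])
    for p in ptms:
        out = p @ out
    return out

# Probability that a logic 0 presented to a 10-segment wire is still a 0 at the
# output when each segment errs with probability 1e-3.
long_wire = chain(*[wire_ptm(1e-3)] * 10)
print(long_wire[0, 0])   # ~0.990: errors compound along the interconnect
```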
Defect and Fault Tolerance in VLSI and Nanotechnology Systems | 2007
Timothy J. Dysart; Peter M. Kogge
Since nanoelectronic devices are likely to be defective and error-prone, developing an understanding of circuit reliabilities and critical components will be required. To this end, this paper examines reliability considerations of several sample circuits when implemented in a molecular QCA technology. Probabilistic transfer matrices are used to analyze an XOR, crossover, adder, and an adder using triple modular redundancy. This provides insight into how reliable emerging circuit components must be for the overall circuit to be reliable, and which of these components are the most critical. As will be shown, component error rates must be at or below 10^-4 for an adder to function with 99% reliability, and the straight wire and majority gate are the most critical components to each circuit's reliability. It is also shown that the common assumption made in triple modular redundancy theory that only gates fail is insufficient for QCA.
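As a rough back-of-the-envelope illustration of why a threshold near 10^-4 emerges (this is not the paper's PTM analysis), assume an adder needs on the order of a hundred independent components to all work; the component count below is purely an assumption for illustration.

```python
# If an adder is assumed to need roughly N independent components to all work,
# reliability ~ (1 - e)**N, so 99% circuit reliability requires
# e <= 1 - 0.99**(1/N).
n_components = 100                      # assumed component count, illustration only
e_max = 1 - 0.99 ** (1 / n_components)
print(f"max per-component error rate: {e_max:.1e}")   # ~1.0e-04
```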
International Conference on Nanotechnology | 2004
Sarah E. Frost; Timothy J. Dysart; Peter M. Kogge; Craig S. Lent
Quantum-dot cellular automata (QCA) is a computing model that has shown great promise for efficient molecular computing. The QCA clock signal consists of an electric field being raised and lowered. The wires needed to generate the clocking field have been thought to be the limiting factor in the density of QCA circuits. This paper explores the feasibility of using single walled carbon nanotubes (SWNTs) to implement the clocking fields, effectively removing the clocking wire barrier to greater circuit densities.
IEEE Transactions on Nanotechnology | 2013
Timothy J. Dysart
This paper presents a yield analysis of molecular-scale electrostatic QCA wires in the presence of a variety of manufacturing defects. Within this analysis, we compare wires of varying lengths and widths, as wider wires are frequently projected to be more tolerant of manufacturing defects. Additionally, we compare the simulation results of long wires to the yield rates predicted via probabilistic transfer matrix (PTM) modeling. This comparison demonstrates that PTM modeling is best used when the short wire segments used to estimate the yields of long wires have high yields.
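A minimal sketch of the prediction being tested: if short segments are assumed to fail independently, the yield of a long wire is simply the product of its segment yields, which collapses quickly unless the segment yields are high. The segment yields and counts below are illustrative, not the paper's measured values.

```python
def predicted_long_wire_yield(segment_yield, n_segments):
    """PTM-style prediction: treat short segments as independent, so the
    long-wire yield is the product of the segment yields."""
    return segment_yield ** n_segments

# The prediction degrades quickly unless the short segments themselves
# have high yields (illustrative numbers only).
for y_seg in (0.999, 0.99, 0.95):
    print(y_seg, round(predicted_long_wire_yield(y_seg, 50), 3))
```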
IEEE Transactions on Nanotechnology | 2011
Timothy J. Dysart; Peter M. Kogge
Nanoelectronic systems are extremely likely to demonstrate high defect and fault rates. As a result, defect and/or fault tolerance may be necessary at several levels throughout the system. Methods for improving defect tolerance at the component level of quantum-dot cellular automata (QCA), in order to prevent faults, have been studied. However, methods and results considering fault tolerance in QCA have received less attention. In this paper, we present an analysis of how QCA system reliability may be impacted by using various N-modular redundancy (NMR) schemes. Our results demonstrate that using NMR in QCA can improve reliability in some cases, but can harm reliability in others.
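The trade-off can be sketched with the standard NMR majority-vote formula; the module and voter reliabilities below are illustrative assumptions rather than the paper's QCA-specific results.

```python
from math import comb

def nmr_reliability(r_module, n, r_voter=1.0):
    """Probability that a majority of the n modules (n odd) are correct,
    scaled by the reliability of the voter itself."""
    majority = (n // 2) + 1
    ok = sum(comb(n, k) * r_module**k * (1 - r_module)**(n - k)
             for k in range(majority, n + 1))
    return r_voter * ok

# NMR helps when the modules are already fairly reliable...
print(nmr_reliability(0.95, 3))               # ~0.993, better than 0.95
# ...but hurts when they are not, or when the voter is itself faulty.
print(nmr_reliability(0.70, 3))               # ~0.784, worse than no help expected
print(nmr_reliability(0.95, 3, r_voter=0.9))  # ~0.894, worse than a single module
```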
International Conference on Nanotechnology | 2003
Timothy J. Dysart; Peter M. Kogge
Quantum-dot cellular automata (QCA) has been proposed as a replacement for CMOS circuits. The major difference between QCA and CMOS is that electronic charge, not current, is the information carrier. A complete set of logic gates has been created and some have been experimentally tested with metal dots acting as quantum dots. Molecular implementations are currently being examined. This work examines the possible defects that may occur in the fabrication of both types of QCA systems. Fault models for these defects are developed, and a prototype tool with a strategy for fault modeling is outlined.
Defect and Fault Tolerance in VLSI and Nanotechnology Systems | 2008
Timothy J. Dysart; Peter M. Kogge
Nanoelectronic systems are extremely likely to demonstrate high defect and fault rates. As a result, defect and/or fault tolerance may be necessary at several levels throughout the system. Methods for improving defect tolerance at the component level of QCA, in order to prevent faults, have been studied. However, methods and results considering fault tolerance in QCA have received less attention. In this paper, we present an analysis of how QCA system reliability may be impacted by using various triple modular redundancy schemes.
IEEE International Workshop on Design and Test of Nano Devices, Circuits and Systems | 2008
Timothy J. Dysart; Peter M. Kogge
Assuming that nanoelectronic systems will face a large number of defective devices resulting in numerous computational faults, defect and/or fault tolerance will be necessary in these systems. However, multiple methods exist for providing this tolerance; in particular, many nanoelectronic system proposals consider the use of reprogrammable logic or circuit redundancy. One decision point that may aid in determining which approach should be favored is whether the circuits to be implemented are more reliable when they are implemented in custom logic or in a regular logic array such as a PLA. In this paper, we consider the simple case of an adder circuit implemented in both logic types. We consider the use of QCA devices since both regular and custom logic designs are possible with QCA. We show that, when the components used to build each circuit are faulty, custom logic is preferable to regular logic since the component requirements are significantly smaller for custom logic.
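A minimal sketch of the underlying argument, under the assumption that a circuit works only if every component does: with the same per-component reliability, the design requiring fewer components wins. The reliability and component counts below are illustrative assumptions, not the paper's figures.

```python
def circuit_reliability(component_reliability, n_components):
    """Crude model: the circuit works only if every component works."""
    return component_reliability ** n_components

r = 0.9999                       # assumed per-component reliability
n_custom, n_pla = 150, 600       # illustrative component counts only
print(round(circuit_reliability(r, n_custom), 3))  # ~0.985 for the custom design
print(round(circuit_reliability(r, n_pla), 3))     # ~0.942 for the PLA-style design
```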
Irregular Applications: Architectures and Algorithms | 2016
Timothy J. Dysart; Peter M. Kogge; Martin M. Deneroff; Eric Bovell; Preston Briggs; Jay B. Brockman; Kenneth Jacobsen; Yujen Juan; Shannon K. Kuntz; Richard Lethin; Janice O. McMahon; Chandra Pawar; Martin Perrigo; Sarah Rucker; John Ruttenberg; Max Ruttenberg; Steve Stein
There is growing evidence that current architectures do not handle cache-unfriendly applications such as sparse math operations, data analytics, and graph algorithms well. This is due, in part, to the irregular memory access patterns demonstrated by these applications and to how remote memory accesses are handled. This paper introduces a new, highly scalable PGAS memory-centric system architecture in which migrating threads travel to the data they access. Scaling both memory capacities and the number of cores can be largely invisible to the programmer. The first implementation of this architecture, built with FPGAs, is discussed in detail. A comparison of key parameters with a variety of today's systems of differing architectures indicates the potential advantages. Early projections of performance against several well-documented kernels translate these advantages into comparative numbers. Future implementations of this architecture may expand the performance advantages through the application of current state-of-the-art silicon technology.
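A toy cost model, under assumed latencies, of why migrating a thread to its data can beat issuing conventional remote accesses once several accesses target the same remote node; the numbers are illustrative assumptions, not measurements of the FPGA implementation.

```python
def remote_access_cost(n_accesses, round_trip_ns=1000):
    """Conventional model: every access to remote memory pays a full round trip."""
    return n_accesses * round_trip_ns

def migrating_thread_cost(n_accesses, migrate_ns=1500, local_ns=100):
    """Migrating-thread model: pay one migration, then access the data locally."""
    return migrate_ns + n_accesses * local_ns

# For pointer-chasing style workloads with several accesses to the same remote
# node, the one-time migration amortizes quickly (all latencies are assumed).
for n in (1, 2, 5, 20):
    print(n, remote_access_cost(n), migrating_thread_cost(n))
```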
International Symposium on Performance Analysis of Systems and Software | 2004
Timothy J. Dysart; Branden J. Moore; Lambert Schaelicke; Peter M. Kogge
One of the major design decisions when developing a new microprocessor is determining the target pipeline depth and clock rate since both factors interact closely with one another. The optimal pipeline depth of a processor has been studied before, but the impact of the memory system on pipeline performance has received less attention. This study analyzes the effect of different level-1 cache designs across a range of pipeline depths to determine what role the memory system design plays in choosing a clock rate and pipeline depth for a microprocessor. The pipeline depths studied here range from those found in current processors to those predicted for future processors. For each pipeline depth, a variety of level-1 cache sizes are simulated to explore the relationship between clock rate, pipeline depth, cache size, and access latency. Results show that the larger caches afforded by shorter pipelines with slower clocks outperform longer pipelines with smaller caches and higher clock rates.
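The trade-off can be sketched with a simple per-instruction time model: a deeper pipeline shortens the cycle but typically forces a smaller level-1 cache with a higher miss rate. The cycle times, base CPIs, miss rates, and penalties below are illustrative assumptions, not the study's simulation results.

```python
def time_per_instruction_ns(cycle_ns, base_cpi, miss_rate, miss_penalty_ns):
    """Average time per instruction: core execution time plus memory stalls.
    A deeper pipeline shrinks cycle_ns but usually raises base_cpi and, by
    forcing a smaller L1, raises miss_rate as well."""
    return cycle_ns * base_cpi + miss_rate * miss_penalty_ns

# Illustrative design points: shallow pipeline / large L1 vs deep pipeline / small L1.
shallow = time_per_instruction_ns(cycle_ns=0.8, base_cpi=1.2,
                                  miss_rate=0.02, miss_penalty_ns=60)
deep    = time_per_instruction_ns(cycle_ns=0.4, base_cpi=1.5,
                                  miss_rate=0.06, miss_penalty_ns=60)
print(shallow, deep)   # 2.16 vs 4.2: the slower clock with the larger cache wins
```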