
Publications


Featured research published by Cory E. Merkel.


IEEE Transactions on Computers | 2013

Memristor-Based Neural Logic Blocks for Nonlinearly Separable Functions

Michael Soltiz; Dhireesha Kudithipudi; Cory E. Merkel; Garrett S. Rose; Robinson E. Pino

Neural logic blocks (NLBs) enable the realization of biologically inspired reconfigurable hardware. Networks of NLBs can be trained to perform complex computations such as multilevel Boolean logic and optical character recognition (OCR) in an area- and energy-efficient manner. Recently, several groups have proposed perceptron-based NLB designs with thin-film memristor synapses. These designs are implemented using a static threshold activation function, limiting the set of learnable functions to those that are linearly separable. In this work, we propose two NLB designs, the robust adaptive NLB (RANLB) and the multithreshold NLB (MTNLB), which overcome this limitation by allowing the effective activation function to be adapted during the training process. Consequently, both designs enable any logic function to be implemented in a single-layer NLB network. The proposed NLBs are designed, simulated, and trained to implement ISCAS-85 benchmark circuits, as well as OCR. The MTNLB achieves a 90 percent improvement in energy-delay product (EDP) over lookup table (LUT)-based implementations of the ISCAS-85 benchmarks and up to a 99 percent improvement over a previous NLB implementation. As a compromise, the RANLB provides a smaller EDP improvement, but has an average training time of only ≈ 4 cycles for 4-input logic functions, compared to the MTNLB's ≈ 8-cycle average training time.
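
The adaptive-activation idea can be made concrete in software. Below is a minimal Python sketch (not the RANLB/MTNLB circuit model; the parameter names and the tiny grid search are illustrative assumptions) showing that a single unit with a two-threshold "window" activation can represent XOR, which a static single-threshold perceptron cannot.

    # Minimal sketch: an adaptive, two-threshold activation lets one unit learn XOR.
    # Assumption: this mirrors only the idea of a multi-threshold activation,
    # not the RANLB/MTNLB circuits themselves.
    import itertools

    X = [(0, 0), (0, 1), (1, 0), (1, 1)]
    Y = [0, 1, 1, 0]                       # XOR: not linearly separable

    def mt_activation(net, t_low, t_high):
        # "Window" activation with two thresholds instead of one.
        return 1 if t_low <= net <= t_high else 0

    def accuracy(w1, w2, t_low, t_high):
        preds = [mt_activation(w1 * a + w2 * b, t_low, t_high) for a, b in X]
        return sum(p == y for p, y in zip(preds, Y))

    # A tiny search over weights and thresholds stands in for on-line training.
    grid = [x / 2 for x in range(-4, 5)]   # -2.0 .. 2.0 in 0.5 steps
    best = max(itertools.product(grid, repeat=4), key=lambda p: accuracy(*p))
    print("params:", best, "correct:", accuracy(*best), "of", len(X))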


International Symposium on Neural Networks | 2011

Reconfigurable N-level memristor memory design

Cory E. Merkel; Nakul Nagpal; Sindhura Mandalapu; Dhireesha Kudithipudi

Memristive devices have gained significant research attention lately because of their unique properties and wide application spectrum. In particular, memristor-based resistive random access memory (RRAM) offers the high density, low power, and non-volatility required for next-generation non-volatile memory. The ability to program memristive devices into several different resistance states has also led to the proposal of multilevel RRAM. This work analyzes the application of thin-film memristors as N-level RRAM elements. The tradeoff between the number of memory levels and each RRAM element's reliability is discussed. A metric is proposed to rate each RRAM element in the presence of process variations. A memory architecture is also presented which allows the number of memory levels to be reconfigured based on different application characteristics. The proposed architecture can achieve a write-time speedup of 5.9× over other memristor memory architectures with 80% ion mobility degradation.
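
A rough way to see the levels-versus-reliability tradeoff is to space N target resistances across the device's resistance range and compare the level separation against an assumed process-variation spread. The Python sketch below uses made-up device values (R_ON, R_OFF, SIGMA) and a crude 3-sigma margin; it is not the metric defined in the paper.

    import numpy as np

    R_ON, R_OFF = 1e3, 100e3        # assumed device resistance bounds (ohms)
    SIGMA = 0.05                    # assumed relative spread from process variation

    def level_targets(n_levels):
        # Evenly spaced target resistances between R_ON and R_OFF.
        return np.linspace(R_ON, R_OFF, n_levels)

    def worst_case_margin(n_levels):
        # Separation between adjacent levels minus a 3-sigma spread on each side;
        # a crude stand-in for a per-element reliability metric.
        targets = level_targets(n_levels)
        spacing = np.min(np.diff(targets))
        spread = 3 * SIGMA * np.max(targets)
        return spacing - 2 * spread

    for n in (2, 4, 8, 16):
        print(f"{n:2d} levels -> worst-case margin {worst_case_margin(n):10.1f} ohm")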


Frontiers in Neuroscience | 2016

Design and Analysis of a Neuromemristive Reservoir Computing Architecture for Biosignal Processing

Dhireesha Kudithipudi; Qutaiba Saleh; Cory E. Merkel; James Thesing; Bryant T. Wysocki

Reservoir computing (RC) is gaining traction in several signal processing domains, owing to its non-linear stateful computation, spatiotemporal encoding, and reduced training complexity over recurrent neural networks (RNNs). Previous studies have shown the effectiveness of software-based RCs for a wide spectrum of applications. A parallel body of work indicates that realizing RNN architectures using custom integrated circuits and reconfigurable hardware platforms yields significant improvements in power and latency. In this research, we propose a neuromemristive RC architecture, with a doubly twisted toroidal structure, that is validated for biosignal processing applications. We exploit device mismatch to implement the random weight distributions within the reservoir and propose mixed-signal subthreshold circuits for energy efficiency. A comprehensive analysis is performed to compare the efficiency of the neuromemristive RC architecture in both digital (reconfigurable) and subthreshold mixed-signal realizations. Both electroencephalogram (EEG) and electromyogram (EMG) biosignal benchmarks are used for validating the RC designs. The proposed RC architecture demonstrated accuracies of 90% and 84% for epileptic seizure detection and EMG prosthetic finger control, respectively.
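
The division of labor in reservoir computing (fixed random recurrent weights, trained linear readout) can be illustrated with a generic software echo-state-style reservoir. The sketch below is not the neuromemristive architecture; the weight ranges, reservoir size, and toy delay task are assumptions chosen only to show why training reduces to fitting the readout.

    import numpy as np

    rng = np.random.default_rng(0)
    N_IN, N_RES, T = 1, 100, 500

    # Random, fixed reservoir weights (device mismatch plays this role on chip).
    W_in = rng.uniform(-0.5, 0.5, (N_RES, N_IN))
    W_res = rng.uniform(-0.5, 0.5, (N_RES, N_RES))
    W_res *= 0.9 / np.max(np.abs(np.linalg.eigvals(W_res)))   # echo-state scaling

    u = rng.uniform(-1, 1, (T, N_IN))          # toy input signal
    y = np.roll(u[:, 0], 3)                    # toy target: input delayed by 3 steps

    x = np.zeros(N_RES)
    states = np.zeros((T, N_RES))
    for t in range(T):
        x = np.tanh(W_in @ u[t] + W_res @ x)   # non-linear stateful update
        states[t] = x

    # Only the linear readout is trained (ridge regression).
    ridge = 1e-4
    W_out = np.linalg.solve(states.T @ states + ridge * np.eye(N_RES), states.T @ y)
    pred = states @ W_out
    print("readout MSE:", float(np.mean((pred[50:] - y[50:]) ** 2)))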


Great Lakes Symposium on VLSI | 2014

A current-mode CMOS/memristor hybrid implementation of an extreme learning machine

Cory E. Merkel; Dhireesha Kudithipudi

In this work, we propose a current-mode CMOS/memristor hybrid implementation of an extreme learning machine (ELM) architecture. We present novel circuit designs for linear, sigmoid, and threshold neuronal activation functions, as well as memristor-based bipolar synaptic weighting. In addition, this work proposes a stochastic version of the least-mean-squares (LMS) training algorithm for adapting the weights between the ELM's hidden and output layers. We simulated our top-level ELM architecture using Cadence AMS Designer with 45 nm CMOS models and an empirical piecewise linear memristor model based on experimental data from an HfOx device. With 10 hidden-layer neurons, the ELM was able to learn a 2-input XOR function after 150 training epochs.
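
The ELM structure (random, untrained hidden layer; trained output layer) is easy to sketch in software. The example below reuses the abstract's 10-hidden-neuron XOR setting, but substitutes the textbook closed-form least-squares readout for the paper's stochastic LMS rule purely to keep the sketch short; the hidden-weight ranges are arbitrary assumptions.

    import numpy as np

    rng = np.random.default_rng(1)

    # 2-input XOR, the validation example from the abstract.
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([0.0, 1.0, 1.0, 0.0])

    N_HIDDEN = 10                               # as in the abstract
    W_h = rng.uniform(-1, 1, (2, N_HIDDEN))     # random, never-trained hidden weights
    b_h = rng.uniform(-1, 1, N_HIDDEN)
    H = 1.0 / (1.0 + np.exp(-(X @ W_h + b_h)))  # sigmoid hidden-layer responses

    # Only the output layer is fitted.  The paper trains it with a stochastic LMS
    # rule; the closed-form least-squares readout below is the textbook ELM
    # solution and is used here only for brevity.
    w_out = np.linalg.lstsq(H, y, rcond=None)[0]

    pred = (H @ w_out > 0.5).astype(int)
    print("XOR predictions:", pred.tolist())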


International Symposium on Circuits and Systems | 2016

A design of HTM spatial pooler for face recognition using memristor-CMOS hybrid circuits

Timur Ibrayev; Alex Pappachen James; Cory E. Merkel; Dhireesha Kudithipudi

Hierarchical Temporal Memory (HTM) is a machine learning algorithm inspired by the working principles of the neocortex, capable of learning, inference, and prediction for bit-encoded inputs. The spatial pooler is an integral part of HTM that is capable of learning and classifying visual data such as objects in images. In this paper, we propose a memristor-CMOS circuit design of the spatial pooler and exploit the memristor's capability to emulate synapses, where the strength of each weight is represented by the state of the memristor. The proposed design is validated on the challenging single-image-per-person face recognition problem using the AR database, achieving a recognition accuracy of 80%.
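
A generic HTM spatial pooler step (overlap computation, k-winner selection, Hebbian-style permanence update) can be sketched as follows. The permanence matrix plays the role the memristor states play on chip; the column counts, thresholds, and learning increments are illustrative assumptions, not values from the paper.

    import numpy as np

    rng = np.random.default_rng(2)

    N_IN, N_COL, K_ACTIVE = 64, 32, 4       # input bits, columns, winners per step
    CONNECT_THR, P_INC, P_DEC = 0.5, 0.05, 0.02

    # Each column keeps a permanence per input bit (played by memristor state on chip).
    perm = rng.uniform(0.3, 0.7, (N_COL, N_IN))

    def spatial_pooler_step(sdr):
        connected = (perm >= CONNECT_THR).astype(int)   # synapse "formed" above threshold
        overlap = connected @ sdr                       # how well each column matches the input
        winners = np.argsort(overlap)[-K_ACTIVE:]       # k columns with the highest overlap
        # Hebbian-style learning: strengthen synapses on active inputs, weaken the rest.
        perm[winners] += np.where(sdr > 0, P_INC, -P_DEC)
        np.clip(perm, 0.0, 1.0, out=perm)
        return winners

    sdr = (rng.random(N_IN) < 0.1).astype(int)          # sparse binary input
    for _ in range(20):
        active = spatial_pooler_step(sdr)
    print("active columns:", sorted(active.tolist()))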


International Conference on VLSI Design | 2012

Towards Thermal Profiling in CMOS/Memristor Hybrid RRAM Architectures

Cory E. Merkel; Dhireesha Kudithipudi

In this paper, we propose a hybrid temperature-sensing resistive random access memory (TSRRAM) architecture composed of traditional CMOS components and emerging memristive switching devices. The architecture enables each RRAM switching element to be used both as a memory bit and as a temperature sensor. The TSRRAM is integrated into an Alpha 21364 processor as an L2 cache, and its accuracy and performance were simulated using a customized simulation framework. SPEC2000 benchmarks were used to generate thermal profiles in the Alpha processor core. Active and passive sensing mechanisms are introduced as means for dynamic thermal management (DTM) algorithms to determine the thermal profile of the RRAM switching layer. The proposed architecture yielded a 2.14 K mean absolute temperature error during passive sensing, which is well within the useful range of DTM algorithms. Furthermore, the proposed design is shown to have only an 8-cycle performance overhead.
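
The passive-sensing idea, reading a cell's resistance and mapping it back to temperature through a calibration curve, can be illustrated with an assumed linear resistance-temperature model. All constants below are made up; the paper's device model and its 2.14 K result are not reproduced here.

    import numpy as np

    # Assumed (illustrative) linear temperature dependence of a cell's resistance:
    # R(T) = R0 * (1 + ALPHA * (T - T0)).  Constants are placeholders.
    R0, T0, ALPHA = 50e3, 300.0, -2e-3      # ohms, kelvin, 1/K

    def resistance_at(temp_k):
        return R0 * (1.0 + ALPHA * (temp_k - T0))

    def estimate_temperature(r_measured):
        # Invert the calibration curve: passive sensing reads the cell's
        # resistance and maps it back to a temperature estimate.
        return T0 + (r_measured / R0 - 1.0) / ALPHA

    rng = np.random.default_rng(3)
    true_T = np.linspace(300, 360, 7)
    readings = resistance_at(true_T) * (1 + rng.normal(0, 0.002, true_T.size))
    est_T = estimate_temperature(readings)
    print("mean absolute error (K):", float(np.mean(np.abs(est_T - true_T))))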


System-on-Chip Conference | 2014

A stochastic learning algorithm for neuromemristive systems

Cory E. Merkel; Dhireesha Kudithipudi

In this paper, we present a stochastic learning algorithm for neuromemristive systems. Existing algorithms are based on gradient descent techniques, which require analog multiplications. The proposed algorithm removes the need for an analog multiplier by transforming each variable into a random Bernoulli-distributed value. Arithmetic operations on such values are easily implemented using digital circuits, reducing the area cost of the implementation. We tested the proposed algorithm and compared it with the least-mean-squares algorithm on both linear and logistic regression problems. Results indicate that the proposed algorithm is able to achieve similar accuracy with up to ≈3.5× less area.
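
The key trick, replacing an analog multiplication with operations on Bernoulli bit streams, is the classic stochastic-computing multiply: if two independent streams have ones-densities a and b, a bitwise AND of the streams has ones-density a*b. The Python sketch below shows only this encoding step; the stream length and values are arbitrary, and it is not the paper's exact circuit.

    import numpy as np

    rng = np.random.default_rng(4)
    STREAM_LEN = 4096                     # longer streams -> lower encoding noise

    def to_stream(value):
        # Encode a value in [0, 1] as a Bernoulli bit stream with P(1) = value.
        return (rng.random(STREAM_LEN) < value).astype(np.uint8)

    def stochastic_multiply(a, b):
        # With independent streams, P(bit_a AND bit_b) = a * b, so a bitwise AND
        # plus an average replaces an analog multiplier.
        return float(np.mean(to_stream(a) & to_stream(b)))

    a, b = 0.6, 0.3
    print("stochastic:", round(stochastic_multiply(a, b), 3), " exact:", a * b)

In a typical stochastic-computing realization, an AND gate and a counter stand in for the multiplier, which is the kind of digital simplification the area savings quoted above refer to.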


International Symposium on Neural Networks | 2013

Periodic activation functions in memristor-based analog neural networks

Cory E. Merkel; Dhireesha Kudithipudi; Nick Sereni

This work explores the use of periodic activation functions in memristor-based analog neural networks. We propose a hardware neuron based on a folding amplifier that produces a periodic output voltage. Furthermore, the amplifier's fold factor can be adjusted to change the number of low-to-high or high-to-low output voltage transitions. We also propose a memristor-based synapse circuit and training circuitry for realizing the perceptron learning rule. Behavioral models of our circuits were developed for simulating a single-layer, single-output feedforward neural network. The network was trained to detect the edges of a grayscale image. Our results show that neurons with a single fold, whose activation function resembles a sigmoid, perform the worst for this application, since they are unable to learn functions with multiple decision boundaries. Conversely, the 4-fold neuron performs the best (up to ≈65% better than the 1-fold neuron), as its activation function is periodic and it is able to learn functions with four decision boundaries.
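
The effect of the fold factor can be illustrated with a simple folded (triangle-wave style) activation model. The function below is not the folding amplifier's transfer curve, just an assumed stand-in that shows how the number of output transitions grows with the number of folds.

    import numpy as np

    def folded_activation(net, folds, v_low=0.0, v_high=1.0):
        # Triangle-wave style "folding": the output sweeps between v_low and v_high
        # `folds` times as the net input crosses [0, 1].
        # (Illustrative model only, not the folding-amplifier transfer curve.)
        phase = np.clip(net, 0.0, 1.0) * folds
        tri = 1.0 - np.abs((phase % 2.0) - 1.0)        # repeating up/down ramp
        return v_low + (v_high - v_low) * tri

    net = np.linspace(0.0, 1.0, 1001)
    for folds in (1, 2, 4):
        out = (folded_activation(net, folds) > 0.5).astype(int)
        transitions = int(np.count_nonzero(np.diff(out)))
        print(f"{folds}-fold activation: {transitions} output transitions")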


International Symposium on Nanoscale Architectures | 2012

RRAM-based adaptive neural logic block for implementing non-linearly separable functions in a single layer

Michael Soltiz; Cory E. Merkel; Dhireesha Kudithipudi; Garrett S. Rose

As the efficiency of neuromorphic systems improves, biologically inspired learning techniques are becoming more and more appealing for various computing applications, ranging from pattern and character recognition to general-purpose reconfigurable logic. Due to their functional similarities to synapses in the brain, memristors are becoming a key element in the hardware realization of Hebbian learning systems. By pairing such devices with a perceptron-based neuron model using a threshold activation function, previous work has shown that a neural logic block capable of learning any linearly separable function in real time can be developed. However, in this configuration, any function with two or more decision boundaries cannot be learned in a single layer. While previous memristor-based neural logic block designs have been shown to achieve very low area and high performance when compared to look-up tables (LUTs) and capacitive threshold logic (CTL), the limitation on the set of learnable functions has made networks of these logic blocks impractical to scale to realistic applications. By integrating an additional layer of memristors into a neural logic block, this paper proposes a logic block with an adaptive activation function. The resulting logic block is capable of learning any function in a single layer, reducing the number of logic blocks required to implement a single 4-input function by up to 10× and significantly improving training time. When considered as a building block for ISCAS-85 benchmark circuits, the proposed logic block achieves an energy-delay product (EDP) up to 97.8% lower than a neural logic block with a threshold activation function. Furthermore, the performance improvement over a CMOS LUT implementation ranges from 78.08% to 97.43% for all ISCAS-85 circuits.
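
For reference, the energy-delay product comparisons quoted here and in the MTNLB/RANLB abstract above reduce to a simple formula. The sketch below uses placeholder energy and delay figures, not values from the paper; only the EDP definition and the percentage-improvement computation are the point.

    # Energy-delay product (EDP) comparison sketch.  The numbers below are
    # placeholders, not values from the paper; only the formula matters.
    def edp(energy_j, delay_s):
        return energy_j * delay_s

    def improvement(baseline, proposed):
        # Percentage reduction of the proposed design's EDP versus the baseline.
        return 100.0 * (baseline - proposed) / baseline

    lut_edp = edp(energy_j=2.0e-12, delay_s=1.0e-9)      # hypothetical LUT figures
    nlb_edp = edp(energy_j=0.2e-12, delay_s=0.5e-9)      # hypothetical adaptive NLB
    print(f"EDP improvement: {improvement(lut_edp, nlb_edp):.1f}%")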


International Conference on VLSI Design | 2015

Comparison of Off-Chip Training Methods for Neuromemristive Systems

Cory E. Merkel; Dhireesha Kudithipudi

Neuromemristive systems offer an efficient platform for learning and modeling non-linear functions in real time. Specifically, they are effective tools for pattern classification. However, training these systems presents several challenges, especially when CMOS and memristor process variations are considered. In this paper, we propose two off-chip training methods for neuromemristive systems: weight programming and feature training. Detailed variation models are developed to study the effects of CMOS and memristor process variations on neuromemristive circuits, including neurons, synapses, and training circuits. We analyze the impact of those variations on the proposed off-chip training methods. Specifically, we train a neuromemristive system to classify handwritten digits. The results indicate that the feature training method is able to provide over 2× better classification accuracy per unit area than the weight programming method. However, the weight programming method is much faster, and may be more suitable when the network needs to be frequently re-trained.
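
The two off-chip strategies can be caricatured in software: weight programming fits the readout against the nominal (ideal) hidden layer and then deploys it on a variation-affected array, while feature training fits the readout directly to features measured from the varied array. The sketch below uses a toy two-blob dataset, a tanh hidden layer, and an assumed 15% multiplicative mismatch; it only makes the two procedures concrete and does not reproduce the paper's variation models or results.

    import numpy as np

    rng = np.random.default_rng(5)

    # Toy two-class data (two Gaussian blobs).
    n = 200
    X = np.vstack([rng.normal(-1, 0.7, (n, 2)), rng.normal(+1, 0.7, (n, 2))])
    y = np.hstack([np.zeros(n), np.ones(n)])

    W_ideal = rng.uniform(-1, 1, (2, 20))             # nominal synaptic weights
    variation = rng.normal(1.0, 0.15, W_ideal.shape)  # assumed per-device mismatch
    W_chip = W_ideal * variation                      # what the fabricated array really does

    def features(X, W):
        return np.tanh(X @ W)

    def fit_readout(H, y):
        return np.linalg.lstsq(H, y, rcond=None)[0]

    def accuracy(H, w, y):
        return float(np.mean((H @ w > 0.5) == y))

    # (a) Weight programming: readout fitted against the ideal hidden layer,
    #     then deployed on the variation-affected chip.
    w_a = fit_readout(features(X, W_ideal), y)
    acc_a = accuracy(features(X, W_chip), w_a, y)

    # (b) Feature training: hidden responses are measured from the chip itself,
    #     so the readout is fitted directly to the varied features.
    w_b = fit_readout(features(X, W_chip), y)
    acc_b = accuracy(features(X, W_chip), w_b, y)

    print(f"weight programming: {acc_a:.2f}   feature training: {acc_b:.2f}")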

Collaboration


Dive into Cory E. Merkel's collaborations.

Top Co-Authors

Dhireesha Kudithipudi, Rochester Institute of Technology
Qutaiba Saleh, Rochester Institute of Technology
Bryant T. Wysocki, Air Force Research Laboratory
Michael Soltiz, Rochester Institute of Technology
Colin Donahue, Rochester Institute of Technology
Yu Kee Ooi, Rochester Institute of Technology
Robinson E. Pino, Air Force Research Laboratory
Abdullah M. Zyarah, Rochester Institute of Technology
Abhishek Ramesh, Rochester Institute of Technology