Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Farnood Merrikh-Bayat is active.

Publication


Featured research published by Farnood Merrikh-Bayat.


Nature | 2015

Training and operation of an integrated neuromorphic network based on metal-oxide memristors

Mirko Prezioso; Farnood Merrikh-Bayat; Brian J. Hoskins; Gina C. Adam; Konstantin K. Likharev; Dmitri B. Strukov

Despite much progress in semiconductor integrated circuit technology, the extreme complexity of the human cerebral cortex, with its approximately 10^14 synapses, makes the hardware implementation of neuromorphic networks with a comparable number of devices exceptionally challenging. To provide comparable complexity while operating much faster and with manageable power dissipation, networks based on circuits combining complementary metal-oxide-semiconductors (CMOSs) and adjustable two-terminal resistive devices (memristors) have been developed. In such circuits, the usual CMOS stack is augmented with one or several crossbar layers, with memristors at each crosspoint. There have recently been notable improvements in the fabrication of such memristive crossbars and their integration with CMOS circuits, including first demonstrations of their vertical integration. Separately, discrete memristors have been used as artificial synapses in neuromorphic networks. Very recently, such experiments have been extended to crossbar arrays of phase-change memristive devices. The adjustment of such devices, however, requires an additional transistor at each crosspoint, and hence these devices are much harder to scale than metal-oxide memristors, whose nonlinear current–voltage curves enable transistor-free operation. Here we report the experimental implementation of transistor-free metal-oxide memristor crossbars, with device variability sufficiently low to allow operation of integrated neural networks, in a simple network: a single-layer perceptron (an algorithm for linear classification). The network can be taught in situ using a coarse-grain variety of the delta rule algorithm to perform the perfect classification of 3 × 3-pixel black/white images into three classes (representing letters). This demonstration is an important step towards much larger and more complex memristive neuromorphic networks.
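
To make the in-situ training idea concrete, the sketch below implements a single-layer perceptron trained with a coarse-grain (sign-only, fixed-step) variant of the delta rule on 3×3 black/white patterns, in the spirit of the demonstration above. The specific letter patterns, the tanh activation, and the step size are illustrative assumptions, not the paper's exact parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
n_inputs, n_classes = 9 + 1, 3            # 3x3 pixels plus one bias input
W = rng.uniform(-0.1, 0.1, (n_classes, n_inputs))
delta_w = 0.05                            # fixed, pulse-like weight increment (assumed)

def forward(x):
    return np.tanh(W @ x)

def train_step(x, target):
    """Coarse-grain delta rule: only the sign of the delta-rule gradient
    is used, so every weight changes by the same fixed amount per update."""
    global W
    err = target - forward(x)
    W += delta_w * np.sign(np.outer(err, x))

# Hypothetical 3x3 black/white "letter" patterns, one per class.
patterns = {
    0: [1, 1, 1, 0, 1, 0, 0, 1, 0],       # "T"-like
    1: [1, 0, 1, 1, 0, 1, 1, 1, 1],       # "U"-like
    2: [1, 1, 1, 1, 0, 0, 1, 1, 1],       # "C"-like
}
for epoch in range(50):
    for cls, pixels in patterns.items():
        x = np.array(pixels + [1.0])      # append the bias input
        target = -np.ones(n_classes)
        target[cls] = 1.0
        train_step(x, target)

for cls, pixels in patterns.items():
    x = np.array(pixels + [1.0])
    print(cls, "->", int(np.argmax(forward(x))))   # expect 0->0, 1->1, 2->2
```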


International Symposium on Neural Networks | 2015

Efficient training algorithms for neural networks based on memristive crossbar circuits

Irina Kataeva; Farnood Merrikh-Bayat; Elham Zamanidoost; Dmitri B. Strukov

We have adapted the backpropagation algorithm for training a multilayer perceptron classifier implemented with memristive crossbar circuits. The proposed training approach takes into account the switching dynamics of a particular, though very typical, type of memristive device and the weight-update restrictions imposed by the crossbar topology. The simulation results show that for a crossbar-based multilayer perceptron with one hidden layer of 300 neurons, the misclassification rate on the MNIST benchmark can be as low as 1.47% and 4.06% for the batch and stochastic algorithms, respectively, which is comparable to the best reported results for similar neural networks.
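
The sketch below illustrates the kind of crossbar-friendly backpropagation update described above for a 784-300-10 perceptron: the gradient is computed in the usual way, but every weight moves by a fixed step in the direction of the gradient's sign, mimicking a single identical programming pulse per device. The tanh activations, step size, and random stand-in data are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)
n_in, n_hidden, n_out = 784, 300, 10
W1 = rng.uniform(-0.05, 0.05, (n_hidden, n_in))
W2 = rng.uniform(-0.05, 0.05, (n_out, n_hidden))
step = 0.002                               # single programming-pulse weight change (assumed)

def sign_backprop_batch(X, T):
    """One batch update; X is (batch, 784), T is (batch, 10) with entries in {-1, +1}."""
    global W1, W2
    H = np.tanh(X @ W1.T)                  # hidden-layer activations
    Y = np.tanh(H @ W2.T)                  # output-layer activations
    dY = (Y - T) * (1 - Y**2)              # output-layer deltas
    dH = (dY @ W2) * (1 - H**2)            # backpropagated hidden-layer deltas
    W2 -= step * np.sign(dY.T @ H)         # sign-only (Manhattan-rule-style) updates
    W1 -= step * np.sign(dH.T @ X)

# Toy usage with random data standing in for MNIST images and labels.
X = rng.uniform(0.0, 1.0, (64, n_in))
T = -np.ones((64, n_out))
T[np.arange(64), rng.integers(0, n_out, 64)] = 1.0
sign_backprop_batch(X, T)
```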


International Symposium on Nanoscale Architectures | 2013

Digital-to-analog and analog-to-digital conversion with metal oxide memristors for ultra-low power computing

Ligang Gao; Farnood Merrikh-Bayat; Fabien Alibart; Xinjie Guo; Brian D. Hoskins; Kwang-Ting Cheng; Dmitri B. Strukov

The paper presents an experimental demonstration of 6-bit digital-to-analog (DAC) and 4-bit analog-to-digital (ADC) conversion operations implemented with a hybrid circuit consisting of Pt/TiO2-x/Pt resistive switching devices (also known as ReRAMs or memristors) and a Si operational amplifier (op-amp). In particular, a binary-weighted implementation is demonstrated for the DAC, while the ADC is implemented with a Hopfield neural network circuit.
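
As a rough numerical illustration (not the paper's circuit netlist), the snippet below models the binary-weighted DAC idea: six memristors tuned to conductances proportional to 2^i feed an inverting op-amp summer, which converts the summed current into an output voltage. All component values are arbitrary assumptions.

```python
G0 = 10e-6     # conductance of the least-significant-bit memristor, siemens (assumed)
R_F = 10e3     # op-amp feedback resistance, ohms (assumed)
V_REF = 0.2    # read voltage applied to the active bit lines, volts (assumed)

def dac_6bit(code: int) -> float:
    """Ideal output of an inverting summer whose i-th branch conductance is G0 * 2**i."""
    assert 0 <= code < 64
    current = sum(((code >> i) & 1) * (G0 * 2**i) * V_REF for i in range(6))
    return -R_F * current                  # inverting op-amp: Vout = -R_F * sum(G_i * V_REF)

for code in (0, 1, 32, 63):
    print(code, f"{dac_6bit(code) * 1e3:.2f} mV")
```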


International Electron Devices Meeting | 2015

Modeling and implementation of firing-rate neuromorphic-network classifiers with bilayer Pt/Al2O3/TiO2−x/Pt memristors

Mirko Prezioso; I. Kataeva; Farnood Merrikh-Bayat; Brian D. Hoskins; Gina C. Adam; T. Sota; Konstantin K. Likharev; Dmitri B. Strukov

Neuromorphic pattern classifiers were implemented, for the first time, using transistor-free integrated crossbar circuits with bilayer metal-oxide memristors. 10×6- and 10×8-crosspoint neuromorphic networks were trained in situ using a Manhattan-Rule algorithm to separate a set of 3×3 binary images into 3 classes using batch-mode training and into 4 classes using stochastic-mode training, respectively. Simulation of much larger, multilayer neural network classifiers based on such technology has shown that their fidelity may be on a par with the state-of-the-art results for software-implemented networks.


Frontiers in Neuroscience | 2015

Modeling and Experimental Demonstration of a Hopfield Network Analog-to-Digital Converter with Hybrid CMOS/Memristor Circuits

Xinjie Guo; Farnood Merrikh-Bayat; Ligang Gao; Brian D. Hoskins; Fabien Alibart; Bernabé Linares-Barranco; Luke Theogarajan; Christof Teuscher; Dmitri B. Strukov

The purpose of this work was to demonstrate the feasibility of building recurrent artificial neural networks with hybrid complementary metal oxide semiconductor (CMOS)/memristor circuits. To do so, we modeled a Hopfield network implementing an analog-to-digital converter (ADC) with up to 8 bits of precision. Major shortcomings affecting the ADC's precision, such as the non-ideal behavior of the CMOS circuitry and the specific limitations of memristors, were investigated, and an effective solution was proposed, capitalizing on the in-field programmability of memristors. The theoretical work was validated experimentally by demonstrating the successful operation of a 4-bit ADC circuit implemented with discrete Pt/TiO2−x/Pt memristors and CMOS integrated circuit components.
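
For readers unfamiliar with the Hopfield ADC formulation this work builds on, the sketch below reproduces the classic Tank-Hopfield mapping: N binary neurons encode the output bits, and symmetric weights T_ij = -2^(i+j) with biases I_i = 2^i * Vin - 2^(2i-1) make the network energy minimal when the bits equal the binary code of the input. The simple asynchronous threshold updates and unit scaling here are simplifying assumptions, not the paper's CMOS/memristor implementation.

```python
import numpy as np

def hopfield_adc(v_in: float, n_bits: int = 4, n_sweeps: int = 20) -> int:
    """Convert v_in (expressed in units of one LSB) with a Tank-Hopfield-style network."""
    weights = np.array([[0.0 if i == j else -2.0 ** (i + j)
                         for j in range(n_bits)] for i in range(n_bits)])
    bias = np.array([2.0 ** i * v_in - 2.0 ** (2 * i - 1) for i in range(n_bits)])
    bits = np.zeros(n_bits)
    for _ in range(n_sweeps):              # asynchronous threshold updates until settled
        for i in range(n_bits):
            bits[i] = 1.0 if weights[i] @ bits + bias[i] > 0 else 0.0
    return int(sum(int(b) << i for i, b in enumerate(bits)))

for v in (0.0, 3.2, 7.9, 12.5, 15.0):
    print(v, "->", hopfield_adc(v))        # expect 0, 3, 7, 12, 15
```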


International Memory Workshop | 2015

Memory Technologies for Neural Networks

Dmitri B. Strukov; Farnood Merrikh-Bayat; Mirko Prezioso; Xinjie Guo; Brian D. Hoskins; Konstantin K. Likharev

Synapses, the most numerous elements of neural networks, are memory devices. As in traditional memory applications, device density is one of the most essential metrics for large-scale artificial neural networks. This application, however, imposes a number of additional requirements, such as the continuous change of the memory state, so that novel engineering approaches are required. In this paper, we briefly review our recent efforts at addressing these needs. We start by reviewing the CrossNet concept, which was conceived to address major challenges of artificial neural networks. We then discuss the recent progress toward CrossNet implementation, in particular the experimental results for simple networks with crossbar-integrated resistive switching (memristive) metal oxide devices. Finally, we review preliminary results on redesigning commercial-grade embedded NOR flash memories to enable individual cell tuning. While NOR flash memories are less dense than memristor crossbars, their technology is much more mature and ready for the development of large-scale neural networks.


International Symposium on Circuits and Systems | 2016

Spiking neuromorphic networks with metal-oxide memristors

Mirko Prezioso; Y. Zhong; Dmitri Gavrilov; Farnood Merrikh-Bayat; Brian D. Hoskins; Gina C. Adam; Konstantin K. Likharev; Dmitri B. Strukov

This is a brief review of our recent work on memristor-based spiking neuromorphic networks. We first describe the recent experimental demonstration of several of the most biologically plausible spike-time-dependent plasticity (STDP) windows in integrated metal-oxide memristors and, for the first time, the observed self-adaptive STDP, which may be crucial for spiking neural network applications. We then discuss recent theoretical work in which an analytical, data-verified STDP model was used to simulate the operation of a spiking classifier of spatial-temporal patterns, and in which the capacity-to-fidelity tradeoff and noise immunity of spiking spatial-temporal associative memories with local and global recording were evaluated.
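
As background for the STDP windows mentioned above, the snippet below evaluates a conventional pairwise exponential STDP rule: a presynaptic spike shortly before a postsynaptic one potentiates the synapse, while the reverse timing depresses it. The amplitudes and time constants are illustrative assumptions, not fitted device data.

```python
import math

A_PLUS, A_MINUS = 0.010, 0.012        # maximum relative conductance change (assumed)
TAU_PLUS, TAU_MINUS = 20e-3, 20e-3    # decay time constants in seconds (assumed)

def stdp_weight_change(t_pre: float, t_post: float) -> float:
    """Relative synaptic weight change for a single pre/post spike pair."""
    dt = t_post - t_pre
    if dt >= 0:                       # pre before post: potentiation
        return A_PLUS * math.exp(-dt / TAU_PLUS)
    return -A_MINUS * math.exp(dt / TAU_MINUS)   # post before pre: depression

for dt_ms in (-40, -10, -1, 1, 10, 40):
    print(f"{dt_ms:+d} ms -> {stdp_weight_change(0.0, dt_ms * 1e-3):+.4f}")
```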


Proceedings of SPIE | 2016

RRAM-based hardware implementations of artificial neural networks: progress update and challenges ahead

Mirko Prezioso; Farnood Merrikh-Bayat; Bhaswar Chakrabarti; Dmitri B. Strukov

Artificial neural networks have been receiving increasing attention due to their superior performance in many information processing tasks. Typically, scaling up the size of the network results in better performance and richer functionality. However, large neural networks are challenging to implement in software, and customized hardware is generally required for their practical implementation. In this work, we will discuss our group's recent efforts on the development of such custom hardware circuits, based on hybrid CMOS/memristor circuits, in particular of the CMOL variety. We will start by reviewing the basics of memristive devices and of CMOL circuits. We will then discuss our recent progress towards the demonstration of hybrid circuits, focusing on the experimental and theoretical results for artificial neural networks based on crossbar-integrated metal oxide memristors. We will conclude the presentation with a discussion of the remaining challenges and the most pressing research needs.


IEEE Transactions on Electron Devices | 2017

3-D Memristor Crossbars for Analog and Neuromorphic Computing Applications

Gina C. Adam; Brian D. Hoskins; Mirko Prezioso; Farnood Merrikh-Bayat; Bhaswar Chakrabarti; Dmitri B. Strukov


arXiv: Emerging Technologies | 2016

Sub-1-us, Sub-20-nJ Pattern Classification in a Mixed-Signal Circuit Based on Embedded 180-nm Floating-Gate Memory Cell Arrays

Farnood Merrikh-Bayat; Xinjie Guo; Michael Klachko; Mirko Prezioso; Konstantin K. Likharev; Dmitri B. Strukov

Collaboration


Dive into Farnood Merrikh-Bayat's collaboration.

Top Co-Authors

Mirko Prezioso, University of California
Gina C. Adam, University of California
Xinjie Guo, University of California
Ligang Gao, University of California
Fabien Alibart, University of California