Publications
Featured research published by Rodrigo Alvarez-Icaza.
Science | 2014
Paul A. Merolla; John V. Arthur; Rodrigo Alvarez-Icaza; Andrew S. Cassidy; Jun Sawada; Filipp Akopyan; Bryan L. Jackson; Nabil Imam; Chen Guo; Yutaka Nakamura; Bernard Brezzo; Ivan Vo; Steven K. Esser; Rathinakumar Appuswamy; Brian Taba; Arnon Amir; Myron Flickner; William P. Risk; Rajit Manohar; Dharmendra S. Modha
Modeling computer chips on real brains
Computers are nowhere near as versatile as our own brains. Merolla et al. applied our present knowledge of the structure and function of the brain to design a new computer chip that uses the same wiring rules and architecture. The flexible, scalable chip operated efficiently in real time, while using very little power. Science, this issue p. 668

A large-scale computer chip mimics many features of a real brain. Inspired by the brain’s structure, we have developed an efficient, scalable, and flexible non–von Neumann architecture that leverages contemporary silicon technology. To demonstrate, we built a 5.4-billion-transistor chip with 4096 neurosynaptic cores interconnected via an intrachip network that integrates 1 million programmable spiking neurons and 256 million configurable synapses. Chips can be tiled in two dimensions via an interchip communication interface, seamlessly scaling the architecture to a cortexlike sheet of arbitrary size. The architecture is well suited to many applications that use complex neural networks in real time, for example, multiobject detection and classification. With 400-pixel-by-240-pixel video input at 30 frames per second, the chip consumes 63 milliwatts.
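For context, the per-core budget implied by these totals is easy to back out. A minimal arithmetic sketch in Python, derived from the rounded figures quoted above rather than stated in the abstract itself (the unrounded design is 256 neurons and a 256 x 256 synapse crossbar per core):

    # Per-core budget implied by the rounded totals quoted above (a derivation,
    # not figures stated in the abstract text itself).
    cores = 4096
    neurons = 1_000_000        # "1 million programmable spiking neurons" (rounded)
    synapses = 256_000_000     # "256 million configurable synapses" (rounded)
    power_w = 0.063            # 63 mW with 400x240 video at 30 frames per second

    neurons_per_core = neurons / cores      # ~244; the unrounded design is 256 per core
    synapses_per_core = synapses / cores    # 62,500; the unrounded design is a 256x256 crossbar
    energy_per_frame_j = power_w / 30       # ~2.1 mJ of chip energy per video frame
    print(neurons_per_core, synapses_per_core, energy_per_frame_j)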
Proceedings of the IEEE | 2014
Ben Varkey Benjamin; Peiran Gao; Emmett McQuinn; Swadesh Choudhary; Anand R. Chandrasekaran; Jean-Marie Bussat; Rodrigo Alvarez-Icaza; John V. Arthur; Paul A. Merolla; Kwabena Boahen
In this paper, we describe the design of Neurogrid, a neuromorphic system for simulating large-scale neural models in real time. Neuromorphic systems realize the function of biological neural systems by emulating their structure. Designers of such systems face three major design choices: 1) whether to emulate the four neural elements (axonal arbor, synapse, dendritic tree, and soma) with dedicated or shared electronic circuits; 2) whether to implement these electronic circuits in an analog or digital manner; and 3) whether to interconnect arrays of these silicon neurons with a mesh or a tree network. The choices we made were: 1) we emulated all neural elements except the soma with shared electronic circuits; this choice maximized the number of synaptic connections; 2) we realized all electronic circuits except those for axonal arbors in an analog manner; this choice maximized energy efficiency; and 3) we interconnected neural arrays in a tree network; this choice maximized throughput. These three choices made it possible to simulate a million neurons with billions of synaptic connections in real time, for the first time, using 16 Neurocores integrated on a board that consumes three watts.
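The three design axes and the choices listed above can be summarized compactly. A minimal Python sketch, with class and field names that are illustrative rather than taken from the paper:

    # The three design axes named in the abstract, with Neurogrid's stated choices.
    # Class and field names are illustrative only.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class NeuromorphicDesignChoices:
        shared_circuits: tuple   # neural elements emulated with shared (rather than dedicated) circuits
        analog_circuits: tuple   # neural elements realized with analog (rather than digital) circuits
        interconnect: str        # "tree" or "mesh" network between silicon-neuron arrays

    neurogrid = NeuromorphicDesignChoices(
        shared_circuits=("axonal arbor", "synapse", "dendritic tree"),  # all but the soma: maximizes synaptic connections
        analog_circuits=("synapse", "dendritic tree", "soma"),          # all but the axonal arbor: maximizes energy efficiency
        interconnect="tree",                                            # maximizes throughput
    )
    print(neurogrid)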
IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems | 2015
Filipp Akopyan; Jun Sawada; Andrew S. Cassidy; Rodrigo Alvarez-Icaza; John V. Arthur; Paul A. Merolla; Nabil Imam; Yutaka Nakamura; Pallab Datta; Gi-Joon Nam; Brian Taba; Michael P. Beakes; Bernard Brezzo; Jente B. Kuang; Rajit Manohar; William P. Risk; Bryan L. Jackson; Dharmendra S. Modha
The new era of cognitive computing brings forth the grand challenge of developing systems capable of processing massive amounts of noisy multisensory data. This type of intelligent computing poses a set of constraints, including real-time operation, low power consumption, and scalability, which require a radical departure from conventional system design. Brain-inspired architectures offer tremendous promise in this area. To this end, we developed TrueNorth, a 65 mW real-time neurosynaptic processor that implements a non-von Neumann, low-power, highly parallel, scalable, and defect-tolerant architecture. With 4096 neurosynaptic cores, the TrueNorth chip contains 1 million digital neurons and 256 million synapses tightly interconnected by an event-driven routing infrastructure. The fully digital 5.4-billion-transistor implementation leverages existing CMOS scaling trends, while ensuring one-to-one correspondence between hardware and software. With such aggressive design metrics and the TrueNorth architecture breaking path with prevailing architectures, it is clear that conventional computer-aided design (CAD) tools could not be used for the design. As a result, we developed a novel design methodology that includes mixed asynchronous-synchronous circuits and a complete tool flow for building an event-driven, low-power neurosynaptic chip. The TrueNorth chip is fully configurable in terms of connectivity and neural parameters to allow custom configurations for a wide range of cognitive and sensory perception applications. To reduce the system's communication energy, we have adapted existing application-agnostic very large-scale integration CAD placement tools for mapping logical neural networks to the physical neurosynaptic core locations on the TrueNorth chips. With that, we have successfully demonstrated the use of TrueNorth-based systems in multiple applications, including visual object recognition, with higher performance and orders of magnitude lower power consumption than the same algorithms run on von Neumann architectures. The TrueNorth chip and its tool flow serve as building blocks for future cognitive systems, and give designers an opportunity to develop novel brain-inspired architectures and systems based on the knowledge obtained from this paper.
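The placement step mentioned above amounts to assigning logical cores to physical core locations so that heavily communicating pairs sit close together. A minimal Python sketch of one plausible cost model; the grid coordinates, traffic numbers, and cost function are illustrative assumptions, not the adapted CAD flow described in the paper:

    # Illustrative communication-cost model for mapping logical cores onto a
    # physical grid of neurosynaptic cores; a sketch of the general idea only.
    def manhattan(a, b):
        return abs(a[0] - b[0]) + abs(a[1] - b[1])

    def communication_cost(placement, traffic):
        """placement: core id -> (x, y) location; traffic: (src, dst) -> spikes per second."""
        return sum(rate * manhattan(placement[src], placement[dst])
                   for (src, dst), rate in traffic.items())

    # Toy example: the placement that packs the chatty pair together costs less.
    traffic = {(0, 1): 1000.0, (1, 2): 10.0}
    spread = {0: (0, 0), 1: (63, 63), 2: (0, 63)}
    packed = {0: (0, 0), 1: (0, 1), 2: (1, 1)}
    assert communication_cost(packed, traffic) < communication_cost(spread, traffic)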
International Symposium on Neural Networks | 2013
Andrew S. Cassidy; Paul A. Merolla; John V. Arthur; Steven K. Esser; Bryan L. Jackson; Rodrigo Alvarez-Icaza; Pallab Datta; Jun Sawada; Theodore M. Wong; Vitaly Feldman; Arnon Amir; Daniel Ben Dayan Rubin; Filipp Akopyan; Emmett McQuinn; William P. Risk; Dharmendra S. Modha
Marching along the DARPA SyNAPSE roadmap, IBM unveils a trilogy of innovations towards the TrueNorth cognitive computing system inspired by the brain's function and efficiency. Judiciously balancing the dual objectives of functional capability and implementation/operational cost, we develop a simple, digital, reconfigurable, versatile spiking neuron model that supports one-to-one equivalence between hardware and simulation and is implementable using only 1272 ASIC gates. Starting with the classic leaky integrate-and-fire neuron, we add: (a) configurable and reproducible stochasticity to the input, the state, and the output; (b) four leak modes that bias the internal state dynamics; (c) deterministic and stochastic thresholds; and (d) six reset modes for rich finite-state behavior. The model supports a wide variety of computational functions and neural codes. We capture 50+ neuron behaviors in a library for hierarchical composition of complex computations and behaviors. Although designed with cognitive algorithms and applications in mind, serendipitously, the neuron model can qualitatively replicate the 20 biologically relevant behaviors of a dynamical neuron model.
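A minimal software sketch of the kind of neuron update described above, reduced to a plain leaky integrate-and-fire step with one leak mode, an optionally stochastic threshold, and two reset modes; parameter names and the cut-down set of modes are ours, not the 1272-gate hardware model:

    # Reduced leaky integrate-and-fire step in the spirit of the model above.
    import random

    def lif_step(v, spikes_in, weights, leak=-1, threshold=100,
                 reset="zero", stochastic_threshold_range=0):
        v += sum(w for s, w in zip(spikes_in, weights) if s)   # integrate weighted input spikes
        v += leak                                              # constant leak (one of the leak modes)
        jitter = random.randint(0, stochastic_threshold_range) if stochastic_threshold_range else 0
        if v >= threshold + jitter:                            # deterministic or stochastic threshold
            v = 0 if reset == "zero" else v - threshold        # two of the reset modes
            return v, 1                                        # emit a spike
        return v, 0

    # Constant drive makes the neuron spike periodically.
    v, spikes = 0, []
    for _ in range(20):
        v, s = lif_step(v, spikes_in=[1, 1], weights=[30, 30])
        spikes.append(s)
    print(spikes)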
International Symposium on Neural Networks | 2013
Steven K. Esser; Alexander Andreopoulos; Rathinakumar Appuswamy; Pallab Datta; Davis; Arnon Amir; John V. Arthur; Andrew S. Cassidy; Myron Flickner; Paul Merolla; Shyamal Chandra; Nicola Basilico; Stefano Carpin; Tom Zimmerman; Frank Zee; Rodrigo Alvarez-Icaza; Jeffrey A. Kusnitz; Theodore M. Wong; William P. Risk; Emmett McQuinn; Tapan Kumar Nayak; Raghavendra Singh; Dharmendra S. Modha
Marching along the DARPA SyNAPSE roadmap, IBM unveils a trilogy of innovations towards the TrueNorth cognitive computing system inspired by the brain's function and efficiency. The non-von Neumann nature of the TrueNorth architecture necessitates a novel approach to efficient system design. To this end, we have developed a set of abstractions, algorithms, and applications that are natively efficient for TrueNorth. First, we developed repeatedly used abstractions that span neural codes (such as binary, rate, population, and time-to-spike), long-range connectivity, and short-range connectivity. Second, we implemented ten algorithms that include convolution networks, spectral content estimators, liquid state machines, restricted Boltzmann machines, hidden Markov models, looming detection, temporal pattern matching, and various classifiers. Third, we demonstrate seven applications that include speaker recognition, music composer recognition, digit recognition, sequence prediction, collision avoidance, optical flow, and eye detection. Our results showcase the parallelism, versatility, rich connectivity, spatio-temporality, and multi-modality of the TrueNorth architecture, as well as the compositionality of the corelet programming paradigm and the flexibility of the underlying neuron model.
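Two of the neural codes named above can be illustrated in a few lines; the window length and conventions below are toy choices of ours, not the abstractions developed for TrueNorth:

    # Toy illustrations of the rate and time-to-spike codes named above.
    def rate_code(value, window=16):
        """Encode an integer 0..window as that many spikes within a time window."""
        return [1 if t < value else 0 for t in range(window)]

    def time_to_spike_code(value, window=16):
        """Encode an integer 0..window-1 as a single spike whose latency equals the value."""
        return [1 if t == value else 0 for t in range(window)]

    assert sum(rate_code(5)) == 5
    assert time_to_spike_code(3).index(1) == 3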
International Symposium on Neural Networks | 2013
Arnon Amir; Pallab Datta; William P. Risk; Andrew S. Cassidy; Jeffrey A. Kusnitz; Steven K. Esser; Alexander Andreopoulos; Theodore M. Wong; Myron Flickner; Rodrigo Alvarez-Icaza; Emmett McQuinn; Benjamin Shaw; Norm Pass; Dharmendra S. Modha
Marching along the DARPA SyNAPSE roadmap, IBM unveils a trilogy of innovations towards the TrueNorth cognitive computing system inspired by the brain's function and efficiency. The sequential programming paradigm of the von Neumann architecture is wholly unsuited for TrueNorth. Therefore, as our main contribution, we develop a new programming paradigm that permits construction of complex cognitive algorithms and applications while being efficient for TrueNorth and effective for programmer productivity. The programming paradigm consists of (a) an abstraction for a TrueNorth program, named Corelet, for representing a network of neurosynaptic cores that encapsulates all details except external inputs and outputs; (b) an object-oriented Corelet Language for creating, composing, and decomposing corelets; (c) a Corelet Library that acts as an ever-growing repository of reusable corelets from which programmers compose new corelets; and (d) an end-to-end Corelet Laboratory that is a programming environment which integrates with the TrueNorth architectural simulator, Compass, to support all aspects of the programming cycle from design, through development, debugging, and up to deployment. The new paradigm seamlessly scales from a handful of synapses and neurons to networks of neurosynaptic cores of progressively increasing size and complexity. The utility of the new programming paradigm is underscored by the fact that we have designed and implemented more than 100 algorithms as corelets for TrueNorth in a very short time span.
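A corelet, as described above, hides everything about its internal network except the external inputs and outputs, and larger corelets are built by composing smaller ones. A minimal object-oriented sketch of that idea; class and method names are illustrative, not the Corelet Language API:

    # Illustrative sketch of the corelet idea: a unit that encapsulates its
    # internal network and exposes only external inputs and outputs, and that can
    # be composed into larger corelets. Names are not the real Corelet API.
    class Corelet:
        def __init__(self, name, inputs, outputs, internal_cores=()):
            self.name = name
            self.inputs = list(inputs)            # externally visible input connectors
            self.outputs = list(outputs)          # externally visible output connectors
            self._cores = list(internal_cores)    # encapsulated; hidden from users

        def compose(self, other, wiring):
            """Connect self's outputs to other's inputs (wiring: output index -> input index)
            and return a new corelet exposing only the still-unconnected pins."""
            kept_outputs = [o for i, o in enumerate(self.outputs) if i not in wiring]
            kept_inputs = [p for j, p in enumerate(other.inputs) if j not in wiring.values()]
            return Corelet(self.name + "+" + other.name,
                           inputs=self.inputs + kept_inputs,
                           outputs=kept_outputs + other.outputs,
                           internal_cores=self._cores + other._cores)

    edge = Corelet("edge_detect", inputs=["pixels"], outputs=["edges"])
    classify = Corelet("classifier", inputs=["features"], outputs=["label"])
    pipeline = edge.compose(classify, wiring={0: 0})
    assert pipeline.inputs == ["pixels"] and pipeline.outputs == ["label"]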
IEEE International Conference on High Performance Computing Data and Analytics | 2014
Andrew S. Cassidy; Rodrigo Alvarez-Icaza; Filipp Akopyan; Jun Sawada; John V. Arthur; Paul A. Merolla; Pallab Datta; Marc Gonzalez Tallada; Brian Taba; Alexander Andreopoulos; Arnon Amir; Steven K. Esser; Jeff Kusnitz; Rathinakumar Appuswamy; Chuck Haymes; Bernard Brezzo; Roger Moussalli; Ralph Bellofatto; Christian W. Baks; Michael Mastro; Kai Schleupen; Charles Edwin Cox; Ken Inoue; Steven Edward Millman; Nabil Imam; Emmett McQuinn; Yutaka Nakamura; Ivan Vo; Chen Guok; Don Nguyen
Drawing on neuroscience, we have developed a parallel, event-driven kernel for neurosynaptic computation that is efficient with respect to computation, memory, and communication. Building on the previously demonstrated, highly optimized software expression of the kernel, here we demonstrate TrueNorth, a co-designed silicon expression of the kernel. TrueNorth achieves a five-orders-of-magnitude reduction in energy-to-solution and a two-orders-of-magnitude speedup in time-to-solution when running computer vision applications and complex recurrent neural network simulations. Breaking path with the von Neumann architecture, TrueNorth is a 4,096-core, 1-million-neuron, 256-million-synapse brain-inspired neurosynaptic processor that consumes 65 mW of power running in real time and delivers performance of 46 Giga-Synaptic OPS/Watt. We demonstrate seamless tiling of TrueNorth chips into arrays, forming a foundation for cortex-like scalability. TrueNorth's unprecedented time-to-solution, energy-to-solution, size, scalability, and performance, combined with the underlying flexibility of the kernel, enable a broad range of cognitive applications.
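The headline efficiency figure above fixes the absolute synaptic-event rate and the energy per event; a back-of-envelope derivation from the quoted numbers (not an additional measurement):

    # Derived from the figures quoted above; not an additional measurement.
    gsops_per_watt = 46e9   # 46 Giga-Synaptic OPS per watt
    power_w = 0.065         # 65 mW at real time

    synaptic_ops_per_second = gsops_per_watt * power_w   # ~3.0e9 synaptic operations per second
    joules_per_synaptic_op = 1.0 / gsops_per_watt        # ~22 picojoules per synaptic event
    print(synaptic_ops_per_second, joules_per_synaptic_op)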
IEEE International Conference on High Performance Computing Data and Analytics | 2016
Jun Sawada; Filipp Akopyan; Andrew S. Cassidy; Brian Taba; Michael DeBole; Pallab Datta; Rodrigo Alvarez-Icaza; Arnon Amir; John V. Arthur; Alexander Andreopoulos; Rathinakumar Appuswamy; Heinz Baier; Davis; David J. Berg; Carmelo di Nolfo; Steven K. Esser; Myron Flickner; Thomas A. Horvath; Bryan L. Jackson; Jeff Kusnitz; Scott Lekuch; Michael Mastro; Timothy Melano; Paul A. Merolla; Steven Edward Millman; Tapan Kumar Nayak; Norm Pass; Hartmut Penner; William P. Risk; Kai Schleupen
This paper describes the hardware and software ecosystem encompassing the brain-inspired TrueNorth processor – a 70 mW reconfigurable silicon chip with 1 million neurons, 256 million synapses, and 4096 parallel and distributed neural cores. For systems, we present a scale-out system loosely coupling 16 single-chip boards and a scale-up system tightly integrating 16 chips in a 4 × 4 configuration by exploiting TrueNorth's native tiling. For software, we present an end-to-end ecosystem consisting of a simulator, a programming language, an integrated programming environment, a library of algorithms and applications, firmware, tools for deep learning, a teaching curriculum, and cloud enablement. For the scale-up systems, we summarize our approach to the physical placement of neural networks, which reduces intra- and inter-chip network traffic. The ecosystem is in use at over 30 universities and government/corporate labs. Our platform is a substrate for a spectrum of applications from mobile and embedded computing to cloud and supercomputers.
International Symposium on Circuits and Systems | 2016
Andreas G. Andreou; Andrew Dykman; Kate D. Fischl; Guillaume Garreau; Daniel R. Mendat; Garrick Orchard; Andrew S. Cassidy; Paul A. Merolla; John V. Arthur; Rodrigo Alvarez-Icaza; Bryan L. Jackson; Dharmendra S. Modha
Summary form only given. The IBM TrueNorth (TN) Neurosynaptic System is a chip multiprocessor with a tightly coupled processor/memory architecture that results in energy-efficient neurocomputing; it is a significant milestone in over 30 years of neuromorphic engineering. It comprises 4096 cores, each with 65K of local memory (6T SRAM) serving as synapses and 256 arithmetic logic units serving as neurons, which operate on a unary number representation and compute by counting up to a maximum of 19 bits. The cores are event-driven, using custom asynchronous and synchronous logic, and they are globally connected through an asynchronous packet-switched mesh network-on-chip (NOC). The chip development board includes a Xilinx Zynq FPGA that does the housekeeping and provides standard communication through an Ethernet UDP interface. The asynchronous Address Event Representation (AER) in the NOC is also exposed to the user for connection to AER-based peripherals through a packet-based, bundled-data, full-duplex interface. The unary data values represented on the system buses can take on a wide variety of spatial and temporal encoding schemes: pulse density coding (the number of events Ne represents a number N), thermometer coding, time-slot encoding, and stochastic encoding are examples. Additional low-level interfaces are available for communicating directly with the TrueNorth chip to aid programming and parameter setting. A hierarchical, compositional programming language, Corelet, is available to aid the development of TN applications. IBM provides support and a development system as well as “Compass”, a scalable simulator. The software environment runs under standard Linux installations (Red Hat, CentOS, and Ubuntu) and has standard interfaces to Matlab and to Caffe, which is employed to train deep neural network models. The TN architecture can be interfaced using native AER to a number of bio-inspired sensory devices developed over many years of neuromorphic engineering (silicon retinas and silicon cochleas). In addition, the architecture is well suited for implementing deep neural networks, with many applications in computer vision, speech recognition, and language processing. In a sensory information processing system architecture, one desires both pattern processing in space and time to extract features in symbolic sub-spaces and natural language processing to provide contextual and semantic information in the form of priors. In this paper we discuss results from ongoing experimental work on real-time sensory information processing using the TN architecture in three different areas: (i) spatial pattern processing (computer vision); (ii) temporal pattern processing (speech processing and recognition); and (iii) natural language processing (word similarity). A real-time demonstration will be done at ISCAS 2016 using the TN system and neuromorphic event-based sensors for audition (silicon cochlea) and vision (silicon retina).
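Because the AER interface delivers events as plain address/time pairs, decoding a pulse-density value reduces to counting events per address over a time window. A minimal illustrative sketch; the event tuple layout and the window are our assumptions, not the TN interface specification:

    # Illustrative consumption of an AER event stream: each event is an
    # (address, timestamp) pair; under pulse-density coding the value carried by
    # an address is its event count within a time window. Field layout is ours.
    from collections import Counter

    def decode_pulse_density(events, window_start, window_end):
        """events: iterable of (address, time) pairs; returns address -> event count in the window."""
        return dict(Counter(addr for addr, t in events if window_start <= t < window_end))

    stream = [(7, 0.001), (7, 0.004), (3, 0.002), (7, 0.009), (3, 0.015)]
    assert decode_pulse_density(stream, 0.0, 0.010) == {7: 3, 3: 1}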
Biological Cybernetics | 2012
Rodrigo Alvarez-Icaza; Kwabena Boahen
To produce smooth and coordinated motion, our nervous systems need to generate precisely timed muscle activation patterns that, due to axonal conduction delay, must be generated in a predictive and feedforward manner. Kawato proposed that the cerebellum accomplishes this by acting as an inverse controller that modulates descending motor commands to predictively drive the spinal cord such that the musculoskeletal dynamics are canceled out. This and other cerebellar theories do not, however, account for the rich biophysical properties expressed by the olivocerebellar complex’s various cell types, making these theories difficult to verify experimentally. Here we propose that a multizonal microcomplex’s (MZMC) inferior olivary neurons use their subthreshold oscillations to mirror a musculoskeletal joint’s underdamped dynamics, thereby achieving inverse control. We used control theory to map a joint’s inverse model onto an MZMC’s biophysics, and we used biophysical modeling to confirm that inferior olivary neurons can express the dynamics required to mirror biomechanical joints. We then combined both techniques to predict how experimentally injecting current into the inferior olive would affect overall motor output performance. We found that this experimental manipulation unmasked a joint’s natural dynamics, as observed by motor output ringing at the joint’s natural frequency, with amplitude proportional to the amount of current. These results support the proposal that the cerebellum—in particular an MZMC—is an inverse controller; the results also provide a biophysical implementation for this controller and allow one to make an experimentally testable prediction.
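As a minimal illustration of the control-theoretic idea, take a single joint modeled as a generic underdamped second-order system (the symbols and model below are ours, not the MZMC biophysics of the paper): $J\,\ddot{\theta}(t) + b\,\dot{\theta}(t) + k\,\theta(t) = \tau(t)$. An inverse controller issues the torque $\tau(t) = J\,\ddot{\theta}_d(t) + b\,\dot{\theta}_d(t) + k\,\theta_d(t)$ needed to track a desired trajectory $\theta_d(t)$, so the joint's dynamics are canceled along that trajectory. A perturbation injected downstream of the inverse (such as the current injection into the inferior olive considered above) instead excites the joint's own modes, which in the underdamped case ($\zeta = b/(2\sqrt{kJ}) < 1$) ring at approximately the natural frequency $\omega_n = \sqrt{k/J}$, with amplitude proportional to the size of the perturbation.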