
Publications


Featured research published by Andrew S. Cassidy.


Science | 2014

A million spiking-neuron integrated circuit with a scalable communication network and interface

Paul A. Merolla; John V. Arthur; Rodrigo Alvarez-Icaza; Andrew S. Cassidy; Jun Sawada; Filipp Akopyan; Bryan L. Jackson; Nabil Imam; Chen Guo; Yutaka Nakamura; Bernard Brezzo; Ivan Vo; Steven K. Esser; Rathinakumar Appuswamy; Brian Taba; Arnon Amir; Myron Flickner; William P. Risk; Rajit Manohar; Dharmendra S. Modha

Modeling computer chips on real brains: Computers are nowhere near as versatile as our own brains. Merolla et al. applied our present knowledge of the structure and function of the brain to design a new computer chip that uses the same wiring rules and architecture. The flexible, scalable chip operated efficiently in real time, while using very little power. (Science, this issue p. 668)

A large-scale computer chip mimics many features of a real brain. Inspired by the brain’s structure, we have developed an efficient, scalable, and flexible non–von Neumann architecture that leverages contemporary silicon technology. To demonstrate, we built a 5.4-billion-transistor chip with 4096 neurosynaptic cores interconnected via an intrachip network that integrates 1 million programmable spiking neurons and 256 million configurable synapses. Chips can be tiled in two dimensions via an interchip communication interface, seamlessly scaling the architecture to a cortexlike sheet of arbitrary size. The architecture is well suited to many applications that use complex neural networks in real time, for example, multiobject detection and classification. With 400-pixel-by-240-pixel video input at 30 frames per second, the chip consumes 63 milliwatts.
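
The tiled, event-driven organization the abstract describes can be illustrated with a toy address-event model: a spike is a small packet carrying its destination core and axon, routed hop by hop across a 2D mesh of cores. The sketch below is purely illustrative; the event format, grid size, and X-then-Y routing policy are assumptions, not the chip's actual protocol.

```python
# Toy address-event routing on a 2D mesh of neurosynaptic cores.
# Illustrative only: the event format and X-then-Y (dimension-ordered)
# routing are assumptions, not the chip's actual protocol.
from collections import deque

def route_xy(src, dst):
    """Return the mesh hops an event takes: X dimension first, then Y."""
    x, y = src
    hops = []
    while x != dst[0]:
        x += 1 if dst[0] > x else -1
        hops.append((x, y))
    while y != dst[1]:
        y += 1 if dst[1] > y else -1
        hops.append((x, y))
    return hops

# A spike event: (source core, destination core, target axon on that core).
events = deque([((0, 0), (3, 2), 17), ((1, 3), (1, 0), 200)])
while events:
    src, dst, axon = events.popleft()
    path = route_xy(src, dst)
    print(f"spike {src} -> {dst}, axon {axon}: {len(path)} hops via {path}")
```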


Proceedings of the National Academy of Sciences of the United States of America | 2016

Convolutional networks for fast, energy-efficient neuromorphic computing

Steven K. Esser; Paul A. Merolla; John V. Arthur; Andrew S. Cassidy; Rathinakumar Appuswamy; Alexander Andreopoulos; David J. Berg; Jeffrey L. McKinstry; Timothy Melano; Davis R. Barch; Carmelo di Nolfo; Pallab Datta; Arnon Amir; Brian Taba; Myron Flickner; Dharmendra S. Modha

Significance: Brain-inspired computing seeks to develop new technologies that solve real-world problems while remaining grounded in the physical requirements of energy, speed, and size. Meeting these challenges requires high-performing algorithms that are capable of running on efficient hardware. Here, we adapt deep convolutional neural networks, which are today’s state-of-the-art approach for machine perception in many domains, to perform classification tasks on neuromorphic hardware, which is today’s most efficient platform for running neural networks. Using our approach, we demonstrate near state-of-the-art accuracy on eight datasets, while running at between 1,200 and 2,600 frames/s and using between 25 and 275 mW.

Deep networks are now able to achieve human-level performance on a broad spectrum of recognition tasks. Independently, neuromorphic computing has now demonstrated unprecedented energy-efficiency through a new chip architecture based on spiking neurons, low precision synapses, and a scalable communication network. Here, we demonstrate that neuromorphic computing, despite its novel architectural primitives, can implement deep convolution networks that (i) approach state-of-the-art classification accuracy across eight standard datasets encompassing vision and speech, (ii) perform inference while preserving the hardware’s underlying energy-efficiency and high throughput, running on the aforementioned datasets at between 1,200 and 2,600 frames/s and using between 25 and 275 mW (effectively >6,000 frames/s per Watt), and (iii) can be specified and trained using backpropagation with the same ease-of-use as contemporary deep learning. This approach allows the algorithmic power of deep learning to be merged with the efficiency of neuromorphic processors, bringing the promise of embedded, intelligent, brain-inspired computing one step closer.
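
The central trick of running backprop-trained networks on low-precision synapses can be sketched with a straight-through estimator: keep full-precision shadow weights for gradient updates, but push the forward pass through a quantizer. The NumPy sketch below illustrates that general family of techniques; it is not the paper's actual training procedure, and all sizes and constants are invented.

```python
# Sketch: train a layer whose forward pass uses trinary weights {-1, 0, +1}
# while gradients update a full-precision shadow copy (straight-through
# estimator). Generic illustration; not the paper's exact method.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(8, 4))    # full-precision shadow weights
x = rng.normal(size=(1, 8))               # one toy input
target = np.array([[0.0, 1.0, 0.0, 0.0]])
lr = 0.1

def trinarize(w, thresh=0.05):
    """Quantize to {-1, 0, +1}: the kind of weight a low-precision
    neuromorphic synapse can actually store."""
    return np.sign(w) * (np.abs(w) > thresh)

for _ in range(100):
    Wq = trinarize(W)
    y = x @ Wq               # forward pass through quantized weights
    err = y - target
    W -= lr * (x.T @ err)    # gradient w.r.t. Wq, applied straight through to W
print("squared error:", float((err ** 2).sum()))
```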


IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems | 2015

TrueNorth: Design and Tool Flow of a 65 mW 1 Million Neuron Programmable Neurosynaptic Chip

Filipp Akopyan; Jun Sawada; Andrew S. Cassidy; Rodrigo Alvarez-Icaza; John V. Arthur; Paul A. Merolla; Nabil Imam; Yutaka Nakamura; Pallab Datta; Gi-Joon Nam; Brian Taba; Michael P. Beakes; Bernard Brezzo; Jente B. Kuang; Rajit Manohar; William P. Risk; Bryan L. Jackson; Dharmendra S. Modha

The new era of cognitive computing brings forth the grand challenge of developing systems capable of processing massive amounts of noisy multisensory data. This type of intelligent computing poses a set of constraints, including real-time operation, low power consumption, and scalability, which require a radical departure from conventional system design. Brain-inspired architectures offer tremendous promise in this area. To this end, we developed TrueNorth, a 65 mW real-time neurosynaptic processor that implements a non-von Neumann, low-power, highly parallel, scalable, and defect-tolerant architecture. With 4096 neurosynaptic cores, the TrueNorth chip contains 1 million digital neurons and 256 million synapses tightly interconnected by an event-driven routing infrastructure. The fully digital 5.4-billion-transistor implementation leverages existing CMOS scaling trends, while ensuring one-to-one correspondence between hardware and software. With such aggressive design metrics and the TrueNorth architecture breaking path with prevailing architectures, it is clear that conventional computer-aided design (CAD) tools could not be used for the design. As a result, we developed a novel design methodology that includes mixed asynchronous-synchronous circuits and a complete tool flow for building an event-driven, low-power neurosynaptic chip. The TrueNorth chip is fully configurable in terms of connectivity and neural parameters to allow custom configurations for a wide range of cognitive and sensory perception applications. To reduce the system's communication energy, we have adapted existing application-agnostic very-large-scale integration (VLSI) CAD placement tools for mapping logical neural networks to the physical neurosynaptic core locations on the TrueNorth chips. With that, we have successfully demonstrated the use of TrueNorth-based systems in multiple applications, including visual object recognition, with higher performance and orders of magnitude lower power consumption than the same algorithms run on von Neumann architectures. The TrueNorth chip and its tool flow serve as building blocks for future cognitive systems, and give designers an opportunity to develop novel brain-inspired architectures and systems based on the knowledge obtained from this paper.
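
The abstract's point about adapting CAD placement to reduce communication energy can be made concrete with a toy version of the problem: given spike traffic between logical cores, pick physical grid slots that shorten the high-traffic paths. The brute-force sketch below is hypothetical (traffic numbers and grid size are invented); the real flow adapts production VLSI placers.

```python
# Toy placement: assign 4 logical cores to a 2x2 physical grid so that
# heavily communicating pairs end up close together (Manhattan distance).
# Brute force over all assignments; traffic numbers are invented.
import itertools

traffic = {(0, 1): 900, (0, 2): 50, (1, 2): 700, (2, 3): 400, (0, 3): 10}
slots = [(x, y) for x in range(2) for y in range(2)]

def cost(assign):
    """Traffic-weighted total wirelength of an assignment."""
    return sum(rate * (abs(assign[a][0] - assign[b][0]) +
                       abs(assign[a][1] - assign[b][1]))
               for (a, b), rate in traffic.items())

best = min((dict(enumerate(perm)) for perm in itertools.permutations(slots)),
           key=cost)
print("placement:", best, "| weighted wirelength:", cost(best))
```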


International Joint Conference on Neural Networks | 2012

Building block of a programmable neuromorphic substrate: A digital neurosynaptic core

John V. Arthur; Paul A. Merolla; Filipp Akopyan; Rodrigo Alvarez; Andrew S. Cassidy; Shyamal Chandra; Steven K. Esser; Nabil Imam; William P. Risk; Daniel Ben Dayan Rubin; Rajit Manohar; Dharmendra S. Modha

The grand challenge of neuromorphic computation is to develop a flexible brain-inspired architecture capable of a wide array of real-time applications, while striving towards the ultra-low power consumption and compact size of biological neural systems. Toward this end, we fabricated a building block of a modular neuromorphic architecture, a neurosynaptic core. Our implementation consists of 256 integrate-and-fire neurons and a 1,024×256 SRAM crossbar memory for synapses that fits in 4.2 mm² using a 45 nm SOI process and consumes just 45 pJ per spike. The core is fully configurable in terms of neuron parameters, axon types, and synapse states, and its fully digital implementation achieves one-to-one correspondence with software simulation models. One-to-one correspondence allows us to introduce an abstract neural programming model for our chip, a contract guaranteeing that any application developed in software functions identically in hardware. This contract allows us to rapidly test and map applications from control, machine vision, and classification. To demonstrate, we present four test cases: (i) a robot driving in a virtual environment, (ii) the classic game of pong, (iii) visual digit recognition, and (iv) an autoassociative memory.
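
The core's basic cycle (spikes arrive on axons, a binary crossbar selects which neurons they reach, and integrate-and-fire neurons update and fire) can be sketched in a few lines of NumPy. Sizes, weights, and leak below are toy values; the actual core pairs 1,024 axons with 256 neurons.

```python
# Sketch of one neurosynaptic-core tick: active axons drive neurons through
# a binary crossbar; neurons integrate, leak, and fire at threshold.
# Toy sizes and parameters; the real core is a 1,024x256 crossbar.
import numpy as np

rng = np.random.default_rng(1)
N_AXONS, N_NEURONS, THRESH, LEAK = 16, 8, 3.0, 0.5

crossbar = rng.random((N_AXONS, N_NEURONS)) < 0.2  # binary synapse matrix
v = np.zeros(N_NEURONS)                            # membrane potentials

def tick(active_axons):
    """One discrete time step; returns indices of neurons that spiked."""
    global v
    v += crossbar[active_axons].sum(axis=0)  # integrate crossbar input
    v = np.maximum(v - LEAK, 0.0)            # linear leak toward zero
    fired = np.flatnonzero(v >= THRESH)
    v[fired] = 0.0                           # reset fired neurons
    return fired

for t in range(5):
    active = rng.choice(N_AXONS, size=4, replace=False)
    print(f"t={t} fired:", tick(active))
```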


International Joint Conference on Neural Networks | 2013

Cognitive computing building block: A versatile and efficient digital neuron model for neurosynaptic cores

Andrew S. Cassidy; Paul A. Merolla; John V. Arthur; Steven K. Esser; Bryan L. Jackson; Rodrigo Alvarez-Icaza; Pallab Datta; Jun Sawada; Theodore M. Wong; Vitaly Feldman; Arnon Amir; Daniel Ben Dayan Rubin; Filipp Akopyan; Emmett McQuinn; William P. Risk; Dharmendra S. Modha

Marching along the DARPA SyNAPSE roadmap, IBM unveils a trilogy of innovations towards the TrueNorth cognitive computing system, inspired by the brain's function and efficiency. Judiciously balancing the dual objectives of functional capability and implementation/operational cost, we develop a simple, digital, reconfigurable, versatile spiking neuron model that supports one-to-one equivalence between hardware and simulation and is implementable using only 1272 ASIC gates. Starting with the classic leaky integrate-and-fire neuron, we add: (a) configurable and reproducible stochasticity to the input, the state, and the output; (b) four leak modes that bias the internal state dynamics; (c) deterministic and stochastic thresholds; and (d) six reset modes for rich finite-state behavior. The model supports a wide variety of computational functions and neural codes. We capture 50+ neuron behaviors in a library for hierarchical composition of complex computations and behaviors. Although designed with cognitive algorithms and applications in mind, serendipitously, the neuron model can qualitatively replicate the 20 biologically relevant behaviors of a dynamical neuron model.
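
A rough flavor of such a configurable neuron, with selectable leak and reset modes and an optional stochastic threshold, is sketched below. Mode names, parameter values, and the jitter scheme are invented for illustration and do not reproduce the paper's 1272-gate specification.

```python
# Sketch of a configurable digital integrate-and-fire neuron with selectable
# leak and reset modes and an optional stochastic threshold. Mode names and
# values are illustrative, not the paper's specification.
import random

class ConfigurableNeuron:
    def __init__(self, leak=1, leak_mode="toward_zero", threshold=10,
                 jitter=0, reset_mode="zero", reset_value=0):
        self.v = 0
        self.leak, self.leak_mode = leak, leak_mode
        self.threshold, self.jitter = threshold, jitter
        self.reset_mode, self.reset_value = reset_mode, reset_value

    def step(self, synaptic_input):
        self.v += synaptic_input
        if self.leak_mode == "toward_zero":    # decay without overshooting
            if self.v > 0:
                self.v = max(self.v - self.leak, 0)
            elif self.v < 0:
                self.v = min(self.v + self.leak, 0)
        th = self.threshold + (random.randint(0, self.jitter) if self.jitter else 0)
        if self.v >= th:
            if self.reset_mode == "zero":
                self.v = 0
            elif self.reset_mode == "linear":  # subtract threshold, keep residue
                self.v -= th
            elif self.reset_mode == "constant":
                self.v = self.reset_value
            return True
        return False

n = ConfigurableNeuron(threshold=10, jitter=2, reset_mode="linear")
print("spikes:", sum(n.step(random.randint(0, 4)) for _ in range(100)))
```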


International Joint Conference on Neural Networks | 2013

Cognitive computing systems: Algorithms and applications for networks of neurosynaptic cores

Steven K. Esser; Alexander Andreopoulos; Rathinakumar Appuswamy; Pallab Datta; Davis R. Barch; Arnon Amir; John V. Arthur; Andrew S. Cassidy; Myron Flickner; Paul Merolla; Shyamal Chandra; Nicola Basilico; Stefano Carpin; Tom Zimmerman; Frank Zee; Rodrigo Alvarez-Icaza; Jeffrey A. Kusnitz; Theodore M. Wong; William P. Risk; Emmett McQuinn; Tapan Kumar Nayak; Raghavendra Singh; Dharmendra S. Modha

Marching along the DARPA SyNAPSE roadmap, IBM unveils a trilogy of innovations towards the TrueNorth cognitive computing system, inspired by the brain's function and efficiency. The non-von Neumann nature of the TrueNorth architecture necessitates a novel approach to efficient system design. To this end, we have developed a set of abstractions, algorithms, and applications that are natively efficient for TrueNorth. First, we developed repeatedly-used abstractions that span neural codes (such as binary, rate, population, and time-to-spike), long-range connectivity, and short-range connectivity. Second, we implemented ten algorithms, including convolution networks, spectral content estimators, liquid state machines, restricted Boltzmann machines, hidden Markov models, looming detection, temporal pattern matching, and various classifiers. Third, we demonstrate seven applications, including speaker recognition, music composer recognition, digit recognition, sequence prediction, collision avoidance, optical flow, and eye detection. Our results showcase the parallelism, versatility, rich connectivity, spatio-temporality, and multi-modality of the TrueNorth architecture, as well as the compositionality of the corelet programming paradigm and the flexibility of the underlying neuron model.
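
Two of the neural codes named in the abstract, rate coding and time-to-spike coding, are easy to illustrate. In the hedged sketch below, the encoding conventions (window length, latency mapping) are arbitrary choices, not TrueNorth's.

```python
# Sketch of two neural codes: rate (value ~ spike count per window) and
# time-to-spike (value ~ latency of a single spike). Encoding conventions
# here are arbitrary illustrative choices.
import numpy as np

rng = np.random.default_rng(2)
T = 100  # time steps per coding window

def rate_encode(value):            # value in [0, 1] -> stochastic spike train
    return (rng.random(T) < value).astype(int)

def rate_decode(train):
    return train.sum() / len(train)

def latency_encode(value):         # larger value -> earlier spike
    train = np.zeros(T, dtype=int)
    train[int(round((1.0 - value) * (T - 1)))] = 1
    return train

def latency_decode(train):
    return 1.0 - np.argmax(train) / (T - 1)

v = 0.73
print("rate decode:   ", rate_decode(rate_encode(v)))
print("latency decode:", round(latency_decode(latency_encode(v)), 3))
```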


International Joint Conference on Neural Networks | 2013

Cognitive computing programming paradigm: A Corelet Language for composing networks of neurosynaptic cores

Arnon Amir; Pallab Datta; William P. Risk; Andrew S. Cassidy; Jeffrey A. Kusnitz; Steven K. Esser; Alexander Andreopoulos; Theodore M. Wong; Myron Flickner; Rodrigo Alvarez-Icaza; Emmett McQuinn; Benjamin Shaw; Norm Pass; Dharmendra S. Modha

Marching along the DARPA SyNAPSE roadmap, IBM unveils a trilogy of innovations towards the TrueNorth cognitive computing system, inspired by the brain's function and efficiency. The sequential programming paradigm of the von Neumann architecture is wholly unsuited for TrueNorth. Therefore, as our main contribution, we develop a new programming paradigm that permits construction of complex cognitive algorithms and applications while being efficient for TrueNorth and effective for programmer productivity. The programming paradigm consists of (a) an abstraction for a TrueNorth program, named Corelet, for representing a network of neurosynaptic cores that encapsulates all details except external inputs and outputs; (b) an object-oriented Corelet Language for creating, composing, and decomposing corelets; (c) a Corelet Library that acts as an ever-growing repository of reusable corelets from which programmers compose new corelets; and (d) an end-to-end Corelet Laboratory, a programming environment that integrates with the TrueNorth architectural simulator, Compass, to support all aspects of the programming cycle from design through development, debugging, and deployment. The new paradigm seamlessly scales from a handful of synapses and neurons to networks of neurosynaptic cores of progressively increasing size and complexity. The utility of the new programming paradigm is underscored by the fact that we have designed and implemented more than 100 algorithms as corelets for TrueNorth in a very short time span.
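
A loose feel for the corelet idea, a unit that hides its internal network and exposes only input and output connectors, plus composition that wires corelets into larger corelets, can be given in a short sketch. The actual Corelet Language is its own object-oriented environment; the Python class below is a hypothetical analogy with invented names and sizes.

```python
# Loose sketch of the corelet idea: a unit that encapsulates a network of
# neurosynaptic cores and exposes only input/output connectors, and a
# compose() that wires corelets into a bigger corelet. Hypothetical analogy;
# the actual Corelet Language is a separate environment.
class Corelet:
    def __init__(self, name, n_inputs, n_outputs):
        self.name, self.n_inputs, self.n_outputs = name, n_inputs, n_outputs
        self.parts, self.wiring = [], []   # internals stay encapsulated

    @staticmethod
    def compose(name, first, second):
        """Feed first's outputs into second's inputs, yielding a new corelet."""
        assert first.n_outputs == second.n_inputs, "connector widths must match"
        composite = Corelet(name, first.n_inputs, second.n_outputs)
        composite.parts = [first, second]
        composite.wiring = [(first.name, second.name)]
        return composite

edge_detect = Corelet("edge_detect", n_inputs=1024, n_outputs=256)
classifier = Corelet("classifier", n_inputs=256, n_outputs=10)
pipeline = Corelet.compose("vision_pipeline", edge_detect, classifier)
print(pipeline.name, pipeline.n_inputs, "->", pipeline.n_outputs)
```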


Neural Networks | 2013

2013 Special Issue: Design of silicon brains in the nano-CMOS era: Spiking neurons, learning synapses and neural architecture optimization

Andrew S. Cassidy; Julius Georgiou; Andreas G. Andreou

We present a design framework for neuromorphic architectures in the nano-CMOS era. Our approach to the design of spiking neurons and STDP learning circuits relies on parallel computational structures where neurons are abstracted as digital arithmetic logic units and communication processors. Using this approach, we have developed arrays of silicon neurons that scale to millions of neurons in a single state-of-the-art Field Programmable Gate Array (FPGA). We demonstrate the validity of the design methodology through the implementation of cortical development in a circuit of spiking neurons, STDP synapses, and neural architecture optimization.
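
The STDP learning rule mentioned here has a standard pair-based textbook form: potentiate a synapse when the presynaptic spike precedes the postsynaptic spike, depress it otherwise, with exponentially decaying magnitude. A minimal sketch, with invented constants, follows.

```python
# Sketch of pair-based STDP: strengthen a synapse when the presynaptic spike
# precedes the postsynaptic spike, weaken it otherwise, with exponentially
# decaying magnitude. Standard textbook form; constants are invented.
import math

A_PLUS, A_MINUS = 0.01, 0.012   # potentiation/depression learning rates
TAU = 20.0                      # decay time constant (time steps)

def stdp_dw(t_pre, t_post):
    """Weight change for one pre/post spike pair."""
    dt = t_post - t_pre
    if dt > 0:     # pre before post: potentiate
        return A_PLUS * math.exp(-dt / TAU)
    elif dt < 0:   # post before pre: depress
        return -A_MINUS * math.exp(dt / TAU)
    return 0.0

w = 0.5
for t_pre, t_post in [(10, 15), (40, 38), (60, 61)]:
    w += stdp_dw(t_pre, t_post)
    print(f"pair ({t_pre},{t_post}): dw={stdp_dw(t_pre, t_post):+.4f}, w={w:.4f}")
```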


Biomedical Circuits and Systems Conference | 2008

Dynamical digital silicon neurons

Andrew S. Cassidy; Andreas G. Andreou

We present an array of dynamical digital silicon neurons implementing the Izhikevich neuron model. The FPGA-based array consists of 32 physical neurons, each time-multiplexing the state of 8 virtual neurons, for a total of 256 independent neurons. The neural array operates 5,000 times faster than real time, performing over 20.48 GOPS (giga operations per second). It is intended for neural simulation acceleration, neural prostheses, and neuromorphic systems.
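
The Izhikevich model the array implements is a two-variable update, and the 32-physical/8-virtual scheme amounts to one update loop serving several neuron states. The sketch below uses Izhikevich's published regular-spiking parameters and half-step integration, in floating-point Python for clarity rather than the FPGA's fixed-point pipeline; the per-neuron input currents are invented.

```python
# Sketch of the Izhikevich neuron update (regular-spiking parameters and the
# two half-step integration from Izhikevich 2003), with one "physical" update
# loop time-multiplexing several virtual neuron states, as the abstract
# describes. Not the FPGA implementation; input currents are invented.
A, B, C, D = 0.02, 0.2, -65.0, 8.0        # regular-spiking parameters
N_VIRTUAL = 8

state = [(-65.0, B * -65.0)] * N_VIRTUAL  # per-virtual-neuron (v, u)

def izhikevich_step(v, u, i_in):
    """One 1 ms step, integrated as two 0.5 ms half-steps for stability."""
    for _ in range(2):
        v += 0.5 * (0.04 * v * v + 5.0 * v + 140.0 - u + i_in)
    u += A * (B * v - u)
    if v >= 30.0:                         # spike: reset v and bump u
        return C, u + D, True
    return v, u, False

for t in range(100):
    for k in range(N_VIRTUAL):            # one physical unit serves 8 virtual neurons
        v, u, spiked = izhikevich_step(*state[k], i_in=8.0 + k)
        state[k] = (v, u)
        if spiked:
            print(f"t={t} ms: virtual neuron {k} spiked")
```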


Biomedical Circuits and Systems Conference | 2007

FPGA Based Silicon Spiking Neural Array

Andrew S. Cassidy; Susan L. Denham; Patrick O. Kanold; Andreas G. Andreou

Rapid design time, low cost, flexibility, digital precision, and stability are characteristics that favor FPGAs as a promising alternative to analog VLSI based approaches for designing neuromorphic systems. High computational power as well as low size, weight, and power (SWaP) are advantages that FPGAs demonstrate over software-based neuromorphic systems. We present an FPGA-based array of Leaky Integrate-and-Fire (LIF) artificial neurons. Using this array, we demonstrate three neural computational experiments: auditory Spatio-Temporal Receptive Fields (STRFs), a neural parameter optimization algorithm, and an implementation of the Spike-Timing-Dependent Plasticity (STDP) learning rule.
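
LIF neurons map well onto FPGAs because the update reduces to integer adds, shifts, and compares. Below is a hedged sketch of a fixed-point LIF step with a shift-based leak; the word widths and constants are invented, not taken from the paper.

```python
# Sketch of a fixed-point LIF neuron update of the kind that maps naturally
# onto FPGA logic: integer state, shift-based leak (v -= v >> k), threshold
# compare, reset. Word widths and constants are invented for illustration.
LEAK_SHIFT = 4        # leak = v / 16 per step: a single right-shift in hardware
THRESHOLD = 1 << 12   # fire when the accumulator crosses 4096
V_RESET = 0

def lif_step(v, weighted_input):
    """One clock tick for one neuron; every operation is an integer op."""
    v += weighted_input          # accumulate synaptic input (adder)
    v -= v >> LEAK_SHIFT         # leak via arithmetic shift (no multiplier)
    if v >= THRESHOLD:
        return V_RESET, True     # spike and reset
    return v, False

v, spikes = 0, 0
for t in range(200):
    v, fired = lif_step(v, weighted_input=300)
    spikes += fired
print("spikes in 200 ticks:", spikes)
```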
