
Publication


Featured research published by Steven K. Esser.


Science | 2014

A million spiking-neuron integrated circuit with a scalable communication network and interface

Paul A. Merolla; John V. Arthur; Rodrigo Alvarez-Icaza; Andrew S. Cassidy; Jun Sawada; Filipp Akopyan; Bryan L. Jackson; Nabil Imam; Chen Guo; Yutaka Nakamura; Bernard Brezzo; Ivan Vo; Steven K. Esser; Rathinakumar Appuswamy; Brian Taba; Arnon Amir; Myron Flickner; William P. Risk; Rajit Manohar; Dharmendra S. Modha

Modeling computer chips on real brains: Computers are nowhere near as versatile as our own brains. Merolla et al. applied our present knowledge of the structure and function of the brain to design a new computer chip that uses the same wiring rules and architecture. The flexible, scalable chip operated efficiently in real time while using very little power.

Inspired by the brain's structure, we have developed an efficient, scalable, and flexible non-von Neumann architecture that leverages contemporary silicon technology. To demonstrate, we built a 5.4-billion-transistor chip with 4096 neurosynaptic cores interconnected via an intrachip network that integrates 1 million programmable spiking neurons and 256 million configurable synapses. Chips can be tiled in two dimensions via an interchip communication interface, seamlessly scaling the architecture to a cortexlike sheet of arbitrary size. The architecture is well suited to many applications that use complex neural networks in real time, for example, multiobject detection and classification. With 400-pixel-by-240-pixel video input at 30 frames per second, the chip consumes 63 milliwatts.
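The closing throughput and power figures imply a per-frame energy budget; a quick check of the arithmetic (a derived figure for illustration, not a number reported in the paper):

```python
# Energy per processed video frame implied by the reported figures:
# 63 mW sustained while classifying 30 frames per second.
power_mw = 63.0
frames_per_second = 30.0
energy_per_frame_mj = power_mw / frames_per_second  # mW / (frame/s) = mJ/frame
print(f"{energy_per_frame_mj:.1f} mJ per frame")
```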


IEEE International Conference on High Performance Computing, Data, and Analytics | 2009

The cat is out of the bag: cortical simulations with 10^9 neurons, 10^13 synapses

Rajagopal Ananthanarayanan; Steven K. Esser; Horst D. Simon; Dharmendra S. Modha

In the quest for cognitive computing, we have built a massively parallel cortical simulator, C2, that incorporates a number of innovations in computation, memory, and communication. Using C2 on LLNL's Dawn Blue Gene/P supercomputer with 147,456 CPUs and 144 TB of main memory, we report two cortical simulations -- at unprecedented scale -- that effectively saturate the entire memory capacity and refresh it at least every simulated second. The first simulation consists of 1.6 billion neurons and 8.87 trillion synapses with experimentally-measured gray matter thalamocortical connectivity. The second simulation has 900 million neurons and 9 trillion synapses with probabilistic connectivity. We demonstrate nearly perfect weak scaling and attractive strong scaling. The simulations, which incorporate phenomenological spiking neurons, individual learning synapses, axonal delays, and dynamic synaptic channels, exceed the scale of the cat cortex, marking the dawn of a new era in the scale of cortical simulations.
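A back-of-envelope check makes the "saturate the entire memory capacity" claim concrete: if the synapses of the second simulation dominate memory use (an assumption for illustration, not a figure from the paper), each synapse gets on the order of tens of bytes of state.

```python
# Rough memory budget per synapse, assuming synapses dominate.
synapses = 9.0e12            # second simulation: ~9 trillion synapses
memory_bytes = 144 * 2**40   # 144 TB of Blue Gene/P main memory (binary TB)
bytes_per_synapse = memory_bytes / synapses
print(f"~{bytes_per_synapse:.0f} bytes per synapse")
```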


Communications of the ACM | 2011

Cognitive computing

Dharmendra S. Modha; Rajagopal Ananthanarayanan; Steven K. Esser; Anthony Ndirango; Anthony J. Sherbondy; Raghavendra Singh

Unite neuroscience, supercomputing, and nanotechnology to discover, demonstrate, and deliver the brain's core algorithms.


Custom Integrated Circuits Conference | 2011

A 45nm CMOS neuromorphic chip with a scalable architecture for learning in networks of spiking neurons

Jae-sun Seo; Bernard Brezzo; Yong Liu; Benjamin D. Parker; Steven K. Esser; Robert K. Montoye; Bipin Rajendran; Jose A. Tierno; Leland Chang; Dharmendra S. Modha; Daniel J. Friedman

Efforts to achieve the long-standing dream of realizing scalable learning algorithms for networks of spiking neurons in silicon have been hampered by (a) the limited scalability of analog neuron circuits; (b) the enormous area overhead of learning circuits, which grows with the number of synapses; and (c) the need to implement all inter-neuron communication via off-chip address-events. In this work, a new architecture is proposed to overcome these challenges by combining innovations in computation, memory, and communication, respectively, to leverage (a) robust digital neuron circuits; (b) novel transposable SRAM arrays that share learning circuits, which grow only with the number of neurons; and (c) crossbar fan-out for efficient on-chip inter-neuron communication. Through tight integration of memory (synapses) and computation (neurons), a highly configurable chip comprising 256 neurons and 64K binary synapses with on-chip learning based on spike-timing dependent plasticity is demonstrated in 45nm SOI-CMOS. Near-threshold, event-driven operation at 0.53V is demonstrated to maximize power efficiency for real-time pattern classification, recognition, and associative memory tasks. Future scalable systems built from the foundation provided by this work will open up possibilities for ubiquitous ultra-dense, ultra-low power brain-like cognitive computers.
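The on-chip learning described above is based on spike-timing dependent plasticity. A minimal sketch of the canonical pair-based STDP rule follows; the constants are illustrative, and the chip itself uses binary synapses updated by dedicated circuits rather than a continuous weight like this one.

```python
import math

def stdp_update(weight, dt, a_plus=0.1, a_minus=0.12, tau=20.0):
    """Pair-based STDP with illustrative constants: potentiate when the
    presynaptic spike precedes the postsynaptic spike (dt > 0),
    depress otherwise. The weight is clipped to [0, 1]."""
    if dt > 0:
        weight += a_plus * math.exp(-dt / tau)
    else:
        weight -= a_minus * math.exp(dt / tau)
    return min(max(weight, 0.0), 1.0)
```

For example, a pre-then-post pairing (`dt = 5.0` ms) strengthens the synapse, while a post-then-pre pairing (`dt = -5.0` ms) weakens it.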


Proceedings of the National Academy of Sciences of the United States of America | 2016

Convolutional networks for fast, energy-efficient neuromorphic computing

Steven K. Esser; Paul A. Merolla; John V. Arthur; Andrew S. Cassidy; Rathinakumar Appuswamy; Alexander Andreopoulos; David J. Berg; Jeffrey L. McKinstry; Timothy Melano; Davis R. Barch; Carmelo di Nolfo; Pallab Datta; Arnon Amir; Brian Taba; Myron Flickner; Dharmendra S. Modha

Significance: Brain-inspired computing seeks to develop new technologies that solve real-world problems while remaining grounded in the physical requirements of energy, speed, and size. Meeting these challenges requires high-performing algorithms that are capable of running on efficient hardware. Here, we adapt deep convolutional neural networks, which are today’s state-of-the-art approach for machine perception in many domains, to perform classification tasks on neuromorphic hardware, which is today’s most efficient platform for running neural networks. Using our approach, we demonstrate near state-of-the-art accuracy on eight datasets, while running at between 1,200 and 2,600 frames/s and using between 25 and 275 mW.

Deep networks are now able to achieve human-level performance on a broad spectrum of recognition tasks. Independently, neuromorphic computing has now demonstrated unprecedented energy-efficiency through a new chip architecture based on spiking neurons, low precision synapses, and a scalable communication network. Here, we demonstrate that neuromorphic computing, despite its novel architectural primitives, can implement deep convolution networks that (i) approach state-of-the-art classification accuracy across eight standard datasets encompassing vision and speech, (ii) perform inference while preserving the hardware’s underlying energy-efficiency and high throughput, running on the aforementioned datasets at between 1,200 and 2,600 frames/s and using between 25 and 275 mW (effectively >6,000 frames/s per Watt), and (iii) can be specified and trained using backpropagation with the same ease-of-use as contemporary deep learning. This approach allows the algorithmic power of deep learning to be merged with the efficiency of neuromorphic processors, bringing the promise of embedded, intelligent, brain-inspired computing one step closer.
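Training with backpropagation under hardware constraints such as low-precision synapses is commonly handled with a straight-through estimator: the forward pass uses quantized weights while gradient updates flow to a full-precision shadow copy. The sketch below illustrates that general idea with trinary weights; it is not the paper's exact training procedure, and all names and constants here are illustrative.

```python
import numpy as np

def quantize_trinary(w, delta=0.5):
    """Project real-valued shadow weights onto {-1, 0, +1}."""
    return np.sign(w) * (np.abs(w) > delta)

# Straight-through estimator on a toy linear layer: the forward pass
# uses quantized weights, while the gradient step updates the
# full-precision shadow copy.
rng = np.random.default_rng(0)
w_shadow = rng.normal(scale=0.5, size=(4, 3))
x = rng.normal(size=(8, 4))
target = rng.normal(size=(8, 3))

for _ in range(100):
    w_q = quantize_trinary(w_shadow)     # hardware-compatible weights
    y = x @ w_q                          # forward pass with quantized weights
    grad = x.T @ (y - target) / len(x)   # dL/dW for 0.5 * MSE loss
    w_shadow -= 0.1 * grad               # update the shadow, not w_q
```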


International Symposium on Neural Networks | 2012

Building block of a programmable neuromorphic substrate: A digital neurosynaptic core

John V. Arthur; Paul A. Merolla; Filipp Akopyan; Rodrigo Alvarez; Andrew S. Cassidy; Shyamal Chandra; Steven K. Esser; Nabil Imam; William P. Risk; Daniel Ben Dayan Rubin; Rajit Manohar; Dharmendra S. Modha

The grand challenge of neuromorphic computation is to develop a flexible brain-inspired architecture capable of a wide array of real-time applications, while striving towards the ultra-low power consumption and compact size of biological neural systems. Toward this end, we fabricated a building block of a modular neuromorphic architecture, a neurosynaptic core. Our implementation consists of 256 integrate-and-fire neurons and a 1,024×256 SRAM crossbar memory for synapses that fits in 4.2 mm² using a 45 nm SOI process and consumes just 45 pJ per spike. The core is fully configurable in terms of neuron parameters, axon types, and synapse states, and its fully digital implementation achieves one-to-one correspondence with software simulation models. One-to-one correspondence allows us to introduce an abstract neural programming model for our chip, a contract guaranteeing that any application developed in software functions identically in hardware. This contract allows us to rapidly test and map applications from control, machine vision, and classification. To demonstrate, we present four test cases: (i) a robot driving in a virtual environment, (ii) the classic game of pong, (iii) visual digit recognition, and (iv) an autoassociative memory.
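The core's basic operation, axons driving integrate-and-fire neurons through a binary synapse crossbar, can be sketched in a few lines. This is a toy model at the dimensions quoted above; the threshold, reset behavior, and random crossbar are illustrative, and the real core is far more configurable.

```python
import numpy as np

def core_tick(potentials, crossbar, input_spikes, threshold=64):
    """One tick of a toy neurosynaptic core: each neuron integrates the
    binary synapses of every active axon; neurons that cross the
    threshold spike and reset to zero."""
    potentials = potentials + crossbar.T @ input_spikes
    fired = potentials >= threshold
    potentials[fired] = 0  # reset after spiking
    return potentials, fired

rng = np.random.default_rng(0)
crossbar = rng.integers(0, 2, size=(1024, 256))  # axons x neurons, binary
v = np.zeros(256)                                # membrane potentials
spikes = rng.integers(0, 2, size=1024)           # incoming axon events
v, fired = core_tick(v, crossbar, spikes)
```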


International Symposium on Neural Networks | 2013

Cognitive computing building block: A versatile and efficient digital neuron model for neurosynaptic cores

Andrew S. Cassidy; Paul A. Merolla; John V. Arthur; Steven K. Esser; Bryan L. Jackson; Rodrigo Alvarez-Icaza; Pallab Datta; Jun Sawada; Theodore M. Wong; Vitaly Feldman; Arnon Amir; Daniel Ben Dayan Rubin; Filipp Akopyan; Emmett McQuinn; William P. Risk; Dharmendra S. Modha

Marching along the DARPA SyNAPSE roadmap, IBM unveils a trilogy of innovations towards the TrueNorth cognitive computing system inspired by the brain's function and efficiency. Judiciously balancing the dual objectives of functional capability and implementation/operational cost, we develop a simple, digital, reconfigurable, versatile spiking neuron model that supports one-to-one equivalence between hardware and simulation and is implementable using only 1272 ASIC gates. Starting with the classic leaky integrate-and-fire neuron, we add: (a) configurable and reproducible stochasticity to the input, the state, and the output; (b) four leak modes that bias the internal state dynamics; (c) deterministic and stochastic thresholds; and (d) six reset modes for rich finite-state behavior. The model supports a wide variety of computational functions and neural codes. We capture 50+ neuron behaviors in a library for hierarchical composition of complex computations and behaviors. Although designed with cognitive algorithms and applications in mind, serendipitously, the neuron model can qualitatively replicate the 20 biologically-relevant behaviors of a dynamical neuron model.
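To make the starting point concrete, here is a toy leaky integrate-and-fire neuron with a configurable leak and two of the reset behaviors the abstract mentions. The class name, parameters, and defaults are illustrative only; the actual TrueNorth neuron model is considerably richer (stochasticity, four leak modes, six reset modes).

```python
class DigitalLIF:
    """Toy digital leaky integrate-and-fire neuron (illustrative)."""
    def __init__(self, leak=-1, threshold=100, reset_mode="zero"):
        self.v = 0                    # membrane potential
        self.leak = leak              # added every tick
        self.threshold = threshold
        self.reset_mode = reset_mode  # "zero" or "subtract"

    def step(self, synaptic_input):
        """Integrate one tick; return True if the neuron spikes."""
        self.v += synaptic_input + self.leak
        if self.v >= self.threshold:
            if self.reset_mode == "zero":
                self.v = 0
            else:                     # "subtract" keeps the overshoot
                self.v -= self.threshold
            return True
        return False
```

With a constant input of 10 and the default leak of -1, the neuron gains 9 per tick and spikes every 12 ticks.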


International Symposium on Neural Networks | 2013

Cognitive computing systems: Algorithms and applications for networks of neurosynaptic cores

Steven K. Esser; Alexander Andreopoulos; Rathinakumar Appuswamy; Pallab Datta; Davis R. Barch; Arnon Amir; John V. Arthur; Andrew S. Cassidy; Myron Flickner; Paul Merolla; Shyamal Chandra; Nicola Basilico; Stefano Carpin; Tom Zimmerman; Frank Zee; Rodrigo Alvarez-Icaza; Jeffrey A. Kusnitz; Theodore M. Wong; William P. Risk; Emmett McQuinn; Tapan Kumar Nayak; Raghavendra Singh; Dharmendra S. Modha

Marching along the DARPA SyNAPSE roadmap, IBM unveils a trilogy of innovations towards the TrueNorth cognitive computing system inspired by the brain's function and efficiency. The non-von Neumann nature of the TrueNorth architecture necessitates a novel approach to efficient system design. To this end, we have developed a set of abstractions, algorithms, and applications that are natively efficient for TrueNorth. First, we developed repeatedly-used abstractions that span neural codes (such as binary, rate, population, and time-to-spike), long-range connectivity, and short-range connectivity. Second, we implemented ten algorithms that include convolution networks, spectral content estimators, liquid state machines, restricted Boltzmann machines, hidden Markov models, looming detection, temporal pattern matching, and various classifiers. Third, we demonstrate seven applications that include speaker recognition, music composer recognition, digit recognition, sequence prediction, collision avoidance, optical flow, and eye detection. Our results showcase the parallelism, versatility, rich connectivity, spatio-temporality, and multi-modality of the TrueNorth architecture as well as compositionality of the corelet programming paradigm and the flexibility of the underlying neuron model.
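Two of the neural codes named above, rate and time-to-spike, can be sketched as simple encoders of a scalar in [0, 1] into a spike train. Function names and the window length are illustrative, not from the paper.

```python
import random

def rate_code(value, window=100, seed=0):
    """Rate code: a Bernoulli spike train whose spike count over the
    window approximates value * window."""
    rng = random.Random(seed)
    return [rng.random() < value for _ in range(window)]

def time_to_spike(value, window=100):
    """Time-to-spike code: a single spike whose latency is earlier for
    larger values."""
    t = int(round((1.0 - value) * (window - 1)))
    return [i == t for i in range(window)]
```

For example, `rate_code(0.3)` yields roughly 30 spikes in a 100-tick window, while `time_to_spike(1.0)` places its single spike at tick 0.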


International Symposium on Neural Networks | 2013

Cognitive computing programming paradigm: A Corelet Language for composing networks of neurosynaptic cores

Arnon Amir; Pallab Datta; William P. Risk; Andrew S. Cassidy; Jeffrey A. Kusnitz; Steven K. Esser; Alexander Andreopoulos; Theodore M. Wong; Myron Flickner; Rodrigo Alvarez-Icaza; Emmett McQuinn; Benjamin Shaw; Norm Pass; Dharmendra S. Modha

Marching along the DARPA SyNAPSE roadmap, IBM unveils a trilogy of innovations towards the TrueNorth cognitive computing system inspired by the brain's function and efficiency. The sequential programming paradigm of the von Neumann architecture is wholly unsuited for TrueNorth. Therefore, as our main contribution, we develop a new programming paradigm that permits construction of complex cognitive algorithms and applications while being efficient for TrueNorth and effective for programmer productivity. The programming paradigm consists of (a) an abstraction for a TrueNorth program, named Corelet, for representing a network of neurosynaptic cores that encapsulates all details except external inputs and outputs; (b) an object-oriented Corelet Language for creating, composing, and decomposing corelets; (c) a Corelet Library that acts as an ever-growing repository of reusable corelets from which programmers compose new corelets; and (d) an end-to-end Corelet Laboratory, a programming environment that integrates with the TrueNorth architectural simulator, Compass, to support all aspects of the programming cycle from design through development, debugging, and deployment. The new paradigm seamlessly scales from a handful of synapses and neurons to networks of neurosynaptic cores of progressively increasing size and complexity. The utility of the new programming paradigm is underscored by the fact that we have designed and implemented more than 100 algorithms as corelets for TrueNorth in a very short time span.
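The central idea, network fragments that hide their internal cores and expose only input/output connectors, resembles ordinary object composition. The sketch below is a hypothetical Python analogue of that encapsulation-and-composition pattern; it is not the actual Corelet Language, and every name here is invented for illustration.

```python
class Corelet:
    """Hypothetical analogue of a corelet: a network fragment exposing
    only its input and output connector widths."""
    def __init__(self, name, n_inputs, n_outputs):
        self.name = name
        self.n_inputs = n_inputs
        self.n_outputs = n_outputs

    def compose(self, other):
        """Chain two corelets: this corelet's outputs feed the other's
        inputs, producing a new corelet that hides both internals."""
        if self.n_outputs != other.n_inputs:
            raise ValueError("connector widths do not match")
        return Corelet(f"{self.name}->{other.name}",
                       self.n_inputs, other.n_outputs)

edge = Corelet("edge_detect", 256, 64)
classify = Corelet("classifier", 64, 10)
pipeline = edge.compose(classify)
```

Composition fails loudly when connector widths disagree, which mirrors the paper's point that a corelet encapsulates everything except its external inputs and outputs.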


IEEE International Conference on High Performance Computing, Data, and Analytics | 2012

Compass: a scalable simulator for an architecture for cognitive computing

Robert Preissl; Theodore M. Wong; Pallab Datta; Myron Flickner; Raghavendra Singh; Steven K. Esser; William P. Risk; Horst D. Simon; Dharmendra S. Modha

Inspired by the function, power, and volume of the organic brain, we are developing TrueNorth, a novel modular, non-von Neumann, ultra-low power, compact architecture. TrueNorth consists of a scalable network of neurosynaptic cores, with each core containing neurons, dendrites, synapses, and axons. To set sail for TrueNorth, we developed Compass, a multi-threaded, massively parallel functional simulator and a parallel compiler that maps a network of long-distance pathways in the macaque monkey brain to TrueNorth. We demonstrate near-perfect weak scaling on a 16-rack IBM® Blue Gene®/Q (262,144 CPUs, 256 TB memory), achieving an unprecedented scale of 256 million neurosynaptic cores containing 65 billion neurons and 16 trillion synapses running only 388x slower than real time with an average spiking rate of 8.1 Hz. By using emerging PGAS communication primitives, we also demonstrate 2x better real-time performance over MPI primitives on a 4-rack Blue Gene/P (16,384 CPUs, 16 TB memory).
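The headline numbers are internally consistent with the 256-neuron, 256-synapses-per-neuron core design described in the other abstracts above; a quick check of the arithmetic:

```python
# Scale of the largest Compass run, derived from per-core figures.
cores = 256 * 10**6        # 256 million neurosynaptic cores
neurons_per_core = 256     # per-core figure from the core design
synapses_per_neuron = 256

neurons = cores * neurons_per_core
synapses = neurons * synapses_per_neuron
print(f"{neurons / 1e9:.1f} billion neurons, "
      f"{synapses / 1e12:.1f} trillion synapses")
```

This reproduces the quoted "65 billion neurons and 16 trillion synapses" after rounding.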
