Bryan L. Jackson
IBM
Publication
Featured research published by Bryan L. Jackson.
Science | 2014
Paul A. Merolla; John V. Arthur; Rodrigo Alvarez-Icaza; Andrew S. Cassidy; Jun Sawada; Filipp Akopyan; Bryan L. Jackson; Nabil Imam; Chen Guo; Yutaka Nakamura; Bernard Brezzo; Ivan Vo; Steven K. Esser; Rathinakumar Appuswamy; Brian Taba; Arnon Amir; Myron Flickner; William P. Risk; Rajit Manohar; Dharmendra S. Modha
Modeling computer chips on real brains
Computers are nowhere near as versatile as our own brains. Merolla et al. applied our present knowledge of the structure and function of the brain to design a new computer chip that uses the same wiring rules and architecture. The flexible, scalable chip operated efficiently in real time, while using very little power. Science, this issue p. 668
A large-scale computer chip mimics many features of a real brain. Inspired by the brain's structure, we have developed an efficient, scalable, and flexible non-von Neumann architecture that leverages contemporary silicon technology. To demonstrate, we built a 5.4-billion-transistor chip with 4096 neurosynaptic cores interconnected via an intrachip network that integrates 1 million programmable spiking neurons and 256 million configurable synapses. Chips can be tiled in two dimensions via an interchip communication interface, seamlessly scaling the architecture to a cortex-like sheet of arbitrary size. The architecture is well suited to many applications that use complex neural networks in real time, for example, multiobject detection and classification. With 400-pixel-by-240-pixel video input at 30 frames per second, the chip consumes 63 milliwatts.
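The event-driven character of the architecture described above (spikes traveling over an intrachip network only to their configured targets) can be illustrated with a toy spike router. All names and data structures here are illustrative; this is not the chip's actual routing protocol.

```python
from collections import deque

def route_spikes(events, synapses):
    """Toy event-driven delivery: each spike event (a source neuron id) is
    fanned out only along its configured synapses, so work scales with spike
    traffic rather than with total network size."""
    queue = deque(events)       # pending spike events
    delivered = []
    while queue:
        src = queue.popleft()
        for dst in synapses.get(src, ()):   # fan out to configured targets
            delivered.append((src, dst))
    return delivered

# A neuron with no outgoing synapses generates no delivery work at all.
print(route_spikes([1], {1: [2, 3]}))  # → [(1, 2), (1, 3)]
```

This event-driven style is why power tracks activity: silent neurons cost nothing per tick.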
Journal of Vacuum Science & Technology. B. Nanotechnology and Microelectronics: Materials, Processing, Measurement, and Phenomena | 2010
Geoffrey W. Burr; Matthew J. Breitwisch; Michele M. Franceschini; Davide Garetto; Kailash Gopalakrishnan; Bryan L. Jackson; B. N. Kurdi; Chung H. Lam; Luis A. Lastras; Alvaro Padilla; Bipin Rajendran; Simone Raoux; R. S. Shenoy
The authors survey the current state of phase change memory (PCM), a nonvolatile solid-state memory technology built around the large electrical contrast between the highly resistive amorphous and highly conductive crystalline states in so-called phase change materials. PCM technology has made rapid progress in a short time, having passed older technologies in terms of both sophisticated demonstrations of scaling to small device dimensions and integrated large-array demonstrators with impressive retention, endurance, performance, and yield characteristics. They introduce the physics behind PCM technology, assess how its characteristics match up with various potential applications across the memory-storage hierarchy, and discuss its strengths, including scalability and rapid switching speed. Challenges for the technology are addressed, including the design of PCM cells for low reset current, the need to control device-to-device variability, and undesirable changes in the phase change material that c...
IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems | 2015
Filipp Akopyan; Jun Sawada; Andrew S. Cassidy; Rodrigo Alvarez-Icaza; John V. Arthur; Paul A. Merolla; Nabil Imam; Yutaka Nakamura; Pallab Datta; Gi-Joon Nam; Brian Taba; Michael P. Beakes; Bernard Brezzo; Jente B. Kuang; Rajit Manohar; William P. Risk; Bryan L. Jackson; Dharmendra S. Modha
The new era of cognitive computing brings forth the grand challenge of developing systems capable of processing massive amounts of noisy multisensory data. This type of intelligent computing poses a set of constraints, including real-time operation, low power consumption, and scalability, which require a radical departure from conventional system design. Brain-inspired architectures offer tremendous promise in this area. To this end, we developed TrueNorth, a 65 mW real-time neurosynaptic processor that implements a non-von Neumann, low-power, highly parallel, scalable, and defect-tolerant architecture. With 4096 neurosynaptic cores, the TrueNorth chip contains 1 million digital neurons and 256 million synapses tightly interconnected by an event-driven routing infrastructure. The fully digital 5.4 billion transistor implementation leverages existing CMOS scaling trends, while ensuring one-to-one correspondence between hardware and software. With such aggressive design metrics and the TrueNorth architecture breaking path with prevailing architectures, it is clear that conventional computer-aided design (CAD) tools could not be used for the design. As a result, we developed a novel design methodology that includes mixed asynchronous-synchronous circuits and a complete tool flow for building an event-driven, low-power neurosynaptic chip. The TrueNorth chip is fully configurable in terms of connectivity and neural parameters to allow custom configurations for a wide range of cognitive and sensory perception applications. To reduce the system's communication energy, we have adapted existing application-agnostic very large-scale integration CAD placement tools for mapping logical neural networks to the physical neurosynaptic core locations on the TrueNorth chips.
With that, we have successfully demonstrated the use of TrueNorth-based systems in multiple applications, including visual object recognition, with higher performance and orders of magnitude lower power consumption than the same algorithms run on von Neumann architectures. The TrueNorth chip and its tool flow serve as building blocks for future cognitive systems, and give designers an opportunity to develop novel brain-inspired architectures and systems based on the knowledge obtained from this paper.
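The placement step mentioned above (mapping logical neural networks to physical core locations to reduce communication energy) amounts to minimizing a wire-length-style cost, much as VLSI placement tools do. A minimal sketch, assuming a 2D core grid and Manhattan distance as an illustrative proxy for communication energy:

```python
def placement_cost(positions, connections):
    """Total Manhattan wire length for a mapping of logical cores to grid
    coordinates. Placement tools search for an assignment that minimizes a
    cost like this; names and the cost model are illustrative only."""
    return sum(
        abs(positions[a][0] - positions[b][0]) + abs(positions[a][1] - positions[b][1])
        for a, b in connections
    )

# Two connected cores placed one column and two rows apart cost 3 hops.
print(placement_cost({0: (0, 0), 1: (1, 2)}, [(0, 1)]))  # → 3
```

Swapping core assignments to reduce this sum is the essence of iterative placement heuristics such as simulated annealing.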
international symposium on neural networks | 2013
Andrew S. Cassidy; Paul A. Merolla; John V. Arthur; Steven K. Esser; Bryan L. Jackson; Rodrigo Alvarez-Icaza; Pallab Datta; Jun Sawada; Theodore M. Wong; Vitaly Feldman; Arnon Amir; Daniel Ben Dayan Rubin; Filipp Akopyan; Emmett McQuinn; William P. Risk; Dharmendra S. Modha
Marching along the DARPA SyNAPSE roadmap, IBM unveils a trilogy of innovations towards the TrueNorth cognitive computing system inspired by the brain's function and efficiency. Judiciously balancing the dual objectives of functional capability and implementation/operational cost, we develop a simple, digital, reconfigurable, versatile spiking neuron model that supports one-to-one equivalence between hardware and simulation and is implementable using only 1272 ASIC gates. Starting with the classic leaky integrate-and-fire neuron, we add: (a) configurable and reproducible stochasticity to the input, the state, and the output; (b) four leak modes that bias the internal state dynamics; (c) deterministic and stochastic thresholds; and (d) six reset modes for rich finite-state behavior. The model supports a wide variety of computational functions and neural codes. We capture 50+ neuron behaviors in a library for hierarchical composition of complex computations and behaviors. Although designed with cognitive algorithms and applications in mind, serendipitously, the neuron model can qualitatively replicate the 20 biologically relevant behaviors of a dynamical neuron model.
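The leaky integrate-and-fire baseline that the model above extends can be sketched in a few lines. Parameter names and the simple additive stochastic threshold are illustrative assumptions; the actual model has far richer leak, threshold, and reset modes than shown here.

```python
import random

def lif_step(v, syn_input, leak=1, threshold=64, reset=0,
             stochastic=False, rng=random.Random(0)):
    """One discrete-time update of a simplified leaky integrate-and-fire
    neuron: integrate synaptic input, subtract a leak, spike and reset when
    the membrane potential crosses a (optionally stochastic) threshold.
    Returns (new_potential, spiked)."""
    v = v + syn_input - leak                                   # integrate, then leak
    th = threshold + (rng.randint(0, 7) if stochastic else 0)  # optional jitter
    if v >= th:
        return reset, True                                     # spike and reset
    return v, False

# Constant drive of +10 per tick (net +9 after leak) crosses threshold 64
# on the 8th tick.
v, spiked = 0, False
for _ in range(8):
    v, spiked = lif_step(v, 10)
print(v, spiked)  # → 0 True
```

The hardware model layers its leak, threshold, and reset modes onto exactly this kind of per-tick state machine, which is what makes a compact (1272-gate) implementation possible.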
ACM Journal on Emerging Technologies in Computing Systems | 2013
Bryan L. Jackson; Bipin Rajendran; Gregory S. Corrado; Matthew J. Breitwisch; Geoffrey W. Burr; Roger W. Cheek; Kailash Gopalakrishnan; Simone Raoux; C. T. Rettner; Alvaro Padilla; Alejandro G. Schrott; R. S. Shenoy; B. N. Kurdi; Chung Hon Lam; Dharmendra S. Modha
The memory capacity, computational power, communication bandwidth, energy consumption, and physical size of the brain all tend to scale with the number of synapses, which outnumber neurons by a factor of 10,000. Although progress in cortical simulations using modern digital computers has been rapid, the essential disparity between the classical von Neumann computer architecture and the computational fabric of the nervous system makes large-scale simulations expensive, power hungry, and time consuming. Over the last three decades, CMOS-based neuromorphic implementations of “electronic cortex” have emerged as an energy-efficient alternative for modeling neuronal behavior. However, the key ingredient for electronic implementation of any self-learning system—programmable, plastic Hebbian synapses scalable to biological densities—has remained elusive. We demonstrate the viability of implementing such electronic synapses using nanoscale phase change devices. We introduce novel programming schemes for modulation of device conductance to closely mimic the phenomenon of Spike Timing Dependent Plasticity (STDP) observed biologically, and verify through simulations that such plastic phase change devices should support simple correlative learning in networks of spiking neurons. Our devices, when arranged in a crossbar array architecture, could enable the development of synaptronic systems that approach the density (∼10¹¹ synapses per square centimeter) and energy efficiency (consuming ∼1 pJ per synaptic programming event) of the human brain.
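The STDP behavior the programming schemes mimic can be sketched with the classic exponential pair-based learning window: a positive conductance change when the presynaptic spike precedes the postsynaptic one, a negative change otherwise. In the devices, these changes would correspond to programming pulses on the phase change cell; the parameter values below are illustrative, not from the paper.

```python
import math

def stdp_delta_g(dt, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Conductance change for a spike pair separated by dt = t_post - t_pre
    (milliseconds), using the standard exponential STDP window. Amplitudes
    and the time constant are illustrative assumptions."""
    if dt > 0:       # pre before post: potentiate (increase conductance)
        return a_plus * math.exp(-dt / tau)
    elif dt < 0:     # post before pre: depress (decrease conductance)
        return -a_minus * math.exp(dt / tau)
    return 0.0

# Causal pairing strengthens the synapse; anti-causal pairing weakens it,
# and both effects decay as the spikes move further apart in time.
print(stdp_delta_g(10) > 0, stdp_delta_g(-10) < 0)  # → True True
```

A crossbar of such synapses learns correlations locally: each device updates from only the timing of the two spikes it connects.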
symposium on vlsi technology | 2010
Kailash Gopalakrishnan; R. S. Shenoy; C. T. Rettner; Kumar Virwani; Donald S. Bethune; Robert M. Shelby; Geoffrey W. Burr; A. J. Kellock; R. S. King; K. Nguyen; A. N. Bowers; M. Jurich; Bryan L. Jackson; A. M. Friz; Teya Topuria; Philip M. Rice; B. N. Kurdi
Phase change memory (PCM) could potentially achieve high density with large, 3D-stacked crosspoint arrays, but not without a BEOL-friendly access device (AD) that can provide high current densities and large ON/OFF ratios. We demonstrate a novel AD based on Cu-ion motion in novel Cu-containing Mixed Ionic Electronic Conduction (MIEC) materials [1, 2]. Experimental results on various device structures show that these ADs provide the ultra-high current densities needed for PCM, exhibit high ON/OFF ratios with excellent uniformity, are highly scalable, and are compatible with <400°C Back-End-Of-the-Line (BEOL) fabrication.
Journal of Applied Physics | 2012
Geoffrey W. Burr; Pierre Tchoulfian; Teya Topuria; Clemens Nyffeler; Kumar Virwani; Alvaro Padilla; Robert M. Shelby; Mona Eskandari; Bryan L. Jackson; B. Lee
The relationship between the polycrystalline nature of phase change materials (such as Ge2Sb2Te5) and the intermediate resistance states of phase change memory (PCM) devices has not been widely studied. A full understanding of such states will require knowledge of how polycrystalline grains form, how they interact with each other at various temperatures, and how the differing electrical (and thermal) characteristics within the grains and at their boundaries combine through percolation to produce the externally observed electrical (and thermal) characteristics of a PCM device. We address the first of these tasks (and introduce a vehicle for the second) by studying the formation of fcc polycrystalline grains from the as-deposited amorphous state in undoped Ge2Sb2Te5. We perform ex situ transmission electron microscopy membrane experiments and then match these observations against numerical simulation. Ramped-anneal experiments show that the temperature ramp-rate strongly influences the median grain size. By...
symposium on vlsi technology | 2012
Geoffrey W. Burr; Kumar Virwani; R. S. Shenoy; Alvaro Padilla; M. BrightSky; Eric A. Joseph; M. Lofaro; A. J. Kellock; R. S. King; K. Nguyen; A. N. Bowers; M. Jurich; C. T. Rettner; Bryan L. Jackson; Donald S. Bethune; Robert M. Shelby; Teya Topuria; N. Arellano; Philip M. Rice; B. N. Kurdi; Kailash Gopalakrishnan
BEOL-friendly Access Devices (AD) based on Cu-containing MIEC materials [1-4] are integrated in large (512 × 1024) arrays at 100% yield, and are successfully co-integrated together with Phase Change Memory (PCM). Numerous desirable attributes are demonstrated: the large currents (>200 μA) needed for PCM, the bipolar operation required for high-performance RRAM, the single-target sputter deposition essential for high-volume manufacturing, and the ultra-low leakage (<10 pA) and high voltage margin (1.5 V) needed to enable large crosspoint arrays.
Journal of Applied Physics | 2011
Alvaro Padilla; Geoffrey W. Burr; C. T. Rettner; Teya Topuria; Philip M. Rice; Bryan L. Jackson; Kumar Virwani; A. J. Kellock; Diego G. Dupouy; Anthony Debunne; Robert M. Shelby; Kailash Gopalakrishnan; R. S. Shenoy; B. N. Kurdi
We assess voltage polarity effects in phase-change memory (PCM) devices that contain Ge2Sb2Te5 (GST) as the active material through the study of vertically asymmetric pore-cell and laterally symmetric bridge-cell structures. We show that bias polarity can greatly accelerate device failure in such GST-based PCM devices and, through extensive transmission electron microscopy-based failure analysis, trace these effects to a two-stage elemental segregation process. Segregation is initially driven by bias across the molten region of the cell and is then greatly enhanced during the crystallization process at lower temperatures. These results have implications for the design of pulses and PCM cells for maximum endurance, the use of reverse polarity for extending endurance, the requirements for uni- or bi-polar access devices, the need for materials science on active rather than initial stoichiometries, and the need to evaluate new PCM materials under both bias polarities.
ieee international conference on high performance computing data and analytics | 2014
Andrew S. Cassidy; Rodrigo Alvarez-Icaza; Filipp Akopyan; Jun Sawada; John V. Arthur; Paul A. Merolla; Pallab Datta; Marc Gonzalez Tallada; Brian Taba; Alexander Andreopoulos; Arnon Amir; Steven K. Esser; Jeff Kusnitz; Rathinakumar Appuswamy; Chuck Haymes; Bernard Brezzo; Roger Moussalli; Ralph Bellofatto; Christian W. Baks; Michael Mastro; Kai Schleupen; Charles Edwin Cox; Ken Inoue; Steven Edward Millman; Nabil Imam; Emmett McQuinn; Yutaka Nakamura; Ivan Vo; Chen Guok; Don Nguyen
Drawing on neuroscience, we have developed a parallel, event-driven kernel for neurosynaptic computation that is efficient with respect to computation, memory, and communication. Building on the previously demonstrated highly optimized software expression of the kernel, here we demonstrate TrueNorth, a co-designed silicon expression of the kernel. TrueNorth achieves a five orders of magnitude reduction in energy-to-solution and a two orders of magnitude speedup in time-to-solution when running computer vision applications and complex recurrent neural network simulations. Breaking path with the von Neumann architecture, TrueNorth is a 4096-core, 1-million-neuron, 256-million-synapse brain-inspired neurosynaptic processor that consumes 65 mW of power running in real time and delivers a performance of 46 Giga-Synaptic OPS/Watt. We demonstrate seamless tiling of TrueNorth chips into arrays, forming a foundation for cortex-like scalability. TrueNorth's unprecedented time-to-solution, energy-to-solution, size, scalability, and performance, combined with the underlying flexibility of the kernel, enable a broad range of cognitive applications.