Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Alexander H. Hsia is active.

Publication


Featured research published by Alexander H. Hsia.


Optics Express | 2011

Low-voltage differentially-signaled modulators

William A. Zortman; Anthony L. Lentine; Alexander H. Hsia; Michael R. Watts

For exascale computing applications, viable optical solutions will need to operate with low-voltage signaling and low power consumption. In this work, the first differentially signaled silicon resonator is demonstrated, providing a 5 dB extinction ratio at 10 Gbps using 3 fJ/bit and a 500 mV signal amplitude. Modulation with asymmetric voltage amplitudes as low as 150 mV and 3 dB extinction is also demonstrated at 10 Gbps. Differentially signaled resonators simplify and expand the design space for modulator implementation and require no special drivers.
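As a quick sanity check on the quoted figures (this arithmetic is not from the paper itself), multiplying the energy per bit by the bit rate gives the implied average modulation power:

```python
# Average modulation power implied by the abstract's figures:
# energy per bit times bit rate.
energy_per_bit = 3e-15  # 3 fJ/bit
bit_rate = 10e9         # 10 Gbps
power_watts = energy_per_bit * bit_rate  # about 3e-5 W, i.e. ~30 microwatts
```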


Frontiers in Neuroscience | 2016

Energy Scaling Advantages of Resistive Memory Crossbar Based Computation and Its Application to Sparse Coding

Sapan Agarwal; Tu-Thach Quach; Ojas Parekh; Alexander H. Hsia; Erik P. DeBenedictis; Conrad D. James; Matthew Marinella; James B. Aimone

The exponential increase in data over the last decade presents a significant challenge to analytics efforts that seek to process and interpret such data for various applications. Neural-inspired computing approaches are being developed in order to leverage the computational properties of the analog, low-power data processing observed in biological systems. Analog resistive memory crossbars can perform a parallel read or a vector-matrix multiplication as well as a parallel write or a rank-1 update with high computational efficiency. For an N × N crossbar, these two kernels can be O(N) more energy efficient than a conventional digital memory-based architecture. If the read operation is noise limited, the energy to read a column can be independent of the crossbar size (O(1)). These two kernels form the basis of many neuromorphic algorithms such as image, text, and speech recognition. For instance, these kernels can be applied to a neural sparse coding algorithm to give an O(N) reduction in energy for the entire algorithm when run with finite precision. Sparse coding is a rich problem with a host of applications including computer vision, object tracking, and more generally unsupervised learning.
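The two crossbar kernels named in the abstract can be sketched numerically with an idealized N × N conductance matrix; the array size, learning rate, and input vectors below are illustrative assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 4  # crossbar dimension (illustrative)

# Conductance matrix of an idealized N x N resistive crossbar.
G = rng.uniform(0.0, 1.0, size=(N, N))

# Parallel read: driving the rows with voltages v and summing column
# currents performs a full vector-matrix multiplication in one step.
v = rng.uniform(-1.0, 1.0, size=N)
i_out = v @ G  # column currents; N^2 multiply-adds in a single read

# Parallel write: pulsing rows with x and columns with y realizes a
# rank-1 (outer-product) update of the entire array at once.
x = rng.uniform(-1.0, 1.0, size=N)
y = rng.uniform(-1.0, 1.0, size=N)
eta = 0.1  # learning rate (hypothetical)
G = G + eta * np.outer(x, y)
```

In a digital memory-based architecture, each of these kernels would require touching all N² weights individually, which is the source of the O(N) energy advantage described above.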


International Joint Conference on Neural Networks | 2016

Resistive memory device requirements for a neural algorithm accelerator

Sapan Agarwal; Steven J. Plimpton; David R. Hughart; Alexander H. Hsia; Isaac Richter; Jonathan A. Cox; Conrad D. James; Matthew Marinella

Resistive memories enable dramatic energy reductions for neural algorithms. We propose a general-purpose neural architecture that can accelerate many different algorithms and determine the device properties that will be needed to run backpropagation on it. To maintain high accuracy, the read noise standard deviation should be less than 5% of the weight range. The write noise standard deviation should be less than 0.4% of the weight range and up to 300% of a characteristic update (for the datasets tested). Asymmetric nonlinearities in the change in conductance vs. pulse cause weight decay and significantly reduce accuracy, while moderate symmetric nonlinearities have no effect. In order to allow for parallel reads and writes, the write current should also be less than 100 nA.
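The noise requirements above can be expressed as a simple device model; the Gaussian noise model and the example device sigmas below are assumptions for illustration, not measured values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
W_RANGE = 1.0  # full weight range mapped onto device conductance (hypothetical)

# Thresholds quoted in the abstract, as fractions of the weight range.
READ_SIGMA_MAX = 0.05 * W_RANGE    # read noise std dev < 5% of range
WRITE_SIGMA_MAX = 0.004 * W_RANGE  # write noise std dev < 0.4% of range

def noisy_read(w, sigma):
    """A crossbar read corrupted by additive Gaussian noise."""
    return w + rng.normal(0.0, sigma, size=np.shape(w))

def noisy_write(w, dw, sigma):
    """A device update whose realized conductance change is noisy."""
    return w + dw + rng.normal(0.0, sigma, size=np.shape(w))

# Example device sigmas that would satisfy the quoted requirements.
read_sigma, write_sigma = 0.03 * W_RANGE, 0.002 * W_RANGE
meets_spec = read_sigma < READ_SIGMA_MAX and write_sigma < WRITE_SIGMA_MAX
```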


Symposium on VLSI Technology | 2017

Achieving ideal accuracies in analog neuromorphic computing using periodic carry

Sapan Agarwal; Robin B. Jacobs Gedrim; Alexander H. Hsia; David Russell Hughart; Elliot J. Fuller; A. Alec Talin; Conrad D. James; Steven J. Plimpton; Matthew Marinella

Analog resistive memories promise to reduce the energy of neural networks by orders of magnitude. However, the write variability and write nonlinearity of current devices prevent neural networks from training to high accuracy. We present a novel periodic carry method that uses a positional number system to overcome these limitations while maintaining the benefit of parallel analog matrix operations. We demonstrate how noisy, nonlinear TaOx devices that could only train to 80% accuracy on MNIST can now reach 97% accuracy, only 1% away from an ideal numeric accuracy of 98%. On a file type dataset, the TaOx devices achieve ideal numeric accuracy. In addition, low-noise, linear Li1−xCoO2 devices train to ideal numeric accuracies using periodic carry on both datasets.
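The positional-number idea behind periodic carry can be illustrated with a toy digit model, where each digit stands for one device of a given significance; this is a sketch of the number-system mechanics only, not the device-level scheme from the paper:

```python
def carry(digits, base):
    """Propagate overflow from low- to high-significance digits
    (digits are stored least-significant first)."""
    out = list(digits)
    for i in range(len(out) - 1):
        q, r = divmod(out[i], base)
        out[i] = r       # low-significance device is reset into range
        out[i + 1] += q  # overflow is carried to the next device
    return out

def value(digits, base):
    """Numeric value of a little-endian digit list."""
    return sum(d * base**i for i, d in enumerate(digits))

# An overflowing low-order digit is absorbed by the next device
# without changing the represented weight.
before = [7, 1]           # base-4 digits: value 7 + 1*4 = 11
after = carry(before, 4)  # [3, 2]: value 3 + 2*4 = 11
```

Because most updates land on the low-significance device and carries happen only periodically, the high-significance devices see far fewer writes, which is what lets noisy devices accumulate accurate weights.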


IEEE Journal on Emerging and Selected Topics in Circuits and Systems | 2018

Multiscale Co-Design Analysis of Energy, Latency, Area, and Accuracy of a ReRAM Analog Neural Training Accelerator

Matthew Marinella; Sapan Agarwal; Alexander H. Hsia; Isaac Richter; Robin Jacobs-Gedrim; John Niroula; Steven J. Plimpton; Engin Ipek; Conrad D. James


Archive | 2015

Methods for resistive switching of memristors

Patrick R. Mickel; Conrad D. James; Andrew J. Lohn; Matthew Marinella; Alexander H. Hsia


Fifth Berkeley Symposium on Energy Efficient Electronic Systems & Steep Transistors Workshop (E3S) | 2017

Designing an analog crossbar based neuromorphic accelerator

Sapan Agarwal; Alexander H. Hsia; Robin Jacobs-Gedrim; David R. Hughart; Steven J. Plimpton; Conrad D. James; Matthew Marinella


Archive | 2016

Neuromorphic Algorithm Acceleration with Resistive Memory NanoCrossbars

Matthew Marinella; Sapan Agarwal; Elliot J. Fuller; Albert Alec Talin; Farid El Gabaly Marquez; Robin B. Jacobs-Gedrim; David Russell Hughart; Ronald S. Goeke; Alexander H. Hsia; Richard Louis Schiek; Steven J. Plimpton; Conrad D. James


Archive | 2016

Device to System Modeling Framework to Enable a 10 fJ per Instruction Neuromorphic Computer

Matthew Marinella; Sapan Agarwal; Albert Alec Talin; Frederick B. McCormick; Steven J. Plimpton; Farid El Gabaly Marquez; Elliot J. Fuller; Robin B. Jacobs-Gedrim; David Russell Hughart; Ronald S. Goeke; Alexander H. Hsia


Archive | 2015

Resistive Memory for Neuromorphic Algorithm Acceleration

Matthew Marinella; Sapan Agarwal; David Russell Hughart; Patrick R. Mickel; Alexander H. Hsia; Steven J. Plimpton; Seth Decker; Roger Apodaca; James B. Aimone; Conrad D. James; Timothy J. Draelos

Collaboration


Dive into Alexander H. Hsia's collaborations.

Top Co-Authors

Matthew Marinella (Sandia National Laboratories)
Conrad D. James (Sandia National Laboratories)
Sapan Agarwal (Sandia National Laboratories)
Steven J. Plimpton (Sandia National Laboratories)
Albert Alec Talin (Sandia National Laboratories)
David R. Hughart (Sandia National Laboratories)
Isaac Richter (Sandia National Laboratories)
James B. Aimone (Sandia National Laboratories)