
Publications


Featured research published by William Severa.


International Symposium on Neural Networks | 2017

Neurogenesis deep learning: Extending deep networks to accommodate new classes

Timothy J. Draelos; Nadine E. Miner; Christopher C. Lamb; Jonathan A. Cox; Craig M. Vineyard; Kristofor D. Carlson; William Severa; Conrad D. James; James B. Aimone

Neural machine learning methods, such as deep neural networks (DNN), have achieved remarkable success in a number of complex data processing tasks. These methods have arguably had their strongest impact on tasks such as image and audio processing — data processing domains in which humans have long held clear advantages over conventional algorithms. In contrast to biological neural systems, which are capable of learning continuously, deep artificial networks have a limited ability for incorporating new information in an already trained network. As a result, methods for continuous learning are potentially highly impactful in enabling the application of deep networks to dynamic data sets. Here, inspired by the process of adult neurogenesis in the hippocampus, we explore the potential for adding new neurons to deep layers of artificial neural networks in order to facilitate their acquisition of novel information while preserving previously trained data representations. Our results on the MNIST handwritten digit dataset and the NIST SD 19 dataset, which includes lower and upper case letters and digits, demonstrate that neurogenesis is well suited for addressing the stability-plasticity dilemma that has long challenged adaptive machine learning algorithms.
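As a rough sketch of the neurogenesis idea described above, the NumPy snippet below grows a trained dense layer by a few units while leaving the existing weights untouched, so previously learned representations are preserved exactly. The layer sizes, initialization, and ReLU activation are illustrative assumptions; the paper's full method also trains the new units on the novel classes and replays old representations, which is not shown here.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

# A "trained" dense layer: 784 inputs (e.g., MNIST pixels) -> 64 hidden units.
W = rng.normal(0.0, 0.1, size=(64, 784))
b = np.zeros(64)

def add_neurons(W, b, n_new, scale=0.1):
    """Grow the layer by n_new neurons (neurogenesis).

    Existing rows of W (the trained representation) are kept untouched;
    only the new rows start from random weights, to be trained later on
    the novel-class data.
    """
    W_new = rng.normal(0.0, scale, size=(n_new, W.shape[1]))
    return np.vstack([W, W_new]), np.concatenate([b, np.zeros(n_new)])

W2, b2 = add_neurons(W, b, n_new=8)

x = rng.random(784)                # a dummy input sample
h_old = relu(W @ x + b)            # activations before neurogenesis
h_new = relu(W2 @ x + b2)          # activations after neurogenesis

# The original 64 activations are unchanged; only 8 new units appear.
assert np.allclose(h_new[:64], h_old)
print(h_old.shape, h_new.shape)    # (64,) (72,)
```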


2016 IEEE International Conference on Rebooting Computing (ICRC) | 2016

Spiking network algorithms for scientific computing

William Severa; Ojas Parekh; Kristofor D. Carlson; Conrad D. James; James B. Aimone

For decades, neural networks have shown promise for next-generation computing, and recent breakthroughs in machine learning techniques, such as deep neural networks, have provided state-of-the-art solutions for inference problems. However, these networks require thousands of training processes and are poorly suited for the precise computations required in scientific or similar arenas. The emergence of dedicated spiking neuromorphic hardware creates a powerful computational paradigm which can be leveraged towards these exact scientific or otherwise objective computing tasks. We forego any learning process and instead construct the network graph by hand. In turn, the networks produce guaranteed success, often with easily computable complexity. We demonstrate a number of algorithms exemplifying concepts central to spiking networks, including spike timing and synaptic delay. We also discuss the application of cross-correlation particle image velocimetry and provide two spiking algorithms: one uses time-division multiplexing, and the other runs in constant time.
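To make the spike-timing and synaptic-delay concepts concrete, here is a toy discrete-time sketch that computes the cross-correlation of two binary spike trains with one coincidence-detecting threshold neuron per lag. The topology and parameters are illustrative assumptions, not either of the paper's two PIV algorithms; a serial Python loop stands in for what the delay lines would do in parallel on neuromorphic hardware.

```python
import numpy as np

def spiking_xcorr(a, b, max_lag):
    """Cross-correlation of binary spike trains via coincidence detection.

    One threshold-2 neuron per lag d: it receives train `a` through a
    synapse with delay d and train `b` directly, and it fires only when
    the two spikes arrive in the same time step.  Its spike count over
    the run equals sum_t a[t - d] * b[t], the cross-correlation at lag d.
    """
    counts = np.zeros(max_lag + 1, dtype=int)
    for d in range(max_lag + 1):          # one coincidence neuron per lag
        for t in range(len(a)):           # discrete simulation time steps
            delayed_a = a[t - d] if t - d >= 0 else 0   # synaptic delay d
            if delayed_a + b[t] >= 2:     # summed drive crosses threshold
                counts[d] += 1            # neuron d emits a spike
    return counts

rng = np.random.default_rng(1)
a = (rng.random(200) < 0.2).astype(int)   # sparse random binary spike train
b = np.roll(a, 5)                         # the same train shifted by 5 steps
b[:5] = 0

print(spiking_xcorr(a, b, max_lag=8))     # spike counts peak at lag 5
```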


Neural Computation | 2017

A combinatorial model for dentate gyrus sparse coding

William Severa; Ojas Parekh; Conrad D. James; James B. Aimone

The dentate gyrus forms a critical link between the entorhinal cortex and CA3 by providing a sparse version of the signal. Concurrent with this increase in sparsity, a widely accepted theory suggests the dentate gyrus performs pattern separation: similar inputs yield decorrelated outputs. Although the region is an active subject of study and theory, few logically rigorous arguments detail the dentate gyrus's (DG) coding. We suggest a theoretically tractable, combinatorial model for this action. The model provides formal methods for a highly redundant, arbitrarily sparse, and decorrelated output signal. To explore the value of this model framework, we assess how suitable it is for two notable aspects of DG coding: how it can handle the highly structured grid cell representation in the input entorhinal cortex region and the presence of adult neurogenesis, which has been proposed to produce a heterogeneous code in the DG. We find that tailoring the model to grid cell input yields expansion parameters consistent with the literature. In addition, the heterogeneous coding reflects the activity gradation observed experimentally. Finally, we connect this approach with more conventional binary threshold neural circuit models via a formal embedding.
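The paper's combinatorial construction is not reproduced here, but the following sketch shows the generic expansion-plus-sparsity intuition it formalizes: a random projection from a small "entorhinal" layer into a much larger "dentate" layer of binary threshold-style units, with a k-winners-take-all rule enforcing sparsity. The layer sizes, connection density, and overlap metric are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

N_EC, N_DG, K = 100, 1000, 20      # EC input size, DG expansion, active DG cells

# Fixed random binary EC -> DG connectivity (10% connection probability).
W = (rng.random((N_DG, N_EC)) < 0.1).astype(float)

def dg_code(x, k=K):
    """Sparse DG code: only the k most strongly driven units fire."""
    drive = W @ x
    code = np.zeros(N_DG)
    code[np.argsort(drive)[-k:]] = 1.0
    return code

def overlap(u, v):
    """Fraction of shared active units (1.0 means identical codes)."""
    return (u @ v) / max(u.sum(), v.sum())

# Two similar EC patterns: 90 of 100 entries identical.
x1 = (rng.random(N_EC) < 0.3).astype(float)
x2 = x1.copy()
flip = rng.choice(N_EC, size=10, replace=False)
x2[flip] = 1.0 - x2[flip]

# Pattern separation: the sparse DG codes overlap far less than the inputs.
print("input overlap:  ", overlap(x1, x2))
print("DG code overlap:", overlap(dg_code(x1), dg_code(x2)))
```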


Proceedings of the Neuromorphic Computing Symposium | 2017

Neural computing for scientific computing applications: more than just machine learning

James B. Aimone; Ojas Parekh; William Severa

Neural computing has been identified as a computing alternative in the post-Moore's Law era; however, much of its attention has been directed at specialized applications such as machine learning. For scientific computing applications, particularly those that often depend on supercomputing, it is not clear that neural machine learning is the exclusive contribution to be made by neuromorphic platforms. In our presentation, we will discuss ways that looking to the brain as a whole, and to neurons specifically, can provide new sources of inspiration for computing beyond current machine learning applications. Particularly for scientific computing, where approximate methods for computation introduce additional challenges, the development of non-approximate methods for neural computation is potentially quite valuable. In addition, the brain's dramatic ability to utilize context at many different scales and to incorporate information from many different modalities is a capability currently poorly realized by neural machine learning approaches, yet it offers considerable potential impact on scientific applications.


2017 IEEE International Conference on Rebooting Computing (ICRC) | 2017

A Spike-Timing Neuromorphic Architecture

Aaron Jamison Hill; Jonathon W. Donaldson; Fredrick Rothganger; Craig M. Vineyard; David Follett; Pamela L. Follett; Michael R. Smith; Stephen J. Verzi; William Severa; Felix Wang; James B. Aimone; John H. Naegle; Conrad D. James


Archive | 2016

Can we be formal in assessing the strengths and weaknesses of neural architectures? A case study using a spiking cross-correlation algorithm.

William Severa; Kristofor D. Carlson; Ojas Parekh; Craig M. Vineyard; James B. Aimone


International Symposium on Neural Networks | 2018

Spiking Neural Algorithms for Markov Process Random Walk

William Severa; Rich Lehoucq; Ojas Parekh; James B. Aimone


arXiv: Neural and Evolutionary Computing | 2018

Data-driven Feature Sampling for Deep Hyperspectral Classification and Segmentation

William Severa; Jerilyn A. Timlin; Suraj Kholwadwala; Conrad D. James; James B. Aimone


arXiv: Neural and Evolutionary Computing | 2018

Whetstone: A Method for Training Deep Artificial Neural Networks for Binary Communication

William Severa; Craig M. Vineyard; Ryan Dellana; Stephen J. Verzi; James B. Aimone


Archive | 2018

Neural Algorithms for Low Power Implementation of Partial Differential Equations

James B. Aimone; Aaron Jamison Hill; Richard B. Lehoucq; Ojas Parekh; Leah Reeder; William Severa

Collaboration


William Severa's top co-authors, all at Sandia National Laboratories:

James B. Aimone
Conrad D. James
Ojas Parekh
Craig M. Vineyard
Kristofor D. Carlson
Aaron Jamison Hill
Fredrick Rothganger
Ann Speed