
Publication


Featured research published by Craig M. Vineyard.


international symposium on neural networks | 2017

Neurogenesis deep learning: Extending deep networks to accommodate new classes

Timothy J. Draelos; Nadine E. Miner; Christopher C. Lamb; Jonathan A. Cox; Craig M. Vineyard; Kristofor D. Carlson; William Severa; Conrad D. James; James B. Aimone

Neural machine learning methods, such as deep neural networks (DNN), have achieved remarkable success in a number of complex data processing tasks. These methods have arguably had their strongest impact on tasks such as image and audio processing — data processing domains in which humans have long held clear advantages over conventional algorithms. In contrast to biological neural systems, which are capable of learning continuously, deep artificial networks have a limited ability for incorporating new information in an already trained network. As a result, methods for continuous learning are potentially highly impactful in enabling the application of deep networks to dynamic data sets. Here, inspired by the process of adult neurogenesis in the hippocampus, we explore the potential for adding new neurons to deep layers of artificial neural networks in order to facilitate their acquisition of novel information while preserving previously trained data representations. Our results on the MNIST handwritten digit dataset and the NIST SD 19 dataset, which includes lower and upper case letters and digits, demonstrate that neurogenesis is well suited for addressing the stability-plasticity dilemma that has long challenged adaptive machine learning algorithms.
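As a rough illustration of the core idea, the sketch below (our own simplification, not the paper's actual algorithm) widens a toy fully connected layer by appending a freshly initialized neuron while leaving all previously trained weight vectors untouched:

```python
import random

class Layer:
    """A toy fully connected layer stored as a list of weight vectors."""
    def __init__(self, n_inputs, n_neurons, seed=0):
        rng = random.Random(seed)
        self.weights = [[rng.gauss(0.0, 0.1) for _ in range(n_inputs)]
                        for _ in range(n_neurons)]

    def add_neuron(self, rng=None):
        """Neurogenesis step: append one new neuron with fresh random
        weights; existing (trained) weight vectors are not modified."""
        rng = rng or random.Random()
        n_inputs = len(self.weights[0])
        self.weights.append([rng.gauss(0.0, 0.1) for _ in range(n_inputs)])

    def forward(self, x):
        # Plain linear response per neuron (no nonlinearity, for brevity).
        return [sum(w_i * x_i for w_i, x_i in zip(w, x)) for w in self.weights]

layer = Layer(n_inputs=4, n_neurons=3)
old = [row[:] for row in layer.weights]
layer.add_neuron(random.Random(1))
# The layer now has 4 neurons; the first 3 weight vectors are unchanged,
# preserving the previously trained representation.
assert len(layer.weights) == 4 and layer.weights[:3] == old
```

In a real network the new neuron would then be trained on the novel class while older weights are protected, which is the stability-plasticity trade-off the abstract describes.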


international symposium on neural networks | 2017

A novel digital neuromorphic architecture efficiently facilitating complex synaptic response functions applied to liquid state machines

Michael R. Smith; Aaron Jamison Hill; Kristofor D. Carlson; Craig M. Vineyard; Jonathon W. Donaldson; David Follett; Pamela L. Follett; John H. Naegle; Conrad D. James; James B. Aimone

Information in neural networks is represented as weighted connections, or synapses, between neurons. This poses a problem as the primary computational bottleneck for neural networks is the vector-matrix multiply when inputs are multiplied by the neural network weights. Conventional processing architectures are not well suited for simulating neural networks, often requiring large amounts of energy and time. Additionally, synapses in biological neural networks are not binary connections, but exhibit a nonlinear response function as neurotransmitters are emitted and diffuse between neurons. Inspired by neuroscience principles, we present a digital neuromorphic architecture, the Spiking Temporal Processing Unit (STPU), capable of modeling arbitrary complex synaptic response functions without requiring additional hardware components. We consider the paradigm of spiking neurons with temporally coded information as opposed to non-spiking rate coded neurons used in most neural networks. In this paradigm we examine liquid state machines applied to speech recognition and show how a liquid state machine with temporal dynamics maps onto the STPU — demonstrating the flexibility and efficiency of the STPU for instantiating neural algorithms.
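The notion of a nonlinear synaptic response can be illustrated with the classic alpha-function kernel below; this is a common textbook choice chosen for illustration, not necessarily one of the response functions instantiated on the STPU:

```python
import math

def alpha_kernel(t, tau=2.0):
    """Alpha-function synaptic response: zero before the spike,
    rising to a peak of 1.0 at t = tau, then decaying."""
    return (t / tau) * math.exp(1.0 - t / tau) if t >= 0 else 0.0

def synaptic_current(spike_times, t, weight=1.0, tau=2.0):
    """Postsynaptic current at time t: a weighted sum of kernel
    responses, one per presynaptic spike."""
    return weight * sum(alpha_kernel(t - ts, tau) for ts in spike_times)

spikes = [1.0, 4.0]
# Each spike's contribution peaks tau after it occurs; at t = 3.0 the
# first spike (at t = 1.0) is exactly at its maximum of 1.0.
print(round(synaptic_current(spikes, 3.0), 3))
```

The point of the example is that a synapse contributes a time-extended, nonlinear waveform per spike rather than a single binary pulse, which is the behavior the STPU is designed to model efficiently.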


international symposium on neural networks | 2017

Optimization-based computation with spiking neurons

Stephen J. Verzi; Craig M. Vineyard; Eric D. Vugrin; Meghan Galiardi; Conrad D. James; James B. Aimone

Considerable effort is currently being spent designing neuromorphic hardware for addressing challenging problems in a variety of pattern-matching applications. These neuromorphic systems offer low-power architectures with intrinsically parallel and simple spiking neuron processing elements. Unfortunately, these new hardware architectures have been largely developed without a clear justification for using spiking neurons to compute quantities for problems of interest. Specifically, the use of spiking for encoding information in time has not been explored theoretically with complexity analysis to examine the operating conditions under which neuromorphic computing provides a computational advantage (time, space, power, etc.). In this paper, we present and formally analyze the use of temporal coding in a neural-inspired algorithm for optimization-based computation in neural spiking architectures.
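One simple example of encoding information in spike timing is a latency (time-to-first-spike) code, sketched below; this is an illustrative assumption on our part, not the specific coding scheme analyzed in the paper:

```python
def time_to_first_spike(values, t_max=10.0):
    """Latency code: larger input values spike earlier. Values are
    assumed normalized to [0, 1]; a value of 0 never spikes within
    the window (returns t_max)."""
    return [t_max * (1.0 - v) for v in values]

encoded = time_to_first_spike([1.0, 0.5, 0.0])
# → [0.0, 5.0, 10.0]
```

Under such a code, the answer to a computation can be read off as simply the first neuron to fire, which is one intuition for why temporal coding can offer time and energy advantages.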


international joint conference on neural networks | 2016

Quantifying neural information content: A case study of the impact of hippocampal adult neurogenesis

Craig M. Vineyard; Stephen J. Verzi; Conrad D. James; James B. Aimone

Through various means of structural and synaptic plasticity enabling online learning, neural networks constantly reconfigure their computational functionality. Neural information content is embodied within the configurations, representations, and computations of neural networks. To explore it, we have developed metrics and computational paradigms for quantifying neural information content. We have observed that conventional compression methods may help overcome some of the limiting factors of standard information-theoretic techniques employed in neuroscience and allow us to approximate the information in neural data. Specifically, we use compressibility as a measure of complexity in order to estimate entropy and thereby quantitatively assess the information content of neural ensembles. Using Lempel-Ziv compression, we assess the rate at which new patterns are generated across a neural ensemble's firing activity over time to approximate the information content encoded by a neural circuit. As a specific case study, we investigate the effect of neural mixed coding schemes due to hippocampal adult neurogenesis.


international symposium on neural networks | 2015

Repeated play of the SVM game as a means of adaptive classification

Craig M. Vineyard; Stephen J. Verzi; Conrad D. James; James B. Aimone; Gregory L. Heileman

The field of machine learning strives to develop algorithms that, through learning, lead to generalization; that is, the ability of a machine to perform a task it was not explicitly trained for. An added challenge arises when the problem domain is dynamic or non-stationary, with the data distributions or categorizations changing over time. This phenomenon is known as concept drift. Game-theoretic algorithms are often iterative by nature, consisting of repeated game play rather than a single interaction. Effectively, rather than requiring extensive retraining to update a learning model, a game-theoretic approach can adjust its strategies with each round of play, offering a novel means of handling concept drift. In this paper we present a variant of our Support Vector Machine (SVM) Game classifier which may be used in an adaptive manner with repeated play to address concept drift, and we show results of applying this algorithm to synthetic as well as real data.
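The flavor of adapting through repeated play rather than batch retraining can be sketched with a deliberately simplified stand-in (not the SVM game itself): an online classifier whose decision boundary is nudged a little each round, so it tracks drifting class distributions without retraining:

```python
class OnlineThresholdClassifier:
    """Toy stand-in for repeated-play adaptation: each round nudges a
    running per-class mean, and the boundary is the midpoint of the
    two means, so it drifts along with the data."""
    def __init__(self, lr=0.2):
        self.means = {0: 0.0, 1: 0.0}
        self.lr = lr

    def play_round(self, x, label):
        # One "round of play": update only the observed class's mean.
        self.means[label] += self.lr * (x - self.means[label])

    def predict(self, x):
        boundary = (self.means[0] + self.means[1]) / 2.0
        return 1 if x > boundary else 0

clf = OnlineThresholdClassifier()
for _ in range(10):          # class 0 near x=1, class 1 near x=5
    clf.play_round(1.0, 0)
    clf.play_round(5.0, 1)
assert clf.predict(5.0) == 1 and clf.predict(0.0) == 0
```

The SVM game replaces this scalar threshold with strategy adjustments by players defining a max-margin boundary, but the incremental, per-round update is the shared idea.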


international symposium on neural networks | 2009

Temporal semantics: An Adaptive Resonance Theory approach

Shawn E. Taylor; Michael Lewis Bernard; Stephen J. Verzi; James D. Morrow; Craig M. Vineyard; Michael J. Healy; Thomas P. Caudell

Encoding sensor observations across time is a critical component in the ability to model cognitive processes. All biological cognitive systems receive sensory stimuli as continuous streams of observed data over time. Therefore, the perceptual grounding of all biological cognitive processing is in temporal semantic encodings, where the particular grounding semantics are sensor modalities. We introduce a technique that encodes temporal semantic data as temporally integrated patterns stored in Adaptive Resonance Theory (ART) modules.
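A minimal sketch of the temporal-integration step is shown below (our illustrative assumption; the ART category-learning machinery itself is omitted): fold a stream of observation vectors into a single decaying trace that could then be presented to an ART module as one temporally integrated pattern:

```python
def temporally_integrate(stream, decay=0.8):
    """Fold a time-ordered stream of observation vectors into one
    temporally integrated pattern: older inputs decay geometrically,
    so recent observations dominate the trace."""
    trace = [0.0] * len(stream[0])
    for x in stream:
        trace = [decay * t + (1.0 - decay) * xi for t, xi in zip(trace, x)]
    return trace

# Two time steps with decay 0.5: the older input's contribution is
# halved relative to the newer one.
print(temporally_integrate([[1.0, 0.0], [0.0, 1.0]], decay=0.5))
```

The decay constant plays the role of the temporal window: a value near 1 yields a long memory of the sensory stream, while a value near 0 reduces the trace to the most recent observation.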


Procedia Computer Science | 2015

MapReduce SVM Game

Craig M. Vineyard; Stephen J. Verzi; Conrad D. James; James B. Aimone; Gregory L. Heileman

Despite technological advances making computing devices faster, smaller, and more prevalent in today's age, data generation and collection have outpaced data processing capabilities. Simply having more compute platforms does not provide a means of addressing challenging problems in the big data era; rather, alternative processing approaches are needed, and the application of machine learning to big data is hugely important. The MapReduce programming paradigm is an alternative to conventional supercomputing approaches and imposes less stringent data-passing constraints on problem decompositions. MapReduce relies upon defining a means of partitioning the desired problem so that subsets may be computed independently and recombined to yield the net desired result. However, not all machine learning algorithms are amenable to such an approach. Game-theoretic algorithms are often innately distributed, consisting of local interactions between players without requiring a central authority, and are iterative by nature rather than requiring extensive retraining. Effectively, a game-theoretic approach to machine learning is well suited to the MapReduce paradigm and provides a novel, alternative perspective on addressing the big data problem. In this paper we present a variant of our Support Vector Machine (SVM) Game classifier which may be used in a distributed manner, and we show an illustrative example of applying this algorithm.
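The partition/recombine pattern can be sketched as follows, with per-class means standing in for the local SVM-game solves (a simplifying assumption for illustration): each map call processes one data subset independently, and the reduce step merges the partial summaries into a global result:

```python
from collections import defaultdict

def map_partition(partition):
    """Map step: summarize one independently processed data subset as
    per-class sums and counts (a stand-in for a local solve)."""
    sums, counts = defaultdict(float), defaultdict(int)
    for x, label in partition:
        sums[label] += x
        counts[label] += 1
    return sums, counts

def reduce_summaries(summaries):
    """Reduce step: recombine partial results into global class means."""
    total_sum, total_count = defaultdict(float), defaultdict(int)
    for sums, counts in summaries:
        for label in sums:
            total_sum[label] += sums[label]
            total_count[label] += counts[label]
    return {label: total_sum[label] / total_count[label]
            for label in total_sum}

parts = [[(1.0, "a"), (3.0, "a")], [(5.0, "a"), (2.0, "b")]]
means = reduce_summaries(map(map_partition, parts))
# → {"a": 3.0, "b": 2.0}
```

Because the map calls share no state, each partition can run on a separate worker; only the small per-partition summaries cross the network, which is the weak data-passing requirement the abstract highlights.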


international conference on augmented cognition | 2013

Adult Neurogenesis: Implications on Human And Computational Decision Making

Craig M. Vineyard; Stephen J. Verzi; Thomas P. Caudell; Michael Lewis Bernard; James B. Aimone

Adult neurogenesis is the incorporation of new neurons into established, functioning neural circuits. Current theoretical work in the neurogenesis field has suggested that new neurons are of greatest importance in the encoding of new memories, particularly in the ability to fully capture features which are entirely novel or being experienced in a unique way. We present two models of neurogenesis (a spiking, biologically realistic model as well as a basic growing feedforward model) to investigate possible functional implications. We use an information-theoretic computational complexity measure to quantitatively analyze the information content encoded with and without neurogenesis in our spiking model, and we examine neural encoding capacity, as a function of neuron maturation, in our simple feedforward network. Finally, we discuss potential functional implications of neurogenesis in high-risk environments.


BICA | 2013

Neurogenesis in a High Resolution Dentate Gyrus Model

Craig M. Vineyard; James B. Aimone; Glory Ruth Emmanuel

It has often been thought that adult brains are unable to produce new neurons. However, neurogenesis, or the birth of new neurons, is a naturally occurring phenomenon in a few specific brain regions. The well-studied dentate gyrus (DG) region of the hippocampus in the medial temporal lobe is one such region. Nevertheless, the functional significance of neurogenesis is still unknown. Artificial neural network models of the DG not only provide a framework for investigating existing theories, but also aid in the development of new hypotheses and lead to a greater understanding of neurogenesis.


international conference on social computing | 2012

The impact of attitude resolve on population wide attitude change

Craig M. Vineyard; Kiran Lakkaraju; Joseph M. Collard; Stephen J. Verzi

Attitudes play a critical role in informing resulting behavior. Extending previous work, we have developed a model of population-wide attitude change that captures social factors through a social network, cognitive factors through a cognitive network, and individual differences in influence. All three of these factors are supported by the literature as playing a role in attitude and behavior change. In this paper we present a new computational model of attitude resolve which incorporates the effects of player interaction dynamics, using game theory in an integrated model of socio-cognitive, strategy-based individual interaction, and we provide preliminary experiments.

Collaboration

Craig M. Vineyard's top co-authors:

James B. Aimone (Sandia National Laboratories)
Conrad D. James (Sandia National Laboratories)
Fredrick Rothganger (Sandia National Laboratories)
Kristofor D. Carlson (Sandia National Laboratories)
William Severa (Sandia National Laboratories)