
Publication


Featured research published by Shawn E. Taylor.


Neurocomputing | 2009

Applying category theory to improve the performance of a neural architecture

Michael J. Healy; Richard D. Olinger; Robert J. Young; Shawn E. Taylor; Thomas P. Caudell; Kurt W. Larson

A recently developed mathematical semantic theory explains the relationship between knowledge and its representation in connectionist systems. The semantic theory is based upon category theory, the mathematical theory of structure. A product of its explanatory capability is a set of principles to guide the design of future neural architectures and enhancements to existing designs. We claim that this mathematical semantic approach to network design is an effective basis for advancing the state of the art. We offer two experiments to support this claim. One of these involves multispectral imaging using data from a satellite camera.


International Symposium on Neural Networks | 2009

Temporal semantics: An Adaptive Resonance Theory approach

Shawn E. Taylor; Michael Lewis Bernard; Stephen J. Verzi; James D. Morrow; Craig M. Vineyard; Michael J. Healy; Thomas P. Caudell

Encoding sensor observations across time is a critical component in the ability to model cognitive processes. All biological cognitive systems receive sensory stimuli as continuous streams of observed data over time. Therefore, the perceptual grounding of all biological cognitive processing is in temporal semantic encodings, where the particular grounding semantics are sensor modalities. We introduce a technique that encodes temporal semantic data as temporally integrated patterns stored in Adaptive Resonance Theory (ART) modules.
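The temporal-integration idea can be sketched in code. The following is a minimal, illustrative Python sketch, not the paper's implementation: a time-ordered stream of activation vectors is folded into a single decaying trace, which a stripped-down Fuzzy ART module (omitting complement coding and the choice function for brevity) then categorizes. All function names and parameter values are invented for the example.

```python
import numpy as np

def integrate_sequence(frames, decay=0.5):
    """Fold a time-ordered stream of activation vectors into one
    pattern: older frames decay, recent frames dominate, so the
    same symbols in a different order yield a different trace."""
    trace = np.zeros_like(frames[0], dtype=float)
    for frame in frames:
        trace = decay * trace + frame
    return trace / trace.max()  # normalize to [0, 1] for Fuzzy ART

class FuzzyART:
    """Minimal Fuzzy ART module: one prototype per category, learned
    by fuzzy AND (element-wise min) when the match passes vigilance."""
    def __init__(self, vigilance=0.7, learning_rate=1.0):
        self.rho = vigilance
        self.beta = learning_rate
        self.weights = []  # one prototype vector per learned category

    def train(self, pattern):
        for j, w in enumerate(self.weights):
            match = np.minimum(pattern, w).sum() / pattern.sum()
            if match >= self.rho:  # resonance: refine this category
                self.weights[j] = (self.beta * np.minimum(pattern, w)
                                   + (1 - self.beta) * w)
                return j
        self.weights.append(pattern.copy())  # mismatch everywhere: new category
        return len(self.weights) - 1
```

Because the integrated trace depends on order, the sequence A-then-B and the sequence B-then-A resonate with different categories, which is the essential property a temporal semantic encoding needs.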


International Symposium on Neural Networks | 2007

Categorical Mapping from Ontology to Neural Network: Initial Studies of Simple Neural Networks' Concept Capacity

Shawn E. Taylor; Michael J. Healy; Thomas P. Caudell

A recent neural network semantic theory provides the framework for mapping ontologies to neural networks. We use category theory, the mathematical theory of structure, to explore the concept-representation abilities of selected neural networks. Methodologies suggested by the semantic theory have been gainfully applied to specific applications. This paper describes a rigorous numerical study of the implementation of neural category representations in an actual neural network.


Procedia Computer Science | 2012

An Autonomous Distal Reward Learning Architecture for Embodied Agents

Shawn E. Taylor; Michael J. Healy; Thomas P. Caudell

Distal reward refers to a class of problems in which reward is temporally distal from the actions that lead to it. The difficulty for any biological neural system is that the neural activations that caused an agent to achieve reward may no longer be present when the reward is experienced. Therefore, in addition to the usual reward-assignment problem, there is the added complexity of rewarding through time based on neural activations that may no longer be present. Although this problem has been thoroughly studied over the years using methods such as reinforcement learning, we are interested in a more biologically motivated neural architectural approach. This paper introduces one such architecture that exhibits rudimentary distal reward learning based on associations of bottom-up visual sensory sequences with bottom-up proprioceptive motor sequences while an agent explores an environment. After sufficient learning, the agent is able to locate the reward by chaining together top-down motor command sequences. This paper briefly discusses the details of the neural architecture, the agent-based modeling system in which it is embodied, a virtual Morris water maze environment used for training and evaluation, and a sampling of numerical experiments characterizing its learning properties.
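The chaining behavior described in the abstract can be illustrated with a deliberately simplified, symbolic Python sketch. The actual architecture is neural; here, invented observation and motor-command labels stand in for learned sensory-motor associations, purely to show how stored pairs can be chained from a start observation to the reward site.

```python
# During exploration the agent stores which motor command followed
# each sensory observation and which observation came next; after
# reward, replaying the chain reproduces the path to the reward.
transitions = {}  # observation -> (motor_command, next_observation)

# A toy exploration trace ending at the reward site (all labels invented):
trace = [("start", "forward", "hall"),
         ("hall", "left", "pool_edge"),
         ("pool_edge", "forward", "platform")]  # platform = reward site

for obs, cmd, nxt in trace:
    transitions[obs] = (cmd, nxt)

def chain_to_reward(start, goal):
    """Chain learned top-down motor commands from a start observation
    until the goal (rewarded) observation is reached."""
    commands, obs = [], start
    while obs != goal:
        cmd, obs = transitions[obs]
        commands.append(cmd)
    return commands
```

The point of the sketch is only the chaining step: once associations are stored during exploration, reaching the distal reward reduces to following them forward from the current observation.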


Biologically Inspired Cognitive Architectures | 2010

A Neurologically Plausible Artificial Neural Network Computational Architecture of Episodic Memory and Recall

Craig M. Vineyard; Michael Lewis Bernard; Shawn E. Taylor; Thomas P. Caudell; Patrick D. Watson; Stephen J. Verzi; Neal J. Cohen; Howard Eichenbaum

Episodic memory is supported by the relational memory functions of the hippocampus. Building upon extensive neuroscience research on hippocampal processing, neural density, and connectivity we have implemented a computational architecture using variants of adaptive resonance theory artificial neural networks. Consequently, this model is capable of encoding, storing and processing multi-modal sensory inputs as well as simulating qualitative memory phenomena such as auto-association and recall. The performance of the model is compared with human subject performance. Thus, in this paper we present a neurologically plausible artificial neural network computational architecture of episodic memory and recall modeled after cortical-hippocampal structure and function.


Security Informatics | 2012

A multi-modal network architecture for knowledge discovery

Craig M. Vineyard; Stephen J. Verzi; Michael Lewis Bernard; Shawn E. Taylor; Irene Dubicka; Thomas P. Caudell

The collection and assessment of national security related information often involves an arduous process of detecting relevant associations between people, events, and locations—typically within very large data sets. The ability to more effectively perceive these connections could greatly aid in the process of knowledge discovery. This same process—pre-consciously collecting and associating multimodal information—naturally occurs in mammalian brains. With this in mind, this effort sought to draw upon the neuroscience community’s understanding of the relevant areas of the brain that associate multi-modal information for long-term storage for the purpose of creating a more effective, and more automated, association mechanism for the analyst community. Using the biology and functionality of the hippocampus as an analogy for inspiration, we have developed an artificial neural network architecture to associate k-tuples (paired associates) of multimodal input records. The architecture is composed of coupled unimodal self-organizing neural modules that learn generalizations of unimodal components of the input record. Cross modal associations, stored as a higher-order tensor, are learned incrementally as these generalizations are formed. Graph algorithms are then applied to the tensor to extract multi-modal association networks formed during learning. Doing so yields a potential novel approach to data mining for intelligence-related knowledge discovery. This paper describes the neurobiology, architecture, and operational characteristics, as well as provides a simple intelligence-based example to illustrate the model’s functionality.
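The tensor-and-graph mechanism can be sketched as follows. This is an illustrative Python sketch, not the paper's implementation: the vocabularies, records, and threshold are invented, and the graph-extraction step is reduced to simple thresholding of co-occurrence counts in a 3-mode tensor.

```python
import numpy as np

# Illustrative unimodal vocabularies (all names invented for the example).
people = ["alice", "bob"]
events = ["meeting", "transfer"]
places = ["paris", "cairo"]

# Higher-order (3-mode) association tensor: one cell per
# (person, event, place) triple, incremented as records arrive.
T = np.zeros((len(people), len(events), len(places)))

records = [("alice", "meeting", "paris"),
           ("alice", "meeting", "paris"),
           ("bob", "transfer", "cairo"),
           ("alice", "transfer", "cairo")]

for p, e, l in records:  # incremental learning of cross-modal co-occurrences
    T[people.index(p), events.index(e), places.index(l)] += 1

def association_edges(tensor, threshold=2):
    """Extract a multi-modal association network: keep triples whose
    co-occurrence count meets the threshold, emitted as graph edges."""
    edges = []
    for idx, count in np.ndenumerate(tensor):
        if count >= threshold:
            i, j, k = idx
            edges.append((people[i], events[j], places[k], count))
    return edges
```

In the real architecture the tensor indices are generalizations learned by unimodal self-organizing modules rather than raw vocabulary entries, and richer graph algorithms replace the threshold, but the data flow (records in, tensor updated incrementally, network out) is the same.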


International Symposium on Neural Networks | 2011

A neurophysiologically inspired hippocampus based associative-ART Artificial neural network architecture

Craig M. Vineyard; Stephen J. Verzi; Michael Lewis Bernard; Shawn E. Taylor; Wendy Shaneyfelt; Irene Dubicka; Jonathan T. McClain; Thomas P. Caudell

The hippocampus, within the medial temporal lobe of the brain, is essential to episodic memory formation. Rather than simply storing information, episodic memory associates information such as the spatial and temporal context of an event. Using hippocampal neurophysiology and functionality as inspiration, we have developed an artificial neural network architecture, called Associative-ART, to associate k-tuples of inputs. In this paper we present an overview of hippocampal neurophysiology, explain the design of our neural network architecture, and present experimental results from an implementation of our architecture.


Archive | 2011

Augmented cognition tool for rapid military decision making.

Shawn E. Taylor; Michael Lewis Bernard; Stephen J. Verzi; Irene Dubicka; Craig M. Vineyard

This report describes the laboratory directed research and development work to model relevant areas of the brain that associate multi-modal information for long-term storage, for the purpose of creating a more effective, and more automated, association mechanism to support rapid decision making. Using the biology and functionality of the hippocampus as an analogy and inspiration, we have developed an artificial neural network architecture to associate k-tuples (paired associates) of multimodal input records. The architecture is composed of coupled unimodal self-organizing neural modules that learn generalizations of unimodal components of the input record. Cross-modal associations, stored as a higher-order tensor, are learned incrementally as these generalizations form. Graph algorithms are then applied to the tensor to extract multi-modal association networks formed during learning. Doing so yields a novel approach to data mining for knowledge discovery. This report describes the neurobiological inspiration, architecture, and operational characteristics of our model, and also provides a real-world terrorist network example to illustrate the model's functionality.


Archive | 2009

Modeling Aspects of Human Memory for Scientific Study

Thomas P. Caudell; Patrick D. Watson; Mark A. McDaniel; Howard Eichenbaum; Neal J. Cohen; Craig M. Vineyard; Shawn E. Taylor; Michael Lewis Bernard; James D. Morrow; Stephen J. Verzi

Working with leading experts in the field of cognitive neuroscience and computational intelligence, SNL has developed a computational architecture that represents neurocognitive mechanisms associated with how humans remember experiences in their past. The architecture represents how knowledge is organized and updated through information from individual experiences (episodes) via the cortical-hippocampal declarative memory system. We compared the simulated behavioral characteristics with those of humans measured under well-established experimental standards, controlling for unmodeled aspects of human processing, such as perception. We used this knowledge to create robust simulations of human memory behaviors that should help move the scientific community closer to understanding how humans remember information. These behaviors were experimentally validated against actual human subjects, and the results were published. An important outcome of the validation process will be the joining of specific experimental testing procedures from the field of neuroscience with computational representations from the field of cognitive modeling and simulation.


Computer Science and Information Engineering | 2009

Memory in Silico: Building a Neuromimetic Episodic Cognitive Model

Shawn E. Taylor; Craig M. Vineyard; Michael J. Healy; Thomas P. Caudell; Neal J. Cohen; Patrick D. Watson; Stephen J. Verzi; James D. Morrow; Michael Lewis Bernard; Howard Eichenbaum

Collaboration


Shawn E. Taylor's top co-authors, all affiliated with Sandia National Laboratories:

Craig M. Vineyard
Irene Dubicka
James D. Morrow
Jonathan T. McClain
Kurt W. Larson