Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Catherine D. Schuman is active.

Publications


Featured research published by Catherine D. Schuman.


PLOS ONE | 2013

Dynamic Artificial Neural Networks with Affective Systems

Catherine D. Schuman; J. Douglas Birdwell

Artificial neural networks (ANNs) are processors that are trained to perform particular tasks. We couple a computational ANN with a simulated affective system in order to explore the interaction between the two. In particular, we design a simple affective system that adjusts the threshold values in the neurons of our ANN. The aim of this paper is to demonstrate that this simple affective system can control the firing rate of the ensemble of neurons in the ANN, as well as to explore the coupling between the affective system and the processes of long term potentiation (LTP) and long term depression (LTD), and the effect of the parameters of the affective system on its performance. We apply our networks with affective systems to a simple pole balancing example and briefly discuss the effect of affective systems on network performance.
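The feedback loop the abstract describes could be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: it assumes leaky integrate-and-fire style neurons, and the `AffectiveSystem` class, its gain, and its target-rate update rule are hypothetical stand-ins.

```python
import random

class Neuron:
    def __init__(self, threshold=1.0):
        self.threshold = threshold
        self.potential = 0.0

    def step(self, input_current):
        self.potential += input_current
        if self.potential >= self.threshold:
            self.potential = 0.0  # reset after a spike
            return 1
        self.potential *= 0.9     # leak
        return 0

class AffectiveSystem:
    """Hypothetical feedback controller: nudges every neuron's firing
    threshold so the ensemble firing rate tracks a target rate."""
    def __init__(self, target_rate=0.2, gain=0.05):
        self.target_rate = target_rate
        self.gain = gain

    def adjust(self, neurons, observed_rate):
        # Firing too often raises thresholds; firing too rarely lowers them.
        delta = self.gain * (observed_rate - self.target_rate)
        for n in neurons:
            n.threshold += delta

# One simulation run: count spikes each tick, then let the affective
# system react, driving the ensemble firing rate toward the target.
neurons = [Neuron() for _ in range(100)]
affect = AffectiveSystem()
for t in range(1000):
    spikes = sum(n.step(random.uniform(0.0, 0.5)) for n in neurons)
    affect.adjust(neurons, observed_rate=spikes / len(neurons))
```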


International Symposium on Neural Networks | 2016

An evolutionary optimization framework for neural networks and neuromorphic architectures

Catherine D. Schuman; James S. Plank; Adam Disney; John Reynolds

As new neural network and neuromorphic architectures are being developed, new training methods that operate within the constraints of the new architectures are required. Evolutionary optimization (EO) is a convenient training method for new architectures. In this work, we review a spiking neural network architecture and a neuromorphic architecture, and we describe an EO training framework for these architectures. We present the results of this training framework on four classification data sets and compare those results to other neural network and neuromorphic implementations. We also discuss how this EO framework may be extended to other architectures.
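As a rough sketch of what the inner loop of such a framework might look like, here is a generic (mu + lambda)-style evolutionary loop; the `fitness` and `mutate` operators are the pieces each architecture would supply, and none of this is taken from the paper's actual implementation.

```python
import copy
import random

def evolve(seed_network, fitness, mutate, pop_size=50, generations=100):
    """Generic EO loop: evaluate, keep the fittest half, refill the
    population by mutating the survivors."""
    population = [copy.deepcopy(seed_network) for _ in range(pop_size)]
    for net in population:
        mutate(net)
    for _ in range(generations):
        ranked = sorted(population, key=fitness, reverse=True)
        survivors = ranked[: pop_size // 2]
        children = [copy.deepcopy(p) for p in survivors]
        for child in children:
            mutate(child)  # e.g., perturb a weight, add or remove a synapse
        population = survivors + children
    return max(population, key=fitness)

# Toy usage: the "network" is just a weight vector, and fitness
# rewards closeness to a fixed target.
target = [0.5, -1.0, 2.0]

def fit(w):
    return -sum((a - b) ** 2 for a, b in zip(w, target))

def mut(w):
    i = random.randrange(len(w))
    w[i] += random.gauss(0, 0.1)

best = evolve([0.0, 0.0, 0.0], fit, mut)
```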


Procedia Computer Science | 2014

Spatiotemporal Classification Using Neuroscience-Inspired Dynamic Architectures

Catherine D. Schuman; J. Douglas Birdwell; Mark Edward Dean

We discuss a neuroscience-inspired dynamic architecture (NIDA) and associated design method based on evolutionary optimization. NIDA networks designed to perform anomaly detection tasks and control tasks have been shown to be successful in previous work. In particular, NIDA networks perform well on tasks that have a temporal component. We present methods for using NIDA networks on classification tasks in which there is no temporal component, in particular, the handwritten digit classification task. The approach we use for both methods produces useful subnetworks that can be combined to produce a final network or combined to produce results using an ensemble method. We discuss how a similar approach can be applied to other problem types.
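One way the ensemble combination step could work is simple score-based voting across class-specialized subnetworks. The sketch below assumes one subnetwork per digit class, each returning a confidence score; that one-class-per-subnetwork layout is an illustrative simplification, not necessarily how the paper partitions the task.

```python
def ensemble_classify(subnetworks, image):
    """Score the image with every class-specialized subnetwork and
    return the class whose subnetwork is most confident."""
    scores = {digit: net(image) for digit, net in subnetworks.items()}
    return max(scores, key=scores.get)

# Toy usage with stub subnetworks that return fixed confidences;
# a real subnetwork would simulate a spiking network on the image.
subnets = {d: (lambda img, d=d: 1.0 if d == 7 else 0.1) for d in range(10)}
print(ensemble_classify(subnets, image=None))  # -> 7
```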


Dependable Systems and Networks | 2012

Heuristics for optimizing matrix-based erasure codes for fault-tolerant storage systems

James S. Plank; Catherine D. Schuman; B. Devin Robison

Large scale, archival and wide-area storage systems use erasure codes to protect users from losing data due to the inevitable failures that occur. All but the most basic erasure codes employ bit-matrices so that encoding and decoding may be effected solely with the bitwise exclusive-OR (XOR) operation. There are CPU savings that can result from strategically scheduling these XOR operations so that fewer XORs are performed. It is an open problem to derive a schedule from a bit-matrix that minimizes the number of XOR operations. We attack this open problem, deriving two new heuristics called Uber-CHRS and X-Sets to schedule encoding and decoding bit-matrices with reduced XOR operations. We evaluate these heuristics in a variety of realistic erasure coding settings and demonstrate that they are a significant improvement over previously published heuristics. We provide an open-source implementation of these heuristics so that practitioners may leverage our work.
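To make the scheduling problem concrete: a naive schedule performs one XOR per 1-bit in each matrix row beyond the first, and the savings come from computing shared intermediate sums once and reusing them. The sketch below shows only the naive baseline and a hand-worked example of such sharing; it does not implement Uber-CHRS or X-Sets.

```python
def naive_schedule(bit_matrix):
    """Baseline: output row i is the XOR of every data bit j with
    bit_matrix[i][j] == 1. Returns (output, operands) pairs."""
    return [(i, [j for j, bit in enumerate(row) if bit])
            for i, row in enumerate(bit_matrix)]

def xor_count(schedule):
    # XORing k operands together costs k - 1 XOR operations.
    return sum(len(ops) - 1 for _, ops in schedule)

# Two output rows that share the pair (d0 XOR d1). Naively this costs
# 4 XORs; computing t = d0 ^ d1 once and reusing it costs only 3.
# Finding such shared subexpressions systematically is what the
# paper's heuristics search for.
M = [[1, 1, 1, 0],
     [1, 1, 0, 1]]
print(xor_count(naive_schedule(M)))  # 4
```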


Proceedings of the 2014 Biomedical Sciences and Engineering Conference | 2014

Neuroscience-inspired dynamic architectures

Catherine D. Schuman; J. Douglas Birdwell; Mark Edward Dean

Neuroscience-inspired computational elements and architectures are one of the most popular ideas for replacing the von Neumann architecture. In this work, we propose a neuroscience-inspired dynamic architecture (NIDA) and discuss a method for automatically designing NIDA networks to accomplish tasks. We discuss the reasons we chose evolutionary optimization as the main design method and propose future directions for the work.


IEEE International Conference on High Performance Computing, Data, and Analytics | 2015

Dynamic adaptive neural network arrays: a neuromorphic architecture

Catherine D. Schuman; Adam Disney; John Reynolds

Dynamic Adaptive Neural Network Array (DANNA) is a neuromorphic hardware implementation. It differs from most other neuromorphic projects in that it allows for programmability of structure, and it is trained or designed using evolutionary optimization. This paper describes the DANNA structure, how DANNA is trained using evolutionary optimization, and an application of DANNA to a very simple classification task.


International Symposium on Neural Networks | 2016

An Application Development Platform for Neuromorphic Computing

Mark Edward Dean; Jason Chan; Christopher Daffron; Adam Disney; John Reynolds; Garrett S. Rose; James S. Plank; J. Douglas Birdwell; Catherine D. Schuman

Dynamic Adaptive Neural Network Arrays (DANNAs) are neuromorphic computing systems developed as a hardware-based approach to the implementation of neural networks. They feature highly adaptive and programmable structural elements, which model artificial neural networks with spiking behavior. We design them to solve problems using evolutionary optimization. In this paper, we highlight the current hardware and software implementations of DANNA, including their features, functionalities, and performance. We then describe the development of an Application Development Platform (ADP) to support efficient application implementation and testing of DANNA-based solutions. We conclude with future directions.


IEEE International Conference on High Performance Computing, Data, and Analytics | 2016

Parallel evolutionary optimization for neuromorphic network training

Catherine D. Schuman; Adam Disney; Susheela Singh; Grant Bruer; J. Parker Mitchell; Aleksander Klibisz; James S. Plank

One of the key impediments to the success of current neuromorphic computing architectures is the issue of how best to program them. Evolutionary optimization (EO) is one promising programming technique; in particular, its wide applicability makes it especially attractive for neuromorphic architectures, which can have many different characteristics. In this paper, we explore different facets of EO on a spiking neuromorphic computing model called DANNA. We focus on the performance of EO in the design of our DANNA simulator, and on how to structure EO on both multicore and massively parallel computing systems. We evaluate how our parallel methods impact the performance of EO on Titan, the U.S.'s largest open science supercomputer, and BOB, a Beowulf-style cluster of Raspberry Pis. We also focus on how to improve the EO by evaluating commonality in higher-performing neural networks, and present the results of a study that evaluates the EO performed by Titan.
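A common way to structure EO on a multicore node is to farm the fitness evaluations, usually the dominant cost when each evaluation simulates a network, out to a pool of worker processes. The sketch below is a generic illustration; the genome encoding and the `fitness` and `mutate` functions are hypothetical placeholders, not the DANNA implementation.

```python
import random
from multiprocessing import Pool

def fitness(genome):
    # Stand-in for simulating a network on a task; assumed to dominate runtime.
    return -sum((g - 0.5) ** 2 for g in genome)

def mutate(genome, sigma=0.1):
    return [g + random.gauss(0, sigma) for g in genome]

if __name__ == "__main__":
    population = [[random.random() for _ in range(64)] for _ in range(200)]
    with Pool() as pool:  # one worker per available core by default
        for _ in range(20):
            scores = pool.map(fitness, population)  # parallel evaluation step
            ranked = [g for _, g in sorted(zip(scores, population),
                                           key=lambda t: t[0], reverse=True)]
            parents = ranked[: len(population) // 2]
            population = parents + [mutate(p) for p in parents]
```

On a cluster, the same pattern scales out by replacing the process pool with distributed workers (e.g., MPI ranks), with each node evaluating a slice of the population.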


Foundations of Computational Intelligence | 2014

Visual analytics for neuroscience-inspired dynamic architectures

Margaret Drouhard; Catherine D. Schuman; J. Douglas Birdwell; Mark Edward Dean

We introduce a visual analytics tool for neuroscience-inspired dynamic architectures (NIDA), a network type that has been previously shown to perform well on control, anomaly detection, and classification tasks. NIDA networks are a type of spiking neural network, a non-traditional network type that captures dynamics throughout the network. We demonstrate the utility of our visualization tool in exploring and understanding the structure and activity of NIDA networks. Finally, we describe several extensions to the visual analytics tool that will further aid in the development and improvement of NIDA networks and their associated design method.
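For a sense of what a structural view of such a network might involve, the sketch below draws a small spiking network with node size encoding observed activity and edge width encoding synapse strength. The data format and visual encodings here are hypothetical illustrations, not the tool described in the paper.

```python
import networkx as nx
import matplotlib.pyplot as plt

# Hypothetical inputs: synapses as (pre, post, weight) triples, plus
# per-neuron firing counts collected from a simulation run.
synapses = [(0, 1, 0.8), (1, 2, -0.3), (0, 2, 0.5), (2, 3, 1.1)]
firing_counts = {0: 12, 1: 4, 2: 9, 3: 1}

G = nx.DiGraph()
G.add_weighted_edges_from(synapses)

pos = nx.spring_layout(G, seed=42)
nx.draw(G, pos,
        node_size=[100 + 40 * firing_counts[n] for n in G.nodes],
        width=[abs(w) * 2 for _, _, w in G.edges(data="weight")],
        with_labels=True)
plt.show()
```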


IEEE International Conference on High Performance Computing, Data, and Analytics | 2016

A study of complex deep learning networks on high performance, neuromorphic, and quantum computers

Thomas E. Potok; Catherine D. Schuman; Steven R. Young; Robert M. Patton; Federico M. Spedalieri; Jeremy Liu; Ke-Thia Yao; Garrett S. Rose; Gangotree Chakma

Current Deep Learning models use highly optimized convolutional neural networks (CNN) trained on large graphical processing unit (GPU)-based computers with a fairly simple layered network topology, i.e., highly connected layers, without intra-layer connections. Complex topologies have been proposed, but are intractable to train on current systems. Building the topologies of the deep learning network requires hand tuning, and implementing the network in hardware is expensive in both cost and power. In this paper, we evaluate deep learning models using three different computing architectures to address these problems: quantum computing to train complex topologies, high performance computing (HPC) to automatically determine network topology, and neuromorphic computing for a low-power hardware implementation. Due to input size limitations of current quantum computers, we use the MNIST dataset for our evaluation. The results show the possibility of using the three architectures in tandem to explore complex deep learning networks that are untrainable using a von Neumann architecture. We show that a quantum computer can find high quality values of intra-layer connections and weights, while yielding a tractable time result as the complexity of the network increases; a high performance computer can find optimal layer-based topologies; and a neuromorphic computer can represent the complex topology and weights derived from the other architectures in low power memristive hardware. This represents a new capability that is not feasible with current von Neumann architectures. It potentially enables the ability to solve very complicated problems unsolvable with current computing technologies.

Collaboration


Dive into Catherine D. Schuman's collaborations.

Top Co-Authors

Adam Disney, University of Tennessee
Grant Bruer, University of Tennessee
Robert M. Patton, Oak Ridge National Laboratory
Thomas E. Potok, Oak Ridge National Laboratory
Steven R. Young, Oak Ridge National Laboratory