
Publication


Featured research published by Fredrick Rothganger.


Frontiers in Neural Circuits | 2014

N2A: a computational tool for modeling from neurons to algorithms.

Fredrick Rothganger; Christina E. Warrender; Derek Trumbo; James B. Aimone

The exponential increase in available neural data has combined with the exponential growth in computing (“Moore's law”) to create new opportunities to understand neural systems at large scale and high detail. The ability to produce large and sophisticated simulations has introduced unique challenges to neuroscientists. Computational models in neuroscience are increasingly broad efforts, often involving the collaboration of experts in different domains. Furthermore, the size and detail of models have grown to levels for which understanding the implications of variability and assumptions is no longer trivial. Here, we introduce the model design platform N2A which aims to facilitate the design and validation of biologically realistic models. N2A uses a hierarchical representation of neural information to enable the integration of models from different users. N2A streamlines computational validation of a model by natively implementing standard tools in sensitivity analysis and uncertainty quantification. The part-relationship representation allows both network-level analysis and dynamical simulations. We will demonstrate how N2A can be used in a range of examples, including a simple Hodgkin-Huxley cable model, basic parameter sensitivity of an 80/20 network, and the expression of the structural plasticity of a growing dendrite and stem cell proliferation and differentiation.
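The abstract mentions a simple Hodgkin-Huxley cable model among the demonstration cases. As a point of reference only, the sketch below is a generic single-compartment Hodgkin-Huxley simulation in Python with textbook squid-axon parameters; it illustrates the kind of dynamics such a platform expresses, not the N2A representation itself.

```python
import numpy as np

# Generic single-compartment Hodgkin-Huxley model (standard squid-axon
# parameters); an illustration only, not the N2A language or its output.
C_m = 1.0                             # membrane capacitance (uF/cm^2)
g_Na, g_K, g_L = 120.0, 36.0, 0.3     # maximal conductances (mS/cm^2)
E_Na, E_K, E_L = 50.0, -77.0, -54.4   # reversal potentials (mV)

def alpha_m(V): return 0.1 * (V + 40.0) / (1.0 - np.exp(-(V + 40.0) / 10.0))
def beta_m(V):  return 4.0 * np.exp(-(V + 65.0) / 18.0)
def alpha_h(V): return 0.07 * np.exp(-(V + 65.0) / 20.0)
def beta_h(V):  return 1.0 / (1.0 + np.exp(-(V + 35.0) / 10.0))
def alpha_n(V): return 0.01 * (V + 55.0) / (1.0 - np.exp(-(V + 55.0) / 10.0))
def beta_n(V):  return 0.125 * np.exp(-(V + 65.0) / 80.0)

def simulate(I_ext=10.0, T=50.0, dt=0.01):
    """Forward-Euler integration of the HH equations under a constant current."""
    V, m, h, n = -65.0, 0.05, 0.6, 0.32
    trace = []
    for _ in range(int(T / dt)):
        I_Na = g_Na * m**3 * h * (V - E_Na)
        I_K  = g_K  * n**4     * (V - E_K)
        I_L  = g_L             * (V - E_L)
        V += dt * (I_ext - I_Na - I_K - I_L) / C_m
        m += dt * (alpha_m(V) * (1 - m) - beta_m(V) * m)
        h += dt * (alpha_h(V) * (1 - h) - beta_h(V) * h)
        n += dt * (alpha_n(V) * (1 - n) - beta_n(V) * n)
        trace.append(V)
    return np.array(trace)
```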


Biological Cybernetics | 2009

Using input minimization to train a cerebellar model to simulate regulation of smooth pursuit

Fredrick Rothganger; Thomas J. Anastasio

Cerebellar learning appears to be driven by motor error, but whether or not error signals are provided by climbing fibers (CFs) remains a matter of controversy. Here we show that a model of the cerebellum can be trained to simulate the regulation of smooth pursuit eye movements by minimizing its inputs from parallel fibers (PFs), which carry various signals including error and efference copy. The CF spikes act as “learn now” signals. The model can be trained to simulate the regulation of smooth pursuit of visual objects following circular or complex trajectories and provides insight into how Purkinje cells might encode pursuit parameters. In minimizing both error and efference copy, the model demonstrates how cerebellar learning through PF input minimization (InMin) can make movements more accurate and more efficient. An experimental test is derived that would distinguish InMin from other models of cerebellar learning which assume that CFs carry error signals.
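To make the input-minimization (InMin) idea concrete, the toy sketch below is a loose illustration, not the published model: a linear controller tracks a moving target, its "parallel fiber" inputs are the tracking error and an efference copy of its own command, and candidate weight changes are accepted only when they shrink those inputs, with updates gated by a stochastic "learn now" signal standing in for climbing-fiber spikes.

```python
import numpy as np

# Toy illustration of input minimization (InMin); all details are assumptions.
rng = np.random.default_rng(0)

def run_trial(w, T=200, dt=0.05):
    """Summed squared 'parallel fiber' input (error + efference copy) over one trial."""
    pos, cost = 0.0, 0.0
    for t in range(T):
        target = np.sin(np.pi * t * dt)        # simple periodic target trajectory
        error = target - pos
        command = w[0] * error + w[1] * pos    # Purkinje-like linear readout
        pos += dt * command                    # simple pursuit plant
        cost += error**2 + command**2          # inputs to be minimized
    return cost

w = np.zeros(2)
best = run_trial(w)
for step in range(500):
    learn_now = rng.random() < 0.3             # stochastic "learn now" (CF spike) gate
    if not learn_now:
        continue
    candidate = w + 0.05 * rng.standard_normal(2)   # small candidate weight change
    cost = run_trial(candidate)
    if cost < best:                            # keep changes that shrink the inputs
        w, best = candidate, cost
```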


International Symposium on Neural Networks | 2015

Training neural hardware with noisy components

Fredrick Rothganger; Brian Robert Evans; James B. Aimone; Erik P. DeBenedictis

Some next generation computing devices may consist of resistive memory arranged as a crossbar. Currently, the dominant approach is to use crossbars as the weight matrix of a neural network, and to use learning algorithms that require small incremental weight updates, such as gradient descent (for example, backpropagation). Using real-world measurements, we demonstrate that resistive memory devices are unlikely to support such learning methods. As an alternative, we offer a random search algorithm tailored to the measured characteristics of our devices.
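A hedged sketch of the random-search alternative described above: weights are snapped to a coarse, hypothetical set of conductance levels (mimicking devices that cannot make small, reliable incremental updates), random candidate weight settings are proposed, and a candidate is kept only if it lowers the loss. The device model, levels, and step sizes here are illustrative assumptions, not the paper's measured characteristics.

```python
import numpy as np

# Random-search training sketch for a crossbar-backed readout.
rng = np.random.default_rng(1)
LEVELS = np.linspace(-1.0, 1.0, 9)   # hypothetical discrete conductance levels

def quantize(w):
    """Snap each weight to the nearest allowed device level."""
    return LEVELS[np.abs(w[..., None] - LEVELS).argmin(axis=-1)]

def loss(W, X, y):
    """Mean squared error of a one-layer 'crossbar' readout."""
    return float(np.mean((X @ W - y) ** 2))

# Tiny regression problem standing in for a real task.
X = rng.standard_normal((64, 8))
y = X @ rng.standard_normal((8, 1))

W = quantize(0.1 * rng.standard_normal((8, 1)))
best = loss(W, X, y)
for _ in range(2000):
    candidate = quantize(W + 0.5 * rng.standard_normal(W.shape))  # random jump
    c = loss(candidate, X, y)
    if c < best:                     # accept only improvements
        W, best = candidate, c
```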


Archive | 2015

Cognitive Computing for Security.

Erik P. DeBenedictis; Fredrick Rothganger; James B. Aimone; Matthew Marinella; Brian Robert Evans; Christina E. Warrender; Patrick R. Mickel

Final report for Cognitive Computing for Security LDRD 165613. It reports on the development of a hybrid general-purpose/neuromorphic computer architecture, with an emphasis on potential implementation with memristors.


Neural Computation | 2018

Computing with Spikes: The Advantage of Fine-Grained Timing

Stephen J. Verzi; Fredrick Rothganger; Ojas Parekh; Tu Thach Quach; Nadine E. Miner; Craig M. Vineyard; Conrad D. James; James B. Aimone

Neural-inspired spike-based computing machines often claim to achieve considerable advantages in terms of energy and time efficiency by using spikes for computation and communication. However, fundamental questions about spike-based computation remain unanswered. For instance, how much advantage do spike-based approaches have over conventional methods, and under what circumstances does spike-based computing provide a comparative advantage? Simply implementing existing algorithms using spikes as the medium of computation and communication is not guaranteed to yield an advantage. Here, we demonstrate that spike-based communication and computation within algorithms can increase throughput, and they can decrease energy cost in some cases. We present several spiking algorithms, including sorting a set of numbers in ascending/descending order, as well as finding the maximum or minimum or median of a set of numbers. We also provide an example application: a spiking median-filtering approach for image processing providing a low-energy, parallel implementation. The algorithms and analyses presented here demonstrate that spiking algorithms can provide performance advantages and offer efficient computation of fundamental operations useful in more complex algorithms.
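One common way to compute a minimum with spikes is a time-to-first-spike race: each number is encoded as a spike latency, and the earliest spike identifies the smallest value; repeating the race with the winner removed yields an ascending sort. The sketch below follows that assumption and is not necessarily the construction used in the paper.

```python
def spike_min(values, dt=0.01):
    """Return (index, value) of the smallest entry via a time-to-first-spike race.

    Each number is encoded as a spike latency equal to its value (assumed
    non-negative); a global clock advances in steps of dt, and the first
    unit whose latency has elapsed wins. Ties within one dt resolve to the
    lower index.
    """
    t = 0.0
    while True:
        fired = [i for i, v in enumerate(values) if v <= t]
        if fired:
            return fired[0], values[fired[0]]
        t += dt

def spike_sort(values):
    """Ascending sort built from repeated first-spike races."""
    remaining = list(values)
    ordered = []
    while remaining:
        idx, val = spike_min(remaining)
        ordered.append(val)
        remaining.pop(idx)
    return ordered

print(spike_sort([3.0, 0.5, 2.25, 1.0]))   # [0.5, 1.0, 2.25, 3.0]
```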


Archive | 2013

Neurons to algorithms LDRD final report.

Fredrick Rothganger; James B. Aimone; Christina E. Warrender; Derek Trumbo

Over the last three years the Neurons to Algorithms (N2A) LDRD project team has built infrastructure to discover computational structures in the brain. This consists of a modeling language, a tool that enables model development and simulation in that language, and initial connections with the Neuroinformatics community, a group working toward similar goals. The approach of N2A is to express large complex systems like the brain as populations of discrete part types that have specific structural relationships with each other, along with internal and structural dynamics. Such an evolving mathematical system may be able to capture the essence of neural processing, and ultimately of thought itself. This final report is a cover for the actual products of the project: the N2A Language Specification, the N2A Application, and a journal paper summarizing our methods.
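As a rough illustration of the "populations of discrete part types with structural relationships and dynamics" idea, the hypothetical data model below sketches part types, populations, and connections in Python. The actual N2A Language Specification defines its own syntax and semantics; nothing here is taken from it.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a part/population/connection data model; for
# illustration only, not the N2A language itself.

@dataclass
class PartType:
    name: str
    dynamics: dict = field(default_factory=dict)     # e.g. {"dv/dt": "(I - v) / tau"}
    parameters: dict = field(default_factory=dict)

@dataclass
class Population:
    part: PartType
    size: int

@dataclass
class Connection:
    source: Population
    target: Population
    rule: str                                         # structural relationship, e.g. "p=0.02"

neuron = PartType("LIF", dynamics={"dv/dt": "(I - v) / tau"}, parameters={"tau": "10ms"})
excit  = Population(neuron, size=800)
inhib  = Population(neuron, size=200)
model  = [Connection(excit, inhib, rule="p=0.02"), Connection(inhib, excit, rule="p=0.02")]
```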


Archive | 2009

Technique for identifying, tracing, or tracking objects in image data

Robert J. Anderson; Fredrick Rothganger


Biologically Inspired Cognitive Architectures | 2017

A historical survey of algorithms and hardware architectures for neural-inspired and neuromorphic computing applications

Conrad D. James; James B. Aimone; Nadine E. Miner; Craig M. Vineyard; Fredrick Rothganger; Kristofor D. Carlson; Samuel A. Mulder; Timothy J. Draelos; Aleksandra Faust; Matthew Marinella; John H. Naegle; Steven J. Plimpton


2017 IEEE International Conference on Rebooting Computing (ICRC) | 2017

A Spike-Timing Neuromorphic Architecture

Aaron Jamison Hill; Jonathon W. Donaldson; Fredrick Rothganger; Craig M. Vineyard; David Follett; Pamela L. Follett; Michael R. Smith; Stephen J. Verzi; William Severa; Felix Wang; James B. Aimone; John H. Naegle; Conrad D. James


Archive | 2015

Neural Computing at Sandia National Laboratories.

Craig M. Vineyard; James B. Aimone; Michael Lewis Bernard; Kristofor D. Carlson; Frances S. Chance; James C. Forsythe; Conrad D. James; Fredrick Rothganger; William Severa; Ann Speed; Stephen J. Verzi; Christina E. Warrender; John S. Wagner; LeAnn Adams Miller

Collaboration


An overview of Fredrick Rothganger's most frequent collaborators.

Top Co-Authors

James B. Aimone, Sandia National Laboratories
Conrad D. James, Sandia National Laboratories
Craig M. Vineyard, Sandia National Laboratories
Derek Trumbo, Sandia National Laboratories
Brian Robert Evans, Sandia National Laboratories
Erik P. DeBenedictis, Sandia National Laboratories
Brandon Rohrer, Sandia National Laboratories
John H. Naegle, Sandia National Laboratories