Heather Ames
Boston University
Publications
Featured research published by Heather Ames.
IEEE Computer | 2011
Greg Snider; Rick Amerson; Dick Carter; Hisham Abdalla; Muhammad Shakeel Qureshi; Jasmin Léveillé; Massimiliano Versace; Heather Ames; Sean Patrick; Benjamin Chandler; Anatoli Gorchetchnikov; Ennio Mingolla
In a synchronous digital platform for building large cognitive models, memristive nanodevices form dense, resistive memories that can be placed close to conventional processing circuitry. Through adaptive transformations, the devices can interact with the world in real time.
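As a rough, self-contained illustration of the resistive-memory idea (a generic device sketch, not HP's actual device physics; the conductance bounds and update constant below are invented):

```python
import numpy as np

# Minimal memristive-synapse sketch (illustrative only, not HP's device
# model): the conductance g integrates the history of applied voltage
# pulses and retains its value between pulses (non-volatile state).
G_MIN, G_MAX = 1e-6, 1e-3   # conductance bounds in siemens (assumed values)

def apply_pulse(g, v, eta=0.1):
    """Update conductance after a voltage pulse of amplitude v (arbitrary units)."""
    if v > 0:
        g += eta * v * (G_MAX - g)   # positive pulses potentiate toward G_MAX
    else:
        g += eta * v * (g - G_MIN)   # negative pulses depress toward G_MIN
    return float(np.clip(g, G_MIN, G_MAX))

g = 1e-4                             # initial device state
for v in [1.0, 1.0, -0.5, 0.0]:      # state persists when v == 0 (no power needed)
    g = apply_pulse(g, v)
print(f"conductance after pulse train: {g:.3e} S")
```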
Journal of the Acoustical Society of America | 2007
Heather Ames; Stephen Grossberg
Auditory signals of speech are speaker dependent, but representations of language meaning are speaker independent. The transformation from speaker-dependent to speaker-independent language representations enables speech to be learned and understood from different speakers. A neural model is presented that performs speaker normalization to generate a pitch-independent representation of speech sounds, while also preserving information about speaker identity. This speaker-invariant representation is categorized into unitized speech items, which input to sequential working memories whose distributed patterns can be categorized, or chunked, into syllable and word representations. The proposed model fits into an emerging model of auditory streaming and speech categorization. The auditory streaming and speaker normalization parts of the model both use multiple strip representations and asymmetric competitive circuits, thereby suggesting that these two circuits arose from similar neural designs. The normalized speech items are rapidly categorized and stably remembered by adaptive resonance theory circuits. Simulations use synthesized steady-state vowels from the Peterson and Barney [Peterson, G. E., and Barney, H.L., J. Acoust. Soc. Am. 24, 175-184 (1952).] vowel database and achieve accuracy rates similar to those achieved by human listeners. These results are compared to behavioral data and other speaker normalization models.
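To make the speaker-normalization idea concrete, here is a toy sketch using a generic log-formant normalization scheme rather than the paper's strip-map and ART circuits; the formant values and the 1.2 vocal-tract scale factor are invented for illustration:

```python
import numpy as np

# Toy illustration of speaker normalization (a generic log-formant scheme,
# not the paper's neural model): subtracting a per-speaker mean of log
# formants removes a multiplicative vocal-tract scale factor, yielding a
# speaker-invariant vowel representation.
def normalize(formants_hz):
    """formants_hz: (n_vowels, n_formants) array for ONE speaker."""
    logf = np.log(formants_hz)
    return logf - logf.mean()          # remove the speaker-specific scale

# Hypothetical F1/F2 values (Hz) for the same two vowels from two speakers
# whose vocal tracts differ by a uniform scale factor of 1.2.
speaker_a = np.array([[270.0, 2290.0], [730.0, 1090.0]])   # /i/, /a/
speaker_b = 1.2 * speaker_a

print(np.allclose(normalize(speaker_a), normalize(speaker_b)))  # True
```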
International Symposium on Neural Networks | 2011
Anatoli Gorchetchnikov; Massimiliano Versace; Heather Ames; Ben Chandler; Jasmin Léveillé; Gennady Livitz; Ennio Mingolla; Greg Snider; Rick Amerson; Dick Carter; Hisham Abdalla; Muhammad Shakeel Qureshi
Realizing adaptive brain functions subserving perception, cognition, and motor behavior on biological temporal and spatial scales remains out of reach for even the fastest computers. Newly introduced memristive hardware approaches open the opportunity to implement dense, low-power synaptic memories of up to 10^15 bits per square centimeter. Memristors have the unique property of “remembering” the past history of their stimulation in their resistive state and do not require power to maintain their memory, making them ideal candidates for implementing large arrays of plastic synapses that support learning in neural models. Over the past decades, many learning rules have been proposed in the literature to explain how neural activity shapes synaptic connections to support adaptive behavior. To ensure an optimal hardware implementation of a large variety of learning rules, some general and easily parameterized form of learning rule must be designed. Such a general-form learning equation would allow instantiation of multiple learning rules through different parameterizations, without rewiring the hardware. This paper characterizes a subset of local learning rules amenable to implementation in memristive hardware. The analyzed rules belong to four broad classes: Hebb rule derivatives with various methods for gating learning and decay; threshold rule variations, including the covariance and BCM families; input reconstruction-based learning rules; and explicit temporal trace-based rules.
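A minimal sketch of what such a parameterized learning equation could look like, assuming a simple gated-decay form (the coefficients a, b, c and the threshold theta are our own illustrative parameterization, not the paper's equation):

```python
# Sketch of a single parameterized local update: different coefficient
# settings recover different classical rules without changing the kernel,
# which is the point of a "general form" for hardware.
def general_update(w, x, y, theta=0.0, a=1.0, b=0.0, c=0.0, lr=0.01):
    """
    dw = lr * ( a * x * (y - theta)   # Hebbian/threshold correlation term
               - b * y * w            # postsynaptically gated decay
               - c * w )              # passive decay
    x: presynaptic activity, y: postsynaptic activity, w: weight.
    """
    return w + lr * (a * x * (y - theta) - b * y * w - c * w)

w = 0.5
w = general_update(w, x=1.0, y=0.8)               # plain Hebb (a=1, b=c=0)
w = general_update(w, x=1.0, y=0.8, b=1.0)        # gated decay (instar-like)
w = general_update(w, x=1.0, y=0.8, theta=0.5)    # threshold/covariance-like
print(f"weight after three updates: {w:.4f}")
```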
Neuroinformatics | 2008
Massimiliano Versace; Heather Ames; Jasmin Léveillé; Bret Fortenberry; Anatoli Gorchetchnikov
Making use of very detailed neurophysiological, anatomical, and behavioral data to build biologically realistic computational models of animal behavior is often a difficult task. Many software packages have tried to resolve this mismatch in granularity between data and model with different approaches. This paper presents KInNeSS, the KDE Integrated NeuroSimulation Software environment, as an alternative solution for bridging the gap between data and model behavior. This open source neural simulation software package provides an expandable framework incorporating features such as ease of use, scalability, an XML based schema, and multiple levels of granularity within a modern object oriented programming design. KInNeSS is best suited to simulate networks of hundreds to thousands of branched multi-compartmental neurons with biophysical properties such as membrane potential, voltage-gated and ligand-gated channels, the presence of gap junctions or ionic diffusion, neuromodulation of channel gating, the mechanism for habituative or depressive synapses, axonal delays, and synaptic plasticity. KInNeSS outputs include compartment membrane voltage, spikes, local field potentials, and current source densities, as well as visualization of the behavior of a simulated agent. An explanation of the modeling philosophy and plug-in development is also presented. Further development of KInNeSS is ongoing, with the ultimate goal of creating a modular framework that will help researchers across different disciplines collaborate effectively using a modern neural simulation platform.
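For flavor, here is a minimal conductance-based compartment update of the general kind such simulators compute at each time step (a plain Euler step with assumed textbook-style constants; KInNeSS's own integrator and channel library are more elaborate):

```python
# Generic single-compartment update of the kind KInNeSS-style simulators
# perform (illustrative Euler step; all constants are assumed values).
C_M = 1.0                    # membrane capacitance (uF/cm^2)
G_L, E_L = 0.1, -65.0        # leak conductance (mS/cm^2) and reversal (mV)
G_SYN, E_SYN = 0.05, 0.0     # synaptic conductance and reversal (mV)

def step(v, syn_open, i_ext=0.0, dt=0.1):
    """Advance membrane potential v (mV) by dt (ms)."""
    i_leak = G_L * (E_L - v)
    i_syn = G_SYN * syn_open * (E_SYN - v)   # ligand-gated channel current
    return v + dt * (i_leak + i_syn + i_ext) / C_M

v = -65.0
for _ in range(100):                 # 10 ms with the synapse held open
    v = step(v, syn_open=1.0)
print(f"membrane potential after 10 ms: {v:.2f} mV")
```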
IEEE Pulse | 2012
Heather Ames; Ennio Mingolla; Aisha Sohail; Benjamin Chandler; Anatoli Gorchetchnikov; Jasmin Léveillé; Gennady Livitz; Massimiliano Versace
The researchers at Boston University (BU)'s Neuromorphics Laboratory, part of the National Science Foundation (NSF)-sponsored Center of Excellence for Learning in Education, Science, and Technology (CELEST), are working in collaboration with engineers and scientists at Hewlett-Packard (HP) to implement neural models of intelligent processes for the next generation of dense, low-power computer hardware, which will use memristive technology to bring data closer to the processor where computation occurs. The HP and BU teams are jointly designing an optimal infrastructure, simulation, and software platform to build an artificial brain. The resulting Cog Ex Machina (Cog) software platform has been successfully used to implement a large-scale, multicomponent brain system that is able to simulate some key rat behavioral results in a virtual environment, and it has been applied to control robotic platforms as they learn to interact with their environment.
International Symposium on Neural Networks | 2011
Zlatko Vasilkoski; Heather Ames; Ben Chandler; Anatoli Gorchetchnikov; Jasmin Léveillé; Gennady Livitz; Ennio Mingolla; Massimiliano Versace
In the foreseeable future, synergistic advances in high-density memristive memory, scalable and massively parallel hardware, and neural network research will enable modelers to design large-scale, adaptive neural systems that support complex behaviors in virtual and robotic agents. A large variety of learning rules have been proposed in the literature to explain how neural activity shapes synaptic connections to support adaptive behavior. A generalized, parametrizable form for many of these rules is proposed in a satellite paper in this volume [1]. Implementing these rules in hardware raises a concern about the stability of the memories they create when learning proceeds continuously, which affects the performance of a network controlling freely behaving agents. This paper can serve as a reference document: it concisely summarizes, in a uniform notation, the stability properties of the rules covered by the general form in [1].
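The stability concern can be illustrated in a few lines: under continuous learning on a stationary input stream, a plain Hebb rule grows weights without bound, while a self-normalizing variant stays bounded. Oja's rule is used below as a generic stand-in for the stable rules surveyed in the paper; the data are synthetic:

```python
import numpy as np

# Continuous learning on a synthetic input stream: plain Hebb diverges,
# Oja's self-normalizing rule converges to a bounded weight vector.
rng = np.random.default_rng(0)
x_samples = rng.normal(size=(5000, 4))       # stationary random inputs
w_hebb = w_oja = rng.normal(size=4) * 0.1    # shared small initial weights
lr = 0.01
for x in x_samples:
    y_h, y_o = w_hebb @ x, w_oja @ x
    w_hebb = w_hebb + lr * y_h * x                    # plain Hebb: unbounded
    w_oja = w_oja + lr * y_o * (x - y_o * w_oja)      # Oja's rule: bounded
print(f"|w| Hebb: {np.linalg.norm(w_hebb):.2e}, Oja: {np.linalg.norm(w_oja):.2f}")
```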
BMC Neuroscience | 2010
Anatoli Gorchetchnikov; Massimiliano Versace; Heather Ames; Ben Chandler; Arash Yazdanbakhsh; Jasmin Léveillé; Ennio Mingolla; Greg Snider
The DARPA Systems of Neuromorphic Adaptive Plastic Scalable Electronics (SyNAPSE) initiative aims to create a new generation of high-density, low-power-consumption chips capable of replicating the adaptive and intelligent behavior observed in animals. To ensure fast speed, low power consumption, and parallel learning in billions of synapses, the learning laws that govern the adaptive behavior must be implemented in hardware. Over the past decades, multitudes of learning laws have been proposed in the literature to explain how neural activity shapes synaptic connections to support adaptive behavior. In order to implement as many of these laws as possible on the hardware, some general and easily parameterized form of learning law has to be designed and implemented on the chip. Such a general form would allow instantiation of multiple learning laws through different parameterizations without rewiring the hardware. From the perspectives of usefulness, stability, homeostatic properties, and spatial and temporal locality, this project analyzes four categories of existing learning rules: (1) Hebb rule derivatives with various methods for gating learning and decay; (2) threshold rule variations, including the covariance and BCM families; (3) error-based learning rules; and (4) reinforcement rules. For each category, a general form that can be implemented in hardware was derived, and even more general forms that span multiple categories are suggested.
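As one illustration of how a reinforcement rule (category 4) can share machinery with the Hebbian categories, here is a generic three-factor sketch in which the pre/post correlation accumulates in an eligibility trace and is committed to the weight only when reward arrives. This is an assumed textbook form, not the general form derived in the project:

```python
# Generic three-factor (reward-gated Hebbian) rule: the Hebbian kernel
# feeds an eligibility trace; a reward signal gates the weight update.
def trace_update(e, x, y, tau=0.9):
    """Decay the eligibility trace and add the current pre/post correlation."""
    return tau * e + x * y

def weight_update(w, e, reward, lr=0.1):
    """Gate learning by reward: no reward, no weight change."""
    return w + lr * reward * e

w, e = 0.0, 0.0
episode = [(1.0, 1.0, 0.0), (1.0, 0.5, 0.0), (0.0, 0.0, 1.0)]  # (x, y, reward)
for x, y, r in episode:
    e = trace_update(e, x, y)
    w = weight_update(w, e, r)
print(f"trace={e:.3f}, weight={w:.3f}")   # weight changes only at the reward
```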
Archive | 2012
Heather Ames; Massimiliano Versace; Anatoli Gorchetchnikov; Benjamin Chandler; Gennady Livitz; Jasmin Léveillé; Ennio Mingolla; Dick Carter; Hisham Abdalla; Greg Snider
Convergent advances in neural modeling, neuroinformatics, neuromorphic engineering, materials science, and computer science will soon enable the development and manufacture of novel computer architectures, including those based on memristive technologies that seek to emulate biological brain structures. A new computational platform, Cog Ex Machina, is a flexible modeling tool that enables a variety of biological-scale neuromorphic algorithms to be implemented on heterogeneous processors, including both conventional and neuromorphic hardware. Cog Ex Machina is specifically designed to leverage the upcoming introduction of dense memristive memories close to computing cores. The MoNETA (Modular Neural Exploring Traveling Agent) model comprises such algorithms, generating complex behaviors from functionalities that include perception, motivation, decision-making, and navigation. MoNETA is being developed with Cog Ex Machina both to exploit new hardware devices and their capabilities and to demonstrate intelligent, autonomous behaviors in virtual animats and robots. These innovations in hardware, software, and brain modeling will not only advance our understanding of how to build adaptive, simulated, or robotic agents, but will also create innovative technological applications with major impacts on general-purpose and high-performance computing.
BMC Neuroscience | 2011
Anatoli Gorchetchnikov; Jasmin Léveillé; Massimiliano Versace; Heather Ames; Gennady Livitz; Benjamin Chandler; Ennio Mingolla; Dick Carter; Rick Amerson; Hisham Abdalla; Shakeel M Qureshi; Greg Snider
The primary goal of the Modular Neural Exploring Traveling Agent (MoNETA) project is to create an autonomous agent capable of object recognition and localization, navigation, and planning in virtual and real environments. Major components of the system perform sensory object recognition, motivation and reward processing, goal selection, allocentric representation of the world, spatial planning, and motor execution. MoNETA is based on the real-time, massively parallel Cog Ex Machina environment co-developed by Hewlett-Packard Laboratories and the Neuromorphics Lab at Boston University. The agent is tested in virtual environments replicating neurophysiological and psychological experiments with real rats; the environment currently used replicates the Morris water maze [1].

The motivational system represents the internal state of the agent, which can be adjusted by sensory inputs. In the Morris water maze, only one drive can be satisfied (the desire to get out of the water); it persists as long as the animat is swimming and sharply decreases as soon as the animat is fully positioned on the platform. Another drive, curiosity, is constantly active and is never satisfied; it forces the agent to explore unfamiliar parts of the environment. Familiarity with environmental locations selectively inhibits the curiosity drive, so that recently explored locations are less appealing than either unexplored locations or locations that were explored a long time ago. The main output of the motivational system is a goal selection map, based on competition between goals set by the curiosity system and goals learned by the animat. The goal selection map uses a winner-take-all selection of the most prominent input signal as the winning goal. Because curiosity-driven goals receive weaker inputs than well-learned reward locations, they can win only if there are no prominent inputs corresponding to learned goals with an active motivational drive.

The spatial planning system is built around a previously developed neural algorithm for goal-directed navigation [2]. The original model provided the desired destination and left it to the virtual environment to move the animat to that location. In MoNETA, the model was extended by a chain of neural populations that converts the allocentric desired destination into an allocentric desired direction, and further into a rotational-velocity motor command. A second extension of the model deals with the mapping of the environment. The original algorithm included goal and obstacle information in path planning, but this information was provided in the form of allocentric maps in which the locations of both goals and obstacles were received directly from the environment. MoNETA uses these maps but also creates them from egocentric sensory information through a process of active exploration. Although the current version uses only somatosensory information, visual input will be added in later stages. The system converts egocentric representations to allocentric ones and then learns the mapping of obstacles and goals in the environment, using a learning rule that is local to dendrites and does not require any postsynaptic activity. The complete implementation of MoNETA consists of 75,301 neurons and 1,362,705 synapses.
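A toy version of the goal-selection stage described above, reduced to an explicit argmax (MoNETA's implementation uses competitive neural dynamics rather than an argmax; all activity values below are invented):

```python
import numpy as np

# Winner-take-all goal selection: learned reward inputs compete with a
# curiosity drive that is inhibited at recently explored locations.
def select_goal(learned_goal_input, curiosity_input, familiarity):
    """
    learned_goal_input: drive-weighted input for known reward locations.
    curiosity_input:    baseline curiosity drive per location.
    familiarity:        recent-exploration signal that inhibits curiosity.
    """
    curiosity = np.maximum(curiosity_input - familiarity, 0.0)
    goal_map = np.maximum(learned_goal_input, curiosity)
    return int(np.argmax(goal_map))          # winner-take-all selection

# Four candidate locations; the platform (index 2) is a learned goal.
learned = np.array([0.0, 0.0, 0.9, 0.0])
curiosity = np.array([0.4, 0.4, 0.4, 0.4])
recent = np.array([0.3, 0.0, 0.0, 0.3])      # locations 0 and 3 just visited
print(select_goal(learned, curiosity, recent))      # -> 2 (learned goal wins)
print(select_goal(np.zeros(4), curiosity, recent))  # -> 1 (unexplored wins)
```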
International IEEE/EMBS Conference on Neural Engineering | 2013
Varsha Shankar; Lena Sherbakov; Byron V. Galbraith; Aisha Sohail; Gennady Livitz; Anatoli Gorchetchnikov; Heather Ames; Frank H. Guenther; Massimiliano Versace
Co-robotic assistants, or Cobots, can improve the quality of life of individuals with locked-in syndrome (LIS) by allowing augmented control over their surroundings. Implemented in collaboration between the Boston University Neuromorphics Lab and the Neural Prosthesis Lab, this work provides a proof of concept of an autonomous robot coupled with a non-invasive brain-machine interface (BMI). The system uses steady-state visually evoked potentials (SSVEPs) for target selection and a massively parallel neural network that models the functionality of the primate “where” and “what” visual pathways. The simulated visual processes perform object recognition on images streamed from the Cobot, which is equipped with a pan-and-tilt camera. In this paper, we describe a subcomponent of the system designed to allow the neural network to learn the identities of objects, the user to select a target object, and the Cobot to perform autonomous visual investigation.
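A minimal sketch of SSVEP target selection using a generic power-at-frequency method (the sampling rate, flicker frequencies, and detection method are assumptions for illustration; the paper's pipeline may differ):

```python
import numpy as np

# SSVEP target selection: each on-screen target flickers at its own
# frequency; the selected target is the one whose flicker frequency
# dominates the EEG spectrum.
FS = 256.0                               # sampling rate, Hz (assumed)
TARGET_FREQS = [8.0, 10.0, 12.0, 15.0]   # flicker frequencies, Hz (assumed)

def select_target(eeg):
    """Return the index of the target frequency with the most spectral power."""
    spectrum = np.abs(np.fft.rfft(eeg)) ** 2
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / FS)
    powers = [spectrum[np.argmin(np.abs(freqs - f))] for f in TARGET_FREQS]
    return int(np.argmax(powers))

# Synthetic 4-second recording: the user attends the 12 Hz target.
t = np.arange(0, 4.0, 1.0 / FS)
eeg = np.sin(2 * np.pi * 12.0 * t) \
    + 0.5 * np.random.default_rng(1).normal(size=t.size)
print(select_target(eeg))   # -> 2 (the 12 Hz target)
```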