
Publication


Featured research published by Rick Amerson.


Field Programmable Custom Computing Machines | 1997

Defect tolerance on the Teramac custom computer

W. Culbertson; Rick Amerson; Richard J. Carter; Philip J. Kuekes; Greg Snider

Teramac is a large custom computer that works correctly even though three quarters of its FPGAs contain defects. This is accomplished through unprecedented use of defect tolerance, which substantially reduces Teramac's cost and permits it to have an unusually complex interconnection network. Teramac tolerates defective resources, such as gates and wires, that are introduced during the manufacture of its FPGAs and other components and during assembly of the system. We have developed methods to precisely locate defects. User designs are mapped onto the system by a completely automated process that avoids the defects and hides the defect tolerance from the user. Defective components are not physically removed from the system.
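The defect-avoiding mapping can be pictured as a resource-allocation pass that consults a defect map before assigning logic. The sketch below is a minimal Python illustration of that idea; the function and data-structure names are hypothetical and do not reflect Teramac's actual tool flow.

```python
# Hypothetical sketch: mapping a design onto a resource pool while
# skipping resources previously flagged as defective. Names and data
# structures are illustrative, not Teramac's actual tools.

def place_design(gates, resources, defect_map):
    """Assign each logical gate to a usable physical resource.

    gates      -- iterable of logical gate identifiers
    resources  -- list of physical resource identifiers (FPGA cells, wires)
    defect_map -- set of resource identifiers known to be defective
    """
    usable = [r for r in resources if r not in defect_map]
    if len(usable) < len(list(gates)):
        raise RuntimeError("not enough defect-free resources for this design")
    # Greedy assignment; a real tool would also route and optimize timing.
    return dict(zip(gates, usable))

# Example: parts may contain defects, yet the mapping succeeds as long
# as enough individual resources remain usable.
placement = place_design(
    gates=["g0", "g1", "g2"],
    resources=["cell0", "cell1", "cell2", "cell3", "cell4"],
    defect_map={"cell1", "cell3"},
)
print(placement)  # {'g0': 'cell0', 'g1': 'cell2', 'g2': 'cell4'}
```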


Field Programmable Gate Arrays | 1995

Teramac-configurable custom computing

Rick Amerson; Richard J. Carter; W. Culbertson; Philip J. Kuekes; Greg Snider

Prototypes are invaluable for studying special-purpose parallel architectures and custom computing. We have built a configurable custom computing engine, based on field-programmable gate arrays, to enable experiments on an interesting scale. The Teramac configurable hardware system can execute synchronous logic designs of up to one million gates at rates up to one megahertz. Search and retrieval of nontext data from very large databases can be greatly accelerated using special-purpose parallel hardware. We are using Teramac to conduct experiments with special-purpose processors involving search of nontext databases.
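As a rough illustration of the kind of nontext search such hardware parallelizes, the sketch below scans a set of feature vectors for the closest match to a query; on configurable hardware the comparison would be replicated across parallel logic rather than run sequentially. The vectors and distance metric are illustrative assumptions, not details taken from the paper.

```python
# Illustrative sketch (not from the paper): brute-force nearest-neighbor
# search over fixed-length feature vectors, the kind of data-parallel
# comparison a configurable custom computer can replicate in hardware.

def nearest_record(query, database):
    """Return (index, distance) of the record closest to `query`.

    query    -- tuple of numeric features
    database -- list of feature tuples, all the same length as `query`
    """
    best_index, best_dist = -1, float("inf")
    for i, record in enumerate(database):
        # Squared Euclidean distance; in hardware every record could be
        # compared simultaneously by parallel pipelines.
        dist = sum((q - r) ** 2 for q, r in zip(query, record))
        if dist < best_dist:
            best_index, best_dist = i, dist
    return best_index, best_dist

records = [(0.1, 0.9, 0.3), (0.8, 0.2, 0.5), (0.4, 0.4, 0.4)]
print(nearest_record((0.7, 0.3, 0.5), records))  # -> (1, 0.02)
```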


IEEE Computer | 2011

From Synapses to Circuitry: Using Memristive Memory to Explore the Electronic Brain

Greg Snider; Rick Amerson; Dick Carter; Hisham Abdalla; Muhammad Shakeel Qureshi; Jasmin Léveillé; Massimiliano Versace; Heather Ames; Sean Patrick; Benjamin Chandler; Anatoli Gorchetchnikov; Ennio Mingolla

In a synchronous digital platform for building large cognitive models, memristive nanodevices form dense, resistive memories that can be placed close to conventional processing circuitry. Through adaptive transformations, the devices can interact with the world in real time.
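A memristive synapse can be pictured as a conductance whose value integrates its stimulation history and persists without power. The toy model below is a common idealized sketch (a bounded state variable driven by applied voltage), written for illustration only; it is not the device model used in the paper.

```python
# Idealized memristive synapse sketch (an assumption for illustration,
# not the device physics used in the paper): the conductance integrates
# the history of applied voltage and is retained when the drive is zero.

class MemristiveSynapse:
    def __init__(self, g=0.5, g_min=0.0, g_max=1.0, rate=0.1):
        self.g, self.g_min, self.g_max, self.rate = g, g_min, g_max, rate

    def apply(self, voltage, dt=1.0):
        """Nudge the conductance in proportion to the applied voltage."""
        self.g += self.rate * voltage * dt
        self.g = max(self.g_min, min(self.g_max, self.g))  # keep state bounded
        return self.g

syn = MemristiveSynapse()
for v in (1.0, 1.0, -0.5, 0.0, 0.0):   # zero volts leaves the state unchanged
    print(round(syn.apply(v), 3))       # 0.6, 0.7, 0.65, 0.65, 0.65
```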


Field Programmable Logic and Applications | 1995

The Teramac Configurable Computer Engine

Greg Snider; Philip J. Kuekes; W. Bruce Culbertson; Richard J. Carter; Arnold S. Berger; Rick Amerson

The difficulty in creating a configurable machine lies in providing enough wires that placement and routing can be done with no human intervention. Several researchers have previously used tens of FPGAs to create configurable custom machines [8–11]; Teramac allows experiments using many hundreds of FPGAs by providing a routing-rich environment for implementing user designs, built from custom FPGAs, MCMs, and PC boards.


Computers & Graphics | 1997

Implementations of Cube-4 on the Teramac custom computing machine

Urs Kanus; Michael Meißner; Wolfgang Straßer; Hanspeter Pfister; Arie E. Kaufman; Rick Amerson; Richard J. Carter; W. Bruce Culbertson; Philip J. Kuekes; Greg Snider

We present two implementations of the Cube-4 volume rendering architecture, developed at SUNY Stony Brook, on the Teramac custom computing machine. Cube-4 uses a slice-parallel ray-casting algorithm that allows for a parallel and pipelined implementation of ray-casting. Tri-linear interpolation, surface normal estimation from interpolated samples, shading, classification, and compositing are part of the rendering pipeline. Using the partitioning schemes introduced in this paper, Cube-4 is capable of rendering large datasets (e.g., 1024³) in real time with a limited number of rendering pipelines. Teramac is a hardware simulator developed at Hewlett-Packard Research Laboratories. Teramac belongs to the new class of custom computing machines, which combine the speed of special-purpose hardware with the flexibility of general-purpose computers. Using Teramac as a development tool, we implemented two working Cube-4 prototypes capable of rendering 128³ datasets in 0.65 s at a very low 0.96 MHz processing frequency. The results from these implementations indicate scalable performance with the number of rendering pipelines and real-time frame rates for high-resolution datasets.
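Two of the pipeline stages named above lend themselves to a short sketch: trilinear interpolation of a sample inside a voxel cell and front-to-back compositing along a ray. The code below is a plain software illustration of those operations, not the hardware pipeline itself; the data layout is an assumption made for the example.

```python
# Software sketch of two pipeline stages mentioned in the abstract:
# trilinear interpolation and front-to-back compositing. The data
# layout and the early-termination threshold are illustrative choices.

def trilinear(c, x, y, z):
    """Interpolate inside a unit voxel cell.

    c -- corner values indexed as c[i][j][k] for i, j, k in {0, 1}
    x, y, z -- fractional sample position in [0, 1]
    """
    c00 = c[0][0][0] * (1 - x) + c[1][0][0] * x
    c10 = c[0][1][0] * (1 - x) + c[1][1][0] * x
    c01 = c[0][0][1] * (1 - x) + c[1][0][1] * x
    c11 = c[0][1][1] * (1 - x) + c[1][1][1] * x
    c0 = c00 * (1 - y) + c10 * y
    c1 = c01 * (1 - y) + c11 * y
    return c0 * (1 - z) + c1 * z

def composite_front_to_back(samples):
    """Accumulate (color, opacity) samples along a ray, front to back."""
    color, alpha = 0.0, 0.0
    for sample_color, sample_alpha in samples:
        color += (1.0 - alpha) * sample_alpha * sample_color
        alpha += (1.0 - alpha) * sample_alpha
        if alpha >= 0.99:      # early ray termination
            break
    return color, alpha

corners = [[[0, 1], [0, 1]], [[0, 1], [0, 1]]]   # value equals z at corners
print(trilinear(corners, 0.5, 0.5, 0.25))         # -> 0.25
print(composite_front_to_back([(1.0, 0.5), (0.2, 0.5)]))  # -> (0.55, 0.75)
```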


International Symposium on Neural Networks | 2011

Review and unification of learning framework in Cog Ex Machina platform for memristive neuromorphic hardware

Anatoli Gorchetchnikov; Massimiliano Versace; Heather Ames; Ben Chandler; Jasmin Léveillé; Gennady Livitz; Ennio Mingolla; Greg Snider; Rick Amerson; Dick Carter; Hisham Abdalla; Muhammad Shakeel Qureshi

Realizing adaptive brain functions subserving perception, cognition, and motor behavior on biological temporal and spatial scales remains out of reach for even the fastest computers. Newly introduced memristive hardware approaches open the opportunity to implement dense, low-power synaptic memories of up to 10¹⁵ bits per square centimeter. Memristors have the unique property of “remembering” the past history of their stimulation in their resistive state and do not require power to maintain their memory, making them ideal candidates to implement large arrays of plastic synapses supporting learning in neural models. Over the past decades, many learning rules have been proposed in the literature to explain how neural activity shapes synaptic connections to support adaptive behavior. To ensure an optimal implementation of a large variety of learning rules in hardware, a general and easily parameterized form of learning rule must be designed. This general-form learning equation would allow instantiation of multiple learning rules through different parameterizations, without rewiring the hardware. This paper characterizes a subset of local learning rules amenable to implementation in memristive hardware. The analyzed rules belong to four broad classes: Hebb rule derivatives with various methods for gating learning and decay, threshold rule variations including the covariance and BCM families, input-reconstruction-based learning rules, and explicit temporal-trace-based rules.
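A general, parameterized local rule of the kind described could combine a gated Hebbian term with thresholds and decay, so that specific rule families fall out of different parameter settings. The sketch below is one plausible parameterization written for illustration; it is not the specific general-form equation derived in the paper.

```python
# Illustrative parameterized local learning rule (an assumption, not the
# exact general form in the paper): a thresholded Hebbian product with
# weight decay. Different parameter settings recover different rule
# families without changing the update code.

def update_weight(w, pre, post, p):
    """One local weight update.

    w    -- current synaptic weight
    pre  -- presynaptic activity
    post -- postsynaptic activity
    p    -- dict of parameters: lr, theta_pre, theta_post, decay
    """
    hebb = (pre - p["theta_pre"]) * (post - p["theta_post"])
    return w + p["lr"] * hebb - p["decay"] * w

plain_hebb = {"lr": 0.01, "theta_pre": 0.0, "theta_post": 0.0, "decay": 0.0}
covariance = {"lr": 0.01, "theta_pre": 0.5, "theta_post": 0.5, "decay": 0.0}

w = 0.2
print(update_weight(w, pre=1.0, post=1.0, p=plain_hebb))   # strengthens
print(update_weight(w, pre=1.0, post=0.2, p=covariance))   # weakens
```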


Custom Integrated Circuits Conference | 1996

An FPGA for multi-chip reconfigurable logic

Rick Amerson; Richard J. Carter; W. Culbertson; Philip J. Kuekes; Greg Snider; Lyle Albertson

The Plasma chip, designed specifically to address issues important to custom computing machines (CCMs), completes a completely automatic place and route in approximately three seconds. Plasma FPGAs, fabricated in 0.8 micron CMOS, are packaged in large multichip modules (MCMs). Plasma introduces several innovative architectural concepts, including hardware support for large multiported register files.
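The abstract mentions hardware support for large multiported register files; the behavioral sketch below illustrates what "multiported" means (several simultaneous reads plus a write per cycle). It is a software model written for illustration, not Plasma's actual circuitry.

```python
# Behavioral sketch of a multiported register file: several read ports
# and a write port serviced in the same cycle. Purely illustrative; it
# does not model Plasma's actual implementation.

class MultiportedRegisterFile:
    def __init__(self, depth, read_ports):
        self.regs = [0] * depth
        self.read_ports = read_ports

    def cycle(self, read_addrs, write_addr=None, write_data=None):
        """Service all read ports, then an optional write, in one 'cycle'."""
        assert len(read_addrs) <= self.read_ports
        values = [self.regs[a] for a in read_addrs]
        if write_addr is not None:
            self.regs[write_addr] = write_data
        return values

rf = MultiportedRegisterFile(depth=16, read_ports=4)
rf.cycle(read_addrs=[], write_addr=3, write_data=42)
print(rf.cycle(read_addrs=[3, 0, 3]))   # -> [42, 0, 42]
```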


BMC Neuroscience | 2011

MoNETA: massive parallel application of biological models navigating through virtual Morris water maze and beyond

Anatoli Gorchetchnikov; Jasmin Léveillé; Massimiliano Versace; Heather Ames; Gennady Livitz; Benjamin Chandler; Ennio Mingolla; Dick Carter; Rick Amerson; Hisham Abdalla; Shakeel M Qureshi; Greg Snider

The primary goal of the Modular Neural Exploring Traveling Agent (MoNETA) project is to create an autonomous agent capable of object recognition and localization, navigation, and planning in virtual and real environments. Major components of the system perform sensory object recognition, motivation and rewards processing, goal selection, allocentric representation of the world, spatial planning, and motor execution. MoNETA is based on the real-time, massively parallel Cog Ex Machina environment co-developed by Hewlett-Packard Laboratories and the Neuromorphics Lab at Boston University. The agent is tested in virtual environments replicating neurophysiological and psychological experiments with real rats. The currently used environment replicates the Morris water maze [1].

The motivational system represents the internal state of the agent that can be adjusted by sensory inputs. In the Morris water maze, only one drive can be satisfied (the desire to get out of the water); it persists as long as the animat is swimming and sharply decreases as soon as it is fully positioned on the platform. Another drive, curiosity, is constantly active and is never satisfied. It forces the agent to explore unfamiliar parts of the environment. Familiarity with environmental locations provides inhibition to the curiosity drive in a selective manner, so that recently explored locations are less appealing than either unexplored locations or locations that were explored a long time ago. The main output of the motivational system is a goal selection map. It is based on competition between goals set by the curiosity system and goals learned by the animat. The goal selection map uses a winner-take-all selection of the most prominent input signal as a winning goal. Because curiosity-driven goals receive weaker inputs than well-learned reward locations, they can win only if there are no prominent inputs corresponding to learned goals with an active motivational drive.

The spatial planning system is built around a previously developed neural algorithm for goal-directed navigation [2]. The original model provided the desired destination and left it up to the virtual environment to move the animat to that location. In MoNETA the model was extended by a chain of neural populations that convert the allocentric desired destination into an allocentric desired direction and further into a rotational velocity motor command. A second extension of the model deals with the mapping of the environment. The original algorithm included goal and obstacle information in path planning, but this information was provided in the form of allocentric maps where the locations of both the goals and obstacles were received directly from the environment. MoNETA uses these maps, but also creates them from egocentric sensory information through a process of active exploration. Although the current version uses only somatosensory information, visual input will be added in later stages. The system converts egocentric representations to allocentric ones and then learns the mapping of obstacles and goals in the environment. It uses a learning rule that is local to dendrites and does not require any postsynaptic activity. The complete implementation of MoNETA consists of 75,301 neurons and 1,362,705 synapses.
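The goal-selection step described above reduces to a winner-take-all choice between curiosity-driven and learned-reward inputs, with familiarity selectively dampening curiosity. The sketch below illustrates that selection logic; the input values and the familiarity discount are hypothetical, not MoNETA's actual parameters or network dynamics.

```python
# Illustrative winner-take-all goal selection over a small set of
# candidate locations. Input strengths and the familiarity discount are
# hypothetical values, not MoNETA's actual parameters.

def select_goal(learned_reward, curiosity, familiarity, drive_active):
    """Pick the location with the strongest combined goal input.

    learned_reward -- dict location -> reward strength (learned goals)
    curiosity      -- dict location -> curiosity strength
    familiarity    -- dict location -> recent-exploration level in [0, 1]
    drive_active   -- whether the drive behind the learned goals is active
    """
    scores = {}
    for loc in curiosity:
        # Familiar locations are less appealing to the curiosity system.
        score = curiosity[loc] * (1.0 - familiarity.get(loc, 0.0))
        if drive_active:
            # Learned reward locations provide stronger input when their
            # motivational drive is active, so they usually win.
            score = max(score, learned_reward.get(loc, 0.0))
        scores[loc] = score
    return max(scores, key=scores.get)       # winner-take-all

locations = {"platform": 0.2, "corner": 0.6, "wall": 0.6}
print(select_goal({"platform": 0.9}, locations, {"corner": 0.8}, True))   # platform
print(select_goal({"platform": 0.9}, locations, {"corner": 0.8}, False))  # wall
```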


Eurographics Conference on Graphics Hardware | 1996

Cube-4 implementations on the Teramac custom computing machine

Urs Kanus; Michael Meißner; Wolfgang Straßer; Hanspeter Pfister; Arie E. Kaufman; Rick Amerson; Richard J. Carter; W. Bruce Culbertson; Philip J. Kuekes; Greg Snider

We present two implementations of the Cube-4 volume rendering architecture on the Teramac custom computing machine. Cube-4 uses a slice-parallel ray-casting algorithm that allows for a parallel and pipelined implementation of ray-casting with tri-linear interpolation and surface normal estimation from interpolated samples. Shading, classification, and compositing are part of the rendering pipeline. With the partitioning schemes introduced in this paper, Cube-4 is capable of rendering large datasets with a limited number of pipelines. The Teramac hardware simulator at Hewlett-Packard Research Laboratories, Palo Alto, CA, on which Cube-4 was implemented, belongs to the new class of custom computing machines. Teramac combines the speed of special-purpose hardware with the flexibility of general-purpose computers. With Teramac as a development tool, we were able to implement working Cube-4 prototypes in just five weeks, capable of rendering, for example, datasets of 128³ voxels in 0.65 seconds at a 0.96 MHz processing frequency. The performance results from these implementations indicate real-time performance for high-resolution datasets.
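The quoted figures allow a quick sanity check: 128³ voxels rendered in 0.65 s at a 0.96 MHz clock works out to a few voxels processed per clock cycle, which is consistent with several parallel rendering pipelines. The short calculation below reproduces that arithmetic; the voxels-per-cycle interpretation is our own back-of-the-envelope reading, not a number reported in the paper.

```python
# Sanity-check arithmetic on the quoted performance figures. The
# "voxels per cycle" interpretation is an assumption for illustration,
# not a figure reported in the paper.

voxels = 128 ** 3                    # 2,097,152 samples in the dataset
frame_time_s = 0.65
clock_hz = 0.96e6

voxels_per_second = voxels / frame_time_s
cycles_per_frame = clock_hz * frame_time_s
voxels_per_cycle = voxels / cycles_per_frame

print(f"{voxels_per_second:,.0f} voxels/s")    # ~3.2 million voxels/s
print(f"{voxels_per_cycle:.2f} voxels/cycle")  # ~3.4, i.e. multiple pipelines
```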


Storage and Retrieval for Image and Video Databases | 1994

A Twenty-Seven Chip MCM-C

Rick Amerson; Philip J. Kuekes

