Publication


Featured research published by Richard W. Linderman.


IEEE Transactions on Neural Networks | 2014

Memristor Crossbar-Based Neuromorphic Computing System: A Case Study

Miao Hu; Hai Li; Yiran Chen; Qing Wu; Garrett S. Rose; Richard W. Linderman

By mimicking highly parallel biological systems, neuromorphic hardware provides information processing within a compact and energy-efficient platform. However, the traditional von Neumann architecture and limited signal connections have severely constrained the scalability and performance of such hardware implementations. Recently, many research efforts have investigated the use of recently discovered memristors in neuromorphic systems, motivated by the similarity of memristors to biological synapses. In this paper, we explore the potential of a memristor crossbar array that functions as an autoassociative memory and apply it to brain-state-in-a-box (BSB) neural networks. In particular, the recall and training functions of a multi-answer character recognition process based on the BSB model are studied. The robustness of the BSB circuit is analyzed and evaluated through extensive Monte Carlo simulations that consider input defects, process variations, and electrical fluctuations. The results show that the hardware-based training scheme proposed in the paper can alleviate, and even cancel out, the majority of these noise effects.
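
The recall dynamics referred to above can be sketched in a few lines; the following is a minimal software stand-in for the crossbar's analog matrix-vector product, with the feedback gain, decay factor, and Hebbian toy training chosen purely for illustration rather than taken from the paper.

    import numpy as np

    def bsb_recall(A, x0, alpha=0.3, lam=1.0, max_iters=100):
        """Iterate the brain-state-in-a-box recall dynamics until the state settles.

        A     : trained autoassociative weight matrix (n x n)
        x0    : noisy input pattern, entries roughly in [-1, 1]
        alpha : feedback gain on the crossbar product A @ x
        lam   : decay factor on the current state
        """
        x = np.asarray(x0, dtype=float)
        for _ in range(max_iters):
            # In hardware the product A @ x would be computed in the analog domain
            # by the memristor crossbar; here it is an ordinary matrix-vector multiply.
            x_next = np.clip(alpha * (A @ x) + lam * x, -1.0, 1.0)
            if np.allclose(x_next, x):
                break
            x = x_next
        return x

    # Toy usage: store one bipolar prototype with a Hebbian outer product and
    # recall it from a corrupted copy.
    proto = np.sign(np.random.randn(16))
    A = np.outer(proto, proto) / 16
    noisy = proto.copy()
    noisy[:4] *= -1                      # flip a few entries to simulate input defects
    print(np.array_equal(np.sign(bsb_recall(A, noisy)), proto))   # True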


IEEE Transactions on Computers | 2013

A Parallel Neuromorphic Text Recognition System and Its Implementation on a Heterogeneous High-Performance Computing Cluster

Qinru Qiu; Qing Wu; Morgan Bishop; Robinson E. Pino; Richard W. Linderman

Given recent progress in high-performance computing (HPC) technologies, research in computational intelligence has entered a new era. In this paper, we present an HPC-based context-aware intelligent text recognition system (ITRS) that serves as the physical layer of machine reading. A parallel computing architecture is adopted that combines HPC technologies with advances in neuromorphic computing models. The algorithm learns from what has been read and, based on the obtained knowledge, forms anticipations of word- and sentence-level context. The information processing flow of the ITRS imitates the function of the neocortex: it couples a large number of simple pattern detection modules with an advanced information association layer to achieve perception and recognition. This architecture provides robust performance on images with heavy noise. After performance optimization, the implemented ITRS software processes about 16 to 20 scanned pages per second on the 500 TFLOPS (trillion floating-point operations per second) Condor HPC at the Air Force Research Laboratory Information Directorate (AFRL/RI).
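
A toy sketch of the two-stage flow described above: independent low-level pattern detectors propose character candidates, and a word-level association stage re-ranks whole-word hypotheses against a vocabulary. The detector scores, vocabulary, and scoring rule below are invented for illustration and are not the ITRS algorithms.

    from itertools import product

    # Stage 1 (pattern detection): for each character position, a bank of simple
    # detectors returns candidate characters with confidence scores.  The scores
    # here are hard-coded to stand in for noisy image-level matching.
    detector_output = [
        {"c": 0.6, "e": 0.4},       # position 0
        {"a": 0.7, "o": 0.3},       # position 1
        {"t": 0.55, "r": 0.45},     # position 2
    ]

    # Stage 2 (information association): score whole-word hypotheses against a
    # vocabulary so that context, not just per-character confidence, decides.
    vocabulary = {"cat", "car", "ear", "oar"}

    def best_word(detections, vocab):
        candidates = []
        for chars in product(*(d.items() for d in detections)):
            word = "".join(c for c, _ in chars)
            score = 1.0
            for _, p in chars:
                score *= p
            if word in vocab:
                candidates.append((score, word))
        return max(candidates, default=(0.0, None))

    print(best_word(detector_output, vocabulary))   # highest-scoring in-vocabulary word: 'cat'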


international conference on multimedia and expo | 2008

Performance optimization for pattern recognition using associative neural memory

Qing Wu; Prakash Mukre; Richard W. Linderman; Thomas E. Renz; Daniel J. Burns; Michael J. Moore; Qinru Qiu

In this paper, we present our work on the implementation and performance optimization of the recall operation of the brain-state-in-a-box (BSB) model on the Cell Broadband Engine processor. We apply optimization techniques to different parts of the algorithm to improve the overall computation and communication performance of the BSB recall algorithm. Runtime measurements show that we achieve about 70% of the theoretical peak performance of the processor.
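
The 70% figure is a ratio of measured to theoretical throughput. Below is a minimal sketch of that bookkeeping for the matrix-vector multiply that dominates BSB recall; the dimension, iteration count, and the peak-GFLOPS denominator are placeholders for illustration, not Cell Broadband Engine specifics.

    import time
    import numpy as np

    n = 4096                        # pattern dimension (placeholder)
    iters = 200
    A = np.random.rand(n, n).astype(np.float32)
    x = np.random.rand(n).astype(np.float32)

    start = time.perf_counter()
    for _ in range(iters):
        x = np.clip(0.3 * (A @ x) + x, -1.0, 1.0)
    elapsed = time.perf_counter() - start

    # A matrix-vector product costs about 2*n*n floating point operations
    # (multiply + add); the elementwise scale/add/clip terms are O(n) and ignored.
    achieved_gflops = 2.0 * n * n * iters / elapsed / 1e9
    assumed_peak_gflops = 25.6      # illustrative peak figure, not a measured value
    print(f"achieved: {achieved_gflops:.1f} GFLOPS "
          f"({100 * achieved_gflops / assumed_peak_gflops:.0f}% of assumed peak)")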


2011 IEEE Symposium on Computational Intelligence, Cognitive Algorithms, Mind, and Brain (CCMB) | 2011

Confabulation based sentence completion for machine reading

Qinru Qiu; Qing Wu; Daniel J. Burns; Michael J. Moore; Robinson E. Pino; Morgan Bishop; Richard W. Linderman

Sentence completion and prediction refers to the capability of filling in missing words in incomplete sentences. It is one of the keys to reading comprehension, making sentence completion an indispensable component of machine reading. Cogent confabulation is a bio-inspired computational model that mimics human information processing. The confabulation knowledge base is built with an unsupervised machine learning algorithm that extracts relations between objects at the symbolic level. In this work, we propose performance-improved training and recall algorithms that apply the cogent confabulation model to the sentence completion problem. Our training algorithm adopts a two-level hash table, which significantly improves training speed, so that a large knowledge base can be built at relatively low computational cost. The proposed recall function fills in missing words based on the sentence context. Experimental results show that our software completes trained sentences with 100% accuracy. It also gives semantically correct answers to more than two thirds of the test sentences it has not been trained on.
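
A compact sketch of the two-level table and recall rule outlined above: the outer level is keyed by an observed context word, the inner level by a candidate word, and recall picks the candidate with the highest summed log-conditional support. The smoothing floor, candidate filtering, and toy corpus are illustrative choices, not the paper's.

    import math
    from collections import defaultdict

    # Two-level hash table: outer key = context word, inner key = candidate word.
    kb = defaultdict(lambda: defaultdict(int))
    totals = defaultdict(int)

    def train(sentences):
        for sentence in sentences:
            words = sentence.lower().split()
            for i, target in enumerate(words):
                for j, context in enumerate(words):
                    if i != j:
                        kb[context][target] += 1
                        totals[context] += 1

    def recall(sentence_with_gap, vocab):
        """Fill the '_' slot with the word most supported by the rest of the sentence."""
        words = sentence_with_gap.lower().split()
        context = [w for w in words if w != "_"]
        candidates = vocab - set(context)    # crude simplification: don't repeat a context word
        def cogency(candidate):
            # Sum of log conditional supports, with a tiny floor for unseen pairs.
            return sum(math.log((kb[c][candidate] + 1e-6) / (totals[c] + 1e-6)) for c in context)
        return max(candidates, key=cogency)

    corpus = [
        "the cat sat on the mat",
        "a cat sat on the mat",
        "the dog slept in the kennel",
    ]
    train(corpus)
    vocab = {w for s in corpus for w in s.split()}
    print(recall("the cat sat on the _", vocab))    # -> mat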


IEEE Transactions on Aerospace and Electronic Systems | 2000

Design, implementation and evaluation of parallel pipelined STAP on parallel computers

Alok N. Choudhary; Wei-keng Liao; Donald D. Weiner; Pramod K. Varshney; Richard W. Linderman; Mark Linderman; Russell D. Brown

Performance results are presented for the design and implementation of parallel pipelined space-time adaptive processing (STAP) algorithms on parallel computers. In particular, the issues involved in parallelization, our approach to it, and performance results on an Intel Paragon are described. We discuss the process of developing software for such an application when latency and throughput must be considered together, and present the tradeoffs made with respect to inter- and intratask communication and data redistribution. The results show that not only was scalable performance achieved for the individual STAP component tasks, but linear speedups were also obtained for the integrated task performance, for both latency and throughput. Results are presented for up to 236 compute nodes (limited by the machine size available to us). Another interesting observation from the implementation is that assigning additional processors to one task can improve the performance of other tasks without any increase in the number of processors assigned to them, something that normally cannot be predicted by theoretical analysis.
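
A toy model of the latency/throughput tradeoff in a pipelined multi-task computation like the one described: throughput is set by the slowest stage, while latency is the sum of all stages, so extra processors help most at the bottleneck. The stage names, single-processor times, and perfect-speedup assumption below are illustrative only, not STAP measurements.

    def pipeline_metrics(base_times, processors):
        stage_times = [t / p for t, p in zip(base_times, processors)]
        latency = sum(stage_times)              # time for one data cube to traverse the pipeline
        throughput = 1.0 / max(stage_times)     # cubes per second, limited by the slowest stage
        return latency, throughput

    # Assumed single-processor times (seconds) for three notional pipeline stages.
    base = {"doppler": 4.0, "beamform": 12.0, "detect": 2.0}

    for alloc in ({"doppler": 4, "beamform": 4, "detect": 2},
                  {"doppler": 4, "beamform": 12, "detect": 2}):
        lat, thr = pipeline_metrics(base.values(), [alloc[k] for k in base])
        print(alloc, f"latency={lat:.2f}s throughput={thr:.2f}/s")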


international symposium on neural networks | 2010

Neuromorphic algorithms on clusters of PlayStation 3s

Tarek M. Taha; Pavan Yalamanchili; Mohammad Ashraf Bhuiyan; Rommel Jalasutram; Chong Chen; Richard W. Linderman

There is significant interest in the research community in developing large-scale, high-performance implementations of neuromorphic models, which have the potential to provide significantly stronger information processing capabilities than current computing algorithms. In this paper we present the implementation of five neuromorphic models on a 50 TeraFLOPS, 336-node PlayStation 3 cluster at the Air Force Research Laboratory. The five models span two classes of neuromorphic algorithms: hierarchical Bayesian and spiking neural networks. Our results indicate that the models scale well on this cluster and can emulate between 10^8 and 10^10 neurons. In particular, our study indicates that a cluster of PlayStation 3s can provide an economical yet powerful platform for simulating large-scale neuromorphic models.
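
A back-of-envelope sketch of the kind of capacity estimate behind a 10^8 to 10^10 neuron figure; only the 50 TFLOPS cluster size comes from the abstract, while the sustained fraction, per-neuron cost, and update rate are assumptions chosen for illustration.

    # Rough capacity estimate: how many model neurons a cluster can update in real time.
    cluster_peak_flops = 50e12          # 50 TFLOPS PS3 cluster (from the abstract)
    sustained_fraction = 0.10           # assumed fraction of peak actually sustained
    ops_per_neuron_update = 100.0       # assumed cost of one neuron + synapse update
    updates_per_second = 100.0          # assumed 10 ms simulation time step

    neurons = cluster_peak_flops * sustained_fraction / (ops_per_neuron_update * updates_per_second)
    print(f"~{neurons:.1e} neurons")    # ~5.0e+08 neurons under these assumptions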


ieee radar conference | 2006

Swathbuckler: wide swath SAR system architecture

Richard W. Linderman

Between 2001 and 2005, the Swathbuckler wide-swath SAR real-time image formation multinational project evolved a system architecture to continually process 40 km strips into high resolution (<1 m) imagery. The rapid advance of COTS memory, I/O, and processor technology drove the supercomputer cost down from over $1M to under $100K. This paper discusses the key technology improvements driving the affordable solution. In particular, the 8 gigabytes of memory attached to standard dual Xeon server nodes arranged in a standard cluster greatly simplified the previously daunting task of SAR image formation.
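
A back-of-envelope check of why several gigabytes of node memory matter here: an entire image patch can be held in RAM during formation. The along-track patch length and sample size below are assumptions chosen only to illustrate the arithmetic; they are not Swathbuckler parameters.

    # Memory needed to hold one complex-valued image patch entirely in RAM.
    swath_width_m = 40_000          # 40 km swath (from the abstract)
    patch_length_m = 10_000         # assumed along-track patch length
    resolution_m = 1.0              # ~1 m pixels (from the abstract)
    bytes_per_pixel = 8             # assumed complex float32 sample

    pixels = (swath_width_m / resolution_m) * (patch_length_m / resolution_m)
    gib = pixels * bytes_per_pixel / 2**30
    print(f"{gib:.1f} GiB per patch")   # ~3.0 GiB, comfortably inside an 8 GB node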


IEEE Transactions on Computers | 1998

A dependable high performance wafer scale architecture for embedded signal processing

Richard W. Linderman; Ralph Kohler; Mark Linderman

A high performance, programmable, floating point multiprocessor architecture has been designed specifically to exploit advanced two- and three-dimensional hybrid wafer scale packaging to achieve low size, weight, and power, and to improve reliability for embedded systems applications. Processing elements composed of a 0.8 micron CMOS dual-processor chip and commercial synchronous SRAMs achieve more than 100 MFLOPS/Watt. This power efficiency allows up to 32 processing elements to be incorporated into a single 3D multichip module, eliminating multiple discrete packages and thousands of wirebonds. The dual-processor chip can dynamically switch between independent processing, watchdog checking, and coprocessing modes. A flat SRAM memory provides predictable instruction timing and enables independent and accurate performance prediction.
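
A small worked example of what the 100 MFLOPS/Watt figure implies at the module level; the 32-element count and efficiency come from the abstract, while the per-element throughput is an assumed placeholder.

    # Power implied by the stated energy efficiency for a fully populated module.
    elements_per_module = 32            # from the abstract
    mflops_per_element = 50.0           # assumed sustained throughput per processing element
    efficiency_mflops_per_watt = 100.0  # from the abstract

    module_mflops = elements_per_module * mflops_per_element
    module_watts = module_mflops / efficiency_mflops_per_watt
    print(f"{module_mflops:.0f} MFLOPS in about {module_watts:.0f} W")   # 1600 MFLOPS, ~16 W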


international symposium on neural networks | 2011

Unified perception-prediction model for context aware text recognition on a heterogeneous many-core platform

Qinru Qiu; Qing Wu; Richard W. Linderman



ieee radar conference | 2006

Swathbuckler: HPC processing and information exploitation

Scot Tucker; Robert Vienneau; Joshua Corner; Richard W. Linderman


Collaboration


An overview of Richard W. Linderman's collaborations.

Top Co-Authors

Qing Wu, Air Force Research Laboratory
George O. Ramseyer, Air Force Research Laboratory
Scott E. Spetka, State University of New York Polytechnic Institute
Michael J. Moore, Air Force Research Laboratory
Morgan Bishop, Air Force Research Laboratory
Daniel J. Burns, Air Force Research Laboratory
Dennis Fitzgerald, Air Force Research Laboratory
Mark Linderman, Air Force Research Laboratory
Robinson E. Pino, Air Force Materiel Command