Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Brian Telfer is active.

Publication


Featured research published by Brian Telfer.


Neural Networks | 1992

Original Contribution: High capacity pattern recognition associative processors

David Casasent; Brian Telfer

We distinguish between many:1 (distortion-invariant) and 1:1 (large-class) pattern recognition associative processors: in the former, many different input keys are associated with the same output recollection vector; in the latter, each key is associated with a different recollection vector. A variety of associative processor synthesis algorithms are compared, showing that one can: store M vector pairs (where M > N, and N is the dimension of the keys) in fewer memory elements than standard digital storage requires; handle linearly dependent key vectors; and achieve robust noise performance and quantization by design. We show that one must employ new recollection vector encoding techniques to improve storage density; otherwise the standard direct-storage nearest-neighbor processor is preferable. We find Ho-Kashyap associative processors and L-max recollection vector encoding to be preferable, and we suggest new performance measures for associative processors.
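
As a hedged aside on mechanics: these processors recall with a single matrix-vector product, and pseudoinverse synthesis is one of the standard baselines the abstract compares against. The sketch below (NumPy, with made-up dimensions and data) illustrates that recall step only, not the paper's encoding techniques or the Ho-Kashyap synthesis it favors.

    # Minimal sketch of one-pass linear associative-processor recall.
    # Pseudoinverse synthesis is a standard baseline; dimensions and
    # data are illustrative, not from the paper.
    import numpy as np

    rng = np.random.default_rng(0)
    N, M, P = 32, 20, 8                    # key dim, stored pairs, recollection dim
    K = rng.standard_normal((N, M))        # columns are key vectors
    R = rng.standard_normal((P, M))        # columns are recollection vectors

    W = R @ np.linalg.pinv(K)              # W minimizes ||W K - R||_F

    key = K[:, 3] + 0.05 * rng.standard_normal(N)   # noisy version of key 3
    recalled = W @ key                     # one-pass recall
    print(np.linalg.norm(recalled - R[:, 3]))       # small recall error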


Applied Optics | 1990

Ho-Kashyap optical associative processors

Brian Telfer; David Casasent

A Ho-Kashyap (H-K) associative processor (AP) is shown to have a larger storage capacity than the pseudoinverse and correlation APs and to accurately store linearly dependent key vectors. Prior APs have not demonstrated good performance on linearly dependent key vectors. The AP is attractive for optical implementation. A new robust H-K AP is proposed to improve noise performance. These results are demonstrated both theoretically and by Monte Carlo simulation. The H-K AP is also shown to outperform the pseudoinverse AP in an aircraft recognition case study. A technique is developed to indicate the least reliable output vector elements and a new AP error correcting synthesis technique is advanced.
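
The Ho-Kashyap name refers to a classical iterative procedure for solving systems of linear inequalities. The sketch below is a textbook rendering of that iteration for a single two-class discriminant, under our own variable names; it is not the paper's optical implementation or its robust variant.

    # Classical Ho-Kashyap iteration: find a with Y @ a > 0 by jointly
    # adjusting a margin vector b > 0 and the weights a = pinv(Y) @ b.
    import numpy as np

    def ho_kashyap(Y, rho=0.5, iters=500, tol=1e-6):
        Yp = np.linalg.pinv(Y)
        b = np.ones(Y.shape[0])            # positive margins
        a = Yp @ b
        for _ in range(iters):
            e = Y @ a - b                  # error against current margins
            if np.all(np.abs(e) < tol):
                break                      # converged
            b = b + rho * (e + np.abs(e))  # grow margins only where e > 0
            a = Yp @ b
        return a, b

    # Toy usage: two separable Gaussian clouds, bias term appended,
    # class-2 rows sign-flipped so correct classification means Y @ a > 0.
    rng = np.random.default_rng(1)
    X1 = rng.standard_normal((20, 2)) + 2.0
    X2 = rng.standard_normal((20, 2)) - 2.0
    Y = np.vstack([np.hstack([X1, np.ones((20, 1))]),
                   -np.hstack([X2, np.ones((20, 1))])])
    a, _ = ho_kashyap(Y)
    print(np.mean(Y @ a > 0))              # ~1.0 when the classes are separable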


Applied Optics | 1989

Key and recollection vector effects on heteroassociative memory performance.

David Casasent; Brian Telfer

Most associative memory work has concentrated on autoassociative memories (AAMs). These associative processors provide reduced noise and error correction in their output data. We will consider heteroassociative memories (HAMs), which are needed to provide decisions on the class of the input data and inferences for subsequent processing. We derive new equations for the storage capacity and noise performance of HAMs, emphasize how they differ from those derived for AAMs, suggest new performance measures to be used, and show how different recollection vector encodings can improve HAM performance.
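
The AAM/HAM contrast can be made concrete in a few lines; the pseudoinverse synthesis and one-hot class labels below are illustrative stand-ins, not the paper's constructions.

    # An AAM maps a key back to itself (noise reduction); a HAM maps a
    # key to a class/inference vector (a decision). Illustrative data.
    import numpy as np

    rng = np.random.default_rng(2)
    N, M = 16, 4
    K = rng.standard_normal((N, M))            # keys (columns)
    C = np.eye(M)                              # one-hot class labels

    W_auto = K @ np.linalg.pinv(K)             # AAM: projector onto span(K)
    W_hetero = C @ np.linalg.pinv(K)           # HAM: outputs class evidence

    noisy = K[:, 2] + 0.1 * rng.standard_normal(N)
    print(np.argmax(W_hetero @ noisy))         # class decision: 2
    print(np.linalg.norm(W_auto @ noisy - K[:, 2]))  # reduced-noise key estimate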


Applied Optics | 1989

Updating optical pseudoinverse associative memories.

Brian Telfer; David Casasent

Selected algorithms for adding to and deleting from optical pseudoinverse associative memories are presented and compared. New realizations of pseudoinverse updating methods using vector inner product matrix bordering and reduced-dimensionality Karhunen-Loeve approximations (which have been used for updating optical filters) are described in the context of associative memories. Greville's theorem is reviewed and compared with the Widrow-Hoff algorithm. Kohonen's gradient projection method is expressed in a different form suitable for optical implementation. The data matrix memory is also discussed for comparison purposes. Memory size, speed and ease of updating, and key vector requirements are the comparison criteria used.
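
Greville's theorem, one of the updating methods reviewed, appends a key to an existing pseudoinverse without recomputing it from scratch. The sketch below is a standard rendering of that column update (function name ours, NumPy only), not the paper's optical realization.

    # Given A and pinv(A), compute pinv([A, a]) incrementally.
    import numpy as np

    def greville_add_column(A, A_pinv, a):
        d = A_pinv @ a
        c = a - A @ d                          # component outside span(A)
        if np.linalg.norm(c) > 1e-10:          # new key is independent
            b = c[None, :] / (c @ c)
        else:                                  # new key is linearly dependent
            b = (d[None, :] @ A_pinv) / (1.0 + d @ d)
        return np.vstack([A_pinv - np.outer(d, b), b])

    # Check against a direct recomputation.
    rng = np.random.default_rng(3)
    A = rng.standard_normal((8, 3))
    a = rng.standard_normal(8)
    updated = greville_add_column(A, np.linalg.pinv(A), a)
    print(np.allclose(updated, np.linalg.pinv(np.hstack([A, a[:, None]]))))  # True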


Neural Networks | 1993

Minimum-cost associative processor for piecewise-hyperspherical classification

Brian Telfer; David Casasent

A new algorithm is presented for generating a neural associative processor with piecewise-hyperspherical decision boundaries for difficult multiclass classification. Two important characteristics of the algorithm are that it represents each class with a near-minimum number of hyperspheres and has proven convergence properties. The algorithm generates hyperspheres sequentially for each class, with the first hyperspheres classifying more training vectors of that class than the later hyperspheres. If a limited number of hyperspheres (neurons) are desired, one can thus select those that correctly classify the largest number of training vectors. Classification results are presented for a three-class 3-dimensional distortion-invariant case study (invariant to changes in position, scale, and in-plane and out-of-plane rotation). For the case study, the new method is shown to give better recall accuracy with fewer weights than other neural network and conventional pattern recognition methods tested.
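
The decision rule itself is simple: an input is assigned to a class if it falls inside any of that class's hyperspheres, and because spheres are generated in order of how many training vectors they classify, a neuron budget keeps the earliest ones. The centers and radii below are hand-picked for illustration; the paper's algorithm derives them from training data.

    # Piecewise-hyperspherical recall with hand-picked spheres.
    import numpy as np

    spheres = {                                # (center, radius) list per class
        "classA": [(np.array([0.0, 0.0]), 1.0), (np.array([3.0, 0.0]), 0.5)],
        "classB": [(np.array([0.0, 4.0]), 1.5)],
    }

    def classify(x):
        for label, klass in spheres.items():
            if any(np.linalg.norm(x - c) <= r for c, r in klass):
                return label
        return "reject"                        # outside every hypersphere

    print(classify(np.array([0.2, -0.3])))     # classA
    print(classify(np.array([0.0, 3.5])))      # classB
    print(classify(np.array([10.0, 10.0])))    # reject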


international symposium on neural networks | 1991

Minimum-cost Ho-Kashyap associative processor for piecewise-hyperspherical classification

Brian Telfer; David Casasent

A synthesis algorithm is presented for generating a neural associative processor with piecewise-hyperspherical decision boundaries. Two important characteristics of the algorithm are that it represents each class with a near-minimum number of hyperspheres and that it has proven convergence properties. Classification results are presented for a three-class 3D distortion-invariant aircraft case study (invariant to changes in position, scale, and in-plane and out-of-plane rotation). The processor gives 98% accuracy.


Optical Information Processing Systems and Architectures II | 1990

Ho-Kashyap advanced pattern-recognition heteroassociative processors

David Casasent; Brian Telfer

We review different categories of associative processors with attention to the properties of their key and recollection vectors, the test procedures to be used, and the performance measures for comparing various associative processors. We review new pseudoinverse and Ho-Kashyap associative processors and robust versions of each. Quantitative data is presented on the performance of these new pattern recognition associative processors. In all cases we show significant improvement over prior data with M >> N (M is the number of key/recollection vector pairs stored and N is the dimensionality of the input key vector). Quantization of the number of analog levels and comparisons of various recollection vector encodings are considered.


international symposium on neural networks | 1990

Ho-Kashyap content-addressable associative processors

Brian Telfer; David Casasent

The authors compare the storage capacity and other properties of various neural associative processors (APs) and find that the Ho-Kashyap (H-K) AP has the largest storage capacity and can handle linearly dependent keys. General memory (random keys) and distortion-invariant pattern-recognition APs are considered. A content-addressable structure is discussed that further improves recall accuracy and noise performance and decreases the size of the memory matrix. Results from the new H-K content-addressable AP are given. In all cases, the APs use only one pass (no iterations) in recall.


Optics, Illumination, and Image Sensing for Machine Vision III | 1989

Ho-Kashyap Associative Processors

Brian Telfer; David Casasent

A Ho-Kashyap (H-K) associative processor (AP) is demonstrated to have a larger storage capacity than the pseudoinverse AP and to allow linearly dependent key vectors to be accurately stored. A new Robust H-K AP is shown to perform well over all M/N (where M is the number of keys and N is their dimension), specifically when M ≈ N, where the standard pseudoinverse and H-K APs perform poorly. Also considered are variable thresholds, an error-correcting algorithm to allow analog synthesis of the H-K AP, and the different reliabilities of the recollection elements.


Neural Network Models for Optical Computing | 1988

Optical Associative Processors For Visual Perception

David Casasent; Brian Telfer

We consider various associative processor modifications required to allow these systems to be used for visual perception, scene analysis, and object recognition. For these applications, decisions on the class of the objects present in the input image are required, and thus heteroassociative memories are necessary (rather than the autoassociative memories that have received most attention). We analyze the performance of both associative processors and note that there is considerable difference between heteroassociative and autoassociative memories. We describe associative processors suitable for realizing functions such as: distortion invariance (using linear discriminant function memory synthesis techniques), noise and image processing performance (using autoassociative memories in cascade with a heteroassociative processor, with a finite number of autoassociative memory iterations employed), shift invariance (achieved through the use of associative processors operating on feature space data), and the analysis of multiple objects in high noise (achieved using associative processing of the output from symbolic correlators). We detail and provide initial demonstrations of the use of associative processors operating on iconic, feature space, and symbolic data, as well as adaptive associative processors.
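
The AAM-then-HAM cascade described above can be sketched in a few lines; pseudoinverse memories again stand in for the synthesis techniques the paper actually uses.

    # Cascade: autoassociative clean-up, then a heteroassociative decision.
    import numpy as np

    rng = np.random.default_rng(4)
    N, M = 16, 4
    K = rng.standard_normal((N, M))
    W_auto = K @ np.linalg.pinv(K)             # AAM (a linear projector is
                                               # idempotent, so one pass suffices
                                               # here; the paper iterates a
                                               # finite number of times)
    W_hetero = np.eye(M) @ np.linalg.pinv(K)   # HAM with one-hot labels

    x = K[:, 1] + 0.2 * rng.standard_normal(N)
    x = W_auto @ x                             # clean-up pass
    print(np.argmax(W_hetero @ x))             # class decision: 1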

Collaboration


Dive into Brian Telfer's collaborations.

Top Co-Authors


David Casasent

Carnegie Mellon University
