Publication


Featured research published by John B. Hampshire.


IEEE Transactions on Neural Networks | 1990

A novel objective function for improved phoneme recognition using time-delay neural networks

John B. Hampshire; Alex Waibel

The authors present single- and multispeaker recognition results for the voiced stop consonants /b, d, g/ using time-delay neural networks (TDNN), a new objective function for training these networks, and a simple arbitration scheme for improved classification accuracy. With these enhancements, a median 24% reduction in the number of misclassifications made by TDNNs trained with the traditional backpropagation objective function is achieved. This redundancy results in /b, d, g/ recognition rates that consistently exceed 98% for TDNNs trained with individual speakers; it yields a 98.1% recognition rate for a TDNN trained with three male speakers.
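
The abstract does not specify the arbitration scheme. As a rough, hypothetical illustration of the general idea only, the sketch below arbitrates among several independently trained TDNNs by simple majority vote; the function and its behavior are assumptions for illustration, not the scheme used in the paper.

```python
from collections import Counter

def arbitrate(predictions):
    """Combine per-network phoneme decisions by simple majority vote.

    `predictions` is a list of labels (e.g. 'b', 'd', 'g'), one per
    trained TDNN.  Ties are broken by order of first appearance.
    Generic illustration only, not the paper's arbitration scheme.
    """
    label, _ = Counter(predictions).most_common(1)[0]
    return label

# Example: three TDNNs vote on a single voiced stop consonant.
print(arbitrate(['b', 'd', 'b']))  # -> 'b'
```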


IEEE Transactions on Pattern Analysis and Machine Intelligence | 1992

The Meta-Pi network: building distributed knowledge representations for robust multisource pattern recognition

John B. Hampshire; Alex Waibel

The authors present the Meta-Pi network, a multinetwork connectionist classifier that forms distributed low-level knowledge representations for robust pattern recognition, given random feature vectors generated by multiple statistically distinct sources. They illustrate how the Meta-Pi paradigm implements an adaptive Bayesian maximum a posteriori classifier. They also demonstrate its performance in the context of multispeaker phoneme recognition in which the Meta-Pi superstructure combines speaker-dependent time-delay neural network (TDNN) modules to perform multispeaker /b,d,g/ phoneme recognition with speaker-dependent error rates of 2%. Finally, the authors apply the Meta-Pi architecture to a limited source-independent recognition task, illustrating its discrimination of a novel source. They demonstrate that it can adapt to the novel source (speaker), given five adaptation examples of each of the three phonemes.
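
The Meta-Pi combination described above resembles a gated, mixture-of-experts-style superstructure: a gating network assigns a nonnegative weight to each speaker-dependent module, and the final class scores are the weighted sum of the module outputs. The sketch below illustrates only that general idea under those assumptions; the module and gate callables are hypothetical stand-ins, not the authors' implementation.

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def meta_pi_combine(x, modules, gate):
    """Combine speaker-dependent module outputs with gating weights.

    `modules` is a list of callables, each mapping an input frame `x`
    to a vector of class scores (e.g. scores over /b, d, g/).
    `gate` maps `x` to one raw score per module; the softmax of those
    scores weights each module's contribution.  Sketch only.
    """
    weights = softmax(gate(x))                    # one weight per module
    outputs = np.stack([m(x) for m in modules])   # (n_modules, n_classes)
    return weights @ outputs                      # gated class scores

# Example with two trivial "speaker modules" and a gate favoring the first:
m1 = lambda x: np.array([0.8, 0.1, 0.1])
m2 = lambda x: np.array([0.2, 0.6, 0.2])
gate = lambda x: np.array([2.0, 0.0])
print(meta_pi_combine(None, [m1, m2], gate))
```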


Connectionist Models: Proceedings of the 1990 Summer School | 1991

Equivalence Proofs for Multi-Layer Perceptron Classifiers and the Bayesian Discriminant Function

John B. Hampshire; Barak A. Pearlmutter

This paper presents a number of proofs that equate the outputs of a Multi-Layer Perceptron (MLP) classifier and the optimal Bayesian discriminant function for asymptotically large sets of statistically independent training samples. Two broad classes of objective functions are shown to yield Bayesian discriminant performance. The first class consists of “reasonable error measures,” which achieve Bayesian discriminant performance by engendering classifier outputs that asymptotically equate to a posteriori probabilities. This class includes the mean-squared error (MSE) objective function as well as a number of information-theoretic objective functions. The second class consists of classification figures of merit (CFM_mono), which yield a qualified approximation to Bayesian discriminant performance by engendering classifier outputs that asymptotically identify the maximum a posteriori probability for a given input. Conditions and relationships for Bayesian discriminant functional equivalence are given for both classes of objective functions. Differences between the two classes are then discussed briefly in the context of how they might affect MLP classifier generalization, given relatively small training sets.
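
The "reasonable error measure" claim for MSE rests on a standard least-squares fact: with 1-of-K target vectors, the expected squared error decomposes so that its minimizer is the class posterior. A compact statement of this well-known step, with notation chosen here for illustration:

```latex
% Target t_k(x) = 1 if x belongs to class \omega_k, and 0 otherwise.
% The expected squared error of output f_k decomposes as
\mathbb{E}\left[(f_k(x) - t_k)^2\right]
  = \mathbb{E}_x\!\left[\bigl(f_k(x) - P(\omega_k \mid x)\bigr)^2\right]
  + \mathbb{E}_x\!\left[\operatorname{Var}(t_k \mid x)\right],
% so the minimizing output is f_k^*(x) = P(\omega_k \mid x),
% the a posteriori class probability.
```

The second term does not depend on f_k, so minimizing MSE over a sufficiently rich classifier drives its outputs toward the posterior probabilities, which is the asymptotic equivalence the paper formalizes.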


Proceedings of SPIE, the International Society for Optical Engineering | 1999

Collaborative surveillance using both fixed and mobile unattended ground sensor platforms

Christopher P. Diehl; Mahesh Saptharishi; John B. Hampshire; Pradeep K. Khosla

We begin by considering current shortfalls of conventional surveillance systems and discuss the potential advantages of distributed, collaborative surveillance systems. Distributed surveillance systems offer the capability to monitor activity from multiple locations over time, thereby increasing the likelihood of obtaining the discriminating data necessary for interpretation of the activity. Yet the multiplicity of sensors magnifies the volume of data that must be processed. We present our vision of a system that automatically generates timely interpretations of activities in the scene through mechanisms for collaboration among sensing systems and efficient perception methods that complement the sensing paradigm. We then review our recent efforts toward achieving this goal and present initial results.


International Conference on Acoustics, Speech, and Signal Processing | 1990

The Meta-Pi network: connectionist rapid adaptation for high-performance multi-speaker phoneme recognition

John B. Hampshire; Alex Waibel

A multinetwork time-delay neural network (TDNN)-based connectionist architecture that allows multispeaker phoneme discrimination (/b,d,g/) to be performed at the speaker-dependent recognition rate of 98.4% is presented. The overall network gates the phonemic decisions of modules trained on individual speakers to form its overall classification decision. By dynamically adapting to the input speech and focusing on a combination of speaker-specific modules, the network outperforms a single TDNN trained on the speech of all six speakers (95.9%). To train this network, a form of multiplicative connection called the Meta-Pi connection is developed. It is illustrated how the Meta-Pi paradigm implements a dynamically adaptive Bayesian MAP classifier. The network learns, without supervision, to recognize the speech of one particular speaker (99.8%) using a dynamic combination of internal models of other speakers exclusively. The Meta-Pi model is a viable basis for a connectionist speech recognition system that can rapidly adapt to new speakers and varying speaker dialects.


IEEE Transactions on Information Theory | 1992

Tobit maximum-likelihood estimation for stochastic time series affected by receiver saturation

John B. Hampshire; John W. Strohbehn

The Tobit (Tobin probit) model is adapted from the field of econometrics as a maximum-likelihood estimator of PDF (probability density function) parameters for data that have been censored and truncated. A general expression for the Tobit estimator is presented. It is shown that when the (standard) maximum-likelihood estimator is efficient for the random variable with unlimited dynamic range, the unbiased Tobit estimator is efficient for the censored/truncated random variable. The model is presented in detail for the Rayleigh PDF; its efficiency is confirmed, independent of the degree of truncation/censoring. Results from the application of Tobit estimation to simulated data with Rayleigh, log-normal, Rice-Nakagami, and Nakagami-m PDFs are shown to exhibit very low mean-squared error as well. The limitations and computational complexities of the Tobit estimator are discussed.
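
For the Rayleigh case, a Tobit-style censored log-likelihood can be written directly: unsaturated samples contribute the log density, and samples clipped at the receiver's saturation level c contribute the log survival probability. The sketch below is a minimal numerical illustration of that likelihood, assuming right-censoring at a known level c; it is not the paper's estimator or code, and all names are chosen for illustration.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def tobit_rayleigh_nll(sigma, x, c):
    """Negative censored (Tobit-style) log-likelihood for Rayleigh data clipped at c.

    Uncensored samples (x < c) contribute the Rayleigh log density
        log f(x; sigma) = log(x) - 2*log(sigma) - x^2 / (2*sigma^2);
    censored samples (recorded as c) contribute the log survival term
        log P(X >= c) = -c^2 / (2*sigma^2).
    """
    censored = x >= c
    xu = x[~censored]
    ll_uncens = np.sum(np.log(xu) - 2 * np.log(sigma) - xu**2 / (2 * sigma**2))
    ll_cens = -censored.sum() * c**2 / (2 * sigma**2)
    return -(ll_uncens + ll_cens)

# Example: estimate sigma from simulated Rayleigh data saturated at c.
rng = np.random.default_rng(0)
true_sigma, c = 2.0, 3.0
x = np.minimum(rng.rayleigh(true_sigma, size=5000), c)   # receiver clips at c
res = minimize_scalar(tobit_rayleigh_nll, bounds=(0.1, 10.0),
                      args=(x, c), method='bounded')
print(res.x)  # estimated sigma, close to 2.0 despite the clipping
```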


SPIE International Symposium on Sensor Fusion and Decentralized Control in Autonomous Robotic Systems | 1997

A modified reactive control framework for cooperative mobile robots

Jesus Tercero Salido; John M. Dolan; John B. Hampshire; Pradeep K. Khosla

An important class of robotic applications potentially involves multiple, cooperating robots: security or military surveillance, rescue, mining, etc. One of the main challenges in this area is effective cooperative control: how does one determine and orchestrate individual robot behaviors that result in a desired group behavior? Cognitive (planning) approaches allow for explicit coordination between robots, but suffer from high computational demands and a need for a priori, detailed world models. Purely reactive approaches such as that of Brooks are efficient, but lack a mechanism for global control and learning. Neither approach by itself provides a formalism capable of a sufficiently rapid and rich range of cooperative behaviors. Although we accept the usefulness of the reactive paradigm in building up complex behaviors from simple ones, we seek to extend and modify it in several ways. First, rather than restricting primitive behaviors to fixed input-output relationships, we include memory and learning through feedback adaptation of behaviors. Second, rather than a fixed priority of behaviors, our priorities are implicit: they vary depending on environmental stimuli. Finally, we scale this modified reactive architecture to apply not only to an individual robot but also to multiple cooperating robots: at this level, individual robots are like individual behaviors that combine to achieve a desired aggregate behavior. In this paper, we describe our proposed architecture and its current implementation. The application of particular interest to us is the control of a team of mobile robots cooperating to perform area surveillance and target acquisition and tracking.
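
To make the "implicit priority" idea concrete, the sketch below shows one generic way to fuse reactive behaviors whose influence varies with environmental stimuli: each behavior maps the current stimulus to a command and an activation level, and commands are blended in proportion to activation. This is an illustrative stand-in under those assumptions, not the architecture described in the paper; the class and function names are hypothetical.

```python
import numpy as np

class Behavior:
    """A primitive behavior: maps a stimulus to a command and an activation level."""
    def __init__(self, command_fn, activation_fn):
        self.command_fn = command_fn          # stimulus -> command vector
        self.activation_fn = activation_fn    # stimulus -> nonnegative scalar

    def __call__(self, stimulus):
        return self.command_fn(stimulus), self.activation_fn(stimulus)

def fuse(behaviors, stimulus):
    """Blend behavior commands weighted by their stimulus-dependent activations."""
    commands, activations = zip(*(b(stimulus) for b in behaviors))
    w = np.asarray(activations, dtype=float)
    w = w / w.sum() if w.sum() > 0 else np.full(len(w), 1.0 / len(w))
    return sum(wi * np.asarray(ci, dtype=float) for wi, ci in zip(w, commands))

# Example: "avoid obstacle" dominates as an obstacle gets closer.
avoid = Behavior(lambda s: np.array([-1.0, 0.0]),
                 lambda s: 1.0 / max(s["obstacle_dist"], 0.1))
seek = Behavior(lambda s: s["goal_dir"], lambda s: 1.0)
print(fuse([avoid, seek], {"obstacle_dist": 0.2,
                           "goal_dir": np.array([1.0, 1.0])}))
```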


Proceedings of SPIE | 1993

Differential theory of learning for efficient neural network pattern recognition

John B. Hampshire; Bhagavatula Vijaya Kumar

We describe a new theory of differential learning by which a broad family of pattern classifiers (including many well-known neural network paradigms) can learn stochastic concepts efficiently. We describe the relationship between a classifier's ability to generalize well to unseen test examples and the efficiency of the strategy by which it learns. We list a series of proofs that differential learning is efficient in its information and computational resource requirements, whereas traditional probabilistic learning strategies are not. The proofs are illustrated by a simple example that lends itself to closed-form analysis. We conclude with an optical character recognition task for which three different types of differentially generated classifiers generalize significantly better than their probabilistically generated counterparts.


International Symposium on Neural Networks | 1992

Why error measures are sub-optimal for training neural network pattern classifiers

John B. Hampshire; B. V. K. Vijaya Kumar

Pattern classifiers that are trained in a supervised fashion are typically trained with an error-measure objective function such as mean-squared error (MSE) or cross-entropy (CE). These classifiers can in theory yield Bayesian discrimination, but in practice they often fail to do so. The authors explain why this happens and identify a number of characteristics that the optimal objective function for training classifiers must have. They show that classification figures of merit (CFM_mono) possess these optimal characteristics, whereas error measures such as MSE and CE do not. The arguments are illustrated with a simple example in which a CFM_mono-trained low-order polynomial neural network approximates Bayesian discrimination on a random scalar with the fewest training samples and the minimum functional complexity necessary for the task. A comparable MSE-trained net yields significantly worse discrimination on the same task.
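
The monotonic classification figure of merit rewards the network for ranking the correct class above its strongest competitor rather than for matching target values; one common formulation is a sigmoid of that margin. The sketch below implements such a margin-based criterion next to MSE for comparison; the exact functional form and constants here are placeholders for illustration, not those of the paper.

```python
import numpy as np

def mse_loss(outputs, target_idx):
    """Mean-squared error against a 1-of-K target vector (to be minimized)."""
    target = np.zeros_like(outputs)
    target[target_idx] = 1.0
    return np.mean((outputs - target) ** 2)

def cfm_mono(outputs, target_idx, beta=4.0):
    """Margin-based, CFM-style figure of merit (to be maximized).

    Applies a sigmoid to the margin between the correct-class output and
    the best competing output; `beta` is a placeholder slope constant.
    """
    rivals = np.delete(outputs, target_idx)
    delta = outputs[target_idx] - rivals.max()        # classification margin
    return 1.0 / (1.0 + np.exp(-beta * delta))

outputs = np.array([0.35, 0.40, 0.25])                # scores for three classes
print(mse_loss(outputs, target_idx=1), cfm_mono(outputs, target_idx=1))
```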


International Conference on Acoustics, Speech, and Signal Processing | 1993

Differential learning leads to efficient neural network classifiers

John B. Hampshire; B. V. K. Vijaya Kumar

The authors outline a differential theory of learning for statistical pattern classification. The theory is based on classification figure-of-merit (CFM) objective functions, described by J. B. Hampshire II and A. H. Waibel (IEEE Trans. Neural Netw., vol. 1, no. 2, pp. 216-218, June 1990). They give the proof that differential learning is efficient, requiring the least classifier complexity and the smallest training sample size necessary to achieve Bayesian (i.e., minimum-error) discrimination. A practical application of the theory is included in which a simple differentially trained linear neural network classifier discriminates handwritten digits of the AT&T DB1 database with a 1.3% error rate. This error rate is less than one half of the best previous result for a linear classifier on this optical character recognition (OCR) task.

Collaboration


Dive into John B. Hampshire's collaborations.

Top Co-Authors

Alex Waibel (Karlsruhe Institute of Technology)
Pradeep K. Khosla (Carnegie Mellon University)
John M. Dolan (Carnegie Mellon University)
Christiaan J.J. Paredis (Georgia Institute of Technology)
David A. Watola (California Institute of Technology)