Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where John E. Shore is active.

Publication


Featured research published by John E. Shore.


IEEE Transactions on Information Theory | 1980

Axiomatic derivation of the principle of maximum entropy and the principle of minimum cross-entropy

John E. Shore; Rodney W. Johnson

Jaynes's principle of maximum entropy and Kullback's principle of minimum cross-entropy (minimum directed divergence) are shown to be uniquely correct methods for inductive inference when new information is given in the form of expected values. Previous justifications use intuitive arguments and rely on the properties of entropy and cross-entropy as information measures. The approach here assumes that reasonable methods of inductive inference should lead to consistent results when there are different ways of taking the same information into account (for example, in different coordinate systems). This requirement is formalized as four consistency axioms. These are stated in terms of an abstract information operator and make no reference to information measures. It is proved that the principle of maximum entropy is correct in the following sense: maximizing any function but entropy will lead to inconsistency unless that function and entropy have identical maxima. In other words, given information in the form of constraints on expected values, there is only one distribution satisfying the constraints that can be chosen by a procedure that satisfies the consistency axioms; this unique distribution can be obtained by maximizing entropy. This result is established both directly and as a special case (uniform priors) of an analogous result for the principle of minimum cross-entropy. Results are obtained both for continuous probability densities and for discrete distributions.
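
A minimal numerical sketch of the discrete case described above, assuming a single expected-value constraint E[f] = a: the maximum-entropy distribution has the exponential form p_i proportional to exp(lambda * f_i), and the multiplier lambda is chosen so that the constraint holds. The function names, bracket values, and use of SciPy below are illustrative assumptions, not from the paper.

```python
# Maximum entropy subject to a single expected-value constraint sum_i p_i * f_i = a.
# The maximizing distribution is exponential in f; solve for the multiplier lam.
import numpy as np
from scipy.optimize import brentq

def max_entropy_dist(f, a):
    """Maximum-entropy distribution over outcomes with values f, matching E[f] = a."""
    f = np.asarray(f, dtype=float)

    def moment_gap(lam):
        w = np.exp(lam * (f - f.mean()))   # shift by the mean for numerical stability
        p = w / w.sum()
        return p @ f - a                   # zero when the constraint is satisfied

    lam = brentq(moment_gap, -50.0, 50.0)  # assumes the multiplier lies in this bracket
    w = np.exp(lam * (f - f.mean()))
    return w / w.sum()

# Example: a die whose expected face value is constrained to 4.5 instead of 3.5.
p = max_entropy_dist([1, 2, 3, 4, 5, 6], 4.5)
print(p, p @ np.arange(1, 7))              # the second value is (approximately) 4.5
```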


IEEE Transactions on Information Theory | 1981

Properties of cross-entropy minimization

John E. Shore; Rodney W. Johnson

The principle of minimum cross-entropy (minimum directed divergence, minimum discrimination information) is a general method of inference about an unknown probability density when there exists a prior estimate of the density and new information in the form of constraints on expected values. Various fundamental properties of cross-entropy minimization are proven and collected in one place. Cross-entropy's well-known properties as an information measure are extended and strengthened when one of the densities involved is the result of cross-entropy minimization. The interplay between properties of cross-entropy minimization as an inference procedure and properties of cross-entropy as an information measure is pointed out. Examples are included and general analytic and computational methods of finding minimum cross-entropy probability densities are discussed.
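
For reference, the quantity being minimized can be written as follows (generic notation, with p the prior estimate, q the posterior estimate, and the f_k whatever expected values the new information specifies):

```latex
% Cross-entropy (directed divergence) of q relative to the prior estimate p,
% minimized over q subject to expected-value constraints.
\[
  H(q,p) \;=\; \int q(x)\,\log\frac{q(x)}{p(x)}\,dx ,
  \qquad
  \min_{q}\; H(q,p)
  \quad\text{subject to}\quad
  \int q(x)\,f_k(x)\,dx = \bar{f}_k ,\;\; k = 1,\dots,M .
\]
```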


IEEE Transactions on Acoustics, Speech, and Signal Processing | 1981

Minimum cross-entropy spectral analysis

John E. Shore

The principle of minimum cross-entropy (minimum directed divergence, minimum discrimination information, minimum relative entropy) is summarized, discussed, and applied to the classical problem of estimating power spectra given values of the autocorrelation function. This new method differs from previous methods in its explicit inclusion of a prior estimate of the power spectrum, and it reduces to maximum entropy spectral analysis as a special case. The prior estimate can be viewed as a means of shaping the spectral estimator. Cross-entropy minimization yields a family of shaped spectral estimators consistent with known autocorrelations. Results are derived in two equivalent ways: once by minimizing the cross-entropy of underlying probability densities, and once by arguments concerning the cross-entropy between the input and output of linear filters. Several example minimum cross-entropy spectra are included.
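
A compact sketch of the maximum entropy special case mentioned in the abstract (flat prior): given autocorrelation values r[0], ..., r[p], the all-pole estimate follows from the Yule-Walker equations. The function name and the use of SciPy here are assumptions for illustration, not the paper's notation.

```python
# Maximum entropy (all-pole) spectral estimate from known autocorrelations r[0..p],
# i.e. the flat-prior special case of minimum cross-entropy spectral analysis.
import numpy as np
from scipy.linalg import solve_toeplitz

def max_entropy_spectrum(r, freqs):
    """r: autocorrelations r[0..p]; freqs: normalized frequencies in [0, 0.5]."""
    r = np.asarray(r, dtype=float)
    p = len(r) - 1
    a = solve_toeplitz(r[:p], -r[1:])            # Yule-Walker: Toeplitz(r[0..p-1]) a = -r[1..p]
    coeffs = np.concatenate(([1.0], a))          # A(z) = 1 + a_1 z^-1 + ... + a_p z^-p
    sigma2 = r[0] + r[1:] @ a                    # prediction-error variance
    z = np.exp(-2j * np.pi * np.outer(freqs, np.arange(p + 1)))
    return sigma2 / np.abs(z @ coeffs) ** 2      # S(f) = sigma^2 / |A(e^{j 2 pi f})|^2
```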


IEEE Transactions on Information Theory | 1981

Rate-distortion speech coding with a minimum discrimination information distortion measure

Robert M. Gray; Augustine H. Gray; Guillermo Rebolledo; John E. Shore

An information theory approach to the theory and practice of linear predictive coded (LPC) speech compression systems is developed. It is shown that a traditional LPC system can be viewed as a minimum distortion or nearest-neighbor system where the distortion measure is a minimum discrimination information between a speech process model and an observed frame of actual speech. This distortion measure is used in an algorithm for computer-aided design of block source codes subject to a fidelity criterion to obtain a 750-bits/s speech compression system that resembles an LPC system but has a much lower rate, a larger memory requirement, and requires no on-line LPC analysis. Quantitative and informal subjective comparisons are made among our system and LPC systems.


IEEE Transactions on Acoustics, Speech, and Signal Processing | 1976

Letter-to-sound rules for automatic translation of English text to phonetics

Honey S. Elovitz; Rodney W. Johnson; Astrid McHugh; John E. Shore

Speech synthesizers for computer voice output are most useful when not restricted to a prestored vocabulary. The simplest approach to unrestricted text-to-speech translation uses a small set of letter-to-sound rules, each specifying a pronunciation for one or more letters in some context. Unless this approach yields sufficient intelligibility, routine addition of text-to-speech translation to computer systems is unlikely, since more elaborate approaches, embodying large pronunciation dictionaries or linguistic analysis, require too much of the available computing resources. The work here described demonstrates the practicality of routine text-to-speech translation. A set of 329 letter-to-sound rules has been developed. These translate English text into the International Phonetic Alphabet (IPA), producing correct pronunciations for approximately 90 percent of the words, or nearly 97 percent of the phonemes, in an average text sample. Most of the remaining words have single errors easily correctable by the listener. Another set of rules translates IPA into the phonetic coding for a particular commercial speech synthesizer. This report describes the technical approach used and the support hardware and software developed. It gives overall performance figures, detailed statistics showing the importance of each rule, and listings of a translation program and another used in rule development.
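
The flavor of such rules can be illustrated with a toy sketch: each rule maps one or more letters, optionally conditioned on the letters to the right, to a phoneme string, and rules are tried in order at each position. The handful of rules and phoneme symbols below are hypothetical examples, not entries from the paper's 329-rule set.

```python
# Toy context-dependent letter-to-sound rules: (letters, required right context, phonemes).
RULES = [
    ("ch", "",  "tS"),
    ("c",  "e", "s"),    # hypothetical: 'c' before 'e' -> /s/, as in "cell"
    ("c",  "",  "k"),
    ("a",  "",  "{"),
    ("e",  "",  "E"),
    ("l",  "",  "l"),
    ("t",  "",  "t"),
]

def to_phonemes(word):
    word = word.lower()
    out, i = [], 0
    while i < len(word):
        for letters, right, phones in RULES:     # first matching rule wins
            if word.startswith(letters, i) and word.startswith(right, i + len(letters)):
                out.append(phones)
                i += len(letters)
                break
        else:                                    # no rule matched: skip the letter
            i += 1
    return " ".join(out)

print(to_phonemes("cell"))    # -> "s E l l" under these toy rules
```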


IEEE Transactions on Information Theory | 1983

Discrete utterance speech recognition without time alignment

John E. Shore; David K. Burton

The results of a new method are presented for discrete utterance speech recognition. The method is based on rate-distortion speech coding (speech coding by vector quantization), minimum cross-entropy pattern classification, and information-theoretic spectral distortion measures. Separate vector quantization code books are designed from training sequences for each word in the recognition vocabulary. Inputs from outside the training sequence are classified by performing vector quantization and finding the code book that achieves the lowest average distortion per speech frame. The new method obviates time alignment. It achieves 99 percent accuracy for speaker-dependent recognition of a 20-word vocabulary that includes the ten digits, with higher accuracy for recognition of the digit subset. For speaker-independent recognition, the method achieves 88 percent accuracy for the 20-word vocabulary and 95 percent for the digit subset. Background of the method, detailed empirical results, and an analysis of computational requirements are presented.
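
A minimal sketch of the classification step, assuming per-word codebooks have already been designed from training data. The paper uses information-theoretic spectral distortion measures; squared error stands in below only to keep the illustration short, and all names are illustrative.

```python
# Classify an utterance by the per-word codebook that gives the lowest
# average per-frame distortion; no time alignment is required.
import numpy as np

def avg_distortion(frames, codebook):
    # frames: (T, d) feature vectors; codebook: (K, d) codewords.
    d2 = ((frames[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)   # (T, K)
    return d2.min(axis=1).mean()      # nearest codeword per frame, averaged over frames

def classify_utterance(frames, codebooks):
    # codebooks: dict mapping each vocabulary word to its (K, d) codebook.
    return min(codebooks, key=lambda word: avg_distortion(frames, codebooks[word]))
```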


IEEE Transactions on Acoustics, Speech, and Signal Processing | 1984

Which is the better entropy expression for speech processing: -S log S or log S?

Rodney W. Johnson; John E. Shore

In maximum entropy spectral analysis (MESA), one maximizes the integral of log S(f), where S(f) is a power spectrum. The resulting spectral estimate, which is equivalent to that obtained by linear prediction and other methods, is popular in speech processing applications. An alternative expression, -S(f) log S(f), is used in optical processing and elsewhere. This paper considers whether the alternative expression leads to spectral estimates useful in speech processing. We investigate the question both theoretically and empirically. The theoretical investigation is based on generalizations of the two estimates: the generalizations take into account prior estimates of the unknown power spectrum. It is shown that both estimates result from applying a generalized version of the principle of maximum entropy, but they differ concerning the quantities that are treated as random variables. The empirical investigation is based on speech synthesized using the different spectral estimates. Although both estimates lead to intelligible speech, speech based on the MESA estimate is qualitatively superior.
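
In the notation of the title and abstract, the two estimates maximize different functionals of the power spectrum S(f), each subject to the known autocorrelation constraints:

```latex
% The two entropy expressions compared in the paper, as functionals of S(f).
\[
  \text{MESA:}\quad \int \log S(f)\,df
  \qquad\text{versus}\qquad
  \int -S(f)\,\log S(f)\,df .
\]
```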


Journal of Chemical Physics | 1975

Dielectric relaxation and dynamic susceptibility of a one‐dimensional model for perpendicular‐dipole polymers

John E. Shore; Robert Zwanzig

The dielectric properties of a simple model are studied. The model consists of a one‐dimensional lattice of interacting objects, called spins. Each spin is oriented in a plane perpendicular to the lattice axis and is free to rotate in that plane. The spins undergo harmonic interactions with each other and, if they are dipolar, a cosine interaction with external fields. At sufficiently low temperature and for sufficiently strong spin–spin coupling, the model is equivalent to one in which the spin–spin interactions are proportional to the cosine of the angular difference between spin positions. It is shown that solution of the rotational diffusion equation leads to an interaction‐independent, nonexponential decay function that is quite similar to a decay function derived empirically from polymer data by Williams and his co‐workers. Exact decay functions are derived for both strong and weak applied fields, for both periodic and open boundary conditions, and for both nearest‐neighbor and longer‐range spin–spi...


IEEE Transactions on Pattern Analysis and Machine Intelligence | 1982

Minimum Cross-Entropy Pattern Classification and Cluster Analysis

John E. Shore; Robert M. Gray

This paper considers the problem of classifying an input vector of measurements by a nearest neighbor rule applied to a fixed set of vectors. The fixed vectors are sometimes called characteristic feature vectors, codewords, cluster centers, models, reproductions, etc. The nearest neighbor rule considered uses a non-Euclidean information-theoretic distortion measure that is not a metric, but that nevertheless leads to a classification method that is optimal in a well-defined sense and is also computationally attractive. Furthermore, the distortion measure results in a simple method of computing cluster centroids. Our approach is based on the minimization of cross-entropy (also called discrimination information, directed divergence, K-L number), and can be viewed as a refinement of a general classification method due to Kullback. The refinement exploits special properties of cross-entropy that hold when the probability densities involved happen to be minimum cross-entropy densities. The approach is a generalization of a recently developed speech coding technique called speech coding by vector quantization.
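
A generic skeleton of the nearest-neighbor rule and the centroid (Lloyd-style) iteration described above, with the distortion and the centroid rule left pluggable. Squared error and the arithmetic mean appear below only as placeholders for the paper's cross-entropy distortion and its centroid; the function names are illustrative.

```python
import numpy as np

def nearest(x, centers, distortion):
    # nearest-neighbor rule: index of the center with the smallest distortion to x
    return min(range(len(centers)), key=lambda k: distortion(x, centers[k]))

def lloyd(data, centers, distortion, centroid, iters=20):
    # alternate nearest-neighbor assignment and distortion-specific centroid updates
    centers = [np.asarray(c, dtype=float) for c in centers]
    for _ in range(iters):
        labels = [nearest(x, centers, distortion) for x in data]
        for k in range(len(centers)):
            members = [x for x, lab in zip(data, labels) if lab == k]
            if members:
                centers[k] = centroid(members)
    return centers

# Placeholder choices: squared-error distortion, whose centroid is the arithmetic mean.
sq_error = lambda x, c: float(((np.asarray(x) - c) ** 2).sum())
mean_centroid = lambda xs: np.mean(xs, axis=0)
```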


IEEE Transactions on Information Theory | 1983

Comments on and correction to "Axiomatic derivation of the principle of maximum entropy and the principle of minimum cross-entropy" (Jan 80 26-37) [Corresp.]

Rodney W. Johnson; John E. Shore

An error in the subject paper is pointed out: when the axioms given there are restricted to the discrete case, they do not imply the discrete case of the principle of minimum cross-entropy. The principle is shown to follow, however, from the adoption of an additional axiom: if new information is consistent with a prior estimate of a probability distribution, then the posterior estimate equals the prior. Other minor improvements and corrections to the arguments in the paper are made.

Collaboration


Dive into John E. Shore's collaborations.

Top Co-Authors

Rodney W. Johnson
United States Naval Research Laboratory

David K. Burton
United States Naval Research Laboratory

Honey S. Elovitz
United States Naval Research Laboratory

Astrid McHugh
United States Naval Research Laboratory

Joseph T. Buck
University of California

Kathryn L. Heninger
United States Naval Research Laboratory