Peter Elias
Massachusetts Institute of Technology
Publications
Featured research published by Peter Elias.
IEEE Transactions on Information Theory | 1975
Peter Elias
Countable prefix codeword sets are constructed with the universal property that assigning messages in order of decreasing probability to codewords in order of increasing length gives an average codeword length, for any message set with positive entropy, less than a constant times the optimal average codeword length for that source. Some of the sets also have the asymptotically optimal property that the ratio of average codeword length to entropy approaches one uniformly as entropy increases. An application is the construction of a uniformly universal sequence of codes for countable memoryless sources, in which the nth code has a ratio of average codeword length to source rate bounded by a function of n for all sources with positive rate; the bound is less than two for n = 0 and approaches one as n increases.
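The codeword sets constructed in this paper include what are now called the Elias gamma and delta codes. A minimal Python sketch of both constructions (the function names are ours, not the paper's):

```python
def elias_gamma(n: int) -> str:
    """Elias gamma code: floor(log2 n) zeros, then n in binary (n >= 1)."""
    if n < 1:
        raise ValueError("gamma code is defined for integers n >= 1")
    b = bin(n)[2:]                      # binary form, leading 1 included
    return "0" * (len(b) - 1) + b

def elias_delta(n: int) -> str:
    """Elias delta code: gamma-code the length of n's binary form, then the rest of its bits."""
    if n < 1:
        raise ValueError("delta code is defined for integers n >= 1")
    b = bin(n)[2:]
    return elias_gamma(len(b)) + b[1:]  # drop the leading 1; the length code implies it

# Shorter codewords go to smaller (more probable) message indices:
for n in (1, 2, 3, 4, 17):
    print(n, elias_gamma(n), elias_delta(n))
```

The gamma codeword for n has length 2*floor(log2 n) + 1 bits; the delta code shortens this asymptotically, which is the kind of behavior behind the ratio of codeword length to entropy approaching one.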
IEEE Transactions on Information Theory | 1955
Peter Elias
Predictive coding is a procedure for transmitting messages which are sequences of magnitudes. In this coding method, the transmitter and the receiver store past message terms, and from them estimate the value of the next message term. The transmitter transmits, not the message term, but the difference between it and its predicted value. At the receiver this error term is added to the receiver prediction to reproduce the message term. This procedure is defined and messages, prediction, entropy, and ideal coding are discussed to provide a basis for Part II, which will give the mathematical criterion for the best predictor for use in the predictive coding of particular messages, will give examples of such messages, and will show that the error term which is transmitted in predictive coding may always be coded efficiently.
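A minimal sketch of the transmit/receive loop just described, using a hypothetical previous-value predictor (the choice of predictor is the subject of Part II):

```python
def predict(history):
    """Hypothetical predictor: repeat the last message term (0 before any history)."""
    return history[-1] if history else 0

def encode(message):
    history, errors = [], []
    for term in message:
        errors.append(term - predict(history))  # transmit only the prediction error
        history.append(term)
    return errors

def decode(errors):
    history = []
    for e in errors:
        history.append(predict(history) + e)    # add error to the receiver's own prediction
    return history

msg = [3, 4, 4, 5, 7, 7, 7, 6]
assert decode(encode(msg)) == msg
print(encode(msg))  # errors cluster near zero for slowly varying messages
```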
IEEE Transactions on Information Theory | 1956
Peter Elias; Amiel Feinstein; Claude E. Shannon
This note discusses the problem of maximizing the rate of flow from one terminal to another, through a network which consists of a number of branches, each of which has a limited capacity. The main result is a theorem: The maximum possible flow from left to right through a network is equal to the minimum value among all simple cut-sets. This theorem is applied to solve a more general problem, in which a number of input nodes and a number of output nodes are used.
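Augmenting-path algorithms find both the maximum flow and a minimum cut. A sketch using breadth-first augmenting paths (the later Edmonds-Karp refinement of Ford-Fulkerson, not the method of this note; the example network is hypothetical):

```python
from collections import deque

def max_flow(cap, s, t):
    """Maximum s-to-t flow via BFS augmenting paths (Edmonds-Karp).
    cap: dict-of-dicts of residual capacities, modified in place."""
    flow = 0
    while True:
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v, c in cap.get(u, {}).items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow  # no augmenting path left: flow equals the min-cut capacity
        path, v = [], t
        while parent[v] is not None:  # walk back to the source
            path.append((parent[v], v))
            v = parent[v]
        push = min(cap[u][v] for u, v in path)  # bottleneck capacity on the path
        for u, v in path:
            cap[u][v] -= push
            cap.setdefault(v, {})[u] = cap.get(v, {}).get(u, 0) + push
        flow += push

# Hypothetical 4-node network; the cut separating 's' from the rest has capacity 3 + 2 = 5.
cap = {'s': {'a': 3, 'b': 2}, 'a': {'t': 2, 'b': 1}, 'b': {'t': 3}}
print(max_flow(cap, 's', 't'))  # 5
```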
IEEE Transactions on Information Theory | 1991
Peter Elias
In the list-of-L decoding of a block code the receiver of a noisy sequence lists L possible transmitted messages, and is in error only if the correct message is not on the list. Consideration is given to (n,e,L) codes, which correct all sets of e or fewer errors in a block of n bits under list-of-L decoding. New geometric relations between the number of errors corrected under list-of-1 decoding and the (larger) number corrected under list-of-L decoding of the same code lead to new lower bounds on the maximum rate of (n,e,L) codes. They show that a jammer who can change a fixed fraction p > …
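A brute-force sketch of list-of-L decoding itself, on a hypothetical toy codebook (the paper's results concern rate bounds, not this decoder):

```python
def hamming(a, b):
    """Number of bit positions in which two equal-length strings differ."""
    return sum(x != y for x, y in zip(a, b))

def list_decode(received, codebook, L):
    """Return the L codewords nearest the received word; decoding succeeds
    if the transmitted codeword appears anywhere on the list."""
    return sorted(codebook, key=lambda c: hamming(received, c))[:L]

# Hypothetical codebook with minimum distance 3 (corrects e = 1 under list-of-1 decoding)
codebook = ["00000", "01011", "10101", "11110"]
received = "01001"  # "01011" with one bit flipped
print(list_decode(received, codebook, L=2))  # "01011" heads the list
```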
Journal of the ACM | 1974
Peter Elias
We consider a set of static files or inventories, each consisting of the same number of entries, each entry a binary word of the same fixed length selected (with replacement) from the set of all binary sequences of that length, and the entries in each file sorted into lexical order. We also consider several retrieval questions of interest for each such file. One is to find the value of the jth entry, another to find the number of entries of value less than k. When a binary representation of such a file is stored in computer memory and an algorithm or machine which knows only the file parameters (i.e. number of entries, number of possible values per entry) accesses some of the stored bits to answer a retrieval question, the number of bits stored and the number of bits accessed per retrieval question are two cost measures for the storage and retrieval task which have been used by Minsky and Papert. Bits stored depends on the representation chosen: bits accessed also depends on the retrieval question asked and on the algorithm used. We give firm lower bounds to minimax measures of bits stored and bits accessed for each of four retrieval questions, and construct representations and algorithms for a bit-addressable machine which come within factors of two or three of attaining all four bounds at once for files of any size. All four factors approach one for large enough files.
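The two retrieval questions can be stated concretely. The sketch below answers them over a naive in-memory sorted list; it ignores the paper's central concern, the near-optimal bit-level representations and their access costs:

```python
from bisect import bisect_left

# Hypothetical file: 8 sorted 4-bit entries (duplicates allowed, selection with replacement)
entries = [1, 3, 3, 5, 9, 9, 12, 15]

def value_of(j):
    """Retrieval question 1: the value of the jth entry (0-indexed)."""
    return entries[j]

def count_below(k):
    """Retrieval question 2: the number of entries of value less than k."""
    return bisect_left(entries, k)

print(value_of(4))     # 9
print(count_below(9))  # 4
```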
IEEE Transactions on Information Theory | 1987
Peter Elias
In the schemes presented the encoder maps each message into a codeword in a prefix-free codeword set. In interval encoding the codeword is indexed by the interval since the last previous occurrence of that message, and the codeword set must be countably infinite. In recency rank encoding the codeword is indexed by the number of distinct messages in that interval, and there must be no fewer codewords than messages. The decoder decodes each codeword on receipt. Users need not know message probabilities, but must agree on indexings of the codeword set, in an order of increasing length, and of the message set, in some arbitrary order. The average codeword length over a communications bout is never much larger than the value for an off-line scheme which maps the jth most frequent message in the bout into the jth shortest codeword in the given set, and is never too much larger than the value for off-line Huffman encoding of messages into the best codeword set for the bout message frequencies. Both schemes can do much better than Huffman coding when successive selections of each message type cluster much more than in the independent case.
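Recency rank encoding is what is now usually called move-to-front coding. A minimal sketch of the rank computation; each rank would then be mapped to the correspondingly indexed codeword in the agreed prefix-free set (for instance, a universal set such as the Elias codes above):

```python
def recency_ranks(messages, alphabet):
    """Move-to-front: for each message, emit the number of distinct messages
    seen since its last occurrence (its current position in the recency list)."""
    recency = list(alphabet)  # agreed initial ordering of the message set
    ranks = []
    for m in messages:
        r = recency.index(m)
        ranks.append(r)
        recency.insert(0, recency.pop(r))  # m becomes the most recent message
    return ranks

# Clustered selections give small ranks, hence short codewords:
print(recency_ranks("aaabbbaac", "abc"))  # [0, 0, 0, 1, 0, 0, 1, 0, 2]
```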
Journal of the Optical Society of America | 1952
Peter Elias; David S. Grey; David Z. Robinson
Many optical processes of image formation, image transfer, and image analysis may be represented as one, or a succession of several, linear operations. A linear operation upon a flux distribution function of an n-dimensional argument is defined as one which replaces the value of the function at a point by a linear, weighted average taken over a neighborhood of that point. While such an operation is completely determined by the weighting function used, it is also determined by a “wave-number” spectrum which is a function of an n-dimensional wave-number vector. This wave-number spectrum is the complex conjugate of the n-dimensional Fourier transform of the weighting function. The wave-number spectrum of the flux distribution modified by any number of successive linear operations is the product of the wave-number spectrum of the original distribution, and the wave-number spectra of the several linear operations. An analysis thus performed in wave-number space replaces successive integrations by successive multiplications.

This method of analysis is an extension of the usual method of treating filters in electronic circuits, and may be used to solve problems analogous to those treated in circuit theory. These are: (1) to evaluate the performance of a system; (2) to design a process to search an image for a configuration; (3) to reproduce a picture, with discrimination in favor of a configuration desired, and against others; and (4) to equalize a picture, i.e., to remove image degradation.
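The multiplication-in-wave-number-space idea is the convolution theorem. A NumPy sketch with a hypothetical 3x3 averaging weighting function, using plain convolution rather than the paper's complex-conjugate (correlation) convention:

```python
import numpy as np

rng = np.random.default_rng(0)
image = rng.random((64, 64))          # flux distribution
kernel = np.zeros((64, 64))
kernel[:3, :3] = 1 / 9                # 3x3 averaging weighting function

# The linear (convolution) operation performed in wave-number space:
# multiply the two spectra instead of integrating in the image plane.
blurred = np.real(np.fft.ifft2(np.fft.fft2(image) * np.fft.fft2(kernel)))

# Direct circular convolution, for comparison
direct = np.zeros_like(image)
for dx in range(3):
    for dy in range(3):
        direct += np.roll(image, (dx, dy), axis=(0, 1)) / 9

print(np.allclose(blurred, direct))   # True
```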
IEEE Transactions on Information Theory | 1955
Peter Elias
In Part I predictive coding was defined and messages, prediction, entropy, and ideal coding were discussed. In the present paper the criterion to be used for predictors for the purpose of predictive coding is defined: that predictor is optimum in the information theory (IT) sense which minimizes the entropy of the average error-term distribution. Ordered averages of distributions are defined and it is shown that if a predictor gives an ordered average error term distribution it will be a best IT predictor. Special classes of messages are considered for which a best IT predictor can easily be found, and some examples are given. The error terms which are transmitted in predictive coding are treated as if they were statistically independent. If this is indeed the case, or a good approximation, then it is still necessary to show that sequences of message terms which are statistically independent may always be coded efficiently, without impractically large memory requirements, in order to show that predictive coding may be practical and efficient in such cases. This is done in the final section of this paper.
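The criterion can be checked empirically: below is a sketch comparing two candidate predictors by the entropy of their error-term distributions on a hypothetical random-walk message (the paper's ordered-average machinery is not reproduced here):

```python
import random
from collections import Counter
from math import log2

def entropy(values):
    """Empirical entropy, in bits per term, of a sequence's value distribution."""
    counts, n = Counter(values), len(values)
    return -sum(c / n * log2(c / n) for c in counts.values())

def error_terms(message, predict):
    history, out = [], []
    for term in message:
        out.append(term - predict(history))
        history.append(term)
    return out

random.seed(1)
msg = [0]
for _ in range(999):
    msg.append(msg[-1] + random.choice((-1, 0, 1)))  # slowly wandering message

no_predictor = lambda h: 0                    # transmit the raw terms
last_value   = lambda h: h[-1] if h else 0    # previous-value predictor

# The better IT predictor is the one whose error-term distribution has lower entropy.
print(entropy(error_terms(msg, no_predictor)))  # large: many distinct raw values
print(entropy(error_terms(msg, last_value)))    # at most log2(3), about 1.58 bits
```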
IEEE Transactions on Information Theory | 1967
Peter Elias
This paper discusses networks (directed graphs) having one input node, one output node, and an arbitrary number of intermediate nodes, whose branches are noisy communications channels, in which the input to each channel appears at its output corrupted by additive Gaussian noise. Each branch is labeled by a non-negative real parameter which specifies how noisy it is. A branch originating at a node has as input a linear combination of the outputs of the branches terminating at that node. The channel capacity of such a network is defined. Its value is bounded in terms of branch parameter values, and procedures for computing values for general networks are described. Explicit solutions are given for the class D_0, which includes series-parallel and simple bridge networks and all other networks having r paths, b branches, and v nodes with r = b - v + 2, and for the class D_1 of networks, which is inductively defined to include D_0 and all networks obtained by replacing a branch of a network in D_1 by a network in D_1. The general results are applied to the particular networks which arise from the decomposition of a simple feedback system into successive forward and reverse (feedback) channels. When the feedback channels are noiseless, the capacities of the forward channels are shown to add. Some explicit expressions and some bounds are given for the case of noisy feedback channels.
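For a single Gaussian branch the familiar capacity expression applies, and the additivity claim for noiseless feedback can then be written out. This is a sketch of the statement with hypothetical signal and noise powers S_i, N_i, not the paper's branch-parameter formulation:

```latex
% Capacity of one Gaussian branch with signal power S_i and noise power N_i:
C_i = \tfrac{1}{2}\log_2\!\left(1 + \frac{S_i}{N_i}\right)\ \text{bits per use}

% With noiseless feedback channels, the forward capacities are shown to add:
C = \sum_i C_i
```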
Journal of the Optical Society of America | 1953
Peter Elias
The purpose of this paper is to illustrate the application to optics of some mathematical techniques originally developed for the analysis of electric networks and other communications problems. Two general aspects of communication theory may be so applied.

The first of these is electrical network theory. This may be further subdivided. First there is the standard treatment of the response of networks to individual signals by means of Fourier analysis. The optical analogue of this is the analysis of images, also by Fourier transform techniques. Second, there is the statistical network theory initiated by N. Wiener and developed by Y. W. Lee. One aspect of this theory is the design of optimum linear systems for separating a signal from a noise. This is relevant to the problem of the removal of grain from photographs.

The second aspect of communication theory which is relevant in optics is the general statistical information theory of Shannon and Wiener. This is especially valuable in the analysis of scanning systems treating signals and noise.

The autocorrelation function of a picture is useful in both kinds of analysis, and will be discussed.
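The autocorrelation mentioned in the closing sentence is conveniently computed in wave-number space via the Wiener-Khinchin relation. A NumPy sketch on a hypothetical random picture:

```python
import numpy as np

rng = np.random.default_rng(0)
picture = rng.random((32, 32))
picture -= picture.mean()  # work with fluctuations about the mean level

# Wiener-Khinchin: the (circular) autocorrelation is the inverse Fourier
# transform of the power spectrum |F|^2.
F = np.fft.fft2(picture)
autocorr = np.real(np.fft.ifft2(F * np.conj(F)))

# Direct check of one lag: correlate the picture with itself shifted by (1, 2)
lag = (picture * np.roll(picture, (1, 2), axis=(0, 1))).sum()
print(np.isclose(autocorr[1, 2], lag))  # True
```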