
Publications


Featured research published by Michael Gutman.


IEEE Transactions on Information Theory | 1989

Asymptotically optimal classification for multiple tests with empirically observed statistics

Michael Gutman

The decision problem of testing M hypotheses when the source is Kth-order Markov and there are M (or fewer) training sequences of length N and a single test sequence of length n is considered, where K, M, n, and N are all given. The requirements on M, n, and N for achieving vanishing (exponential) error probabilities are established, and it is shown how to determine or bound the exponent. A likelihood ratio test that is allowed to produce a no-match decision is shown to provide asymptotically optimal error probabilities and minimal no-match decisions. As an important special case, the binary hypothesis problem without rejection is discussed; for this configuration, only one training sequence is needed to achieve an asymptotically optimal test.
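The core idea — classify a test sequence by comparing its empirical statistics with those of the training sequences, and declare "no match" when even the best candidate is too far — can be sketched for the memoryless (K = 0) case. This is an illustrative simplification, not the paper's exact test; the function names, the KL-divergence criterion on empirical distributions, and the rejection threshold are all choices made here:

```python
from collections import Counter
from math import log

def empirical_dist(seq, alphabet):
    """Empirical distribution of the symbols in seq over the given alphabet."""
    counts = Counter(seq)
    n = len(seq)
    return {a: counts[a] / n for a in alphabet}

def kl(p, q, eps=1e-12):
    """Relative entropy D(p || q); eps guards zero probabilities in q."""
    return sum(pa * log(pa / max(q[a], eps)) for a, pa in p.items() if pa > 0)

def classify(test, trainings, alphabet, threshold):
    """Pick the training sequence whose empirical distribution is closest
    (in KL divergence) to that of the test sequence; return None ('no match')
    if even the best candidate exceeds the threshold."""
    p_test = empirical_dist(test, alphabet)
    divs = [kl(p_test, empirical_dist(t, alphabet)) for t in trainings]
    best = min(range(len(divs)), key=lambda i: divs[i])
    return best if divs[best] <= threshold else None
```

Raising the threshold trades fewer no-match decisions against a larger error probability, mirroring the trade-off the paper analyzes asymptotically.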


IEEE Transactions on Information Theory | 1989

On the estimation of the order of a Markov chain and universal data compression

Neri Merhav; Michael Gutman; Jacob Ziv

The authors estimate the order of a finite Markov source based on empirically observed statistics. The performance criterion adopted is to minimize the probability of underestimating the model order while keeping the overestimation probability exponent at a prescribed level. A universal asymptotically optimal test, in the sense just defined, is proposed for the case where a given integer is known to upper-bound the true order. For the case where such a bound is unavailable, an alternative rule based on the Lempel-Ziv data compression algorithm is shown to be asymptotically optimal as well, and to be computationally more efficient.
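A toy version of the order-estimation idea, assuming a known upper bound on the order: fit empirical Markov models of increasing order and stop as soon as the likelihood gain from a further increase falls below a prescribed threshold, which plays the role of the overestimation constraint. The function names and this specific stopping rule are illustrative, not the paper's test:

```python
from collections import Counter
from math import log

def conditional_log_likelihood(seq, k):
    """Maximum log-likelihood of seq under an order-k Markov model,
    using empirical transition counts over length-k contexts."""
    ctx = Counter()
    trans = Counter()
    for i in range(k, len(seq)):
        c, s = seq[i - k:i], seq[i]
        ctx[c] += 1
        trans[(c, s)] += 1
    return sum(n * log(n / ctx[c]) for (c, _), n in trans.items())

def estimate_order(seq, max_order, threshold):
    """Smallest order k whose likelihood gain over order k-1 stays below
    the threshold; the threshold trades underestimation against
    overestimation, loosely mirroring the prescribed exponent."""
    ll = [conditional_log_likelihood(seq, k) for k in range(max_order + 1)]
    for k in range(1, max_order + 1):
        if ll[k] - ll[k - 1] < threshold:
            return k - 1
    return max_order
```

On a strictly alternating sequence the likelihood gain from order 0 to order 1 is large while higher orders add nothing, so the rule settles on order 1.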


IEEE Transactions on Information Theory | 1991

On universal hypotheses testing via large deviations

Ofer Zeitouni; Michael Gutman

A prototype problem in hypothesis testing is discussed: deciding whether an i.i.d. sequence of random variables has originated from a known source P1 or an unknown source P2. The exponential rate of decrease in the type II error probability, under a constraint on the minimal rate of decrease in the type I error probability, is chosen as the criterion of optimality. Using large deviations estimates, a decision rule based on the relative entropy of the empirical measure with respect to P1 is proposed. In the case of discrete random variables, this approach yields weaker results than the combinatorial approach used by Hoeffding (1965); however, it enables the analysis to be extended to the general case of R^n-valued random variables. Finally, the results are extended to the case where P1 is an unknown parameter-dependent distribution known to belong to a family of distributions {P1^theta, theta in Theta}.


IEEE Transactions on Information Theory | 1993

An algorithm for source coding subject to a fidelity criterion, based on string matching

Yossef Steinberg; Michael Gutman

A practical suboptimal universal block source coding scheme, subject to a fidelity criterion, is proposed. The algorithm is an extension of the Lempel-Ziv algorithm and is based on string matching with distortion. It is shown that, given average distortion D > 0, the algorithm achieves a rate not exceeding R(D/2) for a large class of sources and distortion measures. Tighter bounds on the rate are derived for discrete memoryless sources and for memoryless Gaussian sources.
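The flavor of string matching with distortion can be illustrated by a greedy parser against a fixed shared database: each phrase is the longest prefix of the remaining source that matches some database substring within an average per-symbol distortion budget, and the encoder emits (position, length) pairs. This is a sketch of the idea under simplifying assumptions (fixed database, Hamming distortion, greedy parsing), not the paper's scheme:

```python
def longest_match(source, i, database, pos, d_max):
    """Longest L such that source[i:i+L] matches database[pos:pos+L]
    with average Hamming distortion at most d_max."""
    best, dist, L = 0, 0, 0
    while i + L < len(source) and pos + L < len(database):
        dist += source[i + L] != database[pos + L]
        L += 1
        if dist <= d_max * L:
            best = L
    return best

def lossy_parse(source, database, d_max):
    """Greedily parse source into phrases, each encoded as a
    (database position, length) pair; the decoder reconstructs by
    copying the referenced database substrings."""
    phrases = []
    i = 0
    while i < len(source):
        length, pos = max(
            (longest_match(source, i, database, p, d_max), p)
            for p in range(len(database)))
        length = max(length, 1)   # always advance, even on a bad match
        phrases.append((pos, length))
        i += length
    return phrases
```

Loosening the distortion budget lets single phrases cover more of the source, lowering the rate at the cost of reconstruction fidelity.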


IEEE Transactions on Information Theory | 1993

Some properties of sequential predictors for binary Markov sources

Neri Merhav; Meir Feder; Michael Gutman

Universal prediction of the next outcome of a binary sequence drawn from a Markov source with unknown parameters is considered. For a given source, the predictability is defined as the least attainable expected fraction of prediction errors. A lower bound is derived on the maximum rate at which the predictability is asymptotically approached uniformly over all sources in the Markov class. This bound is achieved by a simple majority predictor. For Bernoulli sources, bounds on the large deviations performance are investigated: a lower bound is derived for the probability that the fraction of errors will exceed the predictability by a prescribed amount Delta > 0, and this bound is achieved by the same predictor when Delta is sufficiently small.
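The majority predictor mentioned above is simple to state: for the current state (the last K symbols), predict the symbol observed most often after that state so far. A minimal sketch for binary sequences, where breaking ties toward 0 is an arbitrary choice made here:

```python
from collections import defaultdict

def majority_error_rate(seq, k=1):
    """Sequentially predict each bit of seq by a majority vote over the
    symbols previously seen after the current length-k state;
    return the fraction of prediction errors."""
    counts = defaultdict(lambda: [0, 0])   # state -> [count of 0s, count of 1s]
    errors = 0
    for i in range(k, len(seq)):
        state = seq[i - k:i]
        c0, c1 = counts[state]
        guess = "1" if c1 > c0 else "0"    # ties default to 0
        if guess != seq[i]:
            errors += 1
        counts[state][int(seq[i])] += 1    # reveal the true bit, then update
    return errors / (len(seq) - k)
```

On a strictly alternating sequence the predictor errs only once per state while it is still learning, so its error fraction vanishes with the sequence length.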


IEEE Transactions on Information Theory | 1987

On uniform quantization with various distortion measures (Corresp.)

Michael Gutman

Upper bounds are presented on the difference between the entropy of a uniform scalar quantizer and that of any N-dimensional quantizer. The bounds are universal in the sense that they hold for every input density and every value of distortion. Bounds are given for several common distortion criteria.


Convention of Electrical and Electronics Engineers in Israel | 1991

Universal prediction of individual sequences

Meir Feder; Neri Merhav; Michael Gutman

The problem of sequentially determining the next outcome of a specific binary individual sequence, based on its observed past, using a finite-state predictor is considered. The authors define the finite-state predictability of the (infinite) sequence x1 . . . xn . . . as the minimum fraction of prediction errors that can be made by any such predictor, and prove that this can be achieved, up to an arbitrarily small prescribed distance, for each individual sequence, by fully sequential guessing schemes. The rate at which the sequential guessing schemes approach the predictability is calculated. An efficient guessing procedure based on the incremental parsing algorithm used in Lempel-Ziv data compression is presented, and its fraction of errors also approaches the predictability of the sequence. Some relations between compressibility and predictability are discussed, and the use of predictability as an additional measure of the complexity, or randomness, of the sequence is suggested.
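A deterministic simplification of the incremental-parsing predictor can be sketched as follows: walk the LZ78 phrase tree, at each node guess the child branch taken more often in the past, and grow the tree exactly as in Lempel-Ziv parsing. The paper's actual scheme uses randomized guessing to cope with adversarial individual sequences; this sketch only conveys the structure:

```python
def lz_error_rate(seq):
    """Sequential prediction via LZ78 incremental parsing (simplified):
    walk a phrase tree, at each node predict the child branch taken more
    often so far, then follow the actual bit.  When the walk leaves the
    tree, a new node is added and the walk restarts at the root."""
    root = {}          # child bit -> [traversal_count, child_subtree]
    node = root
    errors = 0
    for bit in seq:
        c0 = node["0"][0] if "0" in node else 0
        c1 = node["1"][0] if "1" in node else 0
        guess = "1" if c1 > c0 else "0"
        if guess != bit:
            errors += 1
        if bit in node:                 # continue the current phrase
            node[bit][0] += 1
            node = node[bit][1]
        else:                           # phrase ends: grow tree, restart
            node[bit] = [1, {}]
            node = root
    return errors / len(seq)
```

As the parsed phrases lengthen, mispredictions concentrate at phrase boundaries, so the error fraction of this sketch decays on regular sequences much as the compression ratio of Lempel-Ziv does.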


IEEE Transactions on Information Theory | 1991

Correction to 'On Universal Hypotheses Testing Via Large Deviations'.

Ofer Zeitouni; Michael Gutman


Archive | 1994

Reflections on 'Universal Prediction of Individual Sequences'

Meir Feder; Neri Merhav; Michael Gutman


IEEE Transactions on Information Theory | 1994

Correction to 'Universal Prediction of Individual Sequences' (Jul. 1992, pp. 1258-1270).

Meir Feder; Neri Merhav; Michael Gutman

Collaboration


Michael Gutman's top co-authors and their affiliations.

Top Co-Authors


Neri Merhav

Technion – Israel Institute of Technology


Ofer Zeitouni

Weizmann Institute of Science


Jacob Ziv

Technion – Israel Institute of Technology


Yossef Steinberg

Ben-Gurion University of the Negev
