Neri Merhav
Technion – Israel Institute of Technology
Publications
Featured research published by Neri Merhav.
IEEE Transactions on Information Theory | 2002
Yariv Ephraim; Neri Merhav
An overview of statistical and information-theoretic aspects of hidden Markov processes (HMPs) is presented. An HMP is a discrete-time finite-state homogeneous Markov chain observed through a discrete-time memoryless invariant channel. In recent years, the work of Baum and Petrie (1966) on finite-state finite-alphabet HMPs was expanded to HMPs with finite as well as continuous state spaces and a general alphabet. In particular, statistical properties and ergodic theorems for relative entropy densities of HMPs were developed. Consistency and asymptotic normality of the maximum-likelihood (ML) parameter estimator were proved under some mild conditions. Similar results were established for switching autoregressive processes. These processes generalize HMPs. New algorithms were developed for estimating the state, parameter, and order of an HMP, for universal coding and classification of HMPs, and for universal decoding of hidden Markov channels. These and other related topics are reviewed.
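To make the model concrete, here is a minimal sketch of an HMP together with the forward recursion that computes its likelihood; the two-state chain, the observation channel, and all parameter values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative two-state HMP: transition matrix A, initial distribution pi,
# and a memoryless observation channel B (rows: states, cols: output symbols).
A  = np.array([[0.9, 0.1],
               [0.2, 0.8]])
pi = np.array([0.5, 0.5])
B  = np.array([[0.8, 0.2],      # P(observation | state)
               [0.3, 0.7]])

def sample_hmp(n):
    """Draw a length-n observation sequence from the HMP."""
    s, obs = rng.choice(2, p=pi), []
    for _ in range(n):
        obs.append(rng.choice(2, p=B[s]))
        s = rng.choice(2, p=A[s])
    return np.array(obs)

def log_likelihood(obs):
    """Forward recursion alpha_t(s) = P(obs_1..t, S_t = s), with scaling."""
    alpha = pi * B[:, obs[0]]
    ll = 0.0
    for y in obs[1:]:
        c = alpha.sum()
        ll += np.log(c)
        alpha = (alpha / c) @ A * B[:, y]
    return ll + np.log(alpha.sum())

y = sample_hmp(1000)
print("log-likelihood per sample:", log_likelihood(y) / len(y))
```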
IEEE Transactions on Information Theory | 1998
Neri Merhav; Meir Feder
This paper presents an overview of universal prediction from an information-theoretic perspective. Special attention is given to the notion of probability assignment under the self-information loss function, which is directly related to the theory of universal data compression. Both the probabilistic and the deterministic settings of the universal prediction problem are described, with emphasis on the analogies and the differences between the results in the two settings.
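As one concrete instance of probability assignment under self-information loss, the sketch below runs the Krichevsky-Trofimov sequential assignment on a binary sequence and compares its cumulative loss with that of the best constant predictor in hindsight; the sequence and parameters are illustrative, and this is only one member of the family of universal schemes the overview covers.

```python
import numpy as np

rng = np.random.default_rng(1)
x = (rng.random(10_000) < 0.3).astype(int)   # an individual binary sequence

# Krichevsky-Trofimov sequential probability assignment:
# P(next bit = 1 | past) = (ones so far + 1/2) / (t + 1)
ones, loss = 0, 0.0                          # loss: cumulative self-information, bits
for t, b in enumerate(x):
    p1 = (ones + 0.5) / (t + 1.0)
    loss += -np.log2(p1 if b == 1 else 1.0 - p1)
    ones += b

# Best constant (i.i.d.) predictor in hindsight: empirical entropy of x.
q = x.mean()
hindsight = len(x) * -(q*np.log2(q) + (1-q)*np.log2(1-q))
print(f"KT loss/bit: {loss/len(x):.4f}, hindsight loss/bit: {hindsight/len(x):.4f}")
print(f"total regret: {loss - hindsight:.2f} bits (~0.5*log2(n) = {0.5*np.log2(len(x)):.2f})")
```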
International Symposium on Information Theory | 1993
Neri Merhav; Gideon Kaplan; Amos Lapidoth; S. Shamai (Shitz)
Reliable transmission over a discrete-time memoryless channel with a decoding metric that is not necessarily matched to the channel (mismatched decoding) is considered. It is assumed that the encoder knows both the true channel and the decoding metric. The lower bound on the highest achievable rate found by Csiszár and Körner (1981) and by Hui (1983) for DMCs, hereafter denoted C_LM, is shown to bear some interesting information-theoretic meanings. The bound C_LM turns out to be the highest achievable rate in the random coding sense, namely, the random coding capacity for mismatched decoding. It is also demonstrated that the ε-capacity associated with mismatched decoding cannot exceed C_LM. New bounds and some properties of C_LM are established and used to find relations to the generalized mutual information and to the generalized cutoff rate. The expression for C_LM is extended to a certain class of memoryless channels with continuous input and output alphabets, and is used to calculate C_LM explicitly for several examples of theoretical and practical interest. Finally, it is demonstrated that, in contrast to the classical matched decoding case, under the mismatched decoding regime the highest achievable rate depends on whether the performance criterion is the bit error rate or the message error probability, and on whether the coding strategy is deterministic or randomized.
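The quantities involved are easy to evaluate numerically in simple cases. Below is a sketch that computes the generalized mutual information (a random-coding lower bound of the same flavor as C_LM) for a binary symmetric channel decoded with a mismatched BSC metric; the channel parameters and the grid search over the parameter s are illustrative assumptions. For this symmetric example the maximized GMI should essentially coincide with the matched capacity, since any BSC-type metric induces minimum-distance decoding.

```python
import numpy as np

# True channel W(y|x): BSC(0.1); decoding metric q(x,y): likelihood of BSC(0.2).
W = np.array([[0.9, 0.1],
              [0.1, 0.9]])
q = np.array([[0.8, 0.2],
              [0.2, 0.8]])
P = np.array([0.5, 0.5])                      # fixed (uniform) input distribution

def gmi(s):
    """Generalized mutual information at parameter s, in bits:
    E[ log2( q(x,y)^s / sum_x' P(x') q(x',y)^s ) ] under P(x) W(y|x)."""
    total = 0.0
    for x in range(2):
        for y in range(2):
            denom = sum(P[xp] * q[xp, y]**s for xp in range(2))
            total += P[x] * W[x, y] * np.log2(q[x, y]**s / denom)
    return total

best = max(gmi(s) for s in np.linspace(0.01, 20.0, 2000))
h = lambda p: -p*np.log2(p) - (1-p)*np.log2(1-p)
print(f"GMI with mismatched metric:   {best:.4f} bits/use")
print(f"matched capacity of BSC(0.1): {1 - h(0.1):.4f} bits/use")
```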
IEEE Transactions on Circuits and Systems for Video Technology | 1997
Neri Merhav; Vasudev Bhaskaran
Straightforward techniques for spatial domain processing of compressed video via decompression and recompression are computationally expensive. We describe an alternative approach wherein the compressed stream is processed in the compressed, discrete cosine transform (DCT) domain without explicit decompression and spatial domain processing, so that the output compressed stream, corresponding to the output image, conforms to the standard syntax of 8×8 blocks. We propose computation schemes for downsampling and for inverse motion compensation that are applicable to any DCT-based compression method. Worst case estimates of computation savings vary between 37% and 50% depending on the task. For typically sparse DCT blocks, the reduction in computations is more dramatic. A by-product of the proposed approach is improvement in arithmetic precision.
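To illustrate DCT-domain processing without decompression, here is a sketch of 2:1 downsampling that maps four 8×8 DCT blocks directly to one output DCT block, by folding the inverse DCT, pixel averaging, and forward DCT into fixed precomputed matrices. This conveys the general idea rather than the paper's exact computation scheme, and the pairwise-averaging filter is an illustrative choice.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix: D @ x computes the DCT of a length-n vector x."""
    k = np.arange(n)[:, None]; i = np.arange(n)[None, :]
    D = np.sqrt(2.0 / n) * np.cos(np.pi * (2*i + 1) * k / (2*n))
    D[0] *= np.sqrt(0.5)
    return D

D8 = dct_matrix(8)

# Averaging matrix: maps 16 spatial samples to 8 by averaging adjacent pairs.
H = np.zeros((8, 16))
for r in range(8):
    H[r, 2*r] = H[r, 2*r + 1] = 0.5

# Fold decompress -> downsample -> recompress into two fixed 8x8 operators,
# absorbing the IDCT of each input block and the forward DCT of the output.
L = D8 @ H                                   # 8x16
La, Lb = L[:, :8] @ D8.T, L[:, 8:] @ D8.T    # act on left/right (top/bottom) halves

def downsample_dct(X00, X01, X10, X11):
    """2:1 downsampling of four 8x8 DCT blocks, entirely in the DCT domain."""
    return (La @ X00 @ La.T + La @ X01 @ Lb.T +
            Lb @ X10 @ La.T + Lb @ X11 @ Lb.T)

# Sanity check against the explicit spatial-domain route.
rng = np.random.default_rng(2)
x = rng.random((16, 16))
blocks = [D8 @ x[r:r+8, c:c+8] @ D8.T for r in (0, 8) for c in (0, 8)]
Y = downsample_dct(*blocks)
y_ref = H @ x @ H.T                          # spatial 2:1 average
assert np.allclose(D8 @ y_ref @ D8.T, Y)
print("DCT-domain downsampling matches the spatial-domain route.")
```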
International Symposium on Information Theory | 1993
Meir Feder; Neri Merhav
The relation between the entropy of a discrete random variable and the minimum attainable probability of error made in guessing its value is examined. While Fano's inequality provides a tight lower bound on the error probability in terms of the entropy, the present authors derive a converse result: a tight upper bound on the minimal error probability in terms of the entropy. Both bounds are sharp, and they yield a relation, as well, between the error probability of the maximum a posteriori (MAP) rule and the conditional entropy (equivocation), which is a useful uncertainty measure in several applications. Combining this relation with the classical channel coding theorem, the authors present a channel coding theorem for the equivocation which, unlike the channel coding theorem for error probability, is meaningful at all rates. This theorem is proved directly for DMCs, and from this proof it is further concluded that for R ≥ C the equivocation achieves its minimal value of R - C at the rate of n^{-1/2}, where n is the block length.
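A quick numerical illustration of the two directions (Fano's bound and the converse) is sketched below on randomly drawn distributions. The piecewise-linear form used for the converse is my reading of the result, interpolating the points (log2 i, (i-1)/i); for a binary variable it reduces to the familiar bound P_e ≤ H/2.

```python
import numpy as np

rng = np.random.default_rng(3)

def h2(p):
    """Binary entropy in bits."""
    return 0.0 if p in (0.0, 1.0) else -p*np.log2(p) - (1-p)*np.log2(1-p)

def converse_bound(H, M):
    """Piecewise-linear upper bound on P_e given H (bits), as I read the paper:
    linear interpolation of the points (log2 i, (i-1)/i), i = 1..M."""
    for i in range(1, M):
        if H <= np.log2(i + 1):
            lo, hi = np.log2(i), np.log2(i + 1)
            return (i-1)/i + (H - lo) / (hi - lo) * (i/(i+1) - (i-1)/i)
    return (M - 1) / M

M = 5
for _ in range(5):
    p = rng.dirichlet(np.ones(M))            # a random pmf on M symbols
    Pe = 1.0 - p.max()                       # MAP error probability
    H = -(p * np.log2(p)).sum()
    fano_ok = H <= h2(Pe) + Pe*np.log2(M-1)  # Fano: lower-bounds Pe given H
    conv_ok = Pe <= converse_bound(H, M)     # converse: upper-bounds Pe given H
    print(f"H={H:.3f}  Pe={Pe:.3f}  Fano holds: {fano_ok}  converse holds: {conv_ok}")
```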
IEEE Transactions on Information Theory | 1989
Neri Merhav; Michael Gutman; Jacob Ziv
The authors estimate the order of a finite Markov source based on empirically observed statistics. The performance criterion adopted is to minimize the probability of underestimating the model order while keeping the overestimation probability exponent at a prescribed level. A universal asymptotically optimal test, in the sense just defined, is proposed for the case where a given integer is known to be an upper bound on the true order. For the case where such a bound is unavailable, an alternative rule based on the Lempel-Ziv data compression algorithm is shown to be asymptotically optimal as well, and computationally more efficient.
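The flavor of such a test can be conveyed with a penalized empirical-entropy rule: accept the smallest order whose empirical conditional entropy comes within a vanishing margin of the richest model considered. This is a simplified stand-in for the paper's optimal test, and all parameters (source, penalty, maximal order) are illustrative.

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(4)

# Generate a binary second-order Markov source (parameters are illustrative).
n, order = 20_000, 2
probs = {tuple(map(int, np.binary_repr(i, order))): rng.uniform(0.1, 0.9)
         for i in range(2**order)}          # P(next bit = 1 | context)
x = [0, 1]
for _ in range(n - 2):
    x.append(int(rng.random() < probs[tuple(x[-order:])]))
x = np.array(x)

def cond_entropy(x, k):
    """Empirical conditional entropy H(X_t | X_{t-k..t-1}) in bits."""
    joint = Counter(tuple(x[i-k:i+1]) for i in range(k, len(x)))
    marg  = Counter(tuple(x[i-k:i])   for i in range(k, len(x)))
    N = len(x) - k
    return -sum(c/N * np.log2(c / marg[key[:-1]]) for key, c in joint.items())

# Penalized rule: smallest k whose empirical conditional entropy is within a
# (log n)/n-style penalty of the largest order K considered.
K = 5
H = [cond_entropy(x, k) for k in range(K + 1)]
pen = [2**k * np.log2(len(x)) / len(x) for k in range(K + 1)]
order_hat = min(k for k in range(K + 1) if H[k] - H[K] <= pen[k])
print("conditional entropies:", [f"{v:.3f}" for v in H])
print("estimated order:", order_hat)
```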
International Symposium on Information Theory | 1993
Jacob Ziv; Neri Merhav
A new notion of empirical informational divergence (relative entropy) between two individual sequences is introduced. If the two sequences are independent realizations of two finite-order, finite alphabet, stationary Markov processes, the empirical relative entropy converges to the relative entropy almost surely. This empirical divergence is based on a version of the Lempel-Ziv data compression algorithm. A simple universal algorithm for classifying individual sequences into a finite number of classes, which is based on the empirical divergence, is introduced. The algorithm discriminates between the classes whenever they are distinguishable by some finite-memory classifier for almost every given training set and almost any test sequence from these classes. It is universal in the sense that it is independent of the unknown sources.
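Below is a rough, hedged sketch of the idea: parse z sequentially into phrases, each the longest prefix of the remainder that occurs somewhere in the reference sequence x (extended by one symbol), and combine the cross-parse phrase count with an LZ78 self-parse to estimate the divergence. The paper's exact phrase-counting and normalization details differ, and at these modest sequence lengths the estimates are loose; only the ordering between same-source and different-source pairs should be trusted.

```python
import numpy as np

rng = np.random.default_rng(5)

def lz78_phrases(z):
    """Number of phrases in the LZ78 incremental parsing of z."""
    seen, w, c = set(), "", 0
    for ch in z:
        w += ch
        if w not in seen:
            seen.add(w)
            c, w = c + 1, ""
    return c + (1 if w else 0)

def cross_phrases(z, x):
    """Greedy sequential parsing of z into phrases found in x: each phrase is
    the longest prefix of the rest of z occurring in x, plus one symbol."""
    i, c = 0, 0
    while i < len(z):
        j = i + 1
        while j <= len(z) and z[i:j] in x:
            j += 1
        i, c = j, c + 1
    return c

def zm_divergence(z, x):
    """Ziv-Merhav-style empirical divergence estimate, bits/symbol (a sketch)."""
    n, c_self, c_cross = len(z), lz78_phrases(z), cross_phrases(z, x)
    return (c_cross * np.log2(n) - c_self * np.log2(c_self)) / n

bits = lambda p, n: "".join("1" if rng.random() < p else "0" for _ in range(n))
x, z_same, z_diff = bits(0.2, 20_000), bits(0.2, 20_000), bits(0.5, 20_000)
print(f"D(z_same || x) ~ {zm_divergence(z_same, x):.3f} bits/symbol")
print(f"D(z_diff || x) ~ {zm_divergence(z_diff, x):.3f} bits/symbol")
```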
IEEE Transactions on Information Theory | 1992
Ofer Zeitouni; Jacob Ziv; Neri Merhav
The generalized likelihood ratio test (GLRT), which is commonly used in composite hypothesis testing problems, is investigated. Conditions for asymptotic optimality of the GLRT in the Neyman-Pearson sense are studied and discussed. First, a general necessary and sufficient condition is established, and then, based on this, a sufficient condition, which is easier to verify, is derived. A counterexample in which the GLRT is not optimal is provided as well. A conjecture is stated concerning the optimality of the GLRT for the class of finite-state sources.
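As a concrete instance, the sketch below runs the GLRT for a composite Bernoulli alternative, where the normalized log generalized likelihood ratio reduces to a KL divergence and the false-alarm probability decays with the exponent set by the threshold; the threshold, sample sizes, and parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(6)

def d_kl(p, q):
    """Binary KL divergence D(p||q) in nats."""
    eps = 1e-12
    p = min(max(p, eps), 1 - eps)
    return p*np.log(p/q) + (1-p)*np.log((1-p)/(1-q))

def glrt(x, p0, thresh):
    """GLRT for H0: Bernoulli(p0) vs composite H1: Bernoulli(theta), theta unknown.
    The ML estimate under H1 is the empirical mean, so the normalized
    log generalized likelihood ratio equals D(p_hat || p0)."""
    return d_kl(x.mean(), p0) > thresh       # True = reject H0

n, p0, thresh = 1000, 0.5, 0.01              # threshold sets the false-alarm exponent
h0_rejects = np.mean([glrt((rng.random(n) < p0).astype(int), p0, thresh)
                      for _ in range(500)])
h1_rejects = np.mean([glrt((rng.random(n) < 0.6).astype(int), p0, thresh)
                      for _ in range(500)])
print(f"false-alarm rate ~ {h0_rejects:.3f} (Sanov: exp(-n*thresh) = {np.exp(-n*thresh):.1e})")
print(f"detection rate under theta=0.6 ~ {h1_rejects:.3f}")
```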
IEEE Transactions on Information Theory | 1998
Erdal Arikan; Neri Merhav
We investigate the problem of guessing a random vector X within distortion level D. Our aim is to characterize the best attainable performance in the sense of minimizing, in some probabilistic sense, the number of required guesses G(X) until the error falls below D. The underlying motivation is that G(X) is the number of candidate codewords to be examined by a rate-distortion block encoder until a satisfactory codeword is found. In particular, for memoryless sources, we provide a single-letter characterization of the least achievable exponential growth rate of the ρth moment of G(X) as the dimension of the random vector X grows without bound. In this context, we propose an asymptotically optimal guessing scheme that is universal both with respect to the information source and the value of ρ. We then study some properties of the exponent function E(D, ρ) along with its relation to the source-coding exponents. Finally, we provide extensions of our main results to the Gaussian case, guessing with side information, and sources with memory.
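For intuition, the sketch below works out the lossless special case (D = 0, following Arikan's earlier guessing framework): guessing in decreasing order of probability minimizes every moment of G(X), and the growth rate of E[G^ρ] approaches a Rényi entropy of order 1/(1+ρ). The source and parameters are illustrative, and at finite n the empirical exponent sits below its limit.

```python
import numpy as np
from itertools import product

# Lossless special case: guess an i.i.d. vector X in decreasing order of
# probability; (1/(n*rho)) log2 E[G^rho] tends to the Renyi entropy H_{1/(1+rho)}.
p = np.array([0.7, 0.2, 0.1])                # illustrative source
rho, n = 1.0, 8

probs = np.sort([np.prod(v) for v in product(p, repeat=n)])[::-1]
G = np.arange(1, len(probs) + 1)             # guess index of each outcome
moment = (probs * G**rho).sum()              # E[G^rho] under the optimal order

alpha = 1.0 / (1.0 + rho)
renyi = np.log2((p**alpha).sum()) / (1.0 - alpha)
print(f"(1/(n*rho)) log2 E[G^rho] = {np.log2(moment)/(n*rho):.4f}")
print(f"Renyi entropy H_(1/(1+rho)) = {renyi:.4f} (asymptotic exponent)")
```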
International Symposium on Information Theory | 1993
Neri Merhav; Meir Feder
Sequential decision algorithms are investigated in relation to a family of additive performance criteria for individual data sequences. Simple universal sequential schemes are known, under certain conditions, to approach optimality uniformly as fast as n^{-1} log n, where n is the sample size. For the case of finite-alphabet observations, the class of schemes that can be implemented by finite-state machines (FSMs) is studied. It is shown that Markovian machines with sufficiently long memory exist, which are asymptotically nearly as good as any given deterministic or randomized FSM for the purpose of sequential decision. For the continuous-valued observation case, a useful class of parametric schemes is discussed with special attention to the recursive least squares algorithm.
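Since the abstract singles out the recursive least squares algorithm for the continuous-valued case, here is a standard textbook RLS one-step predictor run on a synthetic AR(2) signal; the signal model, forgetting factor, and initialization are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(7)

def rls_predict(x, order=2, lam=0.999, delta=100.0):
    """Recursive least squares linear predictor of x[t] from its `order` most
    recent samples, with forgetting factor lam (standard textbook form)."""
    w = np.zeros(order)                  # predictor coefficients
    P = delta * np.eye(order)            # inverse-correlation estimate
    preds = np.zeros(len(x))
    for t in range(order, len(x)):
        u = x[t-order:t][::-1]           # regressor: most recent sample first
        preds[t] = w @ u
        k = P @ u / (lam + u @ P @ u)    # gain vector
        w = w + k * (x[t] - preds[t])    # correct toward the prediction error
        P = (P - np.outer(k, u @ P)) / lam
    return preds

# Synthetic stable AR(2) test signal (coefficients illustrative).
n = 5000
x = np.zeros(n)
for t in range(2, n):
    x[t] = 1.2*x[t-1] - 0.5*x[t-2] + 0.1*rng.standard_normal()

preds = rls_predict(x)
mse = np.mean((x[100:] - preds[100:])**2)
print(f"RLS one-step prediction MSE: {mse:.5f} (noise variance 0.01)")
```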