
Publication


Featured research published by Meir Feder.


IEEE Transactions on Acoustics, Speech, and Signal Processing | 1988

Parameter estimation of superimposed signals using the EM algorithm

Meir Feder; Ehud Weinstein

A computationally efficient algorithm for parameter estimation of superimposed signals based on the two-step iterative EM (estimate-and-maximize, with an E step and an M step) algorithm is developed. The idea is to decompose the observed data into their signal components and then to estimate the parameters of each signal component separately. The algorithm iterates back and forth, using the current parameter estimates to decompose the observed data better and thus increase the likelihood of the next parameter estimates. The application of the algorithm to the multipath time delay and multiple-source location estimation problems is considered.
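
The decompose-then-estimate idea can be sketched for a toy case: superimposed unit-amplitude cosines in noise, where only the frequencies are unknown. This is an illustrative sketch, not the paper's exact algorithm; here the full residual is attributed to the component currently being updated (one admissible choice of the decomposition weights), and each M step is a matched-filter grid search.

```python
import numpy as np

def em_superimposed_freqs(y, t, f_init, n_iter=5):
    """EM-style frequency estimation for superimposed unit-amplitude
    cosines (illustrative sketch). E step: decompose y into per-component
    signals, giving the whole residual to the component being updated.
    M step: re-estimate that component's frequency by matched filtering.
    Assumes positive unit amplitudes."""
    grid = np.linspace(0.0, 0.5, 2001)            # candidate frequencies
    bank = np.cos(2 * np.pi * grid[:, None] * t)  # matched-filter bank
    freqs = np.array(f_init, dtype=float)
    for _ in range(n_iter):
        for k in range(len(freqs)):
            others = sum(np.cos(2 * np.pi * f * t)
                         for j, f in enumerate(freqs) if j != k)
            y_k = y - others                      # decomposed data for component k
            freqs[k] = grid[np.argmax(bank @ y_k)]
    return np.sort(freqs)
```

On 200 samples of two cosines at 0.10 and 0.23 cycles/sample plus mild noise, the sketch recovers both frequencies from a rough initialization such as (0.08, 0.25).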


IEEE Transactions on Speech and Audio Processing | 1993

Multi-channel signal separation by decorrelation

Ehud Weinstein; Meir Feder; Alan V. Oppenheim

Identification of an unknown system and recovery of the input signals from observations of the outputs of an unknown multiple-input, multiple-output linear system are considered. Attention is focused on the two-channel case, in which the outputs of a 2×2 linear time-invariant system are observed. The approach consists of reconstructing the input signals by assuming that they are statistically uncorrelated and imposing this constraint on the signal estimates. In order to restrict the set of solutions, additional information on the true signal generation and/or on the form of the coupling systems is incorporated. Specific algorithms are developed and tested. As a special case, these algorithms suggest a potentially interesting modification of Widrow's (1975) least-squares method for noise cancellation, where the reference signal contains a component of the desired signal.
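
As a rough illustration of the decorrelation criterion (not the authors' algorithm), consider the memoryless special case y1 = s1 + a·s2, y2 = b·s1 + s2. A single lag-0 decorrelation constraint cannot pin down both couplings, so, in the spirit of the abstract's "additional information", this sketch assumes the sources have different spectra and also decorrelates the estimates at lag 1, solving the two resulting moment equations by Newton iteration. The function name and setup are hypothetical.

```python
import numpy as np

def estimate_coupling(y1, y2, n_iter=10):
    """Estimate cross-couplings (a, b) of an instantaneous 2x2 mixture
    y1 = s1 + a*s2, y2 = b*s1 + s2 by forcing the reconstructed signals
    to be uncorrelated at lags 0 and 1 (simplified sketch; the lag-1
    constraint is informative only when the sources have different
    spectra). Newton iteration with a finite-difference Jacobian."""
    def constraints(ab):
        a, b = ab
        s1 = y1 - a * y2                 # unscaled source estimates
        s2 = y2 - b * y1
        return np.array([np.mean(s1 * s2),            # lag-0 cross-corr
                         np.mean(s1[:-1] * s2[1:])])  # lag-1 cross-corr
    ab, eps = np.zeros(2), 1e-6
    for _ in range(n_iter):
        c = constraints(ab)
        J = np.column_stack([(constraints(ab + eps * e) - c) / eps
                             for e in np.eye(2)])
        ab = ab - np.linalg.solve(J, c)
    return ab
```

With s1 white and s2 a first-order autoregressive process, the iteration recovers the couplings from the mixtures alone.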


IEEE Transactions on Information Theory | 1995

A universal finite memory source

Marcelo J. Weinberger; Jorma Rissanen; Meir Feder

An irreducible parameterization for a finite memory source is constructed in the form of a tree machine. A universal information source for the set of finite memory sources is constructed by a predictive modification of the earlier-studied algorithm Context. It is shown that this universal source incorporates any minimal data-generating tree machine in an asymptotically optimal manner in the following sense: the negative logarithm of the probability it assigns to any long typical sequence, generated by any tree machine, approaches that assigned by the tree machine at the best possible rate.
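
A much simpler cousin of this predictive tree-model idea can be sketched with a fixed-depth context model: each length-d context keeps symbol counts and assigns the next-symbol probability with the Krichevsky-Trofimov estimator. (The paper's Context algorithm additionally adapts the tree structure itself; this fixed-depth sketch is only illustrative.)

```python
import math

def kt_context_codelength(bits, depth=2):
    """Sequential probability assignment over binary sequences using a
    Krichevsky-Trofimov estimator per fixed-length context (sketch of a
    predictive finite-memory model). Returns code length, in bits per
    symbol, of the assigned probabilities."""
    counts = {}                          # context -> [count of 0s, count of 1s]
    total = 0.0
    for i in range(depth, len(bits)):
        ctx = tuple(bits[i - depth:i])
        c = counts.setdefault(ctx, [0, 0])
        # KT predictive probability of the observed next bit
        p = (c[bits[i]] + 0.5) / (c[0] + c[1] + 1.0)
        total -= math.log2(p)
        c[bits[i]] += 1
    return total / (len(bits) - depth)
```

On an i.i.d. Bernoulli(0.2) sequence the per-symbol code length approaches the entropy H(0.2) ≈ 0.722 bits, with the small per-context redundancy the KT estimator is known for.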


IEEE Transactions on Information Theory | 1992

On universal quantization by randomized uniform/lattice quantizers

Ram Zamir; Meir Feder

Uniform quantization with dither, or lattice quantization with dither in the vector case, followed by a universal lossless source encoder (entropy coder), is a simple procedure for universal coding with distortion of a source that may take continuously many values. The rate of this universal coding scheme is examined, and a general expression is derived for it. An upper bound for the redundancy of this scheme, defined as the difference between its rate and the minimal possible rate, given by the rate distortion function of the source, is derived. This bound holds for all distortion levels. Furthermore, a composite upper bound on the redundancy as a function of the quantizer resolution that leads to a tighter bound in the high rate (low distortion) case is presented.
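
The front end of this scheme is easy to demonstrate numerically: with subtractive dither drawn uniformly over a quantizer cell, the reconstruction error is uniform on that cell and statistically independent of the input, whatever the source distribution. This sketch shows only the dithered quantizer; the universal entropy coder that follows it is omitted.

```python
import numpy as np

def dithered_quantize(x, delta, rng):
    """Scalar uniform quantizer with subtractive dither: the encoder adds
    dither d ~ Uniform[-delta/2, delta/2), quantizes to the nearest
    multiple of delta, and the receiver subtracts the same dither.
    The error y - x is then uniform on [-delta/2, delta/2] and
    independent of x."""
    d = rng.uniform(-delta / 2, delta / 2, size=x.shape)
    q = delta * np.round((x + d) / delta)   # transmitted lattice point
    return q - d                            # receiver subtracts the dither

rng = np.random.default_rng(0)
x = rng.standard_normal(100_000)
delta = 0.5
err = dithered_quantize(x, delta, rng) - x
# err has mean ~0, variance ~delta**2/12, and is uncorrelated with x
```

This error behavior is exactly what makes the scheme act like the source passed through an additive noise channel, which is the handle used to bound the redundancy at all distortion levels.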


International Symposium on Information Theory | 1993

Relations between entropy and error probability

Meir Feder; Neri Merhav

The relation between the entropy of a discrete random variable and the minimum attainable probability of error made in guessing its value is examined. While Fano's inequality provides a tight lower bound on the error probability in terms of the entropy, the present authors derive a converse result: a tight upper bound on the minimal error probability in terms of the entropy. Both bounds are sharp, and they also relate the error probability of the maximum a posteriori (MAP) rule to the conditional entropy (equivocation), which is a useful uncertainty measure in several applications. Combining this relation with the classical channel coding theorem, the authors present a channel coding theorem for the equivocation which, unlike the channel coding theorem for error probability, is meaningful at all rates. This theorem is proved directly for DMCs, and from this proof it is further concluded that for R ≥ C the equivocation achieves its minimal value of R − C at a rate of n^(1/2), where n is the block length.
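
The Fano-side relation is easy to check numerically in the no-observation setting: guessing X by its most likely value gives Pe = 1 − max p, and H(X) ≤ h(Pe) + Pe·log2(M−1). The sketch below verifies this over random distributions and shows that equality holds when the wrong values are equiprobable (the converse direction, the authors' tight upper bound on Pe, is not implemented here).

```python
import numpy as np

def entropy(p):
    """Shannon entropy in bits, with 0*log(0) taken as 0."""
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def fano_slack(p):
    """h(Pe) + Pe*log2(M-1) - H(p) for the MAP guess with no side
    information; nonnegative by Fano's inequality."""
    pe = 1.0 - float(p.max())
    bound = entropy(np.array([pe, 1.0 - pe])) + pe * np.log2(len(p) - 1)
    return bound - entropy(p)

rng = np.random.default_rng(0)
for _ in range(1000):
    p = rng.dirichlet(np.ones(8))
    assert fano_slack(p) >= -1e-9       # Fano holds for every distribution
```

For p = (0.7, 0.1, 0.1, 0.1) the slack is exactly zero: the bound is achieved when all incorrect values share the error probability equally.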


IEEE Transactions on Information Theory | 2008

Low-Density Lattice Codes

Naftali Sommer; Meir Feder; Ofir Shalvi

Low-density lattice codes (LDLC) are novel lattice codes that can be decoded efficiently and approach the capacity of the additive white Gaussian noise (AWGN) channel. In LDLC a codeword x is generated directly in the n-dimensional Euclidean space as a linear transformation of a corresponding integer message vector b, i.e., x = Gb, where H = G^(-1) is restricted to be sparse. The fact that H is sparse is utilized to develop a linear-time iterative decoding scheme which attains, as demonstrated by simulations, good error performance within ~0.5 dB from capacity at a block length of n = 100,000 symbols. The paper also discusses convergence results and implementation considerations.
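
The algebraic structure is simple to illustrate in a few lines: pick a sparse nonsingular H, take G = H^(-1) as the generator, and observe that applying H to a codeword returns the integer message exactly. This toy construction is only illustrative; real LDLC designs use "Latin square" H matrices with controlled degrees and values, and the iterative AWGN decoder is omitted here.

```python
import numpy as np

def make_sparse_h(n, rng):
    """Small sparse, strictly diagonally dominant (hence nonsingular) H:
    identity plus one off-diagonal entry of magnitude 0.3 per row.
    Illustrative only; not a real LDLC construction."""
    h = np.eye(n)
    for i in range(n):
        j = (i + rng.integers(1, n)) % n        # j != i
        h[i, j] += 0.3 * rng.choice([-1.0, 1.0])
    return h

rng = np.random.default_rng(0)
n = 8
H = make_sparse_h(n, rng)
G = np.linalg.inv(H)                 # generator matrix: codeword x = G @ b
b = rng.integers(-5, 6, size=n)      # integer message vector
x = G @ b                            # lattice point in Euclidean space
b_hat = np.round(H @ x).astype(int)  # noiseless check: H x = b exactly
```

In the noiseless case rounding H·x recovers b; the paper's contribution is making this recovery work near capacity under AWGN via sparse-H message passing.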


IEEE Transactions on Information Theory | 2006

Distortion Bounds for Broadcasting With Bandwidth Expansion

Zvi Reznic; Meir Feder; Ram Zamir

We consider the problem of broadcasting a single Gaussian source to two listeners over a Gaussian broadcast channel, with ρ channel uses per source sample, where ρ > 1. A distortion pair (D1, D2) is said to be achievable if one can simultaneously achieve a mean-squared error (MSE) D1 at receiver 1 and D2 at receiver 2. The main result of this correspondence is an outer bound for the set of all achievable distortion pairs. That is, we find necessary conditions under which (D1, D2) is achievable. We then apply this result to the problem of point-to-point transmission over a Gaussian channel with unknown signal-to-noise ratio (SNR) and ρ > 1. We show that if a system must be optimal at a certain SNR_min, then, asymptotically, the system distortion cannot decay faster than O(1/SNR). As for achievability, we show that a previously reported scheme, due to Mittal and Phamdo (2002), is optimal at high SNR. We introduce two new schemes for broadcasting with bandwidth expansion, combining digital and analog transmissions. We finally show how a system with a partial feedback, returning from the bad receiver to the transmitter and to the good receiver, achieves a distortion pair that lies on the outer bound derived here.
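
To make the scaling gap concrete: for a unit-variance Gaussian source over an AWGN channel with ρ channel uses per sample, the point-to-point optimum is D(SNR) = (1 + SNR)^(−ρ), which decays like SNR^(−ρ). The snippet below compares that curve with an illustrative O(1/SNR) curve anchored at SNR_min (the anchored curve is only a sketch of the scaling statement, not the paper's exact outer bound).

```python
def opta(snr, rho):
    """Point-to-point optimum for a unit-variance Gaussian source over an
    AWGN channel with rho channel uses per source sample:
    D = (1 + SNR) ** (-rho)."""
    return (1.0 + snr) ** (-rho)

rho, snr_min = 2.0, 10.0
for snr in (10.0, 100.0, 1000.0):
    d_opt = opta(snr, rho)                  # decays like SNR**(-rho)
    # illustrative 1/SNR decay anchored at snr_min: a system forced to
    # be optimal at snr_min cannot improve faster than this order
    d_anchored = opta(snr_min, rho) * (1.0 + snr_min) / (1.0 + snr)
```

At ρ = 2 the optimum gains 20 dB of distortion per 10 dB of SNR, while the anchored curve gains only 10 dB per 10 dB, which is the asymptotic penalty the outer bound quantifies.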


IEEE Transactions on Image Processing | 1994

Image compression via improved quadtree decomposition algorithms

Eli Shusterman; Meir Feder

Quadtree decomposition is a simple technique used to obtain an image representation at different resolution levels. This representation can be useful for a variety of image processing and image compression algorithms. This paper presents a simple way to obtain better compression performance (in the MSE sense) via quadtree decomposition, by using a near-optimal choice of the threshold for quadtree decomposition and a bit allocation procedure based on equations derived from rate-distortion theory. The rate-distortion performance of the improved algorithm is calculated for a Gaussian field, and it is examined via simulation over benchmark gray-level images. In both cases, significant improvement in compression performance is shown.
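
The basic threshold-driven decomposition is a short recursion: split a square block into four quadrants whenever its variance exceeds the threshold, and represent each leaf by its mean. This is a minimal sketch of the decomposition step only; the paper's contribution is the near-optimal threshold choice and the rate-distortion-based bit allocation, neither of which is shown here.

```python
import numpy as np

def quadtree_blocks(img, thr):
    """Split a square image (side a power of two) until every block's
    variance is at most `thr`; each leaf is stored as (row, col, size,
    mean)."""
    blocks = []
    def split(r, c, n):
        block = img[r:r + n, c:c + n]
        if n == 1 or block.var() <= thr:
            blocks.append((r, c, n, float(block.mean())))
        else:
            h = n // 2
            for dr, dc in ((0, 0), (0, h), (h, 0), (h, h)):
                split(r + dr, c + dc, h)
    split(0, 0, img.shape[0])
    return blocks

def reconstruct(blocks, shape):
    """Paint each leaf block with its stored mean."""
    out = np.empty(shape)
    for r, c, n, mean in blocks:
        out[r:r + n, c:c + n] = mean
    return out
```

For example, an 8×8 image with one flat bright quadrant decomposes into exactly four leaves and reconstructs with zero MSE, while a textured image trades leaf count against reconstruction error through the threshold.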


IEEE Transactions on Signal Processing | 1999

Universal linear prediction by model order weighting

Andrew C. Singer; Meir Feder

A common problem that arises in adaptive filtering, autoregressive modeling, or linear prediction is the selection of an appropriate order for the underlying linear parametric model. We address this problem for linear prediction, but instead of fixing a specific model order, we develop a sequential prediction algorithm whose sequentially accumulated average squared prediction error for any bounded individual sequence is as good as the performance attainable by the best sequential linear predictor of order less than some M. This predictor is found by transforming linear prediction into a problem analogous to the sequential probability assignment problem from universal coding theory. The resulting universal predictor uses essentially a performance-weighted average of all predictors for model orders less than M. Efficient lattice filters are used to generate the predictions of all the models recursively, resulting in a complexity of the universal algorithm that is no larger than that of the largest model order. Examples of prediction performance are provided for autoregressive and speech data as well as an example of adaptive data equalization.
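
The mixture idea can be sketched compactly: run one sequential predictor per model order and combine their outputs with weights that decay exponentially in each predictor's accumulated squared error. This is an illustrative simplification: the paper generates all orders recursively with lattice filters, whereas this sketch runs an independent normalized-LMS predictor per order (hypothetical parameter choices).

```python
import numpy as np

def universal_predict(x, max_order=3, mu=0.05, eta=0.5):
    """Performance-weighted mixture of NLMS linear predictors of orders
    1..max_order (sketch of universal prediction by model order
    weighting). Returns the mixture predictions and each order's
    cumulative squared prediction error."""
    w = [np.zeros(m) for m in range(1, max_order + 1)]
    loss = np.zeros(max_order)           # cumulative squared error per order
    preds = np.zeros(len(x))
    for t in range(max_order, len(x)):
        yhat = np.empty(max_order)
        for m in range(max_order):
            past = x[t - m - 1:t][::-1]  # most recent m+1 samples
            yhat[m] = w[m] @ past
        weights = np.exp(-eta * (loss - loss.min()))
        weights /= weights.sum()
        preds[t] = weights @ yhat        # exponentially weighted mixture
        for m in range(max_order):
            past = x[t - m - 1:t][::-1]
            err = x[t] - yhat[m]
            loss[m] += err * err
            w[m] += mu * err * past / (past @ past + 1e-8)  # NLMS update
    return preds, loss
```

On an AR(2) sequence the mixture's cumulative squared error tracks that of the best individual model order, without fixing the order in advance.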


IEEE Transactions on Signal Processing | 1994

Iterative and sequential algorithms for multisensor signal enhancement

Ehud Weinstein; Alan V. Oppenheim; Meir Feder; John R. Buck

In problems of enhancing a desired signal in the presence of noise, multiple sensor measurements will typically have components from both the signal and the noise sources. When the systems that couple the signal and the noise to the sensors are unknown, the problem becomes one of joint signal estimation and system identification. The authors specifically consider the two-sensor signal enhancement problem in which the desired signal is modeled as a Gaussian autoregressive (AR) process, the noise is modeled as a white Gaussian process, and the coupling systems are modeled as linear time-invariant finite impulse response (FIR) filters. The main approach consists of modeling the observed signals as outputs of a stochastic dynamic linear system, and the authors apply the estimate-maximize (EM) algorithm for jointly estimating the desired signal, the coupling systems, and the unknown signal and noise spectral parameters. The resulting algorithm can be viewed as the time-domain version of the frequency-domain approach of Feder et al. (1989), where instead of the noncausal frequency-domain Wiener filter, the Kalman smoother is used. This approach leads naturally to a sequential/adaptive algorithm by replacing the Kalman smoother with the Kalman filter, and in place of successive iterations on each data block, the algorithm proceeds sequentially through the data with exponential weighting applied to allow adaption to nonstationary changes in the structure of the data. A computationally efficient implementation of the algorithm is developed. An expression for the log-likelihood gradient based on the Kalman smoother/filter output is also developed and used to incorporate efficient gradient-based algorithms in the estimation process.
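
The sequential building block of this approach, the Kalman filter applied to an AR signal in noise, can be sketched in the simplest single-sensor scalar case. This is only the inner estimation step under known parameters; the paper's EM algorithm additionally estimates the AR, noise, and coupling parameters, and handles two sensors.

```python
import numpy as np

def kalman_ar1(y, a, q, r):
    """Scalar Kalman filter for an AR(1) signal s[t] = a*s[t-1] + w[t],
    Var(w) = q, observed as y[t] = s[t] + v[t], Var(v) = r. Returns the
    filtered signal estimates (sketch of the sequential enhancement
    step with known parameters)."""
    s_hat, p = 0.0, q / (1.0 - a * a)   # stationary prior mean/variance
    out = np.empty(len(y))
    for t, yt in enumerate(y):
        s_pred = a * s_hat               # predict
        p_pred = a * a * p + q
        k = p_pred / (p_pred + r)        # Kalman gain
        s_hat = s_pred + k * (yt - s_pred)   # update
        p = (1.0 - k) * p_pred
        out[t] = s_hat
    return out
```

For a strongly correlated AR(1) signal (a = 0.95) in unit-variance noise, the filtered estimate reduces the mean-squared error well below that of the raw observations, which is the effect the full two-sensor EM/Kalman scheme exploits iteratively.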

Collaboration


Dive into Meir Feder's collaboration.

Top Co-Authors
