Publication


Featured research published by Marcelo J. Weinberger.


IEEE Transactions on Information Theory | 1995

A universal finite memory source

Marcelo J. Weinberger; Jorma Rissanen; Meir Feder

An irreducible parameterization for a finite memory source is constructed in the form of a tree machine. A universal information source for the set of finite memory sources is constructed by a predictive modification of the earlier studied algorithm Context. It is shown that this universal source incorporates any minimal data-generating tree machine in an asymptotically optimal manner in the following sense: the negative logarithm of the probability it assigns to any long typical sequence generated by any tree machine approaches that assigned by the tree machine at the best possible rate.
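
The flavor of such predictive probability assignment can be illustrated with a fixed-depth context model. The sketch below is not the Context algorithm itself (which grows the tree and selects context depths adaptively); it is a minimal Python illustration, assuming a binary alphabet and an order-k Markov model with a Krichevsky-Trofimov (KT) estimator per context.

```python
# Minimal sketch (not the Context algorithm itself): sequential probability
# assignment for a binary source using a FIXED-depth context, i.e. an
# order-k Markov model, with a Krichevsky-Trofimov (KT) estimator per
# context. Algorithm Context additionally grows the tree and selects
# context depths adaptively; that machinery is omitted here.
from collections import defaultdict
import math

def kt_code_length(seq, k=1):
    """Ideal code length -log2 P(seq) under a KT-estimated order-k model
    (the first few symbols are coded with shorter contexts)."""
    counts = defaultdict(lambda: [0, 0])  # context -> [zeros seen, ones seen]
    code_len = 0.0
    for i, sym in enumerate(seq):
        ctx = tuple(seq[max(0, i - k):i])               # k preceding symbols
        c0, c1 = counts[ctx]
        p = (counts[ctx][sym] + 0.5) / (c0 + c1 + 1.0)  # KT predictive rule
        code_len -= math.log2(p)
        counts[ctx][sym] += 1
    return code_len

seq = [0, 1] * 8                       # strongly order-1 data
print(kt_code_length(seq, k=1))        # short: the model captures the pattern
print(kt_code_length(seq, k=0))        # longer: a memoryless model misses it
```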


International Symposium on Information Theory | 2003

Universal discrete denoising: known channel

Tsachy Weissman; Erik Ordentlich; Gadiel Seroussi; Sergio Verdú; Marcelo J. Weinberger

A discrete denoising algorithm estimates the input sequence to a discrete memoryless channel (DMC) based on the observation of the entire output sequence. For the case in which the DMC is known and the quality of the reconstruction is evaluated with a given single-letter fidelity criterion, we propose a discrete denoising algorithm that does not assume knowledge of statistical properties of the input sequence. Yet, the algorithm is universal in the sense of asymptotically performing as well as the optimum denoiser that knows the input sequence distribution, which is only assumed to be stationary. Moreover, the algorithm is universal also in a semi-stochastic setting, in which the input is an individual sequence, and the randomness is due solely to the channel noise. The proposed denoising algorithm is practical, requiring a linear number of register-level operations and sublinear working storage size relative to the input data length.
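
For concreteness, here is a minimal sketch of the two-pass DUDE idea for the special case of a binary sequence observed through a binary symmetric channel (BSC) with known crossover probability delta, under Hamming loss. The flipping threshold follows from substituting the BSC and Hamming loss into the generic DUDE rule; the context length k and the function name are illustrative choices, not the paper's notation.

```python
# Hedged sketch of the two-pass DUDE for a BSC with crossover delta under
# Hamming loss. Pass 1 counts noisy symbols within each two-sided context;
# pass 2 flips a symbol when the same-symbol count is small relative to
# the opposite-symbol count.
from collections import defaultdict

def dude_bsc(z, delta, k=2):
    n = len(z)
    counts = defaultdict(lambda: [0, 0])
    for i in range(k, n - k):                     # pass 1: gather counts
        ctx = (tuple(z[i - k:i]), tuple(z[i + 1:i + 1 + k]))
        counts[ctx][z[i]] += 1
    # Threshold 2d(1-d)/(d^2+(1-d)^2) results from the BSC + Hamming-loss
    # instance of the generic DUDE decision rule.
    thresh = 2 * delta * (1 - delta) / (delta ** 2 + (1 - delta) ** 2)
    xhat = list(z)
    for i in range(k, n - k):                     # pass 2: denoise
        ctx = (tuple(z[i - k:i]), tuple(z[i + 1:i + 1 + k]))
        m_same, m_opp = counts[ctx][z[i]], counts[ctx][1 - z[i]]
        if m_same < thresh * m_opp:
            xhat[i] = 1 - z[i]
    return xhat
```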


International Conference on Image Processing | 2000

Embedded block coding in JPEG2000

David Taubman; Erik Ordentlich; Marcelo J. Weinberger; Gadiel Seroussi; Ikuro Ueno; Fumitaka Ono

This paper describes the embedded block coding algorithm at the heart of the JPEG2000 image compression standard. The algorithm achieves excellent compression performance, usually somewhat higher than that of SPIHT with arithmetic coding, and in some cases substantially higher. The algorithm utilizes the same low-complexity binary arithmetic coding engine as JBIG2. Together with careful design of the bit-plane coding primitives, this enables execution speed comparable to that observed with the simpler variant of SPIHT without arithmetic coding. The coder offers additional advantages including memory locality, spatial random access, and ease of geometric manipulation.
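
The embedded property rests on coding magnitude bits plane by plane, from most significant to least significant, so truncating the bitstream anywhere yields a coarser but valid reconstruction. The Python sketch below shows only this bit-plane decomposition; JPEG2000's actual block coder adds three coding passes per plane and context-adaptive arithmetic coding, which are omitted here.

```python
# Hedged sketch of the bit-plane view underlying embedded coding: quantized
# wavelet coefficients are split into sign and magnitude, and magnitude
# bits are emitted most-significant plane first. The three coding passes
# and arithmetic coding of the real block coder are omitted.
import numpy as np

def bit_planes(coeffs):
    """Yield (plane_index, sign_bits, magnitude_bits), MSB plane first."""
    mags = np.abs(coeffs)
    signs = (coeffs < 0).astype(np.uint8)
    top = int(mags.max()).bit_length() - 1 if mags.max() > 0 else 0
    for p in range(top, -1, -1):
        yield p, signs, ((mags >> p) & 1).astype(np.uint8)

coeffs = np.array([[5, -3], [0, 12]])
for p, _, bits in bit_planes(coeffs):
    print(f"plane {p}:\n{bits}")
```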


Data Compression Conference | 1998

A low-complexity modeling approach for embedded coding of wavelet coefficients

Erik Ordentlich; Marcelo J. Weinberger; Gadiel Seroussi

We present a new low-complexity method for modeling and coding the bitplanes of a wavelet-transformed image in a fully embedded fashion. The scheme uses a simple ordering model for embedding, based on the principle that coefficient bits that are likely to reduce the distortion the most should be described first in the encoded bitstream. The ordering model is tied to a conditioning model in a way that deinterleaves the conditioned subsequences of coefficient bits, making them amenable to coding with a very simple, adaptive elementary Golomb (1966) code. The proposed scheme, without relying on zerotrees or arithmetic coding, attains PSNR vs. bit rate performance superior to that of SPIHT, and competitive with its arithmetic coding variant, SPIHT-AC.
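
The elementary Golomb code mentioned above is simple enough to sketch. Assuming order 2^k and a binary subsequence dominated by 0s, a complete run of 2^k zeros costs one bit, while a run of r < 2^k zeros terminated by a 1 costs 1 + k bits; the per-class adaptation of k used in the paper is omitted here.

```python
# Hedged sketch of an elementary Golomb (EG) code of order 2**k for a
# binary subsequence in which 0s dominate: a complete run of 2**k zeros is
# encoded with a single '0' bit; a run of r < 2**k zeros terminated by a 1
# costs 1 + k bits ('1' followed by r in k bits). The adaptive selection
# of k per conditioning class used in the paper is omitted.
def eg_encode(bits, k=2):
    m = 1 << k
    out, run = [], 0
    for b in bits:
        if b == 0:
            run += 1
            if run == m:               # a complete run of 2**k zeros
                out.append('0')
                run = 0
        else:                          # run of length run (< m) ended by a 1
            out.append('1' + format(run, f'0{k}b'))
            run = 0
    return ''.join(out)

# Four zeros, two zeros + 1, a lone 1, four zeros -> '0', '110', '100', '0'
print(eg_encode([0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0], k=2))  # '01101000'
```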


Signal Processing: Image Communication | 2002

Embedded block coding in JPEG 2000

David Taubman; Erik Ordentlich; Marcelo J. Weinberger; Gadiel Seroussi

This paper describes the embedded block coding algorithm at the heart of the JPEG 2000 image compression standard. The paper discusses key considerations which led to the development and adoption of this algorithm, and also investigates performance and complexity issues. The JPEG 2000 coding system achieves excellent compression performance, somewhat higher (and, in some cases, substantially higher) than that of SPIHT with arithmetic coding, a popular benchmark for comparison. The algorithm utilizes the same low-complexity binary arithmetic coding engine as JBIG2. Together with careful design of the bit-plane coding primitives, this enables execution speed comparable to that observed with the simpler variant of SPIHT without arithmetic coding. The coder offers additional advantages including memory locality, spatial random access, and ease of geometric manipulation.


IEEE Transactions on Information Theory | 2000

Optimal prefix codes for sources with two-sided geometric distributions

Neri Merhav; Gadiel Seroussi; Marcelo J. Weinberger

A complete characterization of optimal prefix codes for off-centered, two-sided geometric distributions of the integers is presented. These distributions are often encountered in lossless image compression applications, as probabilistic models for image prediction residuals. The family of optimal codes described is an extension of the Golomb codes, which are optimal for one-sided geometric distributions. The new family of codes allows for encoding of prediction residuals at a complexity similar to that of Golomb codes, without recourse to the heuristic approximations frequently used when modifying a code designed for nonnegative integers so as to apply to the encoding of any integer. Optimal decision rules for choosing among a lower complexity subset of the optimal codes, given the distribution parameters, are also investigated, and the relative redundancy of the subset with respect to the full family of optimal codes is bounded.
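
As a baseline, the heuristic approximation the abstract alludes to maps each integer residual to a nonnegative index by interleaving (0, -1, 1, -2, 2, ...) and then applies a Golomb-Rice code. The sketch below illustrates that two-step approach only; the paper's contribution is the family of codes that are directly optimal for two-sided geometric distributions, which this sketch does not implement.

```python
# Hedged sketch of the heuristic baseline: interleave the integers as
# (0, -1, 1, -2, 2, ...) to get a nonnegative index, then apply a
# Golomb-Rice code with parameter 2**k. The optimal code family for
# two-sided geometric distributions is NOT implemented here.
def zigzag(e):
    return 2 * e if e >= 0 else -2 * e - 1

def rice_encode(v, k):
    q, r = v >> k, v & ((1 << k) - 1)    # unary quotient, k-bit remainder
    return '1' * q + '0' + (format(r, f'0{k}b') if k else '')

def encode_residual(e, k=1):
    return rice_encode(zigzag(e), k)

print([encode_residual(e) for e in (0, -1, 1, -2)])  # ['00', '01', '100', '101']
```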


Proceedings of the IEEE | 2000

Lossless compression of continuous-tone images

Bruno Carpentieri; Marcelo J. Weinberger; Gadiel Seroussi

In this paper, we survey some of the recent advances in lossless compression of continuous-tone images. The modeling paradigms underlying the state-of-the-art algorithms, and the principles guiding their design, are discussed in a unified manner. The algorithms are described and experimentally compared.


IEEE Transactions on Image Processing | 2011

The iDUDE Framework for Grayscale Image Denoising

Giovanni Motta; Erik Ordentlich; Ignacio Ramirez; Gadiel Seroussi; Marcelo J. Weinberger

We present an extension of the discrete universal denoiser DUDE, specialized for the denoising of grayscale images. The original DUDE is a low-complexity algorithm aimed at recovering discrete sequences corrupted by discrete memoryless noise of known statistical characteristics. It is universal, in the sense of asymptotically achieving, without access to any information on the statistics of the clean sequence, the same performance as the best denoiser that does have access to such information. The DUDE, however, is not effective on grayscale images of practical size. The difficulty lies in the fact that one of the DUDE's key components is the determination of conditional empirical probability distributions of image samples, given the sample values in their neighborhood. When the alphabet is relatively large (as is the case with grayscale images), even for a small-sized neighborhood, the required distributions would be estimated from a large collection of sparse statistics, resulting in poor estimates that would not enable effective denoising. The present work enhances the basic DUDE scheme by incorporating statistical modeling tools that have proven successful in addressing similar issues in lossless image compression. Instantiations of the enhanced framework, which is referred to as iDUDE, are described for examples of additive and nonadditive noise. The resulting denoisers significantly surpass the state of the art in the case of salt-and-pepper (S&P) and M-ary symmetric noise, and perform well for Gaussian noise.
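
The sparse-statistics problem can be made concrete: conditioning on a raw grayscale neighborhood yields vastly more contexts than pixels. A standard remedy from lossless image compression, in the spirit of the modeling tools referred to above, is to quantize a context feature such as local gradients into a few bins. The sketch below illustrates only this context-reduction idea, with hypothetical bin edges; it is not the iDUDE denoising rule.

```python
# Hedged illustration of the sparse-statistics problem iDUDE addresses:
# conditioning on raw 8-bit neighborhoods is hopeless (256**4 contexts for
# even a 4-pixel neighborhood), so a context FEATURE is quantized into a
# few bins, in the spirit of context modeling in lossless image
# compression. Bin edges below are hypothetical.
import numpy as np

def quantized_context(img, i, j, edges=(-20, -6, -2, 2, 6, 20)):
    """Map the 4-neighborhood of an interior pixel (i, j) to one of 49
    contexts by quantizing two local gradients into 7 levels each."""
    dh = int(img[i, j - 1]) - int(img[i, j + 1])   # horizontal gradient
    dv = int(img[i - 1, j]) - int(img[i + 1, j])   # vertical gradient
    q = lambda d: int(np.digitize(d, edges))       # 7 levels per gradient
    return q(dh) * 7 + q(dv)                       # 49 contexts in total

# 256**4 ~ 4.3e9 raw contexts vs. 49 quantized ones: counts now accumulate
# fast enough for reliable conditional statistics.
```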


International Conference on Image Processing | 2003

A discrete universal denoiser and its application to binary images

Erik Ordentlich; Gadiel Seroussi; Sergio Verdú; Marcelo J. Weinberger; Tsachy Weissman

This paper describes a discrete universal denoiser for two-dimensional data and presents experimental results of its application to noisy binary images. A discrete universal denoiser (DUDE) is introduced for recovering a signal with finite-valued components corrupted by finite-valued, uncorrelated noise. The DUDE is asymptotically optimal and universal, in the sense of asymptotically achieving, without access to any information on the statistics of the clean signal, the same performance as the best denoiser that does have access to such information. It is also practical and can be implemented with low complexity.


International Symposium on Information Theory | 2002

On universal simulation of information sources using training data

Neri Merhav; Marcelo J. Weinberger

We consider the problem of universal simulation of an unknown random process, or information source, of a certain parametric family, given a training sequence from that source and given a limited budget of purely random bits. The goal is to generate another random sequence (of the same length or shorter), whose probability law is identical to that of the given training sequence, but with minimum statistical dependency (minimum mutual information) between the input training sequence and the output sequence. We derive lower bounds on the mutual information that are shown to be achievable by conceptually simple algorithms proposed here. We show that the behavior of the minimum achievable mutual information depends critically on the relative amount of random bits and on the lengths of the input sequence and the output sequence. While in the ordinary (nonuniversal) simulation problem, the number of random bits per symbol must exceed the entropy rate H of the source in order to simulate it faithfully, in the universal simulation problem considered here, faithful preservation of the probability law is not a problem, yet the same minimum rate of H random bits per symbol is still needed to essentially eliminate the statistical dependency between the input sequence and the output sequence. The results are extended to more general information measures.
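
As a loose illustration of what a conceptually simple scheme can look like in the memoryless case (an assumption-laden sketch, not the paper's algorithm): a uniformly random permutation of the training sequence preserves its empirical type, and since i.i.d. sequences are exchangeable, the output has the same probability law as the input, while the random bits invested in the permutation reduce the dependency between the two sequences.

```python
# Hedged sketch: simulate an unknown i.i.d. source from a training sequence
# by emitting a uniformly random permutation of it. Exchangeability of
# i.i.d. sequences guarantees the output has the same probability law as
# the input; a full shuffle consumes about log2(n!) random bits. The
# paper's schemes and their bit-budget analysis are more refined.
import random

def simulate_by_permutation(training, seed=0):
    rng = random.Random(seed)   # stands in for the budget of random bits
    out = list(training)
    rng.shuffle(out)
    return out

print(simulate_by_permutation([0, 1, 1, 0, 1]))
```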

Collaboration


Dive into Marcelo J. Weinberger's collaborations.

Top Co-Authors


Neri Merhav

Technion – Israel Institute of Technology
