
Publication


Featured research published by Ram Zamir.


IEEE Transactions on Information Theory | 2002

Nested linear/lattice codes for structured multiterminal binning

Ram Zamir; Shlomo Shamai; Uri Erez

Network information theory promises high gains over simple point-to-point communication techniques, at the cost of higher complexity. However, a lack of structured coding schemes has so far limited the practical application of these concepts. One of the basic elements of a network code is the binning scheme. Wyner (1974, 1978) and other researchers proposed various forms of coset codes for efficient binning, yet these schemes were applicable only to lossless source (or noiseless channel) network coding. To extend the algebraic binning approach to lossy source (or noisy channel) network coding, previous work proposed the idea of nested codes: more specifically, nested parity-check codes for the binary case and nested lattices in the continuous case. These ideas connect network information theory with the rich areas of linear codes and lattice codes, and have strong potential for practical applications. We review these developments and explore their tight relation to concepts such as combined shaping and precoding, coding for memories with defects, and digital watermarking. We also propose a few novel applications adhering to a unified approach.
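The nesting idea is easy to see in one dimension. The following toy sketch (illustrative parameters, not the paper's construction) takes the integers Z as the fine lattice and qZ as the coarse lattice: the fine lattice splits into q cosets of the coarse one, and the bin (coset) index of a point is simply its value mod q.

```python
import numpy as np

# One-dimensional sketch of nested-lattice binning: fine lattice = Z,
# coarse lattice = q*Z. Each "bin" is a coset of qZ inside Z, and the
# bin index of a fine-lattice point x is x mod q.

q = 4  # nesting ratio: number of bins (cosets of qZ inside Z)

def bin_index(x, q=q):
    """Coset (bin) index of the fine-lattice point x with respect to qZ."""
    return x % q

# All points sharing a bin differ by a coarse-lattice vector:
points = np.arange(-8, 9)
for b in range(q):
    coset = points[bin_index(points) == b]
    # every gap inside a coset is a multiple of q
    assert np.all(np.diff(coset) % q == 0)

print([int(bin_index(x)) for x in [-3, 0, 5, 7]])  # -> [1, 0, 1, 3]
```

Binning then amounts to sending only the coset index instead of the point itself; the decoder resolves the ambiguity within the coset using its side information.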


IEEE Transactions on Information Theory | 2004

Achieving 1/2 log (1+SNR) on the AWGN channel with lattice encoding and decoding

Uri Erez; Ram Zamir

We address an open question regarding whether a lattice code with lattice decoding (as opposed to maximum-likelihood (ML) decoding) can achieve the additive white Gaussian noise (AWGN) channel capacity. We first demonstrate how minimum mean-square error (MMSE) scaling, along with dithering (lattice randomization) techniques, can transform the power-constrained AWGN channel into a modulo-lattice additive noise channel whose effective noise is reduced by a factor of √((1+SNR)/SNR). For the resulting channel, a uniform input maximizes mutual information, which in the limit of large lattice dimension becomes 1/2 log(1+SNR), i.e., the full capacity of the original power-constrained AWGN channel. We then show that capacity may also be achieved using nested lattice codes, with the coarse lattice serving for shaping via the modulo-lattice transformation and the fine lattice for channel coding. We show that such pairs exist for any desired nesting ratio, i.e., for any signal-to-noise ratio (SNR). Furthermore, for the modulo-lattice additive noise channel, lattice decoding is optimal. Finally, we show that the error exponent of the proposed scheme is lower-bounded by the Poltyrev exponent.
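The modulo-lattice transformation can be checked numerically in one dimension. The sketch below (illustrative parameters; a scalar lattice ΔZ stands in for a high-dimensional lattice) applies MMSE scaling α = SNR/(1+SNR) and subtractive dither, and verifies that the effective modulo-additive noise has variance σ²·SNR/(1+SNR), i.e., smaller than the channel noise σ².

```python
import numpy as np

# Monte-Carlo sketch of the modulo-lattice additive noise channel with a
# scalar lattice Delta*Z, MMSE scaling, and subtractive dither.
rng = np.random.default_rng(0)
Delta = 1.0                      # scalar lattice spacing
P = Delta**2 / 12                # transmit power of the dithered input
snr = 10.0
sigma2 = P / snr                 # AWGN variance
alpha = snr / (1 + snr)          # MMSE coefficient, equals P/(P+sigma2)

def mod_lattice(x, Delta=Delta):
    """Fold x into the fundamental cell [-Delta/2, Delta/2)."""
    return (x + Delta / 2) % Delta - Delta / 2

n = 200_000
v = 0.0                                    # transmitted lattice codeword
d = rng.uniform(-Delta / 2, Delta / 2, n)  # dither, known at both ends
x = mod_lattice(v + d)                     # channel input, uniform in the cell
y = x + rng.normal(0.0, np.sqrt(sigma2), n)
z_eff = mod_lattice(alpha * y - d - v)     # effective modulo-additive noise

# Theory: effective noise variance = sigma2 * SNR / (1 + SNR) < sigma2
print(z_eff.var(), sigma2 * snr / (1 + snr))
```

The empirical variance matches the MMSE prediction closely, which is exactly the noise reduction that makes a uniform input over the cell capacity-achieving in the large-dimension limit.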


IEEE Transactions on Information Theory | 1992

On universal quantization by randomized uniform/lattice quantizers

Ram Zamir; Meir Feder

Uniform quantization with dither, or lattice quantization with dither in the vector case, followed by a universal lossless source encoder (entropy coder), is a simple procedure for universal coding with distortion of a source that may take continuously many values. The rate of this universal coding scheme is examined, and a general expression is derived for it. An upper bound is derived for the redundancy of this scheme, defined as the difference between its rate and the minimal possible rate given by the rate-distortion function of the source. This bound holds for all distortion levels. Furthermore, a composite upper bound on the redundancy as a function of the quantizer resolution is presented, which leads to a tighter bound in the high-rate (low-distortion) case.
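The key property behind this scheme is easy to demonstrate: with subtractive dither, the reconstruction error of a uniform quantizer is uniform on a quantization cell and independent of the source. A minimal sketch, with illustrative parameters:

```python
import numpy as np

# Subtractive dithered quantization: the error Q(x + d) - d - x is
# uniform on [-Delta/2, Delta/2], with mean 0 and variance Delta^2/12,
# regardless of the source distribution.
rng = np.random.default_rng(1)
Delta = 0.5

def dithered_quantize(x, d, Delta=Delta):
    """Quantize x + d to the lattice Delta*Z, then subtract the dither."""
    return Delta * np.round((x + d) / Delta) - d

x = rng.normal(0.0, 3.0, 100_000)                 # any source works here
d = rng.uniform(-Delta / 2, Delta / 2, x.size)    # dither known to decoder
err = dithered_quantize(x, d) - x

# Error moments match Uniform(-Delta/2, Delta/2): mean 0, var Delta^2/12
print(err.mean(), err.var(), Delta**2 / 12)
```

Because the error behaves like additive noise independent of the input, the entropy coder can be analyzed on an equivalent additive-noise channel, which is what makes the redundancy bounds above tractable.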


IEEE Transactions on Information Theory | 2006

Distortion Bounds for Broadcasting With Bandwidth Expansion

Zvi Reznic; Meir Feder; Ram Zamir

We consider the problem of broadcasting a single Gaussian source to two listeners over a Gaussian broadcast channel, with ρ channel uses per source sample, where ρ > 1. A distortion pair (D1, D2) is said to be achievable if one can simultaneously achieve a mean-squared error (MSE) of D1 at receiver 1 and D2 at receiver 2. The main result of this correspondence is an outer bound for the set of all achievable distortion pairs; that is, we find necessary conditions under which (D1, D2) is achievable. We then apply this result to the problem of point-to-point transmission over a Gaussian channel with unknown signal-to-noise ratio (SNR) and ρ > 1. We show that if a system must be optimal at a certain SNRmin, then, asymptotically, the system distortion cannot decay faster than O(1/SNR). As for achievability, we show that a previously reported scheme, due to Mittal and Phamdo (2002), is optimal at high SNR. We introduce two new schemes for broadcasting with bandwidth expansion, combining digital and analog transmissions. We finally show how a system with partial feedback, returning from the bad receiver to the transmitter and to the good receiver, achieves a distortion pair that lies on the outer bound derived here.


Information Theory and Applications | 2009

Lattices are everywhere

Ram Zamir

As bees and crystals (and people selling oranges in the market) have known for many years, lattices provide efficient structures for packing, covering, quantization, and channel coding. In recent years, interesting links have been found between lattices and coding schemes for multiterminal networks. This tutorial paper covers close to 20 years of my research in the area: enjoying the beauty of lattice codes, and discovering their power in dithered quantization, dirty-paper coding, Wyner-Ziv DPCM, modulo-lattice modulation, distributed interference cancellation, and more.


IEEE Transactions on Information Theory | 1994

On the asymptotic tightness of the Shannon lower bound

Tamás Linder; Ram Zamir

New results are proved on the convergence of the Shannon (1959) lower bound to the rate-distortion function as the distortion decreases to zero. The key convergence result is proved using a fundamental property of informational divergence. As a corollary, it is shown that the Shannon lower bound is asymptotically tight for norm-based distortions when the source vector has a finite differential entropy and a finite α-th moment for some α > 0, with respect to the given norm. Moreover, we derive a theorem of Linkov (1965) on the asymptotic tightness of the Shannon lower bound for general difference distortion measures, with more relaxed conditions on the source density. We also show that the Shannon lower bound relative to a stationary source and single-letter difference distortion is asymptotically tight under very weak assumptions on the source distribution.
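For squared-error distortion the Shannon lower bound reads R(D) ≥ h(X) − ½ log(2πeD) (in nats). A small numerical sketch: for a Gaussian source the bound coincides with the true rate-distortion function R(D) = ½ log(σ²/D) at every D ≤ σ², which is the extreme case of the low-distortion tightness discussed above.

```python
import numpy as np

# Shannon lower bound (SLB) for squared-error distortion, in nats:
#   R(D) >= h(X) - 0.5 * log(2*pi*e*D)
# For a Gaussian source the SLB equals R(D) = 0.5*log(sigma2/D), D <= sigma2.

sigma2 = 1.0
h_gauss = 0.5 * np.log(2 * np.pi * np.e * sigma2)  # differential entropy

def slb(D, h=h_gauss):
    """Shannon lower bound on R(D) for MSE distortion."""
    return h - 0.5 * np.log(2 * np.pi * np.e * D)

def rd_gauss(D, sigma2=sigma2):
    """Exact Gaussian rate-distortion function."""
    return 0.5 * np.log(sigma2 / D)

for D in [0.5, 0.1, 0.01]:
    print(D, slb(D), rd_gauss(D))  # identical for the Gaussian source
```

For non-Gaussian sources the SLB is strict at finite distortion; the paper's contribution is characterizing when the gap vanishes as D → 0.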


IEEE Transactions on Information Theory | 1999

Multiterminal source coding with high resolution

Ram Zamir; Toby Berger

We consider separate encoding and joint decoding of correlated continuous information sources, subject to a difference distortion measure. We first derive a multiterminal extension of the Shannon lower bound for the rate region. Then we show that this Shannon outer bound is asymptotically tight for small distortions. These results imply that the loss in the sum of the coding rates due to the separation of the encoders vanishes in the limit of high resolution. Furthermore, lattice quantizers followed by Slepian-Wolf lossless encoding are asymptotically optimal. We also investigate the high-resolution rate region in the remote coding case, where the encoders observe only noisy versions of the sources. For the quadratic Gaussian case, we establish a separation result to the effect that multiterminal coding aimed at reconstructing the noisy sources subject to the rate constraints, followed by estimation of the remote sources from these reconstructions, is optimal under certain regularity conditions on the structure of the coding scheme.


IEEE Transactions on Information Theory | 2009

On the Loss of Single-Letter Characterization: The Dirty Multiple Access Channel

Tal Philosof; Ram Zamir

For general memoryless systems, the existing information-theoretic solutions have a "single-letter" form. This reflects the fact that optimum performance can be approached by a random code (or a random binning scheme), generated using independent and identically distributed copies of some scalar distribution. Is that the form of the solution of any (information-theoretic) problem? In fact, some counterexamples are known. The most famous one is the "two help one" problem: Korner and Marton showed that if we want to decode the modulo-two sum of two correlated binary sources from their independent encodings, then linear coding is better than random coding. In this paper we provide another counterexample, the "doubly-dirty" multiple-access channel (MAC). Like the Korner-Marton problem, this is a multiterminal scenario where side information is distributed among several terminals; each transmitter knows part of the channel interference, while the receiver only observes the channel output. We give an explicit solution for the capacity region of the binary doubly-dirty MAC, demonstrate how this region can be approached using a linear coding scheme, and prove that the "best known single-letter region" is strictly contained in it. We also state a conjecture regarding the capacity loss of single-letter characterization in the Gaussian case.
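A toy illustration (not the paper's construction) of why linear codes help in the Korner-Marton setting: if both encoders send syndromes of the same linear code, the XOR of the two syndromes equals the syndrome of the modulo-two sum z = x1 ⊕ x2, so the decoder faces an ordinary point-to-point syndrome-decoding problem for z.

```python
import numpy as np

# Linearity trick behind the "two help one" problem: with a common
# parity-check matrix H, H*x1 XOR H*x2 = H*(x1 XOR x2) over GF(2).
rng = np.random.default_rng(2)
n, m = 12, 6
H = rng.integers(0, 2, (m, n))   # parity-check matrix of a binary linear code

x1 = rng.integers(0, 2, n)       # correlated sources would share most bits;
x2 = rng.integers(0, 2, n)       # the identity below holds for any x1, x2

s1 = H @ x1 % 2                  # encoder 1 transmits only this syndrome
s2 = H @ x2 % 2                  # encoder 2 transmits only this syndrome

z = (x1 + x2) % 2                # target: modulo-two sum of the sources
assert np.array_equal((s1 + s2) % 2, H @ z % 2)
print("XOR of syndromes equals syndrome of the XOR")
```

Random binning cannot exploit this algebraic alignment, which is the structural reason linear coding strictly beats the best known single-letter region in the doubly-dirty MAC as well.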


IEEE Transactions on Information Theory | 2008

Achieving the Gaussian Rate–Distortion Function by Prediction

Ram Zamir; Yuval Kochman; Uri Erez

The "water-filling" solution for the quadratic rate-distortion function of a stationary Gaussian source is given in terms of its power spectrum. This formula naturally lends itself to a frequency-domain "test-channel" realization. We provide an alternative time-domain realization for the rate-distortion function, based on linear prediction. The predictive test channel has some interesting implications, including the optimality at all distortion levels of pre/post-filtered vector-quantized differential pulse-code modulation (DPCM), and a duality relationship with decision-feedback equalization (DFE) for intersymbol interference (ISI) channels.
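A minimal time-domain sketch of the predictive loop, with a dithered scalar quantizer standing in for the paper's ideal test channel (all parameters illustrative): the encoder quantizes the one-step prediction error of an AR(1) Gaussian source, predicting from past reconstructions so that encoder and decoder stay synchronized.

```python
import numpy as np

# DPCM sketch: quantize the innovation of an AR(1) source in a closed loop.
rng = np.random.default_rng(3)
a, n, Delta = 0.9, 50_000, 0.2

x = np.zeros(n)
for t in range(1, n):                       # AR(1): x_t = a*x_{t-1} + w_t
    x[t] = a * x[t - 1] + rng.normal()

d = rng.uniform(-Delta / 2, Delta / 2, n)   # subtractive dither
xhat = np.zeros(n)                          # decoder's reconstruction
for t in range(1, n):
    pred = a * xhat[t - 1]                  # predict from past reconstructions
    e = x[t] - pred                         # innovation (prediction error)
    eq = Delta * np.round((e + d[t]) / Delta) - d[t]  # quantized innovation
    xhat[t] = pred + eq

# Closed-loop property: the reconstruction error equals the quantization
# error of the innovation, so its variance is Delta^2/12 despite the memory.
print(np.var(x[1:] - xhat[1:]), Delta**2 / 12)
```

The loop keeps the end-to-end error equal to the (white) innovation quantization error, which is the mechanism behind DPCM's optimality claims in the abstract; the full result additionally requires the pre/post filters and vector quantization described there.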


IEEE Transactions on Information Theory | 1995

Rate-distortion performance in coding bandlimited sources by sampling and dithered quantization

Ram Zamir; Meir Feder

The rate-distortion characteristics of a scheme for encoding continuous-time bandlimited stationary sources with a prescribed band are considered. In this coding procedure the input is sampled at the Nyquist rate or faster, the samples undergo dithered uniform or lattice quantization using subtractive dither, and the quantizer output is entropy-coded. The rate-distortion performance, and the tradeoff between the sampling rate and the quantization accuracy, is investigated, utilizing the observation that the coding scheme is equivalent to an additive noise channel. It is shown that the mean-square error of the scheme is fixed as long as the product of the sampling period and the quantizer second moment is kept constant, while for a fixed distortion the coding rate generally increases when the sampling rate exceeds the Nyquist rate. Finally, as the lattice quantizer dimension becomes large, the equivalent additive noise channel of the scheme tends to be white Gaussian, and both the rate and the distortion performance become invariant to the sampling rate.

Collaboration


Dive into Ram Zamir's collaboration.

Top Co-Authors


Uri Erez

Massachusetts Institute of Technology

Yuval Kochman

Hebrew University of Jerusalem


Shlomo Shamai

Technion – Israel Institute of Technology
