Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Alon Kipnis is active.

Publication


Featured research published by Alon Kipnis.


IEEE Transactions on Information Theory | 2016

Distortion Rate Function of Sub-Nyquist Sampled Gaussian Sources

Alon Kipnis; Andrea J. Goldsmith; Yonina C. Eldar; Tsachy Weissman

The amount of information lost in sub-Nyquist sampling of a continuous-time Gaussian stationary process is quantified. We consider a combined source coding and sub-Nyquist reconstruction problem in which the input to the encoder is a noisy sub-Nyquist sampled version of the analog source. We first derive an expression for the mean squared error in the reconstruction of the process from a noisy and information rate-limited version of its samples. This expression is a function of the sampling frequency and the average number of bits describing each sample. It is given as the sum of two terms: minimum mean square error in estimating the source from its noisy but otherwise fully observed sub-Nyquist samples, and a second term obtained by reverse waterfilling over an average of spectral densities associated with the polyphase components of the source. We extend this result to multi-branch uniform sampling, where the samples are available through a set of parallel channels with a uniform sampler and a pre-sampling filter in each branch. Further optimization to reduce distortion is then performed over the pre-sampling filters, and an optimal set of pre-sampling filters associated with the statistics of the input signal and the sampling frequency is found. This results in an expression for the minimal possible distortion achievable under any analog-to-digital conversion scheme involving uniform sampling and linear filtering. These results thus unify the Shannon–Whittaker–Kotelnikov sampling theorem and Shannon rate-distortion theory for Gaussian sources.
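The reverse-waterfilling step described in the abstract can be sketched numerically. The following is a minimal illustration under my own assumptions (a discretized PSD on a uniform grid, bisection on the water level; the function and variable names are illustrative, not from the paper):

```python
import numpy as np

def reverse_waterfill(psd, df, rate_bits):
    """Return (distortion, water level) for a target rate in bits per unit time.

    psd : sampled power spectral density S(f) >= 0 on a uniform grid
    df  : grid spacing
    """
    def rate_at(theta):
        # R(theta) = integral of max(0, 0.5*log2(S(f)/theta)) df
        return 0.5 * np.sum(np.log2(np.maximum(psd / theta, 1.0))) * df

    # Bisect the water level so that rate_at(theta) hits the target rate.
    lo, hi = 1e-12, float(psd.max())
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if rate_at(mid) > rate_bits else (lo, mid)
    theta = 0.5 * (lo + hi)
    distortion = np.sum(np.minimum(psd, theta)) * df
    return distortion, theta

# Sanity check on a flat unit-power spectrum over [-1/2, 1/2]:
# the result should match the classic D(R) = sigma^2 * 2^(-2R).
f = np.linspace(-0.5, 0.5, 2001)
S = np.ones_like(f)
D, theta = reverse_waterfill(S, f[1] - f[0], rate_bits=1.0)  # D ~ 0.25
```

For non-flat spectra the same routine clips the low-power bands first, which is the behavior the paper's polyphase-averaged expression builds on.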


Information Theory Workshop | 2015

Sub-Nyquist sampling achieves optimal rate-distortion

Alon Kipnis; Andrea J. Goldsmith; Yonina C. Eldar

The minimal sampling frequency required to achieve the rate-distortion function of a Gaussian stationary process is analyzed. Although the Nyquist rate is the minimal sampling frequency that allows perfect reconstruction of a bandlimited signal from its samples, relaxing perfect reconstruction to a prescribed distortion may allow a lower sampling frequency to achieve the optimal rate-distortion trade-off. We consider a combined sampling and source coding problem in which an analog Gaussian source is reconstructed from its rate-limited sub-Nyquist samples. We show that each point on the distortion-rate curve of the source corresponds to a sampling frequency fDR smaller than the Nyquist rate, such that this point can be achieved by sampling at frequency fDR or above. This can be seen as an extension of the sampling theorem in the sense that it describes the minimal amount of excess distortion in the reconstruction due to lossy compression of the samples, and provides the minimal sampling frequency required in order to achieve that distortion.
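A rough numerical sketch of this idea, under the assumption that fDR can be read off as the measure of the frequency set that reverse waterfilling preserves (the triangular spectrum, the grid, and the target rate below are illustrative choices, not from the paper):

```python
import numpy as np

f = np.linspace(-0.5, 0.5, 4001)
df = f[1] - f[0]
S = 1.0 - 2.0 * np.abs(f)          # triangular PSD; Nyquist rate = 1

def rate_at(theta):
    # Rate obtained by reverse waterfilling at water level theta
    return 0.5 * np.sum(np.log2(np.maximum(S / theta, 1.0))) * df

target_rate = 0.25                  # bits per unit time
lo, hi = 1e-9, float(S.max())
for _ in range(100):                # bisect for the water level
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if rate_at(mid) > target_rate else (lo, mid)
theta = 0.5 * (lo + hi)

# Measure of the band where the spectrum exceeds the water level:
# sampling at this frequency (below Nyquist) suffices for this D(R) point.
f_DR = float(np.sum(S > theta)) * df
```

Because the triangular spectrum has little energy near its band edges, the preserved band, and hence fDR, is strictly smaller than the Nyquist rate at low rates.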


Allerton Conference on Communication, Control, and Computing | 2015

Optimal trade-off between sampling rate and quantization precision in A/D conversion

Alon Kipnis; Yonina C. Eldar; Andrea J. Goldsmith

The joint optimization of sampling rate and quantization precision in A/D conversion is studied. In particular, we consider a basic pulse code modulation A/D scheme in which a stationary process is sampled and quantized by a scalar quantizer. We derive an expression for the minimal mean squared error under linear estimation of the analog input from the digital output, which is also valid under sub-Nyquist sampling. This expression allows for the computation of the sampling rate that minimizes the error under a fixed bitrate at the output, which is the result of an interplay between the number of bits allocated to each sample and the distortion resulting from sampling. We illustrate the results for several examples, which demonstrate the optimality of sub-Nyquist sampling in certain cases.
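The interplay can be illustrated with a deliberately crude model of my own: a flat-spectrum unit-power source, the high-rate scalar-quantizer approximation 2^(-2b) for the quantization MSE, and sampling loss equal to the out-of-band energy. None of these modeling choices come from the paper; they only show how a fixed bitrate couples bits per sample to sampling rate:

```python
import numpy as np

W, power, R = 1.0, 1.0, 8.0                     # bandwidth, power, total bits/sec
fs_grid = np.linspace(0.25, 4.0, 400)           # candidate sampling rates

b = R / fs_grid                                  # bits available per sample
captured = np.minimum(fs_grid / (2 * W), 1.0)    # fraction of the band captured
sampling_mse = power * (1 - captured)            # energy lost to sub-Nyquist sampling
quant_mse = power * captured * 2.0 ** (-2 * b)   # quantization noise on the captured part
total_mse = sampling_mse + quant_mse

fs_opt = fs_grid[np.argmin(total_mse)]           # best rate under the fixed bitrate
```

In this toy flat-spectrum model the optimum sits at the Nyquist rate; for sources with decaying spectra the sampling-loss term shrinks faster, which is how sub-Nyquist rates can win, as the paper shows for certain cases.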


International Conference on Sampling Theory and Applications | 2015

Optimal trade-off between sampling rate and quantization precision in Sigma-Delta A/D conversion

Alon Kipnis; Andrea J. Goldsmith; Yonina C. Eldar

The optimal sampling frequency in a Sigma-Delta analog-to-digital converter with a fixed bitrate at the output is studied. We consider the mean squared error performance metric where the input signal statistics are known. Fixing the output bitrate introduces a trade-off between the sampling rate and the number of bits used to quantize each sample. That is, while increasing the sampling rate reduces the in-band quantization noise, it also reduces the number of bits available to quantize each sample and therefore increases the magnitude of the quantization noise. The optimal sampling rate is the result of the interplay between these two phenomena. In this work we analyze the sampling rate of a Sigma-Delta modulator of arbitrary order under the approximation that the quantization error behaves like additive white noise that is uncorrelated with the signal. We show that for a signal with a spectrum that is constant over its bandwidth, the optimal sampling rate is either the Nyquist rate or the maximal sampling rate corresponding to the output bitrate. The choice between the two is approximately a function of the Sigma-Delta system order and the bitrate per unit bandwidth.
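The trade-off can be sketched with the standard textbook white-noise approximation for an order-L Sigma-Delta modulator (my own toy parameters, not the paper's exact analysis): in-band noise falls like OSR^-(2L+1), but a fixed bitrate forces the step size to grow as the sampling rate rises.

```python
import numpy as np

def inband_noise(fs, R_total, W=1.0, L=2, full_scale=2.0):
    """White-noise approximation of in-band quantization noise power."""
    b = R_total / fs                      # bits per sample at this rate
    delta = full_scale / 2.0 ** b         # quantizer step size
    osr = fs / (2 * W)                    # oversampling ratio
    # (Delta^2 / 12) * pi^(2L) / ((2L+1) * OSR^(2L+1))
    return (delta ** 2 / 12) * np.pi ** (2 * L) / ((2 * L + 1) * osr ** (2 * L + 1))

R_total = 16.0                            # fixed output bitrate (bits/sec)
fs_grid = np.linspace(2.0, 16.0, 500)     # Nyquist up to 1 bit/sample
noise = np.array([inband_noise(fs, R_total) for fs in fs_grid])
fs_best = fs_grid[np.argmin(noise)]
```

The noise is unimodal in between, so the minimum lands at an endpoint: either the Nyquist rate or the maximal rate corresponding to one bit per sample, matching the dichotomy stated in the abstract (here the maximal rate wins for these parameters).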


Information Theory Workshop | 2015

The indirect rate-distortion function of a binary i.i.d. source

Alon Kipnis; Stefano Rini; Andrea J. Goldsmith

The indirect source-coding problem in which a Bernoulli process is compressed in a lossy manner from its noisy observations is considered. These noisy observations are obtained by passing the source sequence through a binary symmetric channel so that the channel crossover probability controls the amount of information available about the source realization at the encoder. We use classic results in rate-distortion theory to compute the rate-distortion function for this model as a solution of an exponential equation. In addition, we derive an upper bound on the rate-distortion function that has a simple closed-form expression, and investigate the coding scheme that attains it. These expressions capture precisely the expected behavior of the rate-distortion function: the noisier the source observations, the smaller the reduction in distortion obtained from increasing the compression rate.
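For orientation, the noiseless special case of this problem has the textbook closed form R(D) = h(p) - h(D); the indirect function studied in the paper reduces to it when the channel crossover probability is zero. A small sketch (names are mine):

```python
import numpy as np

def h2(x):
    """Binary entropy in bits, with the convention h2(0) = h2(1) = 0."""
    x = np.clip(x, 1e-12, 1 - 1e-12)
    return float(-x * np.log2(x) - (1 - x) * np.log2(1 - x))

def bernoulli_rdf(p, D):
    """Rate-distortion function of a Bernoulli(p) source under Hamming
    distortion: R(D) = h2(p) - h2(D) for 0 <= D < min(p, 1-p), else 0."""
    if D >= min(p, 1 - p):
        return 0.0
    return h2(p) - h2(D)

# A fair coin compressed to within Hamming distortion 0.11
# needs about half a bit per symbol.
R = bernoulli_rdf(0.5, 0.11)
```

With a noisy channel the attainable distortion is bounded away from zero, so the indirect curve sits above this one and hits zero rate at a strictly positive distortion.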


International Symposium on Information Theory | 2016

Multiterminal compress-and-estimate source coding

Alon Kipnis; Stefano Rini; Andrea J. Goldsmith

We consider a multiterminal source coding problem in which a random source signal is estimated from encoded versions of multiple noisy observations. Each encoded version, however, is compressed so as to minimize a local distortion measure, defined only with respect to the distribution of the corresponding noisy observation. The original source is then estimated from these compressed noisy observations. We denote the minimal distortion under this coding scheme as the compress-and-estimate distortion-rate function (CE-DRF). We derive a single-letter expression for the CE-DRF in the case of an i.i.d. source. We evaluate this expression for a Gaussian source observed through multiple parallel AWGN channels under quadratic distortion, and for a non-uniform binary i.i.d. source observed through multiple binary symmetric channels under Hamming distortion. For the case of a Gaussian source, we compare the performance for centralized encoding versus that of distributed encoding. In the centralized encoding scenario, when the code rates are sufficiently small, there is no loss of performance compared to the indirect source coding distortion-rate function, whereas distributed encoding achieves distortion strictly larger than that of the optimal multiterminal source coding scheme. For the case of a binary source, we show that even with a single observation, the CE-DRF is strictly larger than that of indirect source coding.
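A hypothetical single-observation scalar sketch of the compress-and-estimate idea, derived under standard Gaussian forward-test-channel assumptions rather than taken from the paper's multiterminal expression: X ~ N(0, sx2) is observed as Y = X + N, the encoder compresses Y at rate R to minimize E(Y - Yhat)^2 while ignoring X, and X is then estimated from Yhat by LMMSE.

```python
def ce_distortion(R, sx2=1.0, sn2=1.0):
    """Compress-and-estimate distortion for a scalar Gaussian sketch.

    The encoder applies the Gaussian DRF to Y itself (local distortion),
    and the decoder runs LMMSE estimation of X from the compressed Yhat.
    """
    sy2 = sx2 + sn2                         # variance of the noisy observation
    d_y = sy2 * 2.0 ** (-2 * R)             # Gaussian DRF applied to Y
    # LMMSE of X from Yhat under the Gaussian test channel:
    return sx2 - (sx2 ** 2) * (sy2 - d_y) / sy2 ** 2

# R = 0 gives the source variance; large R approaches the
# uncompressed LMMSE sx2 * sn2 / sy2.
```

In this scalar Gaussian sketch the scheme is lossless relative to indirect source coding at low rates, consistent with the centralized Gaussian result above; the binary case behaves differently, as the abstract notes.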


Allerton Conference on Communication, Control, and Computing | 2014

Gaussian distortion-rate function under sub-Nyquist nonuniform sampling

Alon Kipnis; Andrea J. Goldsmith; Yonina C. Eldar

A bound on the amount of distortion in the reconstruction of a stationary Gaussian process from its rate-limited samples is derived. The bound is based on a combined sampling and source coding problem in which a Gaussian stationary process is described from a compressed version of its values on an infinite discrete set. We show that the distortion in reconstruction cannot be lower than the distortion-rate function based on optimal uniform filter-bank sampling using a sufficient number of sampling branches. This can be seen as an extension of Landau's theorem on a necessary condition for optimal recovery of a signal from its samples, in the sense that it describes both the error as a result of sub-sampling and the error incurred due to lossy compression of the samples.


Allerton Conference on Communication, Control, and Computing | 2013

Distortion rate function of sub-Nyquist sampled Gaussian sources corrupted by noise

Alon Kipnis; Andrea J. Goldsmith; Tsachy Weissman; Yonina C. Eldar

The amount of information lost in sub-Nyquist uniform sampling of a continuous-time Gaussian stationary process is quantified. We first derive an expression for the mean squared error in the reconstruction of the process for a given sampling structure as a function of the sampling frequency and the average number of bits describing each sample. We define this function as the distortion-rate-frequency function. It is obtained by reverse water-filling over the spectral density associated with the minimum variance reconstruction of an undersampled Gaussian process, plus the error in this reconstruction. Further optimization is then performed over the sampling structure, and an optimal pre-sampling filter associated with the statistics of the input signal and the sampling frequency is found. This results in an expression for the minimal possible distortion achievable under any uniform sampling scheme. This expression is calculated for several examples to illustrate the fundamental trade-off among rate, distortion, and sampling frequency derived in this work, which lies at the intersection of information theory and signal processing.


IEEE Transactions on Information Theory | 2018

The Distortion Rate Function of Cyclostationary Gaussian Processes

Alon Kipnis; Andrea J. Goldsmith; Yonina C. Eldar

A general expression for the quadratic distortion rate function (DRF) of cyclostationary Gaussian processes in terms of their spectral properties is derived. This expression can be seen as the result of orthogonalization over the different components in the polyphase decomposition of the process. We use this expression to derive, in a closed form, the DRF of several cyclostationary processes arising in practice. We first consider the DRF of a combined sampling and source coding problem. It is known that the optimal coding strategy for this problem involves source coding applied to a signal with the same structure as one resulting from pulse amplitude modulation (PAM). Since a PAM-modulated signal is cyclostationary, our DRF expression can be used to solve for the minimal distortion in the combined sampling and source coding problem. We also analyze in more detail the DRF of a source with the same structure as a PAM-modulated signal, and show that it is obtained by reverse waterfilling over an expression that depends on the energy of the pulse and the baseband process modulated to obtain the PAM signal. This result is then used to explore the effect of the symbol rate in PAM on the DRF of its output. In addition, we also study the DRF of sources with an amplitude-modulation structure, and show that the DRF of a narrow-band Gaussian stationary process modulated by either a deterministic or a random phase sine-wave equals the DRF of the baseband process.


IEEE Signal Processing Magazine | 2018

Analog-to-Digital Compression: A New Paradigm for Converting Signals to Bits

Alon Kipnis; Yonina C. Eldar; Andrea J. Goldsmith

Processing, storing, and communicating information that originates as an analog signal involves converting this information to bits. This conversion can be described by the combined effect of sampling and quantization, as shown in Figure 1. The digital representation is achieved by first sampling the analog signal to represent it by a set of discrete-time samples and then quantizing these samples to a finite number of bits. Traditionally, these two operations are considered separately. The sampler is designed to minimize the information loss due to sampling based on characteristics of the continuous-time input. The quantizer is designed to represent the samples as accurately as possible, subject to a constraint on the number of bits that can be used in the representation. The goal of this article is to revisit this paradigm by illuminating the dependency between these two operations. In particular, we explore the requirements of the sampling system subject to the constraints on the available number of bits for storing, communicating, or processing the analog information.
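As a toy illustration of the two stages the article discusses, here is a minimal sample-then-quantize pipeline (all names and parameters are illustrative, not from the article):

```python
import numpy as np

def sample(signal, duration, fs):
    """Uniformly sample an 'analog' signal (a Python function of time)."""
    t = np.arange(0.0, duration, 1.0 / fs)
    return t, signal(t)

def quantize(x, bits, full_scale=1.0):
    """Uniform mid-tread scalar quantizer with 2**bits levels."""
    levels = 2 ** bits
    step = 2.0 * full_scale / levels
    idx = np.clip(np.round(x / step), -levels // 2, levels // 2 - 1)
    return idx * step

# Sample a 3 Hz sine well above Nyquist, then quantize at several precisions.
t, x = sample(lambda t: np.sin(2 * np.pi * 3 * t), duration=1.0, fs=100.0)
mses = {bits: float(np.mean((x - quantize(x, bits)) ** 2)) for bits in (2, 4, 8)}
# Each extra bit cuts the quantization MSE roughly fourfold (~6 dB per bit).
```

The article's point is that the two functions above should not be tuned in isolation: under a fixed total bitrate, raising fs lowers the bits available per sample, so sampler and quantizer trade off against each other.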

Collaboration


Dive into Alon Kipnis's collaborations.

Top Co-Authors

Yonina C. Eldar

Technion – Israel Institute of Technology


Stefano Rini

National Chiao Tung University


Daniel Alpay

Ben-Gurion University of the Negev
