Publications


Featured research published by Tsachy Weissman.


International Symposium on Information Theory | 2003

Universal discrete denoising: known channel

Tsachy Weissman; Erik Ordentlich; Gadiel Seroussi; Sergio Verdú; Marcelo J. Weinberger

A discrete denoising algorithm estimates the input sequence to a discrete memoryless channel (DMC) based on the observation of the entire output sequence. For the case in which the DMC is known and the quality of the reconstruction is evaluated with a given single-letter fidelity criterion, we propose a discrete denoising algorithm that does not assume knowledge of statistical properties of the input sequence. Yet, the algorithm is universal in the sense of asymptotically performing as well as the optimum denoiser that knows the input sequence distribution, which is only assumed to be stationary. Moreover, the algorithm is universal also in a semi-stochastic setting, in which the input is an individual sequence, and the randomness is due solely to the channel noise. The proposed denoising algorithm is practical, requiring a linear number of register-level operations and sublinear working storage size relative to the input data length.
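The two-pass structure of the denoiser is simple enough to sketch compactly. Below is a minimal, illustrative Python implementation of the single-letter DUDE rule; it assumes an invertible channel matrix and NumPy, the variable names are ours, and it is not the authors' reference implementation.

    import numpy as np
    from collections import defaultdict

    def dude(z, Pi, Lambda, k):
        # z      : noisy sequence with symbols in {0, ..., A-1}
        # Pi     : A x A channel matrix, Pi[a, b] = P(output b | input a)
        # Lambda : A x A loss matrix, Lambda[a, b] = loss of estimating a by b
        # k      : one-sided context length
        n, A = len(z), Pi.shape[0]
        Pi_inv = np.linalg.inv(Pi)  # the DMC is assumed invertible here

        # Pass 1: count the center symbols observed in each two-sided context.
        counts = defaultdict(lambda: np.zeros(A))
        for i in range(k, n - k):
            ctx = (tuple(z[i-k:i]), tuple(z[i+1:i+k+1]))
            counts[ctx][z[i]] += 1

        # Pass 2: single-letter rule  x_hat = argmin_x  m^T Pi^{-1} (lambda_x * pi_z).
        x_hat = list(z)
        for i in range(k, n - k):
            ctx = (tuple(z[i-k:i]), tuple(z[i+1:i+k+1]))
            q = counts[ctx] @ Pi_inv            # estimated clean-symbol weights
            x_hat[i] = int(np.argmin(Lambda.T @ (q * Pi[:, z[i]])))
        return x_hat

    # Example: binary symmetric channel, crossover 0.1, Hamming loss, k = 2.
    Pi = np.array([[0.9, 0.1], [0.1, 0.9]])
    Lam = 1.0 - np.eye(2)
    # x_hat = dude(z, Pi, Lam, k=2)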


IEEE Transactions on Information Theory | 2009

Finite State Channels With Time-Invariant Deterministic Feedback

Haim H. Permuter; Tsachy Weissman; Andrea J. Goldsmith

We consider the capacity of discrete-time channels with feedback for the general case where the feedback is a time-invariant deterministic function of the output samples. Under the assumption that the channel states take values in a finite alphabet, we find a sequence of achievable rates and a sequence of upper bounds on the capacity. The achievable rates and the upper bounds are computable for any N, and the limits of the sequences exist. We show that when the probability of the initial state is positive for all channel states, the capacity is the limit of the achievable-rate sequence. We further show that when the channel is stationary, indecomposable, and has no intersymbol interference (ISI), its capacity is given by the limit of the maximum of the (normalized) directed information between the input X^N and the output Y^N, i.e., C = lim_{N→∞} (1/N) max I(X^N → Y^N), where the maximization is taken over the causal conditioning probability Q(x^N ‖ z^{N-1}) defined in this paper. The main idea for obtaining the results is to add causality into Gallager's results on finite-state channels. The capacity results are used to show that the source-channel separation theorem holds for time-invariant deterministic feedback, and that if the state of the channel is known at both the encoder and the decoder, then feedback does not increase capacity.
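The directed information in the capacity formula decomposes via the chain rule as I(X^N → Y^N) = Σ_{i=1}^N I(X^i; Y_i | Y^{i-1}). The following sketch (ours, for illustration only) evaluates that sum by brute force from an explicit joint pmf over short sequences:

    import numpy as np
    from collections import defaultdict

    def cond_entropy(joint):
        # H(A | B) for a dict {(a, b): p(a, b)}
        p_b = defaultdict(float)
        for (a, b), p in joint.items():
            p_b[b] += p
        return -sum(p * np.log2(p / p_b[b]) for (a, b), p in joint.items() if p > 0)

    def directed_information(pmf, N):
        # I(X^N -> Y^N) = sum_i [ H(Y_i | Y^{i-1}) - H(Y_i | X^i, Y^{i-1}) ]
        # pmf: dict {(x_tuple, y_tuple): prob} over length-N sequences
        di = 0.0
        for i in range(1, N + 1):
            j_y = defaultdict(float)   # joint pmf of (Y_i ; Y^{i-1})
            j_xy = defaultdict(float)  # joint pmf of (Y_i ; (X^i, Y^{i-1}))
            for (x, y), p in pmf.items():
                j_y[(y[i-1], y[:i-1])] += p
                j_xy[(y[i-1], (x[:i], y[:i-1]))] += p
            di += cond_entropy(j_y) - cond_entropy(j_xy)
        return di

    # Sanity check: a noiseless channel Y_i = X_i with i.i.d. uniform input
    # bits yields I(X^2 -> Y^2) = 2 bits.
    pmf = {((a, b), (a, b)): 0.25 for a in (0, 1) for b in (0, 1)}
    print(directed_information(pmf, 2))  # -> 2.0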


IEEE Transactions on Information Theory | 2008

Capacity of the Trapdoor Channel With Feedback

Haim H. Permuter; Paul Cuff; B. Van Roy; Tsachy Weissman

We establish that the feedback capacity of the trapdoor channel is the logarithm of the golden ratio and provide a simple communication scheme that achieves capacity. As part of the analysis, we formulate a class of dynamic programs that characterize capacities of unifilar finite-state channels. The trapdoor channel is an instance that admits a simple closed-form solution.
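For reference, the golden ratio is φ = (1 + √5)/2 ≈ 1.618, so the feedback capacity evaluates to C = log₂ φ ≈ 0.6942 bits per channel use.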


IEEE Transactions on Information Theory | 2011

Interpretations of Directed Information in Portfolio Theory, Data Compression, and Hypothesis Testing

Haim H. Permuter; Young-Han Kim; Tsachy Weissman

We investigate the role of directed information in portfolio theory, data compression, and statistics with causality constraints. In particular, we show that directed information is an upper bound on the increment in growth rates of optimal portfolios in a stock market due to causal side information. This upper bound is tight for gambling in a horse race, which is an extreme case of stock markets. Directed information also characterizes the value of causal side information in instantaneous compression and quantifies the benefit of causal inference in joint compression of two stochastic processes. In hypothesis testing, directed information evaluates the best error exponent for testing whether a random process Y causally influences another process X or not. These results lead to a natural interpretation of directed information I(Y^n → X^n) as the amount of information that a random sequence Y^n = (Y_1, Y_2, ..., Y_n) causally provides about another random sequence X^n = (X_1, X_2, ..., X_n). A new measure, directed lautum information, is also introduced and interpreted in portfolio theory, data compression, and hypothesis testing.
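The horse-race case is easy to verify numerically. In the one-shot version of the result (mutual rather than directed information), proportional betting with side information Y raises the optimal growth rate by exactly I(X; Y). A small sketch (ours; the pmf below is an arbitrary example):

    import numpy as np

    def growth_rate(p_xy, odds):
        # Optimal log-growth in a horse race with side information Y:
        # bet b*(x|y) = p(x|y), giving W* = sum p(x,y) log2( odds[x] p(x|y) ).
        p_y = p_xy.sum(axis=0)
        return sum(p_xy[x, y] * np.log2(odds[x] * p_xy[x, y] / p_y[y])
                   for x in range(p_xy.shape[0])
                   for y in range(p_xy.shape[1]) if p_xy[x, y] > 0)

    odds = np.array([2.0, 2.0])          # two horses, fair 2-for-1 odds
    p_xy = np.array([[0.4, 0.1],         # joint pmf of winner X and signal Y
                     [0.1, 0.4]])
    p_x = p_xy.sum(axis=1)
    w_without = sum(p_x[x] * np.log2(odds[x] * p_x[x]) for x in range(2))
    w_with = growth_rate(p_xy, odds)
    print(w_with - w_without)  # -> 0.278 bits, exactly I(X; Y)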


IEEE Transactions on Information Theory | 2010

Capacity of Channels With Action-Dependent States

Tsachy Weissman

We consider channels with action-dependent states: Given the message to be communicated, the transmitter chooses an action sequence that affects the formation of the channel states, and then creates the channel input sequence based on the state sequence. We characterize the capacity of such a channel both for the case where the channel inputs are allowed to depend noncausally on the state sequence and for the case where they are restricted to causal dependence. Our setting covers previously considered scenarios involving transmission over channels with states known at the encoder, as well as various new coding scenarios for channels with a “rewrite” option that may arise naturally in storage for computer memories with defects or in magnetic recording. A few examples are worked out in detail.


IEEE Transactions on Information Theory | 2014

Multiterminal Source Coding Under Logarithmic Loss

Thomas A. Courtade; Tsachy Weissman

We consider the classical two-encoder multiterminal source coding problem where distortion is measured under logarithmic loss. We provide a single-letter description of the achievable rate distortion region for all discrete memoryless sources with finite alphabets. By doing so, we also give the rate distortion region for the m-encoder CEO problem (also under logarithmic loss). Several applications and examples are given.
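Under logarithmic loss, a reconstruction is a probability distribution over the source alphabet rather than a single symbol, and d(x, q) = log(1/q(x)). A minimal sketch (ours) showing that the optimal soft reconstruction given an observation is the posterior, with expected loss equal to the conditional entropy:

    import numpy as np

    def log_loss(x, q):
        # q is a pmf over the source alphabet; d(x, q) = log2(1 / q[x])
        return -np.log2(q[x])

    p_xu = np.array([[0.4, 0.1],   # joint pmf of source X and observation U
                     [0.1, 0.4]])
    p_u = p_xu.sum(axis=0)

    # The best soft reconstruction given U = u is the posterior p(.|u); its
    # expected log loss equals the conditional entropy H(X | U).
    expected = sum(p_xu[x, u] * log_loss(x, p_xu[:, u] / p_u[u])
                   for x in range(2) for u in range(2))
    print(expected)  # -> 0.7219... bits = H(X | U)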


International Symposium on Information Theory | 2001

On limited-delay lossy coding and filtering of individual sequences

Tsachy Weissman; Neri Merhav

We continue the study of adaptive schemes for the sequential lossy coding of individual sequences initiated by Linder and Lugosi (see ibid., p. 2533-38, 2001). Specifically, we consider fixed-rate lossy coding systems of fixed (or zero) delay where the encoder (which is allowed to use randomization) and the decoder are connected via a noiseless channel of a given capacity. It is shown that for any finite set of such coding schemes of a given rate, there exists a source code (adhering to the same structural and delay limitations) with the same rate whose distortion is, with high probability, almost as small as that of the best scheme in that set, uniformly for all individual sequences. Applications of this result to reference classes of special interest are outlined. These include the class of scalar quantizers, trellis encoders with sliding-block decoders, and differential pulse-code modulation (DPCM)-based source codes. In particular, for the class of all scalar quantizers, a source code is obtained with (normalized) distortion redundancy of order n^{-1/3} log n relative to the best scheme in the reference class (where n is the sequence length). This improves on the n^{-1/5} log n rate achieved by Linder and Lugosi. More importantly, the decoder here is deterministic and, in particular, does not assume a common randomization sequence available at both encoder and decoder. Finally, we consider the case where the individual sequence is corrupted by noise prior to reaching the coding system, whose goal now is to reconstruct a sequence with small distortion relative to the clean individual sequence. It is shown that for the case of a finite alphabet and an invertible channel transition probability matrix, for any finite set of sliding-window schemes of a given rate, there exists a source code (allowed to use randomization yet adhering to the same delay constraints) whose performance is, with high probability, essentially as good as that of the best scheme in the class, for all individual sequences.
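Competitive schemes in this line of work are built on exponential weighting over the finite reference class. Below is a generic sketch of that primitive (ours; it is not the paper's construction, which must additionally respect the rate and delay constraints):

    import numpy as np

    rng = np.random.default_rng(0)

    def exp_weights(losses, eta):
        # Randomized selection among K schemes: at each time, scheme j is
        # drawn with probability proportional to exp(-eta * cumulative loss).
        # losses: (T, K) array, losses[t, j] = distortion of scheme j at time t.
        T, K = losses.shape
        cum = np.zeros(K)
        total = 0.0
        for t in range(T):
            w = np.exp(-eta * (cum - cum.min()))   # shifted for stability
            total += losses[t, rng.choice(K, p=w / w.sum())]
            cum += losses[t]
        return total

    # With eta ~ sqrt(8 ln K / T) and losses in [0, 1], the expected excess
    # loss over the best fixed scheme is O(sqrt(T ln K)), i.e. vanishing
    # per-symbol redundancy.
    T, K = 10000, 8
    losses = rng.random((T, K))
    print(exp_weights(losses, np.sqrt(8 * np.log(K) / T)) - losses.sum(axis=0).min())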


IEEE Transactions on Information Theory | 2008

The Information Lost in Erasures

Sergio Verdú; Tsachy Weissman

We consider sources and channels with memory observed through erasure channels. In particular, we examine the impact of sporadic erasures on the fundamental limits of lossless data compression, lossy data compression, channel coding, and denoising. We define the erasure entropy of a collection of random variables as the sum of the entropies of the individual variables conditioned on all the rest. The erasure entropy measures the information content carried by each symbol knowing its context. The erasure entropy rate is shown to be the minimal number of bits per erasure required to recover the lost information in the limit of small erasure probability. When we allow recovery of the erased symbols within a prescribed degree of distortion, the fundamental tradeoff is described by the erasure rate-distortion function, which we characterize. We show that in the regime of sporadic erasures, knowledge at the encoder of the erasure locations does not lower the rate required to achieve a given distortion. When no additional encoded information is available, the erased information is reconstructed solely on the basis of its context by a denoiser. Connections between erasure entropy and discrete denoising are developed. The decrease in the capacity of channels with memory due to sporadic memoryless erasures is also characterized in wide generality.
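The definition translates directly into code. A small sketch (ours) computing the erasure entropy of a finite collection of random variables from an explicit joint pmf:

    import numpy as np
    from collections import defaultdict
    from itertools import product

    def erasure_entropy(pmf, n):
        # sum over i of H(X_i | X_{all others}); pmf: dict {x_tuple: prob}
        total = 0.0
        for i in range(n):
            joint = defaultdict(float)        # joint pmf of (X_i ; X_rest)
            for x, p in pmf.items():
                joint[(x[i], x[:i] + x[i+1:])] += p
            p_rest = defaultdict(float)
            for (xi, rest), p in joint.items():
                p_rest[rest] += p
            total -= sum(p * np.log2(p / p_rest[rest])
                         for (xi, rest), p in joint.items() if p > 0)
        return total

    # Two i.i.d. fair bits: erasure entropy = H(X1) + H(X2) = 2 bits.
    pmf = {bits: 0.25 for bits in product((0, 1), repeat=2)}
    print(erasure_entropy(pmf, 2))  # -> 2.0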


International Conference on Image Processing | 2003

A discrete universal denoiser and its application to binary images

Erik Ordentlich; Gadiel Seroussi; Sergio Verdú; Marcelo J. Weinberger; Tsachy Weissman

This paper describes a discrete universal denoiser for two-dimensional data and presents experimental results of its application to noisy binary images. A discrete universal denoiser (DUDE) is introduced for recovering a signal with finite-valued components corrupted by finite-valued, uncorrelated noise. The DUDE is asymptotically optimal and universal, in the sense of asymptotically achieving, without access to any information on the statistics of the clean signal, the same performance as the best denoiser that does have access to such information. It is also practical and can be implemented with low complexity.
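For a binary symmetric channel with crossover delta and Hamming loss, the DUDE rule reduces to a closed-form flipping threshold, which extends naturally to 2-D contexts. A rough sketch (ours, not the paper's implementation; it assumes 0 < delta < 1/2):

    import numpy as np
    from collections import defaultdict

    def binary_dude_2d(z, delta, k=1):
        # z: 2-D numpy array of 0/1; context = (2k+1)x(2k+1) window minus center.
        # Rule: flip z[i,j] iff its context class saw the opposite bit more than
        # ((1-delta)^2 + delta^2) / (2 delta (1-delta)) times as often as z[i,j].
        H, W = z.shape
        thresh = ((1 - delta) ** 2 + delta ** 2) / (2 * delta * (1 - delta))
        def ctx(i, j):
            w = z[i-k:i+k+1, j-k:j+k+1].flatten()
            return tuple(np.delete(w, len(w) // 2))   # drop the center pixel
        counts = defaultdict(lambda: np.zeros(2))
        for i in range(k, H - k):                     # pass 1: context counts
            for j in range(k, W - k):
                counts[ctx(i, j)][z[i, j]] += 1
        x_hat = z.copy()
        for i in range(k, H - k):                     # pass 2: flipping rule
            for j in range(k, W - k):
                m, b = counts[ctx(i, j)], z[i, j]
                if m[1 - b] > thresh * m[b]:
                    x_hat[i, j] = 1 - b
        return x_hat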


International Symposium on Information Theory | 2007

The Gaussian Channel with Noisy Feedback

Young-Han Kim; Amos Lapidoth; Tsachy Weissman

Upper and lower bounds are derived on the reliability function of the additive white Gaussian noise channel with output fed back to the transmitter over an independent additive white Gaussian noise channel. Special attention is paid to the regime of very low feedback noise variance, and it is shown that the reliability function is asymptotically inversely proportional to the feedback noise variance. This result shows that noise in the feedback link, however small, renders communication with noisy feedback fundamentally different from the perfect-feedback case. For example, it is demonstrated that with noisy feedback, linear coding schemes fail to achieve any positive rate. In contrast, an asymptotically optimal coding scheme is devised, based on a three-phase detection/retransmission protocol, which achieves an error exponent inversely proportional to the feedback noise variance for any rate less than capacity.
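The failure of linear schemes is easy to see in simulation. A toy Monte Carlo sketch (ours; it mimics a Schalkwijk-Kailath-style iteration and is not the paper's three-phase protocol) shows the receiver's MSE decaying geometrically under perfect feedback but hitting a floor once the feedback link is noisy:

    import numpy as np

    rng = np.random.default_rng(0)

    def linear_feedback_mse(rounds, P, N, fb_var, trials=100000):
        # theta: message point; each round the receiver's estimate is fed
        # back over an AWGN link, and the transmitter sends its (scaled)
        # view of the estimation error.
        theta = rng.standard_normal(trials)
        est = theta + np.sqrt(N) * rng.standard_normal(trials)  # round 1
        for _ in range(rounds - 1):
            fb = est + np.sqrt(fb_var) * rng.standard_normal(trials)
            eps_tx = fb - theta                  # transmitter's view of error
            g = np.sqrt(P / eps_tx.var())        # meet the power constraint
            y = g * eps_tx + np.sqrt(N) * rng.standard_normal(trials)
            eps = est - theta
            c = np.mean(eps * y) / np.mean(y * y)   # empirical MMSE gain
            est = est - c * y
        return np.mean((est - theta) ** 2)

    for fb_var in (0.0, 1e-3):   # perfect vs. slightly noisy feedback
        print(fb_var, linear_feedback_mse(rounds=20, P=1.0, N=0.1, fb_var=fb_var))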

Collaboration


Dive into Tsachy Weissman's collaborations.

Top Co-Authors

Haim H. Permuter

Ben-Gurion University of the Negev

Neri Merhav

Technion – Israel Institute of Technology

Shirin Jalali

California Institute of Technology
