Haim H. Permuter
Ben-Gurion University of the Negev
Publications
Featured research published by Haim H. Permuter.
IEEE Transactions on Information Theory | 2010
Paul Cuff; Haim H. Permuter; Thomas M. Cover
We develop elements of a theory of cooperation and coordination in networks. Rather than considering a communication network as a means of distributing information, or of reconstructing random processes at remote nodes, we ask what dependence can be established among the nodes given the communication constraints. Specifically, in a network with communication rates {Ri,j} between the nodes, we ask what is the set of all achievable joint distributions p(x1, ..., xm) of actions at the nodes of the network. Several networks are solved, including arbitrarily large cascade networks. Distributed cooperation can be the solution to many problems such as distributed games, distributed control, and establishing mutual information bounds on the influence of one part of a physical system on another.
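A concrete instance, sketched from memory of the simplest case treated in this line of work and stated here only for orientation: in the two-node network where node 1 observes actions X^n drawn i.i.d. from p(x) and describes them to node 2 at rate R, the empirical joint distribution p(x)p(y|x) of actions is achievable if and only if the rate covers the mutual information of the target distribution,

\[
  R \;\ge\; I(X;Y),
\]

with I(X;Y) evaluated under the target p(x)p(y|x).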
IEEE Transactions on Information Theory | 2009
Haim H. Permuter; Tsachy Weissman; Andrea J. Goldsmith
We consider the capacity of discrete-time channels with feedback for the general case where the feedback is a time-invariant deterministic function of the output samples. Under the assumption that the channel states take values in a finite alphabet, we find a sequence of achievable rates and a sequence of upper bounds on the capacity. The achievable rates and the upper bounds are computable for any N, and the limits of the sequences exist. We show that when the probability of the initial state is positive for all the channel states, then the capacity is the limit of the achievable-rate sequence. We further show that when the channel is stationary, indecomposable, and has no intersymbol interference (ISI), its capacity is given by the limit of the maximum of the (normalized) directed information between the input X^N and the output Y^N, i.e., C = lim_{N→∞} (1/N) max I(X^N → Y^N), where the maximization is taken over the causal conditioning probability Q(x^N || z^{N-1}) defined in this paper. The main idea for obtaining the results is to add causality into Gallager's results on finite-state channels. The capacity results are used to show that the source-channel separation theorem holds for time-invariant deterministic feedback, and that if the state of the channel is known both at the encoder and the decoder, then feedback does not increase capacity.
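For reference, the directed information and the causal conditioning appearing in this formula can be written out explicitly (standard definitions, stated here in the abstract's notation, with z denoting the feedback samples):

\[
  I(X^N \to Y^N) \;=\; \sum_{i=1}^{N} I(X^i; Y_i \mid Y^{i-1}),
  \qquad
  Q(x^N \,\|\, z^{N-1}) \;=\; \prod_{i=1}^{N} Q(x_i \mid x^{i-1}, z^{i-1}).
\]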
IEEE Transactions on Information Theory | 2008
Haim H. Permuter; Paul Cuff; B. Van Roy; Tsachy Weissman
We establish that the feedback capacity of the trapdoor channel is the logarithm of the golden ratio and provide a simple communication scheme that achieves capacity. As part of the analysis, we formulate a class of dynamic programs that characterize capacities of unifilar finite-state channels. The trapdoor channel is an instance that admits a simple closed-form solution.
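Written out, the capacity value in the first sentence is

\[
  C_{\mathrm{FB}} \;=\; \log_2 \varphi \;=\; \log_2\!\frac{1+\sqrt{5}}{2} \;\approx\; 0.6942 \ \text{bits per channel use}.
\]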
IEEE Transactions on Information Theory | 2011
Haim H. Permuter; Young-Han Kim; Tsachy Weissman
We investigate the role of directed information in portfolio theory, data compression, and statistics with causality constraints. In particular, we show that directed information is an upper bound on the increment in growth rates of optimal portfolios in a stock market due to causal side information. This upper bound is tight for gambling in a horse race, which is an extreme case of stock markets. Directed information also characterizes the value of causal side information in instantaneous compression and quantifies the benefit of causal inference in joint compression of two stochastic processes. In hypothesis testing, directed information evaluates the best error exponent for testing whether a random process Y causally influences another process X or not. These results lead to a natural interpretation of directed information I(Y^n → X^n) as the amount of information that a random sequence Y^n = (Y_1, Y_2, ..., Y_n) causally provides about another random sequence X^n = (X_1, X_2, ..., X_n). A new measure, directed lautum information, is also introduced and interpreted in portfolio theory, data compression, and hypothesis testing.
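The tight horse-race case admits a compact statement (a sketch of the identity described above): when side information Y^n is revealed causally to a gambler betting on a horse race X^n, the increase ΔW in the optimal growth rate over betting without side information is

\[
  \Delta W \;=\; \frac{1}{n}\, I(Y^n \to X^n).
\]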
IEEE Transactions on Information Theory | 2011
Haim H. Permuter; Tsachy Weissman
We study source coding in the presence of side information, when the system can take actions that affect the availability, quality, or nature of the side information. We begin by extending the Wyner-Ziv problem of source coding with decoder side information to the case where the decoder is allowed to choose actions affecting the side information. We then consider the setting where actions are taken by the encoder, based on its observation of the source. Actions may have costs that are commensurate with the quality of the side information they yield, and an overall per-symbol cost constraint may be imposed. We characterize the achievable tradeoffs between rate, distortion, and cost in some of these problem settings. Among our findings is the fact that even in the absence of a cost constraint, greedily choosing the action associated with the “best” side information is, in general, suboptimal. A few examples are worked out.
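For orientation, the classical Wyner-Ziv rate-distortion function that this setting generalizes (side information Y given, no actions, distortion measure d) is

\[
  R_{\mathrm{WZ}}(D) \;=\; \min_{p(u|x),\, \hat{x}(u,y)} \; I(X;U) - I(U;Y)
  \quad \text{subject to} \quad \mathbb{E}\, d\big(X, \hat{X}\big) \le D,
\]

and the paper's formulations augment this with an action variable that selects the side information, together with a per-symbol cost constraint on the actions.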
IEEE Transactions on Information Theory | 2011
Haim H. Permuter; Shlomo Shamai; Anelia Somekh-Baruch
We investigate the capacity of a multiple access channel with cooperating encoders where partial state information is known to each encoder and full state information is known to the decoder. The cooperation between the encoders has a two-fold purpose: to generate empirical state coordination between the encoders, and to share information about the private messages that each encoder has. For two-way cooperation, this two-fold purpose is achieved by double-binning, where the first layer of binning is used to generate the state coordination similarly to the two-way source coding, and the second layer of binning is used to transmit information about the private messages. The complete result provides the framework and perspective for addressing a complex level of cooperation that mixes states and messages in an optimal way.
IEEE Transactions on Information Theory | 2013
Jiantao Jiao; Haim H. Permuter; Lei Zhao; Young-Han Kim; Tsachy Weissman
Four estimators of the directed information rate between a pair of jointly stationary ergodic finite-alphabet processes are proposed, based on universal probability assignments. The first one is a Shannon-McMillan-Breiman-type estimator, similar to those used by Verdú in 2005 and Cai in 2006 for estimation of other information measures. We show the almost sure and L1 convergence properties of the estimator for any underlying universal probability assignment. The other three estimators map universal probability assignments to different functionals, each exhibiting relative merits such as smoothness, nonnegativity, and boundedness. We establish the consistency of these estimators in almost sure and L1 senses, and derive near-optimal rates of convergence in the minimax sense under mild conditions. These estimators carry over directly to estimating other information measures of stationary ergodic finite-alphabet processes, such as entropy rate and mutual information rate, with near-optimal performance and provide alternatives to classical approaches in the existing literature. Guided by these theoretical results, the proposed estimators are implemented using the context-tree weighting algorithm as the universal probability assignment. Experiments on synthetic and real data are presented, demonstrating the potential of the proposed schemes in practice and the utility of directed information estimation in detecting and measuring causal influence and delay.
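As a minimal illustration of the plug-in idea behind the first estimator, the following sketch substitutes a fixed-order Krichevsky-Trofimov (add-1/2) sequential estimator for the paper's context-tree weighting probability assignment; the function names and the context order k are illustrative, not taken from the paper:

import math
from collections import defaultdict

def kt_log2_prob(symbols, contexts):
    # Sequential Krichevsky-Trofimov (add-1/2) log2-probability of the
    # binary sequence `symbols`, where symbols[i] is predicted from contexts[i].
    counts = defaultdict(lambda: [0, 0])  # context -> [#zeros, #ones]
    logp = 0.0
    for s, ctx in zip(symbols, contexts):
        c = counts[ctx]
        logp += math.log2((c[s] + 0.5) / (c[0] + c[1] + 1.0))
        c[s] += 1
    return logp

def directed_info_rate(x, y, k=2):
    # Plug-in estimate of (1/n) I(X^n -> Y^n) for binary lists x and y:
    # code length of y predicted from its own past, minus code length of y
    # predicted from its own past together with x observed causally.
    n = len(y)
    ctx_y = [tuple(y[max(0, i - k):i]) for i in range(n)]
    ctx_xy = [(tuple(y[max(0, i - k):i]), tuple(x[max(0, i - k):i + 1]))
              for i in range(n)]
    return (kt_log2_prob(y, ctx_xy) - kt_log2_prob(y, ctx_y)) / n

On a toy source where y_i is a noisy copy of x_{i-1} (crossover probability 0.1), the forward estimate approaches 1 - H_b(0.1) ≈ 0.53 bits per symbol while the reverse estimate stays near zero, mirroring the causal-influence detection discussed in the abstract:

import random
random.seed(0)
x = [random.randint(0, 1) for _ in range(5000)]
y = [0] + [xi if random.random() < 0.9 else 1 - xi for xi in x[:-1]]
print(directed_info_rate(x, y))  # roughly 0.5 bits per symbol
print(directed_info_rate(y, x))  # close to 0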
IEEE Transactions on Information Theory | 2013
Tsachy Weissman; Young-Han Kim; Haim H. Permuter
A notion of directed information between two continuous-time processes is proposed. A key component in the definition is taking an infimum over all possible partitions of the time interval, which plays a role no less significant than the supremum over “space” partitions inherent in the definition of mutual information. Properties and operational interpretations in estimation and communication are then established for the proposed notion of directed information. For the continuous-time additive white Gaussian noise channel, it is shown that Duncan's classical relationship between causal estimation error and mutual information continues to hold in the presence of feedback upon replacing mutual information by directed information. A parallel result is established for the Poisson channel. The utility of this relationship is demonstrated in computing the directed information rate between the input and output processes of a continuous-time Poisson channel with feedback, where the channel input process is constrained to be constant between events at the channel output. Finally, the capacity of a wide class of continuous-time channels with feedback is established via directed information, characterizing the fundamental limit on reliable communication.
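The Gaussian result admits a compact statement. For the standard continuous-time AWGN channel dY_t = X_t dt + dW_t, Duncan's theorem gives, in the absence of feedback,

\[
  I(X^T; Y^T) \;=\; \frac{1}{2} \int_0^T \mathbb{E}\Big[\big(X_t - \mathbb{E}[X_t \mid Y^t]\big)^2\Big]\, dt \quad \text{(nats)},
\]

and this paper shows that with feedback the same causal minimum mean squared error equals the directed information:

\[
  I(X^T \to Y^T) \;=\; \frac{1}{2} \int_0^T \mathbb{E}\Big[\big(X_t - \mathbb{E}[X_t \mid Y^t]\big)^2\Big]\, dt .
\]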
IEEE Transactions on Information Theory | 2009
Haim H. Permuter; Tsachy Weissman; Jun Chen
The capacity region of the finite-state multiple-access channel (FS-MAC) with feedback that may be an arbitrary time-invariant function of the channel output samples is considered. We characterize both an inner and an outer bound for this region using Massey's directed information. These bounds are shown to coincide, and hence yield the capacity region, for indecomposable FS-MACs without feedback and for stationary and indecomposable FS-MACs with feedback, where the state process is not affected by the inputs. Though multiletter in general, our results yield explicit conclusions when applied to specific scenarios of interest. For example, our results allow us to do the following. 1) Identify a large class of FS-MACs, including the additive mod-2 noise MAC where the noise may have memory, for which feedback does not enlarge the capacity region. 2) Deduce that, for a general FS-MAC with states that are not affected by the input, if the capacity (region) without feedback is zero, then so is the capacity (region) with feedback. 3) Deduce that the capacity region of a MAC that can be decomposed into a multiplexer concatenated with a point-to-point channel (with feedback, without feedback, or with partial feedback) is given by Σ_m R_m ≤ C, where C is the capacity of the point-to-point channel and m indexes the encoders. Moreover, we show that source-channel coding separation holds for this family of channels.
IEEE Transactions on Information Theory | 2010
Haim H. Permuter; Yossef Steinberg; Tsachy Weissman
Consider the two-way rate-distortion problem in which a helper sends a common limited-rate message to both users based on side information at its disposal. We characterize the region of achievable rates and distortions when the Markov relation (Helper)-(User 1)-(User 2) holds. The main insight of the result is that in order to achieve the optimal rate, the helper may use a binning scheme, as in Wyner-Ziv, where the side information at the decoder is that of the “further” user, namely, User 2. We derive these regions explicitly for Gaussian sources with squared-error distortion, analyze a tradeoff between the rate from the helper and the rate from the source, and examine a special case where the helper has the freedom to send different messages, at different rates, to the encoder and the decoder. The converse proofs use a technique for verifying Markov relations via undirected graphs.