
Publication


Featured research published by Michael Gastpar.


IEEE Transactions on Information Theory | 2005

Cooperative strategies and capacity theorems for relay networks

Gerhard Kramer; Michael Gastpar; Piyush Gupta

Coding strategies that exploit node cooperation are developed for relay networks. Two basic schemes are studied: the relays decode-and-forward the source message to the destination, or they compress-and-forward their channel outputs to the destination. The decode-and-forward scheme is a variant of multihopping, but in addition to having the relays successively decode the message, the transmitters cooperate and each receiver uses several or all of its past channel output blocks to decode. For the compress-and-forward scheme, the relays take advantage of the statistical dependence between their channel outputs and the destination's channel output. The strategies are applied to wireless channels, and it is shown that decode-and-forward achieves the ergodic capacity with phase fading if phase information is available only locally, and if the relays are near the source node. The ergodic capacity coincides with the rate of a distributed antenna array with full cooperation even though the transmitting antennas are not colocated. The capacity results generalize broadly, including to multiantenna transmission with Rayleigh fading, single-bounce fading, certain quasi-static fading problems, cases where partial channel knowledge is available at the transmitters, and cases where local user cooperation is permitted. The results further extend to multisource and multidestination networks such as multiaccess and broadcast relay channels.


IEEE Transactions on Information Theory | 2011

Compute-and-Forward: Harnessing Interference Through Structured Codes

Bobak Nazer; Michael Gastpar

Interference is usually viewed as an obstacle to communication in wireless networks. This paper proposes a new strategy, compute-and-forward, that exploits interference to obtain significantly higher rates between users in a network. The key idea is that relays should decode linear functions of transmitted messages according to their observed channel coefficients rather than ignoring the interference as noise. After decoding these linear equations, the relays simply send them towards the destinations, which given enough equations, can recover their desired messages. The underlying codes are based on nested lattices whose algebraic structure ensures that integer combinations of codewords can be decoded reliably. Encoders map messages from a finite field to a lattice and decoders recover equations of lattice points which are then mapped back to equations over the finite field. This scheme is applicable even if the transmitters lack channel state information.
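The decode-equations-then-solve idea can be sketched numerically. The following is a minimal illustration, not the paper's scheme: the prime p, the messages, and the integer coefficient matrix are all invented for the example, and the noisy lattice-decoding step is omitted. It only shows the final stage, where a destination with a full-rank set of finite-field equations solves for the original messages.

```python
import numpy as np

p = 257  # prime field size (illustrative)

# Two user messages as symbols in GF(p)
w = np.array([42, 191])

# Each relay decodes one integer linear combination of the
# messages (mod p); rows are the decoded equation coefficients.
A = np.array([[1, 2],
              [3, 1]])
equations = A @ w % p

# The destination inverts the coefficient matrix over GF(p)
# to recover the original messages.
det = int(round(np.linalg.det(A))) % p
det_inv = pow(det, -1, p)  # modular inverse (Python 3.8+)
adj = np.array([[A[1, 1], -A[0, 1]],
                [-A[1, 0], A[0, 0]]])
A_inv = det_inv * adj % p
recovered = A_inv @ equations % p
print(recovered)  # recovers [42, 191]
```

The crucial structural point is that each relay forwards only one equation (one finite-field symbol per message symbol), never the individual codewords.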


IEEE Transactions on Information Theory | 2003

To code, or not to code: lossy source-channel communication revisited

Michael Gastpar; Bixio Rimoldi; Martin Vetterli

What makes a source-channel communication system optimal? It is shown that in order to achieve an optimal cost-distortion tradeoff, the source and the channel have to be matched in a probabilistic sense. The match (or lack of it) involves the source distribution, the distortion measure, the channel conditional distribution, and the channel input cost function. Closed-form necessary and sufficient expressions relating the above entities are given. This generalizes both the separation-based approach as well as the two well-known examples of optimal uncoded communication. The condition of probabilistic matching is extended to certain nonergodic and multiuser scenarios. This leads to a result on optimal single-source broadcast communication.
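One of the two classic examples of optimal uncoded communication alluded to above, a Gaussian source sent uncoded over an AWGN channel, can be checked numerically. In this sketch the SNR value and sample size are arbitrary; the simulated distortion of the uncoded scheme is compared against Shannon's distortion-rate optimum.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
snr = 4.0  # channel SNR = P / N (illustrative value)

# Unit-variance Gaussian source, sent uncoded with power P = snr
# over an AWGN channel with unit noise variance.
s = rng.standard_normal(n)
x = np.sqrt(snr) * s
y = x + rng.standard_normal(n)

# MMSE estimate of s from y (linear is optimal in the Gaussian case)
s_hat = (np.sqrt(snr) / (snr + 1)) * y
distortion = np.mean((s - s_hat) ** 2)

# Shannon's optimum: D(R) = 2**(-2R) with R = C = 0.5*log2(1+SNR),
# i.e. D = 1 / (1 + SNR) -- matched exactly by the uncoded scheme.
print(distortion, 1 / (1 + snr))
```

Here source, channel, distortion measure, and cost function are probabilistically matched, so the trivial "no code" achieves what no amount of separate source and channel coding could improve upon.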


Allerton Conference on Communication, Control, and Computing | 2007

On Capacity Under Receive and Spatial Spectrum-Sharing Constraints

Michael Gastpar

Capacity is often studied under constraints on the channel input signals. This paper investigates the behavior of capacity when constraints are placed on the channel output signal (as well as generalizations thereof). While such a change in perspective leaves the point-to-point problem (essentially) unchanged, the main conclusion is that in certain network scenarios, including multiple-access and relay situations, both the structure of the problem and the conclusions change. For example, capacity results are found for the many-user Gaussian multiple-access channel (MAC) with arbitrarily dependent sources, cooperation, or feedback, and for the nondegraded Gaussian relay network. The investigations are motivated by recent questions arising in spectrum sharing and dynamic spectrum allocation: multiple independent networks share the same frequency band, but are spatially mostly disjoint. One approach to granting coexistence is via spatial interference power restrictions, imposed at the network level rather than at the device level. The corresponding capacity question is posed and partially answered in this paper.


International Symposium on Information Theory | 2007

Computation Over Multiple-Access Channels

Bobak Nazer; Michael Gastpar

The problem of reliably reconstructing a function of sources over a multiple-access channel (MAC) is considered. It is shown that there is no source-channel separation theorem even when the individual sources are independent. Joint source-channel strategies are developed that are optimal when the structure of the channel probability transition matrix and the function are appropriately matched. Even when the channel and function are mismatched, these computation codes often outperform separation-based strategies. Achievable distortions are given for the distributed refinement of the sum of Gaussian sources over a Gaussian multiple-access channel with a joint source-channel lattice code. Finally, computation codes are used to determine the multicast capacity of finite-field multiple-access networks, thus linking them to network coding.
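The structural matching between channel and function can be illustrated with a toy example: if both users encode with the same linear code, the mod-2 superposition of their codewords is itself a codeword, encoding the sum of the messages. This sketch uses a noiseless binary adder channel and a (7,4) Hamming generator matrix purely for illustration; it is not the paper's construction.

```python
import numpy as np

rng = np.random.default_rng(0)
k = 4  # message length for the (7,4) Hamming code

# Generator matrix of the (7,4) Hamming code over GF(2).
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

u1 = rng.integers(0, 2, k)
u2 = rng.integers(0, 2, k)

# Both users encode with the SAME linear code; by linearity, the
# mod-2 superposition of their codewords is again a codeword of
# that code -- and it encodes the mod-2 sum of the messages.
c1, c2 = (u1 @ G) % 2, (u2 @ G) % 2
y = (c1 + c2) % 2
assert np.array_equal(y, (((u1 + u2) % 2) @ G) % 2)
print((u1 + u2) % 2)
```

The receiver can therefore decode the sum directly from the superposition, with the same error protection as a single user, instead of first recovering each message separately as a separation-based scheme would.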


Information Processing in Sensor Networks | 2003

Source-channel communication in sensor networks

Michael Gastpar; Martin Vetterli

Sensors acquire data and communicate it to an interested party. The resulting coding problem is often split into two parts: first, the sensors compress their respective acquired signals, potentially applying the concepts of distributed source coding. Then, they communicate the compressed versions to the interested party, the goal being not to make any errors. This coding paradigm is inspired by Shannon's separation theorem for point-to-point communication, but it leads to suboptimal performance in general network topologies. The optimal performance for the general case is not known. In this paper, we propose an alternative coding paradigm based on joint source-channel coding. This paradigm makes it possible to determine the optimal performance for a class of sensor networks and shows how to achieve it. For sensor networks outside this class, we argue that the goal of the coding system could be to approach our condition for optimal performance as closely as possible. This is supported by examples for which our coding paradigm significantly outperforms the traditional separation-based coding paradigm. In particular, for a Gaussian example considered in this paper, the distortion of the best coding scheme according to the separation paradigm decreases like 1/log M, while for our coding paradigm, it decreases like 1/M, where M is the total number of sensors.
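The 1/M behavior of the joint (uncoded) scheme in the Gaussian example can be seen in a small simulation. This sketch is not the paper's exact construction: the sensing-noise variance, the unit channel-noise variance, and the sample size are arbitrary, and phase-coherent amplify-and-forward over a Gaussian MAC is assumed.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50_000

def uncoded_distortion(M):
    """Distortion when M sensors amplify-and-forward noisy observations
    of a common Gaussian source over a coherent Gaussian MAC."""
    s = rng.standard_normal(n)
    obs = s + 0.5 * rng.standard_normal((M, n))   # sensing noise
    y = obs.sum(axis=0) + rng.standard_normal(n)  # MAC adds signals + noise
    # Linear MMSE estimate of s from the single channel output y
    c = np.mean(s * y) / np.mean(y * y)
    return np.mean((s - c * y) ** 2)

for M in [1, 10, 100]:
    print(M, uncoded_distortion(M))  # distortion shrinks roughly like 1/M
```

The channel itself computes the (scaled) sum the estimator needs, which is why the distortion decays like 1/M rather than the 1/log M achievable under separation.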


International Symposium on Information Theory | 2008

Compute-and-forward: Harnessing interference with structured codes

Bobak Nazer; Michael Gastpar

For a centralized encoder and decoder, a channel matrix is simply a set of linear equations that can be transformed into parallel channels. We develop a similar approach to multi-user networks: we view interference as creating linear equations of codewords, and a receiver's goal is to collect a full-rank set of such equations. Our new relaying technique, compute-and-forward, uses structured codes to reliably compute functions over channels. This allows the relays to efficiently recover a linear function of the codewords without recovering the individual codewords. Thus, our scheme can work with the structure of the interference while removing the effects of the noise at the relay. We apply our scheme to a Gaussian relay network with interference and achieve better rates than either compress-and-forward or decode-and-forward in certain regimes.


arXiv: Information Theory | 2011

Reliable Physical Layer Network Coding

Bobak Nazer; Michael Gastpar

When two or more users in a wireless network transmit simultaneously, their electromagnetic signals are linearly superimposed on the channel. As a result, a receiver that is interested in one of these signals sees the others as unwanted interference. This property of the wireless medium is typically viewed as a hindrance to reliable communication over a network. However, using a recently developed coding strategy, interference can in fact be harnessed for network coding. In a wired network, (linear) network coding refers to each intermediate node taking its received packets, computing a linear combination over a finite field, and forwarding the outcome towards the destinations. Then, given an appropriate set of linear combinations, a destination can solve for its desired packets. For certain topologies, this strategy can attain significantly higher throughputs than routing-based strategies. Reliable physical layer network coding takes this idea one step further: using judiciously chosen linear error-correcting codes, intermediate nodes in a wireless network can directly recover linear combinations of the packets from the observed noisy superpositions of transmitted signals. Starting with some simple examples, this paper explores the core ideas behind this new technique and the possibilities it offers for communication over interference-limited wireless networks.
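The core idea can be sketched with the classic two-way relay example: two nodes transmit simultaneously, and the relay recovers the XOR of their packets directly from the superposition. This is a toy noiseless sketch assuming BPSK modulation; the judiciously chosen error-correcting codes that make this reliable over noisy channels are exactly what the paper is about and are omitted here.

```python
import numpy as np

rng = np.random.default_rng(0)
k = 16  # packet length in bits (illustrative)

# Two nodes exchange packets through a relay (two-way relay example).
a = rng.integers(0, 2, k)
b = rng.integers(0, 2, k)

# Both transmit BPSK simultaneously; the wireless medium adds the signals.
x_a, x_b = 1 - 2 * a, 1 - 2 * b
y = x_a + x_b  # noiseless superposition, values in {-2, 0, +2}

# The relay reads off a XOR b directly from the superposition:
# y == 0 exactly when the two bits differ. It never learns a or b alone.
xor_ab = (y == 0).astype(int)

# The relay broadcasts xor_ab; each node XORs with its own packet
# to recover the other node's packet.
assert np.array_equal(xor_ab ^ a, b)
assert np.array_equal(xor_ab ^ b, a)
print("both packets recovered")
```

Note the exchange takes two slots (uplink superposition, downlink broadcast) instead of the four a routing-based scheme would need.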


IEEE Transactions on Information Theory | 2006

The Distributed Karhunen–Loève Transform

Michael Gastpar; Pier Luigi Dragotti; Martin Vetterli

The Karhunen–Loève transform (KLT) is a key element of many signal processing and communication tasks. Many recent applications involve distributed signal processing, where it is not generally possible to apply the KLT to the entire signal; rather, the KLT must be approximated in a distributed fashion. This paper investigates such distributed approaches to the KLT, where several distributed terminals observe disjoint subsets of a random vector. We introduce several versions of the distributed KLT. First, a local KLT is introduced, which is the optimal solution for a given terminal, assuming all else is fixed. This local KLT is different from, and in general improves upon, the marginal KLT, which simply ignores other terminals. Both optimal approximation and compression using this local KLT are derived. Two important special cases are studied in detail, namely, the partial observation KLT, which has access to a subset of variables but aims at reconstructing them all, and the conditional KLT, which has access to side information at the decoder. We focus on the jointly Gaussian case, with known correlation structure, and on approximation and compression problems. Then, the distributed KLT is addressed by considering local KLTs in turn at the various terminals, leading to an iterative algorithm which is locally convergent, sometimes reaching a global optimum, depending on the overall correlation structure. For compression, it is shown that the classical distributed source coding techniques admit a natural transform coding interpretation, the transform being the distributed KLT. Examples throughout illustrate the performance of the proposed distributed KLT. This distributed transform has potential applications in sensor networks, distributed image databases, hyper-spectral imagery, and data fusion.
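The centralized baseline that the distributed versions approximate is compact to state: the KLT is the eigenbasis of the signal's covariance matrix, and projecting onto it decorrelates the components. A minimal numerical check (the covariance matrix is chosen arbitrarily for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Correlated jointly Gaussian vector (covariance is illustrative)
C = np.array([[2.0, 1.2, 0.4],
              [1.2, 1.5, 0.6],
              [0.4, 0.6, 1.0]])
x = rng.multivariate_normal(np.zeros(3), C, size=n)

# The centralized KLT is the eigenbasis of the covariance matrix;
# projecting onto it decorrelates the components and compacts energy
# into the leading coordinates.
eigvals, U = np.linalg.eigh(C)
y = x @ U

# Empirical covariance of the transformed data is (nearly) diagonal,
# with the eigenvalues on the diagonal.
emp = np.cov(y.T)
print(np.round(emp, 2))
print(np.round(eigvals, 2))
```

In the distributed setting of the paper, no terminal sees the full vector x, so this eigendecomposition cannot be applied directly; each terminal can only transform its own subset of coordinates.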


International Symposium on Information Theory | 2009

Ergodic interference alignment

Bobak Nazer; Michael Gastpar; Syed Ali Jafar; Sriram Vishwanath

This paper develops a new communication strategy, ergodic interference alignment, for the K-user interference channel with time-varying fading. At any particular time, each receiver will see a superposition of the transmitted signals plus noise. The standard approach to such a scenario results in each transmitter-receiver pair achieving a rate proportional to 1/K of its interference-free ergodic capacity. However, given two well-chosen time indices, the channel coefficients from interfering users can be made to exactly cancel. By adding up these two observations, each receiver can obtain its desired signal without any interference. If the channel gains have independent, uniform phases, this technique allows each user to achieve at least 1/2 of its interference-free ergodic capacity at any signal-to-noise ratio. Prior interference alignment techniques were only able to attain this performance as the signal-to-noise ratio tended to infinity. Extensions are given for the case where each receiver wants a message from more than one transmitter as well as the “X channel” case (with two receivers) where each transmitter has an independent message for each receiver. Finally, it is shown how to generalize this strategy beyond Gaussian channel models. For a class of finite field interference channels, this approach yields the ergodic capacity region.
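The pairing trick can be sketched for a single receiver. In this toy version the channel gains and noise level are arbitrary, and the matched fading state is hardcoded; the actual scheme waits, over an ergodic fading process, for a time index at which the interference coefficients are negated while the desired coefficient repeats.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Desired and interfering symbols for receiver 1 (illustrative setup)
x1 = rng.standard_normal(n)
x2 = rng.standard_normal(n)
h11, h12 = 1.3, 0.8  # direct and interference gains at time t

# Time t: receiver 1 sees its signal plus interference plus noise.
y_t = h11 * x1 + h12 * x2 + 0.1 * rng.standard_normal(n)

# Matched time t': same direct gain, negated interference gain
# (the alignment scheme waits for such a complementary fading state).
y_tp = h11 * x1 - h12 * x2 + 0.1 * rng.standard_normal(n)

# Adding the two observations cancels the interference exactly;
# only the (halved) noise remains on top of the desired signal.
combined = 0.5 * (y_t + y_tp)
residual = np.mean((combined - h11 * x1) ** 2)
print(residual)
```

Each codeword symbol is sent twice, which is why the scheme delivers 1/2 of the interference-free rate rather than all of it.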

Collaboration

Top co-authors of Michael Gastpar:

Martin Vetterli (École Polytechnique Fédérale de Lausanne)
Chien-Yi Wang (École Polytechnique Fédérale de Lausanne)
Sung Hoon Lim (École Polytechnique Fédérale de Lausanne)
Jos H. Weber (Delft University of Technology)
Sang-Woon Jeon (Andong National University)
Jiening Zhan (University of California)