Publication


Featured research published by Naveen Goela.


Information Theory Workshop | 2010

On LP decoding of polar codes

Naveen Goela; Satish Babu Korada; Michael Gastpar

Polar codes are the first codes to provably achieve capacity on the symmetric binary-input discrete memoryless channel (B-DMC) with low encoding and decoding complexity. The parity check matrix of polar codes is high-density, and we show that linear program (LP) decoding fails on the fundamental polytope of the parity check matrix. The recursive structure of the code permits a sparse factor graph representation. We define a new polytope based on the fundamental polytope of the sparse graph representation. This new polytope P is defined in a space of dimension O(N log N), where N is the block length. We prove that the projection of P in the original space is tighter than the fundamental polytope based on the parity check matrix. The LP decoder over P obtains the ML-certificate property. In the case of the binary erasure channel (BEC), the new LP decoder is equivalent to the belief propagation (BP) decoder operating on the sparse factor graph representation, and hence achieves capacity. Simulation results of successive cancellation (SC) decoding, LP decoding over tightened polytopes, and maximum likelihood (ML) decoding are provided. For channels other than the BEC, we discuss why LP decoding over P with a linear objective function is insufficient.
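The sparse factor graph underlying the polytope P comes from the recursive structure of the polar transform itself. As a rough illustration only (not the paper's LP decoder), the sketch below implements recursive polar encoding x = u F^(⊗n) over GF(2); the O(N log N) butterfly structure of this recursion is what yields the sparse representation. The function name and the natural (non-bit-reversed) index ordering are choices made here for simplicity.

```python
import numpy as np

def polar_encode(u):
    """Recursive polar (Arikan) encoding x = u * F^(kron n) over GF(2),
    where F = [[1, 0], [1, 1]], in natural (non-bit-reversed) order.

    `u` is a 0/1 numpy array whose length is a power of two.
    Complexity is O(N log N), matching the sparse factor-graph structure.
    """
    N = len(u)
    if N == 1:
        return u.copy()
    half = N // 2
    # Block form of F kron G with G = F^(kron (n-1)):
    # x = [ (u_top XOR u_bot) G , u_bot G ].
    top = polar_encode(u[:half] ^ u[half:])
    bot = polar_encode(u[half:])
    return np.concatenate([top, bot])

# Example: encode a length-8 message vector.
u = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)
print(polar_encode(u))
```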


International Symposium on Information Theory | 2013

Polar codes for broadcast channels

Naveen Goela; Emmanuel Abbe; Michael Gastpar

Building on polar code constructions proposed by the authors for deterministic broadcast channels, two theorems are introduced in the present paper for noisy two-user broadcast channels. The theorems establish polar code constructions for two important information-theoretic broadcast strategies: (1) Cover's superposition strategy; (2) Marton's construction. One aspect of the polar code constructions is the alignment of polarization indices via constraints placed on the auxiliary and channel-input distributions. The codes achieve capacity-optimal rates for several classes of broadcast channels (e.g., binary-input stochastically degraded channels). Applying Arikan's original matrix kernel for polarization, it is shown that the average probability of error in decoding two private messages at the broadcast receivers decays as O(2^(-n^β)), where 0 < β < 1/2 and n is the code length. The encoding and decoding complexities remain O(n log n). The error analysis is made possible by defining new polar code ensembles for broadcast channels.
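The "alignment of polarization indices" concerns which synthetic channels become nearly noiseless versus nearly useless after polarization. For the binary erasure channel this split can be computed exactly, since the Bhattacharyya parameter of a BEC equals its erasure probability and evolves by a closed-form recursion. The sketch below is an illustration under that BEC assumption only, not part of the broadcast construction itself; it shows the fraction of good indices approaching capacity.

```python
def bec_polarization(eps, n):
    """Exact Bhattacharyya parameters of the 2^n synthetic channels obtained
    by polarizing a BEC(eps): Z(W-) = 2Z - Z^2 and Z(W+) = Z^2."""
    z = [eps]
    for _ in range(n):
        z = [v for zi in z for v in (2 * zi - zi * zi, zi * zi)]
    return z

eps, n = 0.5, 12
z = bec_polarization(eps, n)
good = sum(zi < 1e-6 for zi in z) / len(z)
print(f"fraction of nearly noiseless indices: {good:.3f}  (capacity = {1 - eps})")
```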


IEEE Transactions on Information Theory | 2015

Polar Codes for Broadcast Channels

Naveen Goela; Emmanuel Abbe; Michael Gastpar

Polar codes are introduced for discrete memoryless broadcast channels. For m-user deterministic broadcast channels, polarization is applied to map uniformly random message bits from m independent messages to one codeword while satisfying broadcast constraints. The polarization-based codes achieve rates on the boundary of the private-message capacity region. For two-user noisy broadcast channels, polar implementations are presented for two information-theoretic schemes: 1) Cover's superposition codes and 2) Marton's codes. Due to the structure of polarization, constraints on the auxiliary and channel-input distributions are identified to ensure proper alignment of polarization indices in the multiuser setting. The codes achieve rates on the capacity boundary of a few classes of broadcast channels (e.g., binary-input stochastically degraded). The complexity of encoding and decoding is O(n log n), where n is the block length. In addition, polar code sequences obtain a stretched-exponential decay O(2^(-n^β)) of the average block error probability, where 0 < β < 1/2. Reproducible experiments for finite block lengths n = 512, 1024, 2048 corroborate the theory.
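As background only (not the paper's construction), the sketch below evaluates the classical superposition-coding boundary for a degraded binary symmetric broadcast channel, R1 ≤ h(a * p1) − h(p1), R2 ≤ 1 − h(a * p2), where a * p denotes binary convolution. This is the kind of capacity boundary the polar implementation is shown to achieve for binary-input stochastically degraded channels; the crossover probabilities in the example are arbitrary.

```python
import numpy as np

def h2(p):
    """Binary entropy in bits."""
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

def superposition_region(p1, p2, num=11):
    """Boundary points (R1, R2) of Cover's superposition-coding region for a
    degraded BSC broadcast channel with crossover probabilities p1 <= p2.
    a * p denotes the binary convolution a(1-p) + (1-a)p; a sweeps [0, 1/2]."""
    pts = []
    for a in np.linspace(0.0, 0.5, num):
        conv1 = a * (1 - p1) + (1 - a) * p1
        conv2 = a * (1 - p2) + (1 - a) * p2
        pts.append((h2(conv1) - h2(p1), 1 - h2(conv2)))
    return pts

for r1, r2 in superposition_region(0.05, 0.2):
    print(f"R1 = {r1:.3f}, R2 = {r2:.3f}")
```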


Information Theory Workshop | 2011

A compressed sensing wire-tap channel

Galen Reeves; Naveen Goela; Nebojsa Milosavljevic; Michael Gastpar

A multiplicative Gaussian wire-tap channel inspired by compressed sensing is studied. Lower and upper bounds on the secrecy capacity are derived, and shown to be relatively tight in the large system limit for a large class of compressed sensing matrices. Surprisingly, it is shown that the secrecy capacity of this channel is nearly equal to the capacity without any secrecy constraint provided that the channel of the eavesdropper is strictly worse than the channel of the intended receiver. In other words, the eavesdropper can see almost everything and yet learn almost nothing. This behavior, which contrasts sharply with that of many commonly studied wiretap channels, is made possible by the fact that a small number of linear projections can make a crucial difference in the ability to estimate sparse vectors.
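The observation that "a small number of linear projections can make a crucial difference" can be seen in a toy sparse-recovery experiment. The sketch below is not the paper's wiretap coding scheme; it only illustrates, via basis pursuit written as a linear program, how a receiver with enough random Gaussian projections recovers a sparse vector exactly while one with too few does not. The dimensions, sparsity level, and measurement counts are arbitrary choices for the demo.

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(A, y):
    """min ||x||_1  s.t.  A x = y, via the standard LP reformulation with
    variables [x; t] and constraints  x <= t,  -x <= t."""
    m, n = A.shape
    c = np.concatenate([np.zeros(n), np.ones(n)])
    A_ub = np.block([[np.eye(n), -np.eye(n)],
                     [-np.eye(n), -np.eye(n)]])
    b_ub = np.zeros(2 * n)
    A_eq = np.hstack([A, np.zeros((m, n))])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=y,
                  bounds=[(None, None)] * n + [(0, None)] * n)
    return res.x[:n]

rng = np.random.default_rng(0)
n, k = 60, 4
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
for m in (10, 30):            # fewer projections: recovery fails; more: succeeds
    A = rng.standard_normal((m, n)) / np.sqrt(m)
    x_hat = basis_pursuit(A, A @ x)
    print(f"m = {m}: recovery error = {np.linalg.norm(x_hat - x):.3e}")
```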


IEEE Transactions on Signal Processing | 2012

Reduced-Dimension Linear Transform Coding of Correlated Signals in Networks

Naveen Goela; Michael Gastpar

A model called the linear transform network (LTN) is proposed to analyze the compression and estimation of correlated signals transmitted over directed acyclic graphs (DAGs). An LTN is a DAG network with multiple source and receiver nodes. Source nodes transmit subspace projections of random correlated signals by applying reduced-dimension linear transforms. The subspace projections are linearly processed by multiple relays and routed to intended receivers. Each receiver applies a linear estimator to approximate a subset of the sources with minimum mean squared error (MSE) distortion. The model is extended to include noisy networks with power constraints on transmitters. A key task is to compute all local compression matrices and linear estimators in the network to minimize end-to-end distortion. The nonconvex problem is solved iteratively within an optimization framework using constrained quadratic programs (QPs). The proposed algorithm recovers as special cases the regular and distributed Karhunen-Loève transforms (KLTs). Cut-set lower bounds on the distortion region of multi-source, multi-receiver networks are given for linear coding based on convex relaxations. Cut-set lower bounds are also given for any coding strategy based on information theory. The distortion region and compression-estimation tradeoffs are illustrated for different communication demands (e.g., multiple unicast) and graph structures.
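The point-to-point special case that the LTN framework recovers, the rank-k Karhunen-Loève transform, gives a feel for the compression-estimation tradeoff: project onto the top-k eigenvectors of the signal covariance, and the MMSE equals the sum of the discarded eigenvalues. The sketch below is that special case only (no network, no noise, no power constraint); the function name and example covariance are made up for illustration.

```python
import numpy as np

def reduced_klt(cov, k):
    """Rank-k Karhunen-Loeve transform for a zero-mean source with covariance
    `cov`: returns the k x d compression matrix C (top-k eigenvectors) and the
    resulting MMSE, which equals the sum of the discarded eigenvalues."""
    eigvals, eigvecs = np.linalg.eigh(cov)       # ascending order
    order = np.argsort(eigvals)[::-1]
    C = eigvecs[:, order[:k]].T
    mse = eigvals[order[k:]].sum()
    return C, mse

# Example: a correlated 4-dimensional source compressed to 2 dimensions.
cov = np.array([[4.0, 2.0, 1.0, 0.5],
                [2.0, 3.0, 1.0, 0.5],
                [1.0, 1.0, 2.0, 0.5],
                [0.5, 0.5, 0.5, 1.0]])
C, mse = reduced_klt(cov, k=2)
print("compression matrix:\n", C, "\nMMSE:", mse)
```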


International Symposium on Information Theory | 2012

Approximate feedback capacity of the Gaussian multicast channel

Changho Suh; Naveen Goela; Michael Gastpar

We characterize the capacity region to within log 2(M - 1) bits/s/Hz for the M-transmitter, K-receiver Gaussian multicast channel with feedback, where each receiver wishes to decode every message from the M transmitters. Extending the Cover-Leung achievable scheme intended for (M, K) = (2, 1), we show that this generalized scheme achieves the cutset-based outer bound within log 2(M - 1) bits per transmitter for all channel parameters. In contrast to the capacity in the nonfeedback case, the feedback capacity improves upon the naive intersection of the feedback capacities of K individual multiple access channels. We find that feedback provides an unbounded multiplicative gain at high signal-to-noise ratios, as was shown for the Gaussian interference channel. To complement the results, we establish the exact feedback capacity of the Avestimehr-Diggavi-Tse deterministic model, from which we make the observation that feedback can also be beneficial for function computation.


Information Theory Workshop | 2012

Network coding with computation alignment

Naveen Goela; Changho Suh; Michael Gastpar

Determining the capacity of multi-receiver networks with arbitrary message demands is an open problem in the network coding literature. In this paper, we consider a multi-source, multi-receiver symmetric deterministic network model parameterized by channel coefficients (inspired by wireless network flow) in which the receivers compute a sum of the symbols generated at the sources. Scalar and vector linear coding strategies are analyzed. It is shown that computation alignment over finite field vector spaces is necessary to achieve the computation capacities in the network. To aid in the construction of coding strategies, network equivalence theorems are established for the decomposition of deterministic models into elementary sub-networks. The linear coding capacity for computation is characterized for all channel parameters considered in the model for a countably infinite class of networks. The constructive coding schemes introduced herein for a specific class of networks provide an optimistic viewpoint for the application of structured codes in network communication.
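As a toy illustration of linear coding for computation (a drastically simplified stand-in for the paper's deterministic network model), suppose a receiver observes two linear combinations y1 = a·s1 + b·s2 and y2 = c·s1 + d·s2 of the source symbols over a finite field GF(p). It can compute the sum s1 + s2 exactly when some field combination of y1 and y2 equals the sum, i.e. when the channel matrix is invertible; the sketch below solves for those combining coefficients. All symbols and coefficients here are hypothetical.

```python
def solve_sum_combiner(a, b, c, d, p):
    """Over GF(p) (p prime), find (l1, l2) with l1*y1 + l2*y2 = s1 + s2 for
    y1 = a*s1 + b*s2 and y2 = c*s1 + d*s2, i.e. solve
    [[a, c], [b, d]] @ [l1, l2] = [1, 1] mod p. Returns None if singular."""
    det = (a * d - b * c) % p
    if det == 0:
        return None
    det_inv = pow(det, p - 2, p)          # Fermat inverse, p prime
    l1 = ((d - c) * det_inv) % p
    l2 = ((a - b) * det_inv) % p
    return l1, l2

# Example over GF(5) with channel gains a, b, c, d = 1, 2, 3, 2.
p, (a, b, c, d) = 5, (1, 2, 3, 2)
l1, l2 = solve_sum_combiner(a, b, c, d, p)
s1, s2 = 3, 4
y1, y2 = (a * s1 + b * s2) % p, (c * s1 + d * s2) % p
assert (l1 * y1 + l2 * y2) % p == (s1 + s2) % p
print("combining coefficients:", l1, l2)
```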


International Symposium on Information Theory | 2009

Linear compressive networks

Naveen Goela; Michael Gastpar

A linear compressive network (LCN) is defined as a graph of sensors in which each encoding sensor compresses incoming jointly Gaussian random signals and transmits (potentially) low-dimensional linear projections to neighbors over a noisy uncoded channel. Each sensor has a maximum power to allocate over signal subspaces. The networks of focus are acyclic, directed graphs with multiple sources and multiple destinations. LCN pathways lead to decoding leaf nodes that estimate linear functions of the original high-dimensional sources by minimizing a mean squared error (MSE) distortion cost function. An iterative optimization of local compressive matrices for all graph nodes is developed using an optimal quadratically constrained quadratic program (QCQP) step. The performance of the optimization is marked by power-compression-distortion spectra, with converse bounds based on cut-set arguments. Examples include single-layer and multi-layer (e.g., p-layer tree cascades, butterfly) networks. The LCN is a generalization of the Karhunen-Loève Transform to noisy multi-layer networks, and extends previous approaches for point-to-point and distributed compression-estimation of Gaussian signals. The framework relates to network coding in the noiseless case, and uncoded transmission in the noisy case.
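A single-sensor slice of this setup is easy to make concrete: one encoder applies a low-dimensional linear projection C to a Gaussian source, the projection passes through an additive-noise channel, and the decoder applies the linear MMSE estimator. The sketch below computes that estimator and its MSE for a fixed, arbitrary C; it covers only the estimation half of the problem, without the iterative QCQP optimization of C or the power allocation described in the abstract.

```python
import numpy as np

def lmmse_through_projection(cov_x, C, cov_w):
    """Linear MMSE estimation of a zero-mean Gaussian source x from
    y = C x + w, with noise covariance cov_w. Returns the estimator K
    (x_hat = K y) and the resulting MSE (trace of the error covariance)."""
    cov_y = C @ cov_x @ C.T + cov_w
    K = cov_x @ C.T @ np.linalg.inv(cov_y)
    mse = np.trace(cov_x - K @ C @ cov_x)
    return K, mse

# Example: 3-dimensional source, 1-dimensional noisy projection (arbitrary C).
cov_x = np.array([[2.0, 1.0, 0.3],
                  [1.0, 2.0, 0.3],
                  [0.3, 0.3, 1.0]])
C = np.array([[1.0, 1.0, 0.0]])           # compress to one dimension
cov_w = np.array([[0.5]])
K, mse = lmmse_through_projection(cov_x, C, cov_w)
print("estimator:\n", K, "\nMSE:", mse)
```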


International Symposium on Information Theory | 2014

Polarized random variables: Maximal correlations and common information

Naveen Goela

New theorems are established regarding polarized Bernoulli random variables: (i) The maximal correlations between polarized Bernoulli variables converge to zero or one, as do the conditional entropy and Bhattacharyya parameters; (ii) The graphical model of polarized Bernoulli variables provides a way to compute pair-wise and higher-order correlations; (iii) The Wyner common information between two sequences of correlated random variables may be extracted using Arikan's polar transform, which leads to a low-complexity solution to the Wyner network. In addition, a joint polarization theorem is provided involving common information.
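The maximal (Hirschfeld-Gebelein-Rényi) correlation of a pair of discrete random variables can be computed as the second-largest singular value of the matrix Q with entries P(x, y) / sqrt(P(x) P(y)). The sketch below implements that standard computation for an arbitrary joint pmf; the example joint distribution is made up for illustration and is not tied to the paper's polarized ensembles.

```python
import numpy as np

def maximal_correlation(joint):
    """HGR maximal correlation of (X, Y) with joint pmf `joint` (2-D array):
    the second-largest singular value of Q[x, y] = P(x, y) / sqrt(P(x) P(y))."""
    joint = np.asarray(joint, dtype=float)
    px = joint.sum(axis=1)
    py = joint.sum(axis=0)
    Q = joint / np.sqrt(np.outer(px, py))
    s = np.linalg.svd(Q, compute_uv=False)   # largest singular value is always 1
    return s[1]

# Example: two correlated Bernoulli variables.
joint = np.array([[0.4, 0.1],
                  [0.1, 0.4]])
print(maximal_correlation(joint))  # for a binary pair this equals |Pearson correlation| = 0.6
```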


International Symposium on Information Theory | 2012

Degrees of freedom of sparsely connected wireless networks

Sang-Woon Jeon; Naveen Goela; Michael Gastpar

We investigate how network connectivity can affect the degrees of freedom (DoF) of wireless networks. We consider a network of n source-destination (SD) pairs and assume that any two nodes are connected with a positive probability p, independently of other node pairs. We show that, for any arbitrarily small p, a constant DoF is achievable for every SD pair with probability approaching one as n tends to infinity. The achievability is based on two-hop transmission with decode-and-forward relaying, with interference alignment adopted over each hop. Considering that the achievable per-user DoF for direct or one-hop transmission can be arbitrarily small as the connectivity probability p decreases, our result shows that, somewhat surprisingly, two-hop transmission is enough to guarantee a non-vanishing per-user DoF for any p; that is, even sparsely connected networks can provide a non-vanishing per-user DoF.
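The role of the relays can be sanity-checked with a small Monte Carlo experiment: in a random graph where each node pair is connected with probability p, the chance that some SD pair lacks even a single common neighbour (a two-hop relay path) vanishes as n grows, even for small p. The sketch below estimates that probability; it checks only two-hop connectivity, not the interference-alignment scheme that actually achieves the DoF, and the pairing convention and parameters are arbitrary.

```python
import numpy as np

def prob_all_pairs_two_hop(n, p, trials=200, seed=0):
    """Monte Carlo estimate of the probability that, in a 2n-node graph with
    i.i.d. edge probability p, every SD pair (source i, destination n+i)
    shares at least one common neighbour, i.e. has a two-hop relay path."""
    rng = np.random.default_rng(seed)
    successes = 0
    for _ in range(trials):
        upper = np.triu(rng.random((2 * n, 2 * n)) < p, 1)
        adj = upper | upper.T
        if all(np.any(adj[i] & adj[n + i]) for i in range(n)):
            successes += 1
    return successes / trials

for n in (10, 40, 160):
    print(n, prob_all_pairs_two_hop(n, p=0.2))   # approaches 1 as n grows
```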

Collaboration


Dive into Naveen Goela's collaborations.

Top Co-Authors

Michael Gastpar
École Polytechnique Fédérale de Lausanne

Ajay Divakaran
Mitsubishi Electric Research Laboratories

Feng Niu
Mitsubishi Electric Research Laboratories

Sang-Woon Jeon
Andong National University