Publication


Featured research published by Mayank Bakshi.


International Symposium on Information Theory | 2013

Reliable deniable communication: Hiding messages in noise

Pak Hou Che; Mayank Bakshi; Sidharth Jaggi

Alice may wish to reliably send a message to Bob over a binary symmetric channel (BSC) while ensuring that her transmission is deniable from an eavesdropper Willie. That is, if Willie observes a “significantly noisier” transmission than Bob does, he should be unable to estimate even whether Alice is transmitting or not. Even when Alice's (potential) communication scheme is publicly known to Willie (with no common randomness between Alice and Bob), we prove that over n channel uses Alice can transmit a message of length O(√n) bits to Bob, deniably from Willie. We also prove that our results are information-theoretically order-optimal.
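
The square-root scaling can be made concrete with a small simulation. The sketch below illustrates the underlying intuition rather than the paper's construction: Willie's natural statistic is the number of 1s he observes, whose standard deviation under pure channel noise is Θ(√n), so a codeword of Hamming weight O(√n) can hide inside that fluctuation. The parameters n, p_w, and the constant in front of √n are arbitrary demo choices.

```python
# Toy illustration of the square-root law (not the paper's construction):
# Willie's natural test statistic is the number of 1s he observes. Under pure
# BSC noise its standard deviation is Theta(sqrt(n)), so a codeword of Hamming
# weight O(sqrt(n)) stays inside the noise floor. Parameters are demo choices.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000                        # channel uses
p_w = 0.1                          # crossover probability of Willie's BSC
weight = int(0.25 * np.sqrt(n))    # Hamming weight of Alice's sparse codeword

def willie_count(transmitting: bool) -> int:
    """Number of 1s Willie observes over n channel uses."""
    x = np.zeros(n, dtype=int)
    if transmitting:
        x[rng.choice(n, size=weight, replace=False)] = 1   # sparse codeword
    noise = (rng.random(n) < p_w).astype(int)              # BSC bit flips
    return int(np.sum(x ^ noise))

silent = [willie_count(False) for _ in range(200)]
active = [willie_count(True) for _ in range(200)]
print(f"silent: mean {np.mean(silent):.0f}, std {np.std(silent):.0f}")
print(f"active: mean {np.mean(active):.0f}, std {np.std(active):.0f}")
# The mean shift, roughly (1 - 2*p_w) * weight, stays below the noise standard
# deviation sqrt(n * p_w * (1 - p_w)); both scale as Theta(sqrt(n)), which is
# why O(sqrt(n)) message bits is the right order for deniable communication.
```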


International Symposium on Information Theory | 2010

Concatenated Polar codes

Mayank Bakshi; Sidharth Jaggi; Michelle Effros

Polar codes have attracted much recent attention as one of the first codes with low computational complexity that provably achieve optimal rate-regions for a large class of information-theoretic problems. One significant drawback, however, is that for current constructions the probability of error decays sub-exponentially in the block-length (more detailed designs improve the probability of error at the cost of significantly increased computational complexity). In this work we show how the classical idea of code concatenation - using “short” polar codes as inner codes and a “high-rate” Reed-Solomon code as the outer code - results in substantially improved performance. In particular, code concatenation with a careful choice of parameters boosts the rate of decay of the probability of error to almost exponential in the block-length with essentially no loss in computational complexity. We demonstrate such performance improvements for three sets of information-theoretic problems - a classical point-to-point channel coding problem, a class of multiple-input multiple-output channel coding problems, and some network source coding problems.
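
The error-exponent improvement from concatenation can be seen with a standard back-of-the-envelope bound. The sketch below uses my own notation (N_o inner blocks of length n_i, each failing with probability p_i, and an outer Reed-Solomon code correcting any t inner-block errors) and is not the paper's exact analysis.

```latex
% Back-of-the-envelope concatenation bound (notation is mine, not the paper's).
% Inner layer: N_o short polar codes of length n_i, each decoded incorrectly
% with probability p_i.  Outer layer: a Reed-Solomon code over the inner
% blocks that corrects any t inner-block errors.  Then
\Pr[\text{overall error}]
  \;\le\; \Pr\!\left[\mathrm{Bin}(N_o, p_i) > t\right]
  \;\le\; \binom{N_o}{t+1} p_i^{\,t+1}
  \;\le\; \left(\frac{e\,N_o\,p_i}{t+1}\right)^{t+1}.
% With t = Theta(N_o) and p_i a sufficiently small constant, the right-hand
% side decays exponentially in N_o, i.e. nearly exponentially in the overall
% block length N = N_o * n_i, while the per-block polar decoding complexity
% is unchanged.
```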


International Symposium on Information Theory | 2008

On achievable rates for multicast in the presence of side information

Mayank Bakshi; Michelle Effros

We investigate the network source coding rate region for networks with multiple sources and multicast demands in the presence of side information, generalizing earlier results on multicast rate regions without side information. When side information is present only at the terminal nodes, we show that the rate region is precisely characterized by the cut-set bounds and that random linear coding suffices to achieve the optimal performance. When side information is present at a non-terminal node, we present an achievable region. Finally, we apply these results to obtain an inner bound on the rate region for networks with general source-demand structures.
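
For intuition, the cut-set bound with receiver side information takes a Slepian-Wolf-like conditional form. The sketch below is a simplified version for the cut separating all sources from a single terminal, in notation of my own choosing, not the paper's statement.

```latex
% Simplified cut-set bound with side information at a terminal (my notation).
% Sources X_1, ..., X_k are multicast to terminals; terminal t also observes
% side information Y_t.  For the cut separating all sources from t, the rates
% R_e on the edges crossing the cut must satisfy
\sum_{e \in \mathrm{cut}} R_e \;\ge\; H\!\left(X_1, \dots, X_k \,\middle|\, Y_t\right),
% a Slepian-Wolf style conditional version of the usual cut-set bound; the
% abstract states that with side information only at the terminal nodes such
% bounds characterize the rate region and random linear codes achieve them.
```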


Information Theory Workshop | 2014

Reliable deniable communication with channel uncertainty

Pak Hou Che; Mayank Bakshi; Chung Chan; Sidharth Jaggi

Alice wishes to potentially communicate with Bob over a compound Binary Symmetric Channel while Willie listens in over a compound Binary Symmetric Channel that is noisier than Bob's. The channel noise parameters for both Bob and Willie are drawn according to a uniform distribution over a range, but none of the three parties know their exact values. Willie's goal is to infer whether or not Alice is communicating with Bob. We show that Alice can send her messages reliably to Bob while ensuring that even whether or not she is actively communicating is deniable to Willie. We find the best rate at which Alice can communicate both deniably and reliably using Shannon's random coding, and prove a converse.


Allerton Conference on Communication, Control, and Computing | 2012

SHO-FA: Robust compressive sensing with order-optimal complexity, measurements, and bits

Mayank Bakshi; Sidharth Jaggi; Sheng Cai; Minghua Chen

Suppose x is any exactly k-sparse vector in R^n. We present a class of “sparse” matrices A, and a corresponding algorithm that we call SHO-FA (for Short and Fast) that, with high probability over A, can reconstruct x from Ax. The SHO-FA algorithm is related to the Invertible Bloom Lookup Tables (IBLTs) recently introduced by Goodrich et al., with two important distinctions - SHO-FA relies on linear measurements, and is robust to noise and approximate sparsity. The SHO-FA algorithm is the first to simultaneously have the following properties: (a) it requires only O(k) measurements, (b) the bit-precision of each measurement and each arithmetic operation is O(log(n) + P) (here 2^(-P) corresponds to the desired relative error in the reconstruction of x), (c) the computational complexity of decoding is O(k) arithmetic operations, and (d) if the reconstruction goal is simply to recover a single component of x instead of all of x, with high probability over A this can be done in constant time. All constants above are independent of all problem parameters other than the desired probability of success. For a wide range of parameters these properties are information-theoretically order-optimal. In addition, our SHO-FA algorithm is robust to random noise, and (random) approximate sparsity for a large range of k. In particular, suppose the measured vector equals A(x + z) + e, where z and e correspond respectively to the source tail and measurement noise. Under reasonable statistical assumptions on z and e our decoding algorithm reconstructs x with an estimation error of C(∥z∥_1 + (log k)^2 ∥e∥_1). The SHO-FA algorithm works with high probability over A, z, and e, and still requires only O(k) steps and O(k) measurements over O(log(n))-bit numbers. This is in contrast to most existing algorithms, which focus on the “worst-case” z model, where it is known that Ω(k log(n/k)) measurements over O(log(n))-bit numbers are necessary.
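
The flavor of IBLT-style decoding can be conveyed with a short noiseless peeling sketch. This is not the SHO-FA construction itself (which adds verification and phase measurements for robustness to noise and approximate sparsity); the function name sho_fa_like_decode, the choice of d = 3 buckets per index, and the two-sums-per-bucket singleton test are illustrative choices of mine.

```python
# Minimal noiseless peeling sketch in the spirit of IBLT-style decoding.
# Each nonzero of x hashes into d buckets; each bucket stores a value sum and
# an index-weighted sum, so a bucket touched by exactly one unresolved nonzero
# ("singleton") reveals both its index and value, which is then peeled off.
import random
from collections import defaultdict

def sho_fa_like_decode(x_sparse, n, num_buckets, d=3, seed=1):
    rng = random.Random(seed)
    # Each index hashes to d distinct buckets (the same hashes define A).
    buckets_of = {j: rng.sample(range(num_buckets), d) for j in range(n)}

    # "Measurements": per bucket, a value sum and an index-weighted sum.
    val = defaultdict(float)
    idx = defaultdict(float)
    for j, xj in x_sparse.items():
        for b in buckets_of[j]:
            val[b] += xj
            idx[b] += j * xj

    recovered = {}
    progress = True
    while progress:                              # peel singletons until stuck
        progress = False
        for b in range(num_buckets):
            if abs(val[b]) < 1e-9:
                continue
            j = idx[b] / val[b]                  # candidate index if singleton
            if abs(j - round(j)) < 1e-9 and 0 <= round(j) < n:
                j = int(round(j))
                if b in buckets_of[j]:           # plausible singleton: peel it
                    xj = val[b]
                    recovered[j] = recovered.get(j, 0.0) + xj
                    for b2 in buckets_of[j]:
                        val[b2] -= xj
                        idx[b2] -= j * xj
                    progress = True
    return recovered

# Example: k = 5 nonzeros in dimension n = 1000; O(k) buckets suffice whp.
x = {17: 2.0, 256: -1.5, 301: 4.0, 512: 0.7, 999: 3.3}
print(sho_fa_like_decode(x, n=1000, num_buckets=15))
```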


International Symposium on Information Theory | 2007

On Network Coding of Independent and Dependent Sources in Line Networks

Mayank Bakshi; Michelle Effros; WeiHsin Gu; Ralf Koetter

We investigate the network coding capacity for line networks. For independent sources and a special class of dependent sources, we fully characterize the capacity region of line networks for all possible demand structures (e.g., multiple unicast, mixtures of unicasts and multicasts, etc.). Our achievability bound is derived by first decomposing a line network into single-demand components and then adding the component rate regions to get rates for the parent network. For general dependent sources, we give an achievability result and provide examples where the result is and is not tight.
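
As a toy illustration of the decomposition approach (my own example, not taken from the paper), consider a three-node line network with independent sources.

```latex
% Toy illustration of the decomposition idea (my example, not the paper's).
% Line network v_1 -> v_2 -> v_3 with independent sources: X available at
% v_1, Y available at v_2, both demanded at v_3.  Solving each single-demand
% component separately and adding the component regions gives the edge rates
R_{12} \ge H(X), \qquad R_{23} \ge H(X) + H(Y),
% which coincide with the cut-set bounds for the parent network; per the
% abstract, this sum of component regions is achievable for independent
% sources under any demand structure.
```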


International Symposium on Information Theory | 2014

Reliable, deniable, and hidable communication over multipath networks

Swanand Kadhe; Sidharth Jaggi; Mayank Bakshi; Alex Sprintson

We consider the scenario wherein a transmitter Alice wants to (potentially) communicate to the intended receiver Bob over a multipath network, i.e., a network consisting of multiple parallel links, in the presence of a passive eavesdropper Willie, who observes an unknown subset of links. A primary goal of our communication protocol is to make the communication “deniable”, i.e., Willie should not be able to reliably estimate whether or not Alice is transmitting any covert information to Bob. Moreover, if Alice is indeed actively communicating, her covert messages should be information-theoretically “hidable” in the sense that Willie's observations should not leak any information about Alice's (potential) message to Bob - our notion of hidability is slightly stronger than the notion of information-theoretic strong secrecy well-studied in the literature. We demonstrate that deniability does not imply either hidability or (weak or strong) information-theoretic secrecy; nor does information-theoretic secrecy imply deniability. We present matching inner and outer bounds on the capacity for deniable and hidable communication over multipath networks.


Information Theory Workshop | 2014

Reliable, deniable and hidable communication: A quick survey

Pak Hou Che; Swanand Kadhe; Mayank Bakshi; Chung Chan; Sidharth Jaggi; Alex Sprintson

We survey here recent work pertaining to “deniable” communication - i.e., talking without being detected. We first highlight connections to related notions such as anonymity and secrecy, and then contrast deniability with secrecy. We also highlight similarities and distinctions between deniability and a variety of related notions (LPD communication, stealth, channel resolvability) extant in the literature.


Information Theory and Applications | 2014

Reliable, deniable and hidable communication

Pak Hou Che; Mayank Bakshi; Chung Chan; Sidharth Jaggi

Alice wishes to potentially communicate covertly with Bob over a Binary Symmetric Channel while Willie the wiretapper listens in over a channel that is noisier than Bob's. We show that Alice can send her messages reliably to Bob while ensuring that even whether or not she is actively communicating is (a) deniable to Willie, and (b) optionally, her message is also hidable from Willie. We consider two different variants of the problem depending on Alice's “default” behavior, i.e., her transmission statistics when she has no covert message to send: 1) when Alice has no covert message, she stays “silent”, i.e., her transmission is 0; 2) when she has no covert message, she transmits “innocently”, i.e., her transmission is drawn uniformly from an innocent random codebook. We prove that the best rate at which Alice can communicate both deniably and hidably in model 1 is O(1/√n). On the other hand, in model 2, Alice can communicate at a constant rate.
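
The gap between the two models is easiest to see as throughput arithmetic; the restatement below uses my own symbol L_n for the number of covert bits sent over n channel uses, and simply unpacks the rates quoted in the abstract.

```latex
% Throughput arithmetic behind the two models (the symbol L_n is mine):
% over n channel uses, let L_n be the number of covert message bits.
\text{Model 1 (silent default):}\quad  L_n = O(\sqrt{n})
  \;\Rightarrow\; \tfrac{L_n}{n} = O\!\big(\tfrac{1}{\sqrt{n}}\big) \to 0,
\qquad
\text{Model 2 (innocent default):}\quad  L_n = \Theta(n)
  \;\Rightarrow\; \tfrac{L_n}{n} = \Theta(1).
% Having an innocent-looking default codebook to hide behind is what buys
% Alice a positive deniable-and-hidable rate instead of a vanishing one.
```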


Allerton Conference on Communication, Control, and Computing | 2013

GROTESQUE: Noisy Group Testing (Quick and Efficient)

Sheng Cai; Mohammad Jahangoshahi; Mayank Bakshi; Sidharth Jaggi

Group-testing refers to the problem of identifying (with high probability) a (small) subset of D defectives from a (large) set of N items via a “small” number of “pooled” tests (i.e., tests that have a positive outcome if even one of the items being tested in the pool is defective, and a negative outcome otherwise). For ease of presentation, in this work we focus on the regime where the number of defectives is sublinear, i.e., D = O(N^(1-δ)) for some δ > 0. The tests may be noiseless or noisy, and the testing procedure may be adaptive (the pool defining a test may depend on the outcome of a previous test) or non-adaptive (each test is performed independently of the outcomes of other tests). A rich body of literature demonstrates that Θ(D log(N)) tests are information-theoretically necessary and sufficient for the group-testing problem, and provides algorithms that achieve this performance. However, it is only recently that reconstruction algorithms with computational complexity that is sub-linear in N have started being investigated (recent work by [1], [2], [3] gave some of the first such algorithms). In the scenario with adaptive tests and noisy outcomes, we present the first scheme that is simultaneously order-optimal (up to small constant factors) in both the number of tests and the decoding complexity (O(D log(N)) in both performance metrics). The total number of stages of our adaptive algorithm is “small” (O(log(D))). Similarly, in the scenario with non-adaptive tests and noisy outcomes, we present the first scheme that is simultaneously near-optimal in both the number of tests and the decoding complexity (via an algorithm that requires O(D log(D) log(N)) tests and has a decoding complexity of O(D(log N + log^2 D))). Finally, we present an adaptive algorithm that only requires 2 stages, and for which both the number of tests and the decoding complexity scale as O(D(log N + log^2 D)). For all three settings the probability of error of our algorithms scales as O(1/poly(D)).
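
For context, the classical non-adaptive baseline already shows why Θ(D log N) tests suffice in the noiseless case. The sketch below implements the standard COMP rule (an item appearing in any negative test cannot be defective) and is explicitly not the GROTESQUE algorithm, whose decoder runs in time sublinear in N; the Bernoulli(1/D) pooling design, the constant 6, and the function run_comp are illustrative choices.

```python
# Baseline non-adaptive group-testing sketch using the classic COMP rule
# ("an item that appears in any negative test cannot be defective").  This is
# NOT the GROTESQUE algorithm, whose decoder runs in time sublinear in N; it
# only illustrates why Theta(D log N) random pooled tests suffice noiselessly.
import math
import random

def run_comp(N, D, defectives, num_tests, seed=0):
    rng = random.Random(seed)
    p = 1.0 / D                                  # per-test inclusion probability
    pools = [{i for i in range(N) if rng.random() < p} for _ in range(num_tests)]
    outcomes = [bool(pool & defectives) for pool in pools]   # OR of pooled items

    candidates = set(range(N))
    for pool, positive in zip(pools, outcomes):
        if not positive:
            candidates -= pool                   # items in a negative test are clean
    return candidates                            # whp exactly the defective set

N, D = 10_000, 10
defectives = set(random.Random(1).sample(range(N), D))
T = int(6 * D * math.log(N))                     # Theta(D log N) tests
estimate = run_comp(N, D, defectives, T)
print(len(estimate), estimate == defectives)
```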

Collaboration


Dive into Mayank Bakshi's collaborations.

Top Co-Authors

Sidharth Jaggi, The Chinese University of Hong Kong
Sheng Cai, The Chinese University of Hong Kong
Michelle Effros, California Institute of Technology
Minghua Chen, The Chinese University of Hong Kong
Qiaosheng Eric Zhang, The Chinese University of Hong Kong
Pak Hou Che, The Chinese University of Hong Kong
Chung Chan, The Chinese University of Hong Kong
Chun Lam Chan, The Chinese University of Hong Kong