Sheng Cai
The Chinese University of Hong Kong
Publication
Featured research published by Sheng Cai.
allerton conference on communication, control, and computing | 2012
Mayank Bakshi; Sidharth Jaggi; Sheng Cai; Minghua Chen
Suppose x is any exactly k-sparse vector in ℝⁿ. We present a class of “sparse” matrices A, and a corresponding algorithm that we call SHO-FA (for Short and Fast) that, with high probability over A, can reconstruct x from Ax. The SHO-FA algorithm is related to the Invertible Bloom Lookup Tables (IBLTs) recently introduced by Goodrich et al., with two important distinctions: SHO-FA relies on linear measurements, and is robust to noise and approximate sparsity. The SHO-FA algorithm is the first to simultaneously have the following properties: (a) it requires only O(k) measurements, (b) the bit-precision of each measurement and each arithmetic operation is O(log(n) + P) (here 2^(-P) corresponds to the desired relative error in the reconstruction of x), (c) the computational complexity of decoding is O(k) arithmetic operations, and (d) if the reconstruction goal is simply to recover a single component of x instead of all of x, with high probability over A this can be done in constant time. All constants above are independent of all problem parameters other than the desired probability of success. For a wide range of parameters these properties are information-theoretically order-optimal. In addition, our SHO-FA algorithm is robust to random noise and (random) approximate sparsity for a large range of k. In particular, suppose the measured vector equals A(x + z) + e, where z and e correspond respectively to the source tail and measurement noise. Under reasonable statistical assumptions on z and e, our decoding algorithm reconstructs x with an estimation error of C(∥z∥₁ + (log k)^2 ∥e∥₁). The SHO-FA algorithm works with high probability over A, z, and e, and still requires only O(k) steps and O(k) measurements over O(log(n))-bit numbers. This is in contrast to most existing algorithms, which focus on the “worst-case” z model, where it is known that Ω(k log(n/k)) measurements over O(log(n))-bit numbers are necessary.
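The core of SHO-FA's O(k)-time decoding is IBLT-style peeling: each measurement bucket stores a plain sum and an index-weighted sum of the coordinates hashed into it, a bucket holding a single unresolved coordinate (a "singleton") reveals that coordinate's index and value, and the recovered value is then cancelled out of its other buckets, possibly exposing new singletons. The Python sketch below illustrates only this idea; the two-buckets-per-coordinate assignment, the ratio-based singleton test, and all names are illustrative simplifications, not the paper's actual construction.

```python
def measure(x, assign, m):
    """Hash each nonzero coordinate j into the buckets assign[j]; every bucket
    keeps a plain sum and an index-weighted sum of the values it receives."""
    s, w = [0.0] * m, [0.0] * m
    for j, v in enumerate(x):
        if v:
            for b in assign[j]:
                s[b] += v
                w[b] += (j + 1) * v  # 1-based weight so index 0 stays detectable
    return s, w

def peel(s, w, assign, n):
    """Repeatedly find a singleton bucket (its weighted/plain ratio is an
    integer index that hashes back to this bucket), read off its value,
    then cancel that coordinate out of all of its buckets."""
    s, w = list(s), list(w)
    x_hat, progress = {}, True
    while progress:
        progress = False
        for b in range(len(s)):
            if abs(s[b]) < 1e-12:
                continue
            r = w[b] / s[b]
            j = round(r) - 1
            if abs(r - round(r)) < 1e-9 and 0 <= j < n \
                    and b in assign.get(j, ()) and j not in x_hat:
                x_hat[j] = s[b]
                for bb in assign[j]:
                    s[bb] -= x_hat[j]
                    w[bb] -= (j + 1) * x_hat[j]
                progress = True
    return x_hat

# Hand-picked 3-sparse example whose bucket graph forms a chain, so peeling
# resolves exactly one coordinate per step.
x = [0.0] * 10
x[2], x[5], x[7] = 3.0, 2.0, 5.0
assign = {2: [0, 1], 5: [1, 2], 7: [2, 3]}
s, w = measure(x, assign, 4)
assert peel(s, w, assign, 10) == {2: 3.0, 5: 2.0, 7: 5.0}
```

In the real construction the bucket assignments come from a carefully designed sparse bipartite graph and the singleton test is robust to noise; this toy ratio test can be fooled by adversarial values.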
allerton conference on communication, control, and computing | 2013
Sheng Cai; Mohammad Jahangoshahi; Mayank Bakshi; Sidharth Jaggi
Group-testing refers to the problem of identifying (with high probability) a (small) subset of D defectives from a (large) set of N items via a “small” number of “pooled” tests (i.e., tests have a positive outcome if even one of the items being tested in the pool is defective, else they have a negative outcome). For ease of presentation, in this work we focus on the regime where the number of defectives is sublinear, i.e., D = O(N^(1-δ)) for some δ > 0. The tests may be noiseless or noisy, and the testing procedure may be adaptive (the pool defining a test may depend on the outcome of a previous test) or non-adaptive (each test is performed independently of the outcomes of other tests). A rich body of literature demonstrates that Θ(D log(N)) tests are information-theoretically necessary and sufficient for the group-testing problem, and provides algorithms that achieve this performance. However, it is only recently that reconstruction algorithms with computational complexity that is sub-linear in N have started being investigated (recent work by [1], [2], [3] gave some of the first such algorithms). In the scenario with adaptive tests with noisy outcomes, we present the first scheme that is simultaneously order-optimal (up to small constant factors) in both the number of tests and the decoding complexity (O(D log(N)) for both performance metrics). The total number of stages of our adaptive algorithm is “small” (O(log(D))). Similarly, in the scenario with non-adaptive tests with noisy outcomes, we present the first scheme that is simultaneously near-optimal in both the number of tests and the decoding complexity (via an algorithm that requires O(D log(D) log(N)) tests and has a decoding complexity of O(D(log N + log^2 D))). Finally, we present an adaptive algorithm that requires only 2 stages, and for which both the number of tests and the decoding complexity scale as O(D(log N + log^2 D)). For all three settings the probability of error of our algorithms scales as O(1/poly(D)).
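For intuition on the disjunctive test model, here is a toy noiseless simulation using the classical COMP decoder (declare non-defective any item that appears in a negative pool, and defective everything that survives). COMP is a standard textbook baseline, not the sublinear-time scheme of the paper, and the pool count and sizes below are arbitrary choices.

```python
import random

def run_pools(pools, defectives):
    """Noiseless disjunctive model: a pool tests positive iff it contains
    at least one defective item."""
    return [any(i in defectives for i in pool) for pool in pools]

def comp_decode(pools, outcomes, n):
    """COMP rule: any item appearing in at least one negative pool is
    non-defective; whatever survives is declared (possibly) defective."""
    suspects = set(range(n))
    for pool, positive in zip(pools, outcomes):
        if not positive:
            suspects -= set(pool)
    return suspects

n, defectives = 100, {3, 42, 77}
rng = random.Random(0)
pools = [rng.sample(range(n), 25) for _ in range(60)]
decoded = comp_decode(pools, run_pools(pools, defectives), n)
# COMP never misses a true defective (no pool containing one is negative),
# though it may keep a few false positives.
assert defectives <= decoded
```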
international symposium on information theory | 2014
Sheng Cai; Mayank Bakshi; Sidharth Jaggi; Minghua Chen
Compressive phase retrieval algorithms attempt to reconstruct a “sparse high-dimensional vector” from its “low-dimensional intensity measurements”. Suppose x is any length-n input vector over ℂ with exactly k non-zero entries, and A is an m × n (k < m ≪ n) phase measurement matrix over ℂ. The decoder is handed m “intensity measurements” (|A_1x|, ..., |A_mx|) (corresponding to component-wise absolute values of the linear measurement Ax); here the A_i correspond to the rows of the measurement matrix A. In this work, we present a class of measurement matrices A, and a corresponding decoding algorithm that we call SUPER, which can reconstruct x up to a global phase from intensity measurements. The SUPER algorithm is the first to simultaneously have the following properties: (a) it requires only O(k) (order-optimal) measurements, (b) the computational complexity of decoding is O(k log k) (near order-optimal) arithmetic operations, and (c) it succeeds with high probability over the design of A. Our results hold for all k ∈ {1, 2, ..., n}.
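The “up to a global phase” caveat can be seen directly: multiplying x by e^(iθ) leaves every intensity |A_i x| unchanged, so no decoder can distinguish the two inputs from intensity measurements alone. A small self-contained check (the matrix, vector, and angle are chosen arbitrarily):

```python
import cmath

def intensities(A, x):
    """Component-wise magnitudes |A_i x| of the linear measurement Ax."""
    return [abs(sum(a * xi for a, xi in zip(row, x))) for row in A]

A = [[1, 2j, 0], [0.5, 1, 1j], [1j, 0, 3]]
x = [1 + 1j, 0, 2 - 1j]
theta = 0.7
x_rot = [cmath.exp(1j * theta) * xi for xi in x]  # global phase rotation of x

# |A_i (e^(i*theta) x)| = |e^(i*theta)| * |A_i x| = |A_i x| for every row.
assert all(abs(a - b) < 1e-9
           for a, b in zip(intensities(A, x), intensities(A, x_rot)))
```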
communication systems and networks | 2014
Sheng Cai; Mayank Bakshi; Sidharth Jaggi; Minghua Chen
We study the problem of link and node delay estimation in undirected networks when at most k out of n links or nodes in the network are congested. Our approach relies on end-to-end measurements of path delays across pre-specified paths in the network. We present a class of algorithms that we call FRANTIC. The FRANTIC algorithms are motivated by compressive sensing; however, unlike traditional compressive sensing, the measurement design here is constrained by the network topology and the matrix entries are constrained to be positive integers. A key component of our design is a new compressive sensing algorithm SHO-FA-INT that is related to the SHO-FA algorithm [1] for compressive sensing, but unlike SHO-FA, the matrix entries here are drawn from the set of integers {0, 1, ..., M}. We show that O(k log n/log M) measurements suffice both for SHO-FA-INT and FRANTIC. Further, we show that the computational complexity of decoding is also O(k log n/log M) for each of these algorithms. Finally, we look at efficient constructions of the measurement operations through Steiner trees.
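To make the measurement model concrete: each end-to-end measurement is the sum of the delays of the links on a pre-specified path, and congestion shows up as excess delay over a known baseline. The sketch below brute-forces only the k = 1 case for illustration; it is not the FRANTIC/SHO-FA-INT decoder, and the tiny topology is made up.

```python
def path_delays(paths, link_delay):
    """End-to-end delay of a path = sum of the delays of its links."""
    return [sum(link_delay[l] for l in p) for p in paths]

def locate_single_congested(paths, excess, links):
    """Brute force for k = 1: link l with extra delay d explains the data iff
    every path through l shows excess exactly d > 0 and every other path shows
    none.  (Links traversed by identical path sets remain ambiguous.)"""
    for l in links:
        on = [e for p, e in zip(paths, excess) if l in p]
        off = [e for p, e in zip(paths, excess) if l not in p]
        if on and on[0] > 0 and all(e == on[0] for e in on) \
                and all(e == 0 for e in off):
            return l, on[0]
    return None

paths = [{"a", "b"}, {"b", "c"}, {"c", "d"}]          # three measurement paths
baseline = {"a": 1, "b": 1, "c": 1, "d": 1}
congested = dict(baseline, b=3)                        # link "b" gains 2 units
excess = [o - b for o, b in zip(path_delays(paths, congested),
                                path_delays(paths, baseline))]
assert locate_single_congested(paths, excess, "abcd") == ("b", 2)
```

The FRANTIC algorithms avoid this combinatorial search entirely; the point here is only the linear path-delay measurement model they operate on.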
information theory workshop | 2013
Chun Lam Chan; Sheng Cai; Mayank Bakshi; Sidharth Jaggi; Venkatesh Saligrama
We formulate and analyze a stochastic threshold group testing problem motivated by biological applications. Here a set of n items contains a subset of d ≪ n defective items. Subsets (pools) of the n items are tested. The test outcomes are negative if the number of defectives in a pool is no larger than l, positive if the pool contains more than u defectives, and stochastic (negative/positive with some probability) if the number of defectives in the pool is in the interval [l, u]. The goal of our stochastic threshold group testing scheme is to identify the set of d defective items via a “small” number of such tests with high probability. In the regime where l = o(d), we present schemes that are computationally feasible to design and implement, and require a near-optimal number of tests. Our schemes are robust to a variety of models for probabilistic threshold group testing.
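The outcome model described above can be written down directly. In the sketch below the two deterministic branches follow the abstract; the increasing ramp used for pools falling in the stochastic gap is purely an assumed placeholder, since the abstract leaves that distribution open.

```python
import random

def threshold_test(pool, defectives, l, u, rng):
    """Stochastic threshold test: negative when the pool holds at most l
    defectives, positive when it holds more than u, and random in between.
    The ramp probability in the gap is an illustrative assumption only."""
    d = len(set(pool) & defectives)
    if d <= l:
        return False
    if d > u:
        return True
    # Gap regime: more defectives -> more likely positive (assumed model).
    return rng.random() < (d - l) / (u - l + 1)

rng = random.Random(0)
defectives = {1, 2, 3, 4}
assert threshold_test({1, 9}, defectives, l=1, u=3, rng=rng) is False      # d = 1 <= l
assert threshold_test({1, 2, 3, 4}, defectives, l=1, u=3, rng=rng) is True  # d = 4 > u
```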
IEEE Transactions on Information Theory | 2016
Mayank Bakshi; Sidharth Jaggi; Sheng Cai; Minghua Chen
Suppose x is any exactly k-sparse vector in ℝⁿ. …

arXiv: Information Theory | 2013
Chun Lam Chan; Sheng Cai; Mayank Bakshi; Sidharth Jaggi

arXiv: Information Theory | 2012
Sheng Cai; Jihang Ye; Minghua Chen; Jianxin Yan; Sidharth Jaggi

Archive | 2013
Sheng Cai; Mayank Bakshi; Sidharth Jaggi; Minghua Chen

IEEE Transactions on Information Theory | 2017
Sheng Cai; Mohammad Jahangoshahi; Mayank Bakshi; Sidharth Jaggi