Publications


Featured research published by Manqi Zhao.


IEEE Transactions on Information Theory | 2010

Information Theoretic Bounds for Compressed Sensing

Shuchin Aeron; Venkatesh Saligrama; Manqi Zhao

In this paper, we derive information theoretic performance bounds for sensing and reconstruction of sparse phenomena from noisy projections. We consider two settings: output noise models, where the noise enters after the projection, and input noise models, where the noise enters before the projection. We consider two types of distortion for reconstruction: support errors and mean-squared errors. Our goal is to relate the number of measurements, m, and the SNR to the signal sparsity, k, the distortion level, d, and the signal dimension, n. We consider support errors in a worst-case setting. We employ different variations of Fano's inequality to derive necessary conditions on the number of measurements and SNR required for exact reconstruction. To derive sufficient conditions, we develop new insights on max-likelihood analysis based on a novel superposition property. In particular, this property implies that small support errors are the dominant error events. Consequently, our ML analysis does not suffer the conservatism of the union bound and leads to a tighter analysis of max-likelihood decoding. These results provide order-wise tight bounds. For output noise models, we show that asymptotically an SNR of Θ(log(n)) together with Θ(k log(n/k)) measurements is necessary and sufficient for exact support recovery. Furthermore, if a small fraction of support errors can be tolerated, a constant SNR turns out to be sufficient in the linear sparsity regime. In contrast, for input noise models, we show that support recovery fails if the number of measurements scales as o(n log(n)/SNR), implying poor compression performance for such cases. Motivated by the fact that the worst-case setup requires significantly high SNR and a substantial number of measurements for input and output noise models, we consider a Bayesian setup. To derive necessary conditions, we develop novel extensions to Fano's inequality to handle continuous domains and arbitrary distortions. We then develop a new max-likelihood analysis over the set of rate-distortion quantization points to characterize tradeoffs between mean-squared distortion and the number of measurements using rate-distortion theory. We show that with constant SNR the number of measurements scales linearly with the rate-distortion function of the sparse phenomena.


Asilomar Conference on Signals, Systems and Computers | 2006

Fundamental Tradeoffs between Sparsity, Sensing Diversity and Sensing Capacity

Shuchin Aeron; Manqi Zhao; Venkatesh Saligrama

A fundamental problem in sensor networks is to determine the sensing capacity, i.e., the minimum number of sensors required to monitor a given region to a desired degree of fidelity based on noisy sensor data. This question has direct bearing on the corresponding coverage problem, wherein the task is to determine the maximum coverage region with a given set of sensors. In this paper we show that sensing capacity is a function of SNR, sparsity (the inherent complexity/dimensionality of the underlying signal/information space and its frequency of occurrence), and sensing diversity, i.e., the number of independent paths from the underlying signal space to the multiple sensors. We derive fundamental tradeoffs between SNR, sparsity, diversity, and capacity. We show that the capacity is a monotonic function of SNR and diversity. A surprising result is that as sparsity approaches zero, so does the sensing capacity, irrespective of diversity. This implies, for instance, that reliably monitoring a small number of targets in a given region requires a disproportionately large number of sensors.


2007 IEEE/SP 14th Workshop on Statistical Signal Processing | 2007

On sensing capacity of sensor networks for a class of linear observation models

Shuchin Aeron; Manqi Zhao; Venkatesh Saligrama

In this paper we derive fundamental information theoretic upper and lower bounds on the sensing capacity of sensor networks for several classes of linear observation models under fixed SNR. We define sensing capacity as the number of signal dimensions that can be reliably identified per sensor measurement. The signal sparsity plays an important role in this context. First, we derive lower bounds on the probability of error by extending Fano's inequality to handle arbitrary distortion in reconstruction and continuous signal spaces. It turns out that a necessary condition for signal reconstruction to within an average distortion level is that the rate-distortion function at the given level of sparsity be less than the mutual information between the signal and the observations. Through a suitable expansion of the mutual information term we isolate the effect of the structure of the sensing matrix on sensing capacity. Subsequently, we analyze this effect for several interesting classes of sensing matrices that arise naturally in the context of sensor networks under different scenarios. First, we show the effect of sensing diversity, which is related to the field coverage per sensor, on sensing capacity for random ensembles of sensing matrices. We show that low diversity implies low sensing capacity. However, sufficiently large diversity can be traded off for SNR and signal sparsity. Then we consider deterministic sensing matrices and evaluate a general upper bound on sensing capacity. As a special case, we show that a random LTI filter-type structure suffers from low diversity.


International Conference on Acoustics, Speech, and Signal Processing | 2010

On compressed blind de-convolution of filtered sparse processes

Manqi Zhao; Venkatesh Saligrama

Suppose the signal x ∈ ℝn is realized by driving a k-sparse signal z ∈ ℝn through an arbitrary unknown stable discrete-linear time invariant system H, namely, x(t) = (h * z)(t), where h(·) is the impulse response of the operator H. Is x(·) compressible in the conventional sense of compressed sensing? Namely, can x(t) be reconstructed from a small set of measurements obtained through suitable random projections? For the case when the unknown system H is auto-regressive (i.e., all-pole) of a known order, it turns out that x can indeed be reconstructed from O(k log(n)) measurements. We develop a novel LP optimization algorithm and show that both the unknown filter H and the sparse input z can be reliably estimated.
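The paper's LP jointly estimates the filter H and the sparse input z; as a rough illustration of the underlying ℓ1/LP machinery, here is a hedged sketch of plain (non-blind) basis pursuit recovery of a k-sparse signal from O(k log n) random projections. All names, dimensions, and the Gaussian measurement choice are illustrative assumptions, not details from the paper.

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(A, y):
    """Solve min ||w||_1 s.t. A w = y via the standard LP split w = u - v, u, v >= 0."""
    m, n = A.shape
    c = np.ones(2 * n)                 # objective: sum(u) + sum(v) = ||w||_1
    A_eq = np.hstack([A, -A])          # equality constraint: A(u - v) = y
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None), method="highs")
    u, v = res.x[:n], res.x[n:]
    return u - v

rng = np.random.default_rng(0)
n, k, m = 64, 3, 24                    # ambient dimension, sparsity, measurements ~ O(k log n)
z = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
z[support] = rng.choice([-1.0, 1.0], size=k) * (1 + rng.random(k))  # entries bounded away from 0
A = rng.normal(size=(m, n)) / np.sqrt(m)   # Gaussian random projections
y = A @ z                              # noiseless measurements
z_hat = basis_pursuit(A, y)
print("max abs recovery error:", np.max(np.abs(z_hat - z)))
```

In the noiseless, non-blind setting this LP typically recovers z exactly once m exceeds a constant multiple of k log(n/k), which is the regime the abstract's O(k log(n)) count refers to.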


Information Theory and Applications | 2008

Algorithms and bounds for sensing capacity and compressed sensing with applications to learning graphical models

Shuchin Aeron; Manqi Zhao; Venkatesh Saligrama

We consider the problem of recovering sparse phenomena from projections of noisy data, a topic of interest in compressed sensing. We describe the problem in terms of sensing capacity, which we define as the supremum of the number of signal dimensions that can be reliably identified per projection. This notion quantifies the minimum number of observations required to estimate a signal as a function of the sensing channel, SNR, the sensed environment (sparsity), as well as the desired distortion up to which the sensed phenomena must be reconstructed. We first present bounds for two different sensing channels: (A) i.i.d. Gaussian observations, and (B) correlated observations. We then extend the results derived for the correlated case to the problem of learning sparse graphical models. We then present convex programming methods for the different distortions in the correlated case, and comment on the differences between the achievable bounds and the performance of these convex programming methods.


Allerton Conference on Communication, Control, and Computing | 2009

Outlier detection via localized p-value estimation

Manqi Zhao; Venkatesh Saligrama

We propose a novel non-parametric adaptive outlier detection algorithm, called LPE, for high dimensional data, based on score functions derived from nearest neighbor graphs on n-point nominal data. Outliers are declared whenever the score of a test sample falls below α, the desired false alarm level. The resulting outlier detector is shown to be asymptotically optimal in that it is uniformly most powerful for the specified false alarm level, α, when the density associated with the outliers is a mixture of the nominal and a known density. Our algorithm is computationally efficient, being linear in dimension and quadratic in data size. The whole empirical Receiver Operating Characteristic (ROC) curve can be derived at almost no additional cost from the estimated score function. The algorithm does not require choosing complicated tuning parameters or function approximation classes, and it can adapt to local structure, such as local changes in dimensionality, by incorporating manifold learning techniques. We demonstrate the algorithm on both artificial and real data sets in high dimensional feature spaces.
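To convey the flavor of the score-function idea, here is a hedged, simplified sketch of a nearest-neighbor p-value estimate: a test point's score is the fraction of nominal samples that are at least as "spread out" (in k-NN radius) as the test point, and scores below α flag outliers. The actual LPE algorithm uses more refined score functions on nearest neighbor graphs; every name, constant, and distribution below is an illustrative assumption.

```python
import numpy as np

def knn_radius(points, x, k):
    """Distance from x to its k-th nearest neighbor among `points`."""
    d = np.sort(np.linalg.norm(points - x, axis=1))
    return d[k - 1]

def lpe_score(nominal, x, k=5):
    """Empirical p-value: fraction of nominal samples whose own k-NN radius
    (within the remaining nominal data) is at least as large as x's.
    Quadratic in the data size, matching the abstract's complexity claim."""
    r_x = knn_radius(nominal, x, k)
    radii = np.array([knn_radius(np.delete(nominal, i, axis=0), nominal[i], k)
                      for i in range(len(nominal))])
    return np.mean(radii >= r_x)

rng = np.random.default_rng(1)
nominal = rng.normal(size=(200, 5))      # nominal data in R^5
inlier = rng.normal(size=5) * 0.1        # near the bulk of the nominal density
outlier = np.full(5, 6.0)                # far from the bulk
alpha = 0.05                             # desired false alarm level
print("inlier score:", lpe_score(nominal, inlier))
print("outlier score:", lpe_score(nominal, outlier))
```

Points deep in the nominal density get large scores (many nominal samples are more spread out), while far-away points get scores near zero and fall below α.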


Proceedings of SPIE | 2009

Compressed sensing of autoregressive processes

Venkatesh Saligrama; Manqi Zhao

Suppose the signal x ∈ Rn is realized by driving a k-sparse signal z ∈ Rn through an arbitrary unknown stable discrete-linear time invariant system H, namely, x(t) = (h * z)(t), where h(·) is the impulse response of the operator H. Is x(·) compressible in the conventional sense of compressed sensing? Namely, can x(t) be reconstructed from a small set of measurements? For the case when the unknown system H is auto-regressive (i.e., all-pole) of a known order, it turns out that x can indeed be reconstructed from O(k log(n)) measurements. The main idea is to pass x through a linear time invariant system G and collect O(k log(n)) sequential measurements. The filter G is chosen suitably, namely, its associated Toeplitz matrix satisfies the RIP property. We develop a novel LP optimization algorithm and show that both the unknown filter H and the sparse input z can be reliably estimated. These types of processes arise naturally in reflection seismology.
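The measurement scheme above (sequential outputs of an LTI filter G) is exactly a Toeplitz-structured sensing matrix. A hedged sketch of that equivalence, with an illustrative short random FIR filter rather than the paper's specific RIP-certified construction:

```python
import numpy as np

rng = np.random.default_rng(2)
n, L, m = 128, 16, 48
x = rng.normal(size=n)                 # signal to be measured
g = rng.normal(size=L) / np.sqrt(m)    # short random FIR filter G (illustrative choice)

# Sequential measurements y(t) = (g * x)(t): keep m consecutive filter outputs.
y = np.convolve(x, g, mode="valid")[:m]

# The same measurement map written as an explicit (partial) Toeplitz matrix:
# row i contains g reversed, sliding one sample per row.
T = np.zeros((m, n))
for i in range(m):
    T[i, i : i + L] = g[::-1]
print(np.allclose(T @ x, y))           # True
```

Because the m rows are shifts of a single filter, the sensor only needs to store g and read out consecutive samples, rather than storing a dense m × n random matrix.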


conference on decision and control | 2010

Noisy filtered sparse processes: Reconstruction and compression

Manqi Zhao; Venkatesh Saligrama

In this paper we consider estimation and compression of filtered sparse processes. Specifically, the filtered sparse process is a signal x ∈ ℝn obtained by driving a k-sparse signal u ∈ ℝn through an arbitrary unknown stable discrete-linear time invariant system H of a known order. The signal x(t) is measured noisily. We consider estimation of x(t) from noisy measurements. We also consider compression of x(t) by means of random projections analogous to compressed sensing. For different cases including AR and MA systems we show that x can indeed be reconstructed from O(k log(n)) measurements. We develop a novel LP optimization algorithm and show that both the unknown filter H and the sparse input u can be reliably estimated.


Neural Information Processing Systems | 2009

Anomaly Detection with Score functions based on Nearest Neighbor Graphs

Manqi Zhao; Venkatesh Saligrama


Archive | 2008

Thresholded Basis Pursuit: Quantizing Linear Programming Solutions for Optimal Support Recovery and Approximation in Compressed Sensing

Venkatesh Saligrama; Manqi Zhao

Collaboration


Dive into Manqi Zhao's collaborations.

Top Co-Authors


Ye Wang

Mitsubishi Electric Research Laboratories
