

Publication


Featured research published by Junan Zhu.


Conference on Information Sciences and Systems | 2013

Performance regions in compressed sensing from noisy measurements

Junan Zhu; Dror Baron

In this paper, compressed sensing with noisy measurements is addressed. The theoretically optimal reconstruction error is studied by evaluating Tanaka's equation. The main contribution is to show that the reconstruction error behaves differently in several regions characterized by different measurement rates and noise levels. This paper also evaluates the performance of the belief propagation (BP) signal reconstruction method in the regions discovered. When the measurement rate and the noise level lie in a certain region, BP is suboptimal with respect to Tanaka's equation, and it may be possible to develop reconstruction algorithms with lower error in that region.
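The noisy measurement setup studied in this paper can be sketched in a few lines; the dimensions, sparsity level, and noise standard deviation below are hypothetical choices for illustration, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Problem dimensions: N-length sparse input, M noisy linear measurements.
N, M, k = 1000, 400, 50                        # k nonzeros -> sparsity rate k/N
x = np.zeros(N)
x[rng.choice(N, k, replace=False)] = rng.standard_normal(k)

A = rng.standard_normal((M, N)) / np.sqrt(M)   # i.i.d. Gaussian measurement matrix
sigma = 0.1                                    # noise standard deviation
y = A @ x + sigma * rng.standard_normal(M)     # noisy measurements

R = M / N                                      # measurement rate
print(f"measurement rate R = {R:.2f}, noise variance = {sigma**2:.4f}")
```

The paper's performance regions are parameterized by exactly these two quantities: the measurement rate R and the noise level sigma.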


IEEE Transactions on Signal Processing | 2015

Recovery From Linear Measurements With Complexity-Matching Universal Signal Estimation

Junan Zhu; Dror Baron; Marco F. Duarte

We study the compressed sensing (CS) signal estimation problem where an input signal is measured via a linear matrix multiplication under additive noise. While this setup usually assumes sparsity or compressibility in the input signal during recovery, the signal structure that can be leveraged is often not known a priori. In this paper, we consider universal CS recovery, where the statistics of a stationary ergodic signal source are estimated simultaneously with the signal itself. Inspired by Kolmogorov complexity and minimum description length, we focus on a maximum a posteriori (MAP) estimation framework that leverages universal priors to match the complexity of the source. Our framework can also be applied to general linear inverse problems where more measurements than in CS might be needed. We provide theoretical results that support the algorithmic feasibility of universal MAP estimation using a Markov chain Monte Carlo implementation, which is computationally challenging. We incorporate some techniques to accelerate the algorithm while providing comparable and in many cases better reconstruction quality than existing algorithms. Experimental results show the promise of universality in CS, particularly for low-complexity sources that do not exhibit standard sparsity or compressibility.
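The paper implements universal MAP estimation with Markov chain Monte Carlo. A heavily simplified sketch of the idea follows, using single-site Metropolis moves over a hypothetical two-letter alphabet and the empirical entropy of the estimate as a crude stand-in for a universal coding-length prior (not the paper's actual prior or sampler):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy linear model y = A x + noise, with x drawn from a two-letter alphabet.
N, M = 60, 40
x_true = rng.integers(2, size=N).astype(float)
A = rng.standard_normal((M, N)) / np.sqrt(M)
sigma = 0.05
y = A @ x_true + sigma * rng.standard_normal(M)

def neg_log_post(x):
    # Data fit plus empirical entropy (nats) as a rough complexity penalty.
    fit = np.sum((y - A @ x) ** 2) / (2 * sigma ** 2)
    _, counts = np.unique(x, return_counts=True)
    p = counts / len(x)
    complexity = -len(x) * np.sum(p * np.log(p))
    return fit + complexity

# Metropolis sampler with single-site bit-flip proposals.
x = rng.integers(2, size=N).astype(float)
energy = neg_log_post(x)
for _ in range(20000):
    i = rng.integers(N)
    prop = x.copy()
    prop[i] = 1.0 - prop[i]
    e = neg_log_post(prop)
    if e < energy or rng.random() < np.exp(energy - e):
        x, energy = prop, e

print("bit error rate:", np.mean(x != x_true))
```

The paper's accelerated implementation operates on far richer alphabets and priors; this sketch only shows the posterior-sampling skeleton.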


International Conference on Acoustics, Speech, and Signal Processing | 2016

Multi-processor approximate message passing using lossy compression

Puxiao Han; Junan Zhu; Ruixin Niu; Dror Baron

In this paper, a communication-efficient multi-processor compressed sensing framework based on the approximate message passing algorithm is proposed. We perform lossy compression on the data being communicated between processors, resulting in a reduction in communication costs with a minor degradation in recovery quality. In the proposed framework, a new state evolution formulation takes the quantization error into account, and analytically determines the coding rate required in each iteration. Two approaches for allocating the coding rate, an online back-tracking heuristic and an optimal allocation scheme based on dynamic programming, provide significant reductions in communication costs.
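The core mechanism, lossily compressing inter-processor messages, can be illustrated with a uniform scalar quantizer; the message length and bit budgets below are hypothetical, and the paper's actual coding-rate allocation is determined by its state evolution analysis:

```python
import numpy as np

rng = np.random.default_rng(2)

def quantize(v, bits):
    # Uniform scalar quantizer over the observed range of v.
    levels = 2 ** bits
    lo, hi = v.min(), v.max()
    step = (hi - lo) / (levels - 1)
    return lo + step * np.round((v - lo) / step)

msg = rng.standard_normal(10_000)   # a node's local estimate to be transmitted
mses = {b: np.mean((msg - quantize(msg, b)) ** 2) for b in (2, 4, 8)}
for b, mse in mses.items():
    print(f"{b} bits/entry -> quantization MSE {mse:.2e}")
```

Fewer bits per entry cut communication costs at the price of larger quantization error, which is the trade-off the paper's rate-allocation schemes navigate.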


IEEE Transactions on Signal Processing | 2016

Approximate Message Passing Algorithm With Universal Denoising and Gaussian Mixture Learning

Yanting Ma; Junan Zhu; Dror Baron

We study compressed sensing (CS) signal reconstruction problems where an input signal is measured via matrix multiplication under additive white Gaussian noise. Our signals are assumed to be stationary and ergodic, but the input statistics are unknown; the goal is to provide reconstruction algorithms that are universal to the input statistics. We present a novel algorithmic framework that combines: 1) the approximate message passing CS reconstruction framework, which solves the matrix channel recovery problem by iterative scalar channel denoising; 2) a universal denoising scheme based on context quantization, which partitions the stationary ergodic signal denoising into independent and identically distributed (i.i.d.) subsequence denoising; and 3) a density estimation approach that approximates the probability distribution of an i.i.d. sequence by fitting a Gaussian mixture (GM) model. In addition to the algorithmic framework, we provide three contributions: 1) numerical results showing that state evolution holds for nonseparable Bayesian sliding-window denoisers; 2) an i.i.d. denoiser based on a modified GM learning algorithm; and 3) a universal denoiser that requires neither knowledge of the range of the input values nor that the input signal be bounded. We provide two implementations of our universal CS recovery algorithm, one faster and the other more accurate. The two implementations compare favorably with existing universal reconstruction algorithms in terms of both reconstruction quality and runtime.
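The third ingredient, fitting a Gaussian mixture to i.i.d. scalar-channel observations, can be sketched with vanilla expectation-maximization; the Bernoulli-Gaussian source, noise level, and two-component mixture below are illustrative assumptions, not the paper's modified GM learning algorithm:

```python
import numpy as np

rng = np.random.default_rng(3)

# Noisy scalar-channel observations of a sparse (Bernoulli-Gaussian) source,
# standing in for the pseudo-data produced inside an AMP iteration.
n = 5000
x = rng.standard_normal(n) * (rng.random(n) < 0.2)   # roughly 80% zeros
r = x + 0.3 * rng.standard_normal(n)

def gm_em(r, K=2, iters=50):
    # Fit a K-component Gaussian mixture to r with plain EM.
    w = np.full(K, 1.0 / K)
    mu = rng.choice(r, K, replace=False)
    var = np.full(K, r.var())
    for _ in range(iters):
        # E-step: responsibilities of each component for each sample.
        dens = np.exp(-(r[:, None] - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
        g = w * dens
        g /= g.sum(axis=1, keepdims=True)
        # M-step: update weights, means, variances (variance floor for stability).
        Nk = g.sum(axis=0)
        w = Nk / len(r)
        mu = (g * r[:, None]).sum(axis=0) / Nk
        var = np.maximum((g * (r[:, None] - mu) ** 2).sum(axis=0) / Nk, 1e-3)
    return w, mu, var

w, mu, var = gm_em(r)
print("mixture weights:", np.round(w, 2))
```

Once the mixture is learned, the conditional-mean denoiser under the fitted GM prior can be applied to each subsequence.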


International Symposium on Information Theory | 2016

Performance trade-offs in multi-processor approximate message passing

Junan Zhu; Ahmad Beirami; Dror Baron

We consider large-scale linear inverse problems in Bayesian settings. Our general approach follows a recent line of work that applies the approximate message passing (AMP) framework in multi-processor (MP) computational systems by storing and processing a subset of rows of the measurement matrix along with corresponding measurements at each MP node. In each MP-AMP iteration, nodes of the MP system and its fusion center exchange lossily compressed messages pertaining to their estimates of the input. There is a trade-off between the physical costs of the reconstruction process including computation time, communication loads, and the reconstruction quality, and it is impossible to simultaneously minimize all the costs. We pose this minimization as a multi-objective optimization problem (MOP), and study the properties of the best trade-offs (Pareto optimality) in this MOP. We prove that the achievable region of this MOP is convex, and conjecture how the combined cost of computation and communication scales with the desired mean squared error. These properties are verified numerically.
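The Pareto-optimality notion at the heart of this MOP is easy to make concrete: a cost vector is Pareto optimal if no other candidate is at least as good in every coordinate and strictly better in one. A minimal sketch over hypothetical (computation, communication) cost pairs, not data from the paper:

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical (computation, communication) cost pairs for candidate
# coding-rate schedules; lower is better in both coordinates.
costs = rng.random((200, 2))

def pareto_front(points):
    # Return the non-dominated points (minimization in every coordinate).
    keep = []
    for i, p in enumerate(points):
        dominated = np.any(np.all(points <= p, axis=1) &
                           np.any(points < p, axis=1))
        if not dominated:
            keep.append(i)
    return points[keep]

front = pareto_front(costs)
print(f"{len(front)} Pareto-optimal schedules out of {len(costs)}")
```

The paper proves that, for MP-AMP, the achievable region bounded by this kind of frontier is convex.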


IEEE Signal Processing Workshop on Statistical Signal Processing | 2014

Complexity-adaptive universal signal estimation for compressed sensing

Junan Zhu; Dror Baron; Marco F. Duarte

We study the compressed sensing (CS) signal estimation problem where a signal is measured via a linear matrix multiplication under additive noise. While this setup usually assumes sparsity or compressibility in the signal during estimation, additional signal structure that can be leveraged is often not known a priori. For signals with independent and identically distributed (i.i.d.) entries, existing CS algorithms achieve optimal or near-optimal estimation error without knowing the statistics of the signal. This paper addresses estimating stationary ergodic non-i.i.d. signals with unknown statistics. We have previously proposed a universal CS approach to simultaneously estimate the statistics of a stationary ergodic signal as well as the signal itself. This paper significantly improves on our previous work, especially for continuous-valued signals, by offering a four-stage algorithm called Complexity-Adaptive Universal Signal Estimation (CAUSE), where the alphabet size of the estimate adaptively matches the coding complexity of the signal. Numerical results show that the new approach offers comparable, and in some cases (especially for non-i.i.d. signals) lower, mean squared error than the prior art, despite not knowing the signal statistics.
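The idea of letting the alphabet size track coding complexity can be illustrated with a two-part code: pick the quantizer resolution minimizing coding cost plus a distortion term. The source, candidate sizes, and trade-off weight below are hypothetical, and this is only an analogue of CAUSE's adaptivity, not its four-stage algorithm:

```python
import numpy as np

rng = np.random.default_rng(5)

# A source whose values have limited resolution (one decimal place).
x = np.round(rng.standard_normal(2000), 1)

def two_part_cost(x, levels):
    # Quantize x to `levels` uniform bins, then charge an empirical-entropy
    # coding cost plus a weighted squared-error distortion.
    edges = np.linspace(x.min(), x.max(), levels + 1)
    idx = np.clip(np.digitize(x, edges) - 1, 0, levels - 1)
    centers = (edges[:-1] + edges[1:]) / 2
    xq = centers[idx]
    p = np.bincount(idx, minlength=levels) / len(x)
    p = p[p > 0]
    code_bits = -len(x) * np.sum(p * np.log2(p))
    distortion = np.sum((x - xq) ** 2)
    return code_bits + distortion / 0.01       # hypothetical trade-off weight

sizes = [2, 4, 8, 16, 32, 64]
best = min(sizes, key=lambda L: two_part_cost(x, L))
print("selected alphabet size:", best)
```

A too-small alphabet pays in distortion, a too-large one pays in coding cost; the minimizer balances the two, mirroring how CAUSE matches alphabet size to signal complexity.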


IEEE Transactions on Signal Processing | 2017

Performance Limits for Noisy Multimeasurement Vector Problems

Junan Zhu; Dror Baron; Florent Krzakala

Compressed sensing (CS) demonstrates that sparse signals can be estimated from underdetermined linear systems. Distributed CS (DCS) further reduces the number of measurements by considering joint sparsity within signal ensembles. DCS with jointly sparse signals has applications in multisensor acoustic sensing, magnetic resonance imaging with multiple coils, remote sensing, and array signal processing. Multimeasurement vector (MMV) problems consider the estimation of jointly sparse signals under the DCS framework. Two related MMV settings are studied. In the first setting, each signal vector is measured by a different independent and identically distributed (i.i.d.) measurement matrix, while in the second setting, all signal vectors are measured by the same i.i.d. matrix. Replica analysis is performed for these two MMV settings, and the minimum mean squared error (MMSE), which turns out to be identical for both settings, is obtained as a function of the noise variance and number of measurements. To showcase the application of MMV models, the MMSEs of complex CS problems with both real and complex measurement matrices are also analyzed. Multiple performance regions for MMV are identified where the MMSE behaves differently as a function of the noise variance and the number of measurements. Belief propagation (BP) is a CS signal estimation framework that often achieves the MMSE asymptotically. A phase transition for BP is identified. This phase transition, verified by numerical results, separates the regions where BP achieves the MMSE and where it is suboptimal. Numerical results also illustrate that more signal vectors in the jointly sparse signal ensemble lead to a better phase transition.
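The two MMV settings the paper analyzes differ only in how the jointly sparse ensemble is measured. A minimal sketch with hypothetical dimensions, showing a shared support across J signal vectors and the per-vector versus shared measurement matrices:

```python
import numpy as np

rng = np.random.default_rng(6)

# J jointly sparse signal vectors sharing one support (the MMV model).
N, M, k, J = 500, 200, 40, 3
support = rng.choice(N, k, replace=False)
X = np.zeros((N, J))
X[support] = rng.standard_normal((k, J))   # common support, distinct values

sigma = 0.05
# Setting 1: a different i.i.d. matrix for each signal vector.
Y_diff = np.stack([rng.standard_normal((M, N)) / np.sqrt(M) @ X[:, j]
                   + sigma * rng.standard_normal(M) for j in range(J)], axis=1)
# Setting 2: the same i.i.d. matrix for every signal vector.
A = rng.standard_normal((M, N)) / np.sqrt(M)
Y_same = A @ X + sigma * rng.standard_normal((M, J))
print("measurement shapes:", Y_diff.shape, Y_same.shape)
```

The paper's replica analysis shows the MMSE is identical in both settings, as a function of the noise variance and the number of measurements.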


Conference on Information Sciences and Systems | 2017

An overview of multi-processor approximate message passing

Junan Zhu; Ryan Pilgrim; Dror Baron

Approximate message passing (AMP) is an algorithmic framework for solving linear inverse problems from noisy measurements, with exciting applications such as reconstructing images, audio, hyperspectral images, and various other signals, including those acquired in compressive signal acquisition systems. The growing prevalence of big data systems has increased interest in large-scale problems, which may involve huge measurement matrices that are unsuitable for conventional computing systems. To address the challenge of large-scale processing, multi-processor (MP) versions of AMP have been developed. We provide an overview of two such MP-AMP variants. In row-MP-AMP, each computing node stores a subset of the rows of the matrix and processes corresponding measurements. In column-MP-AMP, each node stores a subset of columns, and is solely responsible for reconstructing a portion of the signal. We discuss pros and cons of both approaches, summarize recent research results for each, and explain when each one may be a viable approach. Aspects that are highlighted include some recent results on state evolution for both MP-AMP algorithms, and the use of data compression to reduce communication in the MP network.
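The row versus column partitioning the overview contrasts can be shown with a toy matrix split across nodes (dimensions and node count hypothetical):

```python
import numpy as np

# Partitioning an M x N measurement matrix across P computing nodes.
M, N, P = 600, 900, 3
A = np.arange(M * N, dtype=float).reshape(M, N)

# Row-MP-AMP: each node holds M/P rows plus the matching measurements.
row_blocks = np.split(A, P, axis=0)   # P blocks of shape (M/P, N)
# Column-MP-AMP: each node holds N/P columns and reconstructs the
# corresponding portion of the signal.
col_blocks = np.split(A, P, axis=1)   # P blocks of shape (M, N/P)

print(row_blocks[0].shape, col_blocks[0].shape)
```

Row partitioning distributes the measurements while every node sees the full signal estimate; column partitioning distributes the signal itself, which changes what the nodes must communicate.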


arXiv: Information Theory | 2014

Compressed Sensing via Universal Denoising and Approximate Message Passing

Yanting Ma; Junan Zhu; Dror Baron


arXiv: Information Theory | 2016

Optimal Trade-offs in Multi-Processor Approximate Message Passing

Junan Zhu; Dror Baron; Ahmad Beirami

Collaboration


Dive into Junan Zhu's collaborations.

Top Co-Authors

Dror Baron
North Carolina State University

Marco F. Duarte
University of Massachusetts Amherst

Yanting Ma
North Carolina State University

Ryan Pilgrim
North Carolina State University

Puxiao Han
Virginia Commonwealth University

Ruixin Niu
Virginia Commonwealth University

Florent Krzakala
École Normale Supérieure