Publications


Featured research published by Justin Ziniel.


Information Theory and Applications | 2008

Fast Bayesian matching pursuit

Philip Schniter; Lee C. Potter; Justin Ziniel

A low-complexity recursive procedure is presented for minimum mean squared error (MMSE) estimation in linear regression models. A Gaussian mixture is chosen as the prior on the unknown parameter vector. The algorithm returns both an approximate MMSE estimate of the parameter vector and a set of high posterior probability mixing parameters. Emphasis is given to the case of a sparse parameter vector. Numerical simulations demonstrate estimation performance and illustrate the distinctions between MMSE estimation and MAP model selection. The set of high probability mixing parameters not only provides MAP basis selection, but also yields relative probabilities that reveal potential ambiguity in the sparse model.
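
The model-averaging idea at the core of this procedure is easy to sketch. Below is a hypothetical Python illustration (the toy sizes, the symbols p, sx2, sw2, and the brute-force support enumeration are our own stand-ins; FBMP replaces the enumeration with a fast tree search over high-probability branches): a Bernoulli-Gaussian prior induces a posterior over supports, and the MMSE estimate is the posterior-weighted average of the per-support conditional means.

```python
# Hypothetical illustration of Bayesian model averaging over sparse supports.
import itertools
import numpy as np

rng = np.random.default_rng(0)
N, M, p, sx2, sw2 = 8, 5, 0.2, 1.0, 0.01     # toy sizes and (assumed) hyperparameters
A = rng.standard_normal((M, N)) / np.sqrt(M)
x_true = np.zeros(N); x_true[[1, 4]] = rng.standard_normal(2)
y = A @ x_true + np.sqrt(sw2) * rng.standard_normal(M)

def log_post(S):
    """log p(S) + log p(y | S) for a candidate support S, up to a constant."""
    Phi = sw2 * np.eye(M)
    if S:
        As = A[:, list(S)]
        Phi += sx2 * As @ As.T
    _, logdet = np.linalg.slogdet(Phi)
    ll = -0.5 * (logdet + y @ np.linalg.solve(Phi, y))
    return ll + len(S) * np.log(p) + (N - len(S)) * np.log(1 - p)

# Brute-force enumeration of small supports (FBMP replaces this with a
# greedy tree search that keeps only high-probability branches).
supports = [S for k in range(3) for S in itertools.combinations(range(N), k)]
lp = np.array([log_post(S) for S in supports])
w = np.exp(lp - lp.max()); w /= w.sum()       # posterior support probabilities

x_mmse = np.zeros(N)
for S, wk in zip(supports, w):
    if S:
        As = A[:, list(S)]
        Phi = sx2 * As @ As.T + sw2 * np.eye(M)
        x_mmse[list(S)] += wk * sx2 * (As.T @ np.linalg.solve(Phi, y))
```

The weights w computed here are the relative probabilities the abstract refers to: two supports carrying comparable weight signal ambiguity in the sparse model.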


IEEE Transactions on Signal Processing | 2013

Dynamic Compressive Sensing of Time-Varying Signals Via Approximate Message Passing

Justin Ziniel; Philip Schniter

In this work, the dynamic compressive sensing (CS) problem of recovering sparse, correlated, time-varying signals from sub-Nyquist, non-adaptive, linear measurements is explored from a Bayesian perspective. While a handful of Bayesian dynamic CS algorithms have previously been proposed in the literature, the ability to perform inference on high-dimensional problems in a computationally efficient manner remains elusive. In response, we propose a probabilistic dynamic CS signal model that captures both amplitude and support correlation structure, and describe an approximate message passing algorithm that performs soft signal estimation and support detection with a computational complexity that is linear in all problem dimensions. The algorithm, DCS-AMP, can perform either causal filtering or non-causal smoothing, and is capable of learning model parameters adaptively from the data through an expectation-maximization learning procedure. We provide numerical evidence that DCS-AMP performs within 3 dB of oracle bounds on synthetic data under a variety of operating conditions. We further describe the results of applying DCS-AMP to two real dynamic CS datasets, as well as a frequency estimation task, to bolster our claim that DCS-AMP is capable of offering state-of-the-art performance and speed on real-world high-dimensional problems.
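
As a rough generative sketch of the kind of signal model just described (parameter names and values below are our own assumptions, not the paper's notation), each coefficient's support follows a two-state Markov chain while its amplitude follows a Gauss-Markov process:

```python
# Hedged sketch of a slowly varying support (binary Markov chain) paired
# with Gauss-Markov amplitudes, measured through a random linear operator.
import numpy as np

rng = np.random.default_rng(1)
N, M, T = 256, 64, 10            # signal dim, measurements per timestep, timesteps
lam, p01 = 0.1, 0.05             # sparsity rate, Pr(active -> inactive)
p10 = p01 * lam / (1 - lam)      # chosen so the chain stays lam-sparse on average
alpha, sw2 = 0.1, 1e-4           # amplitude innovation rate, noise variance

s = rng.random(N) < lam                          # initial support
theta = rng.standard_normal(N)                   # initial amplitudes
A = rng.standard_normal((M, N)) / np.sqrt(M)
Y, X = [], []
for t in range(T):
    X.append(s * theta)                          # x(t) = support * amplitude
    Y.append(A @ X[-1] + np.sqrt(sw2) * rng.standard_normal(M))
    flip = rng.random(N)
    s = np.where(s, flip > p01, flip < p10)      # Markov support update
    theta = (1 - alpha) * theta + alpha * rng.standard_normal(N)  # Gauss-Markov step
```

DCS-AMP's filtering mode would process Y causally in t, while its smoothing mode would also propagate information backward through the chain.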


IEEE Transactions on Signal Processing | 2013

Efficient High-Dimensional Inference in the Multiple Measurement Vector Problem

Justin Ziniel; Philip Schniter

In this work, a Bayesian approximate message passing algorithm is proposed for solving the multiple measurement vector (MMV) problem in compressive sensing, in which a collection of sparse signal vectors that share a common support are recovered from undersampled noisy measurements. The algorithm, AMP-MMV, is capable of exploiting temporal correlations in the amplitudes of non-zero coefficients, and provides soft estimates of the signal vectors as well as the underlying support. Central to the proposed approach is an extension of recently developed approximate message passing techniques to the amplitude-correlated MMV setting. Aided by these techniques, AMP-MMV offers a computational complexity that is linear in all problem dimensions. In order to allow for automatic parameter tuning, an expectation-maximization algorithm that complements AMP-MMV is described. Finally, a detailed numerical study demonstrates the power of the proposed approach and its particular suitability for application to high-dimensional problems.
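
AMP-MMV itself is not reproduced here, but the problem it solves admits a compact baseline for comparison: ISTA applied to an L2,1-regularized least-squares objective, which exploits the shared row support (though, unlike AMP-MMV, not the temporal amplitude correlation). A minimal sketch, with mu and iters being our own choices:

```python
# Group-sparse MMV baseline (not AMP-MMV): ISTA on
#   min_X 0.5 * ||Y - A X||_F^2 + mu * sum_n ||X[n, :]||_2
import numpy as np

def mmv_ista(A, Y, mu=0.1, iters=500):
    step = 1.0 / np.linalg.norm(A, 2) ** 2        # 1/L, L = spectral norm squared
    X = np.zeros((A.shape[1], Y.shape[1]))
    for _ in range(iters):
        G = X - step * A.T @ (A @ X - Y)          # gradient step on the data fit
        norms = np.linalg.norm(G, axis=1, keepdims=True)
        shrink = np.maximum(1 - step * mu / np.maximum(norms, 1e-12), 0)
        X = shrink * G                            # row-wise soft threshold
    return X
```

Each iteration costs one multiplication by A and one by its transpose, so, like AMP-MMV, the per-iteration cost is linear in all problem dimensions.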


Asilomar Conference on Signals, Systems and Computers | 2010

Tracking and smoothing of time-varying sparse signals via approximate belief propagation

Justin Ziniel; Lee C. Potter; Philip Schniter

This paper considers the problem of recovering time-varying sparse signals from dramatically undersampled measurements. A probabilistic signal model is presented that describes two common traits of time-varying sparse signals: a support set that changes slowly over time, and amplitudes that evolve smoothly in time. An algorithm for recovering signals that exhibit these traits is then described. Built on the belief propagation framework, the algorithm leverages recently developed approximate message passing techniques to perform rapid and accurate estimation. The algorithm is capable of performing both causal tracking and non-causal smoothing to enable both online and offline processing of sparse time series, with a complexity that is linear in all problem dimensions. Simulation results illustrate the performance gains obtained through exploiting the temporal correlation of the time series relative to independent recoveries.
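
The causal-versus-non-causal distinction can be made concrete on the support chain alone. In this hedged toy (function and parameter names are ours), a forward recursion yields the causal "tracking" posterior and an extra backward pass yields the smoothed posterior; lik stands in for the per-timestep evidence that the measurements would supply through message passing:

```python
# Forward-backward on a single coefficient's binary support chain.
import numpy as np

def support_posteriors(lik, p01=0.05, p10=0.01, prior=0.1):
    """lik: (T, 2) likelihoods of (inactive, active) per timestep."""
    T = lik.shape[0]
    P = np.array([[1 - p10, p10], [p01, 1 - p01]])   # rows: from inactive / active
    fwd = np.zeros((T, 2)); bwd = np.ones((T, 2))
    fwd[0] = np.array([1 - prior, prior]) * lik[0]; fwd[0] /= fwd[0].sum()
    for t in range(1, T):                            # causal tracking pass
        fwd[t] = (fwd[t - 1] @ P) * lik[t]; fwd[t] /= fwd[t].sum()
    for t in range(T - 2, -1, -1):                   # non-causal smoothing pass
        bwd[t] = P @ (lik[t + 1] * bwd[t + 1]); bwd[t] /= bwd[t].sum()
    post = fwd * bwd; post /= post.sum(axis=1, keepdims=True)
    return fwd, post                                 # tracking vs. smoothed

lik = np.random.default_rng(2).random((20, 2))       # stand-in evidence, T = 20
filtered, smoothed = support_posteriors(lik)
```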


Proceedings of SPIE, the International Society for Optical Engineering | 2008

Sparse reconstruction for radar

Lee C. Potter; Philip Schniter; Justin Ziniel

Imaging is not itself a system goal, but rather a means to support inference tasks. For data processing with linearized signal models, we seek to report all high-probability interpretations of the data and to report confidence labels in the form of posterior probabilities. A low-complexity recursive procedure is presented for Bayesian estimation in linear regression models. A Gaussian mixture is chosen as the prior on the unknown parameter vector. The algorithm returns both a set of high posterior probability mixing parameters and an approximate minimum mean squared error (MMSE) estimate of the parameter vector. Emphasis is given to the case of a sparse parameter vector. Numerical simulations demonstrate estimation performance and illustrate the distinctions between MMSE estimation and maximum a posteriori probability (MAP) model selection. The proposed tree-search algorithm provides exact ratios of posterior probabilities for a set of high-probability solutions to the sparse reconstruction problem. These relative probabilities serve to reveal potential ambiguity among multiple candidate solutions, ambiguity that can arise from low signal-to-noise ratio and/or significant correlation among columns of the super-resolving regressor matrix.
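
Under a Bernoulli-Gaussian prior with activity rate p and variances sigma_x^2, sigma_w^2 (our symbols, matching the sketch given earlier for fast Bayesian matching pursuit), the exact posterior odds between two candidate supports S1 and S2 take the closed form

$$
\frac{p(S_1 \mid y)}{p(S_2 \mid y)}
  = \left(\frac{p}{1-p}\right)^{|S_1| - |S_2|}
    \frac{\mathcal{N}(y;\, 0,\, \Phi_{S_1})}{\mathcal{N}(y;\, 0,\, \Phi_{S_2})},
\qquad
\Phi_S = \sigma_x^2 A_S A_S^{\top} + \sigma_w^2 I,
$$

so near-unity odds flag exactly the kind of ambiguity described above, e.g. when the columns of the regressor matrix indexed by S1 and S2 are strongly correlated.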


IEEE Statistical Signal Processing Workshop | 2012

A generalized framework for learning and recovery of structured sparse signals

Justin Ziniel; Sundeep Rangan; Philip Schniter

We report on a framework for recovering single- or multi-timestep sparse signals; the framework can learn and exploit a variety of probabilistic forms of structure. Message passing-based inference and empirical Bayesian parameter learning form the backbone of the recovery procedure. We further describe an object-oriented software paradigm for implementing our framework, which consists of assembling modular software components that collectively define a desired statistical signal model. Lastly, numerical results for synthetic and real-world structured sparse signal recovery are provided.
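
The object-oriented paradigm can be suggested with a skeletal Python mock-up (all class and field names below are hypothetical, not the authors' actual software): interchangeable components for the amplitude prior, the support structure, and the observation channel are assembled into one model object that a generic message-passing solver would consume.

```python
# Hypothetical sketch of the modular idea: components that jointly define
# the statistical signal model; names are ours, not the published software's.
from dataclasses import dataclass

@dataclass
class BernoulliGaussianPrior:       # per-coefficient amplitude/support prior
    activity: float = 0.1
    variance: float = 1.0

@dataclass
class MarkovChainSupport:           # structure across timesteps or coefficients
    p_active_to_inactive: float = 0.05

@dataclass
class AWGNObservation:              # likelihood p(y | z) with z = A x
    noise_var: float = 1e-3

@dataclass
class SignalModel:                  # assembled model handed to a generic solver
    prior: BernoulliGaussianPrior
    support: MarkovChainSupport
    observation: AWGNObservation

model = SignalModel(BernoulliGaussianPrior(), MarkovChainSupport(), AWGNObservation())
```

Swapping one component (say, a different support structure) changes the signal model without touching the inference engine, which is the design point the abstract emphasizes.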


Asilomar Conference on Signals, Systems and Computers | 2011

Efficient message passing-based inference in the multiple measurement vector problem

Justin Ziniel; Philip Schniter

In this work, a Bayesian approximate message passing algorithm is proposed for solving the multiple measurement vector (MMV) problem in compressive sensing, in which a collection of sparse signal vectors that share a common support are recovered from undersampled noisy measurements. The algorithm, AMP-MMV, is capable of exploiting temporal correlations in the amplitudes of non-zero coefficients, and provides soft estimates of the signal vectors as well as the underlying support. Central to the proposed approach is an extension of recently developed approximate message passing (AMP) techniques to the amplitude-correlated MMV setting. Aided by these techniques, AMP-MMV offers a computational complexity that is linear in all problem dimensions. In order to allow for automatic parameter tuning, an expectation-maximization algorithm that complements AMP-MMV is described. Finally, a numerical study demonstrates the power of the proposed approach and its particular suitability for application to high-dimensional problems.


Conference on Information Sciences and Systems | 2014

Binary linear classification and feature selection via generalized approximate message passing

Justin Ziniel; Philip Schniter; Per B. Sederberg

For the problem of binary linear classification and feature selection, we propose algorithmic approaches to classifier design based on the generalized approximate message passing (GAMP) algorithm, recently proposed in the context of compressive sensing. Our work focuses on the regime where the number of features greatly exceeds the number of training examples, but where only a few features suffice for accurate classification. We show that sum-product GAMP can be used to (approximately) minimize the classification error rate and max-sum GAMP can be used to minimize a wide variety of regularized loss functions. Furthermore, we describe an expectation-maximization (EM)-based scheme to learn the associated model parameters online, as an alternative to cross-validation, and we show that GAMP's state evolution framework can be used to accurately predict the misclassification rate. Finally, we present a brief numerical study to confirm the efficacy, flexibility, and speed afforded by our GAMP-based approaches to binary classification.
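
GAMP itself is not reproduced here, but the target problem is the classical one: L1-regularized logistic regression with far more features than training examples. A hedged stand-in using plain proximal gradient descent (ISTA), with mu and iters being our own choices:

```python
# L1-regularized logistic regression via ISTA (a baseline, not GAMP):
#   min_w  sum_i log(1 + exp(-y_i * a_i^T w)) + mu * ||w||_1
import numpy as np

def sparse_logistic(A, y, mu=0.1, iters=1000):
    """A: (m, n) feature matrix; y: labels in {-1, +1}; returns weights w."""
    step = 4.0 / np.linalg.norm(A, 2) ** 2        # logistic loss is (||A||^2 / 4)-smooth
    w = np.zeros(A.shape[1])
    for _ in range(iters):
        z = np.clip(y * (A @ w), -30, 30)         # margins, clipped for stability
        grad = -A.T @ (y / (1 + np.exp(z)))       # gradient of the logistic loss
        g = w - step * grad
        w = np.sign(g) * np.maximum(np.abs(g) - step * mu, 0.0)  # soft threshold
    return w
```

Max-sum GAMP addresses this same class of regularized-loss minimizations; the paper's EM scheme would additionally learn parameters like mu from the data rather than cross-validating them.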


Asilomar Conference on Signals, Systems and Computers | 2008

A fast posterior update for sparse underdetermined linear models

Lee C. Potter; Philip Schniter; Justin Ziniel

A Bayesian approach is adopted for linear regression, and a fast algorithm is given for updating posterior probabilities. Emphasis is given to the underdetermined and sparse case, i.e., fewer observations than regression coefficients and the belief that only a few regression coefficients are non-zero. The fast update allows for a low-complexity method of reporting a set of models with high posterior probability and their exact posterior odds. As a byproduct, this Bayesian model averaged approach yields the minimum mean squared error estimate of unknown coefficients. Algorithm complexity is linear in the number of unknown coefficients, the number of observations and the number of nonzero coefficients. For the case in which hyperparameters are unknown, a maximum likelihood estimate is found by a generalized expectation maximization algorithm.
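
The flavor of such a fast update can be conveyed with one identity. Activating one additional coefficient whose regressor column is a perturbs the measurement covariance Phi = sx2 * A_S A_S^T + sw2 * I by a rank-one term, so Phi^{-1} can be refreshed in O(M^2) via the Sherman-Morrison formula rather than recomputed from scratch. A hedged sketch (names ours; the paper's own recursion may be organized differently):

```python
# Rank-one refresh of the inverse measurement covariance when one
# coefficient (with column a, prior variance sx2) joins the active set.
import numpy as np

def activate(Phi_inv, a, sx2):
    """Return (Phi + sx2 * a a^T)^{-1} given Phi^{-1}, via Sherman-Morrison."""
    u = Phi_inv @ a
    return Phi_inv - sx2 * np.outer(u, u) / (1.0 + sx2 * (a @ u))
```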


Archive | 2009

Fast Bayesian Matching Pursuit: Model Uncertainty and Parameter Estimation for Sparse Linear Models

Philip Schniter; Lee C. Potter; Justin Ziniel
