
Publication


Featured research published by Jared Tanner.


Philosophical Transactions of the Royal Society A | 2009

Observed universality of phase transitions in high-dimensional geometry, with implications for modern data analysis and signal processing

David L. Donoho; Jared Tanner

We review connections between phase transitions in high-dimensional combinatorial geometry and phase transitions occurring in modern high-dimensional data analysis and signal processing. In data analysis, such transitions arise as abrupt breakdown of linear model selection, robust data fitting or compressed sensing reconstructions, when the complexity of the model or the number of outliers increases beyond a threshold. In combinatorial geometry, these transitions appear as abrupt changes in the properties of face counts of convex polytopes when the dimensions are varied. The thresholds in these very different problems appear in the same critical locations after appropriate calibration of variables. These thresholds are important in each subject area: for linear modelling, they place hard limits on the degree to which the now ubiquitous high-throughput data analysis can be successful; for robustness, they place hard limits on the degree to which standard robust fitting methods can tolerate outliers before breaking down; for compressed sensing, they define the sharp boundary of the undersampling/sparsity trade-off curve in undersampling theorems. Existing derivations of phase transitions in combinatorial geometry assume that the underlying matrices have independent and identically distributed Gaussian elements. In applications, however, it often seems that Gaussianity is not required. We conducted an extensive computational experiment and formal inferential analysis to test the hypothesis that these phase transitions are universal across a range of underlying matrix ensembles. We ran millions of linear programs using random matrices spanning several matrix ensembles and problem sizes; visually, the empirical phase transitions do not depend on the ensemble, and they agree extremely well with the asymptotic theory assuming Gaussianity. Careful statistical analysis reveals discrepancies that can be explained as transient terms, decaying with problem size. The experimental results are thus consistent with an asymptotic large-n universality across matrix ensembles; finite-sample universality can be rejected.
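The basic experiment behind such empirical phase transitions can be sketched in a few lines: draw a Gaussian measurement matrix and a sparse vector, then solve the l1-minimization (basis pursuit) linear program and test for exact recovery. This is a minimal illustration, not the authors' code; the problem sizes and the SciPy solver are arbitrary choices, and the chosen sparsity is well below the theoretical transition so recovery should succeed.

```python
# Minimal sketch: l1 recovery of a k-sparse vector from n < N Gaussian
# measurements, posed as a linear program (basis pursuit).
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
N, n, k = 40, 20, 3                            # ambient dim, measurements, sparsity

x0 = np.zeros(N)
x0[rng.choice(N, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((n, N)) / np.sqrt(n)   # Gaussian ensemble
b = A @ x0

# Basis pursuit as an LP: write x = u - v with u, v >= 0 and
# minimize sum(u) + sum(v) subject to A(u - v) = b.
c = np.ones(2 * N)
A_eq = np.hstack([A, -A])
res = linprog(c, A_eq=A_eq, b_eq=b, bounds=[(0, None)] * (2 * N))
x_hat = res.x[:N] - res.x[N:]
```

Repeating this over a grid of (k, n) and recording the success frequency traces out the empirical phase transition the abstract describes.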


Proceedings of the IEEE | 2010

Precise Undersampling Theorems

David L. Donoho; Jared Tanner

Undersampling theorems state that we may gather far fewer samples than the usual sampling theorem requires while exactly reconstructing the object of interest, provided the object in question obeys a sparsity condition, the samples measure appropriate linear combinations of signal values, and we reconstruct with a particular nonlinear procedure. While there are many ways to crudely demonstrate such undersampling phenomena, we know of only one mathematically rigorous approach which precisely quantifies the true sparsity-undersampling tradeoff curve of standard algorithms and standard compressed sensing matrices. That approach, based on combinatorial geometry, predicts the exact location in the sparsity-undersampling domain where standard algorithms exhibit phase transitions in performance. We review the phase transition approach here and describe the broad range of cases where it applies. We also mention exceptions and state challenge problems for future research. Sample result: one can efficiently reconstruct a k-sparse signal of length N from n measurements, provided n ≥ 2k · log(N/n), for (k, n, N) large and k ≪ N. AMS 2000 subject classifications. Primary: 41A46, 52A22, 52B05, 62E20, 68P30, 94A20; Secondary: 15A52, 60F10, 68P25, 90C25, 94B20.
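The sample result quoted above is implicit in n, since n appears on both sides of the inequality. A small sketch (not from the paper; it assumes the natural logarithm) finds the smallest n satisfying it for given (k, N):

```python
# Smallest n with n >= 2*k*ln(N/n); the left side grows and the right
# side shrinks as n increases, so the first n that satisfies the
# inequality is the minimum.
import math

def min_measurements(k: int, N: int) -> int:
    """Smallest n satisfying n >= 2*k*ln(N/n)."""
    for n in range(1, N + 1):
        if n >= 2 * k * math.log(N / n):
            return n
    return N

print(min_measurements(10, 10000))  # prints 94
```

So roughly 94 measurements suffice for a 10-sparse signal of length 10,000 under this rule of thumb, versus 10,000 for classical sampling.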


SIAM Review | 2011

Compressed Sensing: How Sharp Is the Restricted Isometry Property?

Jeffrey D. Blanchard; Coralia Cartis; Jared Tanner

Compressed sensing (CS) seeks to recover an unknown vector with N entries by making far fewer than N measurements; it posits that the number of CS measurements should be comparable to the information content of the vector, not simply N. CS combines directly the important task of compression with the measurement task. Since its introduction in 2004 there have been hundreds of papers on CS, a large fraction of which develop algorithms to recover a signal from its compressed measurements. Because of the paradoxical nature of CS (exact reconstruction from seemingly undersampled measurements), it is crucial for acceptance of an algorithm that rigorous analyses verify the degree of undersampling the algorithm permits. The restricted isometry property (RIP) has become the dominant tool used for the analysis in such cases. We present here an asymmetric form of RIP that gives tighter bounds than the usual symmetric one. We give the best known bounds on the RIP constants for matrices from the Gaussian ensemble. Our derivations illustrate the way in which the combinatorial nature of CS is controlled. Our quantitative bounds on the RIP allow precise statements as to how aggressively a signal can be undersampled, the essential question for practitioners. We also document the extent to which RIP gives precise information about the true performance limits of CS, by comparison with approaches from high-dimensional geometry.


IEEE Transactions on Signal Processing | 2008

Identification of Matrices Having a Sparse Representation

Goetz E. Pfander; Holger Rauhut; Jared Tanner

We consider the problem of recovering a matrix from its action on a known vector in the setting where the matrix can be represented efficiently in a known matrix dictionary. Connections with sparse signal recovery allow for the use of efficient reconstruction techniques such as basis pursuit. Of particular interest is the dictionary of time-frequency shift matrices and its role for channel estimation and identification in communications engineering. We present recovery results for basis pursuit with the time-frequency shift dictionary and various dictionaries of random matrices.


SIAM Journal on Matrix Analysis and Applications | 2010

Improved Bounds on Restricted Isometry Constants for Gaussian Matrices

Bubacarr Bah; Jared Tanner


SIAM Journal on Scientific Computing | 2013

Normalized Iterative Hard Thresholding for Matrix Completion

Jared Tanner; Ke Wei


SIAM Journal on Numerical Analysis | 2006

Fast Reconstruction Methods for Bandlimited Functions from Periodic Nonuniform Sampling

Thomas Strohmer; Jared Tanner


Mathematical Programming Computation | 2013

GPU accelerated greedy algorithms for compressed sensing

Jeffrey D. Blanchard; Jared Tanner


IEEE Transactions on Information Theory | 2013

Vanishingly Sparse Matrices and Expander Graphs, With Application to Compressed Sensing

Bubacarr Bah; Jared Tanner


Numerical Linear Algebra With Applications | 2015

Performance comparisons of greedy algorithms in compressed sensing

Jeffrey D. Blanchard; Jared Tanner
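Several of the papers above study greedy and thresholding recovery algorithms. As a minimal sketch (not the papers' implementations), plain iterative hard thresholding with a conservative fixed step illustrates the idea; the papers analyze normalized and GPU-accelerated variants with adaptive step sizes.

```python
# Minimal sketch of iterative hard thresholding (IHT): alternate a gradient
# step on 0.5*||Ax - b||^2 with projection onto k-sparse vectors. The fixed
# step 1/||A||_2^2 is a conservative simplification of the normalized step.
import numpy as np

def iht(A, b, k, iters=500):
    """Recover a k-sparse x from b = A @ x by gradient steps plus hard thresholding."""
    mu = 1.0 / np.linalg.norm(A, 2) ** 2      # conservative step size
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = x + mu * A.T @ (b - A @ x)        # gradient step
        small = np.argsort(np.abs(x))[:-k]    # indices of all but the k largest
        x[small] = 0.0                        # hard threshold to sparsity k
    return x

rng = np.random.default_rng(0)
N, n, k = 50, 40, 3
A = rng.standard_normal((n, N)) / np.sqrt(n)
x0 = np.zeros(N)
x0[rng.choice(N, k, replace=False)] = rng.choice([-1.0, 1.0], size=k)
x_hat = iht(A, A @ x0, k)
```

Each iteration costs only two matrix-vector products, which is what makes this family of algorithms attractive for large problems and GPU acceleration.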

Collaboration


Dive into Jared Tanner's collaboration.

Top Co-Authors

Ke Wei
University of California

Bubacarr Bah
École Polytechnique Fédérale de Lausanne