Publications


Featured research published by Galen Reeves.


Very Large Data Bases | 2009

Managing massive time series streams with multi-scale compressed trickles

Galen Reeves; Jie Liu; Suman Nath; Feng Zhao

We present Cypress, a novel framework to archive and query massive time series streams such as those generated by sensor networks, data centers, and scientific computing. Cypress applies multi-scale analysis to decompose time series and to obtain sparse representations in various domains (e.g. frequency domain and time domain). Relying on the sparsity, the time series streams can be archived with reduced storage space. We then show that many statistical queries such as trend, histogram and correlations can be answered directly from compressed data rather than from reconstructed raw data. Our evaluation with server utilization data collected from real data centers shows significant benefit of our framework.
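The transform-then-sparsify idea, and the point that some statistics can be answered directly from the compressed representation without reconstruction, can be sketched in a few lines. This is a toy illustration under assumed choices (an FFT basis, top-coefficient truncation, a mean query), not the Cypress system itself:

```python
import numpy as np

def compress_fft(x, keep):
    """Sparse representation: keep only the `keep` largest-magnitude FFT coefficients."""
    X = np.fft.rfft(x)
    idx = np.argsort(np.abs(X))[::-1][:keep]
    sparse = np.zeros_like(X)
    sparse[idx] = X[idx]
    return sparse

def mean_from_compressed(sparse, n):
    # The DC coefficient (index 0) alone gives the mean: no reconstruction needed.
    return sparse[0].real / n

rng = np.random.default_rng(0)
n = 512
t = np.arange(n)
# Synthetic "server utilization"-style stream: offset + periodic load + noise
x = 5 + np.sin(2 * np.pi * t / 64) + 0.1 * rng.standard_normal(n)

sparse = compress_fft(x, keep=8)          # 8 of 257 coefficients stored
est_mean = mean_from_compressed(sparse, n)
```

Here the dominant DC coefficient survives truncation, so the mean query is answered from the 8 stored coefficients rather than the 512 raw samples.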


International Symposium on Information Theory | 2008

Sampling bounds for sparse support recovery in the presence of noise

Galen Reeves; Michael Gastpar

It is well known that the support of a sparse signal can be recovered from a small number of random projections. However, in the presence of noise all known sufficient conditions require that the per-sample signal-to-noise ratio (SNR) grows without bound with the dimension of the signal. If the noise is due to quantization of the samples, this means that an unbounded rate per sample is needed. In this paper, it is shown that an unbounded SNR is also a necessary condition for perfect recovery, but any fraction (less than one) of the support can be recovered with bounded SNR. This means that a finite rate per sample is sufficient for partial support recovery. Necessary and sufficient conditions are given for both stochastic and non-stochastic signal models. This problem arises in settings such as compressive sensing, model selection, and signal denoising.
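The message that most of the support can be found at a fixed, bounded SNR is easy to see empirically with a simple correlation estimator. The dimensions, SNR value, signal model, and estimator below are illustrative assumptions, not the paper's constructions:

```python
import numpy as np

rng = np.random.default_rng(1)
n, k, m = 1000, 20, 400
snr = 10.0                      # fixed per-sample SNR; does not grow with n

# k-sparse signal with +/-1 nonzero entries
support = rng.choice(n, size=k, replace=False)
x = np.zeros(n)
x[support] = rng.choice(np.array([-1.0, 1.0]), size=k)

A = rng.standard_normal((m, n)) / np.sqrt(n)   # random projections
signal = A @ x
noise_std = np.sqrt(np.mean(signal**2) / snr)
y = signal + noise_std * rng.standard_normal(m)

# Correlation estimator: pick the k columns most aligned with y
scores = np.abs(A.T @ y)
est_support = np.argsort(scores)[::-1][:k]
frac_recovered = len(set(est_support) & set(support)) / k
```

With bounded SNR the recovered fraction is typically high but need not be exactly one, consistent with partial rather than perfect recovery.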


IEEE Transactions on Information Theory | 2012

The Sampling Rate-Distortion Tradeoff for Sparsity Pattern Recovery in Compressed Sensing

Galen Reeves; Michael Gastpar

Recovery of the sparsity pattern (or support) of an unknown sparse vector from a limited number of noisy linear measurements is an important problem in compressed sensing. In the high-dimensional setting, it is known that recovery with a vanishing fraction of errors is impossible if the measurement rate and the per-sample signal-to-noise ratio (SNR) are finite constants, independent of the vector length. In this paper, it is shown that recovery with an arbitrarily small but constant fraction of errors is, however, possible, and that in some cases computationally simple estimators are near-optimal. Bounds on the measurement rate needed to attain a desired fraction of errors are given in terms of the SNR and various key parameters of the unknown vector for several different recovery algorithms. The tightness of the bounds, in a scaling sense, as a function of the SNR and the fraction of errors, is established by comparison with existing information-theoretic necessary bounds. Near optimality is shown for a wide variety of practically motivated signal models.


IEEE Transactions on Information Theory | 2013

Approximate Sparsity Pattern Recovery: Information-Theoretic Lower Bounds

Galen Reeves; Michael Gastpar

Recovery of the sparsity pattern (or support) of an unknown sparse vector from a small number of noisy linear measurements is an important problem in compressed sensing. In this paper, the high-dimensional setting is considered. It is shown that if the measurement rate and per-sample signal-to-noise ratio (SNR) are finite constants independent of the length of the vector, then the optimal sparsity pattern estimate will have a constant fraction of errors. Lower bounds on the measurement rate needed to attain a desired fraction of errors are given in terms of the SNR and various key parameters of the unknown vector. The tightness of the bounds in a scaling sense, as a function of the SNR and the fraction of errors, is established by comparison with existing achievable bounds. Near optimality is shown for a wide variety of practically motivated signal models.


International Symposium on Information Theory | 2016

The replica-symmetric prediction for compressed sensing with Gaussian matrices is exact

Galen Reeves; Henry D. Pfister

This paper considers the fundamental limit of compressed sensing for i.i.d. signal distributions and i.i.d. Gaussian measurement matrices. Its main contribution is a rigorous characterization of the asymptotic mutual information (MI) and minimum mean-square error (MMSE) in this setting. Under mild technical conditions, our results show that the limiting MI and MMSE are equal to the values predicted by the replica method from statistical physics. This resolves a well-known problem that has remained open for over a decade.
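The replica prediction expresses the asymptotic MMSE through a fixed-point condition built from the MMSE of an equivalent scalar Gaussian channel. The sketch below evaluates that scalar-channel MMSE by Monte Carlo for an i.i.d. Bernoulli-Gaussian signal; it does not solve the fixed point, and the prior, sparsity level, and sample sizes are assumptions for illustration:

```python
import numpy as np

def mmse_bg(s, eps, n_samples=200_000, seed=0):
    """Monte Carlo MMSE for a Bernoulli(eps)-Gaussian signal X observed through
    the scalar channel Y = sqrt(s)*X + N, N ~ N(0,1)."""
    rng = np.random.default_rng(seed)
    active = rng.random(n_samples) < eps
    x = np.where(active, rng.standard_normal(n_samples), 0.0)
    y = np.sqrt(s) * x + rng.standard_normal(n_samples)
    # Closed-form posterior mean for this prior: P(active | y) times the
    # jointly-Gaussian conditional mean sqrt(s)/(1+s) * y
    phi0 = np.exp(-0.5 * y**2) / np.sqrt(2 * np.pi)
    phi1 = np.exp(-0.5 * y**2 / (1 + s)) / np.sqrt(2 * np.pi * (1 + s))
    post_active = eps * phi1 / ((1 - eps) * phi0 + eps * phi1)
    x_hat = post_active * np.sqrt(s) / (1 + s) * y
    return float(np.mean((x - x_hat) ** 2))
```

At zero SNR the MMSE equals the prior variance (here eps), and it decreases monotonically as the channel improves.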


Asilomar Conference on Signals, Systems and Computers | 2009

A note on optimal support recovery in compressed sensing

Galen Reeves; Michael Gastpar

Recovery of the support set (or sparsity pattern) of a sparse vector from a small number of noisy linear projections (or samples) is a “compressed sensing” problem that arises in signal processing and statistics. Although many computationally efficient recovery algorithms have been studied, the optimality (or gap from optimality) of these algorithms is, in general, not well understood. In this note, approximate support recovery under a Gaussian prior is considered, and it is shown that optimal estimation depends on the recovery metric in general. By contrast, it is shown that in the high- and low-SNR limits, there exist uniformly near-optimal estimators: the ML estimate in the high-SNR case, and a computationally trivial thresholding algorithm in the low-SNR case.
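At toy scale the two estimators mentioned here can be compared directly: ML support recovery (feasible only by exhaustive search over subsets) against trivial thresholding of correlations. The sizes, noise level, and fixed signal values below are illustrative assumptions chosen so the high-SNR behavior is visible:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(2)
n, k, m = 12, 2, 8

support = sorted(rng.choice(n, size=k, replace=False).tolist())
x = np.zeros(n)
x[support] = [1.5, -2.0]         # fixed nonzero values for a clean demo

A = rng.standard_normal((m, n))
y = A @ x + 0.05 * rng.standard_normal(m)   # high-SNR regime

def residual(S):
    """Norm of the part of y unexplained by the columns indexed by S."""
    AS = A[:, list(S)]
    coef, *_ = np.linalg.lstsq(AS, y, rcond=None)
    return float(np.linalg.norm(y - AS @ coef))

# ML support estimate: exhaustive search over all k-subsets (viable only for tiny n)
ml_support = min(combinations(range(n), k), key=residual)

# Trivial thresholding estimate: the k largest correlations |A^T y|
th_support = sorted(np.argsort(np.abs(A.T @ y))[-k:].tolist())
```

At high SNR the exhaustive ML search pins down the true support; the thresholding rule is the cheap alternative whose near-optimality the note establishes in the low-SNR limit.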


IEEE Transactions on Information Theory | 2016

Classification and Reconstruction of High-Dimensional Signals From Low-Dimensional Features in the Presence of Side Information

Francesco Renna; Liming Wang; Xin Yuan; Jianbo Yang; Galen Reeves; A. Robert Calderbank; Lawrence Carin; Miguel R. D. Rodrigues

This paper offers a characterization of fundamental limits on the classification and reconstruction of high-dimensional signals from low-dimensional features, in the presence of side information. We consider a scenario where a decoder has access both to linear features of the signal of interest and to linear features of the side information signal; while the side information may be in a compressed form, the objective is recovery or classification of the primary signal, not the side information. The signal of interest and the side information are each assumed to have (distinct) latent discrete labels; conditioned on these two labels, the signal of interest and side information are drawn from a multivariate Gaussian distribution that correlates the two. With joint probabilities on the latent labels, the overall signal-(side information) representation is defined by a Gaussian mixture model. By considering bounds to the misclassification probability associated with the recovery of the underlying signal label, and bounds to the reconstruction error associated with the recovery of the signal of interest itself, we then provide sharp sufficient and/or necessary conditions for these quantities to approach zero when the covariance matrices of the Gaussians are nearly low rank. These conditions, which are reminiscent of the well-known Slepian-Wolf and Wyner-Ziv conditions, are a function of the number of linear features extracted from the signal of interest, the number of linear features extracted from the side information signal, and the geometry of these signals and their interplay. Moreover, assuming that the signal of interest and the side information obey such an approximately low-rank model, we derive expansions of the reconstruction error as a function of the deviation from an exactly low-rank model; such expansions also allow the identification of operational regimes, where the impact of side information on signal reconstruction is most relevant.
Our framework, which offers a principled mechanism to integrate side information in high-dimensional data problems, is also tested in the context of imaging applications. In particular, we report state-of-the-art results in compressive hyperspectral imaging applications, where the accompanying side information is a conventional digital photograph.
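The base problem here, classifying a Gaussian-class signal from a small number of linear features, is easy to simulate. The sketch below omits the side-information channel for brevity and uses two fixed Gaussian classes; all dimensions, class means, and feature counts are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)
d = 50                           # ambient signal dimension
mu = np.zeros(d)
mu[0] = 3.0                      # class means differ by ||mu|| = 3
n_test = 4000

def accuracy(n_features):
    """Likelihood-ratio classification of x ~ N(c*mu, I_d) from y = Phi @ x."""
    Phi = rng.standard_normal((n_features, d))
    labels = rng.integers(0, 2, n_test)
    X = rng.standard_normal((n_test, d)) + labels[:, None] * mu
    Y = X @ Phi.T
    # Under class c, y ~ N(c * Phi @ mu, Phi @ Phi.T); compare the likelihoods
    S_inv = np.linalg.inv(Phi @ Phi.T)
    m1 = Phi @ mu
    llr = Y @ S_inv @ m1 - 0.5 * m1 @ S_inv @ m1   # constants cancel
    return float(np.mean((llr > 0) == (labels == 1)))

acc_few = accuracy(2)
acc_many = accuracy(20)
```

More linear features capture more of the class separation, so classification accuracy improves with the feature count, which is the dimension that the paper's sufficient/necessary conditions quantify.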


Information Theory Workshop | 2011

A compressed sensing wire-tap channel

Galen Reeves; Naveen Goela; Nebojsa Milosavljevic; Michael Gastpar

A multiplicative Gaussian wire-tap channel inspired by compressed sensing is studied. Lower and upper bounds on the secrecy capacity are derived, and shown to be relatively tight in the large system limit for a large class of compressed sensing matrices. Surprisingly, it is shown that the secrecy capacity of this channel is nearly equal to the capacity without any secrecy constraint provided that the channel of the eavesdropper is strictly worse than the channel of the intended receiver. In other words, the eavesdropper can see almost everything and yet learn almost nothing. This behavior, which contrasts sharply with that of many commonly studied wiretap channels, is made possible by the fact that a small number of linear projections can make a crucial difference in the ability to estimate sparse vectors.


International Symposium on Information Theory | 2013

The minimax noise sensitivity in compressed sensing

Galen Reeves; David L. Donoho

Consider the compressed sensing problem of estimating an unknown k-sparse n-vector from a set of m noisy linear equations. Recent work focused on the noise sensitivity of particular algorithms - the scaling of the reconstruction error with added noise. In this paper, we study the minimax noise sensitivity - the minimum is over all possible recovery algorithms and the maximum is over all vectors obeying a sparsity constraint. This fundamental quantity characterizes the difficulty of recovery when nothing is known about the vector other than the fact that it has at most k nonzero entries. Assuming random sensing matrices (i.i.d. Gaussian), we obtain non-asymptotic bounds which show that the minimax noise sensitivity is finite if m ≥ k + 3 and infinite if m ≤ k + 1. We also study the large system behavior where δ = m/n ∈ (0,1) denotes the undersampling fraction and k/n = ε ∈ (0,1) denotes the sparsity fraction. There is a phase transition separating successful and unsuccessful recovery: the minimax noise sensitivity is bounded for any δ > ε and is unbounded for any δ < ε. One consequence of our results is that the Bayes optimal phase transitions of Wu and Verdú can be obtained uniformly over the class of all sparse vectors.
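The sharp dependence on the measurement count near m = k can be illustrated with an oracle that knows the support: least squares on a k-column i.i.d. Gaussian matrix has noise sensitivity E[tr((AᵀA)⁻¹)] = k/(m − k − 1), which is finite only for m > k + 1, consistent with the paper's k + 1 and k + 3 thresholds. This oracle setup and the sizes below are assumptions for illustration, not the paper's minimax argument:

```python
import numpy as np

rng = np.random.default_rng(3)
k = 5

def oracle_noise_sensitivity(m, trials=3000):
    """Monte Carlo E[||x_hat - x||^2] / sigma^2 for least squares on the true
    support: the error is pinv(A) @ w, so the sensitivity is E[tr((A^T A)^{-1})]."""
    total = 0.0
    for _ in range(trials):
        A = rng.standard_normal((m, k))
        total += np.trace(np.linalg.inv(A.T @ A))
    return total / trials

sens_roomy = oracle_noise_sensitivity(m=k + 6)   # expect about k/(m-k-1) = 1.0
sens_tight = oracle_noise_sensitivity(m=k + 1)   # expectation is infinite
```

With a few extra measurements the sensitivity concentrates near k/(m − k − 1); at m = k + 1 the empirical average is dominated by near-singular draws and blows up.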


IEEE/SP 14th Workshop on Statistical Signal Processing | 2007

Differences Between Observation and Sampling Error in Sparse Signal Reconstruction

Galen Reeves; Michael Gastpar

The field of Compressed Sensing has shown that a relatively small number of random projections provide sufficient information to accurately reconstruct sparse signals. Inspired by applications in sensor networks in which each sensor is likely to observe a noisy version of a sparse signal and subsequently add sampling error through computation and communication, we investigate how the distortion differs depending on whether noise is introduced before sampling (observation error) or after sampling (sampling error). We analyze the optimal linear estimator (for known support) and an ℓ1-constrained linear inverse (for unknown support). In both cases, observation noise is shown to be less detrimental than sampling noise at low sampling rates. We also provide sampling bounds for a non-stochastic ℓ∞-bounded noise model.

Collaboration


Dive into Galen Reeves's collaborations.

Top Co-Authors


Michael Gastpar

École Polytechnique Fédérale de Lausanne
