Alyson K. Fletcher
University of California, Los Angeles
Publications
Featured research published by Alyson K. Fletcher.
IEEE Signal Processing Magazine | 2008
Vivek K. Goyal; Alyson K. Fletcher; Sundeep Rangan
Recent results in compressive sampling have shown that sparse signals can be recovered from a small number of random measurements. This property raises the question of whether random measurements can provide an efficient representation of sparse signals in an information-theoretic sense. Through both theoretical and experimental results, we show that encoding a sparse signal through simple scalar quantization of random measurements incurs a significant penalty relative to direct or adaptive encoding of the sparse signal. Information theory provides alternative quantization strategies, but they come at the cost of much greater estimation complexity.
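As a rough numerical illustration of the penalty this abstract describes (our sketch, not the paper's experiment; the dimensions and quantizer step are arbitrary), the following compares direct quantization of the nonzero coefficients against genie-aided least-squares decoding from quantized random measurements:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, m = 128, 4, 32          # ambient dimension, sparsity, measurements (illustrative)

def quantize(v, step):
    """Uniform scalar quantizer with step size `step`."""
    return step * np.round(v / step)

# k-sparse signal with Gaussian nonzeros on a random support
x = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x[support] = rng.standard_normal(k)
step = 0.05

# (a) Direct/adaptive encoding: quantize the nonzero coefficients themselves
x_direct = np.zeros(n)
x_direct[support] = quantize(x[support], step)

# (b) Scalar quantization of random measurements, genie-aided decoding:
#     least squares on the true support from quantized y = Ax
A = rng.standard_normal((m, n)) / np.sqrt(m)
y_q = quantize(A @ x, step)
coef, *_ = np.linalg.lstsq(A[:, support], y_q, rcond=None)
x_cs = np.zeros(n)
x_cs[support] = coef

print("direct-encoding MSE:   ", np.mean((x - x_direct) ** 2))
print("random-measurement MSE:", np.mean((x - x_cs) ** 2))
```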
IEEE Transactions on Information Theory | 2012
Sundeep Rangan; Alyson K. Fletcher; Vivek K. Goyal
The replica method is a nonrigorous but well-known technique from statistical physics used in the asymptotic analysis of large, random, nonlinear problems. This paper applies the replica method, under the assumption of replica symmetry, to study estimators that are maximum a posteriori (MAP) under a postulated prior distribution. It is shown that with random linear measurements and Gaussian noise, the replica-symmetric prediction of the asymptotic behavior of the postulated MAP estimate of an n-dimensional vector “decouples” as n scalar postulated MAP estimators. The result is based on applying a hardening argument to the replica analysis of postulated posterior mean estimators of Tanaka and of Guo and Verdú. The replica-symmetric postulated MAP analysis can be readily applied to many estimators used in compressed sensing, including basis pursuit, least absolute shrinkage and selection operator (LASSO), linear estimation with thresholding, and zero norm-regularized estimation. In the case of LASSO estimation, the scalar estimator reduces to a soft-thresholding operator, and for zero norm-regularized estimation, it reduces to a hard threshold. Among other benefits, the replica method provides a computationally tractable method for precisely predicting various performance metrics including mean-squared error and sparsity pattern recovery probability.
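For concreteness, the two scalar maps named in the abstract take the following standard form (the threshold tau is determined by the replica/state-evolution equations, which are not reproduced here):

```python
import numpy as np

def soft_threshold(r, tau):
    """Scalar LASSO step: the postulated MAP estimator under an l1 penalty."""
    return np.sign(r) * np.maximum(np.abs(r) - tau, 0.0)

def hard_threshold(r, tau):
    """Scalar zero-norm-regularized step: keep r only if it clears the threshold."""
    return np.where(np.abs(r) > tau, r, 0.0)

r = np.linspace(-3, 3, 7)
print(soft_threshold(r, 1.0))
print(hard_threshold(r, 1.0))
```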
International Symposium on Information Theory | 2014
Sundeep Rangan; Philip Schniter; Alyson K. Fletcher
Approximate message passing (AMP) methods and their variants have attracted considerable recent attention for the problem of estimating a random vector x observed through a linear transform A. In the case of large i.i.d. A, the methods exhibit fast convergence with precise analytic characterizations of the algorithm behavior. However, the convergence of AMP under general transforms is not fully understood. In this paper, we provide sufficient conditions for the convergence of a damped version of the generalized AMP (GAMP) algorithm in the case of Gaussian distributions. It is shown that, with sufficient damping, the algorithm can be guaranteed to converge, but the amount of damping grows with the peak-to-average ratio of the squared singular values of A. This condition explains the good performance of AMP methods on i.i.d. matrices, but also their difficulties with other classes of transforms. A related sufficient condition is then derived for the local stability of the damped GAMP method under more general (possibly non-Gaussian) distributions, assuming certain strict convexity conditions.
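The quantity governing the damping requirement is simple to compute. A small diagnostic sketch (the matrix sizes and the constructed ill-conditioned example are ours, for illustration only):

```python
import numpy as np

def peak_to_average(A):
    """Peak-to-average ratio of the squared singular values of A."""
    s2 = np.linalg.svd(A, compute_uv=False) ** 2
    return s2.max() / s2.mean()

rng = np.random.default_rng(0)
A_iid = rng.standard_normal((200, 400))
# Ill-conditioned transform of the same size: geometrically decaying spectrum
U, _, Vt = np.linalg.svd(A_iid, full_matrices=False)
A_bad = U @ np.diag(0.9 ** np.arange(200)) @ Vt

print("i.i.d. Gaussian:  ", peak_to_average(A_iid))   # modest ratio
print("decaying spectrum:", peak_to_average(A_bad))   # much larger ratio
```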
EURASIP Journal on Advances in Signal Processing | 2006
Alyson K. Fletcher; Sundeep Rangan; Vivek K. Goyal; Kannan Ramchandran
If a signal x is known to have a sparse representation with respect to a frame, it can be estimated from a noise-corrupted observation y by finding the best sparse approximation to y. Removing noise in this manner depends on the frame efficiently representing the signal while it inefficiently represents the noise. The mean-squared error (MSE) of this denoising scheme and the probability that the estimate has the same sparsity pattern as the original signal are analyzed. First, an MSE bound that depends on a new bound on approximating a Gaussian signal as a linear combination of elements of an overcomplete dictionary is given. Further analyses are for dictionaries generated randomly according to a spherically symmetric distribution and signals expressible with single dictionary elements. Easily computed approximations for the probability of selecting the correct dictionary element and the MSE are given. Asymptotic expressions reveal a critical input signal-to-noise ratio for signal recovery.
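For the single-dictionary-element case the abstract analyzes, a minimal sketch of the denoising rule, assuming unit-norm Gaussian atoms (all sizes and noise levels illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n, N = 16, 64                      # signal dimension, dictionary size (illustrative)

# Spherically symmetric random dictionary: unit-norm Gaussian atoms
D = rng.standard_normal((n, N))
D /= np.linalg.norm(D, axis=0)

# Signal expressible with a single dictionary element, plus Gaussian noise
a_true = 7
x = 3.0 * D[:, a_true]
y = x + 0.5 * rng.standard_normal(n)

# Best 1-sparse approximation: the atom with the largest |<y, d_a>|
corr = D.T @ y
a_hat = np.argmax(np.abs(corr))
x_hat = corr[a_hat] * D[:, a_hat]

print("correct atom selected:", a_hat == a_true)
print("denoised MSE:", np.mean((x - x_hat) ** 2), "| noise MSE:", np.mean((y - x) ** 2))
```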
Information Processing in Sensor Networks | 2004
Alyson K. Fletcher; Sundeep Rangan; Vivek K. Goyal
Due to constraints in cost, power, and communication, losses often arise in large sensor networks. The sensor data can be modeled as the output of a linear stochastic system with random losses of the output samples. This paper considers the general problem of state estimation for jump linear systems, where the discrete transitions are modeled as a Markov chain. Among other applications, this rich model can be used to analyze sensor networks; the sensor loss events are then modeled as Markov processes. Under the jump linear system model, many types of underlying losses can be easily considered, and the optimal estimator to be performed at the receiver in the presence of missing sensor data samples is given by a standard time-varying Kalman filter. We show that the asymptotic average estimation error variance converges and is given by the solution of a linear matrix inequality, which can be solved easily. Under this framework, any arbitrary Markov loss process can be modeled, and its average asymptotic error variance can be computed directly. We include a few illustrative examples, including fixed-length burst errors, a two-state model, and partial losses due to multiple SNR states. Our analysis encompasses modeling discrete changes not only in the received data, as stated above, but also in the underlying system. In the context of the lossy sensor model, the former allows for variation in sensor positioning, power control, and loss of data communications; the latter could allow for discrete changes in the dynamics of the variable monitored by the sensor. This freedom in modeling yields a tool that is potentially valuable in various scenarios in which entities that share information are subjected to challenging and time-varying network conditions.
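A scalar toy instance of this setup, assuming a two-state Markov loss channel with parameters of our choosing; the paper's LMI computes the asymptotic error variance exactly, whereas this sketch only simulates it:

```python
import numpy as np

rng = np.random.default_rng(0)
a, Q, R = 0.95, 1.0, 0.5           # scalar dynamics, process/measurement noise
# Two-state Markov loss channel: state 0 = sample received, 1 = sample lost
P_loss = np.array([[0.9, 0.1],     # transition probs from "received"
                   [0.5, 0.5]])    # transition probs from "lost"

T = 2000
x, x_hat, P = 0.0, 0.0, 1.0
state, err2 = 0, []
for _ in range(T):
    x = a * x + np.sqrt(Q) * rng.standard_normal()
    state = rng.choice(2, p=P_loss[state])
    # Time-varying Kalman filter: always predict, update only if the sample arrives
    x_pred, P_pred = a * x_hat, a * a * P + Q
    if state == 0:                          # sample received
        y = x + np.sqrt(R) * rng.standard_normal()
        K = P_pred / (P_pred + R)
        x_hat, P = x_pred + K * (y - x_pred), (1 - K) * P_pred
    else:                                   # sample lost: no measurement update
        x_hat, P = x_pred, P_pred
    err2.append((x - x_hat) ** 2)

print("empirical average error variance:", np.mean(err2))
```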
Neural Information Processing Systems | 2014
Ulugbek S. Kamilov; Sundeep Rangan; Alyson K. Fletcher; Michael Unser
We consider the estimation of an independent and identically distributed (i.i.d.) (possibly non-Gaussian) vector x ∈ R^n from measurements y ∈ R^m obtained by a general cascade model consisting of a known linear transform followed by a probabilistic componentwise (possibly nonlinear) measurement channel. A novel method, called adaptive generalized approximate message passing (adaptive GAMP), is presented. It enables the joint learning of the statistics of the prior and measurement channel along with estimation of the unknown vector x. We prove that, for large i.i.d. Gaussian transform matrices, the asymptotic componentwise behavior of adaptive GAMP is predicted by a simple set of scalar state evolution equations. In addition, we show that adaptive GAMP yields asymptotically consistent parameter estimates when a certain maximum-likelihood estimation can be performed in each step. This implies that the algorithm achieves a reconstruction quality equivalent to the oracle algorithm that knows the correct parameter values. Remarkably, this result applies to essentially arbitrary parametrizations of the unknown distributions, including nonlinear and non-Gaussian ones. The adaptive GAMP methodology thus provides a systematic, general, and computationally efficient method applicable to a large range of linear-nonlinear models with provable guarantees.
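The following is a minimal AMP-style sketch with a crude data-driven threshold; it is not the paper's adaptive GAMP (which learns parameters via per-iteration maximum likelihood), and all sizes and the threshold rule are illustrative:

```python
import numpy as np

def soft(r, tau):
    return np.sign(r) * np.maximum(np.abs(r) - tau, 0.0)

def amp_adaptive(A, y, n_iter=30, theta=1.5):
    """AMP for sparse recovery with a threshold set from the residual energy
    (a simple stand-in for learned parameters)."""
    m, n = A.shape
    x, z = np.zeros(n), y.copy()
    for _ in range(n_iter):
        tau = theta * np.linalg.norm(z) / np.sqrt(m)   # estimate effective noise level
        x_new = soft(x + A.T @ z, tau)
        # Onsager correction keeps the effective noise approximately Gaussian
        z = y - A @ x_new + (z / m) * np.count_nonzero(x_new)
        x = x_new
    return x

rng = np.random.default_rng(0)
n, m, k = 400, 200, 20
A = rng.standard_normal((m, n)) / np.sqrt(m)
x0 = np.zeros(n); x0[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ x0 + 0.01 * rng.standard_normal(m)
x_hat = amp_adaptive(A, y)
print("relative error:", np.linalg.norm(x_hat - x0) / np.linalg.norm(x0))
```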
International Conference on Acoustics, Speech, and Signal Processing | 2007
Alyson K. Fletcher; Sundeep Rangan; Vivek K. Goyal
Encouraging recent results in compressed sensing or compressive sampling suggest that a set of inner products with random measurement vectors forms a good representation of a source vector that is known to be sparse in some fixed basis. With quantization of these inner products, the encoding can be considered universal for sparse signals with known sparsity level. We analyze the operational rate-distortion performance of such source coding both with genie-aided knowledge of the sparsity pattern and maximum likelihood estimation of the sparsity pattern. We show that random measurements induce an additive logarithmic rate penalty, i.e., at high rates the performance with rate R + O(log R) and random measurements is equal to the performance with rate R and deterministic measurements matched to the source.
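Under Gaussian noise, maximum likelihood estimation of the sparsity pattern reduces to an exhaustive projection search, as in the sketch below (our rendering; the combinatorial cost limits it to toy sizes):

```python
import numpy as np
from itertools import combinations

def ml_support(A, y, k):
    """ML sparsity pattern under Gaussian noise: the k-subset of columns
    whose span best explains y (exhaustive search over (n choose k) subsets)."""
    best, best_res = None, np.inf
    for S in combinations(range(A.shape[1]), k):
        cols = A[:, S]
        coef, *_ = np.linalg.lstsq(cols, y, rcond=None)
        res = np.linalg.norm(y - cols @ coef)
        if res < best_res:
            best, best_res = S, res
    return best

rng = np.random.default_rng(0)
n, m, k = 12, 8, 2                  # tiny sizes; the search cost is (n choose k)
A = rng.standard_normal((m, n)) / np.sqrt(m)
x = np.zeros(n); S_true = (3, 7); x[list(S_true)] = (1.0, -1.5)
y = A @ x + 0.05 * rng.standard_normal(m)
print("ML support:", ml_support(A, y, k), "| true support:", S_true)
```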
International Symposium on Information Theory | 2013
Sundeep Rangan; Philip Schniter; Erwin Riegler; Alyson K. Fletcher; Volkan Cevher
The estimation of a random vector with independent components passed through a linear transform followed by a componentwise (possibly nonlinear) output map arises in a range of applications. Approximate message passing (AMP) methods, based on Gaussian approximations of loopy belief propagation, have recently attracted considerable attention for such problems. For large random transforms, these methods exhibit fast convergence and admit precise analytic characterizations with testable conditions for optimality, even for certain non-convex problem instances. However, the behavior of AMP under general transforms is not fully understood. In this paper, we consider the generalized AMP (GAMP) algorithm and relate the method to more common optimization techniques. This analysis enables a precise characterization of the fixed points of the GAMP algorithm that applies to arbitrary transforms. In particular, we show that the fixed points of the so-called max-sum GAMP algorithm for MAP estimation are critical points of a constrained maximization of the posterior density. The fixed points of the sum-product GAMP algorithm for estimation of the posterior marginals can be interpreted as critical points of a certain mean-field variational optimization.
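In notation assumed here (not taken from the paper), writing the posterior as p(x | y) proportional to p_X(x) p_{Y|Z}(y | Ax), the max-sum statement can be paraphrased: the fixed points are critical points of

```latex
\max_{\mathbf{x},\,\mathbf{z}} \;\; \log p_{\mathbf{X}}(\mathbf{x})
  \;+\; \log p_{\mathbf{Y}\mid\mathbf{Z}}(\mathbf{y}\mid\mathbf{z})
\qquad \text{subject to} \quad \mathbf{z} = \mathbf{A}\mathbf{x}.
```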
IEEE Journal of Selected Topics in Signal Processing | 2007
Alyson K. Fletcher; Sundeep Rangan; Vivek K. Goyal; Kannan Ramchandran
Predictive quantization is a simple and effective method for encoding slowly varying signals that is widely used in speech and audio coding. It has been known qualitatively that leaving correlation in the encoded samples can lead to improved estimation at the decoder when encoded samples are subject to erasure; however, performance estimation in this case has required Monte Carlo simulation. Provided here is a novel method for efficiently computing the mean-squared error performance of a predictive quantization system with erasures via a convex optimization with linear matrix inequality constraints. The method is based on jump linear system modeling and applies to any autoregressive moving average (ARMA) signal source and any erasure channel described by an aperiodic and irreducible Markov chain. In addition to this quantification for a given encoder filter, a method is presented to design the encoder filter to minimize the reconstruction error. Optimization of the encoder filter is a nonconvex problem, but we are able to parameterize, with a single scalar, a set of encoder filters that yield low MSE. The design method reduces the prediction gain in the filter, leaving redundancy in the signal for robustness. This illuminates the basic tradeoff between compression and robustness.
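A toy DPCM simulation of this compression-robustness tradeoff (our sketch: a one-tap predictor, i.i.d. erasures rather than a general Markov channel, and a fixed quantizer step, so rate differences between the two settings are ignored):

```python
import numpy as np

rng = np.random.default_rng(0)

def dpcm_encode(x, a, step):
    """One-tap predictive (DPCM) encoder: quantize prediction residuals.
    Smaller |a| reduces prediction gain, leaving more correlation
    (redundancy) in the encoded stream to aid erasure recovery."""
    q, pred = np.empty_like(x), 0.0
    for t, xt in enumerate(x):
        q[t] = step * np.round((xt - a * pred) / step)
        pred = a * pred + q[t]          # track the decoder-side reconstruction
    return q

def dpcm_decode(q, a, erased):
    """Decoder that holds its prediction whenever a sample is erased."""
    x_hat, pred = np.empty_like(q), 0.0
    for t in range(len(q)):
        pred = a * pred + (0.0 if erased[t] else q[t])
        x_hat[t] = pred
    return x_hat

# AR(1) source with i.i.d. erasures (the paper treats general ARMA sources
# and Markov erasure channels)
T, rho = 5000, 0.95
x = np.empty(T); x[0] = rng.standard_normal()
for t in range(1, T):
    x[t] = rho * x[t - 1] + np.sqrt(1 - rho**2) * rng.standard_normal()
erased = rng.random(T) < 0.1

for a in (rho, 0.5):                    # full prediction gain vs. reduced gain
    x_hat = dpcm_decode(dpcm_encode(x, a, step=0.05), a, erased)
    print(f"a={a}: MSE with erasures = {np.mean((x - x_hat) ** 2):.4f}")
```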
International Symposium on Information Theory | 2012
Sundeep Rangan; Alyson K. Fletcher
We consider the problem of estimating a rank-one matrix in Gaussian noise under a probabilistic model for the left and right factors of the matrix. The probabilistic model can impose constraints on the factors including sparsity and positivity that arise commonly in learning problems. We propose a simple iterative procedure that reduces the problem to a sequence of scalar estimation computations. The method is similar to approximate message passing techniques based on Gaussian approximations of loopy belief propagation that have been used recently in compressed sensing. Leveraging analysis methods by Bayati and Montanari, we show that the asymptotic behavior of the estimates from the proposed iterative procedure is described by a simple scalar equivalent model, where the distribution of the estimates is identical to certain scalar estimates of the variables in Gaussian noise. Moreover, the effective Gaussian noise level is described by a set of state evolution equations. The proposed method thus provides a computationally simple and general method for rank-one estimation problems with a precise analysis in certain high-dimensional settings.
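A simplified version of such an iteration, with the Onsager correction of the AMP-style method omitted and a positivity-enforcing scalar step standing in for the general denoiser (all parameters illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 300, 200

# Rank-one signal with nonnegative factors, plus Gaussian noise
u0 = np.abs(rng.standard_normal(m))
v0 = np.abs(rng.standard_normal(n))
Y = np.outer(u0, v0) / np.sqrt(n) + 0.5 * rng.standard_normal((m, n))

def denoise_nonneg(r):
    """Componentwise scalar step enforcing the positivity constraint."""
    return np.maximum(r, 0.0)

# Alternating scalar-denoised power iteration
v = rng.standard_normal(n)
for _ in range(50):
    u = denoise_nonneg(Y @ v);  u /= np.linalg.norm(u)
    v = denoise_nonneg(Y.T @ u); v /= np.linalg.norm(v)

cos_u = abs(u @ u0) / np.linalg.norm(u0)
cos_v = abs(v @ v0) / np.linalg.norm(v0)
print("alignment with true factors:", cos_u, cos_v)
```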