Vamsi K. Potluru
University of New Mexico
Publications
Featured research published by Vamsi K. Potluru.
PLOS ONE | 2013
Vince D. Calhoun; Vamsi K. Potluru; Ronald Phlypo; Rogers F. Silva; Barak A. Pearlmutter; Arvind Caprihan; Sergey M. Plis; Tülay Adali
A recent paper by Daubechies et al. claims that two independent component analysis (ICA) algorithms, Infomax and FastICA, which are widely used for functional magnetic resonance imaging (fMRI) analysis, select for sparsity rather than independence. The argument was supported by a series of experiments on synthetic data. We show that these experiments fall short of proving this claim and that the ICA algorithms are indeed doing what they are designed to do: identify maximally independent sources.
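The independence-seeking behavior the authors defend can be illustrated in a toy setting. Below is a minimal FastICA-style sketch (tanh nonlinearity, deflation), written for illustration only, not the exact Infomax or FastICA implementations analyzed in the paper: it recovers two independent non-Gaussian sources from their linear mixtures.

```python
import numpy as np

def fastica(X, iters=200, seed=0):
    """Minimal FastICA-style estimator (tanh nonlinearity, deflation):
    seeks maximally independent components in whitened mixtures."""
    rng = np.random.default_rng(seed)
    X = X - X.mean(axis=1, keepdims=True)
    d, E = np.linalg.eigh(np.cov(X))            # whiten the data
    Xw = E @ np.diag(d ** -0.5) @ E.T @ X
    n = X.shape[0]
    W = np.zeros((n, n))
    for i in range(n):
        w = rng.standard_normal(n)
        w /= np.linalg.norm(w)
        for _ in range(iters):
            wx = w @ Xw
            w_new = (Xw * np.tanh(wx)).mean(axis=1) - (1 - np.tanh(wx) ** 2).mean() * w
            w_new -= W[:i].T @ (W[:i] @ w_new)  # deflation: stay orthogonal to found rows
            w = w_new / np.linalg.norm(w_new)
        W[i] = w
    return W @ Xw

# Mix two independent non-Gaussian sources and recover them
t = np.linspace(0, 8, 2000)
S = np.vstack([np.sign(np.sin(3 * t)),
               np.random.default_rng(1).uniform(-1, 1, t.size)])
Y = fastica(np.array([[1.0, 0.5], [0.5, 1.0]]) @ S)
```

Each recovered row of `Y` should correlate strongly (up to sign and permutation) with one of the true sources in `S`.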
International Symposium on Circuits and Systems | 2008
Vamsi K. Potluru; Vince D. Calhoun
Non-negative matrix factorization (NMF) has increasingly been used as a tool in signal processing over the last few years. NMF, like independent component analysis (ICA), is useful for decomposing high-dimensional data sets into a lower-dimensional space. Here, we use NMF to learn the features of both structural and functional magnetic resonance imaging (sMRI/fMRI) data. NMF can be applied to perform group analysis of imaging data, and we apply it to learn the spatial patterns that linearly covary among subjects for both sMRI and fMRI. We add an additional contrast term to NMF (called co-NMF) to identify features that distinguish between two groups. We apply our approach to a dataset consisting of schizophrenia patients and healthy controls. The results from co-NMF accord with expectations and improve upon the NMF results. Our method is general and may prove to be a useful tool for identifying differences between multiple groups.
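The NMF decomposition underlying this work can be sketched with the standard Lee-Seung multiplicative updates for the Frobenius objective. This is a generic illustration, not the co-NMF variant with the contrast term proposed in the paper.

```python
import numpy as np

def nmf(V, k, iters=200, seed=0):
    """Lee-Seung multiplicative updates for V ~ W @ H under the Frobenius loss."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, k)) + 0.1
    H = rng.random((k, m)) + 0.1
    eps = 1e-12                                # guard against division by zero
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Toy data of nonnegative rank 2: row 2 is a multiple of row 1
V = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0],
              [1.0, 1.0, 1.0]])
W, H = nmf(V, k=2)
err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
```

The multiplicative form keeps `W` and `H` nonnegative automatically, since every update multiplies by a ratio of nonnegative quantities.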
Southwest Symposium on Image Analysis and Interpretation | 2008
Vamsi K. Potluru; Sergey M. Plis; Vince D. Calhoun
Non-negative matrix factorization (NMF) has increasingly been used for efficiently decomposing multivariate data into a signal dictionary and corresponding activations. In this paper, we propose an algorithm called sparse shift-invariant NMF (ssiNMF) for learning possibly overcomplete shift-invariant features. This is done by incorporating a circulant property on the features and sparsity constraints on the activations. The circulant property allows us to capture shifts in the features and enables efficient computation by the Fast Fourier Transform. The ssiNMF algorithm turns out to be matrix-free, since we need to store only a small number of features. We demonstrate this on a dataset generated from an overcomplete set of bars.
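The FFT trick that makes the circulant structure efficient rests on a standard fact: multiplying by a circulant matrix is a circular convolution, computable in O(n log n). A minimal sketch of that building block (not the full ssiNMF algorithm):

```python
import numpy as np

def circulant_matvec(c, x):
    """Multiply the circulant matrix C(c), with C[i, j] = c[(i - j) % n],
    by x via the FFT: C @ x is the circular convolution of c and x."""
    return np.real(np.fft.ifft(np.fft.fft(c) * np.fft.fft(x)))

# Check against the explicitly constructed circulant matrix
c = np.array([1.0, 2.0, 0.0, 0.0])     # first column: a shiftable "feature"
x = np.array([1.0, 0.0, 0.0, 1.0])     # sparse activations
n = len(c)
C = np.array([[c[(i - j) % n] for j in range(n)] for i in range(n)])
assert np.allclose(C @ x, circulant_matvec(c, x))
```

Because only the first column `c` is needed to represent the whole matrix, algorithms built on this product never have to materialize `C`, which is what "matrix-free" refers to above.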
International Symposium on Neural Networks | 2011
Vamsi K. Potluru; Sergey M. Plis; Shuang Luan; Vince D. Calhoun; Thomas P. Hayes
Nonnegative Least Squares (NNLS) is a general form for many important problems. We consider a special case of NNLS where the input is nonnegative, called Totally Nonnegative Least Squares (TNNLS) in the literature. We show a reduction of TNNLS to a single-class Support Vector Machine (SVM), thus relating the sparsity of a TNNLS solution to the sparsity of supports in an SVM. This allows us to apply any SVM solver to the TNNLS problem. We obtain an order-of-magnitude improvement in running time: first, a fast approximate SVM solver reduces the original problem to a smaller one with the same solution; then, an exact NNLS solver computes the solution to the reduced problem. We present experimental evidence that this approach improves the performance of state-of-the-art NNLS solvers on both randomly generated problems and real datasets from radiation therapy dose calculation for cancer patients.
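The sparse solutions that the TNNLS-SVM connection exploits can be seen in a small example. This sketch solves NNLS by plain projected gradient descent, a generic stand-in for the SVM-based pipeline described above, on a tiny totally nonnegative problem whose exact solution is (1, 0).

```python
import numpy as np

def nnls_pgd(A, b, iters=1000):
    """Projected gradient descent for min ||Ax - b||^2 subject to x >= 0."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the (half-)gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = np.maximum(0.0, x - A.T @ (A @ x - b) / L)
    return x

# Totally nonnegative input. The exact solution is x = (1, 0): the zero
# entries play a role analogous to non-support vectors in an SVM.
A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [0.0, 1.0]])
b = np.array([1.0, 1.0, 0.0])
x = nnls_pgd(A, b)   # -> approximately [1.0, 0.0]
```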
Signal Processing Systems | 2011
Sergey M. Plis; Vamsi K. Potluru; Terran Lane; Vince D. Calhoun
Non-negative matrix factorization (NMF) is the problem of decomposing multivariate data into a set of features and their corresponding activations. When applied to experimental data, NMF has to cope with noise, which is often highly correlated. We show that correlated noise can break the Donoho and Stodden separability conditions of a dataset, in which case a regular NMF algorithm will fail to decompose it, even when given the freedom to represent the noise as a separate feature. To cope with this issue, we present an algorithm for NMF with a generalized least squares objective function (glsNMF), derive multiplicative updates for the method, and prove their convergence. The new algorithm successfully recovers the true representation from the noisy data. This robust performance can make glsNMF a valuable tool for analyzing empirical data.
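One standard form of a generalized least squares objective for matrix data is tr((V - WH)^T Sigma^{-1} (V - WH)), where Sigma is the noise covariance; it reduces to the ordinary Frobenius loss when Sigma is the identity. A minimal sketch of evaluating this objective (the multiplicative updates and convergence proof are the paper's contribution and are not reproduced here):

```python
import numpy as np

def gls_nmf_objective(V, W, H, Sigma_inv):
    """GLS loss tr((V - WH)^T Sigma^{-1} (V - WH)); equals the plain
    Frobenius loss ||V - WH||_F^2 when Sigma is the identity."""
    R = V - W @ H
    return np.trace(R.T @ Sigma_inv @ R)

# Sanity check: identity covariance recovers the ordinary least-squares loss
rng = np.random.default_rng(0)
V, W, H = rng.random((4, 5)), rng.random((4, 2)), rng.random((2, 5))
obj = gls_nmf_objective(V, W, H, np.eye(4))
```

Weighting residuals by the inverse noise covariance is what lets the factorization discount directions in which the correlated noise lives.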
International Conference on Data Mining | 2011
Vamsi K. Potluru
Support Vector Machines (SVM) and Nonnegative Matrix Factorization (NMF) are standard tools for data analysis. We explore the connections between these two problems, enabling us to import algorithms from the SVM world to solve NMF and vice versa. In particular, one such algorithm developed to solve SVM is adapted to solve NMF. Empirical results show that this new algorithm is competitive with state-of-the-art NMF solvers.
International Workshop on Machine Learning for Signal Processing | 2009
Sergey M. Plis; Vamsi K. Potluru; Vince D. Calhoun; Terran Lane
Non-negative matrix factorization (NMF) is an algorithm for decomposing multivariate data into a signal dictionary and its corresponding activations. When applied to experimental data, NMF has to cope with noise, which is often highly correlated. We show that correlated noise can break the Donoho and Stodden separability conditions of a dataset, in which case a regular NMF algorithm will fail to decompose it, even when given the freedom to represent the noise as a separate feature. To cope with this issue, we present an algorithm for NMF with a generalized least squares objective function (glsNMF), derive multiplicative updates for the method, and prove their convergence. The new algorithm successfully recovers the true representation from the noisy data. This robust performance can make glsNMF a valuable tool for analyzing empirical data.
International Conference on Learning Representations | 2013
Vamsi K. Potluru; Sergey M. Plis; Jonathan Le Roux; Barak A. Pearlmutter; Vince D. Calhoun; Thomas P. Hayes
NeuroImage | 2014
Sergey M. Plis; Jing Sui; Terran Lane; Sushmita Roy; Vincent P. Clark; Vamsi K. Potluru; Rene J. Huster; Andrew M. Michael; Scott R. Sponheim; Michael P. Weisend; Vince D. Calhoun