Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Vamsi K. Ithapu is active.

Publication


Featured researches published by Vamsi K. Ithapu.


Human Brain Mapping | 2014

Extracting and summarizing white matter hyperintensities using supervised segmentation methods in Alzheimer’s disease risk and aging studies

Vamsi K. Ithapu; Vikas Singh; Christopher Lindner; Benjamin P. Austin; Chris Hinrichs; Cynthia M. Carlsson; Barbara B. Bendlin; Sterling C. Johnson

Precise detection and quantification of white matter hyperintensities (WMH) observed in T2‐weighted Fluid Attenuated Inversion Recovery (FLAIR) Magnetic Resonance Images (MRI) is of substantial interest in aging, and age‐related neurological disorders such as Alzheimer's disease (AD). This is mainly because WMH may reflect co‐morbid neural injury or cerebral vascular disease burden. WMH in the older population may be small, diffuse, and irregular in shape, and sufficiently heterogeneous within and across subjects. Here, we pose hyperintensity detection as a supervised inference problem and adapt two learning models, specifically, Support Vector Machines and Random Forests, for this task. Using texture features engineered by texton filter banks, we provide a suite of effective segmentation methods for this problem. Through extensive evaluations on healthy middle‐aged and older adults who vary in AD risk, we show that our methods are reliable and robust in segmenting hyperintense regions. A measure of hyperintensity accumulation, referred to as normalized effective WMH volume, is shown to be associated with dementia in older adults and parental family history in cognitively normal subjects. We provide an open source library for hyperintensity detection and accumulation (interfaced with existing neuroimaging tools), that can be adapted for segmentation problems in other neuroimaging studies. Hum Brain Mapp 35:4219–4235, 2014.
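A minimal sketch of the supervised voxel-wise segmentation idea described above. The paper uses texton filter-bank features and evaluates both SVMs and Random Forests on FLAIR MRI; this toy substitutes synthetic 2D data and simple local-mean features for the texton bank, and evaluates on the training image only, so it illustrates the pipeline shape rather than the paper's method.

```python
import numpy as np
from scipy.ndimage import uniform_filter
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic "FLAIR-like" slice: dim background plus one bright lesion patch.
img = rng.normal(0.2, 0.05, size=(64, 64))
mask = np.zeros((64, 64), dtype=bool)
mask[20:30, 20:30] = True
img[mask] += 0.8                       # hyperintense region

# Per-voxel features: raw intensity plus two local means as crude texture cues
# (a stand-in for the texton filter-bank responses used in the paper).
feats = np.stack([img,
                  uniform_filter(img, size=3),
                  uniform_filter(img, size=7)], axis=-1).reshape(-1, 3)
labels = mask.ravel()

clf = RandomForestClassifier(n_estimators=50, random_state=0)
clf.fit(feats, labels)
pred = clf.predict(feats).reshape(64, 64).astype(bool)

# Dice overlap between predicted and true lesion masks
dice = 2 * (pred & mask).sum() / (pred.sum() + mask.sum())
```

In a real study the classifier would be trained on labeled scans from some subjects and applied to held-out subjects, with the predicted mask summarized into a volume measure per subject.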


Alzheimer's & Dementia | 2015

Imaging-based enrichment criteria using deep learning algorithms for efficient clinical trials in mild cognitive impairment.

Vamsi K. Ithapu; Vikas Singh; Ozioma C. Okonkwo; Rick Chappell; N. Maritza Dowling; Sterling C. Johnson

The mild cognitive impairment (MCI) stage of Alzheimer's disease (AD) may be optimal for clinical trials to test potential treatments for preventing or delaying decline to dementia. However, MCI is heterogeneous in that not all cases progress to dementia within the time frame of a trial and some may not have underlying AD pathology. Identifying those MCI subjects who are most likely to decline during a trial, and thus most likely to benefit from treatment, will improve trial efficiency and power to detect treatment effects. To this end, using multimodal, imaging‐derived inclusion criteria may be especially beneficial. Here, we present a novel multimodal imaging marker that predicts future cognitive and neural decline from [F‐18]fluorodeoxyglucose positron emission tomography (PET), amyloid florbetapir PET, and structural magnetic resonance imaging, based on a new deep learning algorithm (randomized denoising autoencoder marker, rDAm). Using ADNI2 MCI data, we show that using rDAm as a trial enrichment criterion reduces the required sample estimates by at least five times compared with the no‐enrichment regime and leads to smaller trials with high statistical power, compared with existing methods.
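The enrichment logic in this abstract reduces to standard sample-size arithmetic: enrolling likely decliners raises the expected treatment effect, which shrinks the required trial size. A hedged sketch of that calculation, using the classic two-arm normal-approximation formula with purely illustrative numbers (none taken from the paper):

```python
from scipy.stats import norm

def per_arm_n(delta, sigma, alpha=0.05, power=0.8):
    """Required subjects per arm for a two-arm trial, normal approximation:
    n = 2 * (sigma * (z_{1-alpha/2} + z_{power}) / delta) ** 2."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return 2 * (sigma * z / delta) ** 2

# Illustrative: suppose enrichment triples the expected decline (treatment
# effect delta) among enrolled subjects at the same outcome variance.
n_plain = per_arm_n(delta=0.5, sigma=3.0)
n_enriched = per_arm_n(delta=1.5, sigma=3.0)
```

Since n scales as 1/delta², tripling the expected effect cuts the required sample by a factor of nine, which is the mechanism behind the "at least five times" reduction the abstract reports.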


Scientific Reports | 2016

Relative vascular permeability and vascularity across different regions of the rat nasal mucosa: implications for nasal physiology and drug delivery.

Niyanta N. Kumar; Mohan Gautam; Jeffrey J. Lochhead; Daniel J. Wolak; Vamsi K. Ithapu; Vikas Singh; Robert G. Thorne

Intranasal administration provides a non-invasive drug delivery route that has been proposed to target macromolecules either to the brain via direct extracellular cranial nerve-associated pathways or to the periphery via absorption into the systemic circulation. Delivering drugs to nasal regions that have lower vascular density and/or permeability may allow more drug to access the extracellular cranial nerve-associated pathways and therefore favor delivery to the brain. However, relative vascular permeabilities of the different nasal mucosal sites have not yet been reported. Here, we determined that the relative capillary permeability to hydrophilic macromolecule tracers is significantly greater in nasal respiratory regions than in olfactory regions. Mean capillary density in the nasal mucosa was also approximately 5-fold higher in nasal respiratory regions than in olfactory regions. Applying capillary pore theory and normalization to our permeability data yielded mean pore diameter estimates ranging from 13–17 nm for the nasal respiratory vasculature compared to <10 nm for the vasculature in olfactory regions. The results suggest lymphatic drainage for CNS immune responses may be favored in olfactory regions due to relatively lower clearance to the bloodstream. Lower blood clearance may also provide a reason to target the olfactory area for drug delivery to the brain.


International Conference on Computer Vision | 2015

An NMF Perspective on Binary Hashing

Lopamudra Mukherjee; Sathya N. Ravi; Vamsi K. Ithapu; Tyler Holmes; Vikas Singh

The pervasiveness of massive data repositories has led to much interest in efficient methods for indexing, search, and retrieval. For image data, a rapidly developing body of work for these applications shows impressive performance with methods that broadly fall under the umbrella term of Binary Hashing. Given a distance matrix, a binary hashing algorithm solves for a binary code for the given set of examples, whose Hamming distance nicely approximates the original distances. The formulation is non-convex, so existing solutions adopt spectral relaxations or perform coordinate descent (or quantization) on a surrogate objective that is numerically more tractable. In this paper, we first derive an Augmented Lagrangian approach to optimize the standard binary hashing objective (i.e., maintain fidelity with a given distance matrix). With appropriate step sizes, we find that this scheme already yields results that match or substantially outperform state-of-the-art methods on most benchmarks used in the literature. Then, to allow the model to scale to large datasets, we obtain an interesting reformulation of the binary hashing objective as a nonnegative matrix factorization. This leads to a simple multiplicative updates algorithm, whose parallelization properties are exploited to obtain a fast GPU based implementation. We give a probabilistic analysis of our initialization scheme and present a range of experiments to show that the method is simple to implement and competes favorably with available methods (both for optimization and generalization).
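A hedged sketch of the multiplicative-updates idea the abstract mentions, shown on plain nonnegative matrix factorization (minimize ||X − WH||_F² with W, H ≥ 0 via the Lee–Seung updates). The paper's reformulation of the binary hashing objective as an NMF, and its GPU parallelization, are more involved than this toy:

```python
import numpy as np

rng = np.random.default_rng(0)

# Planted nonnegative low-rank data, so an exact rank-4 factorization exists.
A = rng.random((20, 4))
B = rng.random((4, 15))
X = A @ B

k = 4
W = rng.random((20, k)) + 0.1
H = rng.random((k, 15)) + 0.1

eps = 1e-9
err0 = np.linalg.norm(X - W @ H) / np.linalg.norm(X)
for _ in range(500):
    # Lee-Seung multiplicative updates: elementwise ratios keep W and H
    # nonnegative while monotonically decreasing ||X - WH||_F^2.
    H *= (W.T @ X) / (W.T @ W @ H + eps)
    W *= (X @ H.T) / (W @ H @ H.T + eps)
err = np.linalg.norm(X - W @ H) / np.linalg.norm(X)
```

The appeal for hashing-scale problems is that each update is a handful of dense matrix products, which map directly onto GPU kernels.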


Medical Image Computing and Computer-Assisted Intervention | 2014

Randomized denoising autoencoders for smaller and efficient imaging based AD clinical trials.

Vamsi K. Ithapu; Vikas Singh; Ozioma C. Okonkwo; Sterling C. Johnson

There is a growing body of research devoted to designing imaging-based biomarkers that identify Alzheimer's disease (AD) in its prodromal stage using statistical machine learning methods. Recently several authors investigated how clinical trials for AD can be made more efficient (i.e., smaller sample size) using predictive measures from such classification methods. In this paper, we explain why predictive measures given by such SVM-type objectives may be less than ideal for use in the setting described above. We give a solution based on a novel deep learning model, randomized denoising autoencoders (rDA), which regresses on training labels y while also accounting for the variance, a property which is very useful for clinical trial design. Our results give strong improvements in sample size estimates over strategies based on multi-kernel learning. Also, rDA predictions appear to correlate more accurately with stages of disease. Separately, our formulation empirically shows how deep architectures can be applied in the large d, small n regime, the default situation in medical imaging. This result is of independent interest.
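A hedged sketch of the randomized-ensemble intuition: train several small networks, each on a different random noise corruption of the inputs (denoising-style) with a different random initialization, then use the ensemble mean as the prediction and the spread across members as an uncertainty estimate. This is a generic stand-in built on scikit-learn, not the rDA architecture itself, and the data is synthetic, chosen only to mimic the "large d, small n" regime:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n, d = 60, 200                        # "large d, small n" regime
X = rng.normal(size=(n, d))
w = rng.normal(size=d)
y = X @ w / np.sqrt(d) + 0.1 * rng.normal(size=n)

members = []
for seed in range(5):
    noisy = X + 0.2 * rng.normal(size=X.shape)  # fresh corruption per member
    m = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000,
                     random_state=seed).fit(noisy, y)
    members.append(m)

preds = np.stack([m.predict(X) for m in members])
mean_pred = preds.mean(axis=0)        # ensemble prediction of decline
uncertainty = preds.std(axis=0)       # disagreement across members
```

The per-subject `uncertainty` is what makes such a predictor useful for trial design: sample-size estimates need the variance of the predicted decline, not just a point prediction.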


Computer Vision and Pattern Recognition | 2017

Decoding the Deep: Exploring Class Hierarchies of Deep Representations Using Multiresolution Matrix Factorization

Vamsi K. Ithapu

The necessity of depth in efficient neural network learning has led to a family of designs referred to as very deep networks (e.g., GoogLeNet has 22 layers). As the depth increases even further, the need for appropriate tools to explore the space of hidden representations becomes paramount. For instance, beyond the gain in generalization, one may be interested in checking the change in class compositions as additional layers are added. Classical PCA or eigen-spectrum based global approaches do not model the complex inter-class relationships. In this work, we propose a novel decomposition referred to as multiresolution matrix factorization that models hierarchical and compositional structure in symmetric matrices. This new decomposition efficiently infers semantic relationships among deep representations of multiple classes, even when they are not explicitly trained to do so. We show that the proposed factorization is a valuable tool in understanding the landscape of hidden representations, in adapting existing architectures for new tasks and also for designing new architectures using interpretable, human-relatable, class-by-class relationships that we hope the network will learn.


Computer Vision and Pattern Recognition | 2017

The Incremental Multiresolution Matrix Factorization Algorithm

Vamsi K. Ithapu; Risi Kondor; Sterling C. Johnson; Vikas Singh

Multiresolution analysis and matrix factorization are foundational tools in computer vision. In this work, we study the interface between these two distinct topics and obtain techniques to uncover hierarchical block structure in symmetric matrices – an important aspect in the success of many vision problems. Our new algorithm, the incremental multiresolution matrix factorization, uncovers such structure one feature at a time, and hence scales well to large matrices. We describe how this multiscale analysis goes much farther than what a direct global factorization of the data can identify. We evaluate the efficacy of the resulting factorizations for relative leveraging within regression tasks using medical imaging data. We also use the factorization on representations learned by popular deep networks, providing evidence of their ability to infer semantic relationships even when they are not explicitly trained to do so. We show that this algorithm can be used as an exploratory tool to improve the network architecture, and within numerous other settings in vision.


Allerton Conference on Communication, Control, and Computing | 2016

On the interplay of network structure and gradient convergence in deep learning

Vamsi K. Ithapu; Sathya N. Ravi; Vikas Singh

The regularization and output consistency behavior of dropout and layer-wise pretraining for learning deep networks have been fairly well studied. However, our understanding of how the asymptotic convergence of backpropagation in deep architectures is related to the structural properties of the network and other design choices (like denoising and dropout rate) is less clear at this time. An interesting question one may ask is whether the network architecture and input data statistics may guide the choices of learning parameters and vice versa. In this work, we explore the association between such structural, distributional and learnability aspects vis-à-vis their interaction with parameter convergence rates. We present a framework to address these questions based on convergence of backpropagation for general nonconvex objectives using first-order information. This analysis suggests an interesting relationship between feature denoising and dropout. Building upon these results, we obtain a setup that provides systematic guidance regarding the choice of learning parameters and network sizes that achieve a certain level of convergence (in the optimization sense) often mediated by statistical attributes of the inputs. Our results are supported by a set of experimental evaluations as well as independent empirical observations reported by other groups.


NeuroImage | 2017

Accelerating permutation testing in voxel-wise analysis through subspace tracking: A new plugin for SnPM

Felipe Gutierrez-Barragan; Vamsi K. Ithapu; Chris Hinrichs; Camille Maumet; Sterling C. Johnson; Thomas E. Nichols; Vikas Singh

Abstract: Permutation testing is a non‐parametric method for obtaining the max null distribution used to compute corrected p‐values that provide strong control of false positives. In neuroimaging, however, the computational burden of running such an algorithm can be significant. We find that by viewing the permutation testing procedure as the construction of a very large permutation testing matrix, one can exploit structural properties derived from the data and the test statistics to reduce the runtime under certain conditions. In particular, we see that this matrix is low‐rank plus a low‐variance residual. This makes it a good candidate for low‐rank matrix completion, where only a very small fraction of its entries have to be computed to obtain a good estimate. Based on this observation, we present RapidPT, an algorithm that efficiently recovers the max null distribution commonly obtained through regular permutation testing in voxel‐wise analysis. We present an extensive validation on a synthetic dataset and four varying-sized datasets against two baselines: Statistical NonParametric Mapping (SnPM13) and a standard permutation testing implementation (referred to as NaivePT). We find that RapidPT achieves its best runtime performance on medium-sized datasets, with speedups of 1.5×–38× (vs. SnPM13) and 20×–1000× (vs. NaivePT). For larger datasets, RapidPT outperforms NaivePT (6×–200×) on all datasets, and provides large speedups over SnPM13 when more than 10000 permutations are needed (2×–15×). The implementation is a standalone toolbox, is also integrated within SnPM13, and is able to leverage multi‐core architectures when available.
Highlights: A fast and robust permutation testing approach for multiple hypothesis testing is proposed. Our permutation testing approach is ~20× faster than current methods. The proposed model is in the developing (soon to be released) version of SnPM.
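A hedged sketch of the baseline procedure RapidPT accelerates: each permutation of the group labels recomputes a statistic at every voxel, and only the maximum across voxels is kept, building the max-null distribution that yields FWER-corrected p-values. Synthetic data and a simple mean-difference statistic; the low-rank matrix-completion speedup itself is not shown:

```python
import numpy as np

rng = np.random.default_rng(0)
n_per_group, n_voxels, n_perm = 15, 500, 1000
group_a = rng.normal(size=(n_per_group, n_voxels))
group_b = rng.normal(size=(n_per_group, n_voxels))
group_b[:, 0] += 2.5                  # one truly active voxel
data = np.vstack([group_a, group_b])
labels = np.repeat([0, 1], n_per_group)

def mean_diff(data, labels):
    """Per-voxel group mean difference (the test statistic)."""
    return data[labels == 1].mean(axis=0) - data[labels == 0].mean(axis=0)

observed = np.abs(mean_diff(data, labels))

# Each permutation contributes one row of the (n_perm x n_voxels) permutation
# testing matrix; the naive method computes every entry, keeping only the max.
max_null = np.empty(n_perm)
for i in range(n_perm):
    perm = rng.permutation(labels)
    max_null[i] = np.abs(mean_diff(data, perm)).max()

# FWER-corrected p-value per voxel against the max-null distribution
p_corr = (max_null[None, :] >= observed[:, None]).mean(axis=1)
```

RapidPT's observation is that the full permutation testing matrix built in the loop is approximately low-rank, so most of its entries can be recovered by matrix completion instead of being recomputed.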


Deep Learning for Medical Image Analysis | 2017

Randomized Deep Learning Methods for Clinical Trial Enrichment and Design in Alzheimer's Disease

Vamsi K. Ithapu; Vikas Singh; Sterling C. Johnson

Abstract: There is a growing body of research devoted to designing imaging-based biomarkers that identify Alzheimer's disease (AD) in its prodromal stage using statistical machine learning methods. Recently several authors investigated how AD clinical trials can be made more efficient using predictive measures from such computational methods. For instance, identifying mild cognitive impairment (MCI) subjects who are most likely to decline during a trial, and thus most likely to benefit from the treatment, will improve trial efficiency and power to detect treatment effects. To this end, using multi-modal, imaging-derived inclusion criteria may be especially beneficial. In this paper, we explain why predictive measures given by ROI-based summaries or SVM-type objectives may be less than ideal for use in the setting described above. We give a solution based on novel deep learning models referred to as randomized neural networks. We show that our proposed models trained on multiple imaging modalities correspond to a minimum variance unbiased estimator of the decline and are ideal for clinical trial enrichment and design. The resulting predictions appear to correlate more accurately with the disease stages, and extensive evaluations on the Alzheimer's Disease Neuroimaging Initiative (ADNI) indicate strong improvements in sample size estimates over existing strategies, including those based on multi-kernel learning. From the modeling perspective, we evaluate several architectural choices, including denoising autoencoders and dropout learning. Separately, our formulation empirically shows how deep architectures can be applied in the large d, small n regime, the default situation in medical imaging and bioinformatics. This result is of independent interest.

Collaboration


Dive into Vamsi K. Ithapu's collaboration.

Top Co-Authors

Vikas Singh (University of Wisconsin-Madison)
Sterling C. Johnson (University of Wisconsin-Madison)
Sathya N. Ravi (University of Wisconsin-Madison)
Ozioma C. Okonkwo (University of Wisconsin-Madison)
Chris Hinrichs (University of Wisconsin-Madison)
Cynthia M. Carlsson (University of Wisconsin-Madison)
Barbara B. Bendlin (University of Wisconsin-Madison)
Grace Wahba (University of Wisconsin-Madison)
Rebecca L. Koscik (University of Wisconsin-Madison)
Rick Chappell (University of Wisconsin-Madison)