
Publication


Featured research published by Rudrasis Chakraborty.


computer vision and pattern recognition | 2016

A Nonlinear Regression Technique for Manifold Valued Data with Applications to Medical Image Analysis

Monami Banerjee; Rudrasis Chakraborty; Edward Ofori; Michael S. Okun; David E. Vaillancourt; Baba C. Vemuri

Regression is an essential tool in the statistical analysis of data, with many applications in Computer Vision, Machine Learning, Medical Imaging, and various disciplines of Science and Engineering. Linear and nonlinear regression in a vector-space setting has been well studied in the literature; generalizations to manifold-valued data, however, have only recently gained popularity. With few exceptions, existing methods of regression for manifold-valued data are limited to geodesic regression, the generalization of linear regression in vector spaces. In this paper, we present a novel nonlinear kernel-based regression method applicable to manifold-valued data. Our method applies when the independent and dependent variables in the regression model are both manifold-valued, or when one is manifold-valued and the other is vector- or scalar-valued. Further, unlike most methods, ours does not require any imposed ordering on the manifold-valued data. The performance of our model is tested on a large number of real data sets acquired from patients with Alzheimer's disease and movement disorders (Parkinson's disease and essential tremor). We present an extensive set of results along with statistical validation and comparisons.
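As a rough illustration of the flavor of such a method, the sketch below runs Nadaraya-Watson-style kernel regression with a manifold-valued dependent variable, using the unit sphere S² as a stand-in manifold: the prediction at a query point is the kernel-weighted Fréchet mean of the training outputs, computed by gradient descent with the sphere's exponential and log maps. The function names and the Gaussian bandwidth are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def sphere_exp(p, v):
    # exponential map on the unit sphere at p (v tangent at p)
    nv = np.linalg.norm(v)
    if nv < 1e-12:
        return p
    return np.cos(nv) * p + np.sin(nv) * (v / nv)

def sphere_log(p, q):
    # log map at p of point q: tangent vector pointing toward q
    d = q - np.dot(p, q) * p          # project q onto tangent space at p
    nd = np.linalg.norm(d)
    if nd < 1e-12:
        return np.zeros_like(p)
    theta = np.arccos(np.clip(np.dot(p, q), -1.0, 1.0))
    return theta * d / nd

def weighted_frechet_mean(points, w, iters=50):
    # gradient descent on the weighted sum of squared geodesic distances
    w = w / w.sum()
    m = points[np.argmax(w)].copy()
    for _ in range(iters):
        v = sum(wi * sphere_log(m, p) for wi, p in zip(w, points))
        m = sphere_exp(m, v)
    return m

def kernel_regress(x_train, y_train, x, bandwidth=0.3):
    # kernel-weighted Fréchet mean as the manifold-valued prediction
    w = np.exp(-((x_train - x) ** 2) / (2 * bandwidth ** 2))
    return weighted_frechet_mean(y_train, w)
```

For scalar inputs and outputs on a great circle, the estimate lands on the circle near the true curve, since the weighted Fréchet mean of great-circle points stays on that circle.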


medical image computing and computer assisted intervention | 2015

Nonlinear Regression on Riemannian Manifolds and Its Applications to Neuro-Image Analysis

Monami Banerjee; Rudrasis Chakraborty; Edward Ofori; David E. Vaillancourt; Baba C. Vemuri

Regression in its most common form, where the independent and dependent variables are in ℝⁿ, is a ubiquitous tool in Science and Engineering. Recent advances in Medical Imaging have led to the widespread availability of manifold-valued data, and hence to problems where the independent variables are manifold-valued and the dependent variables real-valued, or vice versa. The most common method of regression on a manifold is geodesic regression, the counterpart of linear regression in Euclidean space. Often, however, the relation between the variables is highly complex, and geodesic regression can prove inaccurate, making a nonlinear regression model necessary. In this work we present a novel kernel-based nonlinear regression method for estimating mappings of the form M → ℝⁿ or ℝⁿ → M, where M is a Riemannian manifold. A key advantage of this approach is that the manifold-valued data need not inherit an ordering from the data in ℝⁿ. We present several synthetic and real data experiments, along with comparisons to the state-of-the-art geodesic regression method in the literature, validating the effectiveness of the proposed algorithm.


international conference on computer vision | 2015

Recursive Fréchet Mean Computation on the Grassmannian and Its Applications to Computer Vision

Rudrasis Chakraborty; Baba C. Vemuri

In the past decade, Grassmann manifolds (the Grassmannian) have been commonly used in mathematical formulations of many Computer Vision tasks. Averaging points on a Grassmann manifold is a very common operation in many applications, including but not limited to tracking, action recognition, video-based face recognition, and face recognition. Computing the intrinsic/Fréchet mean (FM) of a set of points on the Grassmannian can be cast as finding the global optimum (if it exists) of the sum of squared geodesic distances cost function. A common approach to this problem uses gradient descent. An alternative is to develop a recursive/inductive definition that does not involve optimizing the aforementioned cost function. In this paper, we propose one such computationally efficient algorithm, called the Grassmann inductive Fréchet mean estimator (GiFME). In developing the recursive solution, GiFME exploits the fact that there is a closed-form solution for the FM of two points on the Grassmannian. We prove that, in the limit as the number of samples tends to infinity, GiFME converges to the FM (the weak consistency result on the Grassmann manifold). Further, for the finite-sample case, in the limit as the number of sample paths (trials) goes to infinity, we show that GiFME converges to the finite-sample FM. Moreover, we present a bound on the geodesic distance between the GiFME estimate and the true FM. We present several experiments on synthetic and real data sets to demonstrate the performance of GiFME in comparison to the gradient descent based (batch mode) technique. Our goal in these applications is to demonstrate the computational advantage while achieving accuracy comparable to the state of the art.
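The inductive scheme is easy to sketch on a manifold with closed-form geodesics; the snippet below applies the same idea on the unit sphere (a stand-in for the Grassmannian, whose geodesics require SVD machinery): the k-th sample pulls the running estimate 1/k of the way along the connecting geodesic. This is an illustrative analogue, not the paper's GiFME code.

```python
import numpy as np

def geodesic_step(p, q, t):
    # move fraction t along the sphere geodesic from p toward q (slerp)
    theta = np.arccos(np.clip(np.dot(p, q), -1.0, 1.0))
    if theta < 1e-12:
        return p
    return (np.sin((1 - t) * theta) * p + np.sin(t * theta) * q) / np.sin(theta)

def inductive_frechet_mean(points):
    # inductive estimator: no cost-function optimization, one
    # closed-form geodesic step per incoming sample
    m = points[0].copy()
    for k, x in enumerate(points[1:], start=2):
        m = geodesic_step(m, x, 1.0 / k)
    return m
```

With the 1/k step sizes, each update is the closed-form FM of the running estimate (weight (k-1)/k) and the new sample (weight 1/k), mirroring the two-point FM trick the abstract describes.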


international symposium on biomedical imaging | 2017

Statistics on the space of trajectories for longitudinal data analysis

Rudrasis Chakraborty; Monami Banerjee; Baba C. Vemuri

Statistical analysis of longitudinal data is a significant problem in biomedical imaging applications. In the recent past, several researchers have developed mathematically rigorous methods, based on differential geometry and statistics, to tackle the problem of statistical analysis of longitudinal neuroimaging data. In this paper, we present a novel formulation of the longitudinal data analysis problem that identifies the structural changes over time (describing the trajectory of change) with points on a product Riemannian manifold endowed with a Riemannian metric and a probability measure. We present theoretical results showing that the maximum likelihood estimates of the location parameters of Gaussian and Laplace distributions on the product manifold yield the Fréchet mean and median, respectively. We then present efficient recursive estimators for these intrinsic parameters and use them, in conjunction with a nearest neighbor (NN) classifier, to classify MR brain scans (acquired from the publicly available OASIS database) of patients with and without dementia.
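The recursive-estimator-plus-nearest-neighbor pipeline can be sketched in the Euclidean setting, where the Fréchet mean reduces to the arithmetic mean; the median update below is a simple stochastic-approximation step with a decaying gain, and the class labels and helper names are illustrative assumptions, not the paper's code.

```python
import numpy as np

def recursive_mean(samples):
    # running Fréchet mean; in Euclidean space this is exactly
    # the incremental arithmetic mean
    m = np.asarray(samples[0], dtype=float).copy()
    for k, x in enumerate(samples[1:], start=2):
        m += (x - m) / k
    return m

def recursive_median(samples, gain=1.0):
    # stochastic-approximation sketch of the geometric median:
    # step toward each new sample by at most gain/k
    m = np.asarray(samples[0], dtype=float).copy()
    for k, x in enumerate(samples[1:], start=2):
        d = x - m
        n = np.linalg.norm(d)
        if n > 1e-12:
            m += min(gain / k, n) * d / n
    return m

def nearest_center_label(x, centers):
    # NN-style classification: pick the class whose estimated
    # center is closest to the query
    return min(centers, key=lambda c: np.linalg.norm(x - centers[c]))
```

The mean update is exact at every step; the median update trades exactness for the same one-pass, constant-memory character as the paper's recursive estimators.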


computer vision and pattern recognition | 2016

An Efficient Exact-PGA Algorithm for Constant Curvature Manifolds

Rudrasis Chakraborty; Dohyung Seo; Baba C. Vemuri

Manifold-valued datasets are widely encountered in many computer vision tasks. A nonlinear analog of the PCA algorithm suited to data lying on Riemannian manifolds, called Principal Geodesic Analysis (PGA), was reported in the literature a decade ago. Since the objective function in the PGA algorithm is highly nonlinear and in general hard to solve efficiently, researchers have proposed a linear approximation. Though this linear approximation is easy to compute, it lacks accuracy, especially when the data exhibit large variance. Recently, an alternative called exact PGA was proposed, which solves the optimization without any linearization. Though it yields better accuracy than the original (linearized) PGA on general Riemannian manifolds, the optimization is not computationally efficient for data that exhibit large variance. In this paper, we propose an efficient exact PGA algorithm for constant curvature Riemannian manifolds (CCM-EPGA). The CCM-EPGA algorithm differs significantly from existing PGA algorithms in two respects: (i) the distance between a given manifold-valued data point and the principal submanifold is computed analytically, so no optimization is required as in existing methods; and (ii) unlike existing PGA algorithms, the descent into codimension-1 submanifolds requires no optimization, being accomplished through the Riemannian inverse exponential map and parallel transport operations. We present theoretical and experimental results for constant curvature Riemannian manifolds depicting favorable performance of the CCM-EPGA algorithm compared to existing PGA algorithms. We also present data reconstruction from the principal components, which has not previously been reported in the literature in this setting.
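For contrast with the exact approach, the linearized PGA baseline that the paper improves upon is easy to sketch: log-map the data to the tangent space at the Fréchet mean and run ordinary Euclidean PCA there. The unit sphere serves below as a concrete constant-curvature manifold; the function names are illustrative, and this is the baseline, not CCM-EPGA itself.

```python
import numpy as np

def sphere_exp(p, v):
    nv = np.linalg.norm(v)
    if nv < 1e-12:
        return p
    return np.cos(nv) * p + np.sin(nv) * (v / nv)

def sphere_log(p, q):
    d = q - np.dot(p, q) * p
    nd = np.linalg.norm(d)
    if nd < 1e-12:
        return np.zeros_like(p)
    return np.arccos(np.clip(np.dot(p, q), -1.0, 1.0)) * d / nd

def frechet_mean(points, iters=50):
    # fixed-point iteration: step to the mean of the log vectors
    m = points[0].copy()
    for _ in range(iters):
        m = sphere_exp(m, np.mean([sphere_log(m, p) for p in points], axis=0))
    return m

def linearized_pga(points, n_components=1):
    # linearized PGA: PCA in the tangent space at the Fréchet mean
    # (the log vectors have ~zero mean there, by FM stationarity)
    m = frechet_mean(points)
    V = np.stack([sphere_log(m, p) for p in points])
    _, svals, Vt = np.linalg.svd(V, full_matrices=False)
    return m, Vt[:n_components], svals[:n_components]
```

For data spread along a single geodesic, the first tangent-space principal direction recovers that geodesic's direction; the linearization error the paper targets only becomes visible at large variance.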


computer vision and pattern recognition | 2017

Intrinsic Grassmann Averages for Online Linear and Robust Subspace Learning

Rudrasis Chakraborty; Søren Hauberg; Baba C. Vemuri

Principal Component Analysis (PCA) is a fundamental method for estimating a linear subspace approximation to high-dimensional data. Many algorithms exist in the literature for a statistically robust version of PCA, called RPCA. In this paper, we present a geometric framework for computing the principal linear subspaces in both settings that amounts to computing an intrinsic average on the space of all subspaces (the Grassmann manifold), where points on this manifold are the subspaces spanned by K-tuples of observations. We show that the intrinsic Grassmann average of these subspaces coincides with the principal components of the observations when they are drawn from a Gaussian distribution; similar results are shown to hold for RPCA. Further, we propose an efficient online subspace-averaging algorithm that has linear complexity in the number of samples and a linear convergence rate. When the data contain outliers, our online robust subspace-averaging algorithm shows significant gains in accuracy and computation time over recently published RPCA methods with publicly accessible code. We demonstrate competitive performance of our online subspace-averaging algorithm on one synthetic and two real data sets, and present experimental results depicting its stability. Furthermore, on two real outlier-corrupted datasets, we present comparison experiments showing lower reconstruction error with our online RPCA algorithm. In terms of reconstruction error and time required, both of our algorithms outperform the competition.
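The averaging idea for 1-D subspaces can be sketched with a simple fixed-point iteration in the spirit of Grassmann averages: sign-aligned, norm-weighted averaging of the unit vectors spanning each observation's subspace, which for Gaussian data recovers the leading principal direction, consistent with the abstract's claim. This is a batch illustrative sketch, not the paper's online algorithm.

```python
import numpy as np

def grassmann_average(X, iters=100):
    # average of the 1-D subspaces spanned by the rows of X:
    # repeatedly sign-align each unit spanning vector with the
    # current estimate, then take a norm-weighted Euclidean
    # average and renormalize
    w = np.linalg.norm(X, axis=1)       # subspace weights = vector lengths
    U = X / w[:, None]                  # unit spanning vectors
    q = U[np.argmax(w)].copy()          # init from the longest observation
    for _ in range(iters):
        s = np.sign(U @ q)              # align each subspace's sign with q
        s[s == 0] = 1.0
        q_new = (s * w) @ U
        q_new /= np.linalg.norm(q_new)
        done = np.abs(q_new @ q) > 1 - 1e-12
        q = q_new
        if done:
            break
    return q
```

Because each point enters only through the line it spans, a sign flip of any observation leaves the estimate unchanged, which is the source of the robustness the framework builds on.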


arXiv: Learning | 2017

Intrinsic Grassmann Averages for Online Linear, Robust and Nonlinear Subspace Learning

Rudrasis Chakraborty; Søren Hauberg; Baba C. Vemuri


neural information processing systems | 2018

Statistical Recurrent Models on Manifold valued Data

Rudrasis Chakraborty; Chun-Hao Yang; Xingjian Zhen; Monami Banerjee; Derek B. Archer; David E. Vaillancourt; Vikas Singh; Baba C. Vemuri


computer vision and pattern recognition | 2018

A Mixture Model for Aggregation of Multiple Pre-Trained Weak Classifiers

Rudrasis Chakraborty; Chun-Hao Yang; Baba C. Vemuri


arXiv: Learning | 2018

A Statistical Recurrent Model on the Manifold of Symmetric Positive Definite Matrices.

Rudrasis Chakraborty; Chun-Hao Yang; Xingjian Zhen; Monami Banerjee; Derek B. Archer; David E. Vaillancourt; Vikas Singh; Baba C. Vemuri

Collaboration


Dive into Rudrasis Chakraborty's collaborations.

Top Co-Authors

Vikas Singh
University of Wisconsin-Madison

Xingjian Zhen
University of Wisconsin-Madison

Søren Hauberg
Technical University of Denmark