
Publication


Featured research published by Raghuraman Gopalan.


International Conference on Computer Vision | 2011

Domain adaptation for object recognition: An unsupervised approach

Raghuraman Gopalan; Ruonan Li; Rama Chellappa

Adapting the classifier trained on a source domain to recognize instances from a new target domain is an important problem that is receiving recent attention. In this paper, we present one of the first studies on unsupervised domain adaptation in the context of object recognition, where we have labeled data only from the source domain (and therefore do not have correspondences between object categories across domains). Motivated by incremental learning, we create intermediate representations of data between the two domains by viewing the generative subspaces (of same dimension) created from these domains as points on the Grassmann manifold, and sampling points along the geodesic between them to obtain subspaces that provide a meaningful description of the underlying domain shift. We then obtain the projections of labeled source domain data onto these subspaces, from which a discriminative classifier is learnt to classify projected data from the target domain. We discuss extensions of our approach for semi-supervised adaptation, and for cases with multiple source and target domains, and report competitive results on standard datasets.
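The geodesic-sampling step described above can be put in concrete terms with a short NumPy sketch. This is a minimal illustration under stated assumptions: the PCA-style subspace construction, the function names, and the closed-form Grassmann geodesic are generic choices for the sketch, not the paper's exact implementation.

```python
import numpy as np

def orthonormal_basis(X, k):
    """Generative subspace of a domain: top-k left singular vectors of its data matrix."""
    U, _, _ = np.linalg.svd(X, full_matrices=False)
    return U[:, :k]

def geodesic_subspaces(Us, Ut, steps):
    """Sample subspaces along the Grassmann geodesic from span(Us) to span(Ut).

    Standard closed form: with B = (I - Us Us^T) Ut (Us^T Ut)^{-1} and the
    thin SVD B = W diag(s) V^T, set Theta = arctan(s); then
        U(t) = Us V cos(t Theta) + W sin(t Theta)
    spans the source subspace at t = 0 and the target subspace at t = 1.
    """
    M = Us.T @ Ut
    B = (Ut - Us @ M) @ np.linalg.inv(M)
    W, s, Vt = np.linalg.svd(B, full_matrices=False)
    Theta = np.arctan(s)
    samples = []
    for t in np.linspace(0.0, 1.0, steps):
        U_t = Us @ Vt.T @ np.diag(np.cos(t * Theta)) + W @ np.diag(np.sin(t * Theta))
        samples.append(U_t)
    return samples
```

Each sampled matrix has orthonormal columns, so the intermediate subspaces are valid points on the Grassmann manifold between the two domains.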


IEEE Signal Processing Magazine | 2015

Visual Domain Adaptation: A survey of recent advances

Vishal M. Patel; Raghuraman Gopalan; Ruonan Li; Rama Chellappa

In pattern recognition and computer vision, one is often faced with scenarios where the training data used to learn a model have different distribution from the data on which the model is applied. Regardless of the cause, any distributional change that occurs after learning a classifier can degrade its performance at test time. Domain adaptation tries to mitigate this degradation. In this article, we provide a survey of domain adaptation methods for visual recognition. We discuss the merits and drawbacks of existing domain adaptation approaches and identify promising avenues for research in this rapidly evolving field.


European Conference on Computer Vision | 2010

Articulation-invariant representation of non-planar shapes

Raghuraman Gopalan; Pavan K. Turaga; Rama Chellappa

Given a set of points corresponding to a 2D projection of a non-planar shape, we would like to obtain a representation invariant to articulations (under no self-occlusions). It is a challenging problem since we need to account for the changes in 2D shape due to 3D articulations, viewpoint variations, as well as the varying effects of imaging process on different regions of the shape due to its non-planarity. By modeling an articulating shape as a combination of approximate convex parts connected by non-convex junctions, we propose to preserve distances between a pair of points by (i) estimating the parts of the shape through approximate convex decomposition, by introducing a robust measure of convexity and (ii) performing part-wise affine normalization by assuming a weak perspective camera model, and then relating the points using the inner distance which is insensitive to planar articulations. We demonstrate the effectiveness of our representation on a dataset with non-planar articulations, and on standard shape retrieval datasets like MPEG-7.
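The inner distance used above is the length of the shortest path between two landmarks that stays inside the shape. A minimal sketch, assuming the inside-the-shape adjacency graph has already been computed (the graph construction and all names here are illustrative, not the paper's pipeline):

```python
import numpy as np

def inner_distances(points, edges):
    """All-pairs inner distance: shortest-path length through a graph whose
    edges connect landmark pairs whose straight segment lies inside the shape.
    `edges` (the inside-the-shape visibility graph) is assumed precomputed."""
    n = len(points)
    D = np.full((n, n), np.inf)
    np.fill_diagonal(D, 0.0)
    for i, j in edges:
        d = float(np.linalg.norm(np.asarray(points[i]) - np.asarray(points[j])))
        D[i, j] = D[j, i] = d
    for k in range(n):  # Floyd-Warshall relaxation over intermediate vertices
        D = np.minimum(D, D[:, [k]] + D[[k], :])
    return D
```

For points that see each other directly, the inner distance reduces to the Euclidean distance; around a bend, it follows the interior path, which is what makes it insensitive to planar articulations.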


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2014

Unsupervised Adaptation Across Domain Shifts by Generating Intermediate Data Representations

Raghuraman Gopalan; Ruonan Li; Rama Chellappa

With unconstrained data acquisition scenarios widely prevalent, the ability to handle changes in data distribution across training and testing data sets becomes important. One way to approach this problem is through domain adaptation, and in this paper we primarily focus on the unsupervised scenario where the labeled source domain training data is accompanied by unlabeled target domain test data. We present a two-stage data-driven approach by generating intermediate data representations that could provide relevant information on the domain shift. Starting with a linear representation of domains in the form of generative subspaces of the same dimension for the source and target domains, we first utilize the underlying geometry of the space of these subspaces, the Grassmann manifold, to obtain the shortest geodesic path between the two domains. We then sample points along the geodesic to obtain intermediate cross-domain data representations, using which a discriminative classifier is learnt to estimate the labels of the target data. We subsequently incorporate non-linear representation of domains by considering a Reproducing Kernel Hilbert Space representation, and a low-dimensional manifold representation using Laplacian Eigenmaps, and also examine other domain adaptation settings such as (i) semi-supervised adaptation where the target domain is partially labeled, and (ii) multi-domain adaptation where there could be more than one domain in source and/or target data sets. Finally, we supplement our adaptation technique with (i) fine-grained reference domains that are created by blending samples from source and target data sets to provide some evidence on the actual domain shift, and (ii) a multi-class boosting analysis to obtain robustness to the choice of algorithm parameters. We evaluate our approach for object recognition problems and report competitive results on two widely used Office and Bing adaptation data sets.
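The second stage above, pooling the intermediate representations and classifying target data, might look like the following toy sketch. The intermediate subspaces are assumed given, and a nearest-class-mean rule stands in for the discriminative classifier; names and choices are illustrative:

```python
import numpy as np

def stacked_projection(X, subspaces):
    """Cross-domain feature: concatenate the projections of each sample
    (columns of X) onto every intermediate subspace."""
    return np.vstack([U.T @ X for U in subspaces])

def nearest_class_mean(train_feats, train_labels, test_feats):
    """Stand-in classifier: assign each test sample to the class whose
    mean stacked-projection feature is closest in Euclidean distance."""
    classes = np.unique(train_labels)
    means = np.stack([train_feats[:, train_labels == c].mean(axis=1) for c in classes])
    d = ((test_feats.T[:, None, :] - means[None, :, :]) ** 2).sum(axis=2)
    return classes[np.argmin(d, axis=1)]
```

The point of the stacked feature is that labeled source data and unlabeled target data are expressed in the same sequence of intermediate coordinate systems, so a single classifier can be trained across the shift.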


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2012

A Blur-Robust Descriptor with Applications to Face Recognition

Raghuraman Gopalan; Sima Taheri; Pavan K. Turaga; Rama Chellappa

Understanding the effect of blur is an important problem in unconstrained visual analysis. We address this problem in the context of image-based recognition by a fusion of image-formation models and differential geometric tools. First, we discuss the space spanned by blurred versions of an image and then, under certain assumptions, provide a differential geometric analysis of that space. More specifically, we create a subspace resulting from convolution of an image with a complete set of orthonormal basis functions of a prespecified maximum size (that can represent an arbitrary blur kernel within that size), and show that the corresponding subspaces created from a clean image and its blurred versions are equal under the ideal case of zero noise and some assumptions on the properties of blur kernels. We then study the practical utility of this subspace representation for the problem of direct recognition of blurred faces by viewing the subspaces as points on the Grassmann manifold and present methods to perform recognition for cases where the blur is both homogenous and spatially varying. We empirically analyze the effect of noise, as well as the presence of other facial variations between the gallery and probe images, and provide comparisons with existing approaches on standard data sets.


Computer Vision and Image Understanding | 2010

Comparing and combining lighting insensitive approaches for face recognition

Raghuraman Gopalan; David W. Jacobs

Face recognition under changing lighting conditions is a challenging problem in computer vision. In this paper, we analyze the relative strengths of different lighting insensitive representations, and propose efficient classifier combination schemes that result in better recognition rates. We consider two experimental settings, wherein we study the performance of different algorithms with (and without) prior information on the different illumination conditions present in the scene. In both settings, we focus on the problem of having just one exemplar per person in the gallery. Based on these observations, we design algorithms for integrating the individual classifiers to capture the significant aspects of each representation. We then illustrate the performance improvement obtained through our classifier combination algorithms on the illumination subset of the PIE dataset, and on the extended Yale-B dataset. Throughout, we consider galleries with both homogenous and heterogeneous lighting conditions.
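Score-level fusion is one simple instance of the classifier-combination idea above; the z-score normalization and sum rule below are a generic illustrative choice, not necessarily the scheme the paper proposes:

```python
import numpy as np

def combine_scores(score_mats, weights=None):
    """Sum-rule fusion: z-score normalize each classifier's probe-vs-gallery
    similarity matrix, then take a weighted sum."""
    if weights is None:
        weights = np.ones(len(score_mats))
    fused = np.zeros_like(np.asarray(score_mats[0], dtype=float))
    for w, S in zip(weights, score_mats):
        S = np.asarray(S, dtype=float)
        fused += w * (S - S.mean()) / (S.std() + 1e-12)
    return fused

def identify(score_mats, weights=None):
    """Predicted gallery index per probe (rows: probes, columns: gallery)."""
    return np.argmax(combine_scores(score_mats, weights), axis=1)
```

The normalization step matters because different lighting-insensitive representations produce scores on different scales; without it one classifier can silently dominate the sum.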


International Journal of Computer Vision | 2014

Model-Driven Domain Adaptation on Product Manifolds for Unconstrained Face Recognition

Huy Tho Ho; Raghuraman Gopalan

Many classification algorithms see a reduction in performance when tested on data with properties different from that used for training. This problem arises very naturally in face recognition where images corresponding to the source domain (gallery, training data) and the target domain (probe, testing data) are acquired under varying degree of factors such as illumination, expression, blur and alignment. In this paper, we account for the domain shift by deriving a latent subspace or domain, which jointly characterizes the multifactor variations using appropriate image formation models for each factor. We formulate the latent domain as a product of Grassmann manifolds based on the underlying geometry of the tensor space, and perform recognition across domain shift using statistics consistent with the tensor geometry. More specifically, given a face image from the source or target domain, we first synthesize multiple images of that subject under different illuminations, blur conditions and 2D perturbations to form a tensor representation of the face. The orthogonal matrices obtained from the decomposition of this tensor, where each matrix corresponds to a factor variation, are used to characterize the subject as a point on a product of Grassmann manifolds. For cases with only one image per subject in the source domain, the identity of target domain faces is estimated using the geodesic distance on product manifolds. When multiple images per subject are available, an extension of kernel discriminant analysis is developed using a novel kernel based on the projection metric on product spaces. Furthermore, a probabilistic approach to the problem of classifying image sets on product manifolds is introduced. We demonstrate the effectiveness of our approach through comprehensive evaluations on constrained and unconstrained face datasets, including still images and videos.
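The geodesic distance on a product of Grassmann manifolds, used above for the one-image-per-subject case, combines per-factor distances. A minimal sketch: the root-sum-square combination of per-factor principal-angle distances is the standard product-metric construction, shown here illustratively with one factor per variation (e.g., illumination, blur):

```python
import numpy as np

def principal_angles(U, V):
    """Principal angles between two equal-dimension subspaces."""
    c = np.clip(np.linalg.svd(U.T @ V, compute_uv=False), 0.0, 1.0)
    return np.arccos(c)

def product_manifold_distance(factors_a, factors_b):
    """Geodesic distance on a product of Grassmann manifolds: the factor
    distances (root-sum-square of principal angles on each factor)
    combine as an L2 norm across factors."""
    per_factor = [np.linalg.norm(principal_angles(U, V))
                  for U, V in zip(factors_a, factors_b)]
    return float(np.linalg.norm(per_factor))
```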


Foundations and Trends in Computer Graphics and Vision | 2015

Domain Adaptation for Visual Recognition

Raghuraman Gopalan; Ruonan Li; Vishal M. Patel; Rama Chellappa

Domain adaptation is an active, emerging research area that attempts to address the changes in data distribution across training and testing datasets. With the availability of a multitude of image acquisition sensors, variations due to illumination, and viewpoint among others, computer vision applications present a very natural test bed for evaluating domain adaptation methods. In this monograph, we provide a comprehensive overview of domain adaptation solutions for visual recognition problems. By starting with the problem description and illustrations, we discuss three adaptation scenarios namely, (i) unsupervised adaptation where the source domain training data is partially labeled and the target domain test data is unlabeled, (ii) semi-supervised adaptation where the target domain also has partial labels, and (iii) multi-domain heterogeneous adaptation which studies the previous two settings with the source and/or target having more than one domain, and accounts for cases where the features used to represent the data in each domain are different. For all these topics we discuss existing adaptation techniques in the literature, which are motivated by the principles of max-margin discriminative learning, manifold learning, sparse coding, as well as low-rank representations. These techniques have shown improved performance on a variety of applications such as object recognition, face recognition, activity analysis, concept classification, and person detection. We then conclude by analyzing the challenges posed by the realm of big visual data, in terms of the generalization ability of adaptation algorithms to unconstrained data acquisition as well as issues related to their computational tractability, and draw parallels with the efforts from the vision community on image transformation models and invariant descriptors, so as to facilitate improved understanding of vision problems under uncertainty.


International Conference on Biometrics | 2009

Robust Human Detection under Occlusion by Integrating Face and Person Detectors

William Robson Schwartz; Raghuraman Gopalan; Rama Chellappa; Larry S. Davis

Human detection under occlusion is a challenging problem in computer vision. We address this problem through a framework which integrates face detection and person detection. We first investigate how the response of a face detector is correlated with the response of a person detector. From these observations, we formulate hypotheses that capture the intuitive feedback between the responses of face and person detectors and use it to verify whether the individual detectors' outputs are true or false. We illustrate the performance of our integration framework on challenging images that have a considerable amount of occlusion, and demonstrate its advantages over individual face and person detectors.
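As a toy illustration of the feedback idea between the two detectors, a hypothesis-verification rule might look like the following; the score semantics and thresholds are invented for the sketch and are not the paper's learned hypotheses:

```python
def verify_person(person_score, face_score,
                  strong=0.8, weak=0.4, face_conf=0.6):
    """Accept a person hypothesis if its own detector response is strong,
    or if a weaker response is corroborated by a confident face detection
    inside the same window. All thresholds are illustrative."""
    if person_score >= strong:
        return True
    return person_score >= weak and face_score >= face_conf
```

The second branch is where occlusion handling comes from: a partially occluded body yields a weak person response, but a visible face can still rescue the hypothesis, while a confident face alone is not enough to assert a full person.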


Computer Vision and Pattern Recognition | 2015

Hierarchical sparse coding with geometric prior for visual geo-location

Raghuraman Gopalan

We address the problem of estimating location information of an image using principles from automated representation learning. We pursue a hierarchical sparse coding approach that learns features useful in discriminating images across locations, by initializing it with a geometric prior corresponding to transformations between image appearance space and their corresponding location grouping space using the notion of parallel transport on manifolds. We then extend this approach to account for the availability of heterogeneous data modalities such as geo-tags and videos pertaining to different locations, and also study a relatively under-addressed problem of transferring knowledge available from certain locations to infer the grouping of data from novel locations. We evaluate our approach on several standard datasets such as im2gps, San Francisco and MediaEval2010, and obtain state-of-the-art results.
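One building block behind the approach above is the sparse-coding subproblem itself. The ISTA solver below is a generic illustrative sketch; the paper's hierarchical structure and parallel-transport geometric prior would enter through how the dictionary is initialized, which is not modeled here:

```python
import numpy as np

def soft_threshold(z, t):
    """Proximal operator of the L1 norm."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def sparse_code(D, x, lam=0.1, step=None, iters=200):
    """ISTA for the lasso sparse-coding subproblem
        min_z 0.5 * ||x - D z||^2 + lam * ||z||_1
    alternating a gradient step on the data term with soft-thresholding."""
    if step is None:
        step = 1.0 / np.linalg.norm(D, 2) ** 2  # 1/L for the gradient's Lipschitz constant
    z = np.zeros(D.shape[1])
    for _ in range(iters):
        z = soft_threshold(z + step * D.T @ (x - D @ z), lam * step)
    return z
```

Stacking such coding layers, with each layer's codes pooled and re-encoded, gives the hierarchical representation the abstract refers to.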

Collaboration

Top co-authors of Raghuraman Gopalan include:

William Robson Schwartz (Universidade Federal de Minas Gerais)
Archana Sapkota (University of Colorado Colorado Springs)