Suranjana Samanta
Indian Institute of Technology Madras
Publications
Featured research published by Suranjana Samanta.
IETE Technical Review | 2010
Utthara Gosa Mangai; Suranjana Samanta; Sukhendu Das; Pinaki Roy Chowdhury
Abstract For any pattern classification task, an increase in data size, number of classes, dimension of the feature space, and inter-class separability affects the performance of any classifier. A single classifier is generally unable to handle the wide variability and scalability of the data in any problem domain. Most modern techniques of pattern classification use a combination of classifiers and fuse their decisions, often using only a selected set of features appropriate for the task. The problem of selecting a useful set of features and discarding those which do not provide class separability is addressed by feature selection and fusion. This paper presents a review of the different techniques and algorithms used in decision fusion and feature fusion strategies for the task of pattern classification. Prominent techniques for decision fusion and for feature selection and fusion are surveyed separately, and the fusion techniques are categorized based on their applicability and the methodology adopted for classification. We also propose a novel framework combining decision fusion and feature fusion to increase classification performance. Experiments on three benchmark datasets demonstrate the robustness of combining feature fusion and decision fusion techniques.
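A minimal sketch of the general idea of pairing a feature-selection stage with decision fusion by voting; the dataset, base classifiers and the k = 8 feature budget are illustrative assumptions, not the framework proposed in the paper.

```python
from sklearn.datasets import load_wine
from sklearn.ensemble import VotingClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_wine(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Feature selection/fusion stage: keep the k most class-separable features.
select = SelectKBest(f_classif, k=8)

# Decision fusion stage: soft (probability-averaging) vote over
# heterogeneous base classifiers.
fused = VotingClassifier(
    estimators=[("svm", SVC(probability=True)),
                ("knn", KNeighborsClassifier()),
                ("lr", LogisticRegression(max_iter=1000))],
    voting="soft",
)

model = make_pipeline(StandardScaler(), select, fused)
model.fit(X_tr, y_tr)
print("fused accuracy:", model.score(X_te, y_te))
```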
pacific-rim symposium on image and video technology | 2010
Utthara Gosa Mangai; Suranjana Samanta; Sukhendu Das; Pinaki Roy Chowdhury; Koshy Varghese; Manisha Kalra
There is an increasing need for automatically segmenting the regions of different landforms in a multispectral satellite image. The problem of landform classification using data only from a 3-band optical sensor (IRS series), in the absence of DEM (Digital Elevation Model) data, is complex due to overlapping and confusing spectral reflectance from several different landform classes. We propose a hierarchical method of landform classification for identifying a wide variety of landforms occurring over parts of the Indian subcontinent. At the first stage, the image is classified into one of three broad categories: Desertic, Coastal or Fluvial, using decision fusion of three SVMs (Support Vector Machines). In the second stage, the image is segmented into different regions of landforms belonging specifically to the class (category) identified at stage 1. To show the improvement in accuracy of our classification method, the results are compared with two other methods of classification.
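A hedged sketch of a two-stage (hierarchical) classifier of the kind described above, operating on per-pixel spectral features; the synthetic data, the coarse/fine label structure and the score-based fusion of the three SVMs are assumptions for illustration only.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n = 300
X = rng.normal(size=(n, 3))                      # placeholder 3-band spectral features
coarse = rng.integers(0, 3, size=n)              # desertic / coastal / fluvial (assumed)
fine = coarse * 2 + rng.integers(0, 2, size=n)   # a landform label within each category

# Stage 1: one binary SVM per broad category, fused by the highest decision score.
stage1 = [SVC().fit(X, (coarse == c).astype(int)) for c in range(3)]

# Stage 2: a dedicated SVM per category, trained only on that category's samples.
stage2 = {c: SVC().fit(X[coarse == c], fine[coarse == c]) for c in range(3)}

def classify(x):
    x = x.reshape(1, -1)
    scores = [clf.decision_function(x)[0] for clf in stage1]
    c = int(np.argmax(scores))          # decision fusion of the three stage-1 SVMs
    return c, stage2[c].predict(x)[0]   # landform within the chosen category

print(classify(X[0]))
```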
indian conference on computer vision, graphics and image processing | 2014
Samik Banerjee; Suranjana Samanta; Sukhendu Das
Face Recognition (FR) in surveillance scenarios has attracted the attention of researchers over the last few years. The bottleneck, a large gap in both resolution and contrast between the training set (high-resolution gallery) and the testing set (degraded, low-quality probes), must be overcome using efficient statistical learning methods. In this paper, we propose a Bag-of-Words (BOW) based approach for face recognition combined with Domain Adaptation (DA) to overcome this challenging task of FR in degraded conditions. The dictionary of the BOW is formed using dense-SIFT features with an adaptive, spatially varying sampling density: keypoints are sampled densely in the discriminative parts of the face and sparsely in pre-decided, less interesting zones. FR using the BOW-based face representation is made more efficient using an unsupervised method of DA, which considers the training set as the source domain and the test set as the target domain. The transformation from source to target is estimated using eigen-analysis of the BOW-based features, which is the novelty and contribution of our proposed work on FR for surveillance applications. Results on two real-world surveillance face datasets show the efficiency of the proposed method using ROC and CMC measures.
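A rough sketch of the spatially varying dense sampling and BOW histogram ingredients, assuming OpenCV SIFT descriptors computed at fixed grid keypoints; the band treated as discriminative, the grid steps and the vocabulary size are placeholders, not the paper's settings.

```python
import numpy as np
import cv2
from sklearn.cluster import KMeans

def grid_keypoints(h, w, step, size=8.0):
    return [cv2.KeyPoint(float(x), float(y), size)
            for y in range(0, h, step) for x in range(0, w, step)]

def face_keypoints(h, w):
    # Denser grid over the (assumed) discriminative upper-face band,
    # sparser grid over the remaining, less interesting zones.
    def in_band(kp):
        return h * 0.2 < kp.pt[1] < h * 0.6
    dense = [kp for kp in grid_keypoints(h, w, step=4) if in_band(kp)]
    sparse = [kp for kp in grid_keypoints(h, w, step=12) if not in_band(kp)]
    return dense + sparse

def bow_histogram(gray_face, vocab):
    # gray_face: uint8 grayscale face crop; vocab: a fitted KMeans vocabulary.
    sift = cv2.SIFT_create()
    _, desc = sift.compute(gray_face, face_keypoints(*gray_face.shape[:2]))
    words = vocab.predict(desc)
    hist = np.bincount(words, minlength=vocab.n_clusters).astype(float)
    return hist / hist.sum()

# Usage sketch: fit the vocabulary on descriptors pooled from training faces,
#   vocab = KMeans(n_clusters=256, n_init=4).fit(all_training_descriptors)
# then represent every gallery/probe face as bow_histogram(face, vocab).
```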
computer analysis of images and patterns | 2013
Suranjana Samanta; Sukhendu Das
Domain adaptation (DA) is a method used to obtain better classification accuracy when the training and testing datasets have different distributions. This paper describes an algorithm for DA that transforms data from the source domain to match the distribution of the target domain. We use eigen-analysis of the data in both domains to estimate the transformation along each dimension separately. In order to parameterize the distributions in both domains, we perform clustering separately along every dimension prior to the transformation. The proposed DA algorithm, when applied to the task of object categorization, gives better results than a few state-of-the-art methods.
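A much-simplified relative of the transformation described above: the source data is whitened along its own eigenvectors and re-coloured with the target's eigenvectors and eigenvalues, so its second-order statistics match the target's. The paper's per-dimension clustering step is omitted; this is only an illustrative sketch.

```python
import numpy as np

def align_source_to_target(Xs, Xt, eps=1e-6):
    Xs_c = Xs - Xs.mean(axis=0)
    Xt_c = Xt - Xt.mean(axis=0)
    d = Xs.shape[1]
    # Eigen-analysis of the covariance of each domain.
    ws, Vs = np.linalg.eigh(np.cov(Xs_c, rowvar=False) + eps * np.eye(d))
    wt, Vt = np.linalg.eigh(np.cov(Xt_c, rowvar=False) + eps * np.eye(d))
    # Whiten the source along its own eigenvectors, then re-colour with the
    # target's eigenstructure and shift to the target mean.
    whiten = Vs @ np.diag(1.0 / np.sqrt(ws)) @ Vs.T
    recolour = Vt @ np.diag(np.sqrt(wt)) @ Vt.T
    return Xs_c @ whiten @ recolour + Xt.mean(axis=0)

Xs = np.random.randn(200, 5) * 2.0 + 1.0   # toy source domain
Xt = np.random.randn(150, 5) * 0.5 - 1.0   # toy target domain
Xs_aligned = align_source_to_target(Xs, Xt)
print(np.cov(Xs_aligned, rowvar=False).round(2))   # now close to cov(Xt)
```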
IET Image Processing | 2015
Suranjana Samanta; Sukhendu Das
This study describes a new technique of unsupervised domain adaptation based on eigen-analysis in kernel space, for the purpose of categorisation tasks. The authors propose a transformation of data in the source domain such that the eigenvectors and eigenvalues of the transformed source domain become similar to those of the target domain. They extend this idea to the reproducing kernel Hilbert space, which enables them to deal with non-linear transformations of the source domain. They also propose a measure to obtain the appropriate number of eigenvectors needed for the transformation. Results on object, video and text categorisation tasks using real-world datasets show that the proposed method produces better results than a few recent state-of-the-art methods of domain adaptation.
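An illustrative sketch (not the authors' algorithm) of eigen-analysis in kernel space via kernel PCA, together with a simple cumulative-eigenvalue rule for choosing how many eigenvectors to retain; the RBF kernel, its bandwidth and the 95% energy threshold are assumptions.

```python
import numpy as np
from sklearn.decomposition import KernelPCA

def num_components(X, gamma=0.5, energy=0.95):
    # Eigen-analysis in kernel space: keep enough eigenvectors to explain
    # the chosen fraction of the total kernel-space variance.
    kpca = KernelPCA(kernel="rbf", gamma=gamma).fit(X)
    lam = kpca.eigenvalues_                      # sorted, non-negative
    ratio = np.cumsum(lam) / lam.sum()
    return int(np.searchsorted(ratio, energy) + 1)

Xt = np.random.randn(100, 10)                    # toy target-domain data
print("eigenvectors needed for 95% energy:", num_components(Xt))
```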
computer vision and pattern recognition | 2013
Suranjana Samanta; A. Tirumarai Selvan; Sukhendu Das
In this paper, we propose a method to improve the results of clustering in a target domain using significant information from an auxiliary (source) domain dataset. The applicability of this method concerns the field of transfer learning (or domain adaptation), where the performance of a task (say, classification using clustering) in one domain is improved using knowledge obtained from a similar domain. We propose two unsupervised methods of cross-domain clustering and show results on two different categories of benchmark datasets, both exhibiting differences in density distributions across the pair of domains. In the first method, we propose an iterative framework where the clustering in the target domain is influenced by the clusters formed in the source domain and vice versa. Similarity/dissimilarity measures have been appropriately formulated using Euclidean distance and Bregman divergence for cross-domain clustering. In the second method, we perform clustering in the target domain by estimating local density computed using a non-parametric (NP) density estimator (owing to the small number of samples). Prior to clustering, the NP-density scattering in the target domain is modified using information about the cluster density distribution in the source domain. Results on real-world datasets suggest that the proposed methods of cross-domain clustering are comparable to recent state-of-the-art work.
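A toy illustration of one ingredient of the second method: a non-parametric (kernel) density estimate over the sparse target domain, used here simply as a per-sample weight for k-means. How the paper modifies this density with source-domain cluster information is not reproduced, and the bandwidth and cluster count are arbitrary.

```python
import numpy as np
from sklearn.neighbors import KernelDensity
from sklearn.cluster import KMeans

Xt = np.random.randn(80, 4)                       # few target-domain samples
# Non-parametric density estimate at every target sample.
log_dens = KernelDensity(bandwidth=0.8).fit(Xt).score_samples(Xt)
weights = np.exp(log_dens - log_dens.max())       # normalised NP density

# Density-weighted clustering of the target domain.
labels = KMeans(n_clusters=3, n_init=10).fit_predict(Xt, sample_weight=weights)
print(np.bincount(labels))
```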
PerMIn'12 Proceedings of the First Indo-Japan conference on Perception and Machine Intelligence | 2012
Gyanesh Dwivedi; Sukhendu Das; Subrata Rakshit; Megha Vora; Suranjana Samanta
In traditional content-based image retrieval (CBIR) methods, features are extracted from the entire image for computing similarity with the query. It is necessary to design a smart object-centric CBIR system that retrieves images from the gallery having objects similar to the one present in the foreground of the query image. We propose a model for a novel SLAR (Simultaneous Localization And Recognition) framework for solving this problem of smart CBIR, to simultaneously: (i) detect the location and (ii) recognize the type (ID or class) of the foreground object in a scene. The framework integrates both unsupervised and supervised methods of foreground segmentation and object classification. This model is motivated by cognitive models of human visual perception, which generalize from examples to simultaneously locate and categorize objects. Experimentation has been done on six categories of objects and the results have been compared with a contemporary work on CBIR.
pattern recognition and machine intelligence | 2009
Suranjana Samanta; Sukhendu Das
This paper describes a fast, non-parametric algorithm for feature ranking and selection aimed at better classification accuracy. In real-world cases, some of the features are noisy or redundant, which leads to the question: which features must be selected to obtain the best classification accuracy? We propose a supervised feature selection method in which features forming distinct class-wise distributions are given preference. The number of features selected for final classification is adaptive and depends on the dataset used for training. We validate the proposed method by comparing it with an existing method on real-world datasets.
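A minimal, related sketch of supervised feature ranking using a Fisher-style score that prefers features whose class-wise means are well separated relative to their within-class spread; the adaptive cut-off (keep scores above the mean score) is an assumption, not the paper's criterion.

```python
import numpy as np

def fisher_scores(X, y):
    classes = np.unique(y)
    mu = X.mean(axis=0)
    between = sum((y == c).sum() * (X[y == c].mean(axis=0) - mu) ** 2 for c in classes)
    within = sum((y == c).sum() * X[y == c].var(axis=0) for c in classes)
    return between / (within + 1e-12)

def select_features(X, y):
    s = fisher_scores(X, y)
    keep = np.where(s > s.mean())[0]   # adaptive: the count depends on the data
    return keep, s

X = np.random.randn(200, 10)
y = np.random.randint(0, 3, size=200)
X[:, 0] += 3 * y                       # make feature 0 clearly discriminative
print(select_features(X, y)[0])
```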
international conference on image processing | 2014
Suranjana Samanta; Sukhendu Das
This paper describes a method of cross-domain object and event categorization using the concept of domain adaptation. Here, a classifier is trained using samples from the source/auxiliary domain and its performance is observed on a set of test samples taken from a different domain, termed the target domain. To overcome the difference between the two domains, we aim to find an optimal sub-space such that instances from both domains follow similar distributions when projected onto it. Along with the distributions, the underlying manifolds of the two domains are aligned in the sub-space to reduce the difference in the structure of the data from the two domains. The local spatial arrangement of the instances in both domains is also preserved in the optimal sub-space. Results show that the proposed method of unsupervised domain adaptation provides better classification accuracy than a few state-of-the-art methods.
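A simplified, transfer-component-analysis-style sketch of the "find a sub-space where both domains look alike" step: a projection is learned that reduces the Maximum Mean Discrepancy between the projected source and target data. The paper's manifold-alignment and locality-preserving terms are not included, and the kernel, mu and sub-space dimension are illustrative choices.

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel

def tca_like(Xs, Xt, dim=5, mu=1.0, gamma=0.5):
    ns, nt = len(Xs), len(Xt)
    X = np.vstack([Xs, Xt])
    n = ns + nt
    K = rbf_kernel(X, X, gamma=gamma)
    # MMD matrix L and centring matrix H.
    e = np.vstack([np.full((ns, 1), 1.0 / ns), np.full((nt, 1), -1.0 / nt)])
    L = e @ e.T
    H = np.eye(n) - np.ones((n, n)) / n
    # Leading eigenvectors of (K L K + mu I)^{-1} K H K give the projection.
    M = np.linalg.solve(K @ L @ K + mu * np.eye(n), K @ H @ K)
    vals, vecs = np.linalg.eig(M)
    W = np.real(vecs[:, np.argsort(-np.real(vals))[:dim]])
    Z = K @ W
    return Z[:ns], Z[ns:]

Zs, Zt = tca_like(np.random.randn(60, 8), np.random.randn(40, 8) + 1.0)
print(Zs.shape, Zt.shape)
```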
british machine vision conference | 2014
Suranjana Samanta; Tirumarai Selvan; Sukhendu Das
Domain adaptation (DA) is the process in which labeled training samples available from one domain are used to improve the performance of statistical tasks performed on test samples drawn from a different domain. The domain from which the training samples are obtained is termed the source domain, and the counterpart consisting of the test samples is termed the target domain. A few unlabeled training samples are also taken from the target domain in order to approximate its distribution. In this paper, we propose a new method of unsupervised DA, where a set of domain-invariant sub-spaces is estimated using the geometrical and statistical properties of the source and target domains. This is a modification of the work done by Gopalan et al. [2], where the geodesic path from the principal components of the source to those of the target is considered in the Grassmann manifold, and intermediary points are sampled to represent the incremental change in the geometric properties of the data in the source and target domains. Instead of the geodesic path, we consider an alternate path of shortest length between the principal components of source and target, with the property that the intermediary sample points on the path form domain-invariant sub-spaces using the concept of Maximum Mean Discrepancy (MMD) [3]. Thus we model the change in the geometric properties of data in both domains sequentially, in a manner such that the distributions of the projected data from both domains always remain similar along the path. The entire formulation is done in kernel space, which makes it more robust to non-linear transformations. Let X and Y be the source and target domains, having n_X and n_Y instances respectively. If Φ(·) is a universal kernel function, then in kernel space the source and target domains are Φ(X) ∈ ℝ^{n_X × d} and Φ(Y) ∈ ℝ^{n_Y × d} respectively. Let K_XX and K_YY be the kernel gram matrices of Φ(X) and Φ(Y) respectively. Let D = [X; Y] denote the combined source and target domain data, with corresponding kernel-space representation Φ(D). The kernel gram matrix formed using D is given by
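A hedged sketch of the MMD quantity that drives the construction above, computed from the kernel gram matrices K_XX, K_YY and the cross matrix K_XY; the RBF kernel and its bandwidth are assumptions, and the path construction itself is not shown.

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel

def mmd_squared(X, Y, gamma=0.5):
    # MMD^2 = mean(K_XX) + mean(K_YY) - 2 * mean(K_XY), estimated from samples.
    K_XX = rbf_kernel(X, X, gamma=gamma)
    K_YY = rbf_kernel(Y, Y, gamma=gamma)
    K_XY = rbf_kernel(X, Y, gamma=gamma)
    return K_XX.mean() + K_YY.mean() - 2.0 * K_XY.mean()

X = np.random.randn(100, 6)          # toy source-domain samples
Y = np.random.randn(120, 6) + 0.5    # toy target-domain samples
print("MMD^2 between domains:", mmd_squared(X, Y))
```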