Soma Biswas
Indian Institute of Science
Publications
Featured research published by Soma Biswas.
Journal of Visual Languages and Computing | 2009
Narayanan Ramanathan; Rama Chellappa; Soma Biswas
Facial aging, a new dimension that has recently been added to the problem of face recognition, poses interesting theoretical and practical challenges to the research community. The problem, which originally generated interest in the psychophysics and human perception community, has recently found enhanced interest in the computer vision community. How do humans perceive age? What constitutes an age-invariant signature that can be derived from faces? How compactly can the facial growth event be described? How does facial aging impact recognition performance? In this paper, we give a thorough analysis of the problem of facial aging and further provide a complete account of the many interesting studies that have been performed on this topic from different fields. We offer a comparative analysis of various approaches that have been proposed for problems such as age estimation, appearance prediction, and face verification, and provide insights into future research on this topic.
IEEE Transactions on Pattern Analysis and Machine Intelligence | 2009
Soma Biswas; Gaurav Aggarwal; Rama Chellappa
In this paper, we propose a non-stationary stochastic filtering framework for the task of albedo estimation from a single image. There are several approaches in the literature for albedo estimation, but few account for the errors in the estimates of surface normals and light source direction when computing the albedo estimate. The proposed approach effectively utilizes the error statistics of the surface normals and illumination direction for robust estimation of albedo. The albedo estimate obtained is further used to generate albedo-free normalized images for recovering the shape of an object. Illustrations and experiments are provided to show the efficacy of the approach and its application to illumination-invariant matching and shape recovery.
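The abstract does not spell out the filtering equations, but the underlying Lambertian image model (intensity = albedo × surface normal · light direction) is standard. A minimal sketch of naive albedo recovery under that model, which is the kind of baseline an error-aware filter would improve upon, might look like the following; the array shapes and the synthetic hemisphere example are assumptions for illustration only.

```python
import numpy as np

def naive_albedo(image, normals, light_dir, eps=1e-3):
    """Naive Lambertian albedo: rho = I / max(n . s, eps).

    image     : (H, W) grayscale intensities
    normals   : (H, W, 3) unit surface normals (assumed already estimated)
    light_dir : (3,) unit light-source direction

    The paper's non-stationary stochastic filter additionally models errors
    in `normals` and `light_dir`; this sketch ignores those error statistics.
    """
    shading = normals @ light_dir              # per-pixel n . s
    shading = np.clip(shading, eps, None)      # avoid division by ~0 in shadowed pixels
    return image / shading

# Toy usage on a synthetic hemisphere with constant albedo 0.7.
H, W = 64, 64
yy, xx = np.meshgrid(np.linspace(-1, 1, H), np.linspace(-1, 1, W), indexing="ij")
zz = np.sqrt(np.clip(1 - xx**2 - yy**2, 0, None))
normals = np.dstack([xx, yy, zz])
normals /= np.linalg.norm(normals, axis=2, keepdims=True) + 1e-12
light = np.array([0.0, 0.0, 1.0])
image = 0.7 * np.clip(normals @ light, 0, None)
estimate = naive_albedo(image, normals, light)   # ~0.7 inside the hemisphere
```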
IEEE Transactions on Information Forensics and Security | 2012
Vishal M. Patel; Tao Wu; Soma Biswas; P J. Phillips; Rama Chellappa
We present a face recognition algorithm based on simultaneous sparse approximations under varying illumination and pose. A dictionary is learned for each class from the given training examples by minimizing the representation error with a sparseness constraint. A novel test image is projected onto the span of the atoms in each learned dictionary, and the resulting residual vectors are then used for classification. To handle variations in lighting conditions and pose, an image relighting technique based on pose-robust albedo estimation is used to generate multiple frontal images of the same person with variable lighting. As a result, the proposed algorithm can recognize human faces with high accuracy even when only a single image or very few images per person are provided for training. The effectiveness of the proposed method is demonstrated using publicly available databases, and it is shown to perform significantly better than many competitive face recognition algorithms.
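The classification rule described here (project onto each class's atoms, keep the class with the smallest residual) can be sketched with a plain least-squares projection standing in for the paper's sparse coding and learned dictionaries; the relighting step is omitted and all names below are illustrative.

```python
import numpy as np

def residual_classify(test_vec, class_dicts):
    """Assign `test_vec` to the class whose dictionary span reconstructs it best.

    class_dicts: dict mapping class label -> (d, k) matrix of atoms.
    Plain least squares is used here; the paper instead computes sparse codes
    over dictionaries learned with a sparseness constraint.
    """
    best_label, best_residual = None, np.inf
    for label, D in class_dicts.items():
        coeffs, *_ = np.linalg.lstsq(D, test_vec, rcond=None)
        residual = np.linalg.norm(test_vec - D @ coeffs)
        if residual < best_residual:
            best_label, best_residual = label, residual
    return best_label, best_residual

# Toy usage: two classes with random atoms; the test vector lies near class "a".
rng = np.random.default_rng(0)
dicts = {"a": rng.normal(size=(50, 5)), "b": rng.normal(size=(50, 5))}
test = dicts["a"] @ rng.normal(size=5) + 0.01 * rng.normal(size=50)
print(residual_classify(test, dicts)[0])   # expected output: a
```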
International Journal of Pattern Recognition and Artificial Intelligence | 2012
Jeremiah R. Barr; Kevin W. Bowyer; Patrick J. Flynn; Soma Biswas
Driven by key law enforcement and commercial applications, research on face recognition from video sources has intensified in recent years. The ensuing results have demonstrated that videos possess unique properties that allow both humans and automated systems to perform recognition accurately in difficult viewing conditions. However, significant research challenges remain as most video-based applications do not allow for controlled recordings. In this survey, we categorize the research in this area and present a broad and deep review of recently proposed methods for overcoming the difficulties encountered in unconstrained settings. We also draw connections between the ways in which humans and current algorithms recognize faces. An overview of the most popular and difficult publicly available face video databases is provided to complement these discussions. Finally, we cover key research challenges and opportunities that lie ahead for the field as a whole.
IEEE Transactions on Pattern Analysis and Machine Intelligence | 2012
Soma Biswas; Kevin W. Bowyer; Patrick J. Flynn
Face recognition performance degrades considerably when the input images are of low resolution (LR), as is often the case for images taken by surveillance cameras or from a large distance. In this paper, we propose a novel approach for matching low-resolution probe images with higher resolution gallery images, which are often available during enrollment, using Multidimensional Scaling (MDS). The ideal scenario is when both the probe and gallery images are of high enough resolution to discriminate across different subjects. The proposed method simultaneously embeds the low-resolution probe images and the high-resolution gallery images in a common space such that the distance between them in the transformed space approximates the distance had both images been of high resolution. The two mappings are learned simultaneously from high-resolution training images using an iterative majorization algorithm. Extensive evaluation of the proposed approach on the Multi-PIE data set with probe image resolution as low as 8 × 6 pixels illustrates the usefulness of the method. We show that the proposed approach improves the matching performance significantly compared to performing matching in the low-resolution domain or using super-resolution techniques to obtain a higher resolution test image prior to recognition. Experiments on low-resolution surveillance images from the Surveillance Cameras Face Database further highlight the effectiveness of the approach.
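The coupled embedding described here can be illustrated with a much-simplified sketch: two linear maps are fit so that cross-resolution distances in the common space approximate the corresponding high-resolution distances. Plain gradient descent on a stress-like objective stands in for the paper's iterative majorization, and all feature dimensions and names below are assumptions.

```python
import numpy as np

def learn_coupled_maps(X_hr, X_lr, dim=32, lr=1e-3, iters=500, seed=0):
    """Fit linear maps W_h, W_l so that ||W_h x_h_i - W_l x_l_j|| approximates
    the high-resolution distance ||x_h_i - x_h_j|| over all training pairs.

    X_hr: (n, d_h) high-resolution features; X_lr: (n, d_l) low-resolution
    features of the same n training subjects. Gradient descent on the stress
    stands in for the paper's iterative majorization algorithm.
    """
    rng = np.random.default_rng(seed)
    n = X_hr.shape[0]
    W_h = rng.normal(scale=0.1, size=(X_hr.shape[1], dim))
    W_l = rng.normal(scale=0.1, size=(X_lr.shape[1], dim))
    D = np.linalg.norm(X_hr[:, None, :] - X_hr[None, :, :], axis=2)  # target distances
    for _ in range(iters):
        A, B = X_hr @ W_h, X_lr @ W_l               # embedded HR and LR features
        diff = A[:, None, :] - B[None, :, :]        # (n, n, dim) pairwise differences
        dist = np.linalg.norm(diff, axis=2) + 1e-9
        g = ((dist - D) / dist)[:, :, None] * diff  # gradient of the stress w.r.t. diff
        W_h -= lr * (X_hr.T @ g.sum(axis=1)) / n
        W_l -= lr * (-X_lr.T @ g.sum(axis=0)) / n
    return W_h, W_l

# Toy usage with random surrogate features for 40 training subjects.
rng = np.random.default_rng(1)
X_hr, X_lr = rng.normal(size=(40, 100)), rng.normal(size=(40, 20))
W_h, W_l = learn_coupled_maps(X_hr, X_lr)
# At test time, gallery features map through W_h and probe features through W_l,
# and matching is done by nearest neighbour in the common space.
```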
IEEE Transactions on Pattern Analysis and Machine Intelligence | 2013
Soma Biswas; Gaurav Aggarwal; Patrick J. Flynn; Kevin W. Bowyer
Face images captured by surveillance cameras usually have poor resolution, in addition to uncontrolled poses and illumination conditions, which adversely affects the performance of face matching algorithms. In this paper, we develop a novel approach for matching surveillance-quality facial images to high-resolution frontal images which are often available during enrollment. The proposed approach uses Multidimensional Scaling to simultaneously transform the features from the poor quality probe images and the high quality gallery images in such a manner that the distances between them approximate the distances had the probe images been captured under the same conditions as the gallery images. Thorough evaluation on the Multi-PIE dataset and comparisons with state-of-the-art super-resolution and classifier-based approaches illustrate the usefulness of the proposed approach. Experiments on real surveillance images further signify the applicability of the framework.
Workshop on Applications of Computer Vision | 2012
Gaurav Aggarwal; Soma Biswas; Patrick J. Flynn; Kevin W. Bowyer
Plastic surgery procedures can significantly alter facial appearance, thereby posing a serious challenge even to state-of-the-art face matching algorithms. In this paper, we propose a novel approach to address the challenges involved in automatic matching of faces across plastic surgery variations. In the proposed formulation, part-wise facial characterization is combined with the recently popular sparse representation approach to address these challenges. The sparse representation approach requires several gallery images per subject to function effectively, a requirement that is often not met in practice, including in the problem addressed in this work. The proposed formulation utilizes images from sequestered non-gallery subjects with similar local facial characteristics to fulfill this requirement. Extensive experiments conducted on a recently introduced plastic surgery database [17] consisting of 900 subjects highlight the effectiveness of the proposed approach.
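A minimal sketch of the part-wise idea, splitting a face into fixed regions and fusing per-part similarity scores so that regions untouched by surgery can still vote for the correct identity, might look like the following. Normalized correlation per part stands in for the paper's per-part sparse representation, and the use of sequestered non-gallery subjects is omitted.

```python
import numpy as np

def split_parts(face, rows=4, cols=4):
    """Split a (H, W) face image into one flat feature vector per fixed region."""
    H, W = face.shape
    hs, ws = H // rows, W // cols
    return [face[r*hs:(r+1)*hs, c*ws:(c+1)*ws].ravel()
            for r in range(rows) for c in range(cols)]

def partwise_score(probe, gallery_faces, rows=4, cols=4):
    """Score a probe against each gallery face by summing per-part similarities.

    Per-part normalized correlation stands in for the per-part sparse
    representation used in the paper; parts unaffected by surgery still
    contribute strong evidence for the correct identity.
    """
    p_parts = split_parts(probe, rows, cols)
    scores = []
    for g in gallery_faces:
        g_parts = split_parts(g, rows, cols)
        s = 0.0
        for p, q in zip(p_parts, g_parts):
            p = (p - p.mean()) / (p.std() + 1e-9)
            q = (q - q.mean()) / (q.std() + 1e-9)
            s += float(p @ q) / p.size               # normalized correlation per part
        scores.append(s)
    return int(np.argmax(scores)), scores

# Toy usage with random "face" images: the probe should match gallery entry 0.
rng = np.random.default_rng(0)
gallery = [rng.normal(size=(64, 64)) for _ in range(3)]
probe = gallery[0] + 0.1 * rng.normal(size=(64, 64))
print(partwise_score(probe, gallery)[0])   # expected: 0
```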
International Conference on Biometrics: Theory, Applications and Systems | 2008
Soma Biswas; Gaurav Aggarwal; Narayanan Ramanathan; Rama Chellappa
Human faces undergo considerable changes in appearance as they age. Though facial aging has been studied for decades, it is only recently that attempts have been made to address the problem from a computational point of view. Most of these early efforts follow a simulation approach in which matching is performed by synthesizing face images at the target age. Given the innumerable ways in which a face can potentially age, the synthesized aged image may not be similar to the actual aged image. In this paper, we bypass the synthesis step and directly analyze the drifts of facial features with aging from a purely matching perspective. Our analysis is based on the observation that facial appearance changes in a coherent manner as people age. We provide measures to capture this coherency in feature drifts. Illustrations and experimental results show the efficacy of such an approach for matching faces across age progression.
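The abstract does not define its coherency measures, so the following is only a speculative toy illustration of the general idea: compute the drift of each facial feature between two ages and check how well each drift agrees with the drifts of its spatial neighbours. The nearest-neighbour formulation and all names are assumptions.

```python
import numpy as np

def drift_coherency(points_young, points_old, k=3):
    """Toy coherency score for feature drifts between two ages.

    points_young, points_old: (n, 2) corresponding feature locations.
    Each feature's drift is compared (via cosine similarity) with the drifts
    of its k nearest spatial neighbours; coherent aging yields values near 1,
    incoherent drifts (e.g. a wrong identity match) yield lower values.
    """
    drifts = points_old - points_young
    n = len(drifts)
    dists = np.linalg.norm(points_young[:, None] - points_young[None, :], axis=2)
    score = 0.0
    for i in range(n):
        nbrs = np.argsort(dists[i])[1:k+1]           # skip the point itself
        for j in nbrs:
            a, b = drifts[i], drifts[j]
            denom = np.linalg.norm(a) * np.linalg.norm(b) + 1e-9
            score += float(a @ b) / denom
    return score / (n * k)

# Toy usage: a smooth downward drift of landmarks yields a coherency near 1.
pts_young = np.random.default_rng(0).uniform(0, 100, size=(10, 2))
pts_old = pts_young + np.array([0.0, 3.0]) + 0.2 * np.random.default_rng(1).normal(size=(10, 2))
print(round(drift_coherency(pts_young, pts_old), 2))
```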
IEEE Transactions on Multimedia | 2010
Soma Biswas; Gaurav Aggarwal; Rama Chellappa
Many shape matching methods are either fast but too simplistic to give the desired performance, or promising as far as performance is concerned but computationally demanding. In this paper, we present a very simple and efficient approach that not only performs almost as well as many state-of-the-art techniques but also scales up to large databases. In the proposed approach, each shape is indexed based on a variety of simple and easily computable features which are invariant to articulations, rigid transformations, etc. The features characterize pairwise geometric relationships between interest points on the shape. The fact that each shape is represented using a number of distributed features, instead of a single global feature that captures the shape in its entirety, provides robustness to the approach. Shapes in the database are ordered according to their similarity with the query shape, and similar shapes are retrieved using an efficient scheme which does not involve costly operations like shape-wise alignment or establishing correspondences. Depending on the application, the approach can be used directly for matching or as a first step to obtain a short list of candidate shapes for more rigorous matching. We show that the features proposed for shape indexing can also be used to perform this rigorous matching, further improving the retrieval performance.
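A small sketch of indexing shapes by pairwise geometric relationships between interest points follows; the histogram of normalized pairwise distances used here is a stand-in feature that is invariant to rigid transformations and scale but, unlike the paper's features, not to articulation.

```python
import numpy as np

def pairwise_distance_histogram(points, bins=16):
    """Index a shape by the histogram of its normalized pairwise distances.

    points: (n, 2) interest points sampled from the shape.
    Invariant to rotation, translation and scale; the paper's features
    additionally handle articulation, which this stand-in does not.
    """
    d = np.linalg.norm(points[:, None] - points[None, :], axis=2)
    d = d[np.triu_indices(len(points), k=1)]         # unique pairs only
    d = d / (d.max() + 1e-9)
    hist, _ = np.histogram(d, bins=bins, range=(0, 1), density=True)
    return hist / (hist.sum() + 1e-9)

def retrieve(query_points, database):
    """Rank database shapes by L1 distance between index histograms."""
    q = pairwise_distance_histogram(query_points)
    scores = [np.abs(q - pairwise_distance_histogram(p)).sum() for p in database]
    return np.argsort(scores)                        # most similar shapes first

# Toy usage: rank three random point-set "shapes" against a query.
rng = np.random.default_rng(0)
shapes = [rng.uniform(size=(50, 2)) for _ in range(3)]
query = shapes[1] * 2.0 + 5.0                        # scaled + translated copy of shape 1
print(retrieve(query, shapes)[0])                    # expected: 1
```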
International Conference on Biometrics: Theory, Applications and Systems | 2010
Soma Biswas; Kevin W. Bowyer; Patrick J. Flynn
Face recognition performance degrades considerably when the input images are of poor resolution, as is often the case for images taken by surveillance cameras or from a large distance. In this paper, we propose a novel approach for the recognition of low resolution images using multidimensional scaling. From a resolution point of view, the scenario yielding the best performance is when both the probe and gallery images are of high enough resolution to discriminate across different subjects. The proposed method embeds the low resolution images in a Euclidean space such that the distances between them in the transformed space approximate the distances had both images been of high resolution. The mapping is learned from high resolution training images and their corresponding low resolution images using an iterative majorization algorithm. Extensive evaluation of the proposed approach on datasets such as PIE and FRGC with resolution as low as 7 × 6 pixels illustrates the usefulness of the method. We show that the proposed approach significantly improves the matching performance compared to performing standard matching in the low-resolution domain. Performance comparison with different super-resolution techniques which obtain higher-resolution images prior to recognition further signifies the effectiveness of our approach.