
Publications


Featured research published by Chandrasekhar Bhagavatula.


International Conference on Biometrics: Theory, Applications and Systems | 2012

Gait-ID on the move: Pace independent human identification using cell phone accelerometer dynamics

Felix Juefei-Xu; Chandrasekhar Bhagavatula; Aaron Jaech; Unni Prasad; Marios Savvides

In this paper, we propose a robust, acceleration-based, pace-independent gait recognition framework using Android smartphones. In extensive experiments using cyclostationarity and continuous wavelet transform spectrogram analysis on our gait acceleration database, which contains both normal- and fast-paced data, our algorithm outperforms the state of the art by a large margin. Specifically, for normal-to-normal pace matching we achieve a 99.4% verification rate (VR) at 0.1% false accept rate (FAR); for fast vs. fast, 96.8% VR at 0.1% FAR; and for the challenging normal vs. fast case, we still achieve 61.1% VR at 0.1% FAR. These findings lay the foundation for highly accurate, pace-independent gait recognition on mobile devices.
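The core signal-processing step here, a continuous wavelet transform (CWT) spectrogram of the accelerometer stream, can be illustrated with a short sketch. The following is a minimal example, not the authors' implementation; the 100 Hz sampling rate, the PyWavelets Morlet wavelet, and the random stand-in data are all assumptions.

```python
import numpy as np
import pywt

fs = 100.0                                   # assumed sampling rate, Hz
t = np.arange(0, 10, 1 / fs)
acc = np.random.randn(t.size, 3)             # stand-in for a real 3-axis recording

mag = np.linalg.norm(acc, axis=1)            # orientation-invariant magnitude
mag -= mag.mean()                            # remove the gravity/DC offset

scales = np.arange(1, 128)
coef, freqs = pywt.cwt(mag, scales, "morl", sampling_period=1 / fs)
spectrogram = np.abs(coef) ** 2              # scale-time energy map

# The dominant gait frequency is the scale carrying the most energy;
# pace-independent matching would normalize spectrograms by this
# fundamental before comparing them.
gait_freq = freqs[spectrogram.sum(axis=1).argmax()]
print(f"estimated gait fundamental: {gait_freq:.2f} Hz")
```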


International Conference on Image Processing | 2015

Investigating the feasibility of image-based nose biometrics

Niv Zehngut; Felix Juefei-Xu; Rishabh Bardia; Dipan K. Pal; Chandrasekhar Bhagavatula; Marios Savvides

The search for new biometrics is never-ending. In this work, we investigate the use of image-based nasal features as a biometric. In many real-world recognition scenarios, partial occlusions of the face (e.g. sunglasses) leave the nose region visible, and face recognition systems often fail or perform poorly in such settings. Furthermore, the nose region is naturally more invariant to expression than other parts of the face. In this study, we extract discriminative nasal features using Kernel Class-Dependence Feature Analysis (KCFA) based on Optimal Trade-off Synthetic Discriminant Function (OTSDF) filters. We evaluate this technique on the FRGC ver2.0 and AR Face databases, training and testing exclusively on nasal features, and compare the results to full-face recognition using KCFA features. We find that the between-subject discriminability of nasal features is comparable to that of facial features, which shows that nose biometrics have largely untapped potential to support and boost biometric identification. Moreover, our extracted KCFA nose features significantly outperform the PittPatt face matcher, which operates on the original JPEG images, on the AR facial occlusion database. This shows that nose biometrics can serve as a stand-alone biometric trait when subjects are occluded.
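The OTSDF correlation filter underlying KCFA has a compact closed form in the frequency domain: h = T^{-1} X (X^+ T^{-1} X)^{-1} u with T = alpha*D + sqrt(1 - alpha^2)*C. Below is a minimal linear (non-kernelized) sketch assuming white noise (C = I); it illustrates the filter synthesis only and is not the paper's KCFA pipeline.

```python
import numpy as np

def otsdf_filter(train_imgs, alpha=0.9):
    """train_imgs: (n, H, W) nose crops of one subject.
    Returns the frequency-domain OTSDF filter, shape (H, W)."""
    n, H, W = train_imgs.shape
    d = H * W
    # Columns of X are the 2-D FFTs of the training images, flattened
    X = np.fft.fft2(train_imgs).reshape(n, d).T
    # D: average power spectrum (peak sharpness); C = I models white noise
    D = np.mean(np.abs(X) ** 2, axis=1)
    T = alpha * D + np.sqrt(1 - alpha**2)        # diagonal trade-off matrix
    u = np.ones(n)                               # desired peak value per image
    Xt = X / T[:, None]                          # T^{-1} X (diagonal solve)
    h = Xt @ np.linalg.solve(X.conj().T @ Xt, u) # T^{-1}X (X^+ T^{-1}X)^{-1} u
    return h.reshape(H, W)

def peak_response(h_freq, img):
    """Correlate a probe crop with the filter; a sharp peak suggests a match."""
    corr = np.fft.ifft2(np.conj(h_freq) * np.fft.fft2(img))
    return np.abs(corr).max()
```

In practice matching would threshold a peak sharpness measure such as the peak-to-sidelobe ratio rather than the raw maximum; alpha trades correlation-peak sharpness against noise tolerance.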


International Conference on Biometrics: Theory, Applications and Systems | 2016

Towards a deep learning framework for unconstrained face detection

Yutong Zheng; Chenchen Zhu; Khoa Luu; Chandrasekhar Bhagavatula; T. Hoang Ngan Le; Marios Savvides

Robust face detection is one of the most important preprocessing steps for facial expression analysis, facial landmarking, face recognition, pose estimation, building 3D facial models, etc. Although the topic has been studied intensely for decades, it remains challenging due to the numerous variations of face images in real-world scenarios. In this paper, we present a novel approach named Multiple Scale Faster Region-based Convolutional Neural Network (MS-FRCNN) to robustly detect human facial regions in images collected under challenging conditions, e.g. large occlusions, extremely low resolution, facial expressions, and strong illumination variations. The proposed approach is benchmarked on two challenging face detection databases, the WIDER FACE database and the Face Detection Data Set and Benchmark (FDDB), and compared against other recent face detection methods, e.g. Two-stage CNN, Multi-scale Cascade CNN, Faceness, Aggregate Channel Features, HeadHunter, Multi-view Face Detection, and Cascade CNN. The experimental results show that our approach consistently achieves results that are highly competitive with the state of the art.
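The multi-scale idea, pooling the same region of interest from several backbone stages so that small faces retain fine low-level detail, can be sketched as follows. This is an assumption-laden toy head in PyTorch, not the MS-FRCNN implementation; the channel counts, strides, and stage choices are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.ops import roi_align

class MultiScaleRoIHead(nn.Module):
    """Pool one RoI from several backbone stages and fuse the results."""
    def __init__(self, channels=(256, 512, 512), out_channels=512, pool=7):
        super().__init__()
        self.pool = pool
        # a 1x1 conv reduces the concatenated multi-scale descriptor
        self.reduce = nn.Conv2d(sum(channels), out_channels, kernel_size=1)

    def forward(self, feature_maps, boxes, strides=(4, 8, 16)):
        """feature_maps: per-stage (N, C, H, W) tensors;
        boxes: list (one per image) of (K, 4) boxes in input coordinates."""
        pooled = []
        for fmap, stride in zip(feature_maps, strides):
            # spatial_scale maps input-space boxes onto each stage's grid
            p = roi_align(fmap, boxes, output_size=self.pool,
                          spatial_scale=1.0 / stride)
            pooled.append(F.normalize(p, dim=1))   # L2-normalize across channels
        fused = torch.cat(pooled, dim=1)           # concatenate channel-wise
        return self.reduce(fused)                  # (K, out_channels, pool, pool)
```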


International Conference on Image Processing | 2012

Automatic segmentation of cardiosynchronous waveforms using cepstral analysis and continuous wavelet transforms

Chandrasekhar Bhagavatula; Aaron Jaech; Marios Savvides; Vijayakumar Bhagavatula; Robert M. Friedman; Rebecca Blue; Marc O. Griofa

The cardiosynchronous signal obtained through Radio Frequency Impedance Interrogation (RFII) provides a non-invasive method for monitoring hemodynamics, with potential applications in combat triage and biometric identification. The RFII signal is periodic in nature, dominated by the heartbeat cycle. The first step in both applications is to segment the signal by identifying a fiducial point in each heartbeat cycle. A continuous wavelet transform was used to locate the fiducial points with high temporal resolution, and cepstral analysis was used to estimate the average heart rate in order to focus on the appropriate portion of the time-frequency spectrum. Using this method, heartbeats were robustly segmented from RFII signals collected from four subjects.
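The two-step segmentation the abstract outlines, cepstral estimation of the average beat period followed by CWT peak-picking at the matching scale, can be sketched on synthetic data. Everything below (sampling rate, toy signal, band limits) is an assumption for illustration, not the paper's code.

```python
import numpy as np
import pywt
from scipy.signal import find_peaks

fs = 250.0                                   # assumed sampling rate, Hz
t = np.arange(0, 20, 1 / fs)
beat_hz = 1.2                                # toy "heart rate" of ~72 bpm
signal = sum(np.sin(2 * np.pi * k * beat_hz * t) / k for k in (1, 2, 3))
signal += 0.05 * np.random.randn(t.size)

# 1) Real cepstrum: a periodic signal leaves a peak at its beat period.
spectrum = np.abs(np.fft.rfft(signal))
cepstrum = np.abs(np.fft.irfft(np.log(spectrum + 1e-12)))
quefrency = np.arange(cepstrum.size) / fs
band = (quefrency > 0.3) & (quefrency < 2.0)     # plausible 30-200 bpm range
beat_period = quefrency[band][np.argmax(cepstrum[band])]

# 2) CWT at the scale matching the estimated rate; its peaks are fiducials.
scale = pywt.central_frequency("morl") * fs * beat_period
coef, _ = pywt.cwt(signal, [scale], "morl", sampling_period=1 / fs)
fiducials, _ = find_peaks(coef[0], distance=int(0.6 * beat_period * fs))
print(f"period ~{beat_period:.2f} s, {fiducials.size} beats found")
```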


Computer Vision and Pattern Recognition | 2016

Weakly Supervised Facial Analysis with Dense Hyper-Column Features

Chenchen Zhu; Yutong Zheng; Khoa Luu; T. Hoang Ngan Le; Chandrasekhar Bhagavatula; Marios Savvides

Weakly supervised methods have recently become some of the most popular machine learning methods, since they can be applied to large-scale datasets without the critical requirement of richly annotated data. In this paper, we present a novel, self-taught, discriminative facial feature analysis approach in a weakly supervised framework. Our method finds regions that are discriminative across classes yet consistent within a class, and can be applied to many face-related problems. It first trains a deep face model with high discriminative capability to extract facial features. The hypercolumn features are then used to give a pixel-level representation for better classification performance along with discriminative region detection. In addition, calibration approaches are proposed to enable the system to handle multi-class and mixed-class problems, and the system can detect multiple discriminative regions in a single image. Our unified method achieves competitive results in various face analysis applications, such as occlusion detection, face recognition, gender classification, twins verification, and facial attractiveness analysis.
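Hypercolumn features themselves are straightforward to construct: upsample activations from several layers back to image resolution and stack them, so every pixel gets a descriptor mixing low-level edges and high-level semantics. A minimal PyTorch sketch follows, with an assumed untrained VGG-16 backbone and tap layers; it is an illustration, not the paper's trained face model.

```python
import torch
import torch.nn.functional as F
import torchvision

backbone = torchvision.models.vgg16(weights=None).features.eval()
tap_layers = {3, 8, 15, 22}    # assumed taps: relu1_2, relu2_2, relu3_3, relu4_3

@torch.no_grad()
def hypercolumns(image):
    """image: (1, 3, H, W). Returns (1, C_total, H, W) per-pixel descriptors."""
    h, w = image.shape[-2:]
    feats, x = [], image
    for i, layer in enumerate(backbone):
        x = layer(x)
        if i in tap_layers:
            # bilinear upsampling aligns every tap to full image resolution
            feats.append(F.interpolate(x, size=(h, w), mode="bilinear",
                                       align_corners=False))
    return torch.cat(feats, dim=1)             # stack along the channel axis

cols = hypercolumns(torch.randn(1, 3, 224, 224))
print(cols.shape)                              # 64 + 128 + 256 + 512 = 960 channels
```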


Archive | 2016

A Deep Learning Approach to Joint Face Detection and Segmentation

Khoa Luu; Chenchen Zhu; Chandrasekhar Bhagavatula; T. Hoang Ngan Le; Marios Savvides

Robust face detection and facial segmentation are crucial pre-processing steps for facial recognition, expression analysis, pose estimation, building 3D facial models, etc. In previous approaches, face detection and facial segmentation are usually implemented as sequential, mostly separate modules: a face detection algorithm first locates the facial regions in a given image, and a segmentation algorithm is then applied to find the facial boundaries and other facial features, such as the eyebrows, eyes, nose, and mouth. Both tasks are challenging due to the numerous variations of face images in the wild, e.g. facial expressions, illumination variations, occlusions, and low resolution. In this chapter, we present a novel approach that detects human faces and segments facial features from given images simultaneously. Our approach performs accurate facial feature segmentation and demonstrates its effectiveness on images from two challenging face databases, the Multiple Biometric Grand Challenge (MBGC) and Labeled Faces in the Wild (LFW).


International Conference of the IEEE Engineering in Medicine and Biology Society | 2012

Biometric identification of cardiosynchronous waveforms utilizing person specific continuous and discrete wavelet transform features

Chandrasekhar Bhagavatula; Shreyas Venugopalan; Rebecca S. Blue; Robert Friedman; Marc O Griofa; Marios Savvides; B. V. K. Vijaya Kumar

In this paper, we explore how a Radio Frequency Impedance Interrogation (RFII) signal may be used as a biometric feature, which could allow the identification of subjects in operational and potentially hostile environments. Features extracted from the continuous and discrete wavelet decompositions of the signal are investigated for biometric identification. In the former case, the most discriminative features in the wavelet space are extracted using a Fisher ratio metric, and comparisons in the wavelet space are made using the Euclidean distance. In the latter case, the signal is decomposed at various levels using different wavelet bases in order to extract both low-frequency and high-frequency components, and comparisons at each decomposition level are performed with the same distance measure. The dataset consists of four subjects, each with a 15-minute RFII recording, from which the data samples for our experiments, each corresponding to a single heartbeat duration, were extracted. We achieve identification rates of up to 99% using the CWT approach and up to 100% using the DWT approach. While the small size of the dataset limits the interpretation of these results, further work with larger datasets is expected to yield better algorithms for subject identification.
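The DWT branch of this pipeline, wavelet decomposition, Fisher-ratio feature selection, and Euclidean nearest-neighbor matching, can be sketched as below. The wavelet choice, decomposition level, and toy data are assumptions, and the simplified per-feature Fisher ratio stands in for the paper's metric.

```python
import numpy as np
import pywt

def dwt_features(beat, wavelet="db4", level=4):
    """Flatten all sub-band coefficients of one single-beat segment."""
    return np.concatenate(pywt.wavedec(beat, wavelet, level=level))

def fisher_ratio(X, y):
    """Simplified per-feature between-class / within-class variance ratio."""
    classes = np.unique(y)
    mu = X.mean(axis=0)
    between = sum((X[y == c].mean(axis=0) - mu) ** 2 for c in classes)
    within = sum(X[y == c].var(axis=0) for c in classes) + 1e-12
    return between / within

# toy data: 4 subjects x 20 beats, each beat 256 samples of noise
rng = np.random.default_rng(0)
beats = rng.standard_normal((80, 256))
labels = np.repeat(np.arange(4), 20)
X = np.stack([dwt_features(b) for b in beats])

keep = np.argsort(fisher_ratio(X, labels))[-64:]   # most discriminative coeffs
gallery, probe = X[::2][:, keep], X[1::2][:, keep]
dists = np.linalg.norm(probe[:, None] - gallery[None], axis=-1)
pred = labels[::2][dists.argmin(axis=1)]           # Euclidean nearest neighbor
print("toy rank-1 rate:", (pred == labels[1::2]).mean())
```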


Archive | 2017

Unconstrained Biometric Identification in Real World Environments

Marios Savvides; Felix Juefei-Xu; Utsav Prabhu; Chandrasekhar Bhagavatula

In this work, we introduce four topics that cover the most important problems and challenges of unconstrained face biometric identification in real-world environments: (1) off-angle and occluded face recognition, (2) low-resolution face recognition, (3) full craniofacial 3D modeling, and (4) hallucinating the full face from the periocular region. We present state-of-the-art results for each.


International Conference on Computer Vision | 2017

Faster than Real-Time Facial Alignment: A 3D Spatial Transformer Network Approach in Unconstrained Poses

Chandrasekhar Bhagavatula; Chenchen Zhu; Khoa Luu; Marios Savvides


Archive | 2015

Real-Time Video Analysis for Security Surveillance

Andy Lin; Kyle Neblett; Marios Savvides; Karanhaar Singh; Chandrasekhar Bhagavatula

Collaboration


Dive into Chandrasekhar Bhagavatula's collaborations.

Top Co-Authors

Marios Savvides, Carnegie Mellon University
Chenchen Zhu, Carnegie Mellon University
Khoa Luu, Carnegie Mellon University
Felix Juefei-Xu, Carnegie Mellon University
T. Hoang Ngan Le, Carnegie Mellon University
Aaron Jaech, Carnegie Mellon University
Yutong Zheng, Carnegie Mellon University
Dipan K. Pal, Carnegie Mellon University
Karanhaar Singh, Carnegie Mellon University