
Publication


Featured research published by Bappaditya Mandal.


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2008

Eigenfeature Regularization and Extraction in Face Recognition

Xudong Jiang; Bappaditya Mandal; Alex C. Kot

This work proposes a subspace approach that regularizes and extracts eigenfeatures from the face image. The eigenspace of the within-class scatter matrix is decomposed into three subspaces: a reliable subspace spanned mainly by the facial variation, an unstable subspace due to noise and the finite number of training samples, and a null subspace. Eigenfeatures are regularized differently in these three subspaces based on an eigenspectrum model to alleviate problems of instability, overfitting, and poor generalization. This also enables discriminant evaluation to be performed in the whole space. Feature extraction or dimensionality reduction occurs only at the final stage, after the discriminant assessment. These efforts facilitate a discriminative and stable low-dimensional feature representation of the face image. Experiments comparing the proposed approach with other popular subspace methods on the FERET, ORL, AR, and GT databases show that our method consistently outperforms the others.
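
The core idea of the abstract — keep reliable leading eigenvalues and replace the noisy/null tail with values extrapolated from a decay model, so that whitening never divides by an unreliably small eigenvalue — can be sketched as follows. This is a minimal illustration, not the paper's exact procedure: the split point `m` is assumed given (the paper estimates it), and the two-point fit of the `alpha/(k + beta)` decay model is a simplification.

```python
import numpy as np

def regularize_eigenspectrum(eigvals, m, tol=1e-10):
    """Regularize a descending within-class eigenspectrum (sketch).

    eigvals : eigenvalues of the within-class scatter matrix, descending
    m       : number of leading eigenvalues treated as the reliable
              (face-variation) subspace; assumed chosen by the caller
    """
    eigvals = np.asarray(eigvals, dtype=float)
    k = np.arange(1, len(eigvals) + 1, dtype=float)
    # Fit the decay model alpha / (k + beta) through the 1st and m-th
    # reliable eigenvalues (a simplified two-point fit).
    l1, lm = eigvals[0], eigvals[m - 1]
    beta = (m * lm - l1) / (l1 - lm)
    alpha = l1 * (1.0 + beta)
    model = alpha / (k + beta)
    # Keep reliable eigenvalues; replace the noisy and null-space tail
    # with the model's extrapolation so no feature is unduly amplified.
    reg = np.where(k <= m, eigvals, model)
    return np.maximum(reg, tol)
```

Features would then be whitened by `1/sqrt(reg)` before discriminant evaluation in the whole space, which is what makes the tail replacement matter: a raw near-zero eigenvalue would otherwise blow up the corresponding feature.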


Machine Vision and Applications | 2008

Complete discriminant evaluation and feature extraction in kernel space for face recognition

Xudong Jiang; Bappaditya Mandal; Alex C. Kot

This work proposes a method that decomposes the kernel within-class eigenspace into two subspaces: a reliable subspace spanned mainly by the facial variation and an unreliable subspace due to the limited number of training samples. A weighting function is proposed to circumvent undue scaling of eigenvectors corresponding to the unreliable small and zero eigenvalues. Eigenfeatures are then extracted by discriminant evaluation in the whole kernel space. These efforts facilitate a discriminative and stable low-dimensional feature representation of the face image. Experimental results on the FERET, ORL, and GT databases show that our approach consistently outperforms other kernel-based face recognition methods.


Asian Conference on Computer Vision | 2014

A Wearable Face Recognition System on Google Glass for Assisting Social Interactions

Bappaditya Mandal; Shue-Ching Chia; Liyuan Li; Vijay Chandrasekhar; Cheston Tan; Joo-Hwee Lim

In this paper, we present a wearable face recognition (FR) system on Google Glass (GG) to assist users in social interactions. FR is the first step towards face-to-face social interactions. We propose a wearable system on GG that acts as a social interaction assistant; the application includes face detection, eye localization, face recognition, and a user interface for personal information display. To be useful in natural social interaction scenarios, the system should be robust to changes in face pose, scale, and lighting conditions. OpenCV face detection is implemented on GG. We exploit both the OpenCV and ISG (Integration of Sketch and Graph patterns) eye detectors to locate a pair of eyes on the face; the former is stable for frontal-view faces, while the latter performs better for oblique-view faces. We extend the eigenfeature regularization and extraction (ERE) face recognition approach by introducing subclass discriminant analysis (SDA) to perform within-subclass discriminant analysis for face feature extraction. The new approach improves the accuracy of FR over varying face pose, expression, and lighting conditions. A simple user interface (UI) is designed to present relevant personal information about the recognized person to assist in the social interaction. Both a standalone system on GG and a Client-Server (CS) system connecting GG to a smartphone via Bluetooth are implemented, for different levels of privacy protection. Performance is evaluated on a database created using GG, with comparisons against baseline approaches. Numerous experimental studies show that our proposed system on GG performs better in real-time FR than other methods.


IEEE Transactions on Biomedical Engineering | 2013

Quantifying Limb Movements in Epileptic Seizures Through Color-Based Video Analysis

Haiping Lu; Yaozhang Pan; Bappaditya Mandal; How-Lung Eng; Cuntai Guan; Derrick Wei Shih Chan

This paper proposes a color-based video analytic system for quantifying limb movements in epileptic seizure monitoring. The system utilizes colored pyjamas to facilitate limb segmentation and tracking. Thus, it is unobtrusive and requires no sensor or marker attached to the patient's body. We employ Gaussian mixture models for background/foreground modeling and detect limbs through a coarse-to-fine paradigm with graph-cut-based segmentation. Next, we estimate limb parameters guided by domain knowledge and extract displacement and oscillation features from movement trajectories for seizure detection and analysis. We report studies on sequences captured in an epilepsy monitoring unit. Experimental evaluations show that the proposed system achieves performance comparable to EEG-based systems in detecting motor seizures.
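
The background/foreground step described above can be illustrated with a much-simplified per-pixel model. This sketch uses a single Gaussian per pixel rather than the paper's Gaussian mixture models, and omits the graph-cut refinement; the function name and threshold `k` are illustrative choices, not from the paper.

```python
import numpy as np

def detect_foreground(frames, test_frame, k=2.5):
    """Flag foreground pixels in test_frame (simplified sketch).

    frames     : list of background-only frames (2D arrays)
    test_frame : frame to segment
    A pixel is foreground if it deviates from the per-pixel background
    mean by more than k standard deviations.
    """
    stack = np.stack(frames).astype(float)
    mean = stack.mean(axis=0)
    std = stack.std(axis=0) + 1e-6   # guard against zero variance
    return np.abs(test_frame - mean) > k * std
```

In the full system, a mixture of Gaussians per pixel lets the background model absorb multi-modal variation (e.g. flickering lighting) that a single Gaussian cannot.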


International Conference on Signal Processing | 2007

Dimensionality reduction in subspace face recognition

Bappaditya Mandal; Xudong Jiang; Alex C. Kot

Numerous face recognition algorithms use principal component analysis (PCA) as the first step for dimensionality reduction (DR), followed by linear discriminant analysis (LDA). PCA is applied first because it performs DR in the minimum-square-error sense and achieves the most compact representation of the data; however, PCA features lack discrimination ability. To optimize classification, LDA and its variants are applied to the PCA-reduced subspace so that the transformed data achieve minimum within-class variation and maximum between-class variation. In this paper, we study the total, within-class, and between-class scatter matrices and their roles in DR or feature extraction with good discrimination ability. The number of dimensions retained in DR plays a crucial role in the subsequent discriminant analysis. We reveal some important aspects of how the recognition rate varies with different scatter matrices and their stepwise DR. Experimental results on popular face databases are provided to support our findings.
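
The PCA-then-LDA pipeline the paper analyzes can be sketched in a few lines with scikit-learn; `n_pca`, the number of dimensions retained by PCA, is exactly the knob whose effect on recognition rate the paper studies. This is a generic illustration of the pipeline, not the paper's specific experimental setup.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline

def pca_lda_classifier(n_pca):
    # PCA reduces dimensionality in the minimum-square-error sense;
    # LDA then seeks directions with small within-class and large
    # between-class variation in the PCA-reduced subspace.
    return make_pipeline(PCA(n_components=n_pca),
                         LinearDiscriminantAnalysis())

# Toy usage on synthetic "face" vectors (three well-separated classes):
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(c, 1.0, size=(20, 30)) for c in (0.0, 5.0, 10.0)])
y = np.repeat([0, 1, 2], 20)
clf = pca_lda_classifier(10).fit(X, y)
```

Sweeping `n_pca` and plotting `clf.score` on a held-out set reproduces, in miniature, the kind of retained-dimensionality study the paper reports.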


IEEE Intelligent Systems | 2012

Regularized Discriminant Analysis for Holistic Human Activity Recognition

Bappaditya Mandal; How-Lung Eng

A holistic or appearance-based eigenfeature regularization methodology based on a three-parameter eigenmodel improves computer recognition of human activity.


International Conference on Pattern Recognition | 2010

Prediction of eigenvalues and regularization of eigenfeatures for human face verification

Bappaditya Mandal; Xudong Jiang; How-Lung Eng; Alex C. Kot

We present a prediction and regularization strategy for alleviating the conventional problems of LDA and its variants. A procedure is proposed for predicting eigenvalues using a few reliable eigenvalues from the range space. The entire eigenspectrum is divided using two control points; however, the effective low-dimensional discriminative vectors are extracted from the whole eigenspace. The estimated eigenvalues are used for regularization of eigenfeatures in the eigenspace. This prediction and regularization enables discriminant evaluation to be performed in the full eigenspace. The proposed method is evaluated and compared with eight popular subspace-based methods on the face verification task. Experimental results on popular face databases show that our method consistently outperforms the others.


Conference on Industrial Electronics and Applications | 2006

Multi-Scale Feature Extraction for Face Recognition

Bappaditya Mandal; Xudong Jiang; Alex C. Kot

Face recognition has been a very active research area in the past two decades. Many attempts have been made to understand how human beings perceive human faces. It is widely accepted that face recognition may rely on both componential cues (such as the eyes, mouth, nose, and cheeks) and non-componential/holistic information (the spatial relations between these features), though how these cues should be optimally integrated remains unclear. In this paper, we present a new "different observers' view" approach using multi-scale feature extraction from face images. The basic idea of the proposed method is to construct facial features from multi-scale image patches of different face components and then employ a subspace PCA method for further dimensionality reduction and a good representation of the facial features. Finally, the recognition decision is drawn by combining the contributions of each component's features. 2,388 frontal face images from the FERET face database are used to evaluate the proposed method, and the results are encouraging.


International Conference on Acoustics, Speech, and Signal Processing | 2007

Face Recognition Based on Discriminant Evaluation in the Whole Space

Xudong Jiang; Bappaditya Mandal; Alex C. Kot

This paper proposes a face recognition approach that performs linear discriminant analysis in the whole eigenspace. It decomposes the eigenspace into two subspaces: a reliable subspace spanned mainly by the facial variation and an unstable subspace due to the finite number of training samples. Eigenvalues in the unstable subspace are replaced by a constant. This alleviates the over-fitting problem and enables discriminant evaluation in the whole space. Feature extraction or dimensionality reduction occurs only at the final stage, after the discriminant assessment. These efforts facilitate a discriminative and stable low-dimensional feature representation of the face image. Experimental results comparing our approach with some popular subspace methods on the FERET and ORL databases show that it consistently outperforms the others.


IEEE Transactions on Intelligent Transportation Systems | 2017

Towards Detection of Bus Driver Fatigue Based on Robust Visual Analysis of Eye State

Bappaditya Mandal; Liyuan Li; Gang Sam Wang; Jie Lin

Driver fatigue is one of the major causes of traffic accidents, particularly for drivers of large vehicles (such as buses and heavy trucks), due to prolonged driving periods and monotonous working conditions. In this paper, we propose a vision-based fatigue detection system for bus driver monitoring that is easy and flexible to deploy in buses and other large vehicles. The system consists of modules for head-shoulder detection, face detection, eye detection, eye openness estimation, fusion, estimation of the drowsiness measure percentage of eyelid closure (PERCLOS), and fatigue level classification. The core innovative techniques are as follows: 1) an approach to estimate the continuous level of eye openness based on spectral regression; and 2) a fusion algorithm to estimate the eye state based on adaptive integration of the multi-model detections of both eyes. A robust measure of PERCLOS on the continuous level of eye openness is defined, and the driver's state is classified based on it. In experiments, systematic evaluations and analyses of the proposed algorithms, as well as comparisons with ground truth on PERCLOS measurements, are performed. The experimental results show the advantages of the system in accuracy and robustness in the challenging situation where a camera at an oblique viewing angle to the driver's face is used for driving-state monitoring.
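
Given a stream of continuous eye-openness values (the output of the paper's spectral-regression estimator), the PERCLOS measure itself is simple: the fraction of frames in a window where the eye is effectively closed. The sketch below illustrates that step; the closed-eye and fatigue thresholds here are hypothetical placeholders, since the paper's fatigue classifier is learned from data.

```python
def perclos(openness, closed_thresh=0.2):
    """PERCLOS over a window: fraction of frames whose continuous
    eye-openness level falls below the closed-eye threshold.
    openness: per-frame openness values in [0, 1], assumed given
    by an upstream openness estimator."""
    closed = sum(1 for o in openness if o < closed_thresh)
    return closed / len(openness)

def fatigue_level(p, drowsy=0.15, severe=0.4):
    # Hypothetical cut-offs for illustration only.
    if p >= severe:
        return "fatigued"
    if p >= drowsy:
        return "drowsy"
    return "alert"
```

In a deployed system the window would slide over time (e.g. the last minute of frames), so the fatigue level updates continuously as the driver's eye state changes.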

Collaboration


Dive into Bappaditya Mandal's collaborations.

Top Co-Authors

Alex C. Kot

Nanyang Technological University

Xudong Jiang

Nanyang Technological University
