
Publication


Featured research published by Hatice Cinar Akakin.


Image and Vision Computing | 2011

Robust classification of face and head gestures in video

Hatice Cinar Akakin; Bülent Sankur

Automatic analysis of head gestures and facial expressions is a challenging research area with significant applications in human-computer interfaces. We develop a face and head gesture detector for video streams. The detector is based on the facial landmark paradigm, in that both the appearance and the configuration of landmarks are used. First, we detect and accurately track facial landmarks using adaptive templates, a Kalman predictor and subspace regularization. The trajectories (time series) of the facial landmark positions over the course of the head gesture or facial expression are then converted into various discriminative features. Features can be landmark coordinate time series, facial geometric features or patches on expressive regions of the face. We compare two feature-sequence classifiers, namely Hidden Markov Models (HMM) and Hidden Conditional Random Fields (HCRF), and various feature-subspace classifiers, namely Independent Component Analysis (ICA) and Non-negative Matrix Factorization (NMF), on the spatiotemporal data. We achieve 87.3% correct gesture classification on a seven-gesture test database, and performance reaches 98.2% correct detection under a fusion scheme. Promising and competitive results are also achieved on naturally occurring gesture clips from the LILiR TwoTalk Corpus.
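
As a rough illustration of the tracking step described above, the sketch below runs a constant-velocity Kalman filter over a single landmark's (x, y) detections; the transition model, noise levels and state layout are illustrative assumptions, not the parameters used in the paper.

    import numpy as np

    # State: [x, y, vx, vy]; constant-velocity model for one facial landmark.
    F = np.array([[1, 0, 1, 0],
                  [0, 1, 0, 1],
                  [0, 0, 1, 0],
                  [0, 0, 0, 1]], dtype=float)   # state transition (assumed)
    H = np.array([[1, 0, 0, 0],
                  [0, 1, 0, 0]], dtype=float)   # only (x, y) is observed
    Q = 0.01 * np.eye(4)                        # process noise (assumed)
    R = 1.0 * np.eye(2)                         # measurement noise (assumed)

    def kalman_track(measurements):
        """Track one landmark through a sequence of (x, y) detections."""
        x = np.array([*measurements[0], 0.0, 0.0])  # initial state
        P = np.eye(4)
        trajectory = []
        for z in measurements:
            # Predict
            x = F @ x
            P = F @ P @ F.T + Q
            # Update with the template-matching detection z
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)
            x = x + K @ (np.asarray(z, dtype=float) - H @ x)
            P = (np.eye(4) - K @ H) @ P
            trajectory.append(x[:2].copy())
        return np.vstack(trajectory)   # (T, 2) landmark trajectory (time series)

The resulting (T, 2) trajectory is the kind of time series that the sequence and subspace classifiers mentioned in the abstract would then consume.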


Electronic Imaging | 2006

2D/3D facial feature extraction

Hatice Cinar Akakin; Albert Ali Salah; Lale Akarun; Bülent Sankur

We propose and compare three different automatic landmarking methods for near-frontal faces. The face information is provided as 480×640 gray-level images together with the corresponding 3D scene depth information. All three methods follow a coarse-to-fine strategy and use the 3D information in an assisting role. The first method employs a combination of principal component analysis (PCA) and independent component analysis (ICA) features to analyze a Gabor feature set. The second method uses a subset of DCT coefficients for template-based matching. These two methods employ SVM classifiers with polynomial kernel functions. The third method uses a mixture of factor analyzers to learn Gabor filter outputs. We contrast the localization performance obtained with 2D texture and with 3D depth information separately. Although the 3D depth information per se does not perform as well as texture images in landmark localization, the 3D information still plays a beneficial role in eliminating the background and reducing false alarms.
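
To make the DCT-plus-SVM ingredient of the second method concrete, here is a minimal sketch that keeps a low-frequency block of 2D DCT coefficients per patch and feeds them to an SVM with a polynomial kernel; the patch handling, number of coefficients and kernel degree are placeholders rather than the paper's settings.

    import numpy as np
    from scipy.fft import dctn
    from sklearn.svm import SVC

    def dct_features(patch, k=8):
        """Keep the k x k low-frequency 2D DCT coefficients of a gray patch."""
        coeffs = dctn(patch.astype(float), norm='ortho')
        return coeffs[:k, :k].ravel()

    def train_landmark_classifier(pos_patches, neg_patches, k=8):
        """SVM with a polynomial kernel on DCT features (landmark vs. background)."""
        X = np.array([dct_features(p, k) for p in list(pos_patches) + list(neg_patches)])
        y = np.array([1] * len(pos_patches) + [0] * len(neg_patches))
        clf = SVC(kernel='poly', degree=3, C=1.0)
        clf.fit(X, y)
        return clf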


IEEE International Conference on Automatic Face & Gesture Recognition | 2008

Multi-attribute robust facial feature localization

Oya Celiktutan; Hatice Cinar Akakin; Bülent Sankur

In this paper, we focus on the reliable detection of facial fiducial points, such as eye, eyebrow and mouth corners. The proposed algorithm aims to improve automatic landmarking performance in challenging, realistic face scenarios subject to pose variations, high-valence facial expressions and occlusions. We explore the potential of several feature modalities, namely Gabor wavelets, independent component analysis (ICA), non-negative matrix factorization (NMF) and the discrete cosine transform (DCT), both singly and jointly. We show that selecting the highest-scoring face patch as the corresponding landmark is not always best, and that there is considerable room for improvement by combining several high-scoring candidates and applying a graph-based post-processing method. We present experimental results on the Bosphorus face database, a new and challenging database.
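
The idea of cooperating high-scoring candidates can be pictured as score-level fusion across modalities followed by keeping several candidates per landmark for later structural checking; the sketch below assumes per-modality 2D score maps for one landmark and is only a schematic, not the paper's actual fusion or graph method.

    import numpy as np

    def fuse_candidates(score_maps, top_k=5):
        """score_maps: dict modality name -> 2D score map for one landmark.
        Normalize each map, sum them, and return the top_k candidate pixels."""
        fused = None
        for s in score_maps.values():
            s = (s - s.min()) / (s.max() - s.min() + 1e-8)   # per-modality min-max scaling
            fused = s if fused is None else fused + s
        flat_idx = np.argsort(fused.ravel())[::-1][:top_k]
        rows, cols = np.unravel_index(flat_idx, fused.shape)
        return list(zip(rows, cols)), fused

    # The best candidate per landmark need not be the true location; a graph-based
    # post-processing step would pick the combination of candidates (one per
    # landmark) that best matches the expected inter-landmark geometry.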


Digital Television Conference | 2007

Robust 2D/3D Face Landmarking

Hatice Cinar Akakin; Lale Akarun; Bülent Sankur

Localization of facial feature points is an important step for the registration and normalization of both two- and three-dimensional (2D and 3D) face images. Challenging conditions, especially illumination and pose variations, decrease the accuracy and robustness of facial feature landmarking. We deal with these challenges, on the one hand, by resorting to graph-based methods that incorporate anthropometrical information and, on the other hand, by using 2D and 3D face data jointly. We evaluate the contributions of the graph-based methods and of the joint use of 2D and 3D information to the accuracy of facial feature localization.
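
A toy version of the anthropometrical idea is to score a candidate landmark configuration by how far its inter-landmark distance ratios deviate from reference values; the landmark names and reference ratios below are made-up placeholders, not measurements from the paper.

    import numpy as np

    # Hypothetical reference ratios, expressed relative to the inter-ocular distance.
    REFERENCE_RATIOS = {('left_eye', 'mouth'): 1.1, ('right_eye', 'mouth'): 1.1}

    def anthropometric_cost(landmarks):
        """landmarks: dict name -> (x, y). Lower cost = more plausible configuration."""
        iod = np.linalg.norm(np.subtract(landmarks['left_eye'], landmarks['right_eye']))
        cost = 0.0
        for (a, b), ref in REFERENCE_RATIOS.items():
            d = np.linalg.norm(np.subtract(landmarks[a], landmarks[b]))
            cost += abs(d / iod - ref)
        return cost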


BioID_MultiComm'09: Proceedings of the 2009 Joint COST 2101 and 2102 International Conference on Biometric ID Management and Multimodal Communication | 2009

Analysis of head and facial gestures using facial landmark trajectories

Hatice Cinar Akakin; Bülent Sankur

Automatic analysis of head and facial gestures is a significant and challenging research area for human-computer interfaces. We propose a robust face and head gesture analyzer. The analyzer exploits the trajectories of facial landmark positions over the course of the head gesture or facial expression. The trajectories themselves are obtained as the output of an accurate feature detector and tracker algorithm, which combines appearance- and model-based approaches. A multi-pose deformable shape model is trained in order to handle shape variations under varying head rotations and facial expressions. Discriminative observation symbols extracted from the landmark trajectories drive a continuous HMM with Gaussian mixture outputs, which is used to recognize a subset of head gestures and facial expressions. For seven gesture classes we achieve an 86.4% recognition rate.
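
One way to realize a continuous HMM with Gaussian mixture outputs per gesture class is shown below using the third-party hmmlearn library (an assumption of this sketch, not the authors' implementation); the numbers of states and mixtures are arbitrary.

    import numpy as np
    from hmmlearn.hmm import GMMHMM   # third-party library, assumed here

    def train_gesture_hmms(sequences_by_class, n_states=4, n_mix=2):
        """sequences_by_class: dict label -> list of (T_i, D) feature sequences.
        Fit one continuous HMM with Gaussian-mixture emissions per gesture class."""
        models = {}
        for label, seqs in sequences_by_class.items():
            X = np.vstack(seqs)
            lengths = [len(s) for s in seqs]
            m = GMMHMM(n_components=n_states, n_mix=n_mix,
                       covariance_type='diag', n_iter=50)
            m.fit(X, lengths)
            models[label] = m
        return models

    def classify_gesture(models, seq):
        """Pick the class whose HMM assigns the sequence the highest log-likelihood."""
        return max(models, key=lambda label: models[label].score(seq))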


European Conference on Computer Vision | 2010

Spatiotemporal features for effective facial expression recognition

Hatice Cinar Akakin; Bülent Sankur

We consider two novel representations and feature extraction schemes for the automatic recognition of emotion-related facial expressions. In one scheme, facial landmark points are tracked over successive video frames using an effective detector and tracker to extract landmark trajectories, and features are extracted from the landmark trajectories using Independent Component Analysis (ICA). In the alternative scheme, the evolution of the emotional expression on the face is captured by stacking normalized and aligned faces into a spatiotemporal face cube. Emotion descriptors are then 3D Discrete Cosine Transform (DCT) features extracted from this cube, or combined DCT and ICA features. Several classifier configurations are used, and their performance in detecting the six basic emotions is evaluated. Decision fusion across classifiers improves the recognition performance of the best single classifier by 9 percentage points. The proposed method is evaluated subject-independently on the Cohn-Kanade facial expression database, where a state-of-the-art 95.34% recognition performance is achieved.
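
The spatiotemporal face cube descriptor can be sketched as taking the low-frequency corner of a 3D DCT over the stacked frames; the cube dimensions and the number of retained coefficients below are arbitrary choices, not the paper's configuration.

    import numpy as np
    from scipy.fft import dctn

    def cube_dct_features(face_cube, keep=(4, 8, 8)):
        """face_cube: (T, H, W) stack of normalized, aligned gray faces.
        Return the low-frequency corner of its 3D DCT as the emotion descriptor."""
        coeffs = dctn(face_cube.astype(float), norm='ortho')
        t, h, w = keep
        return coeffs[:t, :h, :w].ravel()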


HBU'10: Proceedings of the First International Conference on Human Behavior Understanding | 2010

Spatiotemporal-boosted DCT features for head and face gesture analysis

Hatice Cinar Akakin; Bülent Sankur

Automatic analysis of head gestures and facial expressions is a challenging research area with significant applications in human-computer interfaces. In this study, facial landmark points are detected and tracked over successive video frames using a robust method based on subspace regularization, Kalman prediction and refinement. The trajectories (time series) of the facial landmark positions over the course of the head gesture or facial expression are organized into a spatiotemporal matrix, and discriminative features are extracted from this trajectory matrix. Alternatively, appearance-based features are extracted from the DCT coefficients of several face patches. Finally, the AdaBoost algorithm is used to learn a set of discriminating spatiotemporal DCT features for face and head gesture (FHG) classification. We report the classification results obtained with Support Vector Machines (SVM) on the features selected by AdaBoost. We achieve 94.04% subject-independent classification performance over seven FHG classes.
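
A plausible reading of the AdaBoost-then-SVM pipeline is to use boosting over decision stumps to rank the spatiotemporal DCT features and train an SVM on the top-ranked subset; the sketch below follows that reading with placeholder hyperparameters and an assumed RBF kernel, so it should not be taken as the authors' exact procedure.

    import numpy as np
    from sklearn.ensemble import AdaBoostClassifier
    from sklearn.svm import SVC

    def boost_then_svm(X_train, y_train, n_selected=100):
        """Rank spatiotemporal DCT features with AdaBoost over decision stumps,
        then train an SVM on the top-ranked subset."""
        booster = AdaBoostClassifier(n_estimators=200)
        booster.fit(X_train, y_train)
        selected = np.argsort(booster.feature_importances_)[::-1][:n_selected]
        svm = SVC(kernel='rbf', C=1.0)
        svm.fit(X_train[:, selected], y_train)
        return svm, selected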


Signal Processing and Communications Applications Conference | 2010

Classification performance of different classifiers on head gestures and facial expressions

Hatice Cinar Akakin; Bülent Sankur

In this study, we analyze head gestures and facial expressions in face video streams. Facial landmark trajectories, that is, the tracked x and y coordinates of the landmarks over time, are extracted via an automatic and robust facial landmark tracking algorithm. Both raw features and features intuitively selected to reflect facial mimics are used; examples of the latter category are mutual distances, angles and ratios of landmarks. The analyzer exploits the trajectories of these facial landmark features over the course of the head or facial gesture. The feature trajectories are handled both via matrix subspace methods (NMF, ICA) and via dynamic classifiers (HMM, HCRF). The classification results on the seven head and face gesture classes are satisfactory.
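
The mimic-related features mentioned above (mutual distances, angles and ratios of landmarks) can be illustrated per frame as follows; which pairs and ratios the study actually uses is not specified here, so the sketch simply enumerates all landmark pairs.

    import numpy as np
    from itertools import combinations

    def geometric_features(landmarks):
        """landmarks: (N, 2) array of tracked (x, y) positions in one frame.
        Build simple mimic-related features: pairwise distances and orientations."""
        feats = []
        for i, j in combinations(range(len(landmarks)), 2):
            d = landmarks[j] - landmarks[i]
            feats.append(np.linalg.norm(d))          # mutual distance
            feats.append(np.arctan2(d[1], d[0]))     # orientation of the pair
        return np.array(feats)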


Signal Processing and Communications Applications Conference | 2007

Automatic and Robust 2D/3D Human Face Landmarking

Hatice Cinar Akakin; Bülent Sankur

Face landmarking is a critical step for the registration and normalization of both two- and three-dimensional (2D and 3D) face images. Illumination and pose variations create challenging conditions for the accuracy and robustness of facial landmarking. These challenges can be managed by utilizing graph-based methods that incorporate anthropometrical information and by using 2D data jointly with 3D data. In this work, we propose two graph techniques to make facial landmarking more robust and evaluate the contributions of various parameters to the accuracy. The proposed algorithms are cross-tested on different datasets to evaluate their performance.
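
Cross-testing on different datasets can be organized as a simple train-on-one, test-on-the-others loop; the function below is a generic harness under assumed (features, labels) dataset tuples, not the evaluation code used in the work.

    import numpy as np

    def cross_dataset_accuracy(train_fn, predict_fn, datasets):
        """Train on each dataset and test on every other one.
        datasets: dict name -> (X, y); train_fn(X, y) -> model; predict_fn(model, X) -> labels."""
        results = {}
        for train_name, (Xtr, ytr) in datasets.items():
            model = train_fn(Xtr, ytr)
            for test_name, (Xte, yte) in datasets.items():
                if test_name == train_name:
                    continue
                acc = np.mean(np.asarray(predict_fn(model, Xte)) == np.asarray(yte))
                results[(train_name, test_name)] = acc
        return results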


Signal Processing and Communications Applications Conference | 2006

DCT Based Facial Feature Extraction

Hatice Cinar Akakin; Bülent Sankur

In this paper we introduce an automatic landmarking method for near-frontal face images based on DCT coefficients. The face information is provided as 480×640 gray-level images with 3D scene depth data, and the range data is used to eliminate the background from the face images. The proposed method uses a coarse-to-fine search: at the coarse level, the images are downsampled to a resolution of 80×60 pixels. At both the coarse and fine levels, SVM classifiers are trained on DCT coefficients extracted from manually landmarked training data. Coarse-level candidate facial points are searched for within the whole face image; once the candidate locations are established, we revert to the higher-resolution image and refine the accuracy using search windows around the coarse landmark locations.
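
The coarse-to-fine search can be sketched as scoring every patch on a downsampled image and then refining around the coarse hit at full resolution; the patch size, downsampling factor and window below are placeholders, and score_fn stands in for a trained classifier on DCT coefficients.

    import numpy as np
    from scipy.ndimage import zoom

    def coarse_to_fine_search(image, score_fn, patch=16, factor=8, window=24):
        """Coarse-to-fine landmark search sketch.
        score_fn(patch_2d) -> classifier score; higher means 'more landmark-like'."""
        # Coarse level: search the whole downsampled image (e.g. 640x480 -> 80x60).
        small = zoom(image.astype(float), 1.0 / factor)
        best, best_rc = -np.inf, (0, 0)
        for r in range(0, small.shape[0] - patch):
            for c in range(0, small.shape[1] - patch):
                s = score_fn(small[r:r + patch, c:c + patch])
                if s > best:
                    best, best_rc = s, (r, c)
        # Fine level: refine inside a window around the coarse hit at full resolution.
        r0, c0 = best_rc[0] * factor, best_rc[1] * factor
        best, best_full = -np.inf, (r0, c0)
        for r in range(max(0, r0 - window), min(image.shape[0] - patch, r0 + window)):
            for c in range(max(0, c0 - window), min(image.shape[1] - patch, c0 + window)):
                s = score_fn(image[r:r + patch, c:c + patch].astype(float))
                if s > best:
                    best, best_full = s, (r, c)
        return best_full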

Collaboration


Dive into Hatice Cinar Akakin's collaboration.

Top Co-Authors

Oya Celiktutan

Queen Mary University of London
