Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Hamdi Dibeklioglu is active.

Publication


Featured research published by Hamdi Dibeklioglu.


Biometrics and Identity Management | 2008

Bosphorus Database for 3D Face Analysis

Arman Savran; Nese Alyuz; Hamdi Dibeklioglu; Oya Celiktutan; Berk Gökberk; Bülent Sankur; Lale Akarun

A new 3D face database that includes a rich set of expressions, systematic variation of poses, and different types of occlusions is presented in this paper. This database is unique in three aspects: i) the facial expressions are composed of a judiciously selected subset of Action Units as well as the six basic emotions, and many actors/actresses are incorporated to obtain more realistic expression data; ii) a rich set of head pose variations is available; and iii) different types of face occlusions are included. Hence, this new database can be a very valuable resource for the development and evaluation of algorithms for face recognition under adverse conditions and facial expression analysis, as well as for facial expression synthesis.


international conference on biometrics theory applications and systems | 2008

3D Facial Landmarking under Expression, Pose, and Occlusion Variations

Hamdi Dibeklioglu; Albert Ali Salah; Lale Akarun

Automatic localization of 3D facial features is important for face recognition, tracking, modeling and expression analysis. Methods developed for 2D images were shown to have problems working across databases acquired with different illumination conditions. Expression variations, pose variations and occlusions also hamper accurate detection of landmarks. In this paper we assess a fully automatic 3D facial landmarking algorithm that relies on accurate statistical modeling of facial features. This algorithm can be employed to model any facial landmark, provided that the facial poses present in the training and test conditions are similar. We test this algorithm on the recently acquired Bosphorus 3D face database, and also inspect cross-database performance by using the FRGC database. Then, a curvature-based method for localizing the nose tip is introduced and shown to perform well under severe conditions.


international conference on computer vision | 2013

Like Father, Like Son: Facial Expression Dynamics for Kinship Verification

Hamdi Dibeklioglu; Albert Ali Salah; Theo Gevers

Kinship verification from facial appearance is a difficult problem. This paper explores the possibility of employing facial expression dynamics for this problem. By using features that describe facial dynamics and spatio-temporal appearance over smile expressions, we show that it is possible to improve the state of the art in this problem, and verify that it is indeed possible to recognize kinship by resemblance of facial expressions. The proposed method is tested on different kin relationships. On average, 72.89% verification accuracy is achieved on spontaneous smiles.
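As a toy illustration of dynamics-based verification, the sketch below thresholds the cosine similarity between two smile-dynamics descriptors. The descriptors, the threshold value, and the decision rule are invented for illustration; they are not the paper's actual features or classifier.

```python
import numpy as np

def verify_kin(feat_a, feat_b, threshold=0.8):
    """Decide kinship by cosine similarity of expression-dynamics
    descriptors. A thresholded-similarity rule stands in here for the
    paper's learned classifier."""
    sim = np.dot(feat_a, feat_b) / (np.linalg.norm(feat_a) * np.linalg.norm(feat_b))
    return bool(sim >= threshold)

# Made-up smile-dynamics descriptors for three people.
father   = np.array([0.9, 0.4, 0.7])
son      = np.array([0.8, 0.5, 0.6])
stranger = np.array([0.1, 0.9, 0.2])

print(verify_kin(father, son))       # True  (similar dynamics)
print(verify_kin(father, stranger))  # False (dissimilar dynamics)
```

In the actual pipeline the descriptors would be extracted from tracked smile videos; only the final comparison step is sketched here.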


Biometrics and Identity Management | 2008

3D Face Recognition Benchmarks on the Bosphorus Database with Focus on Facial Expressions

Nese Alyuz; Berk Gökberk; Hamdi Dibeklioglu; Arman Savran; Albert Ali Salah; Lale Akarun; Bülent Sankur

This paper presents an evaluation of several 3D face recognizers on the Bosphorus database, which was gathered for studies on expression- and pose-invariant face analysis. We provide identification results of three 3D face recognition algorithms, namely a generic face-template-based ICP approach, a one-to-all ICP approach, and a depth image-based Principal Component Analysis (PCA) method. All of these techniques treat faces globally and are usually accepted as baseline approaches. In addition, 2D texture classifiers are also incorporated in a fusion setting. Experimental results reveal that even though global shape classifiers achieve almost perfect identification in neutral-to-neutral comparisons, they are sub-optimal under extreme expression variations. We show that it is possible to boost the identification accuracy by focusing on the rigid facial regions and by fusing complementary information coming from shape and texture modalities.
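The depth image-based PCA baseline can be illustrated with a minimal eigenface-style sketch. The toy gallery, the component count, and the nearest-neighbor matching rule below are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def pca_identify(gallery, probe, n_components=20):
    """Identify a probe depth image against a gallery via PCA projection.

    gallery: (n_subjects, h*w) flattened depth images, one per subject.
    probe:   (h*w,) flattened depth image.
    Returns the index of the closest gallery subject in PCA space.
    """
    mean = gallery.mean(axis=0)
    centered = gallery - mean
    # Eigenfaces via SVD of the centered gallery matrix.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    basis = vt[:n_components]              # principal components (capped by gallery size)
    g_proj = centered @ basis.T            # gallery coefficients
    p_proj = (probe - mean) @ basis.T      # probe coefficients
    # Nearest neighbor in the reduced space.
    return int(np.argmin(np.linalg.norm(g_proj - p_proj, axis=1)))

rng = np.random.default_rng(0)
gallery = rng.normal(size=(5, 64))                # 5 toy "depth images"
probe = gallery[3] + 0.01 * rng.normal(size=64)   # noisy copy of subject 3
print(pca_identify(gallery, probe))               # 3
```

Real depth images would first need registration (e.g. ICP alignment) before this global comparison is meaningful, which is exactly where the expression sensitivity discussed above comes from.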


IEEE Transactions on Image Processing | 2012

A Statistical Method for 2-D Facial Landmarking

Hamdi Dibeklioglu; Albert Ali Salah; Theo Gevers

Many facial-analysis approaches rely on robust and accurate automatic facial landmarking to function correctly. In this paper, we describe a statistical method for automatic facial-landmark localization. Our landmarking relies on a parsimonious mixture model of Gabor wavelet features, computed in coarse-to-fine fashion and complemented with a shape prior. We assess the accuracy and the robustness of the proposed approach in extensive cross-database conditions conducted on four face data sets (Face Recognition Grand Challenge, Cohn-Kanade, Bosphorus, and BioID). Our method has 99.33% accuracy on the Bosphorus database and 97.62% accuracy on the BioID database on average, which improves the state of the art. We show that the method is not significantly affected by low-resolution images, small rotations, facial expressions, and natural occlusions such as beard and mustache. We further test the goodness of the landmarks in a facial expression recognition application and report landmarking-induced improvement over baseline on two separate databases for video-based expression recognition (Cohn-Kanade and BU-4DFE).
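To make the Gabor wavelet features concrete, the following sketch computes a small "jet" of Gabor responses around a candidate landmark. The kernel parameters, patch size, and edge image are illustrative choices; the paper's mixture model and shape prior are omitted entirely.

```python
import numpy as np

def gabor_kernel(size, wavelength, theta, sigma):
    """Real Gabor kernel: a sinusoid modulated by a Gaussian envelope."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)   # rotate coordinates
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * xr / wavelength)
    return envelope * carrier

def gabor_jet(image, cx, cy, size=9, wavelengths=(4, 8), n_orient=4):
    """Gabor responses of the patch around (cx, cy): one magnitude per
    (wavelength, orientation) pair, as a local texture descriptor."""
    half = size // 2
    patch = image[cy - half:cy + half + 1, cx - half:cx + half + 1]
    feats = []
    for lam in wavelengths:
        for k in range(n_orient):
            kern = gabor_kernel(size, lam, np.pi * k / n_orient, sigma=lam / 2)
            feats.append(float(np.abs((patch * kern).sum())))
    return np.array(feats)

img = np.zeros((32, 32))
img[:, 16:] = 1.0                  # vertical edge through the image
jet = gabor_jet(img, cx=16, cy=16)
print(jet.shape)                   # (8,) -- 2 wavelengths x 4 orientations
```

A filter oscillating horizontally (theta = 0) responds strongly to the vertical edge, while the one oscillating vertically barely responds; in the paper, distributions of such responses around each landmark are modeled statistically.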


IEEE Transactions on Multimedia | 2015

Recognition of Genuine Smiles

Hamdi Dibeklioglu; Albert Ali Salah; Theo Gevers

Automatic distinction between genuine (spontaneous) and posed expressions is important for visual analysis of social signals. In this paper, we describe an informative set of features for the analysis of face dynamics, and propose a completely automatic system to distinguish between genuine and posed enjoyment smiles. Our system incorporates facial landmarking and tracking, through which features are extracted to describe the dynamics of eyelid, cheek, and lip corner movements. By fusing features over different regions, as well as over different temporal phases of a smile, we obtain a very accurate smile classifier. We systematically investigate age and gender effects, and establish that age-specific classification significantly improves the results, even when the age is automatically estimated. We evaluate our system on the 400-subject UvA-NEMO database we have recently collected, as well as on three other smile databases from the literature. Through an extensive experimental evaluation, we show that our system improves the state of the art in smile classification and provides useful insights into smile psychophysics.


IEEE Transactions on Image Processing | 2015

Combining Facial Dynamics With Appearance for Age Estimation

Hamdi Dibeklioglu; Fares Alnajar; Albert Ali Salah; Theo Gevers

Estimating the age of a human from the captured images of his/her face is a challenging problem. In general, the existing approaches to this problem use appearance features only. In this paper, we show that in addition to appearance information, facial dynamics can be leveraged in age estimation. We propose a method to extract and use dynamic features for age estimation, using a person's smile. Our approach is tested on a large, gender-balanced database with 400 subjects, with an age range between 8 and 76. In addition, we introduce a new database on posed disgust expressions with 324 subjects in the same age range, and evaluate the reliability of the proposed approach when used with another expression. State-of-the-art appearance-based age estimation methods from the literature are implemented as baselines. We demonstrate that for each of these methods, the addition of the proposed dynamic features results in statistically significant improvement. We further propose a novel hierarchical age estimation architecture based on adaptive age grouping. We test our approach extensively, including an exploration of spontaneous versus posed smile dynamics, and gender-specific age estimation. We show that using spontaneity information reduces the mean absolute error by up to 21%, advancing the state of the art for facial age estimation.


acm multimedia | 2010

Eyes do not lie: spontaneous versus posed smiles

Hamdi Dibeklioglu; Roberto Valenti; Albert Ali Salah; Theo Gevers

Automatic detection of spontaneous versus posed facial expressions has received a lot of attention in recent years. However, almost all published work in this area uses complex facial features or multiple modalities, such as head pose and body movements in addition to facial features. Moreover, the results of these studies are not reported on public databases. In this paper, we focus on eyelid movements to classify spontaneous versus posed smiles, and propose distance-based and angular features for eyelid movements. We assess the reliability of these features with continuous HMM, k-NN, and naive Bayes classifiers on two different public datasets. Experimentation shows that our system provides classification rates of up to 91% for posed smiles and up to 80% for spontaneous smiles by using only eyelid movements. We additionally compare the discrimination power of movement features from different facial regions for the same task.
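A minimal version of distance-based eyelid features fed to a k-NN classifier might look like the sketch below. The three summary features, the synthetic eyelid trajectories, and the labels are invented for illustration; the paper's actual feature set and HMM/naive Bayes variants are not reproduced.

```python
import numpy as np

def eyelid_features(upper, lower):
    """Distance-based eyelid features from per-frame landmark positions.

    upper, lower: (n_frames,) vertical positions of the upper/lower eyelid.
    Returns [mean aperture, min aperture, max closing speed] -- a tiny
    stand-in for the paper's distance-based feature set.
    """
    aperture = lower - upper
    speed = np.diff(aperture)
    return np.array([aperture.mean(), aperture.min(), np.abs(speed).max()])

def knn_predict(train_x, train_y, x, k=3):
    """Plain k-nearest-neighbor vote on Euclidean distance."""
    d = np.linalg.norm(train_x - x, axis=1)
    votes = train_y[np.argsort(d)[:k]]
    return int(np.bincount(votes).argmax())

# Toy data: spontaneous smiles (label 1) narrow the eye more than posed (0).
t = np.linspace(0, 1, 30)
def smile(depth):
    lower = 10 - depth * np.sin(np.pi * t)   # lid closes mid-smile
    return eyelid_features(np.zeros(30) + 2, lower)

train_x = np.array([smile(4.0), smile(4.5), smile(1.0), smile(0.5)])
train_y = np.array([1, 1, 0, 0])
print(knn_predict(train_x, train_y, smile(4.2)))   # 1 (spontaneous)
```

In practice the trajectories come from a landmark tracker, and classifiers are trained per temporal phase of the smile rather than on whole clips.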


international conference on pattern recognition | 2014

Graph-Based Kinship Recognition

Yuanhao Guo; Hamdi Dibeklioglu; Laurens van der Maaten

Image-based kinship recognition is an important problem in the reconstruction and analysis of social networks. Prior studies on image-based kinship recognition have focused solely on pairwise kinship verification, i.e. on the question of whether or not two people are kin. Such approaches fail to exploit the fact that many real-world photographs contain several family members; for instance, the probability of two people being brothers increases when both people are recognized to have the same father. In this work, we propose a graph-based approach that incorporates facial similarities between all family members in a photograph in order to improve the performance of kinship recognition. In addition, we introduce a database of group photographs with kinship annotations.


acm multimedia | 2012

A smile can reveal your age: enabling facial dynamics in age estimation

Hamdi Dibeklioglu; Theo Gevers; Albert Ali Salah; Roberto Valenti

Estimation of a person's age from a facial image has many applications, ranging from biometrics and access control to cosmetics and entertainment. Many image-based methods have been proposed for this problem. In this paper, we propose a method for the use of dynamic features in age estimation, and show that 1) the temporal dynamics of facial features can be used to improve image-based age estimation; 2) considered alone, static image-based features are more accurate than dynamic features. We have collected and annotated an extensive database of face videos from 400 subjects with an age range between 8 and 76, which allows us to extensively analyze the relevant aspects of the problem. The proposed system, which fuses facial appearance and expression dynamics, performs with a mean absolute error of 4.81 (4.87) years. This represents a significant improvement of accuracy in comparison to the sole use of appearance-based features.
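The mean-absolute-error metric used above, and the effect of fusing two estimators, can be sketched as follows. The per-subject estimator outputs are made up, and simple score-level averaging stands in for the paper's feature-level fusion.

```python
import numpy as np

def mae(pred, true):
    """Mean absolute error in years -- the metric reported above."""
    return float(np.mean(np.abs(np.asarray(pred) - np.asarray(true))))

# Hypothetical per-subject age estimates from two sources.
true_age   = np.array([12, 25, 40, 63])
appearance = np.array([15, 22, 47, 58])   # made-up appearance-based outputs
dynamics   = np.array([10, 29, 44, 66])   # made-up dynamics-based outputs
fused = (appearance + dynamics) / 2       # naive score-level fusion

print(mae(appearance, true_age))  # 4.5
print(mae(fused, true_age))       # 1.875
```

When the two estimators make partly independent errors, averaging cancels some of them, which is the intuition behind combining appearance with dynamics.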

Collaboration


Dive into Hamdi Dibeklioglu's collaborations.

Top Co-Authors

Theo Gevers (University of Amsterdam)
Laurens van der Maaten (Delft University of Technology)
David M. J. Tax (Delft University of Technology)
Wenjie Pei (Delft University of Technology)