Network


Latest external collaboration at the country level.

Hotspot


Dive into the research topics where Lale Akarun is active.

Publication


Featured research published by Lale Akarun.


Biometrics and Identity Management | 2008

Bosphorus Database for 3D Face Analysis

Arman Savran; Nese Alyuz; Hamdi Dibeklioglu; Oya Celiktutan; Berk Gökberk; Bülent Sankur; Lale Akarun

A new 3D face database that includes a rich set of expressions, systematic variation of poses and different types of occlusions is presented in this paper. This database is unique in three respects: i) the facial expressions are composed of a judiciously selected subset of Action Units as well as the six basic emotions, and many actors/actresses are incorporated to obtain more realistic expression data; ii) a rich set of head pose variations is available; and iii) different types of face occlusions are included. Hence, this new database can be a very valuable resource for the development and evaluation of algorithms on face recognition under adverse conditions and facial expression analysis, as well as for facial expression synthesis.


International Conference on Computer Vision | 2011

Real time hand pose estimation using depth sensors

Cem Keskin; Furkan Kıraç; Yunus Emre Kara; Lale Akarun

This paper describes a depth image based real-time skeleton fitting algorithm for the hand, using an object recognition by parts approach, and the use of this hand modeler in an American Sign Language (ASL) digit recognition application. In particular, we created a realistic 3D hand model that represents the hand with 21 different parts. Random decision forests (RDF) are trained on synthetic depth images generated by animating the hand model, which are then used to perform per-pixel classification and assign each pixel to a hand part. The classification results are fed into a local mode finding algorithm to estimate the joint locations for the hand skeleton. The system can process depth images retrieved from Kinect in real time at 30 fps. As an application of the system, we also describe a support vector machine (SVM) based recognition module for the ten digits of ASL based on our method, which attains a recognition rate of 99.9% on live depth images in real time.
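
The "local mode finding" step above, which turns per-pixel part classifications into a joint location, can be sketched as a weighted mean-shift over the pixels assigned to one hand part. The Gaussian kernel, bandwidth, and confidence weighting below are generic illustrative choices, not the paper's exact parameters:

```python
import numpy as np

def estimate_joint(pixels, probs, bandwidth=5.0, iters=20):
    """Weighted mean-shift mode finding over 2D pixel coordinates.

    pixels: (N, 2) coordinates of pixels classified as one hand part.
    probs:  (N,) per-pixel classifier confidences, used as weights.
    Returns the density mode, taken as the estimated joint location.
    """
    # start from the confidence-weighted centroid
    x = np.average(pixels, axis=0, weights=probs)
    for _ in range(iters):
        d2 = np.sum((pixels - x) ** 2, axis=1)
        # Gaussian kernel: nearby, confident pixels dominate the shift
        w = probs * np.exp(-d2 / (2 * bandwidth ** 2))
        x_new = np.average(pixels, axis=0, weights=w)
        if np.linalg.norm(x_new - x) < 1e-6:
            break
        x = x_new
    return x
```

With this formulation, a low-confidence outlier pixel far from the main cluster contributes almost nothing, so the estimate settles on the dense cluster of correctly classified pixels.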


European Conference on Computer Vision | 2012

Hand pose estimation and hand shape classification using multi-layered randomized decision forests

Cem Keskin; Furkan Kıraç; Yunus Emre Kara; Lale Akarun

Vision based articulated hand pose estimation and hand shape classification are challenging problems. This paper proposes novel algorithms to perform these tasks using depth sensors. In particular, we introduce a novel randomized decision forest (RDF) based hand shape classifier, and use it in a novel multi-layered RDF framework for articulated hand pose estimation. This classifier assigns the input depth pixels to hand shape classes, and directs them to the corresponding hand pose estimators trained specifically for that hand shape. We introduce two novel types of multi-layered RDFs: Global Expert Network (GEN) and Local Expert Network (LEN), which achieve significantly better hand pose estimates than a single-layered skeleton estimator and generalize better to previously unseen hand poses. The novel hand shape classifier is also shown to be accurate and fast. The methods run in real time on the CPU, and can be ported to the GPU for a further increase in speed.
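
The multi-layered routing idea above, where a first-layer shape classifier directs the input to a shape-specific pose expert, can be sketched as follows. The `shape_of` function and the `experts` dictionary are toy stand-ins for the paper's trained RDFs:

```python
import numpy as np

def route_and_estimate(pixels, shape_of, experts):
    """Two-layer sketch: classify the hand shape first, then delegate
    pose estimation to the expert trained for that shape.
    `shape_of` and `experts` stand in for the paper's RDF stages."""
    shape = shape_of(pixels)
    return shape, experts[shape](pixels)

# Toy stand-ins: classify by pixel spread, "estimate pose" as a summary point.
shape_of = lambda p: "open" if p.std() > 5 else "fist"
experts = {
    "open": lambda p: p.mean(axis=0),     # expert for open-hand shapes
    "fist": lambda p: np.median(p, axis=0),  # expert for fist-like shapes
}
```

The design point is that each expert only ever sees inputs of its own shape class, so it can specialize, which is what lets the layered network beat a single monolithic estimator.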


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2010

The Multiscenario Multienvironment BioSecure Multimodal Database (BMDB)

Javier Ortega-Garcia; Julian Fierrez; Fernando Alonso-Fernandez; Javier Galbally; Manuel Freire; Joaquin Gonzalez-Rodriguez; Carmen García-Mateo; Jose-Luis Alba-Castro; Elisardo González-Agulla; Enrique Otero-Muras; Sonia Garcia-Salicetti; Lorene Allano; Bao Ly-Van; Bernadette Dorizzi; Josef Kittler; Thirimachos Bourlai; Norman Poh; Farzin Deravi; Ming Wah R. Ng; Michael C. Fairhurst; Jean Hennebert; Andrea Monika Humm; Massimo Tistarelli; Linda Brodo; Jonas Richiardi; Andrzej Drygajlo; Harald Ganster; Federico M. Sukno; Sri-Kaushik Pavani; Alejandro F. Frangi

A new multimodal biometric database designed and acquired within the framework of the European BioSecure Network of Excellence is presented. It comprises more than 600 individuals acquired simultaneously in three scenarios: (1) over the Internet, (2) in an office environment with a desktop PC, and (3) in indoor/outdoor environments with mobile portable hardware. The three scenarios include a common part of audio/video data. Also, signature and fingerprint data have been acquired both with desktop PC and mobile portable hardware. Additionally, hand and iris data were acquired in the second scenario using a desktop PC. Acquisition has been conducted by 11 European institutions. Additional features of the BioSecure Multimodal Database (BMDB) are: two acquisition sessions, several sensors in certain modalities, balanced gender and age distributions, multimodal realistic scenarios with simple and quick tasks per modality, cross-European diversity, availability of demographic data, and compatibility with other multimodal databases. The novel acquisition conditions of the BMDB allow us to perform new challenging research and evaluation of either monomodal or multimodal biometric systems, as in the recent BioSecure Multimodal Evaluation campaign. A description of this campaign, including baseline results of individual modalities from the new database, is also given. The database is expected to be available for research purposes through the BioSecure Association during 2008.


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2002

A selective attention-based method for visual pattern recognition with application to handwritten digit recognition and face recognition

Albert Ali Salah; Ethem Alpaydin; Lale Akarun

Parallel pattern recognition requires great computational resources; it is NP-complete. From an engineering point of view, it is desirable to achieve good performance with limited resources. For this purpose, we develop a serial model for visual pattern recognition based on the primate selective attention mechanism. The idea in selective attention is that not all parts of an image give us information. If we can attend only to the relevant parts, we can recognize the image more quickly and with fewer resources. We simulate the primitive, bottom-up attentive level of the human visual system with a saliency scheme, and the more complex, top-down, temporally sequential associative level with observable Markov models. In between, there is a neural network that analyses image parts and generates posterior probabilities as observations for the Markov model. We test our model first on a handwritten numeral recognition problem and then apply it to a more complex face recognition problem. Our results indicate the promise of this approach in complicated vision applications.
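
The bottom-up attentive level above, which picks a sequence of salient fixation points to feed the later stages, can be sketched as greedy selection with inhibition of return. The suppression radius and the greedy scheme are generic illustrative choices; computing the saliency map itself and the Markov-model sequencing are out of scope here:

```python
import numpy as np

def fixations(saliency, k=3, radius=2):
    """Greedy fixation selection with inhibition of return: repeatedly
    attend to the most salient location, then suppress its neighbourhood
    so the next fixation lands somewhere new. A sketch of the bottom-up
    attentive level; `saliency` is a precomputed 2D saliency map."""
    s = saliency.astype(float).copy()
    picks = []
    for _ in range(k):
        i, j = np.unravel_index(np.argmax(s), s.shape)
        picks.append((int(i), int(j)))
        # inhibition of return: blank out the attended neighbourhood
        s[max(0, i - radius):i + radius + 1,
          max(0, j - radius):j + radius + 1] = -np.inf
    return picks
```

Each fixation would then be cropped from the image and passed to the neural network, whose posterior probabilities serve as the observation sequence for the Markov model.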


Pattern Recognition | 2002

A fuzzy algorithm for color quantization of images

Dogan Özdemir; Lale Akarun

In this paper, we review a number of techniques for fuzzy color quantization. We show that the fuzzy membership paradigm is particularly suited to color quantization, where color cluster boundaries are not well defined. We propose a new fuzzy color quantization technique which incorporates a term for partition index. This algorithm produces better results than fuzzy C-means at a reduced computational cost. We test the results of the fuzzy algorithms using quality metrics which model the perception of the human visual system and illustrate that substantial quality improvements are achieved.


International Conference on Pattern Recognition | 2004

3D shape-based face recognition using automatically registered facial surfaces

M.O. Irfanoglu; Berk Gökberk; Lale Akarun

We address the use of three-dimensional facial shape information for human face identification. We propose a new method to represent faces as 3D registered point clouds. Fine registration of facial surfaces is done by first automatically finding important facial landmarks and then establishing a dense correspondence between points on the facial surface with the help of a 3D face template-aided thin plate spline algorithm. After the registration of facial surfaces, similarity between two faces is defined as a discrete approximation of the volume difference between facial surfaces. Experiments on the 3D RMA dataset show that the proposed algorithm performs as well as the point signature method, and it is statistically superior to the point distribution model-based method and the 2D depth imagery technique. In terms of computational complexity, the proposed algorithm is faster than the point signature method.
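
Once the dense correspondence above is established, points with the same index correspond across faces, so the discrete volume-difference similarity can be approximated as a mean point-to-point distance. This is a minimal sketch of that final comparison step, assuming the faces are already registered:

```python
import numpy as np

def volume_difference(face_a, face_b):
    """Discrete approximation of the volume between two registered facial
    surfaces: the mean distance between corresponding 3D points.
    Both inputs are (N, 3) point clouds in dense correspondence."""
    assert face_a.shape == face_b.shape, "faces must be registered point-to-point"
    return float(np.mean(np.linalg.norm(face_a - face_b, axis=1)))
```

A probe face would be compared against every gallery face this way, and identified as the gallery subject with the smallest volume difference.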


Image and Vision Computing | 2006

3D shape-based face representation and feature extraction for face recognition

Berk Gökberk; M. Okan İrfanoğlu; Lale Akarun

In this paper, we review and compare 3D face registration and recognition algorithms which are based solely on 3D shape information, and analyze methods based on the fusion of shape features. We have analyzed two different registration algorithms, which produce a dense correspondence between faces. The first algorithm non-linearly warps faces to obtain registration, while the second algorithm allows only rigid transformations. Registration is handled with the use of an average face model, which significantly speeds up the registration process. As 3D facial features, we compare the use of 3D point coordinates, surface normals, curvature-based descriptors, 2D depth images, and facial profile curves. Except for surface normals, these feature descriptors are frequently used in state-of-the-art 3D face recognizers. We also perform an in-depth analysis of decision-level fusion techniques such as fixed rules, voting schemes, rank-based combination rules, and novel serial fusion architectures. The results of the recognition and authentication experiments conducted on the 3D_RMA database indicate that: (i) in terms of face registration method, registration of faces without warping preserves more discriminatory information; (ii) in terms of 3D facial features, surface normals attain the best recognition performance; and (iii) fusion schemes such as product rules, improved consensus voting and the proposed serial fusion schemes improve the classification accuracy. Experimental results on 3D_RMA confirm these findings, with a 0.1% misclassification rate in recognition experiments and an 8.06% equal error rate in authentication experiments using surface normal-based features. It is also possible to improve the classification accuracy by 2.38% using fixed fusion rules when moderate-level classifiers are used.
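
Among the fusion schemes above, the product rule is simple to sketch: each feature-specific classifier (surface normals, depth images, profile curves, etc.) outputs per-class scores, and the fused decision multiplies them. The assumption that scores are normalized posteriors, and the log-space trick for numerical stability, are illustrative choices here:

```python
import numpy as np

def product_rule_fusion(score_matrices):
    """Decision-level fusion by the product rule.

    score_matrices: list of (n_samples, n_classes) arrays, one per
    feature-specific classifier, assumed normalized per classifier.
    Multiplying posteriors is done in log space to avoid underflow;
    returns the fused class index per sample."""
    logs = sum(np.log(np.clip(s, 1e-12, None)) for s in score_matrices)
    return np.argmax(logs, axis=1)
```

The product rule heavily penalizes any class that even one classifier considers unlikely, which is why it tends to help when the individual feature classifiers make independent errors.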


International Conference on Biometrics: Theory, Applications and Systems | 2008

A 3D Face Recognition System for Expression and Occlusion Invariance

Nese Alyuz; Berk Gökberk; Lale Akarun

Facial expression variations and occlusions complicate the task of identifying persons from their 3D facial scans. We propose a new 3D face registration and recognition method based on local facial regions that provides better accuracy in the presence of expression variations and facial occlusions. The proposed fast and flexible alignment method uses average regional models (ARMs), where local correspondences are inferred by the iterative closest point (ICP) algorithm. Dissimilarity scores obtained from local regional matchers are fused to robustly identify probe subjects. In this work, a multi-expression 3D face database, the Bosphorus 3D face database, which contains a significant number of different expression types and realistic facial occlusions, is used for identification experiments. The experimental results on this challenging database demonstrate that the proposed system improves on the performance of the standard ICP-based holistic approach (71.39%) by obtaining a 95.87% identification rate in the case of expression variations. When facial occlusions are present, the performance gain is even larger: the identification rate improves from 47.05% to 94.12%.


IEEE Network | 2007

Surveillance Wireless Sensor Networks: Deployment Quality Analysis

Ertan Onur; Cem Ersoy; Hakan Deliç; Lale Akarun

Surveillance wireless sensor networks (SWSNs) are deployed at perimeter or border locations to detect unauthorized intrusions. For deterministic deployment of sensors, the quality of deployment can be determined sufficiently by analysis in advance of deployment. However, when random deployment is required, determining the deployment quality becomes challenging. To assess the quality of sensor deployment, appropriate measures can be employed that reveal the weaknesses in the coverage of SWSNs with respect to the success ratio and the time for detecting intruders. In this article, probabilistic sensor models are adopted, and the quality-of-deployment issue is surveyed and analyzed in terms of novel measures. Furthermore, since the presence of obstacles in the surveillance terrain has a negative impact on previously proposed deployment strategies and analysis techniques, we argue in favor of utilizing image segmentation algorithms by modeling the sensing area as a grayscale image, referred to as the iso-sensing graph. Finally, the effect of sensor count on detection ratio and time to detect the target is analyzed through OMNeT++ simulation of an SWSN in a border surveillance scenario.
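
A probabilistic sensor model of the kind mentioned above can be sketched as follows. The exponential decay with distance is a common sensing model assumed here for illustration, not necessarily the article's exact formulation; with independent sensors, a point is missed only if every sensor misses it:

```python
import numpy as np

def detection_probability(point, sensors, alpha=0.1):
    """Probability that a target at `point` is detected by at least one
    sensor. Each sensor detects a target at distance d with probability
    exp(-alpha * d) (assumed exponential sensing model); sensors are
    assumed to detect independently, so
    P(detect) = 1 - prod_i (1 - p_i)."""
    d = np.linalg.norm(sensors - point, axis=1)   # distance to each sensor
    p_each = np.exp(-alpha * d)                   # per-sensor detection prob.
    return float(1.0 - np.prod(1.0 - p_each))
```

Evaluating this quantity over a grid of points yields exactly the kind of grayscale coverage map the article turns into an iso-sensing graph: dark regions are coverage weaknesses where an intruder is likely to slip through.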

Collaboration


Dive into Lale Akarun's collaboration.

Top Co-Authors

Oya Aran

Idiap Research Institute
