Publication


Featured research published by Ali Erol.


Computer Vision and Image Understanding | 2007

Vision-based hand pose estimation: A review

Ali Erol; George Bebis; Mircea Nicolescu; Richard Boyle; Xander Twombly

Direct use of the hand as an input device is an attractive method for providing natural human-computer interaction (HCI). Currently, the only technology that satisfies the advanced requirements of hand-based input for HCI is glove-based sensing. This technology, however, has several drawbacks including that it hinders the ease and naturalness with which the user can interact with the computer-controlled environment, and it requires long calibration and setup procedures. Computer vision (CV) has the potential to provide more natural, non-contact solutions. As a result, there have been considerable research efforts to use the hand as an input device for HCI. In particular, two types of research directions have emerged. One is based on gesture classification and aims to extract high-level abstract information corresponding to motion patterns or postures of the hand. The second is based on pose estimation systems and aims to capture the real 3D motion of the hand. This paper presents a literature review on the latter research direction, which is a very challenging problem in the context of HCI.


Computer Vision and Pattern Recognition | 2005

A Review on Vision-Based Full DOF Hand Motion Estimation

Ali Erol; George Bebis; Mircea Nicolescu; Richard Boyle; Xander Twombly

Direct use of the hand as an input device is an attractive method for providing natural human-computer interaction (HCI). Currently, the only technology that satisfies the advanced requirements of hand-based input for HCI is glove-based sensing. This technology, however, has several drawbacks including that it hinders the ease and naturalness with which the user can interact with the computer-controlled environment, and it requires long calibration and setup procedures. Computer vision has the potential to provide much more natural, non-contact solutions. As a result, there have been considerable research efforts to use the hand as an input device for HCI. A very challenging problem in this context, which is the focus of this review, is recovering the 3D pose of the hand and the fingers as glove-based devices do. This paper presents a brief literature review on full degree-of-freedom (DOF) hand motion estimation methods.


Computer Vision and Pattern Recognition | 2006

Peg-Free Hand Shape Verification Using High Order Zernike Moments

Gholamreza Amayeh; George Bebis; Ali Erol; Mircea Nicolescu

Hand-based verification is a key biometric technology with a wide range of potential applications both in industry and government. The focus of this work is on improving the efficiency, accuracy, and robustness of hand-based verification. In particular, we propose using high-order Zernike moments to represent hand geometry, avoiding the more difficult and error-prone process of hand-landmark extraction (e.g., finding finger joints). The proposed system operates on 2D hand silhouette images acquired by placing the hand on a planar lighting table without any guidance pegs, increasing the ease of use compared to conventional systems. Zernike moments are powerful translation-, rotation-, and scale-invariant shape descriptors. To deal with several practical issues related to the computation of high-order Zernike moments, including computational cost and lack of accuracy due to numerical errors, we have employed an efficient algorithm that uses arbitrary-precision arithmetic and a look-up table, and avoids recomputing the same terms multiple times [2]. The proposed hand-based authentication system has been tested on a database of 40 subjects, illustrating promising results. Qualitative comparisons with state-of-the-art systems illustrate comparable or better performance.
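
For reference, the descriptor named above follows the standard textbook definition of Zernike moments (background material, not the paper's own notation): for an image f defined on the unit disk, the order-n, repetition-m moment and its radial polynomial are

```latex
% Standard Zernike moment definition (background, not quoted from the paper)
\[
  Z_{nm} = \frac{n+1}{\pi}
  \iint_{x^2+y^2 \le 1} f(x,y)\, V_{nm}^{*}(\rho,\theta)\, dx\, dy,
  \qquad V_{nm}(\rho,\theta) = R_{nm}(\rho)\, e^{i m \theta},
\]
\[
  R_{nm}(\rho) = \sum_{s=0}^{(n-|m|)/2}
  \frac{(-1)^{s}\,(n-s)!}
       {s!\,\left(\frac{n+|m|}{2}-s\right)!\,\left(\frac{n-|m|}{2}-s\right)!}\,
  \rho^{\,n-2s}
\]
```

with n - |m| even and |m| <= n. The magnitudes |Z_nm| are invariant to rotation, which is what makes them suitable as peg-free shape descriptors.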


International Symposium on Visual Computing | 2005

Accurate and efficient computation of high order zernike moments

Gholamreza Amayeh; Ali Erol; George Bebis; Mircea Nicolescu

Zernike moments are useful tools in pattern recognition and image analysis due to their orthogonality and rotation-invariance properties. However, direct computation of these moments is very expensive, limiting their use especially at high orders. There have been some efforts to reduce the computational cost by employing quantized polar coordinate systems, which also reduce the accuracy of the moments. In this paper, we propose an efficient algorithm to accurately calculate Zernike moments at high orders. To preserve accuracy, we do not use any form of coordinate transformation and employ arbitrary-precision arithmetic. The computational complexity is reduced by detecting the common terms in Zernike moments with different orders and repetitions. Experimental results show that our method is more accurate than other methods and has comparable computational complexity, especially for large images and high-order moments.
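
A minimal sketch of the exact-arithmetic idea described above (not the authors' implementation; the function names and the sampling loop are illustrative assumptions) computes the radial-polynomial coefficients as exact rationals and caches them so terms shared across orders and repetitions are not recomputed:

```python
# Sketch: exact Zernike radial coefficients via rational arithmetic + caching.
import cmath
import math
from fractions import Fraction
from functools import lru_cache
from math import factorial

@lru_cache(maxsize=None)
def radial_coefficients(n: int, m: int):
    """Return {power: coefficient} for R_nm(rho) = sum_s c_s * rho^(n-2s)."""
    m = abs(m)
    assert (n - m) % 2 == 0 and m <= n, "Zernike indices need n-|m| even, |m|<=n"
    coeffs = {}
    for s in range((n - m) // 2 + 1):
        c = Fraction((-1) ** s * factorial(n - s),
                     factorial(s)
                     * factorial((n + m) // 2 - s)
                     * factorial((n - m) // 2 - s))
        coeffs[n - 2 * s] = c          # exact rational coefficient of rho^(n-2s)
    return coeffs

def zernike_moment(pixels, n, m):
    """Approximate Z_nm from (x, y, value) samples already normalized so the
    object lies inside the unit disk (pixel-area normalization omitted)."""
    coeffs = radial_coefficients(n, m)
    acc = 0j
    for x, y, v in pixels:
        rho = math.hypot(x, y)
        if rho > 1.0:
            continue
        theta = math.atan2(y, x)
        r = sum(float(c) * rho ** p for p, c in coeffs.items())
        acc += v * r * cmath.exp(-1j * m * theta)
    return (n + 1) / math.pi * acc
```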


Computer Vision and Image Understanding | 2009

Hand-based verification and identification using palm-finger segmentation and fusion

Gholamreza Amayeh; George Bebis; Ali Erol; Mircea Nicolescu

Hand-based verification and identification represent a key biometric technology with a wide range of potential applications both in industry and government. Traditionally, hand-based verification and identification systems exploit information from the whole hand for authentication or recognition purposes. To account for hand and finger motion, guidance pegs are used to fix the position and orientation of the hand. In this paper, we propose a component-based approach to hand-based verification and identification which improves accuracy and robustness as well as ease of use by avoiding pegs. Our approach accounts for hand and finger motion by decomposing the hand silhouette into different regions corresponding to the back of the palm and the fingers. To improve accuracy and robustness, verification/recognition is performed by fusing information from different parts of the hand. The proposed approach operates on 2D images acquired by placing the hand on a flat lighting table and does not require using guidance pegs or extracting any landmark points on the hand. To decompose the silhouette of the hand into different regions, we have devised a robust methodology based on an iterative morphological filtering scheme. To capture the geometry of the back of the palm and the fingers, we employ region descriptors based on high-order Zernike moments which are computed using an efficient methodology. The proposed approach has been evaluated both for verification and recognition purposes on a database of 101 subjects with 10 images per subject, illustrating high accuracy and robustness. Comparisons with related approaches involving the use of the whole hand or different parts of the hand illustrate the superiority of the proposed approach. Qualitative and quantitative comparisons with state-of-the-art approaches indicate that the proposed approach has comparable or better accuracy.
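
A minimal sketch of the kind of morphological palm-finger decomposition described above, assuming SciPy and a binary silhouette as input; the disk radius and the minimum finger area are hypothetical parameters, and the paper's iterative filtering scheme is more elaborate than this single opening:

```python
# Sketch: split a binary hand silhouette into palm and finger regions.
import numpy as np
from scipy import ndimage

def disk(radius: int) -> np.ndarray:
    """Circular structuring element of the given radius."""
    y, x = np.ogrid[-radius:radius + 1, -radius:radius + 1]
    return (x * x + y * y) <= radius * radius

def decompose_hand(silhouette: np.ndarray, palm_radius: int = 40):
    """silhouette: 2D boolean array, True on the hand.

    Returns (palm_mask, list_of_finger_masks). `palm_radius` is an assumed
    parameter that must exceed the finger width in pixels."""
    # Opening with a disk wider than any finger erases the fingers,
    # leaving (approximately) the palm.
    palm = ndimage.binary_opening(silhouette, structure=disk(palm_radius))
    fingers = silhouette & ~palm
    labels, count = ndimage.label(fingers)
    finger_masks = []
    for i in range(1, count + 1):
        mask = labels == i
        if mask.sum() > 200:          # assumed minimum finger area in pixels
            finger_masks.append(mask)
    return palm, finger_masks
```

Each returned region would then be described by its Zernike-moment magnitudes, as in the sketch above.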


Workshop on Applications of Computer Vision | 2005

Visual Hull Construction Using Adaptive Sampling

Ali Erol; George Bebis; Richard Boyle; Mircea Nicolescu

Volumetric visual hulls have become very popular in many computer vision applications including human body pose estimation and virtualized reality. In these applications, the visual hull is used to approximate the 3D geometry of an object. Existing volumetric visual hull construction techniques, however, produce a three-color volume that merely serves as a bounding volume; in other words, it lacks an accurate surface representation, and polygonization can produce satisfactory results only at high resolutions. In this study we extend the binary visual hull to an implicit surface in order to capture the geometry of the visual hull itself. In particular, we introduce an octree-based, visual-hull-specific adaptive sampling algorithm to obtain a volumetric representation that provides accuracy proportional to the level of detail. Moreover, we propose a method to process the resulting octree to extract a crack-free polygonal visual hull surface. Experimental results illustrate the performance of the algorithm.
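
A minimal sketch of octree-based adaptive sampling of a visual hull, assuming an `inside` predicate that projects a 3D point into every calibrated view and tests the silhouettes; the data layout and subdivision rule are illustrative assumptions, and corner sampling alone can miss thin features that the paper's implicit-surface treatment is designed to handle:

```python
# Sketch: subdivide only octree cells that straddle the visual hull surface.
from dataclasses import dataclass, field
from itertools import product
from typing import Callable, List

# Predicate: True if the 3D point projects inside every silhouette.
InsideTest = Callable[[float, float, float], bool]

@dataclass
class OctreeCell:
    origin: tuple                       # (x, y, z) of the minimum corner
    size: float
    children: List["OctreeCell"] = field(default_factory=list)
    occupancy: str = "mixed"            # "inside", "outside", or "mixed"

def build_octree(cell: OctreeCell, inside: InsideTest, max_depth: int) -> OctreeCell:
    x0, y0, z0 = cell.origin
    s = cell.size
    corners = [inside(x0 + dx * s, y0 + dy * s, z0 + dz * s)
               for dx, dy, dz in product((0, 1), repeat=3)]
    if all(corners):
        cell.occupancy = "inside"       # uniform cell: keep coarse
    elif not any(corners):
        cell.occupancy = "outside"
    elif max_depth > 0:
        # Mixed corners: the surface passes through this cell, so refine it.
        half = s / 2.0
        for dx, dy, dz in product((0, 1), repeat=3):
            child = OctreeCell((x0 + dx * half, y0 + dy * half, z0 + dz * half), half)
            cell.children.append(build_octree(child, inside, max_depth - 1))
    return cell
```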


Computer Vision and Pattern Recognition | 2007

A Component-Based Approach to Hand Verification

Gholamreza Amayeh; George Bebis; Ali Erol; Mircea Nicolescu

This paper describes a novel hand-based verification system based on palm-finger segmentation and fusion. The proposed system operates on 2D hand images acquired by placing the hand on a planar lighting table without any guidance pegs. The segmentation of the palm and the fingers is performed without requiring the extraction of any landmark points on the hand. First, the hand is segmented from the forearm using a robust, iterative methodology based on morphological operators. Then, the hand is segmented into six regions corresponding to the palm and the fingers, again using morphological operators. The geometry of each component of the hand is represented using high-order Zernike moments which are computed using an efficient methodology. Finally, verification is performed by fusing information from different parts of the hand. The proposed system has been evaluated on a database of 101 subjects, illustrating high accuracy and robustness. Comparisons with competitive approaches that use the whole hand illustrate the superiority of the proposed component-based approach both in terms of accuracy and robustness. Qualitative comparisons with state-of-the-art systems illustrate that the proposed system has comparable or better performance.
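
A minimal sketch of score-level fusion across hand components, assuming one Zernike-magnitude descriptor per part; the distance metric, weights, and threshold are hypothetical and would be tuned on a validation set rather than taken from the paper:

```python
# Sketch: fuse per-component distances into a single verification score.
import numpy as np

def component_distance(query_desc: np.ndarray, enrolled_desc: np.ndarray) -> float:
    """Euclidean distance between |Z_nm| feature vectors of one hand part."""
    return float(np.linalg.norm(query_desc - enrolled_desc))

def fused_score(query: dict, enrolled: dict, weights: dict) -> float:
    """query/enrolled map part names ('palm', 'index', ...) to descriptor
    vectors; weights holds per-part weights summing to 1."""
    return sum(weights[part] * component_distance(query[part], enrolled[part])
               for part in weights)

def verify(query: dict, enrolled: dict, weights: dict, threshold: float = 0.35) -> bool:
    # `threshold` is an assumed operating point, not a value from the paper.
    return fused_score(query, enrolled, weights) < threshold
```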


Computational Intelligence | 2007

RECOGNIZING SIMPLE HUMAN ACTIONS USING 3D HEAD MOVEMENT

Jorge Usabiaga; George Bebis; Ali Erol; Mircea Nicolescu; Monica N. Nicolescu

Recognizing human actions from video has been a challenging problem in computer vision. Although human actions can be inferred from a wide range of data, it has been demonstrated that simple human actions can be inferred by tracking the movement of the head in 2D. This is a promising idea, as detecting and tracking the head is expected to be simpler and faster because the head has lower shape variability and higher visibility than other body parts (e.g., hands and/or feet). Although tracking the movement of the head alone does not provide sufficient information for distinguishing among complex human actions, it could serve as a complementary component of a more sophisticated action recognition system. In this article, we extend this idea by developing a more general, viewpoint-invariant action recognition system by detecting and tracking the 3D position of the head using multiple cameras. The proposed approach employs Principal Component Analysis (PCA) to register the 3D trajectories in a common coordinate system and Dynamic Time Warping (DTW) to align them in time for matching. We present experimental results to demonstrate the potential of using 3D head trajectory information to distinguish among simple but common human actions independently of viewpoint.
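
A minimal sketch of the PCA-registration plus DTW-matching idea, assuming (T, 3) arrays of head positions; the axis-sign ambiguity after PCA and any trajectory smoothing are omitted for brevity and are not details taken from the paper:

```python
# Sketch: register a 3D head trajectory with PCA, compare two with DTW.
import numpy as np

def pca_register(traj: np.ndarray) -> np.ndarray:
    """Center the (T, 3) trajectory and rotate it into its principal-axis
    frame so trajectories seen from different viewpoints become comparable."""
    centered = traj - traj.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt.T

def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Classic O(len(a) * len(b)) DTW with Euclidean local cost."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return float(D[n, m])

def action_distance(traj_a: np.ndarray, traj_b: np.ndarray) -> float:
    """Distance between two head trajectories after spatial registration."""
    return dtw_distance(pca_register(traj_a), pca_register(traj_b))
```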


International Conference on Biometrics: Theory, Applications and Systems | 2007

Minutiae-Based Template Synthesis and Matching Using Hierarchical Delaunay Triangulations

Tamer Uz; George Bebis; Ali Erol; Salil Prabhakar

Fingerprint-based authentication is a key biometric technology with a wide range of potential applications both in industry and government. However, the presence of intrinsically low-quality fingerprints and various distortions introduced during the acquisition process pose challenges in the development of robust and reliable feature extraction and matching algorithms. Our focus in this study is on improving minutiae-based fingerprint matching by effectively combining minutiae information from multiple impressions of the same finger. Specifically, we present a new minutiae template-merging approach based on hierarchical Delaunay triangulations. The key idea is synthesizing a super-template from multiple enrollment templates to increase coverage area, restore missing features, and suppress spurious minutiae. Each minutia in the super-template is assigned a weight representing its frequency of occurrence, which serves as a minutiae quality measure. During the merging stage, we employ a hierarchical, weight-based scheme to search for a valid alignment between a given template and the super-template. The same algorithm, with minor modifications, can be used to compare a query template with a super-template. We have performed extensive experiments and comparisons with competing approaches to demonstrate the proposed approach using a challenging public database (FVC2000 DB1).
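
A minimal sketch of the Delaunay-triangulation building block, assuming SciPy; it shows only a flat triangulation with side-length signatures for seeding alignment hypotheses, not the hierarchical, weight-based merging scheme the paper describes, and the matching tolerance is a hypothetical parameter:

```python
# Sketch: Delaunay triangulation of minutiae and invariant triangle signatures.
import numpy as np
from scipy.spatial import Delaunay

def triangle_signatures(minutiae_xy: np.ndarray):
    """minutiae_xy: (N, 2) array of minutiae coordinates.

    Returns a list of (vertex_indices, sorted_side_lengths); sorted side
    lengths are invariant to translation and rotation."""
    tri = Delaunay(minutiae_xy)
    signatures = []
    for simplex in tri.simplices:          # each simplex holds 3 vertex indices
        p = minutiae_xy[simplex]
        sides = sorted([np.linalg.norm(p[0] - p[1]),
                        np.linalg.norm(p[1] - p[2]),
                        np.linalg.norm(p[2] - p[0])])
        signatures.append((tuple(simplex), tuple(sides)))
    return signatures

def candidate_matches(sig_a, sig_b, tol: float = 4.0):
    """Pair triangles whose side lengths agree within `tol` pixels (an assumed
    tolerance); each pair yields a candidate rigid alignment between templates."""
    matches = []
    for idx_a, sides_a in sig_a:
        for idx_b, sides_b in sig_b:
            if all(abs(x - y) <= tol for x, y in zip(sides_a, sides_b)):
                matches.append((idx_a, idx_b))
    return matches
```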


Proceedings of SPIE, the International Society for Optical Engineering | 2007

A new approach to hand-based authentication

Gholamreza Amayeh; George Bebis; Ali Erol; Mircea Nicolescu

Hand-based authentication is a key biometric technology with a wide range of potential applications both in industry and government. Traditionally, hand-based authentication is performed by extracting information from the whole hand. To account for hand and finger motion, guidance pegs are employed to fix the position and orientation of the hand. In this paper, we consider a component-based approach to hand-based verification. Our objective is to investigate the discrimination power of different parts of the hand in order to develop a simpler, faster, and possibly more accurate and robust verification system. Specifically, we propose a new approach which decomposes the hand into different regions, corresponding to the fingers and the back of the palm, and performs verification using information from certain parts of the hand only. Our approach operates on 2D images acquired by placing the hand on a flat lighting table. Using a part-based representation of the hand allows the system to compensate for hand and finger motion without using any guidance pegs. To decompose the hand into different regions, we use a robust methodology based on morphological operators which does not require detecting any landmark points on the hand. To capture the geometry of the back of the palm and the fingers in sufficient detail, we employ high-order Zernike moments which are computed using an efficient methodology. The proposed approach has been evaluated on a database of 100 subjects with 10 images per subject, illustrating promising performance. Comparisons with related approaches using the whole hand for verification illustrate the superiority of the proposed approach. Moreover, qualitative comparisons with state-of-the-art approaches indicate that the proposed approach has comparable or better performance.

Collaboration


Dive into Ali Erol's collaboration.

Top Co-Authors

Tamer Uz (Nevada System of Higher Education)