Georgios Passalis
National and Kapodistrian University of Athens
Publications
Featured research published by Georgios Passalis.
IEEE Transactions on Pattern Analysis and Machine Intelligence | 2007
Ioannis A. Kakadiaris; Georgios Passalis; George Toderici; Mohammed N. Murtuza; Yunliang Lu; Nikolaos Karampatziakis; Theoharis Theoharis
In this paper, we present the computational tools and a hardware prototype for 3D face recognition. Full automation is provided through the use of advanced multistage alignment algorithms, resilience to facial expressions by employing a deformable model framework, and invariance to 3D capture devices through suitable preprocessing steps. In addition, scalability in both time and space is achieved by converting 3D facial scans into compact metadata. We present our results on the largest known, and now publicly available, Face Recognition Grand Challenge 3D facial database, consisting of several thousand scans. To the best of our knowledge, this is the highest performance reported on the FRGC v2 database for the 3D modality.
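As a rough illustration of the "compact metadata" idea, the sketch below reduces a geometry image (the fitted model's x, y, z coordinates resampled on a regular grid) to a low-frequency wavelet band. The Haar transform, image size, and number of decomposition levels are illustrative assumptions, not the paper's actual pipeline.

```python
import numpy as np

def haar2d(channel):
    """One level of a 2D Haar decomposition (illustrative, not the paper's transform)."""
    a = (channel[0::2, :] + channel[1::2, :]) / 2.0   # row averages
    d = (channel[0::2, :] - channel[1::2, :]) / 2.0   # row details (discarded below)
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0              # low-low band
    return ll, d

def compact_signature(geometry_image, levels=3):
    """Keep only the coarse low-frequency band of each coordinate channel."""
    bands = []
    for c in range(geometry_image.shape[2]):
        band = geometry_image[:, :, c]
        for _ in range(levels):
            band, _ = haar2d(band)
        bands.append(band.ravel())
    return np.concatenate(bands)   # compact metadata suitable for fast matching

# hypothetical 256x256x3 geometry image (x, y, z of the fitted model)
gi = np.random.rand(256, 256, 3).astype(np.float32)
sig = compact_signature(gi)
print(sig.shape)   # (3 * 32 * 32,) after three decimation levels
```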
IEEE Transactions on Pattern Analysis and Machine Intelligence | 2011
Georgios Passalis; Panagiotis Perakis; Theoharis Theoharis; Ioannis A. Kakadiaris
The uncontrolled conditions of real-world biometric applications pose a great challenge to any face recognition approach. The unconstrained acquisition of data from uncooperative subjects may result in facial scans with significant pose variations along the yaw axis. Such pose variations can cause extensive occlusions, resulting in missing data. In this paper, a novel 3D face recognition method is proposed that uses facial symmetry to handle pose variations. It employs an automatic landmark detector that estimates pose and detects occluded areas for each facial scan. Subsequently, an Annotated Face Model is registered and fitted to the scan. During fitting, facial symmetry is used to overcome the challenges of missing data. The result is a pose invariant geometry image. Unlike existing methods that require frontal scans, the proposed method performs comparisons among interpose scans using a wavelet-based biometric signature. It is suitable for real-world applications as it only requires half of the face to be visible to the sensor. The proposed method was evaluated using databases from the University of Notre Dame and the University of Houston that, to the best of our knowledge, include the most challenging pose variations publicly available. The average rank-one recognition rate of the proposed method in these databases was 83.7 percent.
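The sketch below illustrates only the symmetry idea in isolation: assuming the scan has already been aligned so that the facial midplane is x = 0, missing vertices are filled from mirrored visible ones. The actual method exploits symmetry inside the Annotated Face Model fitting, not on raw point clouds as shown here; the function name and tolerance are hypothetical.

```python
import numpy as np

def complete_with_symmetry(points, visible_mask, tol=2.0):
    """
    Fill occluded regions of an aligned facial point cloud by mirroring
    visible points across the sagittal plane x = 0 (illustrative sketch).
    points       : (N, 3) array of vertex positions
    visible_mask : (N,) boolean array, False where the sensor saw no data
    """
    mirrored = points[visible_mask].copy()
    mirrored[:, 0] *= -1.0                      # reflect across x = 0
    completed = points.copy()
    for i in np.where(~visible_mask)[0]:
        # replace each missing vertex by its nearest mirrored counterpart
        d = np.linalg.norm(mirrored - points[i], axis=1)
        j = int(np.argmin(d))
        if d[j] < tol:
            completed[i] = mirrored[j]
    return completed

# hypothetical half-occluded scan: right side (x > 0) missing
pts = np.random.rand(1000, 3) - 0.5
vis = pts[:, 0] <= 0.0
filled = complete_with_symmetry(pts, vis, tol=0.1)
```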
Computer Vision and Pattern Recognition | 2005
Georgios Passalis; Ioannis A. Kakadiaris; Theoharis Theoharis; George Toderici; N. Murtuza
From a user's perspective, face recognition is one of the most desirable biometrics due to its non-intrusive nature; however, variables such as facial expression tend to severely affect recognition rates. We apply our previous work on elastically adaptive deformable models to this problem, obtaining parametric representations of the geometry of selected localized face areas using an annotated face model. We then use wavelet analysis to extract a compact biometric signature, allowing us to perform rapid comparisons on either a global or a per-area basis. To evaluate the performance of our algorithm, we conducted experiments using data from the Face Recognition Grand Challenge data corpus, the largest and most established data corpus for face recognition currently available. Our results indicate that our algorithm exhibits high levels of accuracy and robustness, is not gender-biased, and is minimally affected by facial expressions.
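A minimal sketch of how per-area signature comparison could look, assuming each annotated area already has a 1D array of wavelet coefficients. The area names, weights, and L1 distance are illustrative assumptions, not the paper's exact matcher.

```python
import numpy as np

def compare_signatures(probe, gallery, weights=None):
    """
    Compare two per-area biometric signatures (illustrative sketch).
    probe, gallery : dict mapping annotated area name -> 1D coefficient array
    weights        : optional per-area weights (e.g. to downweight
                     expression-sensitive areas such as the mouth)
    Returns a single dissimilarity score (lower means more similar).
    """
    score = 0.0
    for area, p in probe.items():
        w = 1.0 if weights is None else weights.get(area, 1.0)
        score += w * np.abs(p - gallery[area]).sum()   # L1 distance per area
    return score

# hypothetical signatures with three annotated areas
probe   = {a: np.random.rand(128) for a in ("eyes", "nose", "mouth")}
gallery = {a: np.random.rand(128) for a in ("eyes", "nose", "mouth")}
print(compare_signatures(probe, gallery, weights={"mouth": 0.5}))
```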
Eurographics | 2008
Panagiotis Papadakis; Ioannis Pratikakis; Theoharis Theoharis; Georgios Passalis; Stavros J. Perantonis
We present a novel 3D object retrieval method that relies upon a hybrid descriptor composed of 2D features based on depth buffers and 3D features based on spherical harmonics. To compensate for rotation, two alignment methods, namely CPCA and NPCA, are used, while compactness is supported via scalar feature quantization to a set of values that is further compressed using Huffman coding. The superior performance of the proposed retrieval methodology is demonstrated through an extensive comparison against state-of-the-art methods on standard datasets.
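The compactness step (scalar quantization followed by Huffman coding) can be sketched as below. The number of quantization levels and the descriptor values are hypothetical, and this is a generic Huffman coder rather than the authors' implementation.

```python
import heapq, itertools
from collections import Counter
import numpy as np

def quantize(features, levels=16):
    """Uniformly quantize feature values in [0, 1] to a small set of integer symbols."""
    return np.clip((features * levels).astype(int), 0, levels - 1)

def huffman_table(symbols):
    """Build a Huffman code (symbol -> bit string) from observed symbol counts."""
    counts = Counter(symbols)
    tie = itertools.count()                       # tie-breaker for equal counts
    heap = [[f, next(tie), {s: ""}] for s, f in counts.items()]
    heapq.heapify(heap)
    if len(heap) == 1:                            # degenerate single-symbol case
        return {s: "0" for s in heap[0][2]}
    while len(heap) > 1:
        f1, _, t1 = heapq.heappop(heap)
        f2, _, t2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in t1.items()}
        merged.update({s: "1" + c for s, c in t2.items()})
        heapq.heappush(heap, [f1 + f2, next(tie), merged])
    return heap[0][2]

# hypothetical hybrid descriptor values, already normalized to [0, 1]
descriptor = np.random.rand(2000)
symbols = quantize(descriptor).tolist()
table = huffman_table(symbols)
bits = "".join(table[s] for s in symbols)
print(len(bits) / 8, "bytes instead of", descriptor.nbytes)
```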
IEEE Transactions on Pattern Analysis and Machine Intelligence | 2013
Panagiotis Perakis; Georgios Passalis; Theoharis Theoharis; Ioannis A. Kakadiaris
A 3D landmark detection method for 3D facial scans is presented and thoroughly evaluated. The main contribution of the presented method is the automatic and pose-invariant detection of landmarks on 3D facial scans under large yaw variations (which often result in missing facial data), and its robustness against large facial expressions. Three-dimensional information is exploited by using 3D local shape descriptors to extract candidate landmark points. The shape descriptors include the shape index, a continuous map of principal curvature values over a 3D object's surface, and spin images, local descriptors of the object's 3D point distribution. The candidate landmarks are identified and labeled by matching them against a Facial Landmark Model (FLM) of facial anatomical landmarks. The presented method is extensively evaluated against a variety of 3D facial databases and achieves state-of-the-art accuracy (4.5-6.3 mm mean landmark localization error), considerably outperforming previous methods, even when tested with the most challenging data.
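The shape index mentioned above is a function of the two principal curvatures. The sketch below uses one common convention that maps the index to [0, 1] (pit near 0, saddle near 0.5, cap near 1); the exact sign and range conventions used in the paper may differ.

```python
import numpy as np

def shape_index(k1, k2):
    """
    Shape index from principal curvatures k1 >= k2, mapped to [0, 1]
    (one common convention; not necessarily the paper's exact formulation).
    ~0: cup/pit, ~0.5: saddle, ~1: cap/peak.
    """
    k1 = np.asarray(k1, dtype=float)
    k2 = np.asarray(k2, dtype=float)
    # arctan2 handles the umbilic case k1 == k2 without division by zero
    return 0.5 + (1.0 / np.pi) * np.arctan2(k1 + k2, k1 - k2)

# convex cap-like region (both curvatures positive), e.g. a nose-tip candidate
print(shape_index(0.9, 0.4))   # close to 1
```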
IEEE Transactions on Pattern Analysis and Machine Intelligence | 2007
Georgios Passalis; Ioannis A. Kakadiaris; Theoharis Theoharis
As the size of available collections of 3D objects grows, database transactions become essential for their management, with the key operation being retrieval (query). Large collections are also precategorized into classes so that a single class contains objects of the same type (e.g., human faces, cars, four-legged animals). It is shown that general object retrieval methods are inadequate for intraclass retrieval tasks. We advocate that such intraclass problems require a specialized method that can exploit the basic class characteristics in order to achieve higher accuracy. A novel 3D object retrieval method is presented which uses a parameterized annotated model of the shape of the class objects, incorporating its main characteristics. The annotated subdivision-based model is fitted onto objects of the class using a deformable model framework, converted to a geometry image, and transformed into the wavelet domain. Object retrieval takes place in the wavelet domain. The method does not require user interaction, achieves high accuracy, is efficient for use with large databases, and is suitable for nonrigid object classes. We apply our method to the face recognition domain, one of the most challenging intraclass retrieval tasks. Using the Face Recognition Grand Challenge v2 database, we obtain an average verification rate of 95.2 percent at a 10^-3 false accept rate. The latest results of our work can be found at http://www.cbl.uh.edu/UR8D/
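The reported operating point (verification rate at a fixed false accept rate) can be computed from genuine and impostor score distributions roughly as sketched below. The score distributions in the example are synthetic and the thresholding detail is an assumption, not the authors' evaluation code.

```python
import numpy as np

def verification_rate_at_far(genuine_scores, impostor_scores, far=1e-3):
    """
    Verification rate at a fixed false accept rate, given dissimilarity
    scores (lower = more similar). Illustrative sketch of how such an
    operating point can be evaluated.
    """
    impostor_scores = np.sort(np.asarray(impostor_scores, dtype=float))
    # largest threshold that accepts at most `far` of the impostor comparisons
    k = max(int(np.floor(far * len(impostor_scores))) - 1, 0)
    threshold = impostor_scores[k]
    return float(np.mean(np.asarray(genuine_scores) <= threshold))

# hypothetical score distributions
rng = np.random.default_rng(0)
genuine  = rng.normal(0.3, 0.1, 5000)
impostor = rng.normal(0.7, 0.1, 500000)
print(verification_rate_at_far(genuine, impostor, far=1e-3))
```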
Pattern Recognition | 2008
Theoharis Theoharis; Georgios Passalis; George Toderici; Ioannis A. Kakadiaris
As the accuracy of biometrics improves, it is getting increasingly hard to push the limits using a single modality. In this paper, a unified approach that fuses three-dimensional facial and ear data is presented. An annotated deformable model is fitted to the data and a geometry image is extracted. Wavelet coefficients are computed from the geometry image and used as a biometric signature. The method is evaluated using the largest publicly available database and achieves 99.7% rank-one recognition rate. The state-of-the-art accuracy of the multimodal fusion is attributed to the low correlation between the individual differentiability of the two modalities.
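The paper fuses face and ear data through a combined wavelet signature. As a generic illustration of multimodal fusion only, the sketch below instead combines per-modality dissimilarity scores with min-max normalization and a weighted sum, a common alternative technique rather than the method described here; weights and gallery size are hypothetical.

```python
import numpy as np

def fuse_scores(face_scores, ear_scores, w_face=0.5):
    """
    Simple score-level fusion of two modalities (illustrative only).
    Scores are dissimilarities against a gallery; lower is better.
    """
    def minmax(s):
        s = np.asarray(s, dtype=float)
        return (s - s.min()) / (s.max() - s.min() + 1e-12)
    return w_face * minmax(face_scores) + (1.0 - w_face) * minmax(ear_scores)

# hypothetical distances of one probe against a 100-subject gallery
face = np.random.rand(100)
ear  = np.random.rand(100)
fused = fuse_scores(face, ear)
print("identified gallery subject:", int(np.argmin(fused)))
```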
Computer Vision and Pattern Recognition | 2005
Ioannis A. Kakadiaris; Georgios Passalis; Theoharis Theoharis; George Toderici; Ioannis Konstantinidis; N. Murtuza
It is becoming increasingly important to be able to credential and identify authorized personnel at key points of entry. Such identity management systems commonly employ biometric identifiers. In this paper, we present a novel multimodal facial recognition approach that employs data from both visible spectrum and thermal infrared sensors. Data from multiple cameras is used to construct a three-dimensional mesh representing the face and a facial thermal texture map. An annotated face model with explicit two-dimensional parameterization (UV) is then fitted to this data to construct: 1) a three-channel UV deformation image encoding geometry, and 2) a one-channel UV vasculature image encoding facial vasculature. Recognition is accomplished by comparing: 1) the parametric deformation images, 2) the parametric vasculature images, and 3) the visible spectrum texture maps. The novelty of our work lies in the use of deformation images and physiological information as means for comparison. We have performed extensive tests on the Face Recognition Grand Challenge v1.0 dataset and on our own multimodal database with very encouraging results.
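Recognition here compares parametric UV images that share a common layout. As a simple illustration of such an image-to-image comparison, the sketch below uses normalized correlation between two one-channel UV images; the metric and image size are assumptions, not necessarily what the paper uses.

```python
import numpy as np

def normalized_correlation(img_a, img_b):
    """
    Similarity between two parametric UV images of the same layout
    (e.g. vasculature images); 1.0 means identical up to a linear
    intensity change. Illustrative metric only.
    """
    a = img_a.astype(float).ravel()
    b = img_b.astype(float).ravel()
    a -= a.mean()
    b -= b.mean()
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

# hypothetical one-channel 128x128 UV vasculature images
probe   = np.random.rand(128, 128)
gallery = probe + 0.05 * np.random.rand(128, 128)   # slightly perturbed copy
print(normalized_correlation(probe, gallery))        # close to 1
```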
Applied Optics | 2007
Georgios Passalis; Nikos Sgouros; S. Athineos; Theoharis Theoharis
A method for the reconstruction of 3D shape and texture from integral photography (IP) images is presented. Sharing the same principles with stereoscopic-based object reconstruction, it offers increased robustness to noise and occlusions due to the unique characteristics of IP images. A coarse-to-fine approach is used, employing what we believe to be a novel grid refinement step in order to increase the quality of the reconstructed objects. The proposed method's properties include configurable depth accuracy and direct and seamless triangulation. We evaluate our method using synthetic data from a computer-simulated IP setup as well as real data from a simple yet effective digital IP setup. Experiments show reconstructed objects of high quality, indicating that IP can be a competitive modality for 3D object reconstruction.
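The coarse-to-fine refinement idea can be sketched generically: evaluate a set of depth hypotheses, keep the best one, and halve the search interval around it. The cost function, per-point treatment, and step counts below are placeholders, not the paper's grid refinement algorithm.

```python
import numpy as np

def coarse_to_fine_depth(cost_fn, z_min, z_max, steps=8, levels=4):
    """
    Generic coarse-to-fine depth search for one grid point (illustrative of
    the refinement idea only). `cost_fn(z)` should return a photo-consistency
    cost computed from the elemental images at depth z. Each level halves the
    search interval around the current best depth, so accuracy is configurable
    via `levels`.
    """
    lo, hi = z_min, z_max
    best_z = 0.5 * (lo + hi)
    for _ in range(levels):
        candidates = np.linspace(lo, hi, steps)
        costs = [cost_fn(z) for z in candidates]
        best_z = candidates[int(np.argmin(costs))]
        half = 0.25 * (hi - lo)          # shrink the interval around the best depth
        lo, hi = best_z - half, best_z + half
    return best_z

# toy cost with a minimum near z = 42.0 (e.g. millimetres)
print(coarse_to_fine_depth(lambda z: (z - 42.0) ** 2, 10.0, 100.0))
```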
International Conference on Biometrics: Theory, Applications and Systems | 2009
Panagiotis Perakis; Georgios Passalis; Theoharis Theoharis; George Toderici; Ioannis A. Kakadiaris
Three-dimensional face recognition has lately received much attention due to its robustness in the presence of lighting and pose variations. However, certain pose variations often result in missing facial data. This is common in realistic scenarios, such as uncontrolled environments and uncooperative subjects. Most previous 3D face recognition methods do not handle extensive missing data as they rely on frontal scans. Currently, there is no method to perform recognition across scans of different poses. A unified method that addresses the partial matching problem is proposed. Both frontal and side (left or right) facial scans are handled in a way that allows interpose retrieval operations. The main contributions of this paper include a novel 3D landmark detector and a deformable model framework that supports symmetric fitting. The landmark detector is utilized to detect the pose of the facial scan. This information is used to mark areas of missing data and to roughly register the facial scan with an Annotated Face Model (AFM). The AFM is fitted using a deformable model framework that introduces the method of exploiting facial symmetry where data are missing. Subsequently, a geometry image is extracted from the fitted AFM that is independent of the original pose of the facial scan. Retrieval operations, such as face identification, are then performed on a wavelet domain representation of the geometry image. Thorough testing was performed by combining the largest publicly available databases. To the best of our knowledge, this is the first method that handles side scans with extensive missing data (e.g., up to half of the face missing).
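Once pose-independent wavelet-domain signatures are available for probes and gallery, identification reduces to a nearest-neighbor search. The sketch below shows a rank-one evaluation over hypothetical signatures, with an L1 distance as an assumed matcher; signature extraction itself is the pipeline described above.

```python
import numpy as np

def rank_one_rate(probe_sigs, probe_ids, gallery_sigs, gallery_ids):
    """
    Rank-one recognition rate for fixed-length signatures (illustrative
    evaluation sketch). Each signature is a 1D array; identities are ints.
    """
    correct = 0
    for sig, true_id in zip(probe_sigs, probe_ids):
        d = np.abs(gallery_sigs - sig).sum(axis=1)   # L1 distance to each gallery entry
        correct += int(gallery_ids[int(np.argmin(d))] == true_id)
    return correct / len(probe_ids)

# hypothetical 3-subject gallery and slightly noisy probes of the same subjects
rng = np.random.default_rng(1)
gallery_sigs = rng.random((3, 256))
gallery_ids  = np.array([0, 1, 2])
probe_sigs   = gallery_sigs + 0.01 * rng.random((3, 256))
print(rank_one_rate(probe_sigs, gallery_ids, gallery_sigs, gallery_ids))  # 1.0
```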