Publication


Featured research published by Panagiotis Perakis.


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2011

Using Facial Symmetry to Handle Pose Variations in Real-World 3D Face Recognition

Georgios Passalis; Panagiotis Perakis; Theoharis Theoharis; Ioannis A. Kakadiaris

The uncontrolled conditions of real-world biometric applications pose a great challenge to any face recognition approach. The unconstrained acquisition of data from uncooperative subjects may result in facial scans with significant pose variations along the yaw axis. Such pose variations can cause extensive occlusions, resulting in missing data. In this paper, a novel 3D face recognition method is proposed that uses facial symmetry to handle pose variations. It employs an automatic landmark detector that estimates pose and detects occluded areas for each facial scan. Subsequently, an Annotated Face Model is registered and fitted to the scan. During fitting, facial symmetry is used to overcome the challenges of missing data. The result is a pose invariant geometry image. Unlike existing methods that require frontal scans, the proposed method performs comparisons among interpose scans using a wavelet-based biometric signature. It is suitable for real-world applications as it only requires half of the face to be visible to the sensor. The proposed method was evaluated using databases from the University of Notre Dame and the University of Houston that, to the best of our knowledge, include the most challenging pose variations publicly available. The average rank-one recognition rate of the proposed method in these databases was 83.7 percent.
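
As a rough illustration of the symmetry idea (a minimal sketch, not the authors' implementation), assume the fitted face model is pose-normalized so that its midline lies on the x = 0 plane and that a precomputed left/right vertex correspondence (the hypothetical mirror_index array below) is available; missing vertices can then be filled with the reflection of their visible counterparts:

import numpy as np

def symmetric_fill(vertices, visible, mirror_index):
    # vertices     : (N, 3) model vertices fitted to the scan, with the
    #                facial midline assumed to lie on the x = 0 plane.
    # visible      : (N,) bool, True where the scan actually provided data.
    # mirror_index : (N,) int, precomputed left/right vertex correspondence
    #                of the symmetric template (hypothetical input).
    reflected = vertices[mirror_index].copy()
    reflected[:, 0] *= -1.0  # mirror across the x = 0 plane
    # Keep visible vertices; fall back to their mirrored counterparts elsewhere.
    return np.where(visible[:, None], vertices, reflected)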


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2013

3D Facial Landmark Detection under Large Yaw and Expression Variations

Panagiotis Perakis; Georgios Passalis; Theoharis Theoharis; Ioannis A. Kakadiaris

A 3D landmark detection method for 3D facial scans is presented and thoroughly evaluated. The main contribution of the presented method is the automatic and pose-invariant detection of landmarks on 3D facial scans under large yaw variations (that often result in missing facial data), and its robustness against large facial expressions. Three-dimensional information is exploited by using 3D local shape descriptors to extract candidate landmark points. The shape descriptors include the shape index, a continuous map of principal curvature values of a 3D object's surface, and spin images, local descriptors of the object's 3D point distribution. The candidate landmarks are identified and labeled by matching them with a Facial Landmark Model (FLM) of facial anatomical landmarks. The presented method is extensively evaluated against a variety of 3D facial databases and achieves state-of-the-art accuracy (4.5-6.3 mm mean landmark localization error), considerably outperforming previous methods, even when tested with the most challenging data.
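
The shape index mentioned above is a standard surface descriptor; as a hedged sketch (using the common [0, 1] convention, which may differ in detail from the paper's implementation), it can be computed per vertex from already-estimated principal curvatures:

import numpy as np

def shape_index(k1, k2):
    # Shape index in the [0, 1] convention, computed from principal
    # curvatures. Values near the extremes correspond to cap-like or
    # cup-like patches (e.g. the nose tip), 0.5 to saddle-like regions.
    # It is undefined at planar points (k1 == k2 == 0), where arctan2
    # returns 0 and the value defaults to 0.5.
    k_max = np.maximum(k1, k2)
    k_min = np.minimum(k1, k2)
    return 0.5 - (1.0 / np.pi) * np.arctan2(k_max + k_min, k_max - k_min)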


international conference on biometrics theory applications and systems | 2009

Partial matching of interpose 3D facial data for face recognition

Panagiotis Perakis; Georgios Passalis; Theoharis Theoharis; George Toderici; Ioannis A. Kakadiaris

Three-dimensional face recognition has lately received much attention due to its robustness in the presence of lighting and pose variations. However, certain pose variations often result in missing facial data. This is common in realistic scenarios, such as uncontrolled environments and uncooperative subjects. Most previous 3D face recognition methods do not handle extensive missing data as they rely on frontal scans. Currently, there is no method to perform recognition across scans of different poses. A unified method that addresses the partial matching problem is proposed. Both frontal and side (left or right) facial scans are handled in a way that allows interpose retrieval operations. The main contributions of this paper include a novel 3D landmark detector and a deformable model framework that supports symmetric fitting. The landmark detector is utilized to detect the pose of the facial scan. This information is used to mark areas of missing data and to roughly register the facial scan with an Annotated Face Model (AFM). The AFM is fitted using a deformable model framework that introduces the method of exploiting facial symmetry where data are missing. Subsequently, a geometry image is extracted from the fitted AFM that is independent of the original pose of the facial scan. Retrieval operations, such as face identification, are then performed on a wavelet domain representation of the geometry image. Thorough testing was performed by combining the largest publicly available databases. To the best of our knowledge, this is the first method that handles side scans with extensive missing data (e.g., up to half of the face missing).


eurographics | 2009

Automatic 3D facial region retrieval from multi-pose facial datasets

Panagiotis Perakis; Theoharis Theoharis; Georgios Passalis; Ioannis A. Kakadiaris

The availability of 3D facial datasets is rapidly growing, mainly as a result of medical and biometric applications. These applications often require the retrieval of specific facial areas (such as the nasal region). The most crucial step in facial region retrieval is the detection of key 3D facial landmarks (e.g., the nose tip). A key advantage of 3D facial data over 2D facial data is their pose invariance. Any landmark detection method must therefore also be pose invariant. In this paper, we present the first 3D facial landmark detection method that works in datasets with pose rotations of up to 80 degrees around the y-axis. It is tested on the largest publicly available 3D facial datasets, for which we have created a ground truth by manually annotating the 3D landmarks. Landmarks automatically detected by our method are then used to robustly retrieve facial regions from 3D facial datasets.


Pattern Recognition | 2014

Feature fusion for facial landmark detection

Panagiotis Perakis; Theoharis Theoharis; Ioannis A. Kakadiaris

Facial landmark detection is a crucial first step in facial analysis for biometrics and numerous other applications. However, it has proved to be a very challenging task due to the numerous sources of variation in 2D and 3D facial data. Although landmark detection based on descriptors of the 2D and 3D appearance of the face has been extensively studied, the fusion of such feature descriptors is a relatively under-studied issue. In this paper, a novel generalized framework for combining facial feature descriptors is presented, and several feature fusion schemes are proposed and evaluated. The proposed framework maps each feature into a similarity score and combines the individual similarity scores into a resultant score, used to select the optimal solution for a queried landmark. The evaluation of the proposed fusion schemes for facial landmark detection clearly indicates that a quadratic distance-to-similarity mapping in conjunction with a root mean square rule for similarity fusion achieves the best performance in accuracy, efficiency, robustness and monotonicity.
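
One plausible reading of the best-performing scheme (an illustrative sketch; the paper's exact normalization constants and score ranges are not reproduced here) maps each feature distance to a similarity with a quadratic profile and fuses the per-feature similarities with a root-mean-square rule:

import numpy as np

def quadratic_similarity(distance, d_max):
    # Quadratic distance-to-similarity mapping: s = 1 - (d / d_max)^2,
    # where d_max is an assumed per-feature normalization constant.
    d = np.clip(np.asarray(distance, dtype=float) / d_max, 0.0, 1.0)
    return 1.0 - d ** 2

def rms_fusion(similarities):
    # Root-mean-square rule over the individual similarity scores.
    s = np.asarray(similarities, dtype=float)
    return np.sqrt(np.mean(s ** 2))

For example, rms_fusion([quadratic_similarity(d, m) for d, m in zip(distances, max_distances)]) would yield the fused score for one candidate landmark.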


eurographics | 2015

Automatic 3D object fracturing for evaluation of partial retrieval and object restoration tasks: benchmark and application to 3D cultural heritage data

Robert Gregor; Danny Bauer; Ivan Sipiran; Panagiotis Perakis; Tobias Schreck

Recently, 3D digitization and printing hardware have seen rapidly increasing adoption. High-quality digitization of real-world objects is becoming more and more efficient. In this context, growing amounts of data from the cultural heritage (CH) domain such as columns, tombstones or arches are being digitized and archived in 3D repositories. In many cases, these objects are not complete, but fragmented into several pieces and eroded over time. As manual restoration of fragmented objects is a tedious and error-prone process, recent work has addressed automatic reassembly and completion of fragmented 3D data sets. While a growing number of related techniques are being proposed by researchers, their evaluation currently is limited to smaller numbers of high-quality test fragment sets. We address this gap by contributing a methodology to automatically generate 3D fragment data based on synthetic fracturing of 3D input objects. Our methodology allows generating large-scale fragment test data sets from existing CH object models, complementing manual benchmark generation based on scanning of fragmented real objects. Besides being scalable, our approach also has the advantage to come with ground truth information (i.e. the input objects), which is often not available when scans of real fragments are used. We apply our approach to the Hampson collection of digitized pottery objects, creating and making available a first, larger restoration test data set that comes with ground truth. Furthermore, we illustrate the usefulness of our test data for evaluation of a recent 3D restoration method based on symmetry analysis and also outline how the applicability of 3D retrieval techniques could be evaluated with respect to 3D restoration tasks. Finally, we discuss first results of an ongoing extension of our methodology to include object erosion processes by means of a physiochemical model simulating weathering effects.


Pattern Recognition | 2016

An effective methodology for dynamic 3D facial expression retrieval

Antonios Danelakis; Theoharis Theoharis; Ioannis Pratikakis; Panagiotis Perakis

The problem of facial expression recognition in dynamic sequences of 3D face scans has received a significant amount of attention in the recent past, whereas the problem of retrieval in this type of data has not. A novel retrieval methodology for such data is introduced in this paper. The proposed methodology automatically detects specific facial landmarks and uses them to create a descriptor. This descriptor is the concatenation of three sub-descriptors which capture topological as well as geometric information of the 3D face scans. The motivation behind the proposed hybrid facial expression descriptor is the fact that some facial expressions, like happiness and surprise, are characterized by obvious changes in the mouth topology, while others, like anger, fear and sadness, produce geometric but no significant topological changes. The proposed retrieval scheme exploits the Dynamic Time Warping technique in order to compare descriptors corresponding to different 3D facial sequences. A detailed evaluation of the introduced retrieval scheme is presented, showing that it outperforms previous state-of-the-art retrieval schemes. Experiments have been conducted using the six prototypical expressions of the standard dataset BU-4DFE and the eight prototypical expressions of the recently available dataset BP4D-Spontaneous. Finally, a majority voting scheme based on the retrieval results is used to achieve unsupervised dynamic 3D facial expression recognition. The achieved classification accuracy is comparable to the state-of-the-art supervised dynamic 3D facial expression recognition techniques.

Highlights: We illustrate a novel retrieval methodology for dynamic sequences of 3D face scans. We present a detailed evaluation of the introduced retrieval scheme. BU-4DFE and BP4D-Spontaneous data sets were used for experiments. Retrieval results are used to achieve unsupervised facial expression recognition. The presented results outperform state-of-the-art retrieval schemes.
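
The Dynamic Time Warping comparison mentioned above can be sketched as follows (a generic DTW with a Euclidean frame-to-frame cost; the paper's actual descriptor and cost function are not reproduced here):

import numpy as np

def dtw_distance(seq_a, seq_b):
    # Classic DTW between two descriptor sequences given as (T, D) arrays
    # of per-frame descriptors; lower values mean more similar sequences.
    n, m = len(seq_a), len(seq_b)
    acc = np.full((n + 1, m + 1), np.inf)
    acc[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(seq_a[i - 1] - seq_b[j - 1])
            acc[i, j] = cost + min(acc[i - 1, j],      # insertion
                                   acc[i, j - 1],      # deletion
                                   acc[i - 1, j - 1])  # match
    return acc[n, m]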


Handbook of Face Recognition | 2011

Face Recognition Using 3D Images

Ioannis A. Kakadiaris; Georgios Passalis; George Toderici; E. Efraty; Panagiotis Perakis; Dat Chu; Shishir K. Shah; Theoharis Theoharis

In this chapter, we present advances that aid in overcoming the challenges encountered in 3D face recognition. First, we present a fully automatic 3D face recognition system, UR3D, which has been proven to be robust under variations in expressions. Second, we demonstrate how to handle pose variations. Finally, we demonstrate how the problems related to the cost and unfriendliness of 3D scanners can be mitigated through hybrid systems.


Archive | 2010

3D Facial Landmark Detection & Face Registration: A 3D Facial Landmark Model & 3D Local Shape Descriptors Approach

Panagiotis Perakis; Georgios Passalis; Theoharis Theoharis; Ioannis A. Kakadiaris


Archive | 2014

Towards the Creation of Digital Stones from 2D Samples

Christian Schellewald; Panagiotis Perakis; Theoharis Theoharis

Collaboration


Dive into Panagiotis Perakis's collaborations.

Top Co-Authors

Theoharis Theoharis
University of Houston System

Georgios Passalis
National and Kapodistrian University of Athens

Antonios Danelakis
National and Kapodistrian University of Athens

Ioannis Pratikakis
Democritus University of Thrace

Dat Chu
University of Houston

E. Efraty
University of Houston