Network


Latest external collaborations at the country level. Click on the dots to dive into the details.

Hotspot


Dive into the research topics where Stefanos Zafeiriou is active.

Publication


Featured research published by Stefanos Zafeiriou.


International Conference on Computer Vision | 2013

300 Faces in-the-Wild Challenge: The First Facial Landmark Localization Challenge

Christos Sagonas; Georgios Tzimiropoulos; Stefanos Zafeiriou; Maja Pantic

Automatic facial point detection plays arguably the most important role in face analysis. Several methods have been proposed which report their results on databases captured under both constrained and unconstrained conditions. Most of these databases provide annotations with different mark-ups and, in some cases, there are problems related to the accuracy of the fiducial points. The aforementioned issues, as well as the lack of an evaluation protocol, make it difficult to compare performance between different systems. In this paper, we present the 300 Faces in-the-Wild Challenge: the first facial landmark localization challenge, held in conjunction with the International Conference on Computer Vision 2013, Sydney, Australia. The main goal of this challenge is to compare the performance of different methods on a newly collected dataset using the same evaluation protocol and the same mark-up and, hence, to develop the first standardized benchmark for facial landmark localization.
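A standardized evaluation protocol of the kind the challenge advocates typically reports the mean point-to-point error normalized by a reference distance. The sketch below illustrates one common variant of such a metric (the 68-point eye-corner indices and the inter-ocular normalization are assumptions for illustration, not a statement of the challenge's exact protocol):

```python
import numpy as np

def normalized_landmark_error(pred, gt, left_eye_idx=36, right_eye_idx=45):
    """Mean point-to-point Euclidean error between predicted and
    ground-truth landmarks, normalized by the inter-ocular distance.
    The eye indices follow the common 68-point mark-up and are
    illustrative assumptions."""
    pred = np.asarray(pred, dtype=float)  # (N, 2) predicted landmarks
    gt = np.asarray(gt, dtype=float)      # (N, 2) ground-truth landmarks
    interocular = np.linalg.norm(gt[left_eye_idx] - gt[right_eye_idx])
    per_point = np.linalg.norm(pred - gt, axis=1)
    return per_point.mean() / interocular
```

Normalizing by a face-scale quantity makes errors comparable across images of different resolutions, which is what enables a fair cross-method benchmark.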


Computer Vision and Pattern Recognition | 2013

Robust Discriminative Response Map Fitting with Constrained Local Models

Akshay Asthana; Stefanos Zafeiriou; Shiyang Cheng; Maja Pantic

We present a novel discriminative regression based approach for the Constrained Local Models (CLMs) framework, referred to as the Discriminative Response Map Fitting (DRMF) method, which shows impressive performance in the generic face fitting scenario. The motivation behind this approach is that, unlike the holistic texture based features used in the discriminative AAM approaches, the response map can be represented by a small set of parameters and these parameters can be very efficiently used for reconstructing unseen response maps. Furthermore, we show that by adopting very simple off-the-shelf regression techniques, it is possible to learn robust functions from response maps to the shape parameter updates. The experiments, conducted on the Multi-PIE, XM2VTS and LFPW databases, show that the proposed DRMF method outperforms state-of-the-art algorithms for the task of generic face fitting. Moreover, the DRMF method is computationally very efficient and is real-time capable. The current MATLAB implementation takes 1 second per image. To facilitate future comparisons, we release the MATLAB code and the pre-trained models for research purposes.
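The "simple off-the-shelf regression" idea above, learning a linear map from features to parameter updates, can be sketched as ridge regression. This is only an illustrative analogue with made-up variable names, not the paper's actual DRMF training procedure:

```python
import numpy as np

def fit_update_regressor(features, param_updates, lam=1e-3):
    """Ridge regression from (response-map-like) feature vectors to
    shape-parameter updates. A minimal sketch: solve the regularized
    normal equations with an appended bias column."""
    X = np.asarray(features, dtype=float)       # (n_samples, n_features)
    Y = np.asarray(param_updates, dtype=float)  # (n_samples, n_params)
    X1 = np.hstack([X, np.ones((X.shape[0], 1))])  # append bias column
    # (X1'X1 + lam*I) R = X1'Y  ->  closed-form ridge solution
    R = np.linalg.solve(X1.T @ X1 + lam * np.eye(X1.shape[1]), X1.T @ Y)
    return R                                    # (n_features + 1, n_params)
```

At test time the fitted matrix is applied repeatedly: extract features around the current shape, predict an update, move the shape, and iterate until convergence.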


IEEE Transactions on Neural Networks | 2006

Exploiting discriminant information in nonnegative matrix factorization with application to frontal face verification

Stefanos Zafeiriou; Anastasios Tefas; Ioan Buciu; Ioannis Pitas

In this paper, two supervised methods for enhancing the classification accuracy of the Nonnegative Matrix Factorization (NMF) algorithm are presented. The idea is to extend the NMF algorithm in order to extract features that enforce not only the spatial locality, but also the separability between classes in a discriminant manner. The first method employs discriminant analysis in the features derived from NMF. In this way, a two-phase discriminant feature extraction procedure is implemented, namely NMF plus Linear Discriminant Analysis (LDA). The second method incorporates the discriminant constraints inside the NMF decomposition. Thus, a decomposition of a face to its discriminant parts is obtained and new update rules for both the weights and the basis images are derived. The introduced methods have been applied to the problem of frontal face verification using the well-known XM2VTS database. Both methods greatly enhance the performance of NMF for frontal face verification.
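The first method's two-phase pipeline (NMF feature extraction followed by LDA) can be mimicked with scikit-learn stand-ins. The paper derives its own update rules; this is only an illustrative analogue on synthetic non-negative data:

```python
import numpy as np
from sklearn.decomposition import NMF
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.random((60, 100))         # 60 non-negative "images", 100 pixels each
y = rng.integers(0, 2, size=60)   # binary labels (e.g. client vs. impostor)

# Phase 1: NMF projects images onto a part-based non-negative basis.
# Phase 2: LDA finds the most class-discriminant directions in that space.
model = make_pipeline(
    NMF(n_components=10, init="nndsvda", max_iter=500, random_state=0),
    LinearDiscriminantAnalysis(),
)
model.fit(X, y)
```

The second method fuses the two phases into a single decomposition, which this off-the-shelf pipeline does not capture.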


Image and Vision Computing | 2012

Static and dynamic 3D facial expression recognition: A comprehensive survey

Georgia Sandbach; Stefanos Zafeiriou; Maja Pantic; Lijun Yin

Automatic facial expression recognition constitutes an active research field due to the latest advances in computing technology that make the user experience a clear priority. The majority of work conducted in this area involves 2D imagery, despite the problems this presents due to inherent pose and illumination variations. In order to deal with these problems, 3D and 4D (dynamic 3D) recordings are increasingly used in expression analysis research. In this paper we survey the recent advances in 3D and 4D facial expression recognition. We discuss developments in 3D facial data acquisition and tracking, and present currently available 3D/4D face databases suitable for 3D/4D facial expression analysis as well as the existing facial expression recognition systems that exploit either 3D or 4D data in detail. Finally, challenges that have to be addressed if 3D facial expression recognition systems are to become a part of future applications are extensively discussed.


Computer Vision and Pattern Recognition | 2013

A Semi-automatic Methodology for Facial Landmark Annotation

Christos Sagonas; Georgios Tzimiropoulos; Stefanos Zafeiriou; Maja Pantic

Developing powerful deformable face models requires massive, annotated face databases on which techniques can be trained, validated and tested. Manual annotation of each facial image in terms of landmarks requires a trained expert and the workload is usually enormous. Fatigue is one of the reasons that in some cases annotations are inaccurate. This is why the majority of existing facial databases provide annotations for a relatively small subset of the training images. Furthermore, there is hardly any correspondence between the annotated landmarks across different databases. These problems make cross-database experiments almost infeasible. To overcome these difficulties, we propose a semi-automatic annotation methodology for annotating massive face datasets. This is the first attempt to create a tool suitable for annotating massive facial databases. We employed our tool for creating annotations for the MultiPIE, XM2VTS, AR, and FRGC Ver. 2 databases. The annotations will be made publicly available from http://ibug.doc.ic.ac.uk/resources/facial-point-annotations/. Finally, we present experiments which verify the accuracy of the produced annotations.


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2012

Subspace Learning from Image Gradient Orientations

Georgios Tzimiropoulos; Stefanos Zafeiriou; Maja Pantic

We introduce the notion of subspace learning from image gradient orientations for appearance-based object recognition. As image data are typically noisy and the noise is substantially non-Gaussian, traditional subspace learning from pixel intensities very often fails to estimate reliably the low-dimensional subspace of a given data population. We show that replacing pixel intensities with gradient orientations and the ℓ2 norm with a cosine-based distance measure offers, to some extent, a remedy to this problem. Within this framework, which we coin Image Gradient Orientations (IGO) subspace learning, we first formulate and study the properties of Principal Component Analysis of image gradient orientations (IGO-PCA). We then show its connection to previously proposed robust PCA techniques both theoretically and experimentally. Finally, we derive a number of other popular subspace learning techniques, namely, Linear Discriminant Analysis (LDA), Locally Linear Embedding (LLE), and Laplacian Eigenmaps (LE). Experimental results show that our algorithms significantly outperform popular methods such as Gabor features and Local Binary Patterns and achieve state-of-the-art performance for difficult problems such as illumination and occlusion-robust face recognition. In addition to this, the proposed IGO-methods require the eigendecomposition of simple covariance matrices and are as computationally efficient as their corresponding ℓ2 norm intensity-based counterparts. Matlab code for the methods presented in this paper can be found at http://ibug.doc.ic.ac.uk/resources.
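The core IGO-PCA construction described above, replacing each pixel intensity with e^{iφ} for gradient orientation φ and then applying PCA, can be sketched in a few lines. This is a minimal NumPy sketch; the paper's exact normalizations and the cosine-distance derivation may differ:

```python
import numpy as np

def igo_pca(images, n_components):
    """PCA of image gradient orientations (IGO-PCA), in sketch form:
    each image becomes the complex feature vector e^{i*phi}, where phi
    is the per-pixel gradient orientation, and standard PCA is applied
    to the complex features."""
    feats = []
    for img in images:
        gy, gx = np.gradient(img.astype(float))  # row- and column-derivatives
        phi = np.arctan2(gy, gx)                 # gradient orientation per pixel
        feats.append(np.exp(1j * phi).ravel())
    X = np.stack(feats)                          # (n_samples, n_pixels), complex
    X = X - X.mean(axis=0)
    # Eigendecompose the small sample Gram matrix, as is standard
    # when n_samples << n_pixels.
    G = X @ X.conj().T
    vals, vecs = np.linalg.eigh(G)
    order = np.argsort(vals)[::-1][:n_components]
    basis = X.T @ vecs[:, order]
    basis /= np.linalg.norm(basis, axis=0, keepdims=True)
    return basis                                 # (n_pixels, n_components)
```

Because orientations of non-Gaussian noise (e.g. occlusions) are roughly uniformly distributed, their contributions to the complex inner products tend to cancel, which is the intuition behind the method's robustness.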


IEEE Transactions on Visualization and Computer Graphics | 2005

Blind robust watermarking schemes for copyright protection of 3D mesh objects

Stefanos Zafeiriou; Anastasios Tefas; Ioannis Pitas

In this paper, two novel methods suitable for blind 3D mesh object watermarking applications are proposed. The first method is robust against 3D rotation, translation, and uniform scaling. The second one is robust against both geometric and mesh simplification attacks. A pseudorandom watermarking signal is cast in the 3D mesh object by deforming its vertices geometrically, without altering the vertex topology. Prior to watermark embedding and detection, the object is rotated and translated so that its center of mass and its principal component coincide with the origin and the z-axis of the Cartesian coordinate system. This geometrical transformation ensures watermark robustness to translation and rotation. Robustness to uniform scaling is achieved by restricting the vertex deformations to occur only along the r coordinate of the corresponding (r, θ, φ) spherical coordinate system. In the first method, a set of vertices that correspond to specific angles θ is used for watermark embedding. In the second method, the samples of the watermark sequence are embedded in a set of vertices that correspond to a range of angles in the θ domain in order to achieve robustness against mesh simplifications. Experimental results indicate the ability of the proposed methods to deal with the aforementioned attacks.
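The core embedding idea, center the mesh, move to spherical coordinates, and perturb only the radial coordinate r, can be sketched as follows. PCA alignment to the z-axis, the θ-based vertex selection, and the detection step are omitted; the function and parameter names are assumptions:

```python
import numpy as np

def embed_r_watermark(vertices, strength=0.01, seed=0):
    """Sketch of radial watermark embedding: center the mesh, convert
    vertices to spherical coordinates (r, theta, phi), and perturb only
    r by a pseudorandom +/-1 signal, leaving the topology untouched.
    A multiplicative perturbation keeps the mark invariant to uniform
    scaling."""
    v = np.asarray(vertices, dtype=float) - np.mean(vertices, axis=0)
    r = np.linalg.norm(v, axis=1)
    theta = np.arccos(np.clip(v[:, 2] / np.maximum(r, 1e-12), -1.0, 1.0))
    phi = np.arctan2(v[:, 1], v[:, 0])
    rng = np.random.default_rng(seed)
    w = rng.choice([-1.0, 1.0], size=r.shape)   # pseudorandom watermark signal
    r_marked = r * (1.0 + strength * w)
    # Convert back to Cartesian coordinates.
    return np.column_stack([
        r_marked * np.sin(theta) * np.cos(phi),
        r_marked * np.sin(theta) * np.sin(phi),
        r_marked * np.cos(theta),
    ])
```

Detection would regenerate the same pseudorandom sequence from the seed and correlate it with the observed radial deviations, which is what makes the scheme blind (no original mesh needed).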


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2010

Robust FFT-Based Scale-Invariant Image Registration with Image Gradients

Georgios Tzimiropoulos; Vasileios Argyriou; Stefanos Zafeiriou; Tania Stathaki

We present a robust FFT-based approach to scale-invariant image registration. Our method relies on FFT-based correlation twice: once in the log-polar Fourier domain to estimate the scaling and rotation and once in the spatial domain to recover the residual translation. Previous methods based on the same principles are not robust. To equip our scheme with robustness and accuracy, we introduce modifications which tailor the method to the nature of images. First, we derive efficient log-polar Fourier representations by replacing image functions with complex gray-level edge maps. We show that this representation both captures the structure of salient image features and circumvents problems related to the low-pass nature of images, interpolation errors, border effects, and aliasing. Second, to recover the unknown parameters, we introduce the normalized gradient correlation. We show that, using image gradients to perform correlation, the errors induced by outliers are mapped to a uniform distribution for which our normalized gradient correlation exhibits robust performance. Exhaustive experimentation with real images showed that, unlike other Fourier-based correlation techniques, the proposed method was able to estimate translations, arbitrary rotations, and scale factors up to 6.
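The building block the method applies twice, FFT-based correlation with a phase peak marking the displacement, can be sketched as plain phase correlation for integer translations. Note this is not the paper's normalized gradient correlation, and the log-polar resampling for scale/rotation is omitted:

```python
import numpy as np

def phase_correlation_shift(a, b):
    """Recover an integer translation between two same-sized images by
    FFT-based phase correlation. Returns (dy, dx) such that rolling the
    second image by (dy, dx) matches the first."""
    A = np.fft.fft2(a)
    B = np.fft.fft2(b)
    cross = A * np.conj(B)
    cross /= np.maximum(np.abs(cross), 1e-12)   # keep phase only
    corr = np.fft.ifft2(cross).real             # delta-like peak at the shift
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Map the peak location to signed shifts (wrap-around convention).
    if dy > a.shape[0] // 2:
        dy -= a.shape[0]
    if dx > a.shape[1] // 2:
        dx -= a.shape[1]
    return dy, dx
```

Running the same correlation on log-polar resampled Fourier magnitudes turns scaling and rotation into translations, which is how the first stage of the method estimates them.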


Image and Vision Computing | 2016

300 Faces In-The-Wild Challenge

Christos Sagonas; Epameinondas Antonakos; Georgios Tzimiropoulos; Stefanos Zafeiriou; Maja Pantic

Computer Vision has recently witnessed great research advances towards automatic facial point detection. Numerous methodologies have been proposed during the last few years that achieve accurate and efficient performance. However, fair comparison between these methodologies is infeasible mainly due to two issues. (a) Most existing databases, captured under both constrained and unconstrained (in-the-wild) conditions, have been annotated using different mark-ups and, in most cases, the accuracy of the annotations is low. (b) Most published works report experimental results using different training/testing sets, different error metrics and, of course, landmark points with semantically different locations. In this paper, we aim to overcome the aforementioned problems by (a) proposing a semi-automatic annotation technique that was employed to re-annotate most existing facial databases under a unified protocol, and (b) presenting the 300 Faces In-The-Wild Challenge (300-W), the first facial landmark localization challenge that was organized twice, in 2013 and 2015. To the best of our knowledge, this is the first effort towards a unified annotation scheme of massive databases and a fair experimental comparison of existing facial landmark localization systems. The images and annotations of the new testing database that was used in the 300-W challenge are available from http://ibug.doc.ic.ac.uk/resources/300-W_IMAVIS/.


IEEE Transactions on Information Forensics and Security | 2007

A Novel Discriminant Non-Negative Matrix Factorization Algorithm With Applications to Facial Image Characterization Problems

Irene Kotsia; Stefanos Zafeiriou; Ioannis Pitas

The methods introduced so far regarding discriminant non-negative matrix factorization (DNMF) do not guarantee convergence to a stationary limit point. In order to remedy this limitation, a novel DNMF method is presented that uses projected gradients. The proposed algorithm employs some extra modifications that make the method more suitable for classification tasks. The usefulness of the proposed technique to frontal face verification and facial expression recognition problems is demonstrated.
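The projected-gradient scheme the paper builds on can be sketched for the plain NMF objective ½‖V − WH‖²_F (the discriminant terms that make it DNMF are omitted here; step size handling in the paper is more sophisticated than the fixed step below):

```python
import numpy as np

def projected_gradient_nmf_step(V, W, H, step=1e-3):
    """One projected-gradient update for the NMF objective
    0.5 * ||V - W @ H||_F^2: take a gradient step on each factor,
    then project negative entries back to zero. Iterating such steps
    converges to a stationary point, which the multiplicative DNMF
    update rules do not guarantee."""
    grad_W = (W @ H - V) @ H.T        # gradient w.r.t. the basis W
    grad_H = W.T @ (W @ H - V)        # gradient w.r.t. the coefficients H
    W = np.maximum(W - step * grad_W, 0.0)  # project onto W >= 0
    H = np.maximum(H - step * grad_H, 0.0)  # project onto H >= 0
    return W, H
```

The projection step (clipping at zero) is what keeps the iterates feasible while still permitting standard convergence analysis for gradient methods.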

Collaboration


Dive into Stefanos Zafeiriou's collaborations.

Top Co-Authors

Maja Pantic
Imperial College London

Ioannis Pitas
Aristotle University of Thessaloniki

Irene Kotsia
Queen Mary University of London

Anastasios Tefas
Aristotle University of Thessaloniki

Maria Petrou
Imperial College London