
Publication


Featured research published by Bernhard Egger.


German Conference on Pattern Recognition | 2013

A Monte Carlo strategy to integrate detection and model-based face analysis

Sandro Schönborn; Andreas Forster; Bernhard Egger; Thomas Vetter

We present a novel probabilistic approach for fitting a statistical model to an image. A 3D Morphable Model (3DMM) of faces is interpreted as a generative (Top-Down) Bayesian model. Random Forests are used as noisy detectors (Bottom-Up) for the face and facial landmark positions. The Top-Down and Bottom-Up parts are then combined using a Data-Driven Markov Chain Monte Carlo Method (DDMCMC). At the core of the integration, we use the Metropolis-Hastings algorithm, which has two main advantages. First, the algorithm can handle unreliable detections and therefore does not need the detectors to take an early and possibly wrong hard decision before fitting. Second, it is open for integration of various cues to guide the fitting process. Based on the proposed approach, we implemented a completely automatic, pose- and illumination-invariant face recognition application. We are able to train and test the building blocks of our application on different databases. The system is evaluated on the Multi-PIE database and reaches state-of-the-art performance.


German Conference on Pattern Recognition | 2014

Pose Normalization for Eye Gaze Estimation and Facial Attribute Description from Still Images

Bernhard Egger; Sandro Schönborn; Andreas Forster; Thomas Vetter

Our goal is to obtain an eye gaze estimation and a face description based on attributes (e.g. glasses, beard or thick lips) from still images. An attribute-based face description reflects human vocabulary and is therefore adequate as a face description. Head pose and eye gaze play an important role in human interaction and are a key element to extract interaction information from still images. Pose variation is a major challenge when analyzing them. Most current approaches for facial image analysis are not explicitly pose-invariant. To obtain a pose-invariant representation, we have to account for the three-dimensional nature of a face. A 3D Morphable Model (3DMM) of faces is used to obtain a dense 3D reconstruction of the face in the image. This Analysis-by-Synthesis approach provides model parameters which contain an explicit face description and a dense model-to-image correspondence. However, the fit is restricted to the model space and cannot explain all variations. Our model only contains straight gaze directions and lacks high-detail textural features. To overcome these limitations, we use the obtained correspondence in a discriminative approach. The dense correspondence is used to extract a pose-normalized version of the input image. The warped image contains all information from the original image and preserves gaze and detailed textural information. On the pose-normalized representation we train a regression function to obtain gaze estimation and attribute description. We provide results for pose-invariant gaze estimation on still images on the UUlm Head Pose and Gaze Database and attribute description on the Multi-PIE database. To the best of our knowledge, this is the first pose-invariant approach to estimate gaze from unconstrained still images.
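The pose-normalization step, warping the input into a normalized frame via dense correspondence, can be sketched as a backward warp. The tiny 4x4 image and the shift-by-one correspondence are purely illustrative stand-ins for the actual 3DMM model-to-image correspondence.

```python
import numpy as np

# Input image (illustrative 4x4 intensity grid)
image = np.arange(16.0).reshape(4, 4)

# Dense correspondence: for each pixel of the normalized frame, the
# (row, col) it maps to in the input image. Here a simple vertical
# shift stands in for the 3DMM-derived correspondence field.
rows, cols = np.meshgrid(np.arange(4), np.arange(4), indexing="ij")
src_rows = np.clip(rows + 1, 0, 3)
src_cols = cols

# Backward warp: sample the input at the corresponding positions,
# producing the pose-normalized image on which a regressor is trained.
normalized = image[src_rows, src_cols]
```

Because the warp only resamples the original pixels, the normalized image preserves gaze and fine textural detail that the model space itself cannot represent.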


International Journal of Computer Vision | 2017

Markov Chain Monte Carlo for Automated Face Image Analysis

Sandro Schönborn; Bernhard Egger; Andreas Morel-Forster; Thomas Vetter

We present a novel fully probabilistic method to interpret a single face image with the 3D Morphable Model. The new method is based on Bayesian inference and makes use of unreliable image-based information. Rather than searching a single optimal solution, we infer the posterior distribution of the model parameters given the target image. The method is a stochastic sampling algorithm with a propose-and-verify architecture based on the Metropolis–Hastings algorithm. The stochastic method can robustly integrate unreliable information and therefore does not rely on feed-forward initialization. The integrative concept is based on two ideas, a separation of proposal moves and their verification with the model (Data-Driven Markov Chain Monte Carlo), and filtering with the Metropolis acceptance rule. It does not need gradients and is less prone to local optima than standard fitters. We also introduce a new collective likelihood which models the average difference between the model and the target image rather than individual pixel differences. The average value shows a natural tendency towards a normal distribution, even when the individual pixel-wise difference is not Gaussian. We employ the new fitting method to calculate posterior models of 3D face reconstructions from single real-world images. A direct application of the algorithm with the 3D Morphable Model leads us to a fully automatic face recognition system with competitive performance on the Multi-PIE database without any database adaptation.
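The observation behind the collective likelihood, that the average pixel difference tends toward a normal distribution even when individual differences are not Gaussian, is easy to verify numerically. This is a central-limit illustration with synthetic uniform differences, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

# 500 synthetic "images", each with 10,000 per-pixel differences drawn
# from a distinctly non-Gaussian (uniform) distribution.
n_images, n_pixels = 500, 10_000
diffs = rng.uniform(0.0, 1.0, size=(n_images, n_pixels))

# The per-image average difference concentrates tightly around its
# expectation and looks approximately normal, even though the
# individual pixel-wise differences do not.
avg = diffs.mean(axis=1)
```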


Computer Vision and Image Understanding | 2015

Background modeling for generative image models

Sandro Schönborn; Bernhard Egger; Andreas Forster; Thomas Vetter

Highlights: discussion of the implicit but unavoidable background model in generative image models; analysis of common practical strategies to deal with the problem and their drawbacks; explicit background models proposed as a fundamental solution, introduced through an efficient likelihood ratio correction; the background correction clearly improves face pose estimation and recognition.

Face image interpretation with generative models is done by reconstructing the input image as well as possible. A comparison between the target and the model-generated image is complicated by the fact that faces are surrounded by background. The standard likelihood formulation only compares within the modeled face region. Through this restriction an unwanted but unavoidable background model appears in the likelihood. This implicitly present model is inappropriate for most backgrounds and leads to artifacts in the reconstruction, ranging from pose misalignment to shrinking of the face. We discuss the problem in detail for a probabilistic 3D Morphable Model and propose to use explicit image-based background models as a simple but fundamental solution. We also discuss common practical strategies that deal with the problem but suffer from limited applicability, which inhibits the fully automatic adaptation of such models. We integrate the explicit background model through a likelihood ratio correction of the face model and thereby remove the need to evaluate the complete image. The background models are generic and do not need to model background specifics. The corrected 3D Morphable Model directly leads to more accurate pose estimation and image interpretations at large yaw angles with strong self-occlusion.
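The likelihood ratio correction can be sketched in a few lines, using hypothetical Gaussian pixel models with illustrative values: the face likelihood over the face region is divided by a generic background likelihood over the same region, so the constant background term over the rest of the image cancels and the full image never needs to be evaluated.

```python
import numpy as np

def log_gauss(x, mu, sigma):
    # log density of a univariate Gaussian, evaluated elementwise
    return -0.5 * ((x - mu) / sigma) ** 2 - np.log(sigma * np.sqrt(2 * np.pi))

target = np.array([0.2, 0.8, 0.5, 0.9])       # pixels inside the face region
rendered = np.array([0.25, 0.75, 0.5, 0.85])  # a close model rendering
rendered_bad = np.array([0.9, 0.1, 0.0, 0.2]) # a poor rendering

# Generic background model: a broad Gaussian over intensities; it does
# not need to model any background specifics.
ll_bg = log_gauss(target, 0.5, 0.5).sum()

# Corrected score: log likelihood ratio of face vs. background,
# evaluated only inside the face region.
score = log_gauss(target, rendered, 0.1).sum() - ll_bg
score_bad = log_gauss(target, rendered_bad, 0.1).sum() - ll_bg
```

A good rendering scores above the generic background while a mismatched one scores below it, which removes the incentive for the fit to shrink the face region.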


British Machine Vision Conference | 2016

Occlusion-aware 3D Morphable Face Models

Bernhard Egger; Andreas Schneider; Clemens Blumer; Andreas Forster; Sandro Schönborn; Thomas Vetter

We propose a probabilistic occlusion-aware 3D Morphable Face Model adaptation framework for face image analysis based on the Analysis-by-Synthesis setup. In natural images, parts of the face are often occluded by a variety of objects. Such occlusions are a challenge for face model adaptation. We propose to segment the image into face and non-face regions and model them separately. The segmentation and the face model parameters are not known in advance and have to be adapted to the target image. A good segmentation is necessary to obtain a good face model fit and vice-versa. Therefore, face model adaptation and segmentation are solved together using an EM-like procedure. We use a stochastic sampling strategy based on the Metropolis-Hastings algorithm for face model parameter adaptation and a modified Chan-Vese segmentation for face region segmentation. Previous robust methods are limited to homogeneous, controlled illumination settings and tend to fail for important regions such as the eyes or mouth. We propose a RANSAC-based robust illumination estimation technique to handle complex illumination conditions. We do not use any manual annotation and the algorithm is not optimised to any specific kind of occlusion or database. We evaluate our method on a controlled and an “in the wild” database.
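The EM-like alternation between segmentation and model adaptation can be illustrated on a deliberately tiny toy problem in which the "face model" is a single mean intensity and occluded pixels are outliers. This is purely illustrative; the paper adapts a full 3DMM with Metropolis-Hastings sampling and uses a modified Chan-Vese segmentation.

```python
import numpy as np

# Six "pixels"; the last two are occluded (illustrative values).
pixels = np.array([0.50, 0.52, 0.48, 0.51, 0.95, 0.97])

# Initialize the model from all pixels (occlusions included), then
# alternate: segment given the model, adapt the model given the
# segmentation, as in an EM-like procedure.
c = pixels.mean()
for _ in range(10):
    face_mask = np.abs(pixels - c) < 0.2   # E-like step: segmentation
    c = pixels[face_mask].mean()           # M-like step: model adaptation
```

After a few iterations the segmentation excludes the occluded pixels and the model converges to the mean of the face pixels, mirroring how a good segmentation and a good fit reinforce each other.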


International Journal of Computer Vision | 2018

Occlusion-Aware 3D Morphable Models and an Illumination Prior for Face Image Analysis

Bernhard Egger; Sandro Schönborn; Andreas Schneider; Adam Kortylewski; Andreas Morel-Forster; Clemens Blumer; Thomas Vetter

Faces in natural images are often occluded by a variety of objects. We propose a fully automated, probabilistic and occlusion-aware 3D morphable face model adaptation framework following an analysis-by-synthesis setup. The key idea is to segment the image into regions explained by separate models. Our framework includes a 3D morphable face model, a prototype-based beard model and a simple model for occlusions and background regions. The segmentation and all the model parameters have to be inferred from the single target image. Face model adaptation and segmentation are solved jointly using an expectation–maximization-like procedure. During the E-step, we update the segmentation and in the M-step the face model parameters are updated. For face model adaptation we apply a stochastic sampling strategy based on the Metropolis–Hastings algorithm. For segmentation, we apply loopy belief propagation for inference in a Markov random field. Illumination estimation is critical for occlusion handling. Our combined segmentation and model adaptation needs a proper initialization of the illumination parameters. We propose a RANSAC-based robust illumination estimation technique. By applying this method to a large face image database we obtain a first empirical distribution of real-world illumination conditions. The obtained empirical distribution is made publicly available and can be used as prior in probabilistic frameworks, for regularization or to synthesize data for deep learning methods.
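The RANSAC idea behind the robust illumination estimation can be sketched with a scalar gain standing in for the full illumination parameters (an assumption for illustration only): minimal samples propose a gain, the consensus set of unoccluded pixels selects the best hypothesis, and a refit on the inliers gives the estimate.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic setting: image = gain * albedo for unoccluded pixels; the
# first 60 of 200 pixels are occluded by unrelated intensities.
albedo = rng.uniform(0.2, 1.0, size=200)
gain_true = 1.5
image = gain_true * albedo
image[:60] = rng.uniform(0.0, 1.0, size=60)

best_gain, best_inliers = None, -1
for _ in range(100):
    i = rng.integers(len(albedo))            # minimal sample: one pixel
    g = image[i] / albedo[i]                 # gain hypothesis
    inliers = np.abs(image - g * albedo) < 0.05
    if inliers.sum() > best_inliers:         # keep the largest consensus set
        best_gain, best_inliers = g, inliers.sum()

# Refit on the consensus set for the final robust estimate.
inliers = np.abs(image - best_gain * albedo) < 0.05
gain = (image[inliers] / albedo[inliers]).mean()
```

Even with 30% of the pixels occluded, hypotheses drawn from occluded pixels gather few inliers, so the consensus step recovers the true gain.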


Statistical Shape and Deformation Analysis: Methods, Implementation and Applications | 2017

Probabilistic Morphable Models

Bernhard Egger; Sandro Schönborn; Clemens Blumer; Thomas Vetter

3D Morphable Face Models have been introduced for the analysis of 2D face photographs. The analysis is performed by actively reconstructing the three-dimensional face from the image in an Analysis-by-Synthesis loop, exploring statistical models for shape and appearance. Here we follow a probabilistic approach to acquire a robust and automatic model adaptation. The probabilistic formulation helps to overcome two main limitations of the classical approach. First, Morphable Model adaptation is highly dependent on a good initialization. In previous approaches, the initial position of landmark points and the face pose were given by manual annotation. Our fully probabilistic formulation allows us to integrate unreliable Bottom-Up cues from face and feature point detectors. This integration is superior to the classical feed-forward approach, which is prone to early and possibly wrong decisions. The integration of uncertain Bottom-Up detectors leads to a fully automatic model adaptation process. Second, the probabilistic framework gives us a natural way to handle outliers and occlusions. Face images are recorded in highly unconstrained settings. Often parts of the face are occluded by various objects. Unhandled occlusions can mislead the model adaptation process. The probabilistic interpretation of our model makes it possible to detect and segment occluded parts of the image and leads to robust model adaptation. Throughout this chapter we develop a fully probabilistic framework for image interpretation. We start by reformulating the Morphable Model as a probabilistic model in a fully Bayesian framework. Given an image, we search for a posterior distribution of possible image explanations. The integration of Bottom-Up information and the adaptation of the model parameters are performed using a Data-Driven Markov Chain Monte Carlo approach. The face model is extended to be occlusion-aware and explicitly segments the image into face and non-face regions during the model adaptation process. The segmentation and model adaptation are performed in an Expectation-Maximization-style algorithm utilizing a robust illumination estimation method. The presented fully automatic face model adaptation can be used in a wide range of applications such as face analysis, face recognition or face image manipulation. Our framework is able to handle images containing strong outliers, occlusions and facial expressions under arbitrary poses and illuminations. Furthermore, the fully probabilistic embedding has the additional advantage that it also delivers the uncertainty of the resulting image interpretation.


International Conference on Computer Graphics Theory and Applications | 2016

Copula Eigenfaces

Bernhard Egger; Dinu Kaufmann; Sandro Schönborn; Volker Roth; Thomas Vetter

Principal component analysis is a ubiquitous method in parametric appearance modeling for describing dependency and variance in a data set. The method requires that the observed data be Gaussian-distributed. We show that this requirement is not fulfilled in the context of analysis and synthesis of facial appearance. The model mismatch leads to unnatural artifacts that are clearly visible to human observers. In order to prevent these artifacts, we propose to use a semiparametric Gaussian copula model, where dependency and variance are modeled separately. The Gaussian copula enables us to use arbitrary Gaussian and non-Gaussian marginal distributions. The new flexibility provides scale invariance and robustness to outliers as well as a higher specificity in generated images. Moreover, the new model makes a combined analysis of facial appearance and shape data possible. In practice, the proposed model can easily enhance the performance obtained by principal component analysis in existing pipelines: the steps for analysis and synthesis can be implemented as convenient pre- and post-processing steps.
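The pre-/post-processing structure of the copula approach can be sketched as follows. The rank-based normal-scores transform, the 3-feature toy data and all variable names are assumptions for illustration; scipy's norm.ppf/cdf stand in for the probit function and its inverse.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)

# Skewed, non-Gaussian marginals (stand-in for appearance features).
data = rng.exponential(size=(500, 3))

# Pre-processing: rank-based transform to standard-normal marginals.
# This separates the marginal distributions from the dependency.
ranks = data.argsort(axis=0).argsort(axis=0)
u = (ranks + 0.5) / len(data)
z = norm.ppf(u)

# PCA in the Gaussianized space, where the Gaussianity assumption holds.
z_mean = z.mean(axis=0)
_, s, vt = np.linalg.svd(z - z_mean, full_matrices=False)

# Post-processing: synthesize in z-space, then map each feature back
# through its empirical quantile function to restore the marginals.
coeffs = rng.standard_normal(3) * s / np.sqrt(len(data))
z_new = z_mean + coeffs @ vt
u_new = norm.cdf(z_new)
sample = np.array([np.quantile(data[:, j], u_new[j]) for j in range(3)])
```

Because synthesis ends by inverting the empirical quantiles, generated samples stay within the observed range of each marginal, which is one way the copula model avoids the artifacts of plain PCA.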


International Joint Conference on Computer Vision, Imaging and Computer Graphics | 2016

Copula Eigenfaces with Attributes: Semiparametric Principal Component Analysis for a Combined Color, Shape and Attribute Model

Bernhard Egger; Dinu Kaufmann; Sandro Schönborn; Volker Roth; Thomas Vetter

Principal component analysis is a ubiquitous method in parametric appearance modeling for describing dependency and variance in datasets. The method requires the observed data to be Gaussian-distributed. We show that this requirement is not fulfilled in the context of analysis and synthesis of facial appearance. The model mismatch leads to unnatural artifacts that are clearly visible to human observers. As a remedy, we use a semiparametric Gaussian copula model, where dependency and variance are modeled separately. This model enables us to use arbitrary Gaussian and non-Gaussian marginal distributions. Moreover, facial color, shape and continuous or categorical attributes can be analyzed in a unified way. Accounting for the joint dependency between all modalities leads to a more specific face model. In practice, the proposed model can enhance the performance of principal component analysis in existing pipelines: the steps for analysis and synthesis can be implemented as convenient pre- and post-processing steps.


IEEE International Conference on Automatic Face & Gesture Recognition | 2018

Morphable Face Models - An Open Framework

Thomas Gerig; Andreas Morel-Forster; Clemens Blumer; Bernhard Egger; Marcel Lüthi; Sandro Schoenborn; Thomas Vetter
