Publication

Featured research published by Martin Breidt.


International Conference on Computer Graphics and Interactive Techniques | 2012

Render me real?: investigating the effect of render style on the perception of animated virtual humans

Rachel McDonnell; Martin Breidt; Heinrich H. Bülthoff

The realistic depiction of lifelike virtual humans has been the goal of many movie makers in the last decade. Recently, films such as Tron: Legacy and The Curious Case of Benjamin Button have produced highly realistic characters. In the real-time domain, there is also a need to deliver realistic virtual characters, with the increase in popularity of interactive drama video games (such as L.A. Noire™ or Heavy Rain™). There have been mixed reactions from audiences to lifelike characters used in movies and games, with some saying that the increased realism highlights subtle imperfections, which can be disturbing. Some developers opt for a stylized rendering (such as cartoon-shading) to avoid a negative reaction [Thompson 2004]. In this paper, we investigate some of the consequences of choosing realistic or stylized rendering in order to provide guidelines for developers for creating appealing virtual characters. We conducted a series of psychophysical experiments to determine whether render style affects how virtual humans are perceived. Motion capture with synchronized eye-tracked data was used throughout to animate custom-made virtual model replicas of the captured actors.


Applied Perception in Graphics and Visualization | 2006

Semantic 3D motion retargeting for facial animation

Cristóbal Curio; Martin Breidt; Mario Kleiner; Quoc C. Vuong; Martin A. Giese; Heinrich H. Bülthoff

We present a system for realistic facial animation that decomposes facial motion capture data into semantically meaningful motion channels based on the Facial Action Coding System. A captured performance is retargeted onto a morphable 3D face model based on a semantic correspondence between motion capture and 3D scan data. The resulting facial animation reveals a high level of realism by combining the high spatial resolution of a 3D scanner with the high temporal accuracy of motion capture data that accounts for subtle facial movements with sparse measurements.

Such an animation system allows us to systematically investigate human perception of moving faces. It offers control over many aspects of the appearance of a dynamic face, while utilizing as much measured data as possible to avoid artistic biases. Using our animation system, we report results of an experiment that investigates the perceived naturalness of facial motion in a preference task. For expressions with small amounts of head motion, we find a benefit for our part-based generative animation system over an example-based approach that deforms the whole face at once.
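The part-based synthesis described above can be pictured as a linear blendshape model: each Action Unit contributes a displacement field that is scaled by a per-frame activation and added to the neutral face. The sketch below illustrates only this general idea, not the authors' implementation; the array names (neutral, au_basis, activations) are hypothetical.

    import numpy as np

    def retarget_frame(neutral, au_basis, activations):
        """Synthesize one frame of facial animation from AU activations.

        neutral     : (V, 3) vertices of the neutral-pose face model
        au_basis    : (K, V, 3) displacement field per FACS Action Unit
        activations : (K,) activation weight of each Action Unit this frame
        """
        # Weighted sum of AU displacement fields, added to the neutral shape.
        return neutral + np.tensordot(activations, au_basis, axes=1)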


ACM Transactions on Applied Perception | 2008

Evaluating the perceptual realism of animated facial expressions

Christian Wallraven; Martin Breidt; Douglas W. Cunningham; Heinrich H. Bülthoff

The human face is capable of producing an astonishing variety of expressions—expressions for which sometimes the smallest difference changes the perceived meaning considerably. Producing realistic-looking facial animations that are able to transmit this degree of complexity continues to be a challenging research topic in computer graphics. One important question that remains to be answered is: When are facial animations good enough? Here we present an integrated framework in which psychophysical experiments are used in a first step to systematically evaluate the perceptual quality of several different computer-generated animations with respect to real-world video sequences. The first experiment provides an evaluation of several animation techniques, exposing specific animation parameters that are important to achieve perceptual fidelity. In a second experiment, we then use these benchmarked animation techniques in the context of perceptual research in order to systematically investigate the spatiotemporal characteristics of expressions. A third and final experiment uses the quality measures that were developed in the first two experiments to examine the perceptual impact of changing facial features to improve the animation techniques. Using such an integrated approach, we are able to provide important insights into facial expressions for both the perceptual and computer graphics community.


IEEE International Conference on Automatic Face and Gesture Recognition | 2011

Robust semantic analysis by synthesis of 3D facial motion

Martin Breidt; Heinrich H. Bülthoff; Cristóbal Curio

Rich face models already have a large impact on the fields of computer vision, perception research, as well as computer graphics and animation. Attributes such as descriptiveness, semantics, and intuitive control are desirable properties but hard to achieve. Towards the goal of building such high-quality face models, we present a 3D model-based analysis-by-synthesis approach that is able to parameterize 3D facial surfaces, and that can estimate the state of semantically meaningful components, even from noisy depth data such as that produced by Time-of-Flight (ToF) cameras or devices such as Microsoft Kinect. At the core, we present a specialized 3D morphable model (3DMM) for facial expression analysis and synthesis. In contrast to many other models, our model is derived from a large corpus of localized facial deformations that were recorded as 3D scans from multiple identities. This allows us to analyze unstructured dynamic 3D scan data using a modified Iterative Closest Point model fitting process, followed by a constrained Action Unit model regression, resulting in semantically meaningful facial deformation time courses. We demonstrate the generative capabilities of our 3DMMs for facial surface reconstruction on high and low quality surface data from a ToF camera. The analysis of simultaneous recordings of facial motion using passive stereo and noisy Time-of-Flight camera shows good agreement of the recovered facial semantics.
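For intuition, the "constrained Action Unit model regression" step can be approximated as a non-negative least-squares fit: given a registered scan, find AU weights whose combined displacement fields best explain the observed deformation. A minimal sketch under that assumption; the linear AU basis and all names are hypothetical, and the paper's exact constraints and fitting pipeline differ.

    import numpy as np
    from scipy.optimize import nnls

    def fit_au_weights(observed, neutral, au_basis):
        """Estimate non-negative AU activations for one registered scan.

        observed : (V, 3) scan vertices in correspondence with the model
        neutral  : (V, 3) neutral-pose model vertices
        au_basis : (K, V, 3) displacement field per Action Unit
        """
        K = au_basis.shape[0]
        A = au_basis.reshape(K, -1).T      # (3V, K) design matrix
        b = (observed - neutral).ravel()   # observed displacement vector
        weights, _ = nnls(A, b)            # activations constrained to be >= 0
        return weights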


Computer Animation and Social Agents | 2003

How believable are real faces? Towards a perceptual basis for conversational animation

Douglas W. Cunningham; Martin Breidt; Mario Kleiner; Christian Wallraven; Heinrich H. Bülthoff

Regardless of whether the humans involved are virtual or real, well-developed conversational skills are a necessity. The synthesis of interface agents that are not only understandable but also believable can be greatly aided by knowledge of which facial motions are perceptually necessary and sufficient for clear and believable conversational facial expressions. Here, we recorded several core conversational expressions (agreement, disagreement, happiness, sadness, thinking, and confusion) from several individuals, and then psychophysically determined the perceptual ambiguity and believability of the expressions. The results show that people can identify these expressions quite well, although there are some systematic patterns of confusion. People were also very confident of their identifications and found the expressions to be rather believable. The specific pattern of confusions and confidence ratings has strong implications for conversational animation. Finally, the present results provide the information necessary to begin a more fine-grained analysis of the core components of these expressions.


IEEE International Conference on Robotics and Automation | 2010

A novel framework for closed-loop robotic motion simulation - part I: Inverse kinematics design

Paolo Robuffo Giordano; Carlo Masone; Joachim Tesch; Martin Breidt; Lorenzo Pollini; Heinrich H. Bülthoff

This paper considers the problem of realizing a 6-DOF closed-loop motion simulator by exploiting an anthropomorphic serial manipulator as motion platform. In contrast to standard Stewart platforms, an industrial anthropomorphic manipulator offers a considerably larger motion envelope and higher dexterity, making it a viable and superior alternative. Our work is divided into two papers. In this Part I, we discuss the main challenges in adopting a serial manipulator as motion platform, and thoroughly analyze one key issue: the design of a suitable inverse kinematics scheme for online motion reproduction. Experimental results are presented to analyze the effectiveness of our approach. Part II [1] will address the design of a motion cueing algorithm tailored to the robot kinematics, and will provide an experimental evaluation on the chosen scenario: closed-loop simulation of a Formula 1 racing car.
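The paper's inverse kinematics scheme is not reproduced here, but the core of most online IK solvers for serial manipulators is a damped least-squares (Levenberg-Marquardt style) joint update, which stays well behaved near singularities. The following is only a generic sketch of that standard technique, not the authors' design.

    import numpy as np

    def dls_ik_step(jacobian, pose_error, damping=0.05):
        """One damped least-squares joint-velocity update for online IK.

        jacobian   : (6, N) manipulator Jacobian at the current configuration
        pose_error : (6,) end-effector pose error (position + orientation)
        damping    : regularizer keeping updates bounded near singularities
        """
        J = jacobian
        JJt = J @ J.T + damping**2 * np.eye(J.shape[0])
        return J.T @ np.linalg.solve(JJt, pose_error)  # (N,) joint increment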


Journal of Craniofacial Surgery | 2009

Three-dimensional assessment of facial development in children with Pierre Robin sequence.

Michael Krimmel; Susanne Kluba; Martin Breidt; Margit Bacher; Klaus Dietz; Heinrich H. Bülthoff; Siegmar Reinert

Newborns with Pierre Robin sequence (PRS) have mandibular hypoplasia, glossoptosis, and possibly cleft palate. Their facial appearance is characteristic, but their subsequent facial development is controversial. The aim of this study was to analyze the facial development of children with PRS. In a prospective, cross-sectional study, 344 healthy children and 37 children with PRS and cleft palate younger than 8 years were scanned three-dimensionally. Twenty-one standard anthropometric landmarks were identified, and the images were superimposed. Growth curves for normal facial development were calculated, and the facial morphology of children with PRS was compared with that of healthy children. The facial growth of children with PRS in the transverse and vertical directions was normal. In the sagittal direction, the mandibular deficit was confirmed. Except for the orbital landmarks and nasion, all landmarks of the midface demonstrated a significant sagittal deficit. This difference from healthy children remained constant across all ages. Our study cannot support the theory of mandibular catch-up growth. The sagittal deficit of the midface was observed at all ages, indicating that children with PRS have a very early, severe, and persistent underdevelopment of this part of the face. We conclude that this disturbance must be addressed in early childhood with orthodontic and speech therapy.
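The landmark superimposition mentioned above is commonly performed with a rigid Procrustes (Kabsch) alignment; the study does not specify its method, so the following is only a generic sketch of that standard technique.

    import numpy as np

    def procrustes_align(source, target):
        """Rigidly superimpose one landmark set onto another.

        source, target : (L, 3) corresponding anthropometric landmarks
        Returns source after the least-squares rotation and translation.
        """
        mu_s, mu_t = source.mean(axis=0), target.mean(axis=0)
        S, T = source - mu_s, target - mu_t
        U, _, Vt = np.linalg.svd(S.T @ T)                  # Kabsch algorithm
        d = np.sign(np.linalg.det(U) * np.linalg.det(Vt))  # avoid reflections
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T            # proper rotation
        return S @ R.T + mu_t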


Applied Perception in Graphics and Visualization | 2004

View dependence of complex versus simple facial motions

Christian Wallraven; Douglas W. Cunningham; Martin Breidt; Heinrich H. Bülthoff

In this study we investigate the viewpoint dependency of complex facial expressions versus simple facial motions (so-called “action units” [Ekman and Friesen 1978]). The results not only shed light on the cognitive processes underlying the processing of complex and simple facial motion for expression recognition, but also suggest ways to incorporate these results into computer graphics and computer animation [Breidt et al. 2003]. For example, expression recognition might be highly viewpoint dependent, making it difficult to recognize expressions from the side. As a direct consequence, modeling of expressions would then require only the frontal views to “look good”, i.e., it would in principle be unnecessary to attempt detailed 3D modeling of expressions. If, however, recognition of expressions were view-invariant, then modeling would have to provide a faithful 3D rendering of facial expressions.


IEEE International Conference on Robotics and Automation | 2010

A novel framework for closed-loop robotic motion simulation - part II: Motion cueing design and experimental validation

Paolo Robuffo Giordano; Carlo Masone; Joachim Tesch; Martin Breidt; Lorenzo Pollini; Heinrich H. Bülthoff

This paper, divided into two parts, considers the problem of realizing a 6-DOF closed-loop motion simulator by exploiting an anthropomorphic serial manipulator as motion platform. After having proposed a suitable inverse kinematics scheme in Part I [1], we address here the other key issue, i.e., devising a motion cueing algorithm tailored to the specific robot motion envelope. An extension of the well-known classical washout filter, designed in cylindrical coordinates, provides an effective solution to this problem. The paper then presents a thorough experimental evaluation of the overall architecture (inverse kinematics + motion cueing) on the chosen scenario: closed-loop simulation of a Formula 1 racing car. These results prove the feasibility of our approach in fully exploiting the robot's motion capabilities as a motion simulator.
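The classical washout idea the abstract extends is, at its core, high-pass filtering: transient acceleration cues are reproduced by the platform, while sustained components are "washed out" so the robot drifts back toward its neutral pose. A first-order sketch of that principle only; the actual algorithm is higher order, adds tilt coordination, and operates here in cylindrical coordinates.

    import numpy as np

    def washout_highpass(accel, dt, cutoff_hz=0.3):
        """First-order high-pass washout of one acceleration channel.

        accel     : (T,) simulated acceleration samples
        dt        : sample period in seconds
        cutoff_hz : below this frequency, cues are washed out
        """
        rc = 1.0 / (2.0 * np.pi * cutoff_hz)
        alpha = rc / (rc + dt)
        out = np.zeros(len(accel))
        for i in range(1, len(accel)):
            # Discrete first-order high-pass filter recurrence.
            out[i] = alpha * (out[i - 1] + accel[i] - accel[i - 1])
        return out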


Joint Pattern Recognition Symposium | 2009

Markerless 3D Face Tracking

Christian Walder; Martin Breidt; Heinrich H. Bülthoff; Bernhard Schölkopf; Cristóbal Curio

We present a novel algorithm for the markerless tracking of deforming surfaces such as faces. We acquire a sequence of 3D scans along with color images at 40 Hz. The data is then represented by implicit surface and color functions, using a novel partition-of-unity type method that efficiently combines local regressors via nearest neighbor searches. Both these functions act on the 4D space of 3D plus time, and use temporal information to handle the noise in individual scans. After interactive registration of a template mesh to the first frame, it is then automatically deformed to track the scanned surface, using the variation of both shape and color as features in a dynamic energy minimization problem. Our prototype system yields high-quality animated 3D models in correspondence, at a rate of approximately twenty seconds per timestep. Tracking results for faces and other objects are presented.
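The partition-of-unity idea mentioned above can be sketched as follows: each local regressor is trusted near its own center, and the global function value is a weighted blend whose weights sum to one. This illustration uses Gaussian weights over all centers for brevity, whereas the paper restricts the blend to nearest neighbors; all names here are hypothetical.

    import numpy as np

    def pou_eval(x, centers, local_models, sigma=1.0):
        """Evaluate a partition-of-unity blend of local regressors at x.

        x            : (D,) query point
        centers      : (M, D) center of each local fit
        local_models : list of M callables, each accurate near its center
        """
        d2 = np.sum((centers - x) ** 2, axis=1)
        w = np.exp(-d2 / (2.0 * sigma**2))
        w /= w.sum()  # normalized weights form a partition of unity
        return sum(wi * m(x) for wi, m in zip(w, local_models))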

Collaboration


Dive into Martin Breidt's collaborations.

Top Co-Authors

Klaus Dietz

University of Tübingen
