Publication


Featured research published by Verónica Orvalho.


International Conference on Computer Graphics and Interactive Techniques | 2010

A practical appearance model for dynamic facial color

Jorge Jimenez; Timothy Scully; Nuno Barbosa; Craig Donner; Xenxo Alvarez; Teresa Vieira; Paul J. Matts; Verónica Orvalho; Diego Gutierrez; Tim Weyrich

Facial appearance depends on both the physical and physiological state of the skin. As people move, talk, undergo stress, and change expression, skin appearance is in constant flux. One of the key indicators of these changes is the color of skin. Skin color is determined by scattering and absorption of light within the skin layers, caused mostly by concentrations of two chromophores, melanin and hemoglobin. In this paper we present a real-time dynamic appearance model of skin built from in vivo measurements of melanin and hemoglobin concentrations. We demonstrate an efficient implementation of our method, and show that it adds negligible overhead to existing animation and rendering pipelines. Additionally, we develop a realistic, intuitive, and automatic control for skin color, which we term a skin appearance rig. This rig can easily be coupled with a traditional geometric facial animation rig. We demonstrate our method by augmenting digital facial performance with realistic appearance changes.
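The abstract above describes skin color as the result of light absorption by two chromophores, melanin and hemoglobin. As a rough illustration of that idea (not the paper's measured model), the sketch below attenuates a base skin albedo per color channel with a simple Beer-Lambert-style exponential; the absorption coefficients are hypothetical placeholders chosen only to reflect that hemoglobin absorbs green and blue more strongly than red:

```python
import math

# Hypothetical per-channel (R, G, B) absorption strengths, NOT the paper's
# in vivo measurements: hemoglobin absorbs green/blue strongly, melanin
# absorbs broadly with more absorption at shorter wavelengths.
ABS_MELANIN = (0.50, 0.70, 0.90)
ABS_HEMOGLOBIN = (0.10, 0.85, 0.60)

def skin_color(base_rgb, melanin, hemoglobin):
    """Attenuate a base skin albedo by chromophore concentrations
    using a simple Beer-Lambert-style exponential falloff."""
    return tuple(
        c * math.exp(-(melanin * am + hemoglobin * ah))
        for c, am, ah in zip(base_rgb, ABS_MELANIN, ABS_HEMOGLOBIN)
    )

pale = skin_color((1.0, 0.9, 0.8), melanin=0.1, hemoglobin=0.1)
blushing = skin_color((1.0, 0.9, 0.8), melanin=0.1, hemoglobin=0.6)
# Raising the hemoglobin concentration suppresses green and blue more than
# red, shifting the rendered tone toward red -- a blush or stress response.
```

A per-vertex "skin appearance rig" in the paper's sense would drive the two concentration parameters over time, analogously to how a geometric rig drives vertex positions.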


Eurographics | 2012

A Facial Rigging Survey

Verónica Orvalho; Pedro Bastos; Frederic I. Parke; Bruno Oliveira; Xenxo Alvarez

Rigging is the process of setting up a group of controls to operate a 3D model, analogous to the strings of a puppet. It plays a fundamental role in the animation process as it eases the manipulation and editing of expressions, but rigging can be very laborious and cumbersome for an artist. This difficulty arises from the lack of a standard definition of what a rig is and the multitude of approaches to setting up a face. This survey presents a critical review of the fundamentals of rigging, with an outlook on the different techniques, their uses and problems. It describes the main problems that appear when preparing a character for animation, and gives an overview of the role and relationship between the rigger and the animator. It continues with an exhaustive analysis of the published literature and previous work, centered on the facial rigging pipeline. Finally, the survey discusses future directions of facial rigging.


Computer Graphics Forum | 2008

Transferring the Rig and Animations from a Character to Different Face Models

Verónica Orvalho; Ernesto Zacur; Antonio Susín

We introduce a facial deformation system that allows artists to define and customize a facial rig and later apply the same rig to different face models. The method uses a set of landmarks that define specific facial features and deforms the rig anthropometrically. We find the correspondence of the main attributes of a source rig, transfer them to different three-dimensional (3D) face models and automatically generate a sophisticated facial rig. The method is general and can be used with any type of rig configuration. We show how the landmarks, combined with other deformation methods, can adapt different influence objects (NURBS surfaces, polygon surfaces, lattices) and skeletons from a source rig to individual face models, allowing high-quality geometric or physically-based animations. We describe how it is possible to deform the source facial rig, apply the same deformation parameters to different face models and obtain unique expressions. We enable reuse of existing animation scripts and show how shapes mix naturally with one another in different face models. We describe how our method can easily be integrated into an animation pipeline. We end with the results of tests done with major film and game companies to show the strength of our proposal.
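A common building block for this kind of landmark-driven transfer is scattered-data interpolation: displacements known at corresponding landmarks are smoothly extrapolated to every other point of the rig. The sketch below is a minimal radial-basis-function warp of that flavor (biharmonic kernel, hypothetical landmark sets), not the paper's exact deformation method:

```python
import numpy as np

def rbf_warp(src_landmarks, dst_landmarks, points, eps=1e-8):
    """Map arbitrary points from a source face to a target face using
    radial basis function interpolation over corresponding landmarks.
    Kernel phi(r) = r (biharmonic), a common choice for 3D warps."""
    S = np.asarray(src_landmarks, float)   # (n, 3) source landmarks
    D = np.asarray(dst_landmarks, float)   # (n, 3) target landmarks
    # Pairwise distances between source landmarks, lightly regularized.
    K = np.linalg.norm(S[:, None] - S[None, :], axis=-1) + eps * np.eye(len(S))
    W = np.linalg.solve(K, D - S)          # per-landmark displacement weights
    P = np.asarray(points, float)
    Kp = np.linalg.norm(P[:, None] - S[None, :], axis=-1)
    return P + Kp @ W

# Toy example: five corresponding landmarks on a "source" and "target" face.
src = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1), (1, 1, 1)]
dst = [(0, 0, 0), (1.2, 0, 0), (0, 0.9, 0), (0, 0, 1.1), (1.1, 1.0, 1.2)]
warped = rbf_warp(src, dst, src)  # landmarks themselves map (almost) exactly
```

Applied to every control point of a rig (joints, influence-object vertices), such a warp carries the whole rig onto a new face while the landmarks pin the anatomically meaningful features.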


Human Factors in Computing Systems | 2012

Shape your body: control a virtual silhouette using body motion

Luís Leite; Verónica Orvalho

In this paper we propose to use our body as a puppetry controller, giving life to a virtual silhouette through acting. A framework based on the Microsoft Kinect, using OpenNI and Unity, was deployed to animate a silhouette in real time. This was used to perform a set of experiments related to users' interaction with human-like and non-human-like puppets. We believe that a performance-driven silhouette can be just as expressive as a traditional shadow puppet, with a high degree of freedom, making use of our entire body as an input. We describe our solution, which allows real-time interactive control of virtual shadow puppets for performance animation based on body motion. We show through our experiment, performed by non-expert artists, that using our body to control puppets is like mixing the performance of an actor with the manipulation of a puppeteer.


i-Perception | 2016

Apparent Biological Motion in First and Third Person Perspective

Emmanuele Tidoni; Michele Scandola; Verónica Orvalho; Matteo Candidi

Apparent biological motion is the perception of plausible movements when two alternating images depicting the initial and final phase of an action are presented at specific stimulus onset asynchronies. Here, we show lower subjective apparent biological motion perception when actions are observed from a first relative to a third visual perspective. These findings are discussed within the context of sensorimotor contributions to body ownership.


International Symposium on Communications, Control and Signal Processing | 2012

An interactive game for teaching facial expressions to children with Autism Spectrum Disorders

Suyog Dutt Jain; Birgi Tamersoy; Yan Zhang; Jake K. Aggarwal; Verónica Orvalho

Autism Spectrum Disorders (ASDs), a neurodevelopmental disability in children, are a cause of major concern. Children with ASDs find it difficult to express and recognize emotions, which makes it hard for them to interact socially. Conventional methods use medicinal means, special education and behavioral analysis; they are not always successful and are usually expensive. There is a significant need to develop technology-based methods for effective intervention. We propose an interactive game design that uses modern computer vision and computer graphics techniques. The game tracks facial features and uses the tracked features to: 1) recognize the facial expressions of the player, and 2) animate an avatar that mimics the player's facial expressions. The ultimate goal of the game is to influence the emotional behavior of the player.


Computers & Graphics | 2012

Special Section on CANS: Sketch express: A sketching interface for facial animation

Jose Carlos Miranda; Xenxo Alvarez; João Orvalho; Diego Gutierrez; António Augusto de Sousa; Verónica Orvalho

One of the most challenging tasks for an animator is to quickly create convincing facial expressions. Finding an effective control interface to manipulate facial geometry has traditionally required experienced users (usually technical directors), who create and place the necessary animation controls. Here we present our sketching interface control system, designed to reduce the time and effort necessary to create facial animations. Inspired by the way artists draw, where simple strokes define the shape of an object, our approach allows the user to sketch such strokes either directly on the 3D mesh or on two different types of canvas: a fixed 2D canvas or more flexible 2.5D dynamic screen-aligned billboards. In all cases, the strokes do not control the geometry of the face but the underlying animation rig, allowing direct manipulation of the rig elements. Additionally, we show how the strokes can be easily reused on different characters, allowing retargeting of poses to several models. We illustrate our interactive approach using varied facial models of different styles, showing that first-time users typically create appealing 3D poses and animations in just a few minutes. We also present the results of a user study and deploy our method in an application for artistic purposes. Our system has also been used in a pioneering serious-game context, where the goal was to teach people with Autism Spectrum Disorders (ASD) to recognize facial emotions, using real-time synthesis and automatic facial expression analysis.


International Conference on Enterprise Information Systems | 2011

LIFEisGAME: A Facial Character Animation System to Help Recognize Facial Expressions

Tiago Fernandes; Samanta Alves; José Miranda; Cristina Queirós; Verónica Orvalho

This article presents the LIFEisGAME project, a serious game that helps children with ASDs recognize and express emotions through facial expressions. The game design tackles one of the main stages of the experiential learning cycle of emotion recognition: recognize and mimic (game mode: build a face). We describe the technology behind the game, which focuses on a character animation pipeline and a sketching algorithm, and we detail the facial expression analyzer used to calculate the score in the game. We also present a study that analyzes what type of characters children prefer when playing a game. Finally, we present a pilot study performed with children with ASD.


PLOS ONE | 2013

Does My Face FIT?: A Face Image Task Reveals Structure and Distortions of Facial Feature Representation

Christina T. Fuentes; Catarina Runa; Xenxo Alvarez Blanco; Verónica Orvalho; Patrick Haggard

Despite extensive research on face perception, few studies have investigated individuals’ knowledge about the physical features of their own face. In this study, 50 participants indicated the location of key features of their own face, relative to an anchor point corresponding to the tip of the nose, and the results were compared to the true location of the same individual’s features from a standardised photograph. Horizontal and vertical errors were analysed separately. An overall bias to underestimate vertical distances revealed a distorted face representation, with reduced face height. Factor analyses were used to identify separable subconfigurations of facial features with correlated localisation errors. Independent representations of upper and lower facial features emerged from the data pattern. The major source of variation across individuals was in representation of face shape, with a spectrum from tall/thin to short/wide representation. Visual identification of one’s own face is excellent, and facial features are routinely used for establishing personal identity. However, our results show that spatial knowledge of one’s own face is remarkably poor, suggesting that face representation may not contribute strongly to self-awareness.


Advances in Computer Entertainment Technology | 2011

Anim-actor: understanding interaction with digital puppetry using low-cost motion capture

Luís Leite; Verónica Orvalho

Character animation has traditionally relied on complex and time-consuming keyframe animation or expensive motion capture techniques. These methods are out of reach for low-budget animation productions. We present a low-cost performance-driven technique that allows real-time interactive control of puppets for performance or film animation. In this paper we study how users can interpret simple actions, like walking, with different puppets. The system was deployed for the Xbox Kinect and shows how low-cost equipment can provide a new means of motion capture. Finally, we performed a pilot experiment animating silhouettes and 3D puppets in real time, based on body movements performed by non-expert artists representing different Olympic sports of varying visual representation, for other participants to identify. As a result, we found that interaction with 2D puppets requires more interpretation than interaction with 3D puppets.
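The core mapping in a system like this is from tracked 3D skeleton joints to the 2D rotations of a flat puppet's parts. A minimal sketch of that idea follows; the joint names, the limb list, and the orthographic projection are illustrative assumptions, not the OpenNI API or the paper's actual pipeline:

```python
import math

def limb_angle(parent_xy, child_xy):
    """Planar angle of a limb segment, used to rotate a 2D puppet part."""
    dx = child_xy[0] - parent_xy[0]
    dy = child_xy[1] - parent_xy[1]
    return math.degrees(math.atan2(dy, dx))

def drive_silhouette(joints_3d):
    """Project tracked 3D joints orthographically onto the screen plane
    (drop z) and compute one rotation per limb of the shadow puppet."""
    proj = {name: (x, y) for name, (x, y, z) in joints_3d.items()}
    limbs = [("shoulder_r", "elbow_r"), ("elbow_r", "hand_r")]  # hypothetical rig
    return {child: limb_angle(proj[parent], proj[child]) for parent, child in limbs}

# One tracked pose: right arm extended sideways, forearm hanging down.
pose = {"shoulder_r": (0.0, 1.4, 2.0),
        "elbow_r": (0.3, 1.4, 2.0),
        "hand_r": (0.3, 1.1, 2.0)}
angles = drive_silhouette(pose)
# Upper arm is horizontal (0 degrees); forearm points straight down (-90 degrees).
```

In a live setup the same computation would run every frame on the skeleton stream from the sensor, with the resulting angles applied to the silhouette's sprite hierarchy.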

Collaboration

Top co-authors of Verónica Orvalho:

Luís Leite (Faculdade de Engenharia da Universidade do Porto)

António Marques (Oporto Polytechnic Institute)

José Miranda (Instituto Politécnico Nacional)