Rossana Baptista Queiroz
Pontifícia Universidade Católica do Rio Grande do Sul
Publications
Featured research published by Rossana Baptista Queiroz.
conference on computability in europe | 2008
Rossana Baptista Queiroz; Leandro Motta Barros; Soraia Raupp Musse
Eyes play an important role in communication among people. Eye motion expresses emotion and regulates the flow of conversation. Hence, we consider it fundamental that virtual humans and other characters present convincing and expressive gaze in applications such as Embodied Conversational Agents (ECAs), games and movies. However, in many applications that require automatic generation of facial movements, such as ECAs, the characters' eye motion does not convey their expressiveness. This work proposes a model for the automatic generation of expressive gaze by examining eye behavior in different affective states. To collect data relating gaze and expressiveness, we examined Computer Graphics movies, and used this data as a basis for describing gaze expressions in the proposed model. We also implemented a prototype and performed tests with users in order to observe the impact of eye behavior during expressions of emotion. The results show that the model is capable of generating eye motion that is coherent with the affective state of the virtual character.
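The abstract's core idea, selecting gaze behavior as a function of the character's affective state, can be sketched as a simple lookup. The state names and parameter values below are illustrative placeholders, not the data the authors collected from movies:

```python
# Hypothetical sketch: per-state gaze parameters driving eye animation.
# Values are invented for illustration; the paper derives them from
# observations of Computer Graphics films.
GAZE_PROFILES = {
    # state: (mean gaze-shift angle in degrees,
    #         blink rate per second,
    #         probability of looking away from the interlocutor)
    "joy":     (10.0, 0.30, 0.2),
    "sadness": (25.0, 0.20, 0.7),
    "anger":   (5.0,  0.40, 0.1),
    "neutral": (15.0, 0.25, 0.4),
}

def sample_gaze(state):
    """Return the gaze parameters for an affective state.

    Unknown states fall back to neutral behavior.
    """
    return GAZE_PROFILES.get(state, GAZE_PROFILES["neutral"])
```

An animation loop would query `sample_gaze` whenever the character's affective state changes and feed the parameters to the eye controller.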
conference on computability in europe | 2009
Rossana Baptista Queiroz; Marcelo Cohen; Soraia Raupp Musse
In this article we describe our approach to generating convincing and empathetic facial animation. Our goal is to develop a robust facial animation platform that is usable and easily extensible. We also want to facilitate the integration of research in the area and to directly incorporate the characters into interactive applications such as embodied conversational agents and games. We have developed a framework capable of easily animating MPEG-4 parameterized faces through high-level descriptions of facial actions and behaviors. The animations can be generated in real time for interactive applications. We present some case studies that integrate computer vision techniques in order to provide interaction between the user and a character that responds with different facial actions according to events in the application.
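The "high-level description of facial actions" over MPEG-4 parameterized faces amounts to expanding a named action into a set of Facial Animation Parameter (FAP) displacements. The sketch below is an assumption about how such an expansion table might look; the FAP indices and intensity scaling are illustrative, not the framework's actual tables:

```python
# Hedged sketch: expanding a named high-level facial action into
# (FAP index, displacement) pairs. Indices and values are illustrative
# assumptions, not taken from the paper.
ACTION_TABLES = {
    "raise_eyebrows": [(31, 1.0), (32, 1.0)],  # illustrative brow FAPs
    "smile":          [(53, 0.8), (54, 0.8)],  # illustrative lip FAPs
}

def expand_action(action, intensity):
    """Scale an action's base FAP displacements by an intensity in [0, 1]."""
    return [(fap, round(d * intensity, 3)) for fap, d in ACTION_TABLES[action]]
```

A behavior script could then be a timed sequence of `expand_action` calls whose resulting FAP values are interpolated per frame.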
intelligent virtual agents | 2010
Rossana Baptista Queiroz; Adriana Braun; Juliano Lucas Moreira; Marcelo Cohen; Soraia Raupp Musse; Marcelo Thielo; Ramin Samadani
This paper presents a model to generate personalized facial animations for avatars using Performance Driven Animation (PDA). This approach allows users to reflect their facial expressions in their avatars, taking as input a small set of feature points provided by Computer Vision (CV) tracking algorithms. The model is based on the MPEG-4 Facial Animation standard and uses a hierarchy of animation parameters to animate face regions for which CV data is lacking. To deform the face, we use two skin mesh deformation methods, which are computationally cheap and provide avatar animation in real time. We performed a qualitative evaluation with subjects; results show that the proposed model can generate coherent and visually satisfactory animations.
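The hierarchy idea, regions without tracked CV data borrowing a value from their parent region, can be sketched as a walk up a parent map. Region names and the fallback rule are hypothetical stand-ins for the paper's MPEG-4 parameter hierarchy:

```python
# Sketch, assuming a simple parent map over face regions: a region
# without tracked data inherits the value of its nearest tracked
# ancestor. Region names are hypothetical.
PARENT = {
    "left_inner_brow": "brows",
    "left_outer_brow": "brows",
    "brows": "upper_face",
    "upper_face": None,
}

def resolve(region, tracked):
    """Walk up the hierarchy until a tracked value is found."""
    while region is not None:
        if region in tracked:
            return tracked[region]
        region = PARENT[region]
    return 0.0  # neutral pose if nothing along the chain was tracked
```

With a sparse tracker that only reports an overall brow value, both individual brow regions still receive a plausible animation parameter.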
2009 VIII Brazilian Symposium on Games and Digital Entertainment | 2009
Henry Braun; Rafael Hocevar; Rossana Baptista Queiroz; Marcelo Cohen; Juliano Lucas Moreira; Júlio C. S. Jacques Júnior; Adriana Braun; Soraia Raupp Musse; Ramin Samadani
In this paper we present a platform called VhCVE, which integrates issues relevant to Collaborative Virtual Environment applications. The main goal is to provide a framework where participants can interact with each other by voice and chat. Manipulation tools, such as a mouse controlled through Computer Vision, and physics simulation are included, as well as rendering techniques (e.g. light sources, shadows and weather effects). In addition, avatar animation in terms of face and body motion is provided. Results indicate that our platform can be used as an interactive virtual world to help communication among people.
intelligent virtual agents | 2007
Rossana Baptista Queiroz; Leandro Motta Barros; Soraia Raupp Musse
We present a model for automatic generation of expressive gaze in virtual agents. Our main focus is the eye behavior associated with expressiveness. Our approach is to collect data from animated Computer Graphics films and codify such observations into an animation framework. The main contribution is the modeling aspects of an animation system, calibrated with empirical observations in order to generate realistic eye motion. Results show that this approach generates convincing animations that improve the empathy of virtual agents.
computer games | 2015
Rossana Baptista Queiroz; Adriana Braun; Soraia Raupp Musse
This work presents a methodology which aims to improve and automate the process of generating facial animation for interactive applications. We propose an adaptive and semiautomatic methodology which allows transferring facial expressions from one face mesh to another. The model has three main stages: rigging, expression transfer and animation, where the output meshes can be used as key poses for blendshape-based animation. The input of the model is a face mesh in neutral pose and a set of face data that can be provided from different sources, such as artist-crafted meshes and motion capture data. The model generates a set of blendshapes corresponding to the input set, with minimal user intervention. We opted to use a simple rig structure in order to provide a trivial correspondence with both systems based on sparse facial feature points and the dense geometric data supplied by RGBD-based systems. The rig structure can be refined on-the-fly to deal with different input geometric data as needed. The main contribution of this work is an adaptive methodology which creates facial animations with little user intervention and is capable of transferring expression details according to the need and/or amount of input data.
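Blendshape-based animation, which the output key poses feed into, combines each shape's offset from the neutral mesh by a weight. A minimal sketch of the standard formulation, result = N + Σᵢ wᵢ (Bᵢ − N), independent of this paper's rigging and transfer stages:

```python
import numpy as np

def blend(neutral, blendshapes, weights):
    """Combine blendshape key poses relative to a neutral mesh.

    neutral:     (V, 3) vertex positions of the neutral pose
    blendshapes: list of (V, 3) key-pose vertex arrays
    weights:     one scalar weight per blendshape
    """
    neutral = np.asarray(neutral, dtype=float)
    out = neutral.copy()
    for shape, w in zip(blendshapes, weights):
        out += w * (np.asarray(shape, dtype=float) - neutral)
    return out
```

A weight of 0 leaves the face neutral, 1 reproduces the key pose exactly, and intermediate values interpolate, which is what makes the generated meshes directly usable as animation targets.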
brazilian symposium on computer graphics and image processing | 2010
Rossana Baptista Queiroz; Marcelo Cohen; Juliano Lucas Moreira; Adriana Braun; Júlio C. S. Jacques Júnior; Soraia Raupp Musse
This work describes a methodology for generating facial ground truth with synthetic faces. Our focus is to provide a way to generate accurate data for the evaluation of Computer Vision algorithms that detect faces and their components. We present a prototype in which we can generate facial animation videos using a database of 3D face models, controlling face actions, illumination conditions and camera position. The facial animation platform allows us to generate animations with speech, facial expressions and eye motion, in order to approach realistic human face behavior. In addition, our model provides the ground truth of a set of facial feature points at each frame. As a result, we are able to build a video database of synthetic human faces with ground truth, which can be used for training and evaluation of several tracking and/or detection algorithms. We also present experiments using our generated videos to evaluate face, eye and mouth detection algorithms, comparing their performance with real video sequences.
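Per-frame ground-truth feature points make detector evaluation a direct measurement. A minimal sketch of one common metric, the mean Euclidean error between detected and ground-truth points; the point layout below is an invented example, not data from the paper:

```python
import math

def mean_point_error(detected, ground_truth):
    """Mean Euclidean distance between paired 2D feature points (pixels)."""
    return sum(math.dist(d, g)
               for d, g in zip(detected, ground_truth)) / len(detected)

# Illustrative example: ground-truth eye centers from a synthetic frame
# versus a hypothetical detector's output for the same frame.
gt = [(100, 120), (140, 120)]
det = [(103, 121), (138, 119)]
err = mean_point_error(det, gt)  # pixels
```

Aggregating this error over all frames of a synthetic video gives an objective score for each face, eye or mouth detector under test.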
Computers in Entertainment | 2018
Adriana Braun; Rossana Baptista Queiroz; Won-Sook Lee; Bruno Feijó; Soraia Raupp Musse
This article proposes the Persona method, whose goal is to learn and classify the facial actions of actors in video sequences. Persona is based on standard action units. We use a database with the main expressions mapped and pre-classified, which allows automatic learning and face selection. The learning stage uses Support Vector Machine (SVM) classifiers to identify expressions from a set of feature points tracked in the input video. After that, labeled 3D control masks are built for each selected action unit or expression, which compose the Persona structure. The proposed method is almost automatic (little intervention is needed) and does not require markers on the actor's face or motion capture devices. Many applications are possible based on the Persona structure, such as expression recognition, customized avatar deformation and mood analysis, as discussed in this article.
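The learning stage, an SVM over tracked feature points, can be sketched with scikit-learn (a library choice of ours; the article does not name its SVM implementation). The toy samples below, flattened (x, y) coordinates of two mouth corners, stand in for real tracked data:

```python
# Hedged sketch of SVM-based expression classification over feature-point
# vectors. Training data and labels are invented for illustration.
from sklearn.svm import SVC

# Each sample: flattened (x, y) coordinates of tracked feature points
# (here, just the two mouth corners of a normalized face).
X = [[0.0,  0.0, 1.0,  0.0],   # corners level      -> "neutral"
     [0.0,  0.2, 1.0,  0.2],   # corners raised     -> "smile"
     [0.0, -0.2, 1.0, -0.2]]   # corners lowered    -> "frown"
y = ["neutral", "smile", "frown"]

clf = SVC(kernel="linear").fit(X, y)
label = clf.predict([[0.0, 0.19, 1.0, 0.21]])[0]  # raised corners
```

In the Persona pipeline, the predicted label would then select which labeled 3D control mask applies to the current frame.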
Entertainment Computing | 2017
Rossana Baptista Queiroz; Adriana Braun; Soraia Raupp Musse
This work presents a methodology for generic facial expression transfer, aiming to speed up the process of generating facial animation for interactive applications. We propose an adaptive and semiautomatic methodology which allows transferring facial expressions from one face mesh to another. The model has three main stages: rigging, expression transfer and animation, where the output meshes can be used as key poses for blendshape-based animation. The input of the model is a face mesh in neutral pose and a set of face data that can be provided from different sources, such as artist-crafted meshes and motion capture data. The model generates a set of blendshapes corresponding to the input set, with minimal user intervention. We used a simple rig structure in order to provide a trivial correspondence with both systems based on sparse facial feature points and the dense geometric data supplied by RGBD-based systems. The rig structure can be refined on-the-fly to deal with different input geometric data as needed. Results show the quality of the transferred expressions, assessed using face data including artist-crafted meshes and performance-driven animation.
computer games | 2017
Adriano Luis Kerber; Rossana Baptista Queiroz; Daniel Camozzato; Vinícius J. Cassol