Publications


Featured research published by Stephane Garchery.


Conference on Multimedia Modeling | 2005

Believability and Interaction in Virtual Worlds

Nadia Magnenat-Thalmann; HyungSeok Kim; Arjan Egges; Stephane Garchery

In this paper, we present a discussion of believability in Virtual Environments, emotional simulation, and Embodied Conversational Agents (ECAs). We discuss the definition of believability and the three elements of believable environments (immersion, presentation, and interaction). We also discuss believability and interfaces. Finally, ECAs and the simulation of emotion and personality are explained and presented.


International Journal of Imaging Systems and Technology | 2003

Synthetic faces: Analysis and applications

Sumedha Kshirsagar; Stephane Garchery; Gael Sannier; Nadia Magnenat-Thalmann

Facial animation has been a topic of intensive research for more than three decades. Still, designing realistic facial animations remains a challenging task. Several models and tools have been developed so far to automate the design of faces and facial animations synchronized with speech, emotions, and gestures. In this article, we give a brief overview of the existing parameterized facial animation systems. We then turn our attention to facial expression analysis, which we believe is the key to improving realism in animated faces. We report the results of our research regarding the analysis of facial motion capture data. We use an optical tracking system that extracts the 3D positions of markers attached at specific feature point locations. We capture the movements of these face markers for a talking person. We then form a vector space representation by using principal component analysis of this data. We call this space “expression and viseme space.” As a result, we propose a new parameter space for sculpting facial expressions for synthetic faces. Such a representation not only offers insight into improving the realism of animated faces, but also gives a new way of generating convincing speech animation and blending between several expressions. Expressive facial animation finds a variety of applications ranging from virtual environments to entertainment and games. With the advances in Internet technology, the development of online sales assistants, Web navigation aides, and Web-based interactive tutors is more promising than ever before. We overview the recent advances in the field of facial animation on the Web, with a detailed look at the requirements for Web-based facial animation systems and various applications.
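
As a rough illustration of the analysis step the abstract describes, the sketch below builds a low-dimensional "expression and viseme space" by applying PCA to flattened marker trajectories. This is a minimal sketch, not the authors' pipeline: the marker count, frame count, random stand-in data, and variance threshold are all invented for illustration.

```python
# Hypothetical sketch: building an "expression and viseme space" via PCA
# over optical motion-capture data. All dimensions and data are invented.
import numpy as np

rng = np.random.default_rng(0)

# Stand-in capture data: 500 frames of 27 face markers in 3D, flattened
# to one 81-dimensional vector per frame (the paper's setup may differ).
frames = rng.normal(size=(500, 27 * 3))

# Center the data and extract principal components via SVD.
mean_face = frames.mean(axis=0)
centered = frames - mean_face
_, singular_values, components = np.linalg.svd(centered, full_matrices=False)

# Keep enough components to explain ~95% of the variance; each retained
# component is one axis of the expression/viseme space.
variance = singular_values ** 2
cumulative = np.cumsum(variance) / variance.sum()
k = int(np.searchsorted(cumulative, 0.95)) + 1
basis = components[:k]                     # shape (k, 81)

# Project a frame into the space and reconstruct it from k parameters.
frame = frames[0]
params = basis @ (frame - mean_face)       # k expression parameters
reconstructed = mean_face + params @ basis
print(k, np.abs(frame - reconstructed).max())
```

Sculpting an expression then amounts to editing the small vector `params` rather than 81 raw marker coordinates, which is what makes the representation attractive for blending and speech animation.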


Computers & Graphics | 2004

Adaptation of virtual human animation and representation for MPEG

Thomas Di Giacomo; Chris Joslin; Stephane Garchery; HyungSeok Kim; Nadia Magnenat-Thalmann

While level-of-detail (LoD) methods for the representation of 3D models are efficient and established tools for managing the trade-off between rendering speed and quality, LoD for animation has not yet been studied intensively by the community, and the animation of virtual humans in particular has received little attention. Animation, a major step toward immersive and credible virtual environments, involves heavy computation, and its complexity must therefore be controlled before it can be embedded into real-time systems. Such control becomes even more critical with the emergence of powerful new mobile devices and their increasing use for cyberworlds. With the help of suitable middleware, executables are becoming more and more multi-platform. However, the adaptation of content to various network and terminal capabilities, as well as to different user preferences, is still a key feature that needs to be investigated. It would enable the “Multiple Target Devices, Single Content” concept for virtual environments and would, in theory, make such virtual worlds available under any conditions without the need for multiple versions of the content. It is on this issue that we focus, with particular emphasis on 3D objects and animation. This paper presents theoretical and practical methods for adapting a virtual human's representation and animation stream, both for skeleton-based body animation and for deformation-based facial animation. We also discuss practical details of the integration of our methods into MPEG-21 and MPEG-4 architectures.
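
To make the idea of LoD for animation concrete, here is a minimal sketch of adapting an animation stream to a terminal's budget by temporal decimation and parameter masking. The `Frame` structure, frame-rate tiers, and parameter names are assumptions for illustration; the paper's actual scheme operates on MPEG-4 body and facial animation parameters and is considerably richer.

```python
# Hedged sketch of animation-stream adaptation: decimate frames to a target
# rate and drop low-priority parameters. Structures are invented, not MPEG-4.
from dataclasses import dataclass

@dataclass
class Frame:
    time: float                 # seconds
    params: dict[str, float]    # parameter name -> value

def adapt_stream(frames: list[Frame], target_fps: float,
                 keep_params: set[str]) -> list[Frame]:
    """Temporal decimation plus parameter masking."""
    adapted, next_time = [], 0.0
    step = 1.0 / target_fps
    for f in frames:
        if f.time + 1e-9 >= next_time:
            adapted.append(Frame(f.time,
                                 {k: v for k, v in f.params.items()
                                  if k in keep_params}))
            next_time += step
    return adapted

# A 25 fps stream reduced to 10 fps with only a jaw parameter kept,
# as a low-end mobile terminal might require.
stream = [Frame(i / 25.0, {"jaw_open": 0.1 * i, "brow_raise": 0.05 * i})
          for i in range(25)]
mobile = adapt_stream(stream, target_fps=10.0, keep_params={"jaw_open"})
print(len(stream), "->", len(mobile))
```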


The Visual Computer | 2006

Device-based decision-making for adaptation of three-dimensional content

HyungSeok Kim; Chris Joslin; Thomas Di Giacomo; Stephane Garchery; Nadia Magnenat-Thalmann

The goal of this research was the creation of an adaptation mechanism for the delivery of three-dimensional content. The adaptation of content to various network and terminal capabilities, as well as to different user preferences, is a key feature that needs to be investigated. Current state-of-the-art research on adaptation shows promising results for specific tasks and limited types of content, but is still not well suited to massive heterogeneous environments. In this research, we present a method for transmitting adapted three-dimensional content to multiple target devices. This paper presents theoretical and practical methods for adapting three-dimensional content, including shapes and animation. We also discuss practical details of the integration of our methods into MPEG-21 and MPEG-4 architectures.
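
One way to picture a device-based decision is the sketch below: given a terminal profile, pick the richest content variant that fits both the rendering and bandwidth budgets. The capability fields and the variant table are invented for illustration; the paper's decision mechanism works over actual MPEG-21 descriptors.

```python
# Hedged sketch of device-based adaptation decision-making.
# Capability fields and variants are assumptions, not MPEG-21 structures.
from dataclasses import dataclass

@dataclass
class Terminal:
    max_triangles: int      # rendering budget
    bandwidth_kbps: int     # network capacity

@dataclass
class Variant:
    name: str
    triangles: int
    bitrate_kbps: int

VARIANTS = [
    Variant("full",   100_000, 2_000),
    Variant("medium",  20_000,   500),
    Variant("light",    2_000,    64),
]

def decide(terminal: Terminal) -> Variant:
    """Return the most detailed variant within both budgets."""
    for v in VARIANTS:  # ordered from richest to lightest
        if v.triangles <= terminal.max_triangles and \
           v.bitrate_kbps <= terminal.bandwidth_kbps:
            return v
    return VARIANTS[-1]  # fall back to the lightest variant

print(decide(Terminal(max_triangles=30_000, bandwidth_kbps=800)).name)
```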


Computer Graphics International | 2004

Adaptation mechanism for three dimensional content within the MPEG-21 framework

HyungSeok Kim; Chris Joslin; T. Di Giacomo; Stephane Garchery; Nadia Magnenat-Thalmann

The goal of this research is the creation of an adaptation mechanism for the delivery of three-dimensional content. The adaptation of content to various network and terminal capabilities, as well as to different user preferences, is a key feature that needs to be investigated. Current state-of-the-art research on adaptation shows promising results for specific purposes and limited types of content, but is still not well suited to massive heterogeneous environments. We present a method for transmitting adapted three-dimensional content to multiple target devices. We present theoretical and practical methods for adapting three-dimensional content, including shapes and animation. We also discuss practical details of the integration of our methods into MPEG-21 and MPEG-4 architectures.
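
Within an MPEG-21-style framework, the decision step is driven by a usage-environment description of the terminal and network. The sketch below shows where such capabilities would come from; the XML is an invented simplification for illustration, not the normative MPEG-21 DIA schema.

```python
# Sketch: reading terminal/network capabilities from a simplified,
# invented usage-environment descriptor (not the real MPEG-21 DIA schema).
import xml.etree.ElementTree as ET

DESCRIPTOR = """
<UsageEnvironment>
  <Terminal maxTriangles="2000" displayWidth="320" displayHeight="240"/>
  <Network bandwidthKbps="64"/>
</UsageEnvironment>
"""

root = ET.fromstring(DESCRIPTOR)
terminal = root.find("Terminal").attrib
network = root.find("Network").attrib

budget = {
    "max_triangles": int(terminal["maxTriangles"]),
    "bandwidth_kbps": int(network["bandwidthKbps"]),
}
# These values would then drive a variant selection like the one
# sketched after the previous abstract.
print(budget)
```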


International Conference on Multimedia and Expo | 2004

Multi-resolution meshes for multiple target, single content adaptation within the MPEG-21 framework

HyungSeok Kim; Chris Joslin; Thomas Di Giacomo; Stephane Garchery; Nadia Magnenat-Thalmann

To present three-dimensional data on both heavyweight and lightweight clients, an adaptation scheme is required. Current state-of-the-art research shows promising results for specific purposes, but is still not well suited to lightweight clients such as mobile devices. In this research, we present a method for transmitting adapted 3D content to multiple target devices. To accomplish this goal, we devised a clustered representation of a multi-resolution model that is flexible, simple, and efficient, and that works with the MPEG-21 adaptation mechanism.
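
Vertex clustering is one simple way to build the kind of clustered multi-resolution representation the abstract mentions. The sketch below is a toy version under that assumption; the grid resolutions and the random point set are invented, and the paper's actual scheme is tied to the MPEG-21 mechanism and more elaborate.

```python
# Minimal vertex-clustering sketch: snap vertices to a uniform grid and keep
# one representative per occupied cell; coarser cells give lower LoD levels.
import numpy as np

def cluster_vertices(vertices: np.ndarray, cell: float):
    """Return (coarse vertices, index remap from original to coarse)."""
    keys = np.floor(vertices / cell).astype(int)
    representatives: dict[tuple, int] = {}
    remap = np.empty(len(vertices), dtype=int)
    coarse = []
    for i, key in enumerate(map(tuple, keys)):
        if key not in representatives:
            representatives[key] = len(coarse)
            coarse.append(vertices[i])
        remap[i] = representatives[key]
    return np.array(coarse), remap

rng = np.random.default_rng(1)
verts = rng.uniform(0, 1, size=(10_000, 3))
for cell in (0.05, 0.1, 0.25):     # coarser cells -> lower resolution level
    coarse, _ = cluster_vertices(verts, cell)
    print(f"cell={cell}: {len(verts)} -> {len(coarse)} vertices")
```

A full pipeline would also remap triangle indices through `remap` and drop the degenerate triangles that collapse inside a cell.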


Archive | 2008

Real-Time Adaptive Facial Animation

Stephane Garchery; Thomas Di Giacomo; Nadia Magnenat-Thalmann

Facial modeling and animation are important research topics in computer graphics. A great deal of research has been done in these areas over the last 20 years, but they remain challenging tasks. The impact of previous and ongoing research has been felt in many applications, such as games, Web-based 3D animations, and 3D animated films. Two directions are being investigated: precalculated animation with very realistic results for animated films, and real-time animation for interactive applications. Correspondingly, animation techniques range from key-frame animation, where animators set each frame, to algorithmic parameterized mesh deformation. Many of the proposed deformation models use a parameterization scheme, which helps control the animation. Computer graphics has evolved to a relatively mature state. In parallel with the evolution of 3D graphics technologies, user and application requirements have also increased dramatically, from simple virtual worlds to highly complex, interactive, and detailed virtual environments. Additionally, the targeted display platforms have broadened widely, from dedicated graphics workstations or clusters of machines to standard desktop PCs, laptops, and mobile devices such as personal digital assistants (PDAs) or even mobile phones. Facial animation is one illustration of these closely related evolutions of graphics techniques on the one hand and applications and user requirements on the other. Despite much work on modeling, animation, and rendering techniques, it is still a major challenge to animate a highly realistic face with simulated hair and cloth or to display hundreds of thousands of real-time animated humans on a standard computer, and it is still not possible to render animated characters on most mobile devices. The focus of this chapter is dynamically adaptive real-time facial animation techniques. We discuss methods to automatically and dynamically control the processing and memory loads together with the visual realism of rendered motions for real-time facial animation. Such approaches would theoretically free additional resources, for instance for hair or cloth animation; they should also achieve real-time performance for facial animation across platforms, including lightweight devices, and enable richer virtual environments populated by more and more facially animated humans in a single scene.
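
The core control idea can be pictured as a feedback loop: measure the per-frame cost and raise or lower the facial-animation detail level to hold a target frame time. The sketch below is a toy loop under invented thresholds and level counts; the chapter's actual controller also weighs visual importance, which this sketch ignores.

```python
# Toy dynamic-LoD controller: adjust animation detail to hold a frame budget.
# Thresholds, level count, and the simulated workload are all assumptions.
import time

TARGET_MS = 16.7        # ~60 fps budget
LEVELS = 4              # 0 = coarsest animation, 3 = full detail

def animate_face(level: int) -> None:
    # Placeholder for the real per-frame animation work; simulated cost
    # grows with the detail level.
    time.sleep(0.006 * (level + 1))

level = LEVELS - 1
for frame in range(30):
    start = time.perf_counter()
    animate_face(level)
    elapsed_ms = (time.perf_counter() - start) * 1000
    # Simple hysteresis: drop detail when over budget, recover only when
    # well under budget, to avoid oscillating between levels.
    if elapsed_ms > TARGET_MS and level > 0:
        level -= 1
    elif elapsed_ms < 0.5 * TARGET_MS and level < LEVELS - 1:
        level += 1
print("settled at level", level)
```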


Archive | 2008

Expressive Visual Speech Generation

Thomas Di Giacomo; Stephane Garchery; Nadia Magnenat-Thalmann

With the emergence of 3D graphics, we are now able to create very realistic 3D characters that can move and talk. Multimodal interaction with such characters is also possible, as various technologies have matured for speech and video analysis, natural language dialogue, and animation. However, the behavior expressed by these characters is far from believable in most systems. We feel that this problem arises from their lack of individuality on various levels: perception, dialogue, and expression. In this chapter, we describe the results of research that tries to realistically connect personality and 3D characters, not only on the expressive level (for example, generating individualized expressions on a 3D face), but also, with real-time video tracking, on the dialogue level (generating responses that actually correspond to what a certain personality in a certain emotional state would say) and on the perceptive level (having a virtual character that uses the user's expression data to create corresponding behavior). The idea of linking personality with agent behavior has been discussed by Marsella et al. [33], concerning the influence of emotion on behavior in general, and by Johns et al. [21], concerning how personality and emotion can affect decision making. Traditionally, any text- or voice-driven speech animation system uses phonemes as the basic units of speech and visemes as the basic units of animation. Though text-to-speech synthesizers and phoneme recognizers often use biphone-based techniques, the end user seldom has access to this information, except in dedicated systems. Most commercially and freely available software applications give access only to time-stamped phoneme streams along with audio. Thus, in order to generate animation from this information, an extra level of processing, namely co-articulation, is required. This process accounts for the influence of neighboring visemes in fluent speech production. This processing stage can be eliminated by using the syllable, rather than the phoneme, as the basic unit of speech. Overall, we do not intend to give a complete survey of ongoing research in behavior, emotion, and personality. Our main goal is to create believable conversational agents that can interact through many modalities. We thus concentrate on emotion extraction from a real user (Section 2.3), visyllable-based speech animation (Section 2.4), and dialogue systems and emotions (Section 2.5).
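
The co-articulation stage that visyllable-based animation eliminates can be sketched roughly as follows: each time-stamped viseme's influence decays around its stamp, and the face at time t blends all nearby visemes. The decay shape and the toy viseme targets below are invented, loosely in the spirit of classic dominance-function models, and are not the chapter's method.

```python
# Toy phoneme-level co-articulation: Gaussian dominance around each
# time-stamped viseme, blended into a face pose. All values are invented.
import math

# Time-stamped viseme stream; targets are toy 2-parameter faces:
# (jaw_open, lip_round).
visemes = [
    (0.00, (0.8, 0.1)),   # e.g. "a"
    (0.15, (0.2, 0.9)),   # e.g. "o"
    (0.30, (0.1, 0.2)),   # e.g. "m"
]

def face_at(t: float, width: float = 0.08) -> tuple[float, float]:
    """Blend neighboring visemes with Gaussian dominance around each stamp."""
    weights = [math.exp(-((t - ti) / width) ** 2) for ti, _ in visemes]
    total = sum(weights) or 1.0
    jaw = sum(w * v[0] for w, (_, v) in zip(weights, visemes)) / total
    lip = sum(w * v[1] for w, (_, v) in zip(weights, visemes)) / total
    return jaw, lip

for t in (0.0, 0.075, 0.15):
    print(t, face_at(t))
```

At t = 0.075 the pose is pulled toward both the "a" and "o" targets, which is exactly the neighboring-viseme influence the passage describes.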


Cyberworlds | 2003

Adaptation of facial and body animation for MPEG-based architectures

T. Di Giacomo; Chris Joslin; Stephane Garchery; Nadia Magnenat-Thalmann


Archive | 2004

Believable virtual environment: sensory and perceptual believability

HyungSeok Kim; T. Di Giacomo; A. Egges; L. Lyard; Stephane Garchery; Nadia Magnenat-Thalmann

Collaboration


Top co-authors of Stephane Garchery include A. Egges (University of Geneva) and L. Lyard (University of Geneva).