
Publication


Featured research published by Igor S. Pandzic.


Presence: Teleoperators & Virtual Environments | 1999

Anyone for Tennis?

Tom Molet; Amaury Aubel; Tolga K. Çapin; Stéphane Carion; Elwin Lee; Nadia Magnenat-Thalmann; Hansrudi Noser; Igor S. Pandzic; Gael Sannier; Daniel Thalmann

In this paper we present a virtual tennis game. We describe the creation and modeling of the virtual humans and body deformations, also showing the real-time animation and rendering aspects of the avatars. We focus on the animation of the virtual tennis ball and the behavior of a synthetic, autonomous referee who judges the tennis games. The networked, collaborative, virtual environment system is described with special reference to its interfaces to driver programs. We also mention the virtual reality (VR) devices that are used to merge the interactive players into the virtual tennis environment, together with the equipment and technologies employed for this exciting experience. We conclude with remarks on personal experiences during the game and on future research topics to improve parts of the presented system.


Virtual Reality | 1999

Nonverbal communication interface for collaborative virtual environments

Anthony Guye-Vuillème; Tolga K. Çapin; Igor S. Pandzic; Nadia Magnenat-Thalmann; Daniel Thalmann

Nonverbal communication is an important aspect of real-life face-to-face interaction and one of the most efficient ways to convey emotions; users should therefore be provided with the means to replicate it in the virtual world. Because articulated embodiments are well suited to provide body communication in virtual environments, this paper first reviews some of the advantages and disadvantages of complex embodiments. After a brief introduction to nonverbal communication theories, we present our solution, taking into account the practical limitations of input devices and social science aspects. We introduce our sample of actions and implementation using our VLNET (Virtual Life Network) networked virtual environment and discuss the results of an informal evaluation experiment.


Computer Graphics Forum | 1995

The HUMANOID Environment for Interactive Animation of Multiple Deformable Human Characters

Ronan Boulic; Tolga K. Çapin; Zhiyong Huang; Prem Kalra; Bernd Lintermann; Nadia Magnenat-Thalmann; Laurent Moccozet; Tom Molet; Igor S. Pandzic; Kurt Saar; Alfred A. Schmitt; Jerry Shen; Daniel Thalmann

We describe the HUMANOID environment dedicated to human modeling and animation for general multimedia, VR, and CAD applications integrating virtual humans. We present the design of the system and the integration of the various features: generic modeling of a large class of entities with the BODY data structure, realistic skin deformation for body and hands, facial animation, collision detection, integrated motion control and parallelization of computation intensive tasks.


Signal Processing: Image Communication | 1997

MPEG-4: Audio/video and synthetic graphics/audio for mixed media

Peter Doenges; Tolga K. Çapin; Fabio Lavagetto; Joern Ostermann; Igor S. Pandzic; Eric D. Petajan

MPEG-4 addresses coding of digital hybrids of natural and synthetic, aural and visual (A/V) information. The objective of this synthetic/natural hybrid coding (SNHC) is to facilitate content-based manipulation, interoperability, and wider user access in the delivery of animated mixed media. SNHC will support non-real-time and passive media delivery, as well as more interactive, real-time applications. Integrated spatial-temporal coding is sought for audio, video, and 2D/3D computer graphics as standardized A/V objects. Targets of standardization include mesh-segmented video coding, compression of geometry, synchronization between A/V objects, multiplexing of streamed A/V objects, and spatial-temporal integration of mixed media types. Composition, interactivity, and scripting of A/V objects can thus be supported in client terminals, as well as in content production for servers, also more effectively enabling terminals as servers. Such A/V objects can exhibit high efficiency in transmission and storage, plus content-based interactivity, spatial-temporal scalability, and combinations of transient dynamic data and persistent downloaded data. This approach can lower bandwidth of mixed media, offer tradeoffs in quality versus update for specific terminals, and foster varied distribution methods for content that exploit spatial and temporal coherence over buses and networks. MPEG-4 responds to trends at home and work to move beyond the paradigm of audio/video as a passive experience to more flexible A/V objects which combine audio/video with synthetic 2D/3D graphics and audio.
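The object-based composition at the core of SNHC can be pictured with a small sketch. Everything below is an illustrative assumption, not MPEG-4 syntax: each audiovisual object carries its own timeline and spatial placement, and the terminal composites whichever objects are active.

```python
from dataclasses import dataclass, field

@dataclass
class AVObject:
    """Hypothetical stand-in for an MPEG-4 audiovisual object: a
    natural or synthetic stream with its own timeline and placement."""
    name: str
    kind: str                          # "video", "audio", "mesh", ...
    start_time: float                  # seconds on the scene timeline
    position: tuple = (0.0, 0.0, 0.0)  # spatial composition

@dataclass
class Scene:
    """Composition: the terminal renders the objects active at time t,
    each decoded independently from its own elementary stream."""
    objects: list = field(default_factory=list)

    def active_at(self, t: float):
        return [o for o in self.objects if o.start_time <= t]

# Mixed media: a synthetic talking-head mesh composited over natural video.
scene = Scene([
    AVObject("background", "video", 0.0),
    AVObject("newscaster", "mesh", 2.0, position=(0.0, 0.0, 1.0)),
])
print([o.name for o in scene.active_at(3.0)])  # ['background', 'newscaster']
```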


The Visual Computer | 1999

User evaluation: Synthetic talking faces for interactive services

Igor S. Pandzic; Jörn Ostermann; David R. Millen

Computer simulation of human faces has been an active research area for a long time. However, it is less clear what the applications of facial animation (FA) will be. We have undertaken experiments on 190 subjects in order to explore the benefits of FA. Part of the experiment was aimed at exploring the objective benefits, i.e., to see if FA can help users to perform certain tasks better. The other part of the experiment was aimed at subjective benefits. At the same time comparison of different FA techniques was undertaken. We present the experiment design and the results. The results show that FA aids users in understanding spoken text in noisy conditions; that it can effectively make waiting times more acceptable to the user; and that it makes services more attractive to the users, particularly when they compare directly the same service with or without the FA.


Computer Graphics Forum | 1997

A Flexible Architecture for Virtual Humans in Networked Collaborative Virtual Environments

Igor S. Pandzic; Elwin Lee; Nadia Magnenat-Thalmann; Tolga K. Çapin; Daniel Thalmann

Complex virtual human representation provides more natural interaction and communication among participants in networked virtual environments; hence, it is expected to increase the sense of being together within the same virtual world. We present a flexible framework for the integration of virtual humans in networked collaborative virtual environments. A modular architecture allows flexible representation and control of the virtual humans, whether they are controlled by a physical user using all sorts of tracking and other devices, or by an intelligent control program turning them into autonomous actors. The modularity of the system allows for fairly easy extensions and integration with new techniques, making it interesting also as a testbed for various domains, from “classic” VR to psychological experiments. We present results in terms of functionalities, example applications and measurements of performance and network traffic with an increasing number of participants in the simulation.
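The modularity the abstract describes can be reduced to a single interface, sketched below under assumed names (the real VLNET interfaces are richer): the environment asks each participant's controller for the next posture, without caring whether a tracked user or an autonomous program is behind it.

```python
from abc import ABC, abstractmethod

class BodyController(ABC):
    """Hypothetical controller interface: one contract for user-driven
    and autonomous virtual humans alike."""

    @abstractmethod
    def next_posture(self, dt: float) -> list[float]:
        """Return joint angles for the virtual human's next frame."""

class TrackedUserController(BodyController):
    def __init__(self, tracker):
        self.tracker = tracker                   # e.g. body trackers

    def next_posture(self, dt):
        return self.tracker.read_joint_angles()  # posture from devices

class AutonomousController(BodyController):
    def __init__(self, behavior):
        self.behavior = behavior                 # decision-making program

    def next_posture(self, dt):
        return self.behavior.plan(dt)            # posture from the actor's AI

def simulation_step(participants, dt):
    """The environment treats every participant uniformly."""
    return [p.next_posture(dt) for p in participants]
```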


International Conference on 3D Web Technology | 2002

Facial animation framework for the web and mobile platforms

Igor S. Pandzic

Talking virtual characters are graphical simulations of real or imaginary persons capable of human-like behavior, most importantly talking and gesturing. They may find applications on the Internet and mobile platforms as newscasters, customer service representatives, sales representatives, guides, etc. After briefly discussing the possible applications and the technical requirements for bringing such applications to life, we describe our approach to enabling these applications: the Facial Animation Framework. This framework consists of (1) a lightweight, portable, MPEG-4 compatible Facial Animation Player, (2) a system for fast production of ready-to-animate, MPEG-4 compatible face models and (3) a plethora of MPEG-4 compatible tools for Facial Animation content production. We believe that this kind of approach offers enough flexibility to rapidly adapt to a broad range of applications involving facial animation on various platforms.
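As a rough illustration of what an MPEG-4 style facial animation player does per frame, here is a minimal sketch; the FAP-to-vertex tables below are hand-made placeholders standing in for what the face model production system would generate.

```python
import numpy as np

class FaceModel:
    """Minimal stand-in for an MPEG-4 ready face model: each facial
    animation parameter (FAP) displaces a set of vertices along a fixed
    direction, scaled by the FAP value for the current frame."""

    def __init__(self, rest_vertices, fap_tables):
        self.rest = rest_vertices        # (n_vertices, 3) neutral face
        self.fap_tables = fap_tables     # fap_id -> (indices, directions)

    def apply_faps(self, fap_values):
        verts = self.rest.copy()
        for fap_id, value in fap_values.items():
            indices, directions = self.fap_tables[fap_id]
            verts[indices] += value * directions  # move affected vertices
        return verts

# One hypothetical FAP that raises two "lip corner" vertices.
rest = np.zeros((4, 3))
tables = {3: (np.array([1, 2]),
              np.array([[0.0, 1.0, 0.0], [0.0, 1.0, 0.0]]))}
player = FaceModel(rest, tables)
frame = player.apply_faps({3: 0.2})      # decode one animation frame
```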


Displays | 1994

Real-time facial interaction

Igor S. Pandzic; Prem Kalra; Nadia Magnenat-Thalmann; Daniel Thalmann

The human interface for computer graphics systems is evolving to involve a multimodal approach. It is now moving from keyboard operation to more natural modes of interaction using visual, audio and gestural means. This paper discusses real-time interaction using visual input from a human face. It describes the underlying approach to recognizing and analysing the facial movements of a real performance. The output, in the form of parameters describing the facial expressions, can then be used to drive one or more applications running on the same or on a remote computer. This enables the user to control the graphics system by means of facial expressions. This is used primarily as part of a real-time facial animation system, where the synthetic actor reproduces the animator's expression. This offers interesting possibilities for teleconferencing, as the requirements for the network bandwidth are low (about 7 kbit/s). Experiments have also been done using facial movements to control a walkthrough or perform simple object manipulation.
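A back-of-the-envelope sketch shows why a parameter stream stays this small; the parameter count, precision and frame rate below are assumptions chosen to land near the quoted figure, not numbers taken from the paper.

```python
# Transmitting expression parameters instead of video: estimate the
# bitrate of a parameter stream under assumed (illustrative) numbers.
num_params = 29          # expression parameters per frame (assumed)
bits_per_param = 10      # quantization per parameter (assumed)
frames_per_second = 25   # video-rate capture (assumed)

bitrate = num_params * bits_per_param * frames_per_second
print(f"{bitrate / 1000:.1f} kbit/s")   # ~7 kbit/s, the right order of magnitude
```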


Computer Graphics Forum | 2010

State of the Art in Example-Based Motion Synthesis for Virtual Characters in Interactive Applications

Tomislav Pejsa; Igor S. Pandzic

Animated virtual human characters are a common feature in interactive graphical applications, such as computer and video games, online virtual worlds and simulations. Due to the dynamic nature of such applications, character animation must be responsive and controllable in addition to looking as realistic and natural as possible. Though procedural and physics-based animation provide a great amount of control over motion, they still look too unnatural to be of use in all but a few specific scenarios, which is why interactive applications nowadays still rely mainly on recorded and hand-crafted motion clips. The challenge faced by animation system designers is to dynamically synthesize new, controllable motion by concatenating short motion segments into sequences of different actions or by parametrically blending clips that correspond to different variants of the same logical action. In this article, we provide an overview of research in the field of example-based motion synthesis for interactive applications. We present methods for automated creation of supporting data structures for motion synthesis and describe how they can be employed at run-time to generate motion that accurately accomplishes tasks specified by the AI or human user.
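The parametric-blending idea surveyed here reduces to a short sketch. The names and data layout are assumptions, and production systems time-warp the clips and blend rotations with quaternion interpolation rather than blending raw angles linearly.

```python
import numpy as np

def blend_clips(clip_a, clip_b, weight):
    """Linearly blend two time-aligned motion clips.

    clip_a, clip_b: arrays of shape (frames, joints) holding joint
    angles for two variants of the same logical action. We assume the
    clips are already aligned frame-for-frame."""
    return (1.0 - weight) * clip_a + weight * clip_b

def weight_for_parameter(value, value_a, value_b):
    """Map a requested task parameter (e.g. step length) to a blend
    weight between the values covered by the two example clips."""
    return np.clip((value - value_a) / (value_b - value_a), 0.0, 1.0)

# Example: synthesize a 0.65 m step from 0.5 m and 0.9 m example clips.
short_step = np.zeros((60, 20))          # placeholder example clips
long_step = np.ones((60, 20))
w = weight_for_parameter(0.65, 0.5, 0.9)
step = blend_clips(short_step, long_step, w)
```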


IEEE Virtual Reality Conference | 1997

A dead-reckoning algorithm for virtual human figures

Tolga K. Çapin; Igor S. Pandzic

In networked virtual environments, when the participants are represented by virtual human figures, the articulated structure of the human body introduces a new complexity in the usage of the network resources. This might create a significant overhead in communication, especially as the number of participants in the simulation increases. In addition, the animation should be realistic, as it is easy to recognize anomalies in virtual human animation. This requires real-time algorithms that decrease the network overhead while considering the characteristics of body motion. The dead-reckoning technique is a way to decrease the number of messages communicated among the participants, and has been used for simple non-articulated objects in popular systems. The authors introduce a dead-reckoning technique for articulated virtual human figures based on Kalman filtering, discuss the main issues and present experimental results.
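The scheme can be illustrated with a minimal sketch, assuming a constant-velocity Kalman filter per joint angle (the paper's actual filter and thresholds may differ): sender and receivers run the same predictor, and the sender transmits a joint update only when the true angle drifts beyond a threshold from the shared prediction.

```python
import numpy as np

class JointPredictor:
    """Constant-velocity Kalman filter for one joint angle. A
    simplification: each joint is tracked independently here."""

    def __init__(self, q=1e-3, r=1e-2):
        self.x = np.zeros(2)             # state: [angle, angular velocity]
        self.P = np.eye(2)               # state covariance
        self.Q = q * np.eye(2)           # process noise
        self.R = r                       # measurement noise (angle only)
        self.H = np.array([[1.0, 0.0]])  # we observe the angle only

    def predict(self, dt):
        F = np.array([[1.0, dt], [0.0, 1.0]])
        self.x = F @ self.x
        self.P = F @ self.P @ F.T + self.Q
        return self.x[0]                 # predicted angle

    def correct(self, z):
        S = self.H @ self.P @ self.H.T + self.R
        K = (self.P @ self.H.T) / S
        self.x = self.x + (K * (z - self.H @ self.x)).ravel()
        self.P = (np.eye(2) - K @ self.H) @ self.P

def sender_step(predictor, measured_angle, dt, threshold=0.05):
    """Send an update only when the prediction error exceeds the
    threshold. Receivers run the same predictor, so skipped frames
    cost nothing on the network; they extrapolate instead."""
    predicted = predictor.predict(dt)
    if abs(measured_angle - predicted) > threshold:
        predictor.correct(measured_angle)
        return measured_angle            # transmit this joint state
    return None                          # receivers keep extrapolating
```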
