Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Celine Loscos is active.

Publication


Featured research published by Celine Loscos.


IEEE Computer Graphics and Applications | 2008

Automatic High-Dynamic Range Image Generation for Dynamic Scenes

Katrien Jacobs; Celine Loscos; Greg Ward

Automatic high-dynamic range image (HDRI) generation from low-dynamic range images offers an alternative to conventional methods, which require a static scene. The method consists of two modules: a camera-alignment module and a movement detector, which removes the ghosting effects in the HDRI created by moving objects.
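
The ghost-removal idea can be illustrated with a short sketch. The function below is not the published algorithm: it assumes already-aligned exposures, a linear camera response, and a hypothetical variance-based movement test standing in for the paper's movement detector.

```python
import numpy as np

def merge_hdr(ldr_images, exposure_times, var_threshold=0.05, reference=0):
    """Merge aligned LDR exposures into an HDR radiance map, falling back to
    a single reference exposure wherever movement is suspected."""
    imgs = [im.astype(np.float64) / 255.0 for im in ldr_images]
    # Per-exposure radiance estimate (linear camera response assumed).
    radiances = np.stack([im / t for im, t in zip(imgs, exposure_times)])
    # Hat weights: trust mid-range pixels, distrust under/over-exposed ones.
    weights = np.stack([1.0 - np.abs(2.0 * im - 1.0) for im in imgs])

    hdr = (weights * radiances).sum(axis=0) / np.maximum(weights.sum(axis=0), 1e-6)

    # Crude movement detector: radiance should agree across exposures for a
    # static pixel, so high variance marks it as belonging to a moving object.
    moving = radiances.var(axis=0).mean(axis=-1) > var_threshold
    hdr[moving] = radiances[reference][moving]
    return hdr
```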


Proceedings of Theory and Practice of Computer Graphics | 2003

Intuitive crowd behavior in dense urban environments using local laws

Celine Loscos; David Marchal; Alexandre Meyer

The creation of populated virtual city environments has recently become widespread in games, entertainment, medical and architectural applications. In this paper we present a technique that allows the simulation of up to 10,000 pedestrians walking in real time. Simulation of such environments is difficult because a trade-off must be found between realism and real-time performance. This paper presents a pedestrian crowd simulation method aimed at improving the local and global reactions of the pedestrians. The method uses a subdivision of space into a 2D (two-dimensional) grid for pedestrian-to-pedestrian collision avoidance, while assigning goals to pedestrians to make their trajectories smoother and more coherent. Goals are computed automatically and connected into a graph that reflects the structure of the city and drives the spatial distribution of pedestrian density. To create realistic reactions when areas become crowded, local directions are stored and updated in real time, allowing pedestrian streams to emerge. Combining these methods contributes to a more realistic model while keeping a real-time frame rate for up to 10,000 simulated pedestrians.
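
As a rough illustration of the 2D-grid collision avoidance, the sketch below hashes pedestrians into cells and only tests neighbours in adjacent cells. It is a simplification under assumed parameters (cell size, body radius, walking speed) and omits the goal graph and the stored local directions that produce pedestrian streams in the paper.

```python
import numpy as np
from collections import defaultdict

CELL = 1.0  # assumed grid cell size in metres

def build_grid(positions):
    """Hash each pedestrian index into its 2D grid cell."""
    grid = defaultdict(list)
    for i, (x, y) in enumerate(positions):
        grid[(int(x // CELL), int(y // CELL))].append(i)
    return grid

def step(positions, velocities, goals, dt=0.05, radius=0.4, speed=1.3):
    """Steer each pedestrian towards its goal, then push apart pairs that
    get closer than two body radii, testing only the 3x3 neighbouring cells."""
    grid = build_grid(positions)
    for i, p in enumerate(positions):
        to_goal = goals[i] - p
        dist = np.linalg.norm(to_goal)
        if dist > 1e-6:
            velocities[i] = to_goal / dist * speed
        cx, cy = int(p[0] // CELL), int(p[1] // CELL)
        for nx in (cx - 1, cx, cx + 1):
            for ny in (cy - 1, cy, cy + 1):
                for j in grid.get((nx, ny), []):
                    if j == i:
                        continue
                    offset = p - positions[j]
                    d = np.linalg.norm(offset)
                    if 1e-6 < d < 2 * radius:
                        # Repel proportionally to the overlap.
                        velocities[i] += offset / d * (2 * radius - d) / dt * 0.5
    positions += velocities * dt
    return positions, velocities
```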


Eurographics | 2006

Building Expression into Virtual Characters

Vinoba Vinayagamoorthy; Marco Gillies; Anthony Steed; Emmanuel Tanguy; Xueni Pan; Celine Loscos; Mel Slater

Virtual characters are an important part of many 3D graphical simulations. In entertainment or training applications, virtual characters might be one of the main mechanisms for creating and developing content and scenarios. In such applications the user may need to interact with a number of different characters that need to invoke specific responses in the user, so that the user interprets the scenario in the way that the designer intended. Whilst representations of virtual characters have come a long way in recent years, interactive virtual characters tend to be a bit “wooden” with respect to their perceived behaviour. In this STAR we give an overview of work on expressive virtual characters. In particular, we assume that a virtual character representation is already available, and we describe a variety of models and methods that are used to give the characters more “depth” so that they are less wooden and more plausible. We cover models of individual characters’ emotion and personality, models of interpersonal behaviour and methods for generating expression.


IEEE Transactions on Visualization and Computer Graphics | 2000

Interactive virtual relighting of real scenes

Celine Loscos; George Drettakis; Luc Robert

Computer augmented reality (CAR) is a rapidly emerging field which enables users to mix real and virtual worlds. Our goal is to provide interactive tools to perform common illumination, i.e., light interactions between real and virtual objects, including shadows and relighting (real and virtual light source modification). In particular, we concentrate on virtually modifying real light source intensities and inserting virtual lights and objects into a real scene; such changes can be very useful for virtual lighting design and prototyping. To achieve this, we present a three-step method. We first reconstruct a simplified representation of real scene geometry using semiautomatic vision-based techniques. With the simplified geometry, and by adapting recent hierarchical radiosity algorithms, we construct an approximation of real scene light exchanges. We next perform a preprocessing step, based on the radiosity system, to create unoccluded illumination textures. These replace the original scene textures which contained real light effects such as shadows from real lights. This texture is then modulated by a ratio of the radiosity (which can be changed) over a display factor which corresponds to the radiosity for which occlusion has been ignored. Since our goal is to achieve a convincing relighting effect, rather than an accurate solution, we present a heuristic correction process which results in visually plausible renderings. Finally, we perform an interactive process to compute new illumination with modified real and virtual light intensities.
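
The modulation step described above amounts to scaling the shadow-free texture by the ratio of the current radiosity over the radiosity computed without occlusion. A minimal per-pixel sketch follows (the real system works on hierarchical radiosity elements, not raw pixels):

```python
import numpy as np

def relit_color(unoccluded_texture, modified_radiosity, unoccluded_radiosity):
    """Scale the unoccluded illumination texture by radiosity / display factor.

    modified_radiosity   : radiosity after the (virtual) lighting change
    unoccluded_radiosity : radiosity computed with occlusion ignored,
                           i.e. the display factor mentioned in the abstract
    """
    ratio = modified_radiosity / np.maximum(unoccluded_radiosity, 1e-6)
    return unoccluded_texture * ratio
```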


Computer Graphics Forum | 2002

Visualizing Crowds in Real-Time

Franco Tecchia; Celine Loscos; Yiorgos Chrysanthou

Real-time crowd visualization has recently attracted quite an interest from the graphics community and, as interactive applications become even more complex, there is a natural demand for new and unexplored application scenarios. However, the interactive simulation of complex environments populated by large numbers of virtual characters is a composite problem which poses serious difficulties even on modern computer hardware. In this paper we look at methods to deal with various aspects of crowd visualization, ranging from collision detection and behaviour modeling to fast rendering with shadows and quality shading. These methods make extensive use of current graphics hardware capabilities with the aim of providing scalability without compromising run-time speed. Results from a system employing these techniques seem to suggest that simulations of reasonably complex environments populated with thousands of animated characters are possible in real time.


IEEE Computer Graphics and Applications | 2002

Image-based crowd rendering

Franco Tecchia; Celine Loscos; Yiorgos Chrysanthou

Populated virtual urban environments are important in many applications, from urban planning to entertainment. At the current stage of technology, users can interactively navigate through complex, polygon-based scenes rendered with sophisticated lighting effects and high-quality antialiasing techniques. As a result, animated characters (or agents) that users can interact with are also becoming increasingly common. However, rendering crowded scenes with thousands of different animated virtual people in real time is still challenging. To address this, we developed an image-based rendering approach for displaying multiple avatars. We take advantage of the properties of the urban environment and the way a viewer and the avatars move within it to produce fast rendering, based on positional and directional discretization. To display many different individual people at interactive frame rates, we combined texture compression with multipass rendering.
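
The positional and directional discretization can be sketched as picking, for each avatar, the pre-rendered impostor whose sampled viewing angle best matches the current camera direction. The helper below is an assumption-laden illustration (azimuth only, equally spaced views), not the paper's full compressed-texture, multipass pipeline.

```python
import numpy as np

def select_impostor(avatar_pos, avatar_heading, camera_pos, n_views=16):
    """Return the index of the impostor image rendered from the azimuth
    closest to the current avatar-to-camera direction."""
    to_camera = np.asarray(camera_pos) - np.asarray(avatar_pos)
    view_angle = np.arctan2(to_camera[1], to_camera[0]) - avatar_heading
    return int(np.round(view_angle / (2 * np.pi / n_views))) % n_views
```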


Eurographics | 2006

Classification of Illumination Methods for Mixed Reality

Katrien Jacobs; Celine Loscos

A mixed reality (MR) represents an environment composed of both real and virtual objects. MR applications are increasingly used, for instance in surgery, architecture, cultural heritage and entertainment. For some of these applications it is important to merge the real and virtual elements using consistent illumination. This paper proposes a classification of illumination methods for MR applications that aim to generate a merged environment in which illumination and shadows are consistent. Three different illumination methods can be identified: common illumination, relighting and methods based on inverse illumination. In this paper a classification of the illumination methods for MR is given based on their input requirements: the amount of geometry and radiance known of the real environment. This led us to define four categories of methods that vary depending on the type of geometric model used to represent the real scene and the different radiance information available for each point of the real scene. The various methods are described within their categories.


Eurographics | 1999

Interactive virtual relighting and remodeling of real scenes

Celine Loscos; Marie-Claude Frasson; George Drettakis; Bruce Walter; Xavier Granier; Pierre Poulin

Lighting design is often tedious due to the required physical manipulation of real light sources and objects. As an alternative, we present an interactive system to virtually modify the lighting and geometry of scenes with both real and synthetic objects, including mixed real/virtual lighting and shadows.

In our method, real scene geometry is first approximately reconstructed from photographs. Additional images are taken from a single viewpoint with a real light in different positions to estimate reflectance. A filtering process is used to compensate for inaccuracies, and per-image reflectances are averaged to generate an approximate reflectance image for the given viewpoint, removing shadows in the process. This estimate is used to initialise a global illumination hierarchical radiosity system, representing real-world secondary illumination; the system is optimized for interactive updates. Direct illumination from lights is calculated separately using ray casting and a table for efficient reuse of data where appropriate.

Our system allows interactive modification of light emission and object positions, all with mixed real/virtual illumination effects. Real objects can also be virtually removed using texture-filling algorithms for reflectance estimation.
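
The reflectance-averaging step can be sketched as follows: each photograph, divided by the irradiance predicted from the reconstructed geometry and the known light position, gives one reflectance estimate, and the estimates are averaged into an approximate reflectance image. This is a simplified stand-in that omits the paper's filtering of shadowed or inaccurate pixels.

```python
import numpy as np

def estimate_reflectance(radiance_images, irradiance_images):
    """Average per-image reflectance estimates into an approximate,
    shadow-free reflectance image for the fixed viewpoint.

    radiance_images[k]   : photograph taken with the real light at position k
    irradiance_images[k] : irradiance predicted per pixel for light position k
    """
    estimates = [
        radiance / np.maximum(irradiance, 1e-6)
        for radiance, irradiance in zip(radiance_images, irradiance_images)
    ]
    return np.mean(estimates, axis=0)
```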


Virtual Reality | 2006

Interaction with co-located haptic feedback in virtual reality

David Swapp; Vijay Pawar; Celine Loscos

This paper outlines a study into the effects of co-location (the term ‘co-location’ is used throughout to refer to the co-location of haptic and visual sensory modes, except where otherwise specified) of haptic and visual sensory modes in VR simulations. The study hypothesis is that co-location of these sensory modes will lead to improved task performance within a VR environment. Technical challenges and technological limitations are outlined prior to a description of the implementation adopted for this study. Experiments were conducted to evaluate the effect on user performance of co-located haptics (force feedback) in a 3D virtual environment. Results show that co-location is an important factor, and when coupled with haptic feedback the performance of the user is greatly improved.


Virtual Reality Software and Technology | 2006

A versatile large-scale multimodal VR system for cultural heritage visualization

Chris Christou; Cameron Angus; Celine Loscos; Andrea Dettori; Maria Roussou

We describe the development and evaluation of a large-scale multimodal virtual reality simulation suitable for the visualization of cultural heritage sites and architectural planning. The system is demonstrated with a reconstruction of an ancient Greek temple in Messene that was created as part of an EU-funded cultural heritage project (CREATE). The system utilizes a CAVE-like theatre consisting of head-tracked user localization, a haptic interface with two arms, and 3D sound. The haptic interface was coupled with a realistic physics engine, allowing users to experience and fully appreciate the effort involved in the construction of architectural components and their changes through the ages. Initial user-based studies were carried out to evaluate the usability and performance of the system. A simple task of stacking blocks was used to compare errors and timing in a haptics-enabled system with a haptics-disabled system. In addition, a qualitative study of the final system took place while it was installed in a museum.

Collaboration


Dive into Celine Loscos's collaborations.

Top Co-Authors

Katrien Jacobs (University College London)
Franco Tecchia (Sant'Anna School of Advanced Studies)
Massimo Bergamasco (Sant'Anna School of Advanced Studies)
Anthony Steed (University College London)
Cameron Angus (University College London)
David Swapp (University College London)
Andrea Dettori (Sant'Anna School of Advanced Studies)