Publications


Featured research published by Joëlle Thollot.


International Conference on Computer Graphics and Interactive Techniques | 2000

Conservative visibility preprocessing using extended projections

George Drettakis; Joëlle Thollot; Claude Puech

Visualization of very complex scenes can be significantly accelerated using occlusion culling. In this paper we present a visibility preprocessing method which efficiently computes potentially visible geometry for volumetric viewing cells. We introduce novel extended projection operators, which permit efficient and conservative occlusion culling with respect to all viewpoints within a cell and take into account the combined occlusion effect of multiple occluders. We use extended projections of occluders onto a set of projection planes to create extended occlusion maps; we show how to efficiently test occludees against these occlusion maps to determine occlusion with respect to the entire cell. We also present an improved projection operator for certain specific but important configurations. An important advantage of our approach is that we can re-project extended projections onto a series of projection planes (via an occlusion sweep), and accumulate occlusion information from multiple blockers. This new approach allows the creation of effective occlusion maps for previously hard-to-treat scenes such as leaves of trees in a forest. Graphics hardware is used to accelerate both the extended projection and reprojection operations. We present a complete implementation demonstrating significant speedup with respect to view-frustum culling only, without the computational overhead of on-line occlusion culling.
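To make the extended projection idea concrete, here is a minimal Python sketch, not the authors' implementation: it restricts itself to axis-aligned quads parallel to the projection plane, for which the intersection (occluder) or union (occludee) of per-corner projections is exactly the conservative extended projection. All function names are invented for illustration; the paper handles general occluders and uses graphics hardware.

```python
import numpy as np

def project_rect(viewpoint, rect, plane_z):
    """Perspective-project an axis-aligned quad (parallel to the projection
    plane) from one viewpoint onto the plane z = plane_z.
    rect = (xmin, xmax, ymin, ymax, z); returns (xmin, xmax, ymin, ymax)."""
    xmin, xmax, ymin, ymax, z = rect
    vx, vy, vz = viewpoint
    t = (plane_z - vz) / (z - vz)                 # ray parameter at the plane
    pts = [(vx + t * (x - vx), vy + t * (y - vy))
           for x in (xmin, xmax) for y in (ymin, ymax)]
    xs, ys = zip(*pts)
    return min(xs), max(xs), min(ys), max(ys)

def extended_projection(cell, rect, plane_z, occluder):
    """Extended projection over a viewing cell: the INTERSECTION of the
    per-viewpoint projections for an occluder (safe underestimate) and
    the UNION for an occludee (safe overestimate)."""
    p = np.array([project_rect(v, rect, plane_z) for v in cell])
    if occluder:
        return p[:, 0].max(), p[:, 1].min(), p[:, 2].max(), p[:, 3].min()
    return p[:, 0].min(), p[:, 1].max(), p[:, 2].min(), p[:, 3].max()

def rasterize(r, res=256, extent=12.0):
    """Pixel mask of rectangle r on the projection plane (toy occlusion map)."""
    xs, ys = np.meshgrid(np.linspace(-extent, extent, res),
                         np.linspace(-extent, extent, res))
    return (xs >= r[0]) & (xs <= r[1]) & (ys >= r[2]) & (ys <= r[3])

# A box cell of viewpoints, a large quad occluder, a small quad behind it.
cell = [(x, y, z) for x in (-1, 1) for y in (-1, 1) for z in (-1, 1)]
occ_map = rasterize(extended_projection(cell, (-4, 4, -4, 4, 5.0), 10.0, True))
testee  = rasterize(extended_projection(cell, (-1, 1, -1, 1, 9.0), 10.0, False))
print("conservatively occluded:", not np.any(testee & ~occ_map))
```

Because the occluder map only underestimates and the occludee projection only overestimates, a positive answer is valid for every viewpoint in the cell, which is the conservativeness property the paper relies on.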


Computer Graphics Forum | 2008

Apparent Greyscale: A Simple and Fast Conversion to Perceptually Accurate Images and Video

Kaleigh Smith; Pierre-Edouard Landes; Joëlle Thollot; Karol Myszkowski

This paper presents a quick and simple method for converting complex images and video to perceptually accurate greyscale versions. We use a two‐step approach: first, globally assign grey values and determine colour ordering; second, locally enhance the greyscale to reproduce the original contrast. Our global mapping is image independent and incorporates the Helmholtz‐Kohlrausch colour appearance effect for predicting differences between isoluminant colours. Our multiscale local contrast enhancement reintroduces lost discontinuities only in regions that insufficiently represent the original chromatic contrast. All operations are restricted so that they preserve the overall image appearance, lightness range and differences, colour ordering, and spatial details, resulting in perceptually accurate achromatic reproductions of the colour original.
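As a rough illustration of the global step only, the Python sketch below maps colours to grey using CIELAB lightness plus a chroma-proportional boost in the spirit of the Helmholtz-Kohlrausch effect. The gain k is an arbitrary placeholder, not the calibrated perceptual model used in the paper, and the local contrast-enhancement step is omitted entirely; srgb_to_lab and apparent_grey are illustrative names.

```python
import numpy as np

def srgb_to_lab(rgb):
    """sRGB in [0,1] -> CIELAB (D65 white), using the standard formulas."""
    rgb = np.asarray(rgb, dtype=float)
    lin = np.where(rgb <= 0.04045, rgb / 12.92, ((rgb + 0.055) / 1.055) ** 2.4)
    M = np.array([[0.4124, 0.3576, 0.1805],
                  [0.2126, 0.7152, 0.0722],
                  [0.0193, 0.1192, 0.9505]])
    xyz = (lin @ M.T) / np.array([0.95047, 1.0, 1.08883])
    f = np.where(xyz > (6 / 29) ** 3, np.cbrt(xyz),
                 xyz / (3 * (6 / 29) ** 2) + 4 / 29)
    L = 116 * f[..., 1] - 16
    a = 500 * (f[..., 0] - f[..., 1])
    b = 200 * (f[..., 1] - f[..., 2])
    return L, a, b

def apparent_grey(rgb, k=0.2):
    """Global greyscale with a crude Helmholtz-Kohlrausch-style boost:
    chromatic pixels look lighter than their luminance alone suggests, so
    add a term proportional to chroma. k is a made-up gain, not the
    calibrated model from the paper."""
    L, a, b = srgb_to_lab(rgb)
    G = np.clip(L + k * np.hypot(a, b), 0.0, 100.0)
    return G / 100.0                     # greyscale in [0, 1]

# A saturated red and a roughly isoluminant grey map to different greys
# once chroma is taken into account, preserving their perceived difference.
print(apparent_grey(np.array([[0.8, 0.1, 0.1], [0.42, 0.42, 0.42]])))
```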


International Conference on Computer Graphics and Interactive Techniques | 2007

Video watercolorization using bidirectional texture advection

Adrien Bousseau; Fabrice Neyret; Joëlle Thollot; David Salesin

In this paper, we present a method for creating watercolor-like animation, starting from video as input. The method involves two main steps: applying textures that simulate a watercolor appearance; and creating a simplified, abstracted version of the video to which the texturing operations are applied. Both of these steps are subject to highly visible temporal artifacts, so the primary technical contributions of the paper are extensions of previous methods for texturing and abstraction to provide temporal coherence when applied to video sequences. To maintain coherence for textures, we employ texture advection along lines of optical flow. We furthermore extend previous approaches by incorporating advection in both forward and reverse directions through the video, which allows for minimal texture distortion, particularly in areas of disocclusion that are otherwise highly problematic. To maintain coherence for abstraction, we employ mathematical morphology extended to the temporal domain, using filters whose temporal extents are locally controlled by the degree of distortions in the optical flow. Together, these techniques provide the first practical and robust approach for producing watercolor animations from video, which we demonstrate with a number of examples.
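The core of the texturing step can be sketched in a few lines of Python. The helper names below are invented for illustration, the optical flow is assumed dense and given, and a simple linear time ramp replaces the paper's distortion-driven blending between the forward and backward passes.

```python
import numpy as np

def advect(coords, flow):
    """One semi-Lagrangian step: every pixel fetches the texture coordinate
    of the point it came from, so the texture follows the optical flow."""
    h, w = flow.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    sx = np.clip(np.rint(xs - flow[..., 0]).astype(int), 0, w - 1)
    sy = np.clip(np.rint(ys - flow[..., 1]).astype(int), 0, h - 1)
    return coords[sy, sx]

def sample(tex, uv):
    """Nearest-neighbour texture lookup with wrap-around tiling."""
    th, tw = tex.shape[:2]
    return tex[(uv[..., 1] * th).astype(int) % th,
               (uv[..., 0] * tw).astype(int) % tw]

def bidirectional_frames(tex, flows, h, w):
    """Advect texture coordinates forward from the first frame and backward
    from the last (negated flow), render both layers, and cross-fade them.
    The linear ramp stands in for the distortion-based weights of the
    actual method."""
    n = len(flows) + 1
    ys, xs = np.mgrid[0:h, 0:w]
    base = np.dstack([xs / w, ys / h])
    fwd, bwd = [base], [base]
    for f in flows:                       # frames 0 .. n-1
        fwd.append(advect(fwd[-1], f))
    for f in reversed(flows):             # frames n-1 .. 0
        bwd.append(advect(bwd[-1], -f))
    bwd.reverse()
    ramp = np.linspace(0.0, 1.0, n)
    return [(1 - ramp[t]) * sample(tex, fwd[t]) + ramp[t] * sample(tex, bwd[t])
            for t in range(n)]

# Tiny demo: a noise texture dragged two pixels right per frame for 8 frames.
rng = np.random.default_rng(1)
tex = rng.random((64, 64))
flows = [np.full((64, 64, 2), [2.0, 0.0]) for _ in range(8)]
frames = bidirectional_frames(tex, flows, 64, 64)
print(len(frames), frames[0].shape)
```

The point of the bidirectional pass is visible at disocclusions: pixels with no valid forward history still receive coherent texture from the backward pass, which is why the blend avoids the stretching a single-direction advection would produce.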


Non-Photorealistic Animation and Rendering | 2006

Interactive watercolor rendering with temporal coherence and abstraction

Adrien Bousseau; Matthew Kaplan; Joëlle Thollot; François X. Sillion

This paper presents an interactive watercolor rendering technique that recreates the specific visual effects of lavis (wash) watercolor. Our method allows the user to easily process images and 3D models and is organized in two steps: an abstraction step that recreates the uniform color regions of watercolor, and an effect step that filters the resulting abstracted image to obtain watercolor-like images. In the case of 3D environments we also propose two methods to produce temporally coherent animations that maintain a uniform pigment distribution while avoiding the shower-door effect.
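The effect step can be illustrated with the classic pigment-density colour modification C' = C - (C - C^2)(d - 1), introduced by Curtis et al. and commonly reused in texture-based watercolor; the sketch below is a plausible stand-in for the filtering stage, not the authors' exact pipeline, and the density field construction is entirely made up.

```python
import numpy as np

def watercolor_effect(color, density):
    """Darken or lighten each pixel as if pigment density d deviated from 1,
    using C' = C - (C - C^2) * (d - 1): d > 1 concentrates pigment (darker),
    d < 1 dilutes it (lighter). Inputs are floats in [0, 1]."""
    return np.clip(color - (color - color ** 2) * (density - 1.0), 0.0, 1.0)

# A plausible density field: low-frequency turbulence plus paper grain.
rng = np.random.default_rng(7)
h, w = 256, 256
paper = rng.uniform(0.9, 1.1, (h, w))                 # paper-grain stand-in
ys, xs = np.mgrid[0:h, 0:w]
turbulence = 1.0 + 0.3 * np.sin(xs / 17.0) * np.sin(ys / 23.0)
density = (paper * turbulence)[..., None]             # broadcast over RGB

flat = np.full((h, w, 3), [0.4, 0.6, 0.9])            # an abstracted region
img = watercolor_effect(flat, density)
print(img.min(), img.max())
```

Applying this modulation to the uniform regions produced by the abstraction step is what gives flat colour areas their characteristic watercolor granulation.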


Eurographics | 2006

Stroke Pattern Analysis and Synthesis

Pascal Barla; Simon Breslav; Joëlle Thollot; François X. Sillion; Lee Markosian

We present a synthesis technique that can automatically generate stroke patterns based on a user‐specified reference pattern. Our method is an extension of texture synthesis techniques to vector‐based patterns. Such an extension requires (a) an analysis of the pattern properties to extract meaningful pattern elements (defined as clusters of strokes) and (b) a synthesis algorithm based on similarities in the detected stroke clusters. Our method is based on results from human vision research concerning perceptual organization. The resulting synthesized patterns effectively reproduce the properties of the input patterns, and can be used to fill both 1D paths and 2D regions.
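A drastically simplified version of the analysis/synthesis pair might look as follows in Python: strokes are grouped into elements by centroid proximity, a crude stand-in for the perceptual-organization analysis used in the paper, and a 1D path is then filled by stamping elements at the spacing measured in the reference. All names and the clustering criterion are illustrative assumptions.

```python
import numpy as np

def cluster_elements(strokes, eps=0.5):
    """Analysis: group reference strokes into pattern elements by centroid
    proximity. Each element is stored in its own local frame."""
    centroids = [s.mean(axis=0) for s in strokes]
    elements, used = [], set()
    for i, c in enumerate(centroids):
        if i in used:
            continue
        group = [j for j in range(len(strokes))
                 if j not in used and np.linalg.norm(centroids[j] - c) < eps]
        used.update(group)
        elements.append([strokes[j] - centroids[j] for j in group])
    return elements

def synthesize_along_path(elements, spacing, length, rng=None):
    """Synthesis: fill a 1D path by stamping randomly chosen elements at the
    reference spacing, with a little positional jitter."""
    rng = rng or np.random.default_rng()
    out, x = [], 0.0
    while x < length:
        element = elements[rng.integers(len(elements))]
        pos = np.array([x, 0.0]) + rng.normal(0.0, 0.05 * spacing, 2)
        out.extend(stroke + pos for stroke in element)
        x += spacing
    return out

# Two reference elements: a tight pair of strokes and an isolated stroke.
ref = [np.array([[0.0, 0.0], [0.0, 1.0]]), np.array([[0.1, 0.0], [0.1, 1.0]]),
       np.array([[2.0, 0.0], [2.3, 1.0]])]
elements = cluster_elements(ref)
strokes = synthesize_along_path(elements, spacing=2.0, length=10.0)
print(len(elements), "elements,", len(strokes), "synthesized strokes")
```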


International Conference on Computer Graphics and Interactive Techniques | 2005

Geometric clustering for line drawing simplification

Pascal Barla; Joëlle Thollot; François X. Sillion

We present a new approach to the simplification of line drawings, in which a smaller set of lines is created to represent the geometry of the original lines. An important feature of our method is that it maintains the morphological structure of the original drawing while allowing user-defined decisions about the appearance of lines. The technique works by analyzing the structure of the drawing at a certain scale and identifying clusters of lines that can be merged given a specific error threshold. These clusters are then processed to create new lines, in a separate stage where different behaviors can be favored based on the application. Successful results are presented for a variety of drawings including scanned and vectorized artwork, original vector drawings, drawings created from 3D models, and hatching marks. The clustering technique is shown to be effective in all these situations.
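A toy version of the two stages, clustering under an error threshold and then creating replacement lines, can be sketched in Python. The fixed Hausdorff threshold and the pointwise-average merge are simplifications standing in for the paper's scale-dependent criterion and application-specific merge behaviors.

```python
import numpy as np

def hausdorff(a, b):
    """Symmetric Hausdorff distance between two polylines (N x 2 arrays)."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return max(d.min(axis=1).max(), d.min(axis=0).max())

def cluster_lines(lines, eps):
    """Greedy single-link clustering: a line joins every cluster that has a
    member within eps, and the touched clusters are merged together."""
    clusters = []
    for line in lines:
        close, far = [], []
        for c in clusters:
            (close if any(hausdorff(line, m) < eps for m in c) else far).append(c)
        clusters = far + [sum(close, []) + [line]]
    return clusters

def merge_cluster(cluster):
    """One representative line per cluster: the pointwise average after
    resampling every member to a common number of points."""
    n = max(len(l) for l in cluster)
    t = np.linspace(0.0, 1.0, n)
    res = [np.column_stack([np.interp(t, np.linspace(0, 1, len(l)), l[:, 0]),
                            np.interp(t, np.linspace(0, 1, len(l)), l[:, 1])])
           for l in cluster]
    return np.mean(res, axis=0)

# Three nearly parallel strokes merge into one line; a distant stroke survives.
strokes = [np.array([[0, 0.0], [10, 0.2]]), np.array([[0, 0.5], [10, 0.4]]),
           np.array([[0, 1.0], [10, 0.8]]), np.array([[0, 8.0], [10, 8.0]])]
print(len([merge_cluster(c) for c in cluster_lines(strokes, eps=1.5)]), "lines")
```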


Eurographics Symposium on Rendering Techniques | 2007

Dynamic point distribution for stroke-based rendering

David Vanderhaeghe; Pascal Barla; Joëlle Thollot; François X. Sillion

We present a new point distribution algorithm that is well adapted to stroke-based rendering systems. Its main characteristic is to deal efficiently with three conflicting constraints: the points should remain well distributed in 2D; their motion should tightly follow the target motion in the underlying scene; and as few points as possible should be added or deleted from frame to frame. We show that previous methods fail to meet at least one of these constraints in the general case, as opposed to our approach, which is independent of scene complexity and motion. As a result, our algorithm is able to take 3D scenes as well as videos as input and create non-uniform distributions with good temporal coherence and density properties. To illustrate this, we show applications in four different styles: stippling, pointillism, hatching and painterly.
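One naive way to juggle the three constraints, sketched below in Python and not the paper's algorithm, is a per-frame update that advects points with the scene motion, removes only the points that became too crowded, and dart-throws new points into the gaps that opened up. All names, the spacing radius r, and the O(n^2) neighbour test are illustrative choices.

```python
import numpy as np

def update_points(points, motion, r, bounds=1.0, tries=300, rng=None):
    """One frame of a dynamic distribution: advect points with the target
    motion, thin out pairs closer than r (earlier, i.e. older, points win),
    then dart-throw new points into uncovered gaps. Deletions and
    insertions are kept minimal, which is the coherence requirement."""
    rng = rng or np.random.default_rng()
    pts = [np.clip(p + motion(p), 0.0, bounds) for p in points]
    kept = []
    for p in pts:                                  # remove crowded points
        if all(np.linalg.norm(p - q) >= r for q in kept):
            kept.append(p)
    for _ in range(tries):                         # fill freshly opened gaps
        cand = rng.uniform(0.0, bounds, 2)
        if all(np.linalg.norm(cand - q) >= r for q in kept):
            kept.append(cand)
    return kept

# Example: points drift under a simple shear while spacing ~r is maintained.
rng = np.random.default_rng(0)
pts = update_points([], lambda p: 0.0, r=0.08, rng=rng)      # initial frame
for frame in range(10):
    pts = update_points(pts, lambda p: np.array([0.01 * p[1], 0.0]),
                        r=0.08, rng=rng)
print(len(pts), "points in the final frame")
```

Keeping older points when thinning is what biases the update toward temporal coherence: a point survives as long as the motion allows it to, and churn is confined to regions where the density constraint actually breaks.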


Computer Graphics Forum | 2011

State‐of‐the‐Art Report on Temporal Coherence for Stylized Animations

Pierre Bénard; Adrien Bousseau; Joëlle Thollot

Non‐photorealistic rendering (NPR) algorithms allow the creation of images in a variety of styles, ranging from line drawing and pen‐and‐ink to oil painting and watercolour. These algorithms provide greater flexibility, control and automation over traditional drawing and painting. Despite significant progress over the past 15 years, the application of NPR to the generation of stylized animations remains an active area of research. The main challenge of computer‐generated stylized animations is to reproduce the look of traditional drawings and paintings while minimizing the distracting flickering and sliding artefacts present in hand‐drawn animations. These goals are inherently conflicting, and any attempt to address the temporal coherence of stylized animations is a trade‐off. This state‐of‐the‐art report is motivated by the growing number of methods proposed in recent years and the need for a comprehensive analysis of the trade‐offs they propose. We formalize the problem of temporal coherence in terms of goals and compare existing methods accordingly. We propose an analysis for both line and region stylization methods and discuss initial steps towards their perceptual evaluation. The goal of our report is to help readers unfamiliar with the field choose the method that best suits their needs, as well as to motivate further research that addresses the limitations of existing methods.


Interactive 3D Graphics and Games | 2009

Dynamic solid textures for real-time coherent stylization

Pierre Bénard; Adrien Bousseau; Joëlle Thollot

Stylized rendering methods, which aim at depicting 3D scenes with 2D marks such as pigments or strokes, are often faced with temporal coherence issues when applied to dynamic scenes. These issues arise from the difficulty of satisfying two conflicting goals: ensuring that the style marks follow 3D motions while preserving their 2D appearance. In this paper we describe a new texture-based method for real-time temporally coherent stylization called dynamic textures. A dynamic texture is a standard texture mapped onto the object and enriched with an infinite zoom mechanism. This simple and fast mechanism maintains quasi-constant size and density of texture elements in screen space for any distance from the camera. We show that these dynamic textures can be used in many stylization techniques, enforcing the 2D appearance of the style marks while preserving the accurate 3D motion of the depicted objects. Although our infinite zoom technique can be used with both 2D and 3D textures, we focus in this paper on the 3D case (dynamic solid textures), which avoids the need for complex parameterizations of 3D surfaces. This makes dynamic textures easy to integrate in existing rendering pipelines with almost no loss in performance, as demonstrated by our implementation in a game rendering engine.
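The infinite zoom mechanism can be written down compactly: blend the two texture octaves whose world-space scales bracket the current camera distance, cross-faded by the fractional zoom level. The Python sketch below is a CPU-side schematic under these assumptions; solid_noise is a made-up hash-based stand-in for a proper solid texture, and a real implementation would run per-fragment in a shader.

```python
import numpy as np

def solid_noise(p):
    """Cheap hash-based value noise standing in for a 3D solid texture."""
    q = np.floor(p * 64.0).astype(np.int64)
    h = q[..., 0] * 374761393 + q[..., 1] * 668265263 + q[..., 2] * 1274126177
    h = (h ^ (h >> 13)) % 1024
    return h / 1023.0

def infinite_zoom(p, distance):
    """Blend the two texture octaves whose world-space scales bracket the
    camera distance, so texture elements keep a quasi-constant size and
    density in screen space at any zoom level."""
    s = np.log2(distance)                          # continuous zoom level
    k = np.floor(s)
    t = s - k                                      # fraction drives the fade
    coarse = solid_noise(p * 2.0 ** (-k))          # octave matching distance
    coarser = solid_noise(p * 2.0 ** (-(k + 1)))   # next octave out
    return (1.0 - t) * coarse + t * coarser

# The sampled pattern frequency stays comparable while the camera recedes.
p = np.array([0.31, 0.72, 0.55])
for d in (1.0, 2.5, 7.0, 20.0):
    print(d, infinite_zoom(p, d))
```

Because the blend weight and octave index both derive from the same continuous zoom level, the transition between octaves is seamless, which is what keeps the mark size stable without popping.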


International Conference on Computer Graphics and Interactive Techniques | 2013

Inverse dynamic hair modeling with frictional contact

Alexandre Derouet-Jourdan; Florence Bertails-Descoubes; Gilles Daviet; Joëlle Thollot

In recent years, considerable progress has been achieved in accurately acquiring the geometry of human hair, largely improving the realism of virtual characters. In parallel, rich physics-based simulators have been successfully designed to capture the intricate dynamics of hair due to contact and friction. However, there currently exists no consistent pipeline for converting a given hair geometry into a realistic physics-based hair model. Current approaches simply initialize the hair simulator with the input geometry in the absence of external forces. This results in an undesired sagging effect when the dynamic simulation is started, which undermines the effort put into the accurate design and/or capture of the input hairstyle. In this paper we propose the first method that consistently and robustly accounts for surrounding forces (gravity and frictional contacts, including hair self-contacts) when converting a geometric hairstyle into a physics-based hair model. Taking an arbitrary hair geometry as input together with a corresponding body mesh, we interpret the hair shape as a static equilibrium configuration of a hair simulator, in the presence of gravity as well as hair-body and hair-hair frictional contacts. Assuming that hair parameters are homogeneous and lie in a plausible range of physical values, we show that this large underdetermined inverse problem can be formulated as a well-posed constrained optimization problem, which can be solved robustly and efficiently by leveraging the frictional contact solver of the direct hair simulator. Our method was successfully applied to the animation of various hair geometries, ranging from synthetic hairstyles manually designed by an artist to the most recent human hair data automatically reconstructed from capture.
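Without contacts, the inverse statics idea reduces to something that fits in a few lines: choose the rest shape so that the target pose is already an equilibrium under gravity. The Python toy below does this for a 2D strand modelled as a chain of segments with angular springs; it is a drastically simplified stand-in for the paper's rod model and frictional-contact solver, and all parameters are invented.

```python
import numpy as np

def inverse_rest_shape(angles, seg_len=0.1, mass=0.01, k=0.05, g=9.81):
    """Toy inverse statics for a 2D hair strand (chain of segments with
    angular springs, no contacts): pick rest angles so the TARGET pose is
    already in equilibrium, i.e. the spring torque k * (target - rest)
    cancels the gravity torque at each joint. Starting a simulation from
    the target pose then produces no sag."""
    angles = np.asarray(angles, dtype=float)
    # Absolute segment directions, joint positions and segment centres
    # of the target pose (strand clamped at the origin).
    abs_ang = np.cumsum(angles)
    steps = np.column_stack([seg_len * np.cos(abs_ang),
                             seg_len * np.sin(abs_ang)])
    joints = np.vstack([[0.0, 0.0], np.cumsum(steps, axis=0)])
    centres = 0.5 * (joints[:-1] + joints[1:])
    rest = np.empty_like(angles)
    for i in range(len(angles)):
        # Gravity torque about joint i from every segment hanging beyond it:
        # force (0, -m g) at each centre, lever arm is the horizontal offset.
        lever = centres[i:, 0] - joints[i, 0]
        torque = -mass * g * lever.sum()
        rest[i] = angles[i] - torque / k      # spring exactly cancels gravity
    return rest

# A gentle target curl: each joint bends 0.2 rad relative to its parent.
target = np.full(10, 0.2)
print(inverse_rest_shape(target))
```

The paper's actual contribution is making this inversion well posed in the presence of frictional hair-hair and hair-body contacts, where the equilibrium condition is no longer a simple per-joint torque balance.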

Collaboration


Dive into Joëlle Thollot's collaborations.

Top Co-Authors

Thomas Hurtut
École Polytechnique de Montréal

Adrien Bousseau
Centre national de la recherche scientifique

Cyril Soler
University of Grenoble