Network

External collaborations at the country level.

Hotspot

Research topics in which Fabrice Rousselle is active.

Publications

Featured research published by Fabrice Rousselle.


International Conference on Computer Graphics and Interactive Techniques | 2012

Adaptive rendering with non-local means filtering

Fabrice Rousselle; Claude Knaus; Matthias Zwicker

We propose a novel approach for image space adaptive sampling and filtering in Monte Carlo rendering. We use an iterative scheme composed of three steps. First, we adaptively distribute samples in the image plane. Second, we denoise the image using a non-linear filter. Third, we estimate the residual per-pixel error of the filtered rendering, and the error estimate guides the sample distribution in the next iteration. The effectiveness of our approach hinges on the use of a state-of-the-art image denoising technique, which we extend to an adaptive rendering framework. A key idea is to split the Monte Carlo samples into two buffers. This improves denoising performance and facilitates variance and error estimation. Our method relies only on the Monte Carlo samples, allowing us to handle arbitrary light transport and lens effects. In addition, it is robust to high noise levels and complex image content. We compare our approach to a state-of-the-art adaptive rendering technique based on adaptive bandwidth selection and demonstrate substantial improvements in terms of both numerical error and visual quality. Our framework is easy to implement on top of standard Monte Carlo renderers and it incurs little computational overhead.
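
For illustration, a minimal Python sketch of the three-step loop described above. The renderer and the non-local means filter are passed in as callables because neither is specified here, and the per-pixel error estimate from the two half buffers is a simplified stand-in for the paper's estimator; this is not the authors' implementation.

    import numpy as np

    def adaptive_render(render_samples, denoise, shape, iterations=4, spp=8):
        # Sketch only. render_samples(density) -> (color_sum, sample_count);
        # denoise(noisy, guide) -> filtered image, e.g. a non-local means filter.
        h, w = shape
        density = np.full((h, w), float(spp))               # uniform first pass
        sum_a = np.zeros((h, w, 3)); n_a = np.zeros((h, w))
        sum_b = np.zeros((h, w, 3)); n_b = np.zeros((h, w))
        for _ in range(iterations):
            # 1) Distribute samples, split evenly into two half buffers.
            s, n = render_samples(density / 2); sum_a += s; n_a += n
            s, n = render_samples(density / 2); sum_b += s; n_b += n
            img_a = sum_a / np.maximum(n_a, 1)[..., None]
            img_b = sum_b / np.maximum(n_b, 1)[..., None]
            # 2) Denoise each half buffer, guiding the filter with the other
            #    half so the filter weights are independent of its own noise.
            den_a = denoise(img_a, guide=img_b)
            den_b = denoise(img_b, guide=img_a)
            # 3) Estimate residual per-pixel error from the two filtered halves
            #    and let it drive the sample distribution of the next iteration.
            err = np.mean((den_a - den_b) ** 2, axis=-1)
            density = spp * err * (h * w) / (err.sum() + 1e-12)
        return 0.5 * (den_a + den_b)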


International Conference on Computer Graphics and Interactive Techniques | 2011

Adaptive sampling and reconstruction using greedy error minimization

Fabrice Rousselle; Claude Knaus; Matthias Zwicker

We introduce a novel approach for image space adaptive sampling and reconstruction in Monte Carlo rendering. We greedily minimize relative mean squared error (MSE) by iterating over two steps. First, given a current sample distribution, we optimize over a discrete set of filters at each pixel and select the filter that minimizes the pixel error. Next, given the current filter selection, we distribute additional samples to further reduce MSE. The success of our approach hinges on a robust technique to select suitable per pixel filters. We develop a novel filter selection procedure that robustly solves this problem even with noisy input data. We evaluate our approach using effects such as motion blur, depth of field, interreflections, etc. We provide a comparison to a state-of-the-art algorithm based on wavelet shrinkage and show that we achieve significant improvements in numerical error and visual image quality. Our approach is simple to implement, requires a single user parameter, and is compatible with standard Monte Carlo rendering.
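
A rough sketch of the per-pixel filter selection step, assuming a small bank of filters, each with an assumed residual-variance factor. The error proxy below (clamped bias estimate plus scaled variance) stands in for the paper's robust relative-MSE estimate and is only illustrative.

    import numpy as np

    def select_per_pixel_filters(noisy, variance, filter_bank):
        # noisy, variance: (H, W, 3) image and per-pixel sample variance.
        # filter_bank: list of (filter_fn, var_scale) pairs, e.g. Gaussians of
        # increasing bandwidth with an assumed variance-reduction factor each.
        errors, candidates = [], []
        for filter_fn, var_scale in filter_bank:
            filtered = filter_fn(noisy)
            bias2 = np.maximum((filtered - noisy) ** 2 - variance, 0.0)
            mse = np.mean(bias2 + var_scale * variance, axis=-1)
            candidates.append(filtered)
            errors.append(mse)
        best = np.argmin(np.stack(errors), axis=0)          # per-pixel winner
        out = np.zeros_like(noisy)
        for k, filtered in enumerate(candidates):
            out[best == k] = filtered[best == k]
        return out, best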


Computer Graphics Forum | 2015

Recent Advances in Adaptive Sampling and Reconstruction for Monte Carlo Rendering

Matthias Zwicker; Wojciech Jarosz; Jaakko Lehtinen; Bochang Moon; Ravi Ramamoorthi; Fabrice Rousselle; Pradeep Sen; Cyril Soler; Sung-Eui Yoon

Monte Carlo integration is firmly established as the basis for most practical realistic image synthesis algorithms because of its flexibility and generality. However, the visual quality of rendered images often suffers from estimator variance, which appears as visually distracting noise. Adaptive sampling and reconstruction algorithms reduce variance by controlling the sampling density and aggregating samples in a reconstruction step, possibly over large image regions. In this paper we survey recent advances in this area. We distinguish between “a priori” methods that analyze the light transport equations and derive sampling rates and reconstruction filters from this analysis, and “a posteriori” methods that apply statistical techniques to sets of samples to drive the adaptive sampling and reconstruction process. They typically estimate the errors of several reconstruction filters, and select the best filter locally to minimize error. We discuss advantages and disadvantages of recent state‐of‐the‐art techniques, and provide visual and quantitative comparisons. Some of these techniques are proving useful in real‐world applications, and we aim to provide an overview for practitioners and researchers to assess these approaches. In addition, we discuss directions for potential further improvements.
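
The "a posteriori" strategy can be summarized with the standard per-pixel error decomposition that these estimators approximate from the samples themselves (a textbook identity, not a formula tied to any single method in the survey):

    \mathrm{MSE}(F_p) = \mathrm{Var}(F_p) + \mathrm{Bias}(F_p)^2,
    \qquad
    F_p^{*} = \arg\min_{F \in \mathcal{F}} \widehat{\mathrm{MSE}}(F_p),

where F_p is the value produced at pixel p by a candidate reconstruction filter F from a discrete set \mathcal{F}, and the hat denotes an estimate computed from the samples.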


Computer Graphics Forum | 2013

Robust Denoising using Feature and Color Information

Fabrice Rousselle; Marco Manzi; Matthias Zwicker

We propose a method that robustly combines color and feature buffers to denoise Monte Carlo renderings. On one hand, feature buffers, such as per-pixel normals, textures, or depth, are effective in determining denoising filters because features are highly correlated with rendered images. Filters based solely on features, however, are prone to blurring image details that are not well represented by the features. On the other hand, color buffers represent all details, but they may be less effective for determining filters because they are contaminated by the very noise that is to be removed. We propose to obtain filters using a combination of color and feature buffers in an NL-means and cross-bilateral filtering framework. We determine a robust weighting of colors and features using a SURE-based error estimate. We show significant improvements in subjective and quantitative errors compared to the previous state of the art. We also demonstrate adaptive sampling and space-time filtering for animations.
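
For reference, the per-pixel, per-channel form of Stein's unbiased risk estimate that underlies this kind of error estimation, assuming the noisy color c_p carries Gaussian noise with known variance \sigma_p^2 (the paper's exact estimator may differ in details):

    \mathrm{SURE}(F_p) = (F_p - c_p)^2 - \sigma_p^2 + 2\,\sigma_p^2\,\frac{\partial F_p}{\partial c_p},

whose expectation equals the expected squared error of the filtered value F_p, so comparing SURE values of color-weighted and feature-weighted filters gives a data-driven way to blend them.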


ACM Transactions on Graphics | 2017

Kernel-predicting convolutional networks for denoising Monte Carlo renderings

Steve Bako; Thijs Vogels; Brian McWilliams; Mark Meyer; Jan Novák; Alex Harvill; Pradeep Sen; Tony DeRose; Fabrice Rousselle

Regression-based algorithms have been shown to be effective at denoising Monte Carlo (MC) renderings by leveraging the renderer's inexpensive by-products (e.g., feature buffers). However, when using higher-order models to handle complex cases, these techniques often overfit to noise in the input. For this reason, supervised learning methods have been proposed that train on a large collection of reference examples, but they use explicit filters that limit their denoising ability. To address these problems, we propose a novel, supervised learning approach that allows the filtering kernel to be more complex and general by leveraging a deep convolutional neural network (CNN) architecture. In one embodiment of our framework, the CNN directly predicts the final denoised pixel value as a highly non-linear combination of the input features. In a second approach, we introduce a novel kernel-prediction network which uses the CNN to estimate the local weighting kernels used to compute each denoised pixel from its neighbors. We train and evaluate our networks on production data and observe improvements over state-of-the-art MC denoisers, showing that our methods generalize well to a variety of scenes. We conclude by analyzing various components of our architecture and identify areas for further research in deep learning for MC denoising.
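
A sketch of the kernel-prediction output stage, assuming the network emits k x k raw weights per pixel. The softmax normalization and the weighted neighborhood sum are the core of the idea; the CNN itself is omitted, and the shapes are assumptions of this illustration.

    import numpy as np

    def apply_predicted_kernels(noisy, kernel_logits):
        # noisy:         (H, W, 3) noisy radiance.
        # kernel_logits: (H, W, k*k) raw per-pixel network outputs.
        h, w, _ = noisy.shape
        k = int(round(kernel_logits.shape[-1] ** 0.5))
        r = k // 2
        # Softmax yields non-negative kernels that sum to one per pixel.
        logits = kernel_logits - kernel_logits.max(axis=-1, keepdims=True)
        weights = np.exp(logits)
        weights /= weights.sum(axis=-1, keepdims=True)
        padded = np.pad(noisy, ((r, r), (r, r), (0, 0)), mode='edge')
        out = np.zeros_like(noisy)
        idx = 0
        for dy in range(-r, r + 1):
            for dx in range(-r, r + 1):
                neighbor = padded[r + dy:r + dy + h, r + dx:r + dx + w]
                out += weights[..., idx:idx + 1] * neighbor
                idx += 1
        return out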


Eurographics | 2016

Nonlinearly weighted first-order regression for denoising Monte Carlo renderings

Benedikt Bitterli; Fabrice Rousselle; Bochang Moon; Jose A. Iglesias-Guitian; David Adler; Kenny Mitchell; Wojciech Jarosz; Jan Novák

We address the problem of denoising Monte Carlo renderings by studying existing approaches and proposing a new algorithm that yields state-of-the-art performance on a wide range of scenes. We analyze existing approaches from a theoretical and empirical point of view, relating the strengths and limitations of their corresponding components with an emphasis on production requirements. The observations from our analysis inform the design of our new filter, which offers high-quality results and stable performance. A key observation of our analysis is that using auxiliary buffers (normal, albedo, etc.) to compute the regression weights greatly improves the robustness of zero-order models, but can be detrimental to first-order models. Consequently, our filter performs a first-order regression leveraging a rich set of auxiliary buffers only when fitting the data, and, unlike recent works, considers the pixel color alone when computing the regression weights. We further improve the quality of our output by using a collaborative denoising scheme. Lastly, we introduce a general mean squared error estimator, which can handle the collaborative nature of our filter and its nonlinear weights, to automatically set the bandwidth of our regression kernel.
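
A per-pixel sketch of the central design choice: the auxiliary features enter only the first-order (affine) fit, while the regression weights would be computed from pixel colors alone. The shapes and the relative-feature convention are assumptions of this illustration, not the paper's code.

    import numpy as np

    def first_order_fit(colors, features, weights):
        # colors:   (N, 3) noisy colors of the N pixels in a window.
        # features: (N, F) auxiliary buffers (normal, albedo, depth, ...),
        #           given relative to the center pixel's features.
        # weights:  (N,)   regression weights, e.g. derived from pixel colors.
        X = np.hstack([np.ones((features.shape[0], 1)), features])
        W = np.diag(weights)
        # Weighted least squares: (X^T W X) beta = X^T W colors.
        beta, *_ = np.linalg.lstsq(X.T @ W @ X, X.T @ W @ colors, rcond=None)
        # With relative features, the constant term is the fit at the center.
        return beta[0]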


Computer Graphics Forum | 2015

Recent Advances in Facial Appearance Capture

Oliver Klehm; Fabrice Rousselle; Marios Papas; Derek Bradley; Christophe Hery; Bernd Bickel; Wojciech Jarosz; Thabo Beeler

Facial appearance capture is now firmly established within academic research and used extensively across various application domains, perhaps most prominently in the entertainment industry through the design of virtual characters in video games and films. While significant progress has occurred over the last two decades, no single survey currently exists that discusses the similarities, differences, and practical considerations of the available appearance capture techniques as applied to human faces. A central difficulty of facial appearance capture is the way light interacts with skin—which has a complex multi‐layered structure—and the interactions that occur below the skin surface can, by definition, only be observed indirectly. In this report, we distinguish between two broad strategies for dealing with this complexity. “Image‐based methods” try to exhaustively capture the exact face appearance under different lighting and viewing conditions, and then render the face through weighted image combinations. “Parametric methods” instead fit the captured reflectance data to some parametric appearance model used during rendering, allowing for a more lightweight and flexible representation but at the cost of potentially increased rendering complexity or inexact reproduction. The goal of this report is to provide an overview that can guide practitioners and researchers in assessing the tradeoffs between current approaches and identifying directions for future advances in facial appearance capture.
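
As an illustration of the "image-based" strategy (a generic relighting identity, not a result of this report): given images R_k of the face captured under a set of basis lighting conditions, a new image under target lighting with coefficients L_k in that basis is formed as a weighted combination,

    I(p) \;=\; \sum_{k} L_k \, R_k(p),

which is exact for a static face under distant lighting by linearity of light transport, but requires densely capturing the R_k.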


Eurographics | 2015

Path-space motion estimation and decomposition for robust animation filtering

Henning Zimmer; Fabrice Rousselle; Wenzel Jakob; Oliver Wang; David Adler; Wojciech Jarosz; Olga Sorkine-Hornung; Alexander Sorkine-Hornung

Renderings of animation sequences with physics‐based Monte Carlo light transport simulations are exceedingly costly to generate frame‐by‐frame, yet much of this computation is highly redundant due to the strong coherence in space, time and among samples. A promising approach pursued in prior work entails subsampling the sequence in space, time, and number of samples, followed by image‐based spatio‐temporal upsampling and denoising. These methods can provide significant performance gains, though major issues remain: firstly, in a multiple scattering simulation, the final pixel color is the composite of many different light transport phenomena, and this conflicting information causes artifacts in image‐based methods. Secondly, motion vectors are needed to establish correspondence between the pixels in different frames, but it is unclear how to obtain them for most kinds of light paths (e.g. an object seen through a curved glass panel).
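
The reuse step that depends on those motion vectors is simple; the hard part, which this paper targets, is obtaining valid vectors per light-transport component. A nearest-neighbor warp under an assumed (H, W, 2) motion-vector layout might look like this sketch:

    import numpy as np

    def reproject_previous_frame(prev_frame, motion_vectors):
        # prev_frame:     (H, W, 3) previous frame.
        # motion_vectors: (H, W, 2) offsets (dx, dy) mapping each current pixel
        #                 to its position in the previous frame (assumed layout).
        h, w, _ = prev_frame.shape
        ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing='ij')
        src_x = np.clip(np.round(xs + motion_vectors[..., 0]).astype(int), 0, w - 1)
        src_y = np.clip(np.round(ys + motion_vectors[..., 1]).astype(int), 0, h - 1)
        return prev_frame[src_y, src_x]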


International Conference on Computer Graphics and Interactive Techniques | 2014

Improved sampling for gradient-domain metropolis light transport

Marco Manzi; Fabrice Rousselle; Markus Kettunen; Jaakko Lehtinen; Matthias Zwicker

We present a generalized framework for gradient-domain Metropolis rendering, and introduce three techniques to reduce sampling artifacts and variance. The first one is a heuristic weighting strategy that combines several sampling techniques to avoid outliers. The second one is an improved mapping to generate offset paths required for computing gradients. Here we leverage the properties of manifold walks in path space to cancel out singularities. Finally, the third technique introduces generalized screen space gradient kernels. This approach aligns the gradient kernels with image structures such as texture edges and geometric discontinuities to obtain sparser gradients than with the conventional gradient kernel. We implement our framework on top of an existing Metropolis sampler, and we demonstrate significant improvements in visual and numerical quality of our results compared to previous work.
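
For context, gradient-domain methods of this kind recover the final image from the sampled primal image \tilde{I} and the sampled gradients G by a screened Poisson reconstruction (the general formulation; weighting conventions vary between papers):

    I^{*} \;=\; \arg\min_{I} \; \alpha^{2}\,\lVert I - \tilde{I} \rVert^{2} \;+\; \lVert \nabla I - G \rVert^{2},

where \nabla denotes the (possibly generalized) screen-space gradient kernel and \alpha balances fidelity to the primal image against fidelity to the gradients.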


International Conference on Computer Graphics and Interactive Techniques | 2016

Image-space control variates for rendering

Fabrice Rousselle; Wojciech Jarosz; Jan Novák

We explore the theory of integration with control variates in the context of rendering. Our goal is to optimally combine multiple estimators using their covariances. We focus on two applications, re-rendering and gradient-domain rendering, where we exploit coherence between temporally and spatially adjacent pixels. We propose an image-space (iterative) reconstruction scheme that employs control variates to reduce variance. We show that recent works on scene editing and gradient-domain rendering can be directly formulated as control-variate estimators, despite using seemingly different approaches. In particular, we demonstrate the conceptual equivalence of screened Poisson image reconstruction and our iterative reconstruction scheme. Our composite estimators offer practical and simple solutions that improve upon the current state of the art for the two investigated applications.
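
The classical control-variate estimator into which these applications are cast (a standard identity, stated here only for orientation):

    \hat{F}_{\mathrm{cv}} \;=\; \hat{F} + \alpha\,(\mu_G - \hat{G}),
    \qquad
    \alpha^{*} \;=\; \frac{\mathrm{Cov}(\hat{F}, \hat{G})}{\mathrm{Var}(\hat{G})},

where \hat{F} is the estimator of interest, \hat{G} a correlated estimator whose expectation \mu_G is known (for example, the converged image of the unedited scene in re-rendering), and \alpha^{*} the variance-optimal coefficient estimated from the covariances; the image-space reconstruction then combines such estimators across neighboring pixels.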

Collaboration

Top co-authors of Fabrice Rousselle:

David Adler (Walt Disney Animation Studios)
Pradeep Sen (University of California)
Mark Meyer (California Institute of Technology)