
Publication


Featured research published by Jean-François Lalonde.


International Conference on 3D Vision | 2014

Lighting Estimation in Outdoor Image Collections

Jean-François Lalonde; Iain A. Matthews

Large-scale structure-from-motion (SfM) algorithms have recently enabled the reconstruction of highly detailed 3-D models of our surroundings simply by taking photographs. In this paper, we propose to leverage these reconstruction techniques to automatically estimate the outdoor illumination conditions for each image in an SfM photo collection. We introduce a novel dataset of outdoor photo collections, where the ground-truth lighting conditions are known for each image. We also present an inverse rendering approach that recovers a high dynamic range estimate of the lighting conditions for each low dynamic range input image. Our novel database is used to quantitatively evaluate the performance of our algorithm. Results show that physically plausible lighting estimates can be faithfully recovered, both in terms of light direction and intensity.
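
As a rough illustration of the inverse-rendering idea, the sketch below fits a two-term sun-plus-sky model to the observed intensities of reconstructed scene points, searching candidate sun directions and solving a linear least-squares problem for the light intensities. The Lambertian-plus-ambient model, the z-up convention, and all names are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def estimate_sun_lighting(normals, albedo, intensities, sun_dirs):
    """Hypothetical sketch: brute-force search over candidate sun directions;
    for each, solve linear least squares for sun and sky intensity.
    normals:     (P, 3) unit surface normals from the SfM reconstruction
    albedo:      (P,)   per-point albedo
    intensities: (P,)   observed pixel intensities in one image
    sun_dirs:    (S, 3) candidate unit sun directions"""
    best = (np.inf, None, None)
    for d in sun_dirs:
        shading_sun = np.clip(normals @ d, 0.0, None)  # Lambertian sun term
        shading_sky = 0.5 * (1.0 + normals[:, 2])      # crude ambient sky term (z-up)
        A = albedo[:, None] * np.stack([shading_sun, shading_sky], axis=1)
        w, *_ = np.linalg.lstsq(A, intensities, rcond=None)
        err = np.sum((A @ w - intensities) ** 2)
        if err < best[0]:  # a real system would also constrain w >= 0
            best = (err, d, w)
    _, sun_dir, (sun_intensity, sky_intensity) = best
    return sun_dir, sun_intensity, sky_intensity
```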


Computer Vision and Pattern Recognition | 2017

Deep Outdoor Illumination Estimation

Yannick Hold-Geoffroy; Kalyan Sunkavalli; Sunil Hadap; Emiliano Gambaretto; Jean-François Lalonde

We present a CNN-based technique to estimate high-dynamic range outdoor illumination from a single low dynamic range image. To train the CNN, we leverage a large dataset of outdoor panoramas. We fit a low-dimensional physically-based outdoor illumination model to the skies in these panoramas giving us a compact set of parameters (including sun position, atmospheric conditions, and camera parameters). We extract limited field-of-view images from the panoramas, and train a CNN with this large set of input image–output lighting parameter pairs. Given a test image, this network can be used to infer illumination parameters that can, in turn, be used to reconstruct an outdoor illumination environment map. We demonstrate that our approach allows the recovery of plausible illumination conditions and enables photorealistic virtual object insertion from a single image. An extensive evaluation on both the panorama dataset and captured HDR environment maps shows that our technique significantly outperforms previous solutions to this problem.
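
A minimal stand-in for this kind of parameter-regression CNN is sketched below in PyTorch; the layer sizes, the 5-parameter output head, and the training snippet are assumptions for illustration, not the architecture used in the paper.

```python
import torch
import torch.nn as nn

class SkyParamNet(nn.Module):
    """Toy encoder that regresses illumination-model parameters
    (e.g., sun position, atmospheric conditions, camera exposure)
    from a limited field-of-view crop."""
    def __init__(self, n_params=5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(128, n_params)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

# One training step on (crop, fitted-parameter) pairs extracted from panoramas.
net = SkyParamNet()
opt = torch.optim.Adam(net.parameters(), lr=1e-4)
crops = torch.randn(8, 3, 128, 128)   # placeholder batch of LFOV crops
targets = torch.randn(8, 5)           # placeholder fitted sky parameters
opt.zero_grad()
loss = nn.functional.mse_loss(net(crops), targets)
loss.backward()
opt.step()
```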


ACM Transactions on Graphics | 2017

Learning to predict indoor illumination from a single image

Marc-André Gardner; Kalyan Sunkavalli; Ersin Yumer; Xiaohui Shen; Emiliano Gambaretto; Christian Gagné; Jean-François Lalonde

We propose an automatic method to infer high dynamic range illumination from a single, limited field-of-view, low dynamic range photograph of an indoor scene. In contrast to previous work that relies on specialized image capture, user input, and/or simple scene models, we train an end-to-end deep neural network that directly regresses a limited field-of-view photo to HDR illumination, without strong assumptions on scene geometry, material properties, or lighting. We show that this can be accomplished in a three-step process: 1) we train a robust lighting classifier to automatically annotate the location of light sources in a large dataset of LDR environment maps, 2) we use these annotations to train a deep neural network that predicts the location of lights in a scene from a single limited field-of-view photo, and 3) we fine-tune this network using a small dataset of HDR environment maps to predict light intensities. This allows us to automatically recover high-quality HDR illumination estimates that significantly outperform previous state-of-the-art methods. Consequently, using our illumination estimates for applications like 3D object insertion produces photo-realistic results that we validate via a perceptual user study.
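
The staged training is the interesting part: the light-location network from steps 1 and 2 is reused, and only part of it is adapted on the scarcer HDR data in step 3. A hypothetical PyTorch sketch of that fine-tuning stage follows; the two-head layout and all shapes are assumptions.

```python
import torch
import torch.nn as nn

class LightNet(nn.Module):
    """Toy two-head network: one head predicts light locations over a
    coarse environment-map grid, the other predicts their intensities."""
    def __init__(self, grid=32 * 64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 64, 7, stride=2, padding=3), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
        )
        self.location_head = nn.Linear(128 * 16, grid)   # trained on LDR annotations
        self.intensity_head = nn.Linear(128 * 16, grid)  # fine-tuned on HDR maps

    def forward(self, x):
        z = self.encoder(x)
        return self.location_head(z), self.intensity_head(z)

net = LightNet()
# Step 3: keep the encoder (trained on the large LDR dataset) fixed and
# adapt only the intensity head on the small HDR dataset.
for p in net.encoder.parameters():
    p.requires_grad = False
opt = torch.optim.Adam(net.intensity_head.parameters(), lr=1e-5)
```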


IEEE Transactions on Visualization and Computer Graphics | 2017

Deep 6-DOF Tracking

Mathieu Garon; Jean-François Lalonde

We present a temporal 6-DOF tracking method which leverages deep learning to achieve state-of-the-art performance on challenging datasets of real-world captures. Our method is both more accurate and more robust to occlusions than the best existing approaches, while maintaining real-time performance. To assess its efficacy, we evaluate our approach on several challenging RGBD sequences of real objects in a variety of conditions. Notably, we systematically evaluate robustness to occlusions through a series of sequences where the object to be tracked is increasingly occluded. Finally, our approach is purely data-driven and does not require any hand-designed features: robust tracking is automatically learned from data.
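
One plausible reading of a temporal deep tracker is a two-branch network that compares an RGBD patch rendered at the previous pose estimate with the currently observed patch and regresses the 6-DOF pose update. The sketch below follows that pattern; the architecture is an assumption, not the paper's network.

```python
import torch
import torch.nn as nn

class DeltaPoseNet(nn.Module):
    """Toy temporal tracker: shared-weight branches embed the rendered and
    observed RGBD patches; a regressor maps the pair to a 6-DOF update
    (3 translation components, 3 rotation components)."""
    def __init__(self):
        super().__init__()
        self.branch = nn.Sequential(
            nn.Conv2d(4, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.regressor = nn.Sequential(
            nn.Linear(128, 64), nn.ReLU(),
            nn.Linear(64, 6),
        )

    def forward(self, rendered_rgbd, observed_rgbd):
        z = torch.cat([self.branch(rendered_rgbd),
                       self.branch(observed_rgbd)], dim=1)
        return self.regressor(z)  # pose update applied each frame
```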


International Symposium on Mixed and Augmented Reality | 2016

Real-Time High Resolution 3D Data on the HoloLens

Mathieu Garon; Pierre-Olivier Boulet; Jean-Philippe Doiron; Luc Beaulieu; Jean-François Lalonde

The recent appearance of augmented reality headsets, such as the Microsoft HoloLens, marks a move from traditional 2D screens to 3D hologram-like interfaces. Striving to be completely portable, these devices unfortunately suffer from multiple limitations, such as the lack of real-time, high quality depth data, which severely restricts their use as research tools. To mitigate this restriction, we provide a simple method to augment a HoloLens headset with much higher resolution depth data. To do so, we calibrate an external depth sensor connected to a computer stick that communicates with the HoloLens headset in real time. To show how this system could be useful to the research community, we present an implementation of small object detection on the HoloLens device.
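
The core of such a setup is the extrinsic calibration that maps points from the external sensor into the HoloLens coordinate frame. Below is a generic back-projection sketch of that step, assuming a pinhole depth camera with intrinsics K and a 4x4 extrinsic transform recovered offline; it is not the paper's code.

```python
import numpy as np

def depth_to_hololens_frame(depth, K, T_holo_from_sensor):
    """Back-project an external depth image (meters, shape (h, w)) and map
    the resulting 3-D points into the HoloLens frame via the calibrated
    extrinsic transform T_holo_from_sensor (4x4). Returns an (N, 3) array."""
    h, w = depth.shape
    us, vs = np.meshgrid(np.arange(w), np.arange(h))
    z = depth.ravel()
    x = (us.ravel() - K[0, 2]) * z / K[0, 0]
    y = (vs.ravel() - K[1, 2]) * z / K[1, 1]
    pts = np.stack([x, y, z, np.ones_like(z)])    # 4 x N homogeneous points
    return (T_holo_from_sensor @ pts)[:3].T       # N x 3 in the HoloLens frame
```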


ACM Transactions on Applied Perception | 2015

The Perception of Lighting Inconsistencies in Composite Outdoor Scenes

Minghui Tan; Jean-François Lalonde; Lavanya Sharan; Holly E. Rushmeier; Carol O'Sullivan

It is known that humans can be insensitive to large changes in illumination. For example, if an object of interest is extracted from one digital photograph and inserted into another, we do not always notice the differences in illumination between the object and its new background. This inability to spot illumination inconsistencies is often the key to success in digital “doctoring” operations. We present a set of experiments in which we explore the perception of illumination in outdoor scenes. Our results can be used to predict when and why inconsistencies go unnoticed. Applications of the knowledge gained from our studies include smarter digital “cut-and-paste” and digital “fake” detection tools, and image-based composite scene backgrounds for layout and previsualization.


Computer Graphics Forum | 2018

From Faces to Outdoor Light Probes

Dan Andrei Calian; Jean-François Lalonde; Paulo F.U. Gotardo; Tomas Simon; Iain A. Matthews; Kenny Mitchell

Image-based lighting has allowed the creation of photo-realistic computer-generated content. However, it requires the accurate capture of the illumination conditions, a task that is neither easy nor intuitive, especially for the average digital photography enthusiast. This paper presents an approach to directly estimate an HDR light probe from a single LDR photograph, shot outdoors with a consumer camera, without specialized calibration targets or equipment. Our insight is to use a person's face as an outdoor light probe. To estimate HDR light probes from LDR faces we use an inverse rendering approach which employs data-driven priors to guide the estimation of realistic, HDR lighting. We build compact, realistic representations of outdoor lighting both parametrically and in a data-driven way, by training a deep convolutional autoencoder on a large dataset of HDR sky environment maps. Our approach can recover high-frequency, extremely high dynamic range lighting environments. For quantitative evaluation of lighting estimation accuracy and relighting accuracy, we also contribute a new database of face photographs with corresponding HDR light probes. We show that relighting objects with HDR light probes estimated by our method yields realistic results in a wide variety of settings.
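
The data-driven sky prior can be pictured as a convolutional autoencoder whose small latent code the inverse renderer optimizes over; the sketch below shows that structure under assumed sizes (64x128 latitude-longitude sky maps, a 64-dimensional code), which are illustrative rather than the paper's.

```python
import torch
import torch.nn as nn

class SkyAutoencoder(nn.Module):
    """Toy convolutional autoencoder over HDR sky environment maps
    (e.g., log-encoded radiance), compressing each map to a compact code."""
    def __init__(self, latent=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64x128 -> 32x64
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32x64 -> 16x32
            nn.Flatten(),
            nn.Linear(64 * 16 * 32, latent),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent, 64 * 16 * 32),
            nn.Unflatten(1, (64, 16, 32)),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),
        )

    def forward(self, sky):
        return self.decoder(self.encoder(sky))
```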


International Conference on 3D Vision | 2015

x-Hour Outdoor Photometric Stereo

Yannick Hold-Geoffroy; Jinsong Zhang; Paulo F.U. Gotardo; Jean-François Lalonde

While Photometric Stereo (PS) has long been confined to the lab, there has been a recent interest in applying this technique to reconstruct outdoor objects and scenes. Unfortunately, the most successful outdoor PS techniques typically require gathering either months of data, or waiting for a particular time of the year. In this paper, we analyze the illumination requirements for single-day outdoor PS to yield stable normal reconstructions, and determine that these requirements are often met in much less than a full day. In particular, we show that the right set of conditions for stable PS solutions may be observed in the sky within short time intervals of just over one hour. This work provides, for the first time, a detailed analysis of the factors affecting the performance of x-hour outdoor photometric stereo.
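
For context, classic Lambertian photometric stereo reduces to a per-pixel linear system, and the stability analyzed here is governed by how well-conditioned the stacked lighting matrix is over the capture window. A standard least-squares sketch:

```python
import numpy as np

def photometric_stereo(L, I):
    """Classic Lambertian photometric stereo.
    L: (F, 3) effective light directions scaled by intensity, one per frame
    I: (F, P) pixel intensities for P pixels across F frames
    Returns unit normals (3, P) and albedo (P,)."""
    G, *_ = np.linalg.lstsq(L, I, rcond=None)   # albedo-scaled normals
    albedo = np.linalg.norm(G, axis=0)
    normals = G / np.maximum(albedo, 1e-8)
    return normals, albedo

def stability(L):
    """Condition number of L: a quick proxy for how stable the normal
    reconstruction will be for a given set of sky conditions."""
    return np.linalg.cond(L)
```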


International Conference on Computational Photography | 2015

What Is a Good Day for Outdoor Photometric Stereo?

Yannick Hold-Geoffroy; Jinsong Zhang; Paulo F.U. Gotardo; Jean-François Lalonde

Photometric Stereo has been explored extensively in laboratory conditions since its inception. Recently, attempts have been made at applying this technique under natural outdoor lighting. Outdoor photometric stereo presents additional challenges, as one no longer has control over the illumination. In this paper, we explore the stability of surface normals reconstructed outdoors. We present a data-driven analysis based on a large database of outdoor HDR environment maps. Given a sequence of object images and corresponding sky maps captured in a single day, we investigate the natural factors that impact the uncertainty in the estimated surface normals. Quantitative evidence reveals strong dependencies between the expected accuracy and the normal orientation, cloud coverage, and sun elevation. In particular, we show that partially cloudy days yield greater accuracy than sunny days with clear skies; furthermore, high sun elevation, recommended in previous work, is in fact not necessarily optimal when more elaborate illumination models are taken into account.
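
One way to make this concrete: under a general sky, each frame sees an "effective" distant light given by the radiance-weighted mean light direction, and stacking those vectors over a day shows how well-conditioned the resulting photometric stereo problem is. A simplified sketch, with sampling layout and names assumed:

```python
import numpy as np

def mean_light_vector(sky_radiance, directions):
    """Radiance-weighted mean light direction for one sky map.
    sky_radiance: (N,) radiance per sky sample; directions: (N, 3) unit vectors."""
    v = (sky_radiance[:, None] * directions).sum(axis=0)
    return v / np.linalg.norm(v)

def day_conditioning(sky_sequence, directions):
    """Condition number of the effective lights over a day's sky maps:
    lower values suggest more stable normal estimates, which is why partially
    cloudy days can beat clear ones."""
    L = np.stack([mean_light_vector(s, directions) for s in sky_sequence])
    return np.linalg.cond(L)
```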


international conference on computational photography | 2015

Contrast-Use Metrics for Tone Mapping Images

Miguel Granados; Tunc Ozan Aydin; J. Rafael Tena; Jean-François Lalonde; Christian Theobalt

Existing tone mapping operators (TMOs) provide good results in well-lit scenes, but often perform poorly on images taken in low light conditions. In these scenes, noise is prevalent and gets amplified by TMOs, as they confuse contrast created by noise with contrast created by the scene. This paper presents a principled approach to producing tone-mapped images with less visible noise. For this purpose, we leverage established models of camera noise and human contrast perception to design two new quality scores: contrast waste and contrast loss, which measure image quality as a function of contrast allocation. To produce tone mappings with less visible noise, we apply these scores in two ways: first, to automatically tune the parameters of existing TMOs to reduce the amount of noise they produce; and second, to propose a new noise-aware tone curve.
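
To give a feel for the idea (and only that; the actual metrics build on calibrated camera-noise and contrast-perception models), the toy sketch below treats the tone curve's local slope as a contrast gain, counts gain spent pushing noise above a visibility threshold as "waste" and scene contrast compressed below it as "loss", then tunes a one-parameter operator accordingly. The gamma TMO and all thresholds are stand-ins.

```python
import numpy as np

def noise_aware_score(lum, gamma, noise_sigma, jnd=0.005):
    """Toy contrast-waste / contrast-loss trade-off for a gamma tone curve.
    lum: HDR luminance samples; noise_sigma: output-referred noise estimate;
    jnd: assumed visibility threshold on display contrast."""
    l = np.sort(lum.ravel()) / lum.max()
    gain = gamma * np.maximum(l, 1e-6) ** (gamma - 1.0)         # d(l**gamma)/dl
    waste = np.mean(np.maximum(gain * noise_sigma - jnd, 0.0))  # amplified noise
    loss = np.mean(np.maximum(jnd - gain * l, 0.0))             # crushed contrast
    return waste + loss

def tune_gamma(lum, noise_sigma):
    """Auto-tune the operator's parameter, mirroring how the scores are used
    to retune existing TMOs."""
    gammas = np.linspace(0.2, 1.0, 33)
    return min(gammas, key=lambda g: noise_aware_score(lum, g, noise_sigma))
```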

Collaboration


Dive into Jean-François Lalonde's collaborations.

Top Co-Authors

Lavanya Sharan

Massachusetts Institute of Technology
