
Publication


Featured research published by Rémi Cozot.


Proceedings of SPIE | 2012

Temporal coherency for video tone mapping

Ronan Boitard; Kadi Bouatouch; Rémi Cozot; Dominique Thoreau; Adrien Gruson

Tone Mapping Operators (TMOs) aim at converting real-world high dynamic range (HDR) images, captured with HDR cameras, into low dynamic range (LDR) images that can be displayed on LDR displays. Several TMOs have been proposed over the last decade, from simple global mappings to more complex ones simulating the human visual system. While these solutions generally work well for still pictures, they are usually less efficient for video sequences, where they become a source of visual artifacts. Only a few of them can be adapted to cope with a sequence of images. In this paper we present a major problem that a static TMO usually encounters when dealing with video sequences, namely temporal coherency. Indeed, as each tone mapper deals with each frame separately, no temporal coherency is taken into account, and hence the results can be quite disturbing for highly varying dynamics in a video. We propose a temporal coherency algorithm designed to analyze a video as a whole and, from its characteristics, adapt each tone-mapped frame of a sequence in order to preserve temporal coherency. This algorithm has been tested on a set of real as well as Computer Graphics Image (CGI) content and put in competition with several algorithms designed to be time-dependent. Results show that temporal coherency preserves the overall contrast in a sequence of images. Furthermore, this technique is applicable to any TMO, as it is a post-processing step that depends only on the TMO used.
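The post-processing idea described above can be sketched as a per-frame scaling anchored to a video-wide brightness measure. Everything below is a hedged simplification: the log-average "key" and the max-anchor scaling rule are common choices in the tone-mapping literature, not necessarily the paper's exact formulation.

```python
import numpy as np

def key(lum, eps=1e-6):
    # Log-average luminance, a standard measure of overall frame brightness.
    return float(np.exp(np.mean(np.log(lum + eps))))

def preserve_temporal_coherency(tm_frames, hdr_frames):
    """Scale each tone-mapped frame so its brightness relative to a
    video-wide anchor matches the HDR frame's brightness relative to the
    brightest HDR frame (a hypothetical simplification of the method)."""
    hdr_keys = [key(f) for f in hdr_frames]
    anchor = max(hdr_keys)  # anchor computed from the whole sequence
    return [tm * (k / anchor) for tm, k in zip(tm_frames, hdr_keys)]
```

Because the anchor is computed from the whole sequence, a frame that is dark in the HDR source stays dark after tone mapping instead of being independently renormalized to full brightness, which is exactly the coherency a frame-by-frame TMO loses.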


Journal of the Society for Information Display | 2007

Image display algorithms for high‐ and low‐dynamic‐range display devices

Erik Reinhard; Timo Kunkel; Yoann Marion; Jonathan Brouillat; Rémi Cozot; Kadi Bouatouch

With interest in high-dynamic-range imaging mounting, techniques for displaying such images on conventional display devices are gaining in importance. Conversely, high-dynamic-range display hardware is creating the need for display algorithms that prepare images for such displays. In this paper, the current state of the art in dynamic-range reduction and expansion is reviewed, and in particular the theoretical and practical need to structure tone reproduction as a combination of a forward and a reverse pass is discussed.
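The forward/reverse structure is easy to make concrete with a toy global operator: a Reinhard-style L/(1+L) curve has a closed-form inverse, so dynamic-range reduction and expansion are literally the two passes of one function pair (an illustration only, not material from the survey itself).

```python
import numpy as np

def forward_tonemap(lum):
    # Reinhard-style global compression: maps [0, inf) into [0, 1).
    return lum / (1.0 + lum)

def reverse_tonemap(ldr):
    # Exact inverse of the forward curve, expanding back to HDR luminance.
    return ldr / (1.0 - ldr)
```

Any operator with such an invertible forward curve supports the forward-then-reverse structure the paper argues for: compress for an LDR display, expand for an HDR one.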


Signal Processing: Image Communication | 2014

Zonal brightness coherency for video tone mapping

Ronan Boitard; Rémi Cozot; Dominique Thoreau; Kadi Bouatouch

Tone Mapping Operators (TMOs) compress High Dynamic Range (HDR) content to address Low Dynamic Range (LDR) displays. While many solutions have been designed over the last decade, only a few of them can cope with video sequences. Indeed, these TMOs tone map each frame of a video sequence separately, which results in temporal incoherency. Two main types of temporal incoherency are usually considered: flickering artifacts and temporal brightness incoherency. While the reduction of flickering artifacts has been well studied, less work has been devoted to brightness incoherency. In this paper, we propose a method that aims at preserving spatio-temporal brightness coherency when tone mapping video sequences. Our technique computes HDR video zones which are constant throughout a sequence, based on the luminance of each pixel, and aims at preserving the brightness coherency between the brightest zone of the video and each other zone. This technique adapts to any TMO, and results show that it preserves spatio-temporal brightness coherency well. We validate our method using a subjective evaluation. In addition, unlike local TMOs, our method, when applied to still images, is capable of ensuring spatial brightness coherency. Finally, it also preserves the video fade effects commonly used in post-production.
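The zone-based anchoring can be sketched as follows. This is a hedged simplification: zones are formed here by simple luminance quantiles of a single frame, and each zone of the tone-mapped image is rescaled by its HDR mean relative to the brightest zone; the paper's actual zone computation spans the whole sequence.

```python
import numpy as np

def zonal_scale(tm_lum, hdr_lum, n_zones=3):
    """Per-zone brightness anchoring (hypothetical simplification):
    segment pixels by HDR luminance quantiles, then scale each zone of
    the tone-mapped luminance by its HDR mean relative to the brightest
    zone, so inter-zone brightness ratios survive tone mapping."""
    edges = np.quantile(hdr_lum, np.linspace(0.0, 1.0, n_zones + 1))
    zone = np.clip(np.searchsorted(edges, hdr_lum, side="right") - 1,
                   0, n_zones - 1)
    means = np.array([hdr_lum[zone == z].mean() for z in range(n_zones)])
    return tm_lum * (means[zone] / means.max())
```

The brightest zone is left untouched (scale 1) while darker zones are attenuated in proportion to their HDR brightness, which is the "brightness coherency between the brightest zone and each other zone" the abstract refers to.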


IEEE Transactions on Visualization and Computer Graphics | 2012

Design and Application of Real-Time Visual Attention Model for the Exploration of 3D Virtual Environments

Sébastien Hillaire; Anatole Lécuyer; Tony Regia-Corte; Rémi Cozot; Jérôme Royan; Gaspard Breton

This paper studies the design and application of a novel visual attention model designed to compute the user's gaze position automatically, i.e., without using a gaze-tracking system. The model we propose is specifically designed for real-time first-person exploration of 3D virtual environments. It is the first model adapted to this context which can compute, in real time, a continuous gaze point position instead of a set of 3D objects potentially observed by the user. To do so, contrary to previous models which use a mesh-based representation of visual objects, we introduce a representation based on surface elements. Our model also simulates visual reflexes and the cognitive processes which take place in the brain, such as the gaze behavior associated with first-person navigation in the virtual environment. Our visual attention model combines both bottom-up and top-down components to compute a continuous gaze point position on screen that, ideally, matches the user's. We conducted an experiment to study and compare the performance of our method with a state-of-the-art approach. Our results are significantly better, with accuracy gains sometimes exceeding 100 percent. This suggests that computing a gaze point in a 3D virtual environment in real time is possible and is a valid approach compared to object-based approaches. Finally, we expose different applications of our model when exploring virtual environments. We present different algorithms which can improve or adapt the visual feedback of virtual environments based on gaze information. We first propose a level-of-detail approach that heavily relies on multiple-texture sampling. We show that it is possible to use the gaze information of our visual attention model to increase visual quality where the user is looking, while maintaining a high refresh rate.
Second, we introduce the use of the visual attention model in three visual effects inspired by the human visual system, namely depth-of-field blur, camera motions, and dynamic luminance. All these effects are computed based on the simulated gaze of the user and are meant to improve users' sensations in future virtual reality applications.
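The final step of such a model, combining bottom-up and top-down components into one gaze point, can be sketched as a weighted sum of two maps followed by an argmax. This is a hypothetical reduction of the model: the map names, the linear blend, and the `alpha` weight are assumptions, not the paper's formulation.

```python
import numpy as np

def gaze_point(saliency, top_down, alpha=0.5):
    """Blend a bottom-up saliency map with a top-down relevance map
    (both HxW, arbitrary scale) and return the (row, col) of the most
    attended pixel as the estimated gaze point."""
    def norm(m):
        m = m - m.min()
        rng = m.max()
        return m / rng if rng > 0 else m
    combined = alpha * norm(saliency) + (1.0 - alpha) * norm(top_down)
    return np.unravel_index(np.argmax(combined), combined.shape)
```

A real-time implementation would run this per frame on GPU-computed maps, and the resulting point could directly drive the gaze-contingent effects (depth-of-field blur, dynamic luminance) described above.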


Computer Graphics Forum | 2010

Using a Visual Attention Model to Improve Gaze Tracking Systems in Interactive 3D Applications

Sébastien Hillaire; Gaspard Breton; Nizar Ouarti; Rémi Cozot; Anatole Lécuyer

This paper introduces the use of a visual attention model to improve the accuracy of gaze tracking systems. Visual attention models simulate the selective attention part of the human visual system. For instance, in a bottom‐up approach, a saliency map is defined for the image and gives an attention weight to every pixel of the image as a function of its colour, edge or intensity.
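A bottom-up saliency map of the kind mentioned here can be approximated, for the intensity channel alone, by centre-surround contrast: the difference between each pixel and its local mean. This is a minimal sketch; real saliency models add colour and edge channels at multiple scales.

```python
import numpy as np

def box_blur(img, r):
    # Box filter of radius r via padded 2D cumulative sums.
    p = np.pad(img, r, mode="edge")
    c = np.cumsum(np.cumsum(p, axis=0), axis=1)
    c = np.pad(c, ((1, 0), (1, 0)))
    n = 2 * r + 1
    return (c[n:, n:] - c[n:, :-n] - c[:-n, n:] + c[:-n, :-n]) / (n * n)

def intensity_saliency(img, r=2):
    # Centre-surround contrast: pixels differing from their local mean
    # receive a high attention weight, normalized to [0, 1].
    s = np.abs(img - box_blur(img, r))
    m = s.max()
    return s / m if m > 0 else s
```

In a full model this per-pixel attention weight would be combined with colour- and edge-based maps before being fused with the gaze-tracker estimate.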


Virtual Reality Software and Technology | 2010

A real-time visual attention model for predicting gaze point during first-person exploration of virtual environments

Sébastien Hillaire; Anatole Lécuyer; Tony Regia-Corte; Rémi Cozot; Jérôme Royan; Gaspard Breton

This paper introduces a novel visual attention model to compute the user's gaze position automatically, i.e., without using a gaze-tracking system. Our model is specifically designed for real-time first-person exploration of 3D virtual environments. It is the first model adapted to this context which can compute, in real time, a continuous gaze point position instead of a set of 3D objects potentially observed by the user. To do so, contrary to previous models which use a mesh-based representation of visual objects, we introduce a representation based on surface elements. Our model also simulates visual reflexes and the cognitive processes which take place in the brain, such as the gaze behavior associated with first-person navigation in the virtual environment. Our visual attention model combines bottom-up and top-down components to compute a continuous gaze point position on screen that, ideally, matches the user's. We have conducted an experiment to study and compare the performance of our method with a state-of-the-art approach. Our results are significantly better, with accuracy gains of more than 100%. This suggests that computing a gaze point in a 3D virtual environment in real time is possible and is a valid approach compared to object-based approaches.


Graphics Interface | 2016

Style Aware Tone Expansion for HDR Displays

Cambodge Bist; Rémi Cozot; Gérard Madec; Xavier Ducloux

The vast majority of video content existing today is in Standard Dynamic Range (SDR) format, and there is strong interest in upscaling this content for upcoming High Dynamic Range (HDR) displays. Tone expansion, or inverse tone mapping, converts SDR content into HDR format using Expansion Operators (EOs). In this paper, we show that current EOs do not perform as well when dealing with content of various lighting-style aesthetics. In addition, we present a series of perceptual user studies evaluating user preference for lighting style in HDR content. This study shows that tone expansion of stylized content takes the form of gamma correction, and we propose a method that adapts the gamma value to the style of the video. We validate our method through a subjective evaluation against state-of-the-art methods. Furthermore, our work targets 1000-nit HDR displays, and we present a framework positioning our method in conformance with existing SDR standards and upcoming HDR TV standards.
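Gamma-based tone expansion of the kind described above is, at its core, a single power curve mapped onto the display's peak luminance. The sketch below shows only that curve; how the gamma value is chosen from the content's lighting style is the paper's contribution and is not reproduced here.

```python
import numpy as np

def expand_gamma(sdr, gamma, peak_nits=1000.0):
    """Gamma-based tone expansion: map normalized SDR code values in
    [0, 1] to HDR luminance up to peak_nits. The gamma value would be
    selected per the content's lighting style (rule not reproduced)."""
    return peak_nits * np.clip(sdr, 0.0, 1.0) ** gamma
```

With gamma = 1 the mapping is a plain linear stretch to the display peak; larger gammas darken mid-tones and concentrate the expanded range in the highlights, which suits low-key stylized content.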


International Conference on Multimedia and Expo | 2014

Motion-guided quantization for video tone mapping

Ronan Boitard; Dominique Thoreau; Rémi Cozot; Kadi Bouatouch

Tone Mapping Operators (TMOs) transform High Dynamic Range (HDR) content to address Low Dynamic Range (LDR) displays. However, before reaching the end user, this content is usually compressed using a codec (coder-decoder) for broadcasting or storage purposes. Achieving the best trade-off between rendering and compression efficiency is of prime importance. Any TMO includes a rounding quantization to convert floating-point values to integer ones. In this work, we propose to modify this quantization to increase the compression efficiency of the tone-mapped content. By using motion compensation, our technique preserves the rendering intent of the TMO while maximizing the correlation between successive frames. Experimental results show savings of up to 12% of the total bit-rate, with an average bit-rate reduction of 8.5% across all test sequences. We also show that our technique can be applied to other applications such as denoising.
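The core idea, steering the rounding quantization with a motion-compensated prediction, can be sketched per pixel as a floor/ceil choice. This is a loose interpretation: `predicted` stands for the motion-compensated previous coded frame, and the paper's actual decision rule is not reproduced here.

```python
import numpy as np

def motion_guided_round(values, predicted):
    """Instead of always rounding to nearest, pick floor or ceil per
    pixel so the integer result stays closer to the motion-compensated
    prediction. Either choice stays within the one-code-value rounding
    tolerance, so the TMO's rendering intent is preserved while
    frame-to-frame residuals shrink (hypothetical simplification)."""
    lo, hi = np.floor(values), np.ceil(values)
    pick_hi = np.abs(hi - predicted) < np.abs(lo - predicted)
    return np.where(pick_hi, hi, lo).astype(np.int32)
```

Smaller residuals between a frame and its motion-compensated prediction are exactly what a hybrid video codec spends fewer bits on, which is where the reported bit-rate savings would come from.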


The Visual Computer | 2017

High-dynamic-range image recovery from flash and non-flash image pairs

Hristina Hristova; Olivier Le Meur; Rémi Cozot; Kadi Bouatouch

In this paper, we propose a novel method for creating HDR images from only two images: a flash and a non-flash image. Our method consists of two main steps, namely brightness gamma correction and a bi-local chromatic adaptation transform (CAT). The brightness gamma correction performs a series of increases and decreases of the non-flash brightness and yields multiple images with various exposure values. The bi-local CAT enhances the quality of each computed image by recovering missing details, using information from the flash image. The final multi-exposure images are then merged together to compute an HDR image. An evaluation shows that our HDR images, obtained using only two LDR images, are close to HDR images obtained by combining five manually taken multi-exposure images. Our method does not require the use of a tripod and is suitable for images of non-still subjects, such as people or candle flames.
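The first step, synthesizing a multi-exposure stack from the non-flash image via gamma correction and merging it, can be sketched as below. The specific gamma values and the hat-weight merge are assumptions standing in for the paper's brightness gamma correction and a proper camera-response-based merge; the bi-local CAT detail-recovery step is not reproduced.

```python
import numpy as np

def exposure_stack(lum, gammas=(0.5, 1.0, 2.0)):
    # Brightness gamma correction: brighten (gamma < 1) and darken
    # (gamma > 1) the non-flash image to synthesize virtual exposures.
    lum = np.clip(lum, 0.0, 1.0)
    return [lum ** g for g in gammas]

def merge_hdr(stack, gammas=(0.5, 1.0, 2.0)):
    # Naive weighted merge: undo each gamma and average with hat
    # weights that favour well-exposed mid-tones in each virtual frame.
    acc, wsum = 0.0, 0.0
    for img, g in zip(stack, gammas):
        w = 1.0 - np.abs(2.0 * img - 1.0)  # hat weight, peak at 0.5
        acc = acc + w * img ** (1.0 / g)
        wsum = wsum + w
    return acc / np.maximum(wsum, 1e-6)
```

Since every virtual exposure comes from one handheld shot, the stack is perfectly registered by construction, which is why no tripod is needed even for moving subjects.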


Computers & Graphics | 2017

Tone expansion using lighting style aesthetics

Cambodge Bist; Rémi Cozot; Gérard Madec; Xavier Ducloux

High Dynamic Range (HDR) is the latest video format for display technology, and there is a strong industrial effort to deploy an HDR-capable ecosystem in the near future. However, most existing video content today is in Standard Dynamic Range (SDR) format, and there is a growing need to upscale this content for HDR displays. Tone expansion, also known as inverse tone mapping, converts SDR content into HDR format using Expansion Operators (EOs). In this paper, we show that current state-of-the-art EOs do not preserve artistic intent when dealing with content of various lighting-style aesthetics. Furthermore, we present a series of subjective user studies evaluating user preference for various lighting styles as seen on HDR displays. This study shows that tone expansion of stylized content takes the form of gamma correction, and we propose a novel EO that adapts the gamma value to the intended style of the video. However, we also observe that a power-function-based expansion technique causes changes in color appearance. To solve this problem, we propose a simple color correction method that can be applied after tone expansion to emulate the intended colors in HDR. We validate our method through a perceptual evaluation against existing methods. In addition, our work targets 1000-nit HDR displays, and we present a framework aligning our method with existing SDR standards and the latest HDR TV standards.
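One common way to expand luminance with a power curve while correcting the resulting color-appearance shift is to expand only the luminance channel and rebuild RGB from attenuated chromaticity ratios. The sketch below follows that pattern; the `sat` exponent and the exact correction are assumptions, not the paper's method, and the luma weights are the standard BT.709 coefficients.

```python
import numpy as np

def expand_preserving_color(rgb, gamma, peak_nits=1000.0, sat=0.9):
    """Expand luminance with a gamma curve, then rebuild RGB by scaling
    the original chromaticity ratios, raised to a saturation exponent
    sat < 1 to counter the over-saturation a pure per-channel power
    expansion causes (hypothetical correction, for illustration)."""
    rgb = np.clip(rgb, 0.0, 1.0)
    lum = 0.2126 * rgb[..., 0] + 0.7152 * rgb[..., 1] + 0.0722 * rgb[..., 2]
    lum_hdr = peak_nits * lum ** gamma
    eps = 1e-6
    ratio = (rgb / (lum[..., None] + eps)) ** sat
    return ratio * lum_hdr[..., None]
```

Applying the power function to luminance only keeps hue stable by construction; the `sat` exponent then pulls the chromaticity ratios slightly toward neutral, mimicking the desaturation step such pipelines typically need.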

Collaboration


Dive into Rémi Cozot's collaboration.

Top Co-Authors

Kadi Bouatouch
University of Central Florida

Ronan Boitard
University of British Columbia