Publication


Featured research published by Gabriel Eilertsen.


Computer Graphics Forum | 2013

Evaluation of tone mapping operators for HDR video

Gabriel Eilertsen; Robert Wanat; Rafal Mantiuk; Jonas Unger

Eleven tone-mapping operators intended for video processing are analyzed and evaluated with camera-captured and computer-generated high-dynamic-range content. After optimizing the parameters of the operators in a formal experiment, we inspect and rate the artifacts (flickering, ghosting, temporal color consistency) and color rendition problems (brightness, contrast and color saturation) they produce. This allows us to identify major problems and challenges that video tone-mapping needs to address. Then, we compare the tone-mapping results in a pair-wise comparison experiment to identify the operators that, on average, can be expected to perform better than the others and to assess the magnitude of differences between the best performing operators.
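
A pair-wise comparison experiment of this kind is commonly analyzed by scaling the preference counts onto an interval scale. The sketch below shows one generic way to do this (a Thurstone Case V style scaling, assuming equal variances and no ties); it is not the authors' exact statistical analysis, and the counts in the example are hypothetical.

    import numpy as np
    from scipy.stats import norm

    def scale_pairwise(wins):
        # wins[i, j] = number of times operator i was preferred over operator j.
        total = wins + wins.T
        # Preference proportions; the diagonal has no data and defaults to 0.5,
        # and values are clipped to avoid infinite z-scores for unanimous pairs.
        p = np.where(total > 0, wins / np.maximum(total, 1), 0.5)
        p = np.clip(p, 0.01, 0.99)
        z = norm.ppf(p)                # proportions -> standard-normal distances
        scores = z.mean(axis=1)        # average distance to the other operators
        return scores - scores.min()   # shift so the weakest operator sits at 0

    # Hypothetical counts for three operators, 20 observers per pair.
    wins = np.array([[ 0, 14,  9],
                     [ 6,  0,  5],
                     [11, 15,  0]])
    print(scale_pairwise(wins))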


International Conference on Computer Graphics and Interactive Techniques | 2015

Real-time noise-aware tone mapping

Gabriel Eilertsen; Rafal Mantiuk; Jonas Unger

Real-time high quality video tone mapping is needed for many applications, such as digital viewfinders in cameras, display algorithms which adapt to ambient light, in-camera processing, rendering engines for video games and video post-processing. We propose a viable solution for these applications by designing a video tone-mapping operator that controls the visibility of the noise, adapts to display and viewing environment, minimizes contrast distortions, preserves or enhances image details, and can be run in real-time on an incoming sequence without any preprocessing. To our knowledge, no existing solution offers all these features. Our novel contributions are: a fast procedure for computing local display-adaptive tone-curves which minimize contrast distortions, a fast method for detail enhancement free from ringing artifacts, and an integrated video tone-mapping solution combining all the above features.
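
To make the shape of such a pipeline concrete, here is a minimal base/detail tone-mapping skeleton in the log-luminance domain. It is not the paper's operator: a Gaussian filter stands in for the ringing-free detail extraction, a fixed linear compression of the base layer stands in for the noise-aware, display-adaptive tone curves, and all parameter values are illustrative.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def tonemap_frame(hdr, exposure=1.0, contrast=0.3, detail_gain=1.2, sigma=8.0):
        # hdr: float array of shape (H, W, 3) with linear radiance values.
        lum = np.maximum(hdr.mean(axis=2) * exposure, 1e-6)
        log_lum = np.log10(lum)
        base = gaussian_filter(log_lum, sigma)       # stand-in for an edge-aware filter
        detail = log_lum - base                      # fine-scale detail layer
        compressed = contrast * (base - base.max())  # simple global curve on the base
        out_lum = 10.0 ** (compressed + detail_gain * detail)
        ratio = (out_lum / lum)[..., None]           # reapply colour ratios per pixel
        return np.clip(hdr * exposure * ratio, 0.0, 1.0)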


Eurographics | 2017

A comparative review of tone-mapping algorithms for high dynamic range video

Gabriel Eilertsen; Rafal Mantiuk; Jonas Unger

Tone-mapping constitutes a key component within the field of high dynamic range (HDR) imaging. Its importance is manifested in the vast number of tone-mapping methods that can be found in the literature, the result of more than two decades of active development in the area. Although these can accommodate most requirements for display of HDR images, new challenges arose with the advent of HDR video, calling for additional considerations in the design of tone-mapping operators (TMOs). Today, a range of TMOs exist that do support video material. We are now reaching a point where most camera-captured HDR videos can be prepared in high quality, without visible artifacts, within the constraints of a standard display device. In this report, we set out to summarize and categorize the research in tone-mapping as of today, distilling the most important trends and characteristics of the tone reproduction pipeline. While this gives a wide overview of the area, we then specifically focus on tone-mapping of HDR video and the problems this medium entails. First, we formulate the major challenges a video TMO needs to address. Then, we provide a description and categorization of each of the existing video TMOs. Finally, by constructing a set of quantitative measures, we evaluate the performance of a number of the operators, in order to indicate which can be expected to produce the fewest artifacts. This serves as a comprehensive reference, categorization and comparative assessment of the state of the art in tone-mapping for HDR video.


International Conference on Computer Graphics and Interactive Techniques | 2017

HDR image reconstruction from a single exposure using deep CNNs

Gabriel Eilertsen; Joel Kronander; Gyorgy Denes; Rafal Mantiuk; Jonas Unger

Camera sensors can only capture a limited range of luminance simultaneously, and in order to create high dynamic range (HDR) images a set of different exposures is typically combined. In this paper we address the problem of predicting information that has been lost in saturated image areas, in order to enable HDR reconstruction from a single exposure. We show that this problem is well-suited for deep learning algorithms, and propose a deep convolutional neural network (CNN) that is specifically designed taking into account the challenges in predicting HDR values. To train the CNN we gather a large dataset of HDR images, which we augment by simulating sensor saturation for a range of cameras. To further boost robustness, we pre-train the CNN on a simulated HDR dataset created from a subset of the MIT Places database. We demonstrate that our approach can reconstruct high-resolution, visually convincing HDR results in a wide range of situations, and that it generalizes well to the reconstruction of images captured with arbitrary and low-end cameras that use unknown camera response functions and post-processing. Furthermore, we compare to existing methods for HDR expansion, and show high-quality results also for image-based lighting. Finally, we evaluate the results in a subjective experiment performed on an HDR display. This shows that the reconstructed HDR images are visually convincing, with large improvements compared to existing methods.
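
The final compositing step of such a reconstruction can be sketched as below: keep the (naively) linearized camera image where it is well exposed and blend in the network prediction only near saturation. The inverse-gamma linearization, the soft threshold and the assumption that the network outputs log radiance are illustrative choices, not the paper's exact formulation.

    import numpy as np

    def blend_prediction(ldr, cnn_pred_log, gamma=2.2, threshold=0.95):
        # ldr: display-referred image in [0, 1]; cnn_pred_log: predicted log radiance.
        linear = ldr ** gamma                        # naive inverse camera curve
        maxc = ldr.max(axis=2, keepdims=True)
        # Per-pixel weight: 0 below the threshold, ramping to 1 at full saturation.
        alpha = np.clip((maxc - threshold) / (1.0 - threshold), 0.0, 1.0)
        return (1.0 - alpha) * linear + alpha * np.exp(cnn_pred_log)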


International Conference on Computer Graphics and Interactive Techniques | 2013

Survey and evaluation of tone mapping operators for HDR video

Gabriel Eilertsen; Jonas Unger; Robert Wanat; Rafal Mantiuk

This work presents a survey and a user evaluation of tone mapping operators (TMOs) for high dynamic range (HDR) video, i.e. TMOs that explicitly include a temporal model for processing variations in the input HDR images in the time domain. The main motivations behind this work are that: robust tone mapping is one of the key aspects of HDR imaging [Reinhard et al. 2006]; recent developments in sensor and computing technologies have now made it possible to capture HDR video, e.g. [Unger and Gustavson 2007; Tocci et al. 2011]; and, as shown by our survey, tone mapping for HDR video poses a set of completely new challenges compared to tone mapping for still HDR images. Furthermore, video tone mapping, though less studied, is highly important for a multitude of applications including gaming, cameras in mobile devices, adaptive display devices and movie post-processing.


International Conference on Computer Graphics and Interactive Techniques | 2014

Perceptually based parameter adjustments for video processing operations

Gabriel Eilertsen; Jonas Unger; Robert Wanat; Rafal Mantiuk

Extensive post-processing plays a central role in modern video production pipelines. A problem in this context is that many filters and processing operators are very sensitive to parameter settings ...


International Conference on Image Processing | 2016

A high dynamic range video codec optimized by large-scale testing

Gabriel Eilertsen; Rafal Mantiuk; Jonas Unger

While a number of existing high bit-depth video compression methods can potentially encode high dynamic range (HDR) video, few of them provide this capability. In this paper, we investigate techniques for adapting HDR video for this purpose. In a large-scale test on 33 HDR video sequences, we compare 2 video codecs, 4 luminance encoding techniques (transfer functions) and 3 color encoding methods, measuring quality in terms of two objective metrics, PU-MSSIM and HDR-VDP-2. From the results we design an open source HDR video encoder, optimized for the best compression performance given the techniques examined.
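
As an example of the kind of luminance encoding (transfer function) such a pipeline compares, the sketch below maps absolute luminance to integer code values with the SMPTE ST 2084 (PQ) function before handing frames to a standard codec. PQ is shown only as one representative transfer function; no claim is made here about which encoding performed best in the test.

    import numpy as np

    # SMPTE ST 2084 (PQ) constants.
    M1, M2 = 2610 / 16384, 2523 / 4096 * 128
    C1, C2, C3 = 3424 / 4096, 2413 / 4096 * 32, 2392 / 4096 * 32

    def pq_encode(luminance, bits=10):
        # luminance: absolute values in cd/m^2, up to 10 000.
        y = np.clip(luminance / 10000.0, 0.0, 1.0)
        v = ((C1 + C2 * y**M1) / (1.0 + C3 * y**M1)) ** M2
        return np.round(v * (2**bits - 1)).astype(np.uint16)

    def pq_decode(code, bits=10):
        # Inverse mapping from code values back to absolute luminance.
        v = code.astype(np.float64) / (2**bits - 1)
        t = v ** (1.0 / M2)
        y = np.maximum(t - C1, 0.0) / (C2 - C3 * t)
        return 10000.0 * y ** (1.0 / M1)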


Scandinavian Conference on Image Analysis | 2017

BriefMatch: Dense Binary Feature Matching for Real-Time Optical Flow Estimation

Gabriel Eilertsen; Per-Erik Forssén; Jonas Unger

Research in optical flow estimation has to a large extent focused on achieving the best possible quality with no regard to running time. Nevertheless, in a number of important applications speed is crucial. To address this problem we present BriefMatch, a real-time optical flow method that is suitable for live applications. The method combines binary features with the search strategy from PatchMatch in order to efficiently find a dense correspondence field between images. We show that the BRIEF descriptor provides better candidates (less outlier-prone) in shorter time, when compared to direct pixel comparisons and the Census transform. This allows us to achieve high-quality results from a simple filtering of the initially matched candidates. Currently, BriefMatch has the fastest running time on the Middlebury benchmark, while placing highest of all the methods that run in less than 0.5 s.
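
The two ingredients named in the abstract, binary BRIEF-style descriptors and matching under the Hamming distance, can be sketched as follows. This is a brute-force match at a handful of points, not the dense PatchMatch-style propagation BriefMatch uses, and the sampling pattern, patch size and descriptor length are illustrative.

    import numpy as np

    rng = np.random.default_rng(0)

    def brief_descriptors(gray, points, n_bits=256, patch=16):
        # Each bit compares the intensities at two random offsets around a point.
        offs = rng.integers(-patch // 2, patch // 2, size=(n_bits, 2, 2))
        h, w = gray.shape
        desc = np.zeros((len(points), n_bits), dtype=bool)
        for i, (y, x) in enumerate(points):
            a = gray[np.clip(y + offs[:, 0, 0], 0, h - 1),
                     np.clip(x + offs[:, 0, 1], 0, w - 1)]
            b = gray[np.clip(y + offs[:, 1, 0], 0, h - 1),
                     np.clip(x + offs[:, 1, 1], 0, w - 1)]
            desc[i] = a < b
        return desc

    def match_hamming(d1, d2):
        # Index of the nearest descriptor in d2 for every descriptor in d1.
        dist = (d1[:, None, :] ^ d2[None, :, :]).sum(axis=2)
        return dist.argmin(axis=1)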


International Conference on Image Processing | 2016

Real-time noise-aware tone-mapping and its use in luminance retargeting

Gabriel Eilertsen; Rafal Mantiuk; Jonas Unger

With the aid of tone-mapping operators, high dynamic range images can be mapped for reproduction on standard displays. However, when the display dynamic range and peak luminance are strongly restricted, the limitations of the human visual system have a significant impact on the visual appearance. In this paper, we use components from the real-time noise-aware tone-mapping operator to complement an existing method for perceptual matching of image appearance under different luminance levels. The refined luminance retargeting method improves subjective quality on a display with large limitations in dynamic range, as suggested by our subjective evaluation.


International Conference on Computer Graphics and Interactive Techniques | 2016

Luma HDRv: an open source high dynamic range video codec optimized by large-scale testing

Gabriel Eilertsen; Rafal Mantiuk; Jonas Unger

We present Luma HDRv, an open source solution for encoding of high dynamic range (HDR) video. The software makes use of techniques for adapting the HDR video for compression with a standard video codec. In the design of the encoder we perform a large-scale test, using 33 HDR video sequences to compare 2 video codecs, 4 luminance encoding techniques (transfer functions) and 3 color encoding methods. This serves both as an evaluation of existing techniques for encoding HDR luminance and color, and as a means to optimize the performance of Luma HDRv.

Collaboration


Dive into Gabriel Eilertsen's collaborations.

Top Co-Authors

Francesco Banterle, Istituto di Scienza e Tecnologie dell'Informazione
Gyorgy Denes, University of Cambridge