Publications


Featured research published by Jonas Unger.


International Conference on Computer Graphics and Interactive Techniques | 2005

Performance relighting and reflectance transformation with time-multiplexed illumination

Andreas Wenger; Andrew Gardner; Chris Tchou; Jonas Unger; Tim Hawkins; Paul E. Debevec

We present a technique for capturing an actor's live-action performance in such a way that the lighting and reflectance of the actor can be designed and modified in postproduction. Our approach is to illuminate the subject with a sequence of time-multiplexed basis lighting conditions, and to record these conditions with a high-speed video camera so that many conditions are recorded in the span of the desired output frame interval. We investigate several lighting bases for representing the sphere of incident illumination using a set of discrete LED light sources, and we estimate and compensate for subject motion using optical flow and image warping based on a set of tracking frames inserted into the lighting basis. To composite the illuminated performance into a new background, we include a time-multiplexed matte within the basis. We also show that the acquired data enables time-varying surface normals, albedo, and ambient occlusion to be estimated, which can be used to transform the actor's reflectance to produce both subtle and stylistic effects.
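
Because light transport is linear, any novel illumination spanned by the recorded basis can be synthesized as a weighted sum of the basis frames (after motion compensation). A minimal sketch of that core relighting step, with illustrative array shapes; it is not the paper's full pipeline:

```python
import numpy as np

def relight(basis_frames, light_weights):
    """Relight a performance frame as a weighted sum of basis images.

    basis_frames : (N, H, W, 3) array, one image per basis lighting condition
    light_weights: (N,) array, intensity of each basis light under the new
                   illumination (e.g. sampled from an HDR light probe)
    """
    basis_frames = np.asarray(basis_frames, dtype=np.float64)
    light_weights = np.asarray(light_weights, dtype=np.float64)
    # Linearity of light transport: the relit frame is a weighted
    # combination of the recorded basis lighting conditions.
    return np.tensordot(light_weights, basis_frames, axes=1)
```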


Eurographics | 2003

Capturing and rendering with incident light fields

Jonas Unger; Andreas Wenger; Tim Hawkins; Andrew Gardner; Paul E. Debevec

This paper presents a process for capturing spatially and directionally varying illumination from a real-world scene and using this lighting to illuminate computer-generated objects. We use two devices for capturing such illumination. In the first we photograph an array of mirrored spheres in high dynamic range to capture the spatially varying illumination. In the second, we obtain higher resolution data by capturing images with a high dynamic range omnidirectional camera as it traverses across a plane. For both methods we apply the light field technique to extrapolate the incident illumination to a volume. We render computer-generated objects as illuminated by this captured illumination using a custom shader within an existing global illumination rendering system. To demonstrate our technique we capture several spatially-varying lighting environments with spotlights, shadows, and dappled lighting and use them to illuminate synthetic scenes. We also show comparisons to real objects under the same illumination.
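
The step of extrapolating plane-captured illumination to a volume can be pictured as a ray lookup into the captured 4D data set. A toy nearest-neighbour version, with an assumed (position, direction) array layout that the paper does not prescribe:

```python
import numpy as np

def incident_radiance(ilf, point, direction, plane_size):
    """Look up incident radiance at a 3D point in a planar light field.

    ilf        : (P, P, D, D, 3) radiance captured on the plane z = 0;
                 axes 0-1 index capture position, axes 2-3 index incoming
                 direction, parametrized here by (dx, dy) in [-1, 1].
    point      : (3,) query position with point[2] > 0.
    direction  : (3,) unit vector from the point toward the plane (dz < 0).
    plane_size : side length of the square capture region, centred on origin.
    """
    P, _, D, _, _ = ilf.shape
    # Extrapolate to the volume: follow the ray back to the capture plane.
    t = point[2] / -direction[2]
    hit = point + t * direction
    u = int(np.clip((hit[0] / plane_size + 0.5) * (P - 1), 0, P - 1))
    v = int(np.clip((hit[1] / plane_size + 0.5) * (P - 1), 0, P - 1))
    a = int(np.clip((direction[0] + 1.0) / 2.0 * (D - 1), 0, D - 1))
    b = int(np.clip((direction[1] + 1.0) / 2.0 * (D - 1), 0, D - 1))
    return ilf[u, v, a, b]
```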


Computer Graphics Forum | 2013

Evaluation of tone mapping operators for HDR video

Gabriel Eilertsen; Robert Wanat; Rafal Mantiuk; Jonas Unger

Eleven tone-mapping operators intended for video processing are analyzed and evaluated with camera-captured and computer-generated high-dynamic-range content. After optimizing the parameters of the operators in a formal experiment, we inspect and rate the artifacts (flickering, ghosting, temporal color consistency) and color rendition problems (brightness, contrast and color saturation) they produce. This allows us to identify major problems and challenges that video tone-mapping needs to address. Then, we compare the tone-mapping results in a pair-wise comparison experiment to identify the operators that, on average, can be expected to perform better than the others and to assess the magnitude of differences between the best performing operators.
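
Pairwise-comparison results of this kind are commonly scaled to a one-dimensional quality axis with a Thurstone Case V model. The paper does not publish its analysis code; the sketch below shows only the generic scaling step:

```python
import numpy as np
from scipy.stats import norm

def scale_pairwise(wins):
    """Thurstone Case V scaling of a pairwise-comparison experiment.

    wins[i, j] = number of trials where operator i was preferred over j.
    Returns one quality score per operator on a common perceptual scale,
    anchored so the lowest-rated operator scores 0.
    """
    n = wins + wins.T                          # trials per pair
    p = np.where(n > 0, wins / np.maximum(n, 1), 0.5)
    p = np.clip(p, 0.01, 0.99)                 # keep z-scores finite
    z = norm.ppf(p)                            # preference -> distance
    scores = z.mean(axis=1)                    # Case V estimate
    return scores - scores.min()
```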


ACM Transactions on Graphics | 2012

BRDF models for accurate and efficient rendering of glossy surfaces

Joakim Löw; Joel Kronander; Anders Ynnerman; Jonas Unger

This article presents two new parametric models of the Bidirectional Reflectance Distribution Function (BRDF), one inspired by the Rayleigh-Rice theory for light scattering from optically smooth surfaces, and one inspired by micro-facet theory. The models represent scattering from a wide range of glossy surface types with high accuracy. In particular, they enable representation of types of surface scattering which previous parametric models have had trouble modeling accurately. In a study of the scattering behavior of measured reflectance data, we investigate what key properties are needed for a model to accurately represent scattering from glossy surfaces. We investigate different parametrizations and how well they match the behavior of measured BRDFs. We also examine the scattering curves which are represented in parametric models by different distribution functions. Based on the insights gained from the study, the new models are designed to provide accurate fits to the measured data. Importance sampling schemes are developed for the new models, enabling direct use in existing production pipelines. In the resulting renderings we show that the visual quality achieved by the models matches that of the measured data.
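
For context, a textbook microfacet specular BRDF (GGX distribution, Schlick Fresnel, Smith-style masking) illustrates the D * F * G structure that microfacet-inspired models build on; the paper's two parametric models themselves differ from this standard form:

```python
import numpy as np

def microfacet_brdf(n, l, v, alpha, f0):
    """Standard isotropic microfacet specular BRDF (not the paper's model).

    n, l, v : unit normal, light and view directions
    alpha   : surface roughness; f0 : Fresnel reflectance at normal incidence
    """
    h = (l + v) / np.linalg.norm(l + v)                # half vector
    nl = max(float(np.dot(n, l)), 1e-6)
    nv = max(float(np.dot(n, v)), 1e-6)
    nh = max(float(np.dot(n, h)), 1e-6)
    vh = max(float(np.dot(v, h)), 1e-6)
    a2 = alpha * alpha
    D = a2 / (np.pi * (nh * nh * (a2 - 1.0) + 1.0) ** 2)    # GGX NDF
    F = f0 + (1.0 - f0) * (1.0 - vh) ** 5                   # Schlick Fresnel
    k = alpha / 2.0                                          # Smith-style G
    G = (nl / (nl * (1 - k) + k)) * (nv / (nv * (1 - k) + k))
    return D * F * G / (4.0 * nl * nv)
```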


Electronic Imaging | 2007

High Dynamic Range Video for Photometric Measurement of Illumination

Jonas Unger; Stefan Gustavson

We describe the design and implementation of a high dynamic range (HDR) imaging system capable of capturing RGB color images with a dynamic range of 10,000,000 : 1 at 25 frames per second. We use a highly programmable camera unit with high throughput A/D conversion, data processing and data output. HDR acquisition is performed by multiple exposures in a continuous rolling shutter progression over the sensor. All the different exposures for one particular row of pixels are acquired head to tail within the frame time, which means that the time disparity between exposures is minimal, the entire frame time can be used for light integration and the longest exposure is almost the entire frame time. The system is highly configurable, and trade-offs are possible between dynamic range, precision, number of exposures, image resolution and frame rate.
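
Once the per-row exposures are captured, HDR assembly follows the standard weighted multi-exposure merge: each exposure yields a radiance estimate value/time, and mid-range samples are trusted most. A generic sketch of that merge (not the camera's on-chip processing):

```python
import numpy as np

def assemble_hdr(exposures, times):
    """Weighted multi-exposure HDR merge for linear sensor data.

    exposures : (N, ...) sensor values in [0, 1], one slice per exposure,
                all captured within one frame time as described above
    times     : (N,) exposure times
    """
    exposures = np.asarray(exposures, dtype=np.float64)
    t = np.asarray(times, dtype=np.float64).reshape(-1, *([1] * (exposures.ndim - 1)))
    # Hat weighting: trust mid-range samples, down-weight noisy darks,
    # and discard saturated values entirely.
    w = np.clip(1.0 - np.abs(2.0 * exposures - 1.0), 1e-4, None)
    w = np.where(exposures >= 1.0, 0.0, w)
    radiance = exposures / t                   # per-sample radiance estimate
    return (w * radiance).sum(axis=0) / np.maximum(w.sum(axis=0), 1e-9)
```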


IEEE Transactions on Visualization and Computer Graphics | 2012

Efficient Visibility Encoding for Dynamic Illumination in Direct Volume Rendering

Joel Kronander; Daniel Jönsson; Joakim Löw; Patric Ljung; Anders Ynnerman; Jonas Unger

We present an algorithm that enables real-time dynamic shading in direct volume rendering using general lighting, including directional lights, point lights, and environment maps. Real-time performance is achieved by encoding local and global volumetric visibility using spherical harmonic (SH) basis functions stored in an efficient multiresolution grid over the extent of the volume. Our method enables high-frequency shadows in the spatial domain, but is limited to a low-frequency approximation of visibility and illumination in the angular domain. In a first pass, level of detail (LOD) selection in the grid is based on the current transfer function setting. This enables rapid online computation and SH projection of the local spherical distribution of visibility information. Using a piecewise integration of the SH coefficients over the local regions, the global visibility within the volume is then computed. By representing the light sources using their SH projections, the integral over lighting, visibility, and isotropic phase functions can be efficiently computed during rendering. The utility of our method is demonstrated in several examples showing the generality and interactive performance of the approach.
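
The real-time shading step relies on SH orthonormality: with an isotropic phase function, the angular integral of lighting times visibility collapses to a dot product of coefficient vectors. A minimal sketch, with a Monte Carlo projection helper that assumes uniformly distributed sphere samples:

```python
import numpy as np

def project_sh(values, basis_at_dirs):
    """Monte Carlo SH projection.

    values        : (S,) function samples at uniformly distributed sphere
                    directions (e.g. local visibility, 0 or 1)
    basis_at_dirs : (S, K) the K SH basis functions evaluated at the same
                    directions
    """
    S = values.shape[0]
    return (4.0 * np.pi / S) * (basis_at_dirs.T @ values)

def sh_shade(vis_coeffs, light_coeffs):
    """In-scattered radiance for an isotropic phase function p = 1/(4*pi).

    By SH orthonormality, the spherical integral of L(w) * V(w) collapses
    to a dot product of coefficient vectors, which is what makes the
    shading step real-time.
    """
    return np.dot(vis_coeffs, light_coeffs) / (4.0 * np.pi)
```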


International Conference on Computational Photography | 2013

Unified HDR reconstruction from raw CFA data

Joel Kronander; Stefan Gustavson; Gerhard Bonnet; Jonas Unger

HDR reconstruction from multiple exposures poses several challenges. Previous HDR reconstruction techniques have considered debayering, denoising, resampling (alignment) and exposure fusion in several steps. We instead present a unifying approach, performing HDR assembly directly from raw sensor data in a single processing operation. Our algorithm includes a spatially adaptive HDR reconstruction based on fitting local polynomial approximations to observed sensor data, using a localized likelihood approach incorporating spatially varying sensor noise. We also present a realistic camera noise model adapted to HDR video. The method allows reconstruction to an arbitrary resolution and output mapping. We present an implementation in CUDA and show real-time performance for an experimental 4 Mpixel multi-sensor HDR video system. We further show that our algorithm has clear advantages over state-of-the-art methods, both in terms of flexibility and reconstruction quality.
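
The spatially adaptive reconstruction can be sketched as a locally weighted least-squares fit of a first-order polynomial to nearby raw samples, with weights combining a spatial kernel and the per-sample noise variance. An illustrative version (parameter names are ours, not the paper's):

```python
import numpy as np

def fit_local_poly(x, y, f, var, x0, y0, h):
    """Order-1 local polynomial fit of radiance at output location (x0, y0).

    x, y : coordinates of nearby raw CFA samples (same color channel)
    f    : radiance estimates at those samples (value / exposure / gain)
    var  : per-sample noise variance from the camera noise model
    h    : spatial smoothing bandwidth
    Solves a weighted least-squares problem; the fitted constant term is
    the reconstructed radiance at (x0, y0).
    """
    dx, dy = x - x0, y - y0
    # Weights: Gaussian spatial kernel, scaled by inverse noise variance.
    w = np.exp(-(dx**2 + dy**2) / (2 * h**2)) / np.maximum(var, 1e-12)
    X = np.stack([np.ones_like(dx), dx, dy], axis=1)    # [1, dx, dy]
    A = X.T @ (w[:, None] * X)
    b = X.T @ (w * f)
    coeffs = np.linalg.solve(A, b)
    return coeffs[0]    # value of the local polynomial at (x0, y0)
```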


Signal Processing: Image Communication | 2014

A unified framework for multi-sensor HDR video reconstruction

Joel Kronander; Stefan Gustavson; Gerhard Bonnet; Anders Ynnerman; Jonas Unger

One of the most successful approaches to modern high quality HDR-video capture is to use camera setups with multiple sensors imaging the scene through a common optical system. However, such systems pose several challenges for HDR reconstruction algorithms. Previous reconstruction techniques have considered debayering, denoising, resampling (alignment) and exposure fusion as separate problems. In contrast, in this paper we present a unifying approach, performing HDR assembly directly from raw sensor data. Our framework includes a camera noise model adapted to HDR video and an algorithm for spatially adaptive HDR reconstruction based on fitting of local polynomial approximations to observed sensor data. The method is easy to implement and allows reconstruction to an arbitrary resolution and output mapping. We present an implementation in CUDA and show real-time performance for an experimental 4 Mpixel multi-sensor HDR video system. We further show that our algorithm has clear advantages over existing methods, both in terms of flexibility and reconstruction quality.
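
The per-sample variances that weight such a reconstruction (as in the fitting sketch above) come from a radiometric noise model. A generic affine Poisson-Gaussian sketch of that kind of model; the paper's HDR-video model includes further terms:

```python
def sample_variance(d, gain, t, sigma_read, sigma_adc):
    """Variance of the radiance estimate d / (gain * t) for one raw sample.

    Assumes d ~ gain * Poisson(e * t) + read noise + ADC noise, so
    Var(d) ~= gain * d + gain**2 * sigma_read**2 + sigma_adc**2.
    Dividing by (gain * t)**2 gives the variance in radiance units,
    which makes samples from different sensors and exposures comparable.
    """
    return (gain * d + gain**2 * sigma_read**2 + sigma_adc**2) / (gain * t) ** 2
```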


Eurographics | 2004

A Real Time Light Probe

Jonas Unger; Stefan Gustavson; Mark Ollila; Mattias Johannesson

We present a novel system capable of capturing high dynamic range (HDR) Light Probes at video speed. Each Light Probe frame is built from an individual full set of exposures, all of which are captured within the frame time. The exposures are processed and assembled into a mantissa-exponent representation image within the camera unit before output, and then streamed to a standard PC. As an example, the system is capable of capturing Light Probe Images with a resolution of 512x512 pixels using a set of 10 exposures covering 15 f-stops at a frame rate of up to 25 final HDR frames per second. The system is built around commercial special-purpose camera hardware with on-chip programmable image processing logic and tightly integrated frame buffer memory, and the algorithm is implemented as custom downloadable microcode software.
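
A mantissa-exponent image stores each pixel as a small mantissa plus a power-of-two exponent. One plausible 8-bit-mantissa, 4-bit-exponent layout (the abstract does not specify the camera's exact on-chip format), spanning roughly the 15 f-stops quoted above:

```python
import numpy as np

def encode_me(value):
    """Pack a positive HDR value as (8-bit mantissa, 4-bit biased exponent).

    Decoding rule assumed here: value ~= (mantissa / 256) * 2**(exp - 8).
    """
    value = np.clip(value, 2.0 ** -9, (255.0 / 256.0) * 2.0 ** 7)
    m, e = np.frexp(value)                  # value = m * 2**e, m in [0.5, 1)
    mant = np.round(m * 256.0).clip(0, 255).astype(np.uint8)
    return mant, (e + 8).astype(np.uint8)   # biased exponent in [0, 15]

def decode_me(mant, exp):
    """Invert encode_me back to a linear HDR value."""
    return (mant.astype(np.float64) / 256.0) * 2.0 ** (exp.astype(int) - 8)
```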


International Conference on Computer Graphics and Interactive Techniques | 2015

Real-time noise-aware tone mapping

Gabriel Eilertsen; Rafal Mantiuk; Jonas Unger

Real-time high quality video tone mapping is needed for many applications, such as digital viewfinders in cameras, display algorithms which adapt to ambient light, in-camera processing, rendering engines for video games and video post-processing. We propose a viable solution for these applications by designing a video tone-mapping operator that controls the visibility of the noise, adapts to display and viewing environment, minimizes contrast distortions, preserves or enhances image details, and can be run in real-time on an incoming sequence without any preprocessing. To our knowledge, no existing solution offers all these features. Our novel contributions are: a fast procedure for computing local display-adaptive tone-curves which minimize contrast distortions, a fast method for detail enhancement free from ringing artifacts, and an integrated video tone-mapping solution combining all the above features.
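
The noise-aware part can be pictured as a base/detail decomposition in the log domain where the detail gain is throttled wherever detail sits close to the noise floor. Illustrative only; the paper's edge-stopping filter and display-adaptive tone curves are considerably more elaborate:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def noise_aware_detail(log_lum, sigma_noise, gain=2.0):
    """Base/detail split with a detail gain capped by the local noise level.

    log_lum     : 2D log-luminance image
    sigma_noise : estimated noise std in log-luminance units
    gain        : maximum detail amplification
    """
    base = gaussian_filter(log_lum, sigma=8.0)      # smooth base layer
    detail = log_lum - base
    snr = np.abs(detail) / (sigma_noise + 1e-9)     # local detail-to-noise
    # Full gain only where detail is well above the noise floor;
    # near the floor, pass detail through unamplified.
    g = 1.0 + (gain - 1.0) * np.clip(snr / 3.0, 0.0, 1.0)
    return base + g * detail
```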

Collaboration


Dive into Jonas Unger's collaborations.

Top Co-Authors

Andrew Gardner

University of Southern California


Paul E. Debevec

University of Southern California
