Stefan Gustavson
Linköping University
Publications
Featured research published by Stefan Gustavson.
Electronic Imaging | 2007
Jonas Unger; Stefan Gustavson
We describe the design and implementation of a high dynamic range (HDR) imaging system capable of capturing RGB color images with a dynamic range of 10,000,000 : 1 at 25 frames per second. We use a highly programmable camera unit with high throughput A/D conversion, data processing and data output. HDR acquisition is performed by multiple exposures in a continuous rolling shutter progression over the sensor. All the different exposures for one particular row of pixels are acquired head to tail within the frame time, which means that the time disparity between exposures is minimal, the entire frame time can be used for light integration and the longest exposure is almost the entire frame time. The system is highly configurable, and trade-offs are possible between dynamic range, precision, number of exposures, image resolution and frame rate.
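The exposure-fusion step described above can be sketched in a few lines. This is not the camera's actual microcode (which is not given here), only a minimal illustration of the principle: saturated samples are rejected, and the remaining exposures are averaged with weights proportional to exposure time, since longer exposures carry less photon noise.

```python
def assemble_hdr(exposures, times, sat=0.95):
    """Fuse multiple exposures into one radiance estimate per pixel.

    exposures: list of equally sized lists of normalized pixel values
               in [0, 1], one list per exposure time.
    times:     matching exposure times; the longest approaches the frame time.
    """
    n = len(exposures[0])
    out = []
    for p in range(n):
        num = den = 0.0
        for img, t in zip(exposures, times):
            v = img[p]
            if v < sat:               # reject clipped (saturated) samples
                num += t * (v / t)    # radiance estimate v/t, weighted by t
                den += t
        out.append(num / den if den > 0 else 0.0)
    return out
```

For a pixel that saturates in the long exposure, only the short exposure contributes; for a dark pixel, the long exposure dominates the weighted average.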
International Conference on Computational Photography | 2013
Joel Kronander; Stefan Gustavson; Gerhard Bonnet; Jonas Unger
HDR reconstruction from multiple exposures poses several challenges. Previous HDR reconstruction techniques have considered debayering, denoising, resampling (alignment) and exposure fusion in several steps. We instead present a unifying approach, performing HDR assembly directly from raw sensor data in a single processing operation. Our algorithm includes a spatially adaptive HDR reconstruction based on fitting local polynomial approximations to observed sensor data, using a localized likelihood approach incorporating spatially varying sensor noise. We also present a realistic camera noise model adapted to HDR video. The method allows reconstruction to an arbitrary resolution and output mapping. We present an implementation in CUDA and show real-time performance for an experimental 4 Mpixel multi-sensor HDR video system. We further show that our algorithm has clear advantages over state-of-the-art methods, both in terms of flexibility and reconstruction quality.
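The core of the spatially adaptive reconstruction can be illustrated with a zeroth-order local polynomial fit in one dimension. The paper fits local polynomials to raw sensor samples in 2D with a full noise model; the Gaussian kernel and bandwidth `h` below are illustrative assumptions, not the published parameters.

```python
import math

def local_fit(positions, values, variances, x0, h=1.0):
    """Zeroth-order local polynomial fit: estimate radiance at x0 from
    irregularly placed samples.  Each sample is weighted by a Gaussian
    spatial kernel of bandwidth h and by the inverse of its noise
    variance -- a localized maximum-likelihood estimate under Gaussian
    noise, which for order zero reduces to a weighted mean."""
    num = den = 0.0
    for x, v, s2 in zip(positions, values, variances):
        w = math.exp(-0.5 * ((x - x0) / h) ** 2) / s2
        num += w * v
        den += w
    return num / den
```

Because the fit is evaluated at arbitrary positions `x0`, the same machinery handles debayering, resampling to any output resolution, and denoising in one operation, which is the unification the abstract describes.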
Signal Processing: Image Communication | 2014
Joel Kronander; Stefan Gustavson; Gerhard Bonnet; Anders Ynnerman; Jonas Unger
One of the most successful approaches to modern high quality HDR-video capture is to use camera setups with multiple sensors imaging the scene through a common optical system. However, such systems pose several challenges for HDR reconstruction algorithms. Previous reconstruction techniques have considered debayering, denoising, resampling (alignment) and exposure fusion as separate problems. In contrast, in this paper we present a unifying approach, performing HDR assembly directly from raw sensor data. Our framework includes a camera noise model adapted to HDR video and an algorithm for spatially adaptive HDR reconstruction based on fitting of local polynomial approximations to observed sensor data. The method is easy to implement and allows reconstruction to an arbitrary resolution and output mapping. We present an implementation in CUDA and show real-time performance for an experimental 4 Mpixel multi-sensor HDR video system. We further show that our algorithm has clear advantages over existing methods, both in terms of flexibility and reconstruction quality.
Eurographics | 2004
Jonas Unger; Stefan Gustavson; Mark Ollila; Mattias Johannesson
We present a novel system capable of capturing high dynamic range (HDR) Light Probes at video speed. Each Light Probe frame is built from an individual full set of exposures, all of which are captured within the frame time. The exposures are processed and assembled into a mantissa-exponent representation image within the camera unit before output, and then streamed to a standard PC. As an example, the system is capable of capturing Light Probe Images with a resolution of 512x512 pixels using a set of 10 exposures covering 15 f-stops at a frame rate of up to 25 final HDR frames per second. The system is built around commercial special-purpose camera hardware with on-chip programmable image processing logic and tightly integrated frame buffer memory, and the algorithm is implemented as custom downloadable microcode software.
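The abstract does not specify the exact mantissa-exponent format used inside the camera. A well-known shared-exponent scheme of this kind is Ward's RGBE encoding, sketched here for a single pixel as an illustrative stand-in: each channel gets an 8-bit mantissa and all three share one exponent byte.

```python
import math

def encode_rgbe(r, g, b):
    """Encode linear RGB into 8-bit mantissas plus a shared exponent byte
    (Ward's RGBE convention; the camera's actual format may differ)."""
    m = max(r, g, b)
    if m < 1e-32:
        return (0, 0, 0, 0)
    mant, exp = math.frexp(m)        # m = mant * 2**exp, mant in [0.5, 1)
    scale = mant * 256.0 / m         # maps the largest channel into [128, 256)
    return (int(r * scale), int(g * scale), int(b * scale), exp + 128)

def decode_rgbe(rm, gm, bm, e):
    """Recover linear RGB from an RGBE tuple."""
    if e == 0:
        return (0.0, 0.0, 0.0)
    f = math.ldexp(1.0, e - 128 - 8)  # 2**(exp - 8): undo mantissa scaling
    return (rm * f, gm * f, bm * f)
```

The appeal for a streaming camera is that a pixel spanning 15 f-stops fits in 4 bytes, at the cost of mantissa precision in the two dimmer channels.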
Eurographics | 2008
Jonas Unger; Stefan Gustavson; Per Larsson; Anders Ynnerman
This paper presents methods for photo-realistic rendering using strongly spatially variant illumination captured from real scenes. The illumination is captured along arbitrary paths in space using a high dynamic range (HDR) video camera system with position tracking. Light samples are rearranged into 4D incident light fields (ILF) suitable for direct use as illumination in renderings. Analysis of the captured data allows for estimation of the shape, position and spatial and angular properties of light sources in the scene. The estimated light sources can be extracted from the large 4D data set and handled separately to render scenes more efficiently and with higher quality. The ILF lighting can also be edited for detailed artistic control.
Pattern Recognition Letters | 2011
Stefan Gustavson; Robin Strand
We present a modified distance measure for use with distance transforms of anti-aliased, area sampled grayscale images of arbitrary binary contours. The modified measure can be used in any vector-propagation Euclidean distance transform. Our test implementation in the traditional SSED8 algorithm shows a considerable improvement in accuracy and homogeneity of the distance field compared to a traditional binary image transform. At the expense of a 10x slowdown for a particular image resolution, we achieve an accuracy comparable to a binary transform on a supersampled image with 16x16 higher resolution, which would require 256 times more computations and memory.
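The key idea of the modified measure is that a pixel's grayscale coverage value, together with the local gradient direction, determines a sub-pixel signed distance to the contour. The geometric sketch below assumes a locally straight edge crossing a unit-square pixel with known unit normal `(gx, gy)`; the interface and variable names are illustrative, not the authors' published code.

```python
import math

def edge_distance(a, gx, gy):
    """Signed distance from a pixel center to a straight edge with unit
    normal (gx, gy), given the pixel's area coverage a in [0, 1].
    a = 0.5 puts the edge through the pixel center (distance 0)."""
    gx, gy = abs(gx), abs(gy)
    if gx < gy:
        gx, gy = gy, gx            # reduce to the octant gx >= gy >= 0
    if gx == 0.0:
        return 0.5 - a             # degenerate gradient: simple step edge
    a1 = 0.5 * gy / gx             # coverage at which the edge clips a corner
    if a < a1:                     # edge cuts off a corner triangle
        return 0.5 * (gx + gy) - math.sqrt(2.0 * gx * gy * a)
    if a < 1.0 - a1:               # edge crosses two opposite pixel sides
        return (0.5 - a) * gx
    return -0.5 * (gx + gy) + math.sqrt(2.0 * gx * gy * (1.0 - a))
```

Feeding these sub-pixel distances into a vector-propagation transform such as SSED8, instead of the 0-or-0.5 values a binary image would give, is what yields the accuracy gain the abstract reports.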
Computers & Graphics | 2013
Jonas Unger; Joel Kronander; Per Larsson; Stefan Gustavson; Joakim Löw; Anders Ynnerman
In this paper we present novel algorithms and data structures for capturing, processing and rendering with real world lighting conditions based on high dynamic range video sequences. Based on the captured HDR video data we show how traditional image based lighting can be extended to include illumination variations in both the temporal as well as the spatial domain. This enables highly realistic renderings where traditional IBL techniques using a single light probe fail to capture important details in the real world lighting environment. To demonstrate the usefulness of our approach, we show examples of both off-line and real-time rendering applications.
International Conference on Computer Graphics and Interactive Techniques | 2012
Joel Kronander; Stefan Gustavson; Jonas Unger
HDR video is an emerging field of technology, with a few camera systems currently in existence [Myszkowski et al. 2008]. Multi-sensor systems [Tocci et al. 2011] have recently proved to be particularly promising due to superior robustness against temporal artifacts, correct motion blur, and high light efficiency. Previous HDR reconstruction methods for multi-sensor systems have assumed pixel-perfect alignment of the physical sensors. This is, however, very difficult to achieve in practice. It may even be the case that reflections in beam splitters make it impossible to match the arrangement of the Bayer filters between sensors. We therefore present a novel reconstruction method specifically designed to handle the case of non-negligible misalignments between the sensors. Furthermore, while previous reconstruction techniques have considered HDR assembly, debayering and denoising as separate problems, our method is capable of simultaneous HDR assembly, debayering and smoothing of the data (denoising). The method is also general in that it allows reconstruction to an arbitrary output resolution and mapping. The algorithm is implemented in CUDA, and shows video-speed performance for an experimental HDR video platform consisting of four 2336x1756-pixel high-quality CCD sensors imaging the scene through a common optical system. ND filters of different densities are placed in front of the sensors to capture a dynamic range of 24 f-stops.
Journal of Graphics Tools | 2012
Ian McEwan; David Sheets; Stefan Gustavson; Mark Richardson
We present GLSL implementations of Perlin noise and Perlin simplex noise that run fast enough for practical consideration on current generation GPU hardware. The key benefits are that the functions are purely computational (i.e., they use neither textures nor lookup tables) and that they are implemented in GLSL version 1.20, which means they are compatible with all current GLSL-capable platforms, including OpenGL ES 2.0 and WebGL 1.0. Their performance is on par with previously presented GPU implementations of noise, they are very convenient to use, and they scale well with increasing parallelism in present and upcoming GPU architectures.
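The defining trick that makes these noise functions purely computational is replacing the classical permutation table with a permutation polynomial evaluated in shader arithmetic; the polynomial `((34x + 1) x) mod 289` appears in the published GLSL sources. Below is a minimal Python transcription of that idea; the 1-D gradient-noise wrapper is only an illustrative assumption, not the paper's 2-D/3-D simplex noise.

```python
import math

def permute(x):
    """Permutation polynomial hash used in place of a lookup table:
    ((34x + 1) x) mod 289."""
    return ((34.0 * x + 1.0) * x) % 289.0

def hash_grad(i):
    """Map a hashed lattice index to a pseudo-random gradient in [-1, 1)."""
    return (permute(i % 289.0) / 289.0) * 2.0 - 1.0

def gnoise1(x):
    """Minimal 1-D gradient noise built on the same principle:
    purely computational hashing, no tables or textures."""
    i = math.floor(x)
    f = x - i
    g0, g1 = hash_grad(i), hash_grad(i + 1.0)
    t = f * f * f * (f * (f * 6.0 - 15.0) + 10.0)   # quintic fade curve
    return (1.0 - t) * (g0 * f) + t * (g1 * (f - 1.0))
```

The modulus 289 = 17^2 keeps every intermediate value small enough to survive the limited float precision mandated by GLSL 1.20-class hardware, which is why the functions run on OpenGL ES 2.0 and WebGL 1.0.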
International Conference on Computer Graphics and Interactive Techniques | 2011
Jonas Unger; Stefan Gustavson; Joel Kronander; Per Larsson; Gerhard Bonnet; Gunnar Kaiser
We present an overview of our recently developed systems pipeline for capture, reconstruction, modeling and rendering of real world scenes based on state-of-the-art high dynamic range video (HDRV). The reconstructed scene representation allows for photo-realistic Image Based Lighting (IBL) in complex environments with strong spatial variations in the illumination. The pipeline comprises the following essential steps:
1.) Capture - The scene capture is based on a 4 Mpixel global-shutter HDRV camera with a dynamic range of more than 24 f-stops at 30 fps. The HDR output stream is stored as individual uncompressed frames for maximum flexibility. A scene is usually captured using a combination of panoramic light probe sequences [1] and sequences with a smaller field of view to maximize the resolution at regions of special interest in the scene. The panoramic sequences ensure full angular coverage at each position and guarantee that the information required for IBL is captured. The position and orientation of the camera are tracked during capture.
2.) Scene recovery - Taking one or more HDRV sequences as input, a geometric proxy model of the scene is built using a semi-automatic approach. First, traditional computer vision algorithms such as structure from motion [2] and Manhattan-world stereo [3] are used. If necessary, the recovered model is then modified using an interaction scheme based on visualizations of a volumetric representation of the scene radiance computed from the input HDRV sequence. The HDR nature of this volume also enables robust extraction of direct light sources and other high-intensity regions in the scene.
3.) Radiance processing - When the scene proxy geometry has been recovered, the radiance data captured in the HDRV sequences are re-projected onto the surfaces and the recovered light sources. Since most surface points have been imaged from a large number of directions, it is possible to reconstruct view-dependent texture maps at the proxy geometries. These 4D data sets describe a combination of detailed geometry that has not been recovered and the radiance reflected from the underlying real surfaces. The view-dependent textures are then processed and compactly stored in an adaptive data structure.
4.) Rendering - Once the geometric and radiometric scene information has been recovered, it is possible to place virtual objects into the real scene and create photo-realistic renderings as illustrated above. The extracted light sources enable efficient sampling and rendering times fully comparable to those of traditional virtual computer graphics light sources.
No previously described method is capable of capturing and reproducing the angular and spatial variation in the scene illumination in comparable detail. We believe that the rapid development of high quality HDRV systems will soon have a large impact on both computer vision and graphics. Following this trend, we are developing theory and algorithms for efficient processing of HDRV sequences, making use of the abundance of radiance data that will become available.