Clemens Birklbauer
Johannes Kepler University of Linz
Publications
Featured research published by Clemens Birklbauer.
Computer Graphics Forum | 2013
Clemens Birklbauer; Simon Opelt; Oliver Bimber
We present a caching framework with a novel probability‐based prefetching and eviction strategy applied to atomic cache units that enables interactive rendering of gigaray light fields. Further, we describe two new use cases that are supported by our framework: panoramic light fields, including a robust imaging technique and an appropriate parameterization scheme for real‐time rendering and caching; and light‐field‐cached volume rendering, which supports interactive exploration of large volumetric datasets using light‐field rendering. We consider applications such as light‐field photography and the visualization of large image stacks from modern scanning microscopes.
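The abstract does not disclose the caching internals, but the core idea of probability-driven prefetching and eviction over atomic cache units can be sketched roughly as follows. The class and callbacks (`LightFieldTileCache`, `load_tile`, `predict_prob`) are hypothetical placeholders for illustration, not the authors' implementation:

```python
import heapq

class LightFieldTileCache:
    """Minimal sketch of a probability-based tile cache (hypothetical API).

    Each atomic cache unit is a light-field tile addressed by a key.
    Tiles predicted to be needed soon (high access probability) are
    prefetched; when the cache is full, the tile with the lowest
    predicted probability is evicted.
    """

    def __init__(self, capacity, load_tile, predict_prob):
        self.capacity = capacity          # max number of resident tiles
        self.load_tile = load_tile        # callable: key -> tile data (e.g. disk read / GPU upload)
        self.predict_prob = predict_prob  # callable: key -> estimated access probability
        self.tiles = {}                   # key -> tile data

    def _evict_least_probable(self):
        victim = min(self.tiles, key=self.predict_prob)
        del self.tiles[victim]

    def fetch(self, key):
        """Return a tile, loading it on demand."""
        if key not in self.tiles:
            if len(self.tiles) >= self.capacity:
                self._evict_least_probable()
            self.tiles[key] = self.load_tile(key)
        return self.tiles[key]

    def prefetch(self, candidate_keys, budget):
        """Speculatively load the `budget` most probable missing tiles."""
        missing = [k for k in candidate_keys if k not in self.tiles]
        for key in heapq.nlargest(budget, missing, key=self.predict_prob):
            self.fetch(key)
```

Driving both prefetching and eviction from the same probability estimate keeps the resident set aligned with where the viewer is likely to look next, which is what makes interactive rates on out-of-core gigaray data plausible.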
Computer Graphics Forum | 2012
Clemens Birklbauer; Oliver Bimber
We present a first approach to light‐field retargeting using z‐stack seam carving, which allows light‐field compression and extension while retaining angular consistency. Our algorithm first converts an input light field into a set of perspective‐sheared focal stacks. It then applies 3D deconvolution to convert the focal stacks into z‐stacks, and seam‐carves the z‐stack of the center perspective. The computed seams of the center perspective are sheared and applied to the z‐stacks of all off‐center perspectives. Finally, the carved z‐stacks are converted back into the perspective images of the output light field. To our knowledge, this is the first approach to light‐field retargeting. Unlike existing stereo‐pair retargeting or 3D retargeting techniques, it does not require depth information.
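The stage-by-stage structure of the pipeline can be illustrated with a heavily simplified sketch: classic dynamic-programming seam carving is computed on the center view and the same seam is removed from every view. The 3D deconvolution and the perspective shearing of the seams are omitted here, so this is a structural illustration only, not the published algorithm:

```python
import numpy as np

def vertical_seam(energy):
    """Dynamic-programming minimum-energy vertical seam (classic seam carving)."""
    h, w = energy.shape
    cost = energy.astype(float).copy()
    for y in range(1, h):
        left  = np.r_[np.inf, cost[y - 1, :-1]]   # up-left neighbor
        right = np.r_[cost[y - 1, 1:], np.inf]    # up-right neighbor
        cost[y] += np.minimum(np.minimum(left, cost[y - 1]), right)
    seam = np.empty(h, dtype=int)
    seam[-1] = int(np.argmin(cost[-1]))
    for y in range(h - 2, -1, -1):                # backtrack the cheapest path
        x = seam[y + 1]
        lo, hi = max(x - 1, 0), min(x + 2, w)
        seam[y] = lo + int(np.argmin(cost[y, lo:hi]))
    return seam

def carve_seam(img, seam):
    """Remove one pixel per row along the seam (grayscale image)."""
    h, w = img.shape
    mask = np.ones((h, w), dtype=bool)
    mask[np.arange(h), seam] = False
    return img[mask].reshape(h, w - 1)

def retarget_views(views, n_seams):
    """Carve seams computed on the center view from all views.

    `views` is a list of 2D perspective images; applying the seams
    unsheared keeps the sketch short but sacrifices the angular
    consistency the actual shearing step provides.
    """
    views = [v.astype(float) for v in views]
    center = len(views) // 2
    for _ in range(n_seams):
        gy, gx = np.gradient(views[center])
        seam = vertical_seam(np.abs(gx) + np.abs(gy))
        views = [carve_seam(v, seam) for v in views]
    return views
```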
Optics Express | 2014
Alexander Koppelhuber; Clemens Birklbauer; Shahram Izadi; Oliver Bimber
We present a fully transparent and flexible light-sensing film that, based on a single thin-film luminescent concentrator layer, supports simultaneous multi-focal image reconstruction and depth estimation without additional optics. By sampling the two-dimensional light fields propagated inside the film layer under various focal conditions, an entire focal image stack can be computed from a single recording and then used for depth estimation. The transparency and flexibility of our sensor unlock the potential of lensless multilayer imaging and depth sensing with arbitrary sensor shapes, enabling novel human-computer interfaces.
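As a rough illustration of how a focal stack enables depth estimation, a minimal depth-from-focus baseline picks, per pixel, the stack slice with the highest local contrast. This is a textbook technique shown for intuition, not the reconstruction method of the paper:

```python
import numpy as np

def depth_from_focal_stack(stack, depths):
    """Minimal depth-from-focus sketch.

    `stack` is an (N, H, W) focal image stack and `depths` the focal
    distance of each slice.  Per pixel, the slice with the highest
    local contrast (a simple Laplacian focus measure) is taken as
    the depth estimate.
    """
    focus = np.empty(stack.shape, dtype=float)
    for i, img in enumerate(stack.astype(float)):
        # Discrete Laplacian as a cheap sharpness measure.
        lap = (np.roll(img, 1, 0) + np.roll(img, -1, 0) +
               np.roll(img, 1, 1) + np.roll(img, -1, 1) - 4 * img)
        focus[i] = np.abs(lap)
    best = np.argmax(focus, axis=0)      # index of sharpest slice per pixel
    return np.asarray(depths)[best]      # map slice index -> focal distance
```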
Optics Express | 2014
Alexander Koppelhuber; Sean Ryan Fanello; Clemens Birklbauer; David C. Schedl; Shahram Izadi; Oliver Bimber
LumiConSense, a transparent, flexible, scalable, and disposable thin-film image sensor, has the potential to lead to new human-computer interfaces that are unconstrained in shape and sensing distance. In this article we make four new contributions: (1) a new real-time image reconstruction method that significantly enhances image quality compared to previous approaches; (2) the efficient combination of image reconstruction with shift-invariant linear image processing operations; (3) various hardware and software prototypes that realize the above contributions and demonstrate the current potential of our sensor for real-time applications; and finally, (4) a further, higher-quality offline reconstruction algorithm.
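Contribution (2) rests on the observation that a linear reconstruction operator and a shift-invariant filter (a convolution) can be folded into a single matrix, so filtered images come directly out of one matrix-vector product. A minimal sketch under that assumption, with `R`, `kernel`, and `shape` as hypothetical placeholders rather than LumiConSense's actual operators:

```python
import numpy as np

def fuse_reconstruction_and_filter(R, kernel, shape):
    """Fold a shift-invariant filter into a linear reconstruction (sketch).

    R      : (h*w, m) reconstruction matrix, measurements -> image pixels.
    kernel : small 2D filter kernel.
    shape  : (h, w) of the reconstructed image.
    Returns the fused (h*w, m) operator: measurements -> filtered image.
    """
    h, w = shape
    kh, kw = kernel.shape
    # Build the filtering matrix F row by row (dense here for brevity;
    # in practice this matrix is sparse).  This applies the kernel as a
    # cross-correlation, which equals convolution for symmetric kernels.
    F = np.zeros((h * w, h * w))
    for y in range(h):
        for x in range(w):
            for dy in range(kh):
                for dx in range(kw):
                    yy, xx = y + dy - kh // 2, x + dx - kw // 2
                    if 0 <= yy < h and 0 <= xx < w:
                        F[y * w + x, yy * w + xx] = kernel[dy, dx]
    return F @ F.T @ R if False else F @ R  # single fused operator
```

Precomputing the fused operator means the per-frame cost is unchanged no matter how many linear processing steps are chained behind the reconstruction.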
International Conference on Computational Photography | 2015
David C. Schedl; Clemens Birklbauer; Oliver Bimber
We present a simple guided super-resolution technique for increasing directional resolution without reliance on depth estimation or image correspondences. Rather, it searches for best-matching multidimensional (4D or 3D) patches within the entire captured data set to compose new directional images that are consistent in both the spatial and the directional domains. We describe algorithms for guided upsampling, iterative guided upsampling, and sampling code estimation. Our experimental results reveal that the outcomes of existing light-field camera arrays and lightstage systems can be improved without additional hardware requirements or recording effort simply by realignment of cameras or light sources to change their sampling patterns.
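To convey the flavor of guidance by patch matching, the toy sketch below assembles a new view patch by patch from the best-matching patches found in the captured views. It uses 2D patches and brute-force SSD search instead of the paper's 4D/3D patches, so it is illustrative only, not the published algorithm:

```python
import numpy as np

def best_matching_patch(target, candidates):
    """Return the candidate patch with minimal sum-of-squared differences."""
    errs = [np.sum((c - target) ** 2) for c in candidates]
    return candidates[int(np.argmin(errs))]

def compose_directional_image(guide, captured_views, patch=8):
    """Toy guided view synthesis: replace each guide patch by its best
    match from the densely captured views, so the composed output stays
    consistent with measured data instead of hallucinated content."""
    h, w = guide.shape
    out = np.zeros((h, w))
    # Gather all candidate patches once (brute force; a real system
    # would index these, e.g. with an approximate nearest-neighbor tree).
    candidates = [v[y:y + patch, x:x + patch].astype(float)
                  for v in captured_views
                  for y in range(0, v.shape[0] - patch + 1, patch)
                  for x in range(0, v.shape[1] - patch + 1, patch)]
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            out[y:y + patch, x:x + patch] = best_matching_patch(
                guide[y:y + patch, x:x + patch].astype(float), candidates)
    return out
```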
Computer Graphics Forum | 2014
David C. Schedl; Clemens Birklbauer; Oliver Bimber
Capturing exposure sequences to compute high dynamic range (HDR) images causes motion blur in cases of camera movement. This also applies to light‐field cameras: frames rendered from multiple blurred HDR light‐field perspectives are also blurred. While the recording times of exposure sequences cannot be reduced for a single‐sensor camera, we demonstrate how this can be achieved for a camera array. Thus, we decrease capturing time and reduce motion blur for HDR light‐field video recording. Applying a spatio‐temporal exposure pattern while capturing frames with a camera array reduces the overall recording time and enables the estimation of camera movement within one light‐field video frame. By estimating depth maps and local point spread functions (PSFs) from multiple perspectives with the same exposure, regional motion deblurring can be supported. Missing exposures at various perspectives are then interpolated.
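One way to picture a spatio-temporal exposure pattern is a phase-shifted exposure cycle across the camera array, so that every time step covers the full exposure set somewhere in the array. The layout below is an illustrative assumption, not necessarily the pattern used in the paper:

```python
import numpy as np

def exposure_schedule(n_cameras, exposures, n_frames):
    """Phase-shifted exposure cycling across a camera array (sketch).

    At every time step each exposure is present at some perspective,
    so a full HDR set never has to be captured sequentially by a
    single camera, which is what shortens the effective recording
    time per light-field video frame.
    """
    k = len(exposures)
    schedule = np.empty((n_frames, n_cameras), dtype=float)
    for t in range(n_frames):
        for c in range(n_cameras):
            schedule[t, c] = exposures[(t + c) % k]  # per-camera phase offset
    return schedule

# Example: 4 cameras, 3 exposures -> every frame covers all exposures.
print(exposure_schedule(4, [1/1000, 1/250, 1/60], 3))
```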
International Conference on Computer Graphics and Interactive Techniques | 2012
Clemens Birklbauer; Oliver Bimber
We present a first approach to panorama light-field imaging. By converting overlapping sub-light-fields into individual focal stacks, computing a panoramic focal stack from them, and converting the panoramic focal stack back into a panoramic light field, we avoid the need for a precise reconstruction of scene depth.
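A heavily simplified sketch of the first two stages, shift-and-add refocusing into focal stacks and a naive slice-wise stitch, is shown below. Registration, proper blending, and the conversion of the panoramic focal stack back into a light field (the inverse of the refocusing step) are omitted, and all function names are hypothetical:

```python
import numpy as np

def focal_stack(light_field, shifts):
    """Shift-and-add refocusing: one slice per focal shift.

    `light_field` is a (U, H, W) horizontal-parallax view stack;
    shifting each view by (u - center) * shift before averaging
    focuses the slice at the corresponding depth.
    """
    U = light_field.shape[0]
    return np.stack([
        np.mean([np.roll(light_field[u].astype(float),
                         int(round((u - U // 2) * s)), axis=1)
                 for u in range(U)], axis=0)
        for s in shifts])

def stitch_stacks(stack_a, stack_b, overlap):
    """Naive panoramic stitch: average the overlapping columns slice by
    slice.  (The actual method registers and blends robustly; this is
    only a placeholder to show where stitching happens per slice.)"""
    blend = 0.5 * (stack_a[..., -overlap:] + stack_b[..., :overlap])
    return np.concatenate([stack_a[..., :-overlap], blend,
                           stack_b[..., overlap:]], axis=-1)
```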
International Conference on Computer Vision and Graphics | 2016
David C. Schedl; Clemens Birklbauer; Johann Gschnaller; Oliver Bimber
Typical light-field rendering uses a single focal plane to define the depth at which objects should appear sharp. This emulates the behavior of classical cameras. However, plenoptic cameras together with advanced light-field rendering enable depth-of-field effects that go far beyond the capabilities of conventional imaging. We present a generalized depth-of-field light-field rendering method that allows arbitrarily shaped objects to be all in focus while the surrounding fore- and background is consistently rendered out of focus based on user-defined focal plane and aperture settings. Our approach generates soft occlusion boundaries with a natural appearance which is not possible with existing techniques. It furthermore does not rely on dense depth estimation and thus allows presenting complex scenes with non-physical visual effects.
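A mask-driven compositing step conveys the basic idea: blend an all-in-focus rendering with a conventionally defocused one under a feathered object mask, which also produces the soft occlusion boundaries mentioned above. This is only a plausible sketch of the effect, not the authors' rendering method:

```python
import numpy as np

def generalized_dof(all_in_focus, defocused, object_mask, feather=5):
    """Mask-driven depth-of-field compositing (sketch, grayscale images).

    `all_in_focus` renders everything sharp, `defocused` uses the
    user-defined focal plane and aperture, and `object_mask` selects
    the arbitrarily shaped objects that should stay in focus.
    """
    m = object_mask.astype(float)
    kernel = np.ones(feather) / feather
    # Cheap separable box blur to feather the mask edges, so the
    # sharp/blurred transition is soft rather than a hard cut-out.
    for axis in (0, 1):
        m = np.apply_along_axis(
            lambda r: np.convolve(r, kernel, mode="same"), axis, m)
    return m * all_in_focus + (1.0 - m) * defocused
```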
Computers & Graphics | 2015
Clemens Birklbauer; Oliver Bimber
We present a novel approach for guided light-field photography using off-the-shelf smartphones. In contrast to previous work that requires the user to decide where next to position a mobile camera, we actively compute and visualize at runtime a recommendation for the next sampling position and orientation, taking into account the current camera pose and the required camera alignments. This supports efficient capture of various types of large-field-of-view light fields in a matter of minutes and without specialized camera equipment. To further reduce the overall capture time, we describe an extension of our guidance algorithm to collaborative light-field photography by small groups of users.
Highlights:
- We propose a novel active guidance algorithm for capturing light fields.
- The method actively directs users successively towards recommended sampling poses.
- A variety of light-field types with a large field of view are supported.
- The light-field parameterization is computed from scene and camera properties.
- An extension of the guidance algorithm allows collaborative light-field photography.
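A greedy nearest-remaining-pose rule is one plausible, purely illustrative way such a recommendation could be computed; the actual guidance algorithm also accounts for orientation and the required camera alignments:

```python
import numpy as np

def next_sampling_pose(current_pos, planned_positions, visited):
    """Recommend the nearest not-yet-captured sampling position (sketch).

    current_pos       : (3,) current camera position.
    planned_positions : (N, 3) target positions on the sampling grid.
    visited           : set of indices already captured.
    Greedy nearest-first minimizes user movement between samples, one
    simple heuristic for an active guidance loop.
    """
    remaining = [i for i in range(len(planned_positions)) if i not in visited]
    if not remaining:
        return None  # capture complete
    dists = np.linalg.norm(planned_positions[remaining] - current_pos, axis=1)
    return planned_positions[remaining[int(np.argmin(dists))]]
```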
International Conference on Computer Graphics and Interactive Techniques | 2014
Alexander Koppelhuber; Philipp Wintersberger; Clemens Birklbauer; Oliver Bimber
Our sensor [Koppelhuber and Bimber 2013] consists of a thin, transparent polycarbonate film, referred to as a luminescent concentrator (LC), that is doped with fluorescent dyes. Light within a particular wavelength sub-band that penetrates the film is re-emitted at longer wavelengths, while wavelengths outside the sub-band are fully transmitted. The example shown in figure 1(a) absorbs blue and emits green light. The emitted light is mostly trapped inside the film by total internal reflection (TIR) and is transported with reduced multi-scattering towards the LC edges, losing energy over transport distance. The bright film edges indicate the decoupling of the light integral transported to each edge point from all directions inside the LC.
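The described energy loss over transport distance suggests a simple toy model in which every re-emitting point inside the film contributes to an edge point with exponential attenuation over travel distance. The attenuation coefficient below is an arbitrary illustrative value, not a measured property of the film:

```python
import numpy as np

def edge_intensity(edge_points, emitters, attenuation=0.05):
    """Toy model of light transport inside the luminescent concentrator.

    Each emitter (a point where incoming light was absorbed and
    re-emitted) contributes to every edge point, attenuated
    exponentially with travel distance; the sum approximates the
    light integral arriving at each edge point from all directions.
    """
    emitters = np.asarray(emitters, dtype=float)
    intensities = []
    for p in np.asarray(edge_points, dtype=float):
        d = np.linalg.norm(emitters - p, axis=1)
        intensities.append(np.sum(np.exp(-attenuation * d)))
    return np.array(intensities)
```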