Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where David C. Schedl is active.

Publication


Featured research published by David C. Schedl.


Optics Express | 2014

Enhanced learning-based imaging with thin-film luminescent concentrators

Alexander Koppelhuber; Sean Ryan Fanello; Clemens Birklbauer; David C. Schedl; Shahram Izadi; Oliver Bimber

LumiConSense, a transparent, flexible, scalable, and disposable thin-film image sensor, has the potential to lead to new human-computer interfaces that are unconstrained in shape and sensing distance. In this article, we make four new contributions: (1) a new real-time image reconstruction method that significantly enhances image quality compared to previous approaches; (2) the efficient combination of image reconstruction and shift-invariant linear image processing operations; (3) various hardware and software prototypes that realize the above contributions and demonstrate the current potential of our sensor for real-time applications; and finally, (4) a further, higher-quality offline reconstruction algorithm.
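Since both the reconstruction (1) and the shift-invariant image processing (2) are linear, they can be composed into a single operator once, offline. A minimal numpy sketch of that idea; the calibration matrix T, the resolutions, and the Tikhonov-regularized inverse are illustrative stand-ins, not the authors' implementation:

```python
# Minimal sketch, assuming a calibrated light-transport matrix T that maps
# an image to edge-sensor measurements (measurements = T @ image). T, the
# resolutions, and the regularization are illustrative, not the authors' code.
import numpy as np

H = W = 32                    # reconstructed image resolution (assumed)
M = 256                       # number of edge-sensor measurements (assumed)
rng = np.random.default_rng(0)
T = rng.standard_normal((M, H * W))   # stand-in for the calibration matrix

# Offline: Tikhonov-regularized reconstruction operator R ~ T^+.
lam = 1e-2
R = np.linalg.solve(T.T @ T + lam * np.eye(H * W), T.T)

def filter_matrix(kernel, n):
    """Toy matrix form of a 1D shift-invariant filter on an n-sample signal."""
    F = np.zeros((n, n))
    r = len(kernel) // 2
    for i in range(n):
        for k, w in enumerate(kernel):
            F[i, min(max(i + k - r, 0), n - 1)] += w
    return F

# Fold a separable blur into R once; filtering then costs nothing at run time.
blur = [0.25, 0.5, 0.25]
F = np.kron(filter_matrix(blur, H), filter_matrix(blur, W))
R_filtered = F @ R

# Online: reconstruction + filtering is a single matrix-vector product.
measurements = rng.standard_normal(M)
image = (R_filtered @ measurements).reshape(H, W)
```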


International Conference on Computational Photography | 2015

Directional Super-Resolution by Means of Coded Sampling and Guided Upsampling

David C. Schedl; Clemens Birklbauer; Oliver Bimber

We present a simple guided super-resolution technique for increasing directional resolution without reliance on depth estimation or image correspondences. Rather, it searches for best-matching multidimensional (4D or 3D) patches within the entire captured data set to compose new directional images that are consistent in both the spatial and the directional domains. We describe algorithms for guided upsampling, iterative guided upsampling, and sampling code estimation. Our experimental results reveal that the outcomes of existing light-field camera arrays and lightstage systems can be improved without additional hardware requirements or recording effort simply by realignment of cameras or light sources to change their sampling patterns.
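At its core, guided upsampling is a best-matching patch search over the entire captured data set. A toy sketch of that search, simplified to 2D spatial patches (the paper matches multidimensional 4D/3D patches) with SSD as an assumed match criterion:

```python
# Toy sketch: compose a new directional image from the captured patches that
# best match a cheap guide estimate. Patch size, stride, and SSD matching are
# illustrative; the paper matches multidimensional (4D/3D) patches.
import numpy as np

def patches(img, p, stride):
    H, W = img.shape
    return [((y, x), img[y:y + p, x:x + p])
            for y in range(0, H - p + 1, stride)
            for x in range(0, W - p + 1, stride)]

def guided_upsample(captured, guide, p=8, stride=8):
    """Fill each guide patch with its best SSD match from the whole data set."""
    bank = np.stack([patch for img in captured
                     for _, patch in patches(img, p, stride)])
    out = np.zeros_like(guide)
    for (y, x), g in patches(guide, p, stride):
        ssd = ((bank - g) ** 2).sum(axis=(1, 2))
        out[y:y + p, x:x + p] = bank[np.argmin(ssd)]
    return out

rng = np.random.default_rng(1)
views = [rng.random((64, 64)) for _ in range(4)]  # sparse directional images
guide = 0.5 * (views[0] + views[1])               # naive in-between estimate
novel = guided_upsample(views, guide)             # composed from real patches
```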


Computer Graphics Forum | 2014

Coded exposure HDR light-field video recording

David C. Schedl; Clemens Birklbauer; Oliver Bimber

Capturing exposure sequences to compute high dynamic range (HDR) images causes motion blur in cases of camera movement. This also applies to light-field cameras: frames rendered from multiple blurred HDR light-field perspectives are also blurred. While the recording times of exposure sequences cannot be reduced for a single-sensor camera, we demonstrate how this can be achieved for a camera array. Thus, we decrease capturing time and reduce motion blur for HDR light-field video recording. Applying a spatio-temporal exposure pattern while capturing frames with a camera array reduces the overall recording time and enables the estimation of camera movement within one light-field video frame. By estimating depth maps and local point spread functions (PSFs) from multiple perspectives with the same exposure, regional motion deblurring can be supported. Missing exposures at various perspectives are then interpolated.
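A minimal sketch of what such a spatio-temporal exposure pattern could look like; the 3x3 array layout, the three-exposure bracket, and the Latin-square-style cycling are illustrative assumptions rather than the paper's actual pattern:

```python
# Illustrative spatio-temporal exposure pattern for a camera array: each
# camera records one exposure per time step, cycled so that every exposure
# is present somewhere in the array at all times. Layout and bracket assumed.
import numpy as np

exposures_ms = np.array([1.0, 4.0, 16.0])   # assumed HDR exposure bracket
rows = cols = 3                             # assumed array layout
n = len(exposures_ms)

def exposure_pattern(t):
    """Exposure index per camera at time step t (Latin-square-style cycling)."""
    cam = np.arange(rows * cols).reshape(rows, cols)
    return (cam + t) % n

for t in range(n):
    print(f"t={t}:\n{exposures_ms[exposure_pattern(t)]}")
# Each exposure is seen from several perspectives within one light-field
# frame, which is what enables per-exposure depth/PSF estimation and the
# interpolation of missing exposures at the remaining perspectives.
```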


Scientific Reports | 2016

Volumetric Light-Field Excitation

David C. Schedl; Oliver Bimber

We explain how to concentrate light simultaneously at multiple selected volumetric positions by means of a 4D illumination light field. First, to select target objects, a 4D imaging light field is captured. A light field mask is then computed automatically for this selection to avoid illumination of the remaining areas. With one-photon illumination, simultaneous generation of complex volumetric light patterns becomes possible. As a full light field can be captured and projected simultaneously at the desired exposure and excitation times, short readout and lighting durations are supported.
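A schematic sketch of the masking step, with a simple intensity threshold standing in for the actual target selection and toy dimensions for the 4D light field:

```python
# Schematic sketch: derive a binary 4D illumination mask from a captured
# imaging light field so that only selected targets receive light. The
# threshold selection and light-field dimensions are illustrative stand-ins.
import numpy as np

rng = np.random.default_rng(2)
L = rng.random((5, 5, 32, 32))   # imaging light field L(u, v, s, t)

def illumination_mask(lf, threshold=0.9):
    """1 along rays that see a selected (here: bright) target, 0 elsewhere."""
    return (lf > threshold).astype(np.float32)

mask = illumination_mask(L)
print("fraction of rays lit:", mask.mean())
# Projecting the masked 4D light field concentrates light simultaneously at
# the selected volumetric positions while the remaining areas stay dark.
```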


International Conference on Computer Vision and Graphics | 2016

Generalized Depth-of-Field Light-Field Rendering

David C. Schedl; Clemens Birklbauer; Johann Gschnaller; Oliver Bimber

Typical light-field rendering uses a single focal plane to define the depth at which objects should appear sharp. This emulates the behavior of classical cameras. However, plenoptic cameras together with advanced light-field rendering enable depth-of-field effects that go far beyond the capabilities of conventional imaging. We present a generalized depth-of-field light-field rendering method that allows arbitrarily shaped objects to be all in focus while the surrounding fore- and background is consistently rendered out of focus based on user-defined focal plane and aperture settings. Our approach generates soft occlusion boundaries with a natural appearance, which is not possible with existing techniques. Furthermore, it does not rely on dense depth estimation and thus allows complex scenes to be presented with non-physical visual effects.
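The following toy sketch conveys the principle with standard shift-and-add refocusing and a hand-drawn object mask; the real method needs neither dense depth nor such a crude blend, so every parameter here is an illustrative assumption:

```python
# Toy sketch: render the light field twice by shift-and-add refocusing and
# blend the results with a user-defined object mask, keeping the object in
# focus while its surroundings stay defocused. All parameters illustrative.
import numpy as np

def refocus(views, offsets, disparity):
    """Shift-and-add refocusing of a camera-array light field."""
    acc = np.zeros_like(views[0])
    for img, (du, dv) in zip(views, offsets):
        acc += np.roll(img, (round(dv * disparity), round(du * disparity)),
                       axis=(0, 1))
    return acc / len(views)

rng = np.random.default_rng(3)
offsets = [(u, v) for u in (-1, 0, 1) for v in (-1, 0, 1)]  # 3x3 array
views = [rng.random((64, 64)) for _ in offsets]

focused   = refocus(views, offsets, disparity=2.0)  # focal plane on the object
defocused = refocus(views, offsets, disparity=0.0)  # focal plane elsewhere

object_mask = np.zeros((64, 64))
object_mask[20:44, 20:44] = 1.0   # arbitrarily shaped selection (toy box here)

# A feathered mask edge yields the soft occlusion boundaries the method
# targets; the actual algorithm blends far more carefully than this.
rendered = object_mask * focused + (1.0 - object_mask) * defocused
```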


Journal of Imaging | 2018

Airborne Optical Sectioning

Indrajit Kurmi; David C. Schedl; Oliver Bimber

Drones are becoming increasingly popular for remote sensing of landscapes in archeology, cultural heritage, forestry, and other disciplines. They are more efficient than airplanes for capturing small areas of up to several hundred square meters. LiDAR (light detection and ranging) and photogrammetry have been applied together with drones to achieve 3D reconstruction. With airborne optical sectioning (AOS), we present a radically different approach that is based on an old idea: synthetic aperture imaging. Rather than measuring, computing, and rendering 3D point clouds or triangulated 3D meshes, we apply image-based rendering for 3D visualization. In contrast to photogrammetry, AOS does not suffer from inaccurate correspondence matches and long processing times. It is cheaper than LiDAR, delivers surface color information, and has the potential to achieve high sampling resolutions. AOS samples the optical signal of wide synthetic apertures (30–100 m diameter) with unstructured video images recorded from a low-cost camera drone to support optical sectioning by image integration. The wide aperture signal results in a shallow depth of field and consequently in a strong blur of out-of-focus occluders, while images of points in focus remain clearly visible. Shifting focus computationally towards the ground allows optical slicing through dense occluder structures (such as leaves, tree branches, and coniferous trees), and discovery and inspection of concealed artifacts on the surface.
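The integration step at the heart of AOS reduces to registering the recorded images with respect to a chosen focal plane and averaging them. A simplified numpy sketch, with registration reduced to pure translation from known camera positions (focal length and depths are toy values):

```python
# Simplified AOS integration: register every drone image to a chosen focal
# plane and average. Registration is reduced to pure translation from known
# camera positions; focal length and depths are toy values.
import numpy as np

def integrate(images, positions, plane_depth, focal_px=100.0):
    """Average images shifted as if focused at plane_depth (meters)."""
    acc = np.zeros_like(images[0])
    for img, (x, y) in zip(images, positions):
        dx = round(focal_px * x / plane_depth)   # parallax of an on-plane point
        dy = round(focal_px * y / plane_depth)
        acc += np.roll(img, (dy, dx), axis=(0, 1))
    return acc / len(images)

rng = np.random.default_rng(4)
positions = [(rng.uniform(-15, 15), rng.uniform(-15, 15)) for _ in range(30)]
images = [rng.random((128, 128)) for _ in positions]   # unstructured frames

ground = integrate(images, positions, plane_depth=35.0)  # focus on the surface
canopy = integrate(images, positions, plane_depth=25.0)  # focus within foliage
# Sweeping plane_depth computationally slices through the occluding volume:
# points on the focal plane align and stay sharp, occluders blur out.
```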


Scientific Reports | 2017

Compressive Volumetric Light-Field Excitation

David C. Schedl; Oliver Bimber

We explain how volumetric light-field excitation can be converted to a process that entirely avoids 3D reconstruction, deconvolution, and calibration of optical elements while taking scattering in the probe better into account. For spatially static probes, this is achieved by an efficient (one-time) light-transport sampling and light-field factorization. Individual probe particles (and arbitrary combinations thereof) can subsequently be excited in a dynamically controlled way while still supporting volumetric reconstruction of the entire probe in real time based on a single light-field recording.
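A rough sketch of the factorization idea, using standard nonnegative matrix factorization (Lee-Seung multiplicative updates) as an assumed stand-in for the paper's light-field factorization; all sizes and the rank are illustrative:

```python
# Rough sketch: factorize a sampled light-transport matrix nonnegatively,
# T ~= W @ H, and read the columns of W as per-component excitation patterns.
# Standard Lee-Seung NMF updates, rank, and sizes are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(5)
T = rng.random((200, 150))   # one-time light-transport sampling (stand-in)
k = 8                        # assumed number of probe components

W = rng.random((200, k))
H = rng.random((k, 150))
for _ in range(200):         # multiplicative updates keep factors nonnegative
    H *= (W.T @ T) / (W.T @ W @ H + 1e-9)
    W *= (T @ H.T) / (W @ H @ H.T + 1e-9)

# Exciting component j means projecting the j-th column of W as an
# illumination pattern; arbitrary combinations are nonnegative mixtures.
pattern_0 = W[:, 0]
print("relative error:", np.linalg.norm(T - W @ H) / np.linalg.norm(T))
```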


Computer Vision and Image Understanding | 2017

Optimized sampling for view interpolation in light fields using local dictionaries

David C. Schedl; Clemens Birklbauer; Oliver Bimber

We present an angular super-resolution method for light fields captured with a sparse camera array. Our method uses local dictionaries extracted from a sampling mask to upsample a sparse light field to a dense light field by applying compressed-sensing reconstruction. We derive optimal sampling masks by minimizing the coherence of representative global dictionaries. The desired output perspectives and the number of available cameras can be specified arbitrarily. We show that our method yields qualitative improvements compared to previous techniques.
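A toy sketch of mask selection by coherence minimization: candidate sampling masks are scored by the mutual coherence of the row-subsampled dictionary, and the lowest-coherence mask wins. The random search, dictionary, and sizes are illustrative stand-ins for the paper's optimization:

```python
# Toy sketch: score candidate sampling masks by the mutual coherence of the
# row-subsampled dictionary and keep the best. The random search, dictionary,
# and sizes stand in for the paper's coherence-minimizing optimization.
import numpy as np

def mutual_coherence(D):
    """Largest normalized inner product between distinct dictionary atoms."""
    Dn = D / (np.linalg.norm(D, axis=0, keepdims=True) + 1e-12)
    G = np.abs(Dn.T @ Dn)
    np.fill_diagonal(G, 0.0)
    return G.max()

rng = np.random.default_rng(6)
n_views, n_atoms, n_cams = 64, 256, 9
D = rng.standard_normal((n_views, n_atoms))  # global dictionary (stand-in)

best_mask, best_mu = None, np.inf
for _ in range(500):                         # random search over masks
    mask = rng.choice(n_views, size=n_cams, replace=False)
    mu = mutual_coherence(D[mask])
    if mu < best_mu:
        best_mask, best_mu = mask, mu
print("chosen views:", sorted(best_mask), "coherence:", round(best_mu, 3))
```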


ACM Transactions on Graphics | 2016

Nonuniform Spatial Deformation of Light Fields by Locally Linear Transformations

Clemens Birklbauer; David C. Schedl; Oliver Bimber


Eurographics | 2018

Optimized Sampling for View Interpolation in Light Fields with Overlapping Patches

David C. Schedl; Oliver Bimber

Collaboration


Dive into David C. Schedl's collaborations.

Top Co-Authors

Oliver Bimber, Johannes Kepler University of Linz
Clemens Birklbauer, Johannes Kepler University of Linz
Alexander Koppelhuber, Johannes Kepler University of Linz
Johann Gschnaller, Johannes Kepler University of Linz