
Publication


Featured research published by Felix Heide.


International Conference on Computer Graphics and Interactive Techniques | 2013

Low-budget transient imaging using photonic mixer devices

Felix Heide; Matthias B. Hullin; James Gregson; Wolfgang Heidrich

Transient imaging is an exciting new imaging modality that can be used to understand light propagation in complex environments and to capture and analyze scene properties such as the shape of hidden objects or the reflectance properties of surfaces. Unfortunately, research in transient imaging has so far been hindered by the high cost of the required instrumentation, as well as by the fragility and the difficulty of operating and calibrating devices such as femtosecond lasers and streak cameras. In this paper, we explore the use of photonic mixer devices (PMD), commonly used in inexpensive time-of-flight cameras, as alternative instrumentation for transient imaging. We obtain a sequence of differently modulated images with a PMD sensor, impose a model for local light/object interaction, and use an optimization procedure to infer transient images given the measurements and model. The resulting method produces transient images at a cost several orders of magnitude below that of existing methods, while simultaneously simplifying and speeding up the capture process.
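The inference step described above — correlating the scene's transient response against many differently modulated reference signals, then inverting that linear model — can be sketched in a few lines. The discretization, frequency range, and Tikhonov regularization below are illustrative assumptions, not the paper's actual solver:

```python
import numpy as np

# Hypothetical discretization: the transient response x(t) lives on nt time
# bins; each PMD measurement correlates x with a sinusoidal reference of a
# random modulation frequency and phase (one row of A per measurement).
rng = np.random.default_rng(0)
nt, nm = 64, 128                         # time bins, number of measurements
t = np.arange(nt)
freqs = rng.uniform(0.0, 0.5, nm)        # normalized modulation frequencies
phases = rng.uniform(0.0, 2 * np.pi, nm)
A = np.cos(2 * np.pi * freqs[:, None] * t[None, :] + phases[:, None])

# Ground-truth transient profile: a direct pulse plus a delayed bounce.
x_true = np.zeros(nt)
x_true[10], x_true[30] = 1.0, 0.4

b = A @ x_true                           # simulated PMD correlation measurements

# Recover the transient image by regularized least squares (a stand-in for
# the paper's model-based optimization).
lam = 1e-3
x_hat = np.linalg.solve(A.T @ A + lam * np.eye(nt), A.T @ b)
```

With more measurements than time bins and frequencies spread across the band, the linear system is well conditioned and the two pulses are recovered accurately.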


International Conference on Computer Graphics and Interactive Techniques | 2014

FlexISP: a flexible camera image processing framework

Felix Heide; Markus Steinberger; Yun-Ta Tsai; Mushfiqur Rouf; Dawid Pająk; Dikpal Reddy; Orazio Gallo; Jing Liu; Wolfgang Heidrich; Karen O. Egiazarian; Jan Kautz; Kari Pulli

Conventional pipelines for capturing, displaying, and storing images are usually defined as a series of cascaded modules, each responsible for addressing a particular problem. While this divide-and-conquer approach offers many benefits, it also introduces cumulative error, as each step in the pipeline only considers the output of the previous step, not the original sensor data. We propose an end-to-end system that is aware of the camera and image model and enforces natural-image priors, while jointly accounting for common image processing steps like demosaicking, denoising, deconvolution, and so forth, all directly in a given output representation (e.g., YUV, DCT). Our system is flexible, and we demonstrate it on regular Bayer images as well as images from custom sensors. In all cases, we achieve large improvements in image quality and signal reconstruction compared to state-of-the-art techniques. Finally, we show that our approach is capable of very efficiently handling high-resolution images, making even mobile implementations feasible.
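The cumulative-error point can be illustrated with a toy joint reconstruction: instead of interpolating missing sensor samples and then denoising in separate cascaded stages, a single objective combines the image-formation model (here a simple subsampling mask standing in for Bayer sampling) with a prior (plain quadratic smoothness here, rather than FlexISP's natural-image priors). All names and parameters are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 32
# A smooth-ish synthetic image (2D random walk, normalized).
x_true = np.cumsum(np.cumsum(rng.standard_normal((n, n)), 0), 1)
x_true /= np.abs(x_true).max()

mask = np.zeros((n, n))
mask[::2, ::2] = 1.0                       # keep one pixel in four (toy "sensor")
b = mask * (x_true + 0.01 * rng.standard_normal((n, n)))  # noisy samples

# Joint reconstruction by gradient descent on
#   0.5 || mask*x - b ||^2 + (lam/2) || grad x ||^2
# The data term uses the true image-formation model; the prior fills in the
# missing pixels, in one optimization rather than two cascaded modules.
lam, step = 0.2, 0.2
x = b.copy()
for _ in range(400):
    lap = (np.roll(x, 1, 0) + np.roll(x, -1, 0) +
           np.roll(x, 1, 1) + np.roll(x, -1, 1) - 4 * x)  # discrete Laplacian
    x = x - step * (mask * (x - b) - lam * lap)
```

The jointly reconstructed image has a much lower error against the ground truth than the raw, zero-filled sensor data.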


ACM Transactions on Graphics | 2013

High-quality computational imaging through simple lenses

Felix Heide; Mushfiqur Rouf; Matthias B. Hullin; Björn Labitzke; Wolfgang Heidrich; Andreas Kolb

Modern imaging optics are highly complex systems consisting of up to two dozen individual optical elements. This complexity is required in order to compensate for the geometric and chromatic aberrations of a single lens, including geometric distortion, field curvature, wavelength-dependent blur, and color fringing. In this article, we propose a set of computational photography techniques that remove these artifacts, and thus allow for postcapture correction of images captured through uncompensated, simple optics which are lighter and significantly less expensive. Specifically, we estimate per-channel, spatially varying point spread functions, and perform nonblind deconvolution with a novel cross-channel term that is designed to specifically eliminate color fringing.
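A heavily simplified variant of cross-channel deconvolution admits a closed-form solution when the cross-channel term is quadratic: the blurred channel is deconvolved while its gradients are pulled toward those of a sharper guide channel. This is only a sketch of the idea — the paper uses spatially varying PSFs and a different, L1-type cross-channel prior — and in the demo the guide equals the ground truth, so recovery is essentially exact:

```python
import numpy as np

def psf2otf(k, shape):
    """Zero-pad a blur kernel to `shape`, center it at the origin, and FFT it."""
    pad = np.zeros(shape)
    kh, kw = k.shape
    pad[:kh, :kw] = k
    pad = np.roll(pad, (-(kh // 2), -(kw // 2)), axis=(0, 1))
    return np.fft.fft2(pad)

def deconv_guided(b, k, g, lam=0.05):
    """argmin_x ||k*x - b||^2 + lam (||dx*x - dx*g||^2 + ||dy*x - dy*g||^2),
    solved in closed form in the Fourier domain (circular boundaries)."""
    K = psf2otf(k, b.shape)
    Dx = psf2otf(np.array([[1.0, -1.0]]), b.shape)
    Dy = psf2otf(np.array([[1.0], [-1.0]]), b.shape)
    D2 = np.abs(Dx) ** 2 + np.abs(Dy) ** 2
    num = np.conj(K) * np.fft.fft2(b) + lam * D2 * np.fft.fft2(g)
    den = np.abs(K) ** 2 + lam * D2
    return np.real(np.fft.ifft2(num / den))

rng = np.random.default_rng(1)
x = rng.random((32, 32))                       # "sharp" ground-truth channel
k1 = np.array([1.0, 4.0, 6.0, 4.0, 1.0])
k = np.outer(k1, k1) / np.outer(k1, k1).sum()  # separable binomial blur kernel
b = np.real(np.fft.ifft2(psf2otf(k, x.shape) * np.fft.fft2(x)))  # blurred channel
g = x                                          # guide: pretend another channel is sharp
x_hat = deconv_guided(b, k, g)
```

In practice the guide would be another (less aberrated) color channel; the quadratic coupling keeps the denominator strictly positive even at frequencies where the blur kernel's OTF vanishes.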


Optics Express | 2014

Imaging in scattering media using correlation image sensors and sparse convolutional coding

Felix Heide; Lei Xiao; Andreas Kolb; Matthias B. Hullin; Wolfgang Heidrich

Correlation image sensors have recently become popular low-cost devices for time-of-flight, or range, cameras. They usually operate under the assumption of a single light path contributing to each pixel. We show that a more thorough analysis of the data from correlation sensors can be used to analyze light transport in much more complex environments, including applications for imaging through scattering and turbid media. The key to our method is a new convolutional sparse coding approach for recovering transient (light-in-flight) images from correlation image sensors. This approach is enabled by an analysis of sparsity in complex transient images, and by the derivation of a new physically motivated model for transient images with drastically improved sparsity.


Computer Vision and Pattern Recognition | 2014

Diffuse Mirrors: 3D Reconstruction from Diffuse Indirect Illumination Using Inexpensive Time-of-Flight Sensors

Felix Heide; Lei Xiao; Wolfgang Heidrich; Matthias B. Hullin

The functional difference between a diffuse wall and a mirror is well understood: one scatters back into all directions, and the other one preserves the directionality of reflected light. The temporal structure of the light, however, is left intact by both: assuming simple surface reflection, photons that arrive first are reflected first. In this paper, we exploit this insight to recover objects outside the line of sight from second-order diffuse reflections, effectively turning walls into mirrors. We formulate the reconstruction task as a linear inverse problem on the transient response of a scene, which we acquire using an affordable setup consisting of a modulated light source and a time-of-flight image sensor. By exploiting sparsity in the reconstruction domain, we achieve resolutions on the order of a few centimeters for object shape (both in depth and laterally) and albedo. Our method is robust to ambient light and works for large room-sized scenes. It is drastically faster and less expensive than previous approaches using femtosecond lasers and streak cameras, and does not require any moving parts.
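The "linear inverse problem with sparsity" formulation can be sketched generically: a forward matrix A (a random stand-in for the transient light transport of the hidden scene, not derived from actual geometry) maps a sparse unknown to the measurements, and iterative soft-thresholding (ISTA) recovers it:

```python
import numpy as np

rng = np.random.default_rng(2)
m, n, k = 80, 200, 5                    # measurements, hidden-scene voxels, nonzeros
A = rng.standard_normal((m, n)) / np.sqrt(m)   # stand-in transient transport matrix
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.uniform(0.5, 1.0, k)
b = A @ x_true                          # simulated transient measurements

# ISTA for  min_x 0.5 ||A x - b||^2 + lam ||x||_1
lam = 0.01
step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1 / Lipschitz constant of the gradient
x = np.zeros(n)
for _ in range(500):
    x = x - step * (A.T @ (A @ x - b))              # gradient step on the data term
    x = np.sign(x) * np.maximum(np.abs(x) - step * lam, 0.0)  # soft threshold
```

Even with far fewer measurements than unknowns, the sparsity prior pins down the few occupied voxels; the paper's reconstruction applies the same principle with a physically derived transport matrix.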


Computer Vision and Pattern Recognition | 2015

Fast and flexible convolutional sparse coding

Felix Heide; Wolfgang Heidrich; Gordon Wetzstein

Convolutional sparse coding (CSC) has become an increasingly important tool in machine learning and computer vision. Image features can be learned and subsequently used for classification and reconstruction tasks. As opposed to patch-based methods, convolutional sparse coding operates on whole images, thereby seamlessly capturing the correlation between local neighborhoods. In this paper, we propose a new approach to solving CSC problems and show that our method converges significantly faster and also finds better solutions than the state of the art. In addition, the proposed method is the first efficient approach to allow for proper boundary conditions to be imposed and it also supports feature learning from incomplete data as well as general reconstruction problems.
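The "operates on whole images" point can be made concrete with a single-filter toy problem: the feature map is recovered over the entire image using FFT-based convolutions, so no patch boundaries are involved. This is plain ISTA under illustrative assumptions, not the paper's faster splitting-based solver with proper boundary handling:

```python
import numpy as np

# Convolutional sparse coding with a single known filter d:
#   min_z 0.5 || d * z - b ||^2 + lam ||z||_1   (* = 2D circular convolution)
rng = np.random.default_rng(3)
n = 32
d = np.zeros((n, n))
d[:3, :3] = rng.standard_normal((3, 3))          # small filter, zero-padded
D = np.fft.fft2(d)

z_true = np.zeros((n, n))
z_true[rng.integers(0, n, 10), rng.integers(0, n, 10)] = 1.0  # sparse feature map
b = np.real(np.fft.ifft2(D * np.fft.fft2(z_true)))            # synthesized image

lam = 0.05
step = 1.0 / np.abs(D).max() ** 2                # 1 / Lipschitz constant
z = np.zeros((n, n))
for _ in range(300):
    r = np.real(np.fft.ifft2(D * np.fft.fft2(z))) - b          # residual d*z - b
    grad = np.real(np.fft.ifft2(np.conj(D) * np.fft.fft2(r)))  # correlation with d
    z = z - step * grad
    z = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)
```

Evaluating the gradient with FFTs over the full image is exactly what lets CSC capture correlations across what would otherwise be patch boundaries.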


Vision, Modeling and Visualization | 2010

CALTag: High Precision Fiducial Markers for Camera Calibration

Bradley Atcheson; Felix Heide; Wolfgang Heidrich

We present a self-identifying marker pattern for camera calibration, together with the associated detection algorithm. The pattern is designed to support high-precision, fully automatic localization of calibration points, as well as identification of individual markers in the presence of significant occlusions, uneven illumination, and observations under extremely acute angles. The detection algorithm is efficient and free of parameters. After calibration we obtain reprojection errors significantly lower than with state-of-the-art self-identifying reference patterns.


Computer Graphics Forum | 2016

Convolutional sparse coding for high dynamic range imaging

Ana Serrano; Felix Heide; Diego Gutierrez; Gordon Wetzstein; Belen Masia

Current HDR acquisition techniques are based on either (i) fusing multi-bracketed, low dynamic range (LDR) images, (ii) modifying existing hardware and capturing different exposures simultaneously with multiple sensors, or (iii) reconstructing a single image with spatially varying pixel exposures. In this paper, we propose a novel algorithm to recover high-quality HDR images from a single, coded exposure. The proposed reconstruction method builds on recently introduced ideas of convolutional sparse coding (CSC); this paper demonstrates how to make CSC practical for HDR imaging. We demonstrate that the proposed algorithm achieves higher-quality reconstructions than alternative methods; we also evaluate optical coding schemes, analyze algorithmic parameters, and build a prototype coded HDR camera that demonstrates the utility of convolutional sparse HDR coding with a custom hardware platform.


International Conference on Computer Graphics and Interactive Techniques | 2013

Adaptive image synthesis for compressive displays

Felix Heide; Gordon Wetzstein; Ramesh Raskar; Wolfgang Heidrich

Recent years have seen proposals for exciting new computational display technologies that are compressive in the sense that they generate high resolution images or light fields with relatively few display parameters. Image synthesis for these types of displays involves two major tasks: sampling and rendering high-dimensional target imagery, such as light fields or time-varying light fields, as well as optimizing the display parameters to provide a good approximation of the target content. In this paper, we introduce an adaptive optimization framework for compressive displays that generates high quality images and light fields using only a fraction of the total plenoptic samples. We demonstrate the framework for a large set of display technologies, including several types of auto-stereoscopic displays, high dynamic range displays, and high-resolution displays. We achieve significant performance gains, and in some cases are able to process data that would be infeasible with existing methods.


International Conference on Computer Graphics and Interactive Techniques | 2016

Computational imaging with multi-camera time-of-flight systems

Shikhar Shrestha; Felix Heide; Wolfgang Heidrich; Gordon Wetzstein

Depth cameras are a ubiquitous technology used in a wide range of applications, including robotics and machine vision, human-computer interaction, autonomous vehicles, as well as augmented and virtual reality. In this paper, we explore the design and applications of phased multi-camera time-of-flight (ToF) systems. We develop a reproducible hardware system that allows for the exposure times and waveforms of up to three cameras to be synchronized. Using this system, we analyze waveform interference between multiple light sources in ToF applications and propose simple solutions to this problem. Building on the concept of orthogonal frequency design, we demonstrate state-of-the-art results for instantaneous radial velocity capture via Doppler time-of-flight imaging, and we explore new directions for optically probing global illumination, for example by de-scattering dynamic scenes and by non-line-of-sight motion detection via frequency gating.
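The orthogonal-frequency idea can be sanity-checked numerically: over an exposure containing an integer number of periods of both modulation frequencies, correlating one camera's reference against another camera's light averages to zero, while the correlation with its own (depth-phase-shifted) return survives. Sinusoidal modulation and the specific frequencies are illustrative assumptions; real ToF sensors typically correlate against square waves at several phase offsets:

```python
import numpy as np

f1, f2 = 10e6, 11e6                  # two cameras' modulation frequencies (hypothetical)
T = 1e-5                             # exposure: exactly 100 periods of f1, 110 of f2
t = np.linspace(0.0, T, 200000, endpoint=False)

ref = np.cos(2 * np.pi * f1 * t)                 # camera 1's reference signal
own = np.cos(2 * np.pi * f1 * t + 0.7)           # its own return, depth phase shift 0.7
other = np.cos(2 * np.pi * f2 * t + 1.3)         # interfering light from camera 2

same = np.mean(ref * own)            # depth information survives: 0.5 * cos(0.7)
interference = np.mean(ref * other)  # cross-camera term averages to ~0
```

Because both products decompose into sinusoids that complete whole numbers of cycles within the exposure, the cross-camera correlation vanishes while the in-band term retains the depth-dependent phase.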

Collaboration


Dive into Felix Heide's collaborations.

Top Co-Authors

Wolfgang Heidrich, University of British Columbia
James Gregson, University of British Columbia
Lei Xiao, University of British Columbia
Mushfiqur Rouf, University of British Columbia