Publication


Featured research published by Jens Ackermann.


Computer Vision and Pattern Recognition | 2012

Photometric stereo for outdoor webcams

Jens Ackermann; Fabian Langguth; Simon Fuhrmann; Michael Goesele

We present a photometric stereo technique that operates on time-lapse sequences captured by static outdoor webcams over the course of several months. Outdoor webcams produce a large set of uncontrolled images subject to varying lighting and weather conditions. We first automatically select a suitable subset of the captured frames for further processing, reducing the dataset size by several orders of magnitude. A camera calibration step is applied to recover the camera response function and the absolute camera orientation, and to compute the light directions for each image. Finally, we describe a new photometric stereo technique for non-Lambertian scenes and unknown light source intensities to recover normal maps and spatially varying materials of the scene.
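
For an outdoor webcam the dominant light source is the sun, so the per-image light directions mentioned above can be expressed from the sun's azimuth and elevation at capture time. The following is only a hypothetical sketch of that single conversion step; the azimuth/elevation values are assumed to come from a solar-position model, and the camera response and orientation recovered by the paper's calibration are not modeled here.

```python
import numpy as np

def sun_direction(azimuth_deg, elevation_deg):
    """Unit vector pointing towards the sun in a local east-north-up frame.

    Azimuth is measured clockwise from north, elevation above the horizon.
    This is a simplified illustration of the per-image light direction,
    not the paper's full calibration pipeline.
    """
    az = np.radians(azimuth_deg)
    el = np.radians(elevation_deg)
    east = np.cos(el) * np.sin(az)
    north = np.cos(el) * np.cos(az)
    up = np.sin(el)
    return np.array([east, north, up])

# Example: mid-morning sun roughly in the south-east.
print(sun_direction(azimuth_deg=120.0, elevation_deg=35.0))
```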


International Conference on Computer Graphics and Interactive Techniques | 2010

Ambient point clouds for view interpolation

Michael Goesele; Jens Ackermann; Simon Fuhrmann; Carsten Haubold; Ronny Klowsky; Drew Steedly; Richard Szeliski

View interpolation and image-based rendering algorithms often produce visual artifacts in regions where the 3D scene geometry is erroneous, uncertain, or incomplete. We introduce ambient point clouds constructed from colored pixels with uncertain depth, which help reduce these artifacts while providing non-photorealistic background coloring and emphasizing reconstructed 3D geometry. Ambient point clouds are created by randomly sampling colored points along the viewing rays associated with uncertain pixels. Our real-time rendering system combines these with more traditional rigid 3D point clouds and colored surface meshes obtained using multiview stereo. Our resulting system can handle larger-range view transitions with fewer visible artifacts than previous approaches.
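
As a rough sketch of the construction described above, the helper below scatters randomly sampled colored points along the viewing rays of uncertain pixels within an assumed depth range [near, far]; the per-pixel depth ranges, filtering, and real-time rendering of the paper are not reproduced.

```python
import numpy as np

def ambient_points(origin, ray_dirs, colors, near, far, samples_per_ray=8,
                   rng=None):
    """Randomly sample colored 3D points along viewing rays.

    For every pixel with uncertain depth, a few colored points are scattered
    along its viewing ray within [near, far]. This only illustrates the
    sampling idea, not the paper's full rendering system.
    """
    rng = np.random.default_rng() if rng is None else rng
    n = ray_dirs.shape[0]
    # One row of random depths per ray.
    depths = rng.uniform(near, far, size=(n, samples_per_ray))
    points = origin + depths[..., None] * ray_dirs[:, None, :]
    point_colors = np.repeat(colors[:, None, :], samples_per_ray, axis=1)
    return points.reshape(-1, 3), point_colors.reshape(-1, 3)

# Toy usage: two uncertain pixels with unit viewing rays and RGB colors.
rays = np.array([[0.0, 0.0, 1.0], [0.1, 0.0, 0.995]])
cols = np.array([[255, 0, 0], [0, 255, 0]])
pts, pcols = ambient_points(np.zeros(3), rays, cols, near=2.0, far=10.0)
print(pts.shape, pcols.shape)  # (16, 3) (16, 3)
```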


Vision, Modeling and Visualization | 2010

Direct Resampling for Isotropic Surface Remeshing

Simon Fuhrmann; Jens Ackermann; Thomas Kalbe; Michael Goesele

We present a feature-sensitive remeshing algorithm for relaxation-based methods. The first stage of the algorithm creates a new mesh from scratch by resampling the reference mesh with an exact vertex budget with either uniform or non-uniform vertex distribution according to a density function. The newly introduced samples on the mesh surface are triangulated directly in 3D by constructing a mutual tessellation. The second stage of the algorithm optimizes the positions of the mesh vertices by building a weighted centroidal Voronoi tessellation to obtain a precise isotropic placement of the samples. We achieve isotropy by employing Lloyd’s relaxation method, but other relaxation schemes are applicable. The proposed algorithm handles diverse meshes of arbitrary genus and guarantees that the remeshed model has the same topology as the input mesh. The density function can be defined by the user or derived automatically from the estimated curvature at the mesh vertices. A subset of the mesh edges may be tagged as sharp features to preserve the characteristic appearance of technical models. The new method can be applied to large meshes and produces results faster than previously achievable.
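
The relaxation stage can be pictured with plain Lloyd iterations. The sketch below is a simplified, unweighted planar version; the paper operates on the mesh surface with a density-weighted centroidal Voronoi tessellation and preserves tagged feature edges, none of which is modeled here.

```python
import numpy as np

def lloyd_relaxation(sites, domain_points, iterations=20):
    """Unweighted Lloyd relaxation in the plane.

    Each iteration assigns every domain point to its nearest site and then
    moves each site to the centroid of its assigned points, driving the
    sites towards a centroidal Voronoi tessellation. Only an illustration
    of the relaxation idea, not the surface-constrained, weighted variant.
    """
    sites = sites.copy()
    for _ in range(iterations):
        # Nearest-site assignment approximates the Voronoi cells.
        d = np.linalg.norm(domain_points[:, None, :] - sites[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for k in range(len(sites)):
            members = domain_points[labels == k]
            if len(members) > 0:
                sites[k] = members.mean(axis=0)
    return sites

rng = np.random.default_rng(0)
domain = rng.uniform(0.0, 1.0, size=(5000, 2))   # dense samples of the domain
seeds = rng.uniform(0.0, 1.0, size=(50, 2))      # initial vertex positions
relaxed = lloyd_relaxation(seeds, domain)
print(relaxed.shape)  # (50, 2), roughly isotropic placement
```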


IEEE Computer | 2010

Scene Reconstruction from Community Photo Collections

Michael Goesele; Jens Ackermann; Simon Fuhrmann; Ronny Klowsky; Fabian Langguth; Patrick Mücke; Martin Ritz

The literally billions of images available from online photo-sharing sites offer an unprecedented wealth of information but also add layers of complexity for reconstruction applications.


Foundations and Trends in Computer Graphics and Vision | 2015

A Survey of Photometric Stereo Techniques

Jens Ackermann; Michael Goesele

Reconstructing the shape of an object from images is an important problem in computer vision that has led to a variety of solution strategies. This survey covers photometric stereo, i.e., techniques that exploit the observed intensity variations caused by illumination changes to recover the orientation of the surface. In the most basic setting, a diffuse surface is illuminated from at least three directions and captured with a static camera. Under some conditions, this allows per-pixel surface normals to be recovered. Modern approaches generalize photometric stereo in various ways, e.g., relaxing constraints on lighting, surface reflectance, and camera placement, or creating different types of local surface estimates. Starting with an introduction for readers unfamiliar with the subject, we discuss the foundations of this field of research. We then summarize important trends and developments that emerged in the last three decades. We focus on approaches with the potential to be applied in a broad range of scenarios. This implies, e.g., simple capture setups, relaxed model assumptions, and increased robustness requirements. The goal of this review is to provide an overview of the diverse concepts and ideas on the way towards more general techniques than traditional photometric stereo.
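
The basic setting mentioned above has a closed-form least-squares solution: for a Lambertian surface lit from at least three known directions, the scaled normal of each pixel follows from a small linear system. The sketch below illustrates this classical formulation only; it is not code from the survey.

```python
import numpy as np

def lambertian_photometric_stereo(intensities, light_dirs):
    """Classical photometric stereo in the basic Lambertian setting.

    intensities: (m, p) matrix, one row per image, one column per pixel.
    light_dirs:  (m, 3) unit light directions, m >= 3.

    Under the Lambertian model I = albedo * (L . n); the least-squares
    solution of L G = I gives G = albedo * n per pixel, so the normal is
    G normalized and the albedo is its length. Shadows and highlights
    (the clamped, non-Lambertian cases) are ignored here.
    """
    G, *_ = np.linalg.lstsq(light_dirs, intensities, rcond=None)  # (3, p)
    albedo = np.linalg.norm(G, axis=0)
    normals = G / np.maximum(albedo, 1e-12)
    return normals, albedo

# Toy example: one pixel with known normal and albedo, three lights.
L = np.array([[0.0, 0.0, 1.0], [0.8, 0.0, 0.6], [0.0, 0.8, 0.6]])
n_true = np.array([0.0, 0.0, 1.0])
I = 0.7 * (L @ n_true)[:, None]          # rendered intensities, albedo 0.7
n_est, rho = lambertian_photometric_stereo(I, L)
print(n_est.ravel(), rho)                # approx. [0, 0, 1] and 0.7
```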


Vision, Modeling and Visualization | 2013

Geometric Point Light Source Calibration

Jens Ackermann; Simon Fuhrmann; Michael Goesele

We present a light position calibration technique based on a general arrangement of at least two reflective spheres in a single image. Contrary to other techniques, we do not directly intersect rays for triangulation but instead solve for the optimal light position by evaluating the image-space error of the light highlights reflected from the spheres. This approach has been very successful in the field of Structure-from-Motion estimation. It has not been applied to light source calibration because determining the reflection point on the sphere to project the highlight back into the image is a challenging problem. We show a solution and define a novel, non-linear error function to recover the position of a point light source. We also introduce a light position estimation approach based on observing the light source directly in multiple images, which does not use any reflections. Finally, we evaluate both proposed techniques and the classical ray intersection method in several scenarios with real data.
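
For reference, the classical ray intersection baseline mentioned above reduces to finding the point closest to a set of 3D rays (one per observed highlight), which is a small linear least-squares problem. The sketch below shows only this baseline, not the paper's image-space formulation.

```python
import numpy as np

def nearest_point_to_rays(origins, directions):
    """Least-squares 'intersection' of several 3D rays.

    Each ray constrains the unknown light position; the point minimizing the
    summed squared distances to all rays solves a 3x3 linear system. This is
    the classical triangulation baseline, not the paper's own method.
    """
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for o, d in zip(origins, directions):
        d = d / np.linalg.norm(d)
        P = np.eye(3) - np.outer(d, d)   # projector onto the ray's normal plane
        A += P
        b += P @ o
    return np.linalg.solve(A, b)

# Toy example: two rays that both pass exactly through the point (1, 2, 3).
origins = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
dirs = np.array([[1.0, 2.0, 3.0], [0.0, 2.0, 3.0]])
print(nearest_point_to_rays(origins, dirs))  # approx. [1. 2. 3.]
```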


European Conference on Computer Vision | 2010

Removing the example from example-based photometric stereo

Jens Ackermann; Martin Ritz; André Stork; Michael Goesele

We introduce an example-based photometric stereo approach that does not require explicit reference objects. Instead, we use a robust multi-view stereo technique to create a partial reconstruction of the scene which serves as scene-intrinsic reference geometry. Similar to the standard approach, we then transfer normals from reconstructed to unreconstructed regions based on robust photometric matching. In contrast to traditional reference objects, the scene-intrinsic reference geometry is neither noise free nor does it necessarily contain all possible normal directions for given materials. We therefore propose several modifications that allow us to reconstruct high quality normal maps. During integration, we combine both normal and positional information yielding high quality reconstructions. We show results on several datasets including an example based on data solely collected from the Internet.
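
The normal-transfer step can be pictured as a nearest-neighbor lookup in intensity space: each unreconstructed pixel receives the normal of the reference pixel whose intensity profile across the input images matches best. The sketch below is a bare-bones illustration of that idea, without the robustness measures the paper introduces for noisy, incomplete reference geometry.

```python
import numpy as np

def transfer_normals(target_obs, ref_obs, ref_normals):
    """Transfer normals by photometric matching (simplified illustration).

    target_obs:  (p, m) intensity vectors of unreconstructed pixels over m images.
    ref_obs:     (r, m) intensity vectors of pixels with reconstructed geometry.
    ref_normals: (r, 3) normals of those reference pixels.

    Each target pixel copies the normal of the reference pixel whose
    normalized intensity profile is most similar (cosine similarity).
    """
    def normalize(x):
        return x / np.maximum(np.linalg.norm(x, axis=1, keepdims=True), 1e-12)

    t = normalize(target_obs.astype(float))
    r = normalize(ref_obs.astype(float))
    sim = t @ r.T                 # (p, r) cosine similarities
    best = sim.argmax(axis=1)
    return ref_normals[best]

# Toy usage with random data, just to show the shapes involved.
rng = np.random.default_rng(1)
normals = transfer_normals(rng.random((4, 5)), rng.random((10, 5)),
                           rng.random((10, 3)))
print(normals.shape)  # (4, 3)
```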


Joint DAGM (German Association for Pattern Recognition) and OAGM Symposium | 2012

Consensus Multi-View Photometric Stereo

Mate Beljan; Jens Ackermann; Michael Goesele

We propose a multi-view photometric stereo technique that uses photometric normal consistency to jointly estimate surface position and orientation. The underlying scene representation is based on oriented points, yielding more flexibility compared to smoothly varying surfaces. We demonstrate that the often employed least squares error of the Lambertian image formation model fails for wide-baseline settings without known visibility information. We then introduce a multi-view normal consistency approach and demonstrate its efficiency on synthetic and real data. In particular, our approach is able to handle occlusion, shadows, and other sources of outliers.


International Conference on 3D Vision | 2014

Multi-view Photometric Stereo by Example

Jens Ackermann; Fabian Langguth; Simon Fuhrmann; Arjan Kuijper; Michael Goesele

We present a novel multi-view photometric stereo technique that recovers the surface of textureless objects with unknown BRDF and lighting. The camera and light positions are allowed to vary freely and change in each image. We exploit orientation consistency between the target and an example object to develop a consistency measure. Motivated by the fact that normals can be recovered more reliably than depth, we represent our surface as both a depth map and a normal map. These maps are jointly optimized and allow us to formulate constraints on depth that take surface orientation into account. Our technique does not require the visual hull or stereo reconstructions for bootstrapping and solely exploits image intensities without the need for radiometric camera calibration. We present results on real objects with varying degrees of specularity and show that these can be used to create globally consistent models from multiple views.


Computational Color Imaging Workshop | 2013

How bright is the moon? Recovering and using absolute luminance values from Internet images

Jens Ackermann; Michael Goesele

The human visual system differs from a camera in various aspects such as spatial resolution, brightness sensitivity, dynamic range, and color perception. Several of these effects depend on the absolute luminance distribution entering the eye, which is not readily available from camera images. In this paper, we argue that absolute luminance is important for correct image reproduction. We investigate to what extent it is possible to recover absolute luminance values for any pixel in images taken from the Internet, extending previous studies on camera calibration, which were performed in much less challenging laboratory settings. We use the Moon as a calibration target to estimate the remaining error. We then evaluate this error in the context of perceptual tonemapping for low dynamic range images.

Collaboration


Dive into Jens Ackermann's collaborations.

Top Co-Authors

Michael Goesele, Technische Universität Darmstadt
Simon Fuhrmann, Technische Universität Darmstadt
Fabian Langguth, Technische Universität Darmstadt
Ronny Klowsky, Technische Universität Darmstadt
André Stork, Technische Universität Darmstadt
Kay Hamacher, Technische Universität Darmstadt
Mate Beljan, Technische Universität Darmstadt
Thomas Kalbe, Technische Universität Darmstadt