Publications


Featured research published by Philipp Jenke.


ACM Transactions on Graphics | 2009

Efficient reconstruction of nonrigid shape and motion from real-time 3D scanner data

Michael Wand; Bart Adams; Maks Ovsjanikov; Alexander Berner; Martin Bokeloh; Philipp Jenke; Leonidas J. Guibas; Hans-Peter Seidel; Andreas Schilling

We present a new technique for reconstructing a single shape and its nonrigid motion from 3D scanning data. Our algorithm takes as input a set of time-varying unstructured sample points that capture partial views of a deforming object, and reconstructs a single shape and a deformation field that fit the data. This representation yields dense correspondences for the whole sequence, as well as a completed 3D shape in every frame. In addition, the algorithm automatically removes spatial and temporal noise artifacts and outliers from the raw input data. Unlike previous methods, the algorithm does not require any shape template but computes a fitting shape automatically from the input data. Our reconstruction framework is based upon a novel topology-aware adaptive subspace deformation technique that allows handling long sequences with complex geometry efficiently. The algorithm accesses data in multiple sequential passes, so that long sequences can be streamed from hard disk without being limited by main memory. We apply the technique to several benchmark datasets, significantly increasing the complexity of the data that can be handled efficiently in comparison to previous work.
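
The subspace idea, reduced to its core, is that per-point displacements live in the span of a small set of deformation nodes. Below is a minimal sketch under strong simplifying assumptions (known correspondences, translation-only nodes, Gaussian weights); the paper's actual technique is topology-aware and adaptive, and all names and parameters here are illustrative.

```python
import numpy as np

def fit_subspace_deformation(src, dst, nodes, sigma=0.5, lam=1.0):
    """Toy subspace deformation: solve for per-node translations T (k, 3)
    so that src + W @ T ~= dst, where W blends node offsets over the
    points, plus a smoothness penalty tying nearby nodes together."""
    # Gaussian radial-basis weights coupling each point to each node
    d2 = ((src[:, None, :] - nodes[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * sigma ** 2))
    W /= W.sum(1, keepdims=True)                  # partition of unity
    # smoothness rows: nearby nodes should move alike
    k = len(nodes)
    rows = []
    for i in range(k):
        for j in range(i + 1, k):
            if np.linalg.norm(nodes[i] - nodes[j]) < 2 * sigma:
                r = np.zeros(k); r[i], r[j] = 1.0, -1.0
                rows.append(r)
    S = np.array(rows) if rows else np.zeros((0, k))
    A = np.vstack([W, lam * S])
    B = np.vstack([dst - src, np.zeros((len(S), 3))])
    T, *_ = np.linalg.lstsq(A, B, rcond=None)     # least-squares node offsets
    return src + W @ T                            # deformed source points
```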


Computer Vision and Image Understanding | 2010

Fusion of range and color images for denoising and resolution enhancement with a non-local filter

Benjamin Huhle; Timo Schairer; Philipp Jenke; Wolfgang Straßer

We present an integrated method for post-processing of range data which removes outliers, smooths the depth values and enhances the lateral resolution in order to achieve visually pleasing 3D models from low-cost depth sensors with additional (registered) color images. The algorithm is based on the non-local principle and adapts the original NL-Means formulation to the characteristics of typical depth data. Explicitly handling outliers in the sensor data, our denoising approach achieves unbiased reconstructions from error-prone input data. Taking intra-patch similarity into account, we reconstruct strong discontinuities without disturbing artifacts and preserve fine detail structures, obtaining piece-wise smooth depth maps. Furthermore, we exploit the dependencies between the depth data and the additionally available color information and increase the lateral resolution of the depth maps. We finally discuss how to parallelize the algorithm in order to achieve fast processing times that are adequate for post-processing of data from fast depth sensors such as time-of-flight cameras.
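
The adaptation of NL-Means to depth data can be sketched as a joint filter: patch similarity is measured on the registered color image, and a range term on depth down-weights outliers so they do not bias the average. A slow reference sketch under assumed conventions (float images, color in [0, 1], registered depth/color; parameter names are not the paper's):

```python
import numpy as np

def nlmeans_depth(depth, color, patch=3, search=7, h_c=0.1, h_d=0.05):
    """Toy joint NL-means depth filter: weights come from color-patch
    similarity, multiplied by a depth range term that suppresses
    outliers. All parameters are illustrative."""
    H, W = depth.shape
    p, s = patch // 2, search // 2
    pad = p + s
    D = np.pad(depth, pad, mode='edge')
    C = np.pad(color, ((pad, pad), (pad, pad), (0, 0)), mode='edge')
    out = np.zeros_like(depth)
    for y in range(H):
        for x in range(W):
            cy, cx = y + pad, x + pad
            ref = C[cy - p:cy + p + 1, cx - p:cx + p + 1]
            wsum = vsum = 0.0
            for dy in range(-s, s + 1):
                for dx in range(-s, s + 1):
                    ny, nx = cy + dy, cx + dx
                    cand = C[ny - p:ny + p + 1, nx - p:nx + p + 1]
                    w = np.exp(-((ref - cand) ** 2).mean() / h_c ** 2)
                    w *= np.exp(-((D[ny, nx] - D[cy, cx]) / h_d) ** 2)
                    wsum += w
                    vsum += w * D[ny, nx]
            out[y, x] = vsum / wsum
    return out
```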


Symposium on Geometry Processing | 2007

Reconstruction of deforming geometry from time-varying point clouds

Michael Wand; Philipp Jenke; Qixing Huang; Martin Bokeloh; Leonidas J. Guibas; Andreas Schilling

In this paper, we describe a system for the reconstruction of deforming geometry from a time sequence of unstructured, noisy point clouds, as produced by recent real-time range scanning devices. Our technique reconstructs both the geometry and dense correspondences over time. Using the correspondences, holes due to occlusion are filled in from other frames. Our reconstruction technique is based on a statistical framework: The reconstruction should both match the measured data points and maximize prior probability densities that prefer smoothness, rigid deformation and smooth movements over time. The optimization procedure consists of an inner loop that optimizes the 4D shape using continuous numerical optimization and an outer loop that infers the discrete 4D topology of the data set using an iterative model assembly algorithm. We apply the technique to a variety of data sets, demonstrating that the new approach is capable of robustly retrieving animated models with correspondences from data sets suffering from significant noise, outliers and acquisition holes.
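
The inner continuous optimization can be illustrated as a tiny energy minimization, assuming (unlike the paper, which infers topology and correspondences in its outer loop) that per-point correspondences are already given; both priors and the neighbor structure below are simplified placeholders:

```python
import numpy as np
from scipy.optimize import minimize

def reconstruct_frame(samples, template, lam_smooth=1.0, lam_rigid=0.1):
    """Toy inner loop: find per-point offsets that pull the template
    toward the measured samples while priors prefer smooth, near-rigid
    motion. Assumes one sample per template point."""
    n = len(template)
    nbr = np.roll(np.arange(n), 1)            # toy neighborhood (a chain)

    def energy(x):
        off = x.reshape(n, 3)
        data = ((template + off - samples) ** 2).sum()   # match the data
        smooth = ((off - off[nbr]) ** 2).sum()           # neighbors agree
        rigid = (off ** 2).sum()                         # small motion
        return data + lam_smooth * smooth + lam_rigid * rigid

    res = minimize(energy, np.zeros(3 * n), method='L-BFGS-B')
    return template + res.x.reshape(n, 3)
```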


Computer Graphics Forum | 2006

Bayesian Point Cloud Reconstruction

Philipp Jenke; Michael Wand; Martin Bokeloh; Andreas Schilling; Wolfgang Straßer

In this paper, we propose a novel surface reconstruction technique based on Bayesian statistics: The measurement process as well as prior assumptions on the measured objects are modeled as probability distributions and Bayes’ rule is used to infer a reconstruction of maximum probability. The key idea of this paper is to define both measurements and reconstructions as point clouds and describe all statistical assumptions in terms of this finite dimensional representation. This yields a discretization of the problem that can be solved using numerical optimization techniques. The resulting algorithm reconstructs both topology and geometry in the form of a well-sampled point cloud with noise removed. In a final step, this representation is then converted into a triangle mesh. The proposed approach is conceptually simple and easy to extend. We apply the approach to reconstruct piecewise-smooth surfaces with sharp features and examine the performance of the algorithm on different synthetic and real-world data sets.
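
In the Bayesian reading, the reconstruction R maximizes P(R | M) ∝ P(M | R) P(R), i.e. it minimizes the negative log posterior. A toy gradient-descent sketch, with a Gaussian likelihood and a neighborhood-smoothness prior standing in for the paper's actual distributions:

```python
import numpy as np
from scipy.spatial import cKDTree

def map_reconstruct(measured, iters=200, step=0.1, lam=1.0):
    """Toy MAP point cloud reconstruction: minimize
    -log P(M|R) - log P(R) by gradient descent. Likelihood: Gaussian
    noise around the measurements. Prior: each point lies near the
    centroid of its 8 nearest neighbors (needs >= 9 points). Both
    terms are simplified stand-ins for the paper's distributions."""
    R = measured.copy()
    for _ in range(iters):
        _, idx = cKDTree(R).query(R, k=9)     # column 0 is the point itself
        centroid = R[idx[:, 1:]].mean(axis=1)
        grad = (R - measured) + lam * (R - centroid)
        R -= step * grad
    return R
```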


Computer Vision and Pattern Recognition | 2008

Robust non-local denoising of colored depth data

Benjamin Huhle; Timo Schairer; Philipp Jenke; Wolfgang Strasser

We give a brief discussion of denoising algorithms for depth data and introduce a novel technique based on the NL-means filter. A unified approach is presented that removes outliers from depth data and accordingly achieves an unbiased smoothing result. This robust denoising algorithm takes intra-patch similarity and optional color information into account in order to handle strong discontinuities and to preserve fine detail structure in the data. We achieve fast computation times with a GPU-based implementation. Results using data from a time-of-flight camera system show a significant gain in visual quality.
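
The robustness component can be sketched separately from the smoothing itself: a depth pixel whose patch finds no similar patch anywhere in its search window has low non-local support and is flagged as an outlier. The thresholds and names below are illustrative, not the paper's:

```python
import numpy as np

def nl_outlier_mask(depth, patch=3, search=7, h=0.05, support=0.1):
    """Flag pixels with low non-local support as outliers: the best
    patch-similarity weight in the search window falls below a
    support threshold."""
    H, W = depth.shape
    p, s = patch // 2, search // 2
    pad = p + s
    D = np.pad(depth, pad, mode='edge')
    mask = np.zeros((H, W), dtype=bool)
    for y in range(H):
        for x in range(W):
            cy, cx = y + pad, x + pad
            ref = D[cy - p:cy + p + 1, cx - p:cx + p + 1]
            best = 0.0
            for dy in range(-s, s + 1):
                for dx in range(-s, s + 1):
                    if dy == 0 and dx == 0:
                        continue                   # skip the pixel itself
                    cand = D[cy + dy - p:cy + dy + p + 1,
                             cx + dx - p:cx + dx + p + 1]
                    w = np.exp(-((ref - cand) ** 2).mean() / h ** 2)
                    best = max(best, w)
            mask[y, x] = best < support            # unsupported -> outlier
    return mask
```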


International Journal of Intelligent Systems Technologies and Applications | 2008

On-the-fly scene acquisition with a handy multi-sensor system

Benjamin Huhle; Philipp Jenke; Wolfgang Strasser

We present a scene acquisition system which allows for fast and simple acquisition of arbitrarily large 3D environments. We propose a small device which acquires and processes frames consisting of depth and colour information at interactive rates. This allows the operator to control the acquisition process on the fly. However, no user input or prior knowledge of the scene is required. In each step of the processing pipeline, colour and depth data are used in combination in order to benefit from the different strengths of the sensors. A novel registration method is introduced that combines geometry and colour information for enhanced robustness and precision. We evaluate the performance of the system and present results from acquisition in different environments.
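
One plausible reading of "registration combining geometry and colour" is an ICP variant that matches points in a joint position+color space before fitting a rigid transform; the paper's actual method may differ. A single-iteration sketch:

```python
import numpy as np
from scipy.spatial import cKDTree

def color_icp_step(src_xyz, src_rgb, dst_xyz, dst_rgb, color_weight=0.1):
    """One iteration of a color-augmented ICP: correspondences are found
    in a joint position+color space, then a rigid transform is fitted
    with the Kabsch algorithm. color_weight trades off the two cues
    and is an illustrative parameter."""
    src6 = np.hstack([src_xyz, color_weight * src_rgb])
    dst6 = np.hstack([dst_xyz, color_weight * dst_rgb])
    _, idx = cKDTree(dst6).query(src6)     # joint-space nearest neighbors
    p, q = src_xyz, dst_xyz[idx]
    pc, qc = p - p.mean(0), q - q.mean(0)
    U, _, Vt = np.linalg.svd(pc.T @ qc)    # Kabsch: rotation R with Rp ~= q
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = q.mean(0) - R @ p.mean(0)
    return R, t                            # apply as src_xyz @ R.T + t
```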


Computers & Graphics | 2008

Processing and interactive editing of huge point clouds from 3D scanners

Michael Wand; Alexander Berner; Martin Bokeloh; Philipp Jenke; Arno Fleck; Mark Hoffmann; Benjamin Maier; Dirk Staneker; Andreas Schilling; Hans-Peter Seidel

This paper describes a new out-of-core multi-resolution data structure for real-time visualization, interactive editing and externally efficient processing of large point clouds. We describe an editing system that makes use of the novel data structure to provide interactive editing and preprocessing tools for large scanner data sets. Using the new data structure, we provide a complete tool chain for 3D scanner data processing, from data preprocessing and filtering to manual touch-up and real-time visualization. In particular, we describe an out-of-core outlier removal and bilateral geometry filtering algorithm, a toolset for interactive selection, painting, transformation, and filtering of huge out-of-core point-cloud data sets, and a real-time rendering algorithm, all of which use the same data structure as their storage backend. The interactive tools work in real time for small model modifications. For large-scale editing operations, we employ a two-resolution approach where editing is planned in real time and executed in an externally efficient offline computation afterwards. We evaluate our implementation on example data sets of sizes up to 63GB, demonstrating that the proposed technique can be used effectively in real-world applications.
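
The out-of-core multi-resolution idea can be sketched as an octree whose nodes keep only a small subsample for coarse rendering while full data is serialized to disk and paged in on demand. The sketch below builds in-core for simplicity; constants, file layout and the subsampling rule are illustrative, not the paper's format:

```python
import numpy as np, pickle

class OOCNode:
    """Out-of-core multi-resolution octree node (illustrative): every
    node keeps a small subsample for coarse rendering; children and
    full leaf data live on disk and are loaded on demand."""
    MAX_POINTS = 1 << 14                    # split threshold
    COARSE = 1 << 10                        # LOD sample size per node

    def __init__(self, points, path):
        self.path = path
        k = min(len(points), self.COARSE)
        pick = np.random.choice(len(points), k, replace=False)
        self.coarse = points[pick]          # representative sample (LOD)
        self.child_paths = []
        if len(points) > self.MAX_POINTS:
            mid = points.mean(0)
            for i in range(8):              # octant split around the mean
                sel = np.ones(len(points), bool)
                for a in range(3):
                    sel &= (points[:, a] >= mid[a]) == bool(i >> a & 1)
                if sel.any():
                    child = OOCNode(points[sel], f"{path}.{i}")
                    child.save()            # page the child out to disk
                    self.child_paths.append(child.path)
        else:
            self.points = points            # leaves keep their full data

    def save(self):
        with open(self.path, "wb") as f:
            pickle.dump(self, f)

    def load_child(self, i):                # page a child back in
        with open(self.child_paths[i], "rb") as f:
            return pickle.load(f)
```

A renderer can draw each visited node's `coarse` sample and descend via `load_child` only where more detail is needed, so memory use tracks the visible detail rather than the full 63GB data set.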


Symposium on Point-Based Graphics | 2007

Interactive Editing of Large Point Clouds

Michael Wand; Alexander Berner; Martin Bokeloh; Arno Fleck; Mark Hoffmann; Philipp Jenke; Benjamin Maier; Dirk Staneker; Andreas Schilling

This paper describes a new out-of-core multi-resolution data structure for real-time visualization and interactive editing of large point clouds. In addition, an editing system is discussed that makes use of the novel data structure to provide interactive editing tools for large scanner data sets. The new data structure provides efficient rendering and allows for handling very large data sets using out-of-core storage. Unlike related previous approaches, it also provides dynamic operations for online insertion, deletion and modification of points in time mostly independent of scene complexity. This permits local editing of huge models in real time while maintaining a full multi-resolution representation for visualization. The data structure is used to implement a prototypical editing system for large point clouds. It provides real-time local editing tools for huge data sets as well as a two-resolution scripting mode for planning large, non-local changes which are subsequently performed in an externally efficient offline computation. We evaluate our implementation on several synthetic and real-world examples of sizes up to 63GB.
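
The claimed dynamic-update property, edit time mostly independent of scene complexity, follows from the fact that an insertion touches only one root-to-leaf path. A minimal sketch with a reservoir-style LOD update; all details are illustrative, and full per-leaf point storage is omitted for brevity:

```python
import numpy as np

class Node:
    """Minimal dynamic multi-resolution node (illustrative)."""
    def __init__(self):
        self.coarse = []        # LOD sample rendered for this node
        self.count = 0
        self.children = {}      # octant index -> Node, created lazily
        self.center = None

def insert(root, p, leaf_cap=64, lod_cap=16, rng=np.random.default_rng()):
    """Insert one point by walking a single root-to-leaf path, so the
    cost grows with tree depth rather than with total point count.
    The reservoir update keeps each node's LOD sample representative."""
    node = root
    while True:
        node.count += 1
        if len(node.coarse) < lod_cap:
            node.coarse.append(p)
        elif rng.random() < lod_cap / node.count:
            node.coarse[rng.integers(lod_cap)] = p
        if node.count <= leaf_cap and not node.children:
            return                          # still fits in this leaf
        if node.center is None:             # fix a split center on overflow
            node.center = np.mean(node.coarse, axis=0)
        octant = sum(int(p[a] >= node.center[a]) << a for a in range(3))
        node = node.children.setdefault(octant, Node())
```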


Dyn3D '09: Proceedings of the DAGM 2009 Workshop on Dynamic 3D Imaging | 2009

Realistic Depth Blur for Images with Range Data

Benjamin Huhle; Timo Schairer; Philipp Jenke; Wolfgang Straßer

We present a system that allows for changing the major camera parameters after the acquisition of an image. Using the high dynamic range composition technique and additional range information captured with a small and low-cost time-of-flight camera, our setup enables us to set the main parameters of a virtual camera system and to compute the resulting image. Hence, the aperture size and shape, exposure time, as well as the focus can be changed in a postprocessing step. Since the depth-of-field computation is sensitive to proper range data, it is essential to process the color and depth data in an integrated manner. We use a non-local filtering approach to denoise and upsample the range data. The same technique is used to infer missing information regarding depth and color which occur due to the parallax between both cameras as well as due to the lens camera model that we use to simulate the depth of field in a physically correct way.
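
The depth-of-field step can be approximated per pixel with the thin-lens circle of confusion; the paper simulates a physically correct lens camera, so the gather-based blur below is only a cheap stand-in. Units and parameters are assumptions (depth and focus in meters):

```python
import numpy as np

def coc_diameter(depth, focus, f=0.035, N=2.8):
    """Thin-lens circle-of-confusion diameter on the sensor (meters)
    for scene depth `depth`, focus distance `focus`, focal length f,
    f-number N."""
    A = f / N                                # aperture diameter
    return A * f / (focus - f) * np.abs(depth - focus) / depth

def depth_blur(img, depth, focus, px_per_m=20000.0, max_r=8):
    """Toy gather-based depth of field: each pixel averages its
    neighbors inside its own circle of confusion. Assumes img is
    (H, W, 3) float; depth and focus are in meters."""
    H, W = img.shape[:2]
    r_px = np.clip(coc_diameter(depth, focus) * px_per_m / 2, 0, max_r)
    out = np.zeros_like(img, dtype=float)
    for y in range(H):
        for x in range(W):
            r = int(round(r_px[y, x]))
            y0, y1 = max(0, y - r), min(H, y + r + 1)
            x0, x1 = max(0, x - r), min(W, x + r + 1)
            out[y, x] = img[y0:y1, x0:x1].reshape(-1, 3).mean(0)
    return out
```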


Digital Television Conference | 2007

Self-Localization in Scanned 3DTV Sets

Philipp Jenke; Benjamin Huhle; Wolfgang Strasser

Future 3D television applications will allow the viewer to freely choose a viewpoint during transmission. Much research in the field of 3DTV has therefore concentrated on capturing photo-realistic 3D models of studio or movie sets. In this paper, however, we concentrate on the problem of self-localization within such scenes. As input we expect a 3D model of an arbitrary environment. Within this model, we localize a low-cost portable sensor system based on a 3D time-of-flight camera. Point clouds acquired with this system from arbitrary viewpoints are registered to the given model in order to estimate the system's position and orientation in the scene.
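
Self-localization by registration can be sketched as plain point-to-point ICP against the given scene model, reusing the same Kabsch rigid fit as above; the accumulated transform is the estimated sensor pose. A minimal sketch, assuming a rough initial alignment since plain ICP only converges locally:

```python
import numpy as np
from scipy.spatial import cKDTree

def localize(scan, model, iters=30):
    """Register the sensor's point cloud to the scene model with
    point-to-point ICP; the accumulated (R, t) maps sensor coordinates
    into the scene, i.e. it is the estimated sensor pose."""
    tree = cKDTree(model)
    R, t = np.eye(3), np.zeros(3)
    for _ in range(iters):
        cur = scan @ R.T + t
        _, idx = tree.query(cur)              # closest model points
        p, q = cur, model[idx]
        pc, qc = p - p.mean(0), q - q.mean(0)
        U, _, Vt = np.linalg.svd(pc.T @ qc)   # Kabsch rigid fit
        d = np.sign(np.linalg.det(Vt.T @ U.T))
        dR = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        dt = q.mean(0) - dR @ p.mean(0)
        R, t = dR @ R, dR @ t + dt            # compose incremental pose
    return R, t
```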

Collaboration


Top co-authors of Philipp Jenke.


Arno Fleck

University of Tübingen
