Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Michael Goesele is active.

Publication


Featured research published by Michael Goesele.


International Conference on Computer Vision (ICCV) | 2007

Multi-View Stereo for Community Photo Collections

Michael Goesele; Noah Snavely; Brian Curless; Hugues Hoppe; Steven M. Seitz

We present a multi-view stereo algorithm that addresses the extreme changes in lighting, scale, clutter, and other effects in large online community photo collections. Our idea is to intelligently choose images to match, both at a per-view and per-pixel level. We show that such adaptive view selection enables robust performance even with dramatic appearance variability. The stereo matching technique takes as input sparse 3D points reconstructed from structure-from-motion methods and iteratively grows surfaces from these points. Optimizing for surface normals within a photoconsistency measure significantly improves the matching results. While the focus of our approach is to estimate high-quality depth maps, we also show examples of merging the resulting depth maps into compelling scene reconstructions. We demonstrate our algorithm on standard multi-view stereo datasets and on casually acquired photo collections of famous scenes gathered from the Internet.
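The per-view selection idea can be illustrated with a toy scoring function: prefer candidate images that share many structure-from-motion points with the reference view and have a compatible scale. The function name `select_views` and the track-set representation are hypothetical sketches, not the paper's actual criteria:

```python
def select_views(ref_tracks, ref_scale, candidates, k=4):
    """Rank candidate views for matching against a reference view.

    ref_tracks: set of SfM track IDs visible in the reference image.
    ref_scale:  a scalar image-scale estimate for the reference view.
    candidates: dict mapping view ID -> (track ID set, image scale).
    Views sharing many 3D points and a similar scale score highest.
    """
    scored = []
    for view_id, (tracks, scale) in candidates.items():
        shared = len(ref_tracks & tracks)                      # common SfM points
        ratio = min(ref_scale, scale) / max(ref_scale, scale)  # scale agreement in (0, 1]
        scored.append((shared * ratio, view_id))
    scored.sort(reverse=True)
    return [view_id for _, view_id in scored[:k]]
```

A view that sees all the reference points but at a wildly different scale is thus ranked below a moderately overlapping view at a similar scale.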


ACM Transactions on Graphics | 2003

Image-based reconstruction of spatial appearance and geometric detail

Hendrik P. A. Lensch; Jan Kautz; Michael Goesele; Wolfgang Heidrich; Hans-Peter Seidel

Real-world objects are usually composed of a number of different materials that often show subtle changes even within a single material. Photorealistic rendering of such objects requires accurate measurements of the reflection properties of each material, as well as the spatially varying effects. We present an image-based measuring method that robustly detects the different materials of real objects and fits an average bidirectional reflectance distribution function (BRDF) to each of them. In order to model local changes as well, we project the measured data for each surface point into a basis formed by the recovered BRDFs leading to a truly spatially varying BRDF representation. Real-world objects often also have fine geometric detail that is not represented in an acquired mesh. To increase the detail, we derive normal maps even for non-Lambertian surfaces using our measured BRDFs. A high quality model of a real object can be generated with relatively little input data. The generated model allows for rendering under arbitrary viewing and lighting conditions and realistically reproduces the appearance of the original object.
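The projection of per-point measurements into the recovered BRDF basis can be read as a least-squares fit of blending weights. A minimal sketch, assuming a discretized sampling of view/light directions and a hypothetical function name:

```python
import numpy as np

def fit_svbrdf_weights(samples, basis_values):
    """Project one surface point's reflectance samples into a basis of
    recovered BRDFs (illustrative least-squares sketch).

    samples:      (n_dirs,) measured reflectance at one surface point.
    basis_values: (n_dirs, n_brdfs) each recovered BRDF evaluated at the
                  same view/light directions.
    Returns per-point blending weights, yielding a spatially varying BRDF.
    """
    weights, *_ = np.linalg.lstsq(basis_values, samples, rcond=None)
    return np.clip(weights, 0.0, None)  # keep non-negative, physically plausible weights
```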


IEEE Conference on Computer Vision and Pattern Recognition (CVPR) | 2006

Multi-View Stereo Revisited

Michael Goesele; Brian Curless; Steven M. Seitz

We present an extremely simple yet robust multi-view stereo algorithm and analyze its properties. The algorithm first computes individual depth maps using a window-based voting approach that returns only good matches. The depth maps are then merged into a single mesh using a straightforward volumetric approach. We show results for several datasets, showing accuracy comparable to the best of the current state of the art techniques and rivaling more complex algorithms.
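The "returns only good matches" idea can be sketched as window-based matching with a confidence threshold: a depth hypothesis is accepted only if its normalized cross-correlation score clears the bar. The names and this NCC-plus-threshold formulation are illustrative, not the paper's exact voting scheme:

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two image windows."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def vote_depth(ref_win, windows_by_depth, threshold=0.9):
    """Pick the depth whose reprojected window best matches the reference,
    returning None unless the best match clears the confidence threshold,
    i.e. only good matches contribute a depth estimate."""
    best_depth, best_score = None, threshold
    for depth, win in windows_by_depth.items():
        score = ncc(ref_win, win)
        if score > best_score:
            best_depth, best_score = depth, score
    return best_depth
```

Pixels where no hypothesis clears the threshold simply stay empty in the depth map, and the later volumetric merge only sees reliable samples.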


British Machine Vision Conference (BMVC) | 2010

Back to the Future: Learning Shape Models from 3D CAD Data

Michael Stark; Michael Goesele; Bernt Schiele

Recognizing 3D objects from arbitrary viewpoints is one of the most fundamental problems in computer vision. A major challenge lies in the transition between the 3D geometry of objects and 2D representations that can be robustly matched to natural images. Most approaches thus rely on 2D natural images either as the sole source of training data for building an implicit 3D representation, or by enriching 3D models with natural image features. In this paper, we go back to the ideas from the early days of computer vision, by using 3D object models as the only source of information for building a multi-view object class detector. In particular, we use these models for learning 2D shape models that can be robustly matched to 2D natural images. Our experiments confirm the validity of our approach, which outperforms current state-of-the-art techniques on a multi-view detection dataset.


International Conference on Computer Graphics and Interactive Techniques (SIGGRAPH) | 2004

DISCO: acquisition of translucent objects

Michael Goesele; Hendrik P. A. Lensch; Jochen Lang; Christian Fuchs; Hans-Peter Seidel

Translucent objects are characterized by diffuse light scattering beneath the object's surface. Light enters and leaves an object at possibly distinct surface locations. This paper presents the first method to acquire this transport behavior for arbitrary inhomogeneous objects. Individual surface points are illuminated in our DISCO measurement facility and the object's impulse response is recorded with a high-dynamic-range video camera. The acquired data is resampled into a hierarchical model of the object's light scattering properties. Missing values are consistently interpolated, resulting in measurement-based, complete, and accurate representations of real translucent objects which can be rendered with various algorithms.
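The consistent interpolation of missing values can be illustrated with a toy gap-filling pass over a grid of measured responses; the paper's hierarchical model is far more elaborate, and `fill_missing` is a hypothetical name:

```python
import numpy as np

def fill_missing(responses):
    """Fill unmeasured entries (NaN) in a grid of measured impulse
    responses by repeatedly averaging already-known 4-neighbors --
    a toy stand-in for hierarchical, consistent interpolation."""
    out = responses.copy()
    missing = np.isnan(out)
    while missing.any():
        for i, j in zip(*np.where(missing)):
            neigh = []
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if (0 <= ni < out.shape[0] and 0 <= nj < out.shape[1]
                        and not np.isnan(out[ni, nj])):
                    neigh.append(out[ni, nj])
            if neigh:
                out[i, j] = float(np.mean(neigh))  # interpolate from measured neighbors
        missing = np.isnan(out)
    return out
```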


Proceedings of the IEEE | 2010

Scene Reconstruction and Visualization From Community Photo Collections

Noah Snavely; Ian Simon; Michael Goesele; Richard Szeliski; Steven M. Seitz

There are billions of photographs on the Internet, representing an extremely large, rich, and nearly comprehensive visual record of virtually every famous place on Earth. Unfortunately, these massive community photo collections are almost completely unstructured, making it very difficult to use them for applications such as the virtual exploration of our world. Over the past several years, advances in computer vision have made it possible to automatically reconstruct 3-D geometry - including camera positions and scene models - from these large, diverse photo collections. Once the geometry is known, we can recover higher level information from the spatial distribution of photos, such as the most common viewpoints and paths through the scene. This paper reviews recent progress on these challenging computer vision problems, and describes how we can use the recovered structure to turn community photo collections into immersive, interactive 3-D experiences.


IEEE Conference on Computer Vision and Pattern Recognition (CVPR) | 2006

Mesostructure from Specularity

Tongbo Chen; Michael Goesele; Hans-Peter Seidel

We describe a simple and robust method for surface mesostructure acquisition. Our method builds on the observation that specular reflection is a reliable visual cue for surface mesostructure perception. In contrast to most photometric stereo methods, which take specularities as outliers and discard them, we propose a progressive acquisition system that captures a dense specularity field as the only information for mesostructure reconstruction. Our method can efficiently recover surfaces with fine-scale geometric details from complex real-world objects with a wide variety of reflection properties, including translucent, low albedo, and highly specular objects. We show results for a variety of objects including human skin, dried apricot, orange, jelly candy, black leather and dark chocolate.
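The core geometric observation, that a mirror-like highlight pins down the surface normal as the bisector of the light and view directions, can be stated in a few lines. A minimal sketch with a hypothetical function name:

```python
import numpy as np

def normal_from_specularity(light_dir, view_dir):
    """Under mirror-like reflection, the surface normal bisects the
    light and view directions, so each observed specularity yields one
    normal; a dense specularity field yields a dense normal field."""
    l = np.asarray(light_dir, float)
    v = np.asarray(view_dir, float)
    h = l / np.linalg.norm(l) + v / np.linalg.norm(v)  # unnormalized half-vector
    return h / np.linalg.norm(h)
```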


Graphical Models | 2005

3D Acquisition of mirroring objects using striped patterns

Marco Tarini; Hendrik P. A. Lensch; Michael Goesele; Hans-Peter Seidel

Objects with mirroring optical characteristics are left out of the scope of most 3D scanning methods. We present here a new automatic acquisition approach, shape-from-distortion, that focuses on that category of objects, requires only a still camera and a color monitor, and produces range scans (plus a normal and a reflectance map) of the target. Our technique consists of two steps: first, an improved environment matte is captured for the mirroring object, using the interference of patterns with different frequencies to obtain sub-pixel accuracy. Then, the matte is converted into a normal and a depth map by exploiting the self-coherence of a surface when integrating the normal map along different paths. The results show very high accuracy, capturing even smallest surface details. The acquired depth maps can be further processed using standard techniques to produce a complete 3D mesh of the object.
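Converting the recovered normal map into a depth map amounts to integrating surface gradients, and averaging different integration paths is one simple way to exploit the self-coherence mentioned above. A toy sketch (hypothetical `integrate_normals`, plain cumulative sums rather than the paper's formulation):

```python
import numpy as np

def integrate_normals(nx, ny, nz):
    """Turn a normal map into a depth map by integrating the surface
    gradients p = -nx/nz and q = -ny/nz along two different paths
    (rows-then-columns and columns-then-rows) and averaging them."""
    p = -nx / nz
    q = -ny / nz
    path_a = np.cumsum(p, axis=1) + np.cumsum(q, axis=0)[:, :1]  # x first, then y
    path_b = np.cumsum(q, axis=0) + np.cumsum(p, axis=1)[:1, :]  # y first, then x
    return 0.5 * (path_a + path_b)
```

For a consistent normal field both paths agree; disagreement between them signals noise in the recovered normals.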


IEEE Conference on Computer Vision and Pattern Recognition (CVPR) | 2009

Relighting objects from image collections

Tom Haber; Christian Fuchs; Philippe Bekaert; Hans-Peter Seidel; Michael Goesele; Hendrik P. A. Lensch

We present an approach for recovering the reflectance of a static scene with known geometry from a collection of images taken under distant, unknown illumination. In contrast to previous work, we allow the illumination to vary between the images, which greatly increases the applicability of the approach. Using an all-frequency relighting framework based on wavelets, we are able to simultaneously estimate the per-image incident illumination and the per-surface point reflectance. The wavelet framework allows for incorporating various reflection models. We demonstrate the quality of our results for synthetic test cases as well as for several datasets captured under laboratory conditions. Combined with multi-view stereo reconstruction, we are even able to recover the geometry and reflectance of a scene solely using images collected from the Internet.
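The simultaneous estimation of per-image illumination and per-point reflectance can be caricatured as a rank-1 alternating least-squares factorization; the paper works in a wavelet basis with full reflection models, so this is only a structural sketch with hypothetical names:

```python
import numpy as np

def alternate_light_reflectance(obs, iters=50):
    """Factor an (n_points, n_images) observation matrix into per-point
    reflectance and per-image illumination by alternating least squares,
    a drastically simplified stand-in for the wavelet-domain estimation."""
    n_pts, n_imgs = obs.shape
    light = np.ones(n_imgs)
    for _ in range(iters):
        refl = obs @ light / (light @ light)   # fix illumination, solve reflectance
        light = refl @ obs / (refl @ refl)     # fix reflectance, solve illumination
    light /= light[0]                          # resolve the global scale ambiguity
    refl = obs @ light / (light @ light)
    return refl, light
```

The scale normalization reflects a real ambiguity: brighter lights with darker surfaces explain the same images, so one degree of freedom must be fixed by convention.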


Pacific Conference on Computer Graphics and Applications | 2002

Interactive rendering of translucent objects

Hendrik P. A. Lensch; Michael Goesele; Philippe Bekaert; Jan Kautz; Marcus Magnor; Jochen Lang; Hans-Peter Seidel

This paper presents a rendering method for translucent objects, in which viewpoint and illumination can be modified at interactive rates. In a preprocessing step the impulse response to incoming light impinging at each surface point is computed and stored in two different ways: the local effect on close-by surface points is modeled as a per-texel filter kernel that is applied to a texture map representing the incident illumination. The global response (i.e. light shining through the object) is stored as vertex-to-vertex throughput factors for the triangle mesh of the object. During rendering, the illumination map for the object is computed according to the current lighting situation and then filtered by the precomputed kernels. The illumination map is also used to derive the incident illumination on the vertices, which is distributed via the vertex-to-vertex throughput factors to the other vertices. The final image is obtained by combining the local and global response. We demonstrate the performance of our method for several models.
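The combination of the two precomputed responses can be sketched as two matrix-vector products over a per-vertex illumination vector: one for the local filter kernels and one for the global throughput factors. The names and the dense-matrix representation are illustrative only:

```python
import numpy as np

def render_translucent(illum, local_kernels, throughput):
    """Combine the precomputed local response (per-texel filtering of the
    incident-illumination map) with the global response (vertex-to-vertex
    throughput), one scalar sample per vertex for simplicity.

    illum:         (n,) incident illumination per vertex/texel.
    local_kernels: (n, n) filter matrix; row i weights nearby texels.
    throughput:    (n, n) transport factors for light shining through.
    """
    local = local_kernels @ illum    # close-by subsurface scattering
    global_ = throughput @ illum     # light transported through the object
    return local + global_
```

Because both responses are linear in the illumination, a lighting change only requires recomputing these products, which is what makes interactive rates possible.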

Collaboration


Dive into Michael Goesele's collaborations.

Top Co-Authors

Simon Fuhrmann (Technische Universität Darmstadt)
Fabian Langguth (Technische Universität Darmstadt)
Jan Kautz (University College London)
Jens Ackermann (Technische Universität Darmstadt)
Sven Widmer (Technische Universität Darmstadt)
Kay Hamacher (Technische Universität Darmstadt)