Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Stefan Gumhold is active.

Publication


Featured research published by Stefan Gumhold.


DAGM Conference on Pattern Recognition | 2010

Real-time dense geometry from a handheld camera

Jan Stühmer; Stefan Gumhold; Daniel Cremers

We present a novel variational approach to estimate dense depth maps from multiple images in real-time. By using robust penalizers for both the data term and the regularizer, our method preserves discontinuities in the depth map. We demonstrate that the integration of multiple images substantially increases the robustness of estimated depth maps to noise in the input images. The integration of our method into recently published algorithms for camera tracking allows dense geometry reconstruction in real-time using a single handheld camera. We demonstrate the performance of our algorithm with real-world data.
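The robust fusion idea in this abstract can be sketched compactly. The following is a toy 1D version in NumPy, assuming Huber penalizers for both the data term and the regularizer and plain gradient descent; `fuse_depth` and all its parameters are illustrative, not the paper's actual solver:

```python
import numpy as np

def huber_grad(r, delta):
    # Derivative of the Huber penalty: quadratic near zero, linear tails,
    # so noise outliers and true depth edges are not over-penalized.
    return np.where(np.abs(r) <= delta, r, delta * np.sign(r))

def fuse_depth(observations, lam=0.5, delta=0.1, steps=300, lr=0.1):
    """Minimize sum_k H(d - z_k) + lam * H(grad d) by gradient descent,
    a 1D stand-in for robust variational multi-view depth fusion."""
    d = np.mean(observations, axis=0)      # initialize with the per-pixel mean
    for _ in range(steps):
        data = sum(huber_grad(d - z, delta) for z in observations)
        g = np.diff(d)                     # forward differences ~ depth gradient
        hg = huber_grad(g, delta)
        reg = np.zeros_like(d)
        reg[:-1] -= hg                     # d/dd[i]   of H(d[i+1] - d[i])
        reg[1:] += hg                      # d/dd[i+1] of H(d[i+1] - d[i])
        d -= lr * (data + lam * reg)
    return d
```

Because the regularizer's tails are linear rather than quadratic, a genuine depth discontinuity costs only a bounded penalty and survives the smoothing, which is the discontinuity-preserving behavior the abstract describes.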


Computer Vision and Pattern Recognition | 2016

Uncertainty-Driven 6D Pose Estimation of Objects and Scenes from a Single RGB Image

Eric Brachmann; Frank Michel; Alexander Krull; Michael Ying Yang; Stefan Gumhold; Carsten Rother

In recent years, the task of estimating the 6D pose of object instances and complete scenes, i.e. camera localization, from a single input image has received considerable attention. Consumer RGB-D cameras have made this feasible, even for difficult, texture-less objects and scenes. In this work, we show that a single RGB image is sufficient to achieve visually convincing results. Our key concept is to model and exploit the uncertainty of the system at all stages of the processing pipeline. The uncertainty comes in the form of continuous distributions over 3D object coordinates and discrete distributions over object labels. We give three technical contributions. Firstly, we develop a regularized, auto-context regression framework which iteratively reduces uncertainty in object coordinate and object label predictions. Secondly, we introduce an efficient way to marginalize object coordinate distributions over depth. This is necessary to deal with missing depth information. Thirdly, we utilize the distributions over object labels to detect multiple objects simultaneously with a fixed budget of RANSAC hypotheses. We tested our system for object pose estimation and camera localization on commonly used data sets. We see a major improvement over competing systems.
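The third contribution, spending a fixed RANSAC budget across several candidate objects according to the discrete label distribution, can be illustrated with a standard largest-remainder allocation. The function name and interface are hypothetical, chosen only to make the idea concrete:

```python
import numpy as np

def allocate_hypotheses(label_probs, budget):
    """Split a fixed RANSAC hypothesis budget across candidate objects in
    proportion to a discrete object-label distribution, so likely objects
    receive more pose hypotheses without exceeding the total budget."""
    probs = np.asarray(label_probs, dtype=float)
    probs = probs / probs.sum()
    raw = probs * budget
    counts = np.floor(raw).astype(int)
    # Largest-remainder rounding: hand leftover hypotheses to the objects
    # with the biggest fractional share.
    for i in np.argsort(-(raw - counts))[: budget - counts.sum()]:
        counts[i] += 1
    return counts
```

With a budget of 10 and label probabilities (0.5, 0.3, 0.2), this yields 5, 3, and 2 hypotheses respectively; unlikely labels still get a nonzero share if their probability mass warrants it.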


Asian Conference on Computer Vision | 2014

6-DOF Model Based Tracking via Object Coordinate Regression

Alexander Krull; Frank Michel; Eric Brachmann; Stefan Gumhold; Stephan Ihrke; Carsten Rother

This work investigates the problem of 6-Degrees-Of-Freedom (6-DOF) object tracking from RGB-D images, where the object is rigid and a 3D model of the object is known. As in many previous works, we utilize a Particle Filter (PF) framework. In order to have a fast tracker, the key aspect is to design a clever proposal distribution which works reliably even with a small number of particles. To achieve this we build on a recently developed state-of-the-art system for single image 6D pose estimation of known 3D objects, using the concept of so-called 3D object coordinates. The idea is to train a random forest that regresses the 3D object coordinates from the RGB-D image. Our key technical contribution is a two-way procedure to integrate the random forest predictions in the proposal distribution generation. This has many practical advantages, in particular better generalization ability with respect to occlusions, changes in lighting and fast-moving objects. We demonstrate experimentally that we exceed state-of-the-art on a given, public dataset. To raise the bar in terms of fast-moving objects and object occlusions, we also create a new dataset, which will be made publicly available.
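The key mechanism, folding a learned pose predictor into the particle filter's proposal distribution, can be sketched in one update step. This is a deliberately simplified 1D toy: `predict_pose` stands in for the object-coordinate forest, the importance-weight correction for the mixed proposal is omitted, and all parameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter_step(particles, weights, observation, predict_pose,
                         motion_std=0.2, obs_std=0.1, mix=0.5):
    """One PF update. A fraction `mix` of particles is drawn around the pose
    suggested by an external regressor (stand-in for the forest); the rest
    follow the plain motion model. Proposal correction omitted for brevity."""
    n = len(particles)
    n_informed = int(mix * n)
    # Resample according to the current weights.
    idx = rng.choice(n, size=n, p=weights)
    particles = particles[idx]
    # Informed proposal: place some particles near the regressed pose, which
    # is what lets the tracker recover from occlusion or fast motion.
    particles[:n_informed] = predict_pose(observation) + rng.normal(0, motion_std, n_informed)
    # Remaining particles diffuse under the motion model.
    particles[n_informed:] += rng.normal(0, motion_std, n - n_informed)
    # Reweight by the observation likelihood.
    w = np.exp(-0.5 * ((particles - observation) / obs_std) ** 2)
    w /= w.sum()
    return particles, w
```

Because part of the proposal is anchored to the regressor rather than to the previous state, the filter stays reliable with few particles even when the motion model alone would lose the target.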


Computer Vision and Pattern Recognition | 2017

DSAC — Differentiable RANSAC for Camera Localization

Eric Brachmann; Alexander Krull; Sebastian Nowozin; Jamie Shotton; Frank Michel; Stefan Gumhold; Carsten Rother

RANSAC is an important algorithm in robust optimization and a central building block for many computer vision applications. In recent years, traditionally hand-crafted pipelines have been replaced by deep learning pipelines, which can be trained in an end-to-end fashion. However, RANSAC has so far not been used as part of such deep learning pipelines, because its hypothesis selection procedure is non-differentiable. In this work, we present two different ways to overcome this limitation. The most promising approach is inspired by reinforcement learning, namely to replace the deterministic hypothesis selection by a probabilistic selection for which we can derive the expected loss w.r.t. all learnable parameters. We call this approach DSAC, the differentiable counterpart of RANSAC. We apply DSAC to the problem of camera localization, where deep learning has so far failed to improve on traditional approaches. We demonstrate that by directly minimizing the expected loss of the output camera poses, robustly estimated by RANSAC, we achieve an increase in accuracy. In the future, any deep learning pipeline can use DSAC as a robust optimization component.
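The core trick, replacing the non-differentiable argmax over hypotheses by a probabilistic (softmax) selection and training on the expected loss, fits in a few lines. A minimal NumPy sketch, with `alpha` as an illustrative temperature parameter not taken from the paper:

```python
import numpy as np

def dsac_expected_loss(scores, losses, alpha=1.0):
    """Differentiable hypothesis selection: instead of picking the argmax
    hypothesis (non-differentiable), select hypothesis i with probability
    softmax(alpha * score_i) and train on the expected loss
    E[l] = sum_i p_i * l_i, which is differentiable in the scores."""
    s = alpha * scores
    p = np.exp(s - s.max())   # numerically stable softmax
    p /= p.sum()
    expected = float(p @ losses)
    # Analytic gradient of the expected loss w.r.t. the scores:
    # dE/ds_j = alpha * p_j * (l_j - E), from the softmax Jacobian.
    grad = alpha * p * (losses - expected)
    return expected, grad
```

Gradient descent on this expected loss pushes down the scores of high-loss hypotheses and raises the rest, so the scoring network learns to prefer hypotheses that yield accurate poses, exactly the behavior argmax selection cannot be trained for end-to-end.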


International Journal of Intelligent Systems Technologies and Applications | 2008

Image-based motion compensation for structured light scanning of dynamic surfaces

Sören König; Stefan Gumhold

Many structured light scanning systems based on temporal pattern codification produce dense and robust results on static scenes but behave very poorly when applied to dynamic scenes in which objects are allowed to move or to deform during the acquisition process. The main reason for this lies in the wrong combination of encoded correspondence information, because the same point in the projector pattern sequence can map to different points within the camera images due to depth changes over time. We present a novel approach suitable for measuring and compensating this kind of pattern motion. The described technique can be combined with existing active range scanning systems designed for static surface reconstruction, making them applicable to the dynamic case. We demonstrate the benefits of our method by integrating it into a Gray-code-based structured light scanner, which runs at 30 3D scans per second.
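For readers unfamiliar with the temporal codification mentioned above: a Gray-code scanner projects a sequence of stripe patterns, and each camera pixel observes one bit per pattern; decoding that bit sequence recovers the projector column, which is exactly the correspondence that motion scrambles. A minimal decoding sketch (helper names are illustrative):

```python
def gray_to_binary(bits):
    """Decode a Gray-code bit sequence (MSB first) to an integer projector
    column. In a temporal structured-light scanner, each bit comes from one
    projected stripe pattern observed over time at a single camera pixel."""
    value = 0
    prev = 0
    for b in bits:
        prev ^= b                  # binary bit = XOR prefix of Gray bits
        value = (value << 1) | prev
    return value

def binary_to_gray(n, width):
    """Encode column index n as a Gray-code bit list of the given width,
    as the projector would emit it over `width` successive patterns."""
    g = n ^ (n >> 1)
    return [(g >> (width - 1 - i)) & 1 for i in range(width)]
```

Gray codes are used because consecutive columns differ in exactly one bit, so a small decoding error displaces the correspondence by at most one column; the paper's contribution is compensating the case where the surface moves between patterns, so the bits of one pixel no longer belong to a single projector column.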


IEEE Symposium on Interactive Ray Tracing | 2006

Incremental Raycasting of Piecewise Quadratic Surfaces on the GPU

Carsten Stoll; Stefan Gumhold; Hans-Peter Seidel

To overcome the limitations of triangle and point based surfaces, several authors have recently investigated surface representations that are based on higher order primitives. Among these are MPU, SLIM surfaces, dynamic skin surfaces and higher order iso-surfaces. Up to now, these representations were not suitable for interactive applications because of the lack of an efficient rendering algorithm. In this paper we close this gap for implicit surface representations of degree two by developing highly optimized GPU implementations of the raycasting algorithm. We investigate techniques for fast incremental raycasting and cover per-fragment and per-quadric backface culling. We apply the approaches to the rendering of SLIM surfaces, quadratic iso-surfaces over tetrahedral meshes and bilinear quadrilaterals. Compared to triangle based surface approximations of similar geometric error we achieve only slightly lower frame rates but with much higher visual quality due to the quadratic approximation power of the underlying surfaces.
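The reason degree-two surfaces raycast so cheaply is that substituting the ray into the implicit quadric equation yields a scalar quadratic in the ray parameter, solvable in closed form per fragment. A CPU-side sketch of that per-ray computation (the paper's contribution is the incremental GPU evaluation, which this deliberately omits):

```python
import numpy as np

def ray_quadric(o, d, A, b, c):
    """Intersect the ray o + t*d with the quadric x^T A x + b.x + c = 0.
    Substituting the ray gives a2*t^2 + a1*t + a0 = 0, solved in closed form."""
    a2 = d @ A @ d
    a1 = 2.0 * (o @ A @ d) + b @ d
    a0 = o @ A @ o + b @ o + c
    if abs(a2) < 1e-12:                    # degenerate: quadratic term vanishes
        return None if abs(a1) < 1e-12 else -a0 / a1
    disc = a1 * a1 - 4.0 * a2 * a0
    if disc < 0.0:                         # ray misses the quadric
        return None
    sq = np.sqrt(disc)
    t0, t1 = (-a1 - sq) / (2 * a2), (-a1 + sq) / (2 * a2)
    ts = [t for t in sorted((t0, t1)) if t > 0.0]
    return ts[0] if ts else None           # nearest hit in front of the origin
```

For example, a unit sphere is A = I, b = 0, c = -1; a ray starting at (0, 0, -3) along +z hits it at t = 2. In the GPU version, the coefficients a2, a1, a0 are what gets updated incrementally from fragment to fragment instead of being recomputed from scratch.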


Computer Vision and Pattern Recognition | 2017

Global Hypothesis Generation for 6D Object Pose Estimation

Frank Michel; Alexander Kirillov; Eric Brachmann; Alexander Krull; Stefan Gumhold; Bogdan Savchynskyy; Carsten Rother

This paper addresses the task of estimating the 6D pose of a known 3D object from a single RGB-D image. Most modern approaches solve this task in three steps: i) compute local features, ii) generate a pool of pose hypotheses, iii) select and refine a pose from the pool. This work focuses on the second step. While all existing approaches generate the hypothesis pool via local reasoning, e.g. RANSAC or Hough voting, we are the first to show that global reasoning is beneficial at this stage. In particular, we formulate a novel fully-connected Conditional Random Field (CRF) that outputs a very small number of pose hypotheses. Despite the potential functions of the CRF being non-Gaussian, we give a new, efficient two-step optimization procedure, with some guarantees for optimality. We utilize our global hypothesis generation procedure to produce results that exceed the state of the art on the challenging Occluded Object Dataset.


Computer Graphics Forum | 2014

Visual Analysis of Trajectories in Multi-Dimensional State Spaces

Sebastian Grottel; Julian Heinrich; Daniel Weiskopf; Stefan Gumhold

Multi-dimensional data originate from many different sources and are relevant for many applications. One specific sub-type of such data is continuous trajectory data in multi-dimensional state spaces of complex systems. We adapt the concept of spatially continuous scatterplots and spatially continuous parallel coordinate plots to such trajectory data, leading to continuous-time scatterplots and continuous-time parallel coordinates. Together with a temporal heat map representation, we design coordinated views for visual analysis and interactive exploration. We demonstrate the usefulness of our visualization approach for three case studies that cover examples of complex dynamic systems: cyber-physical systems consisting of heterogeneous sensor and actuator networks (the collection of time-dependent sensor network data of an exemplary smart home environment), the dynamics of robot arm movement, and motion characteristics of humanoids.


Eurographics | 2015

Visualization of Particle-based Data with Transparency and Ambient Occlusion

Joachim Staib; Sebastian Grottel; Stefan Gumhold

Particle-based simulation techniques, like the discrete element method or molecular dynamics, are widely used in many research fields. In real-time explorative visualization it is common to render the resulting data using opaque spherical glyphs with local lighting only. Due to massive overlaps, however, inner structures of the data are often occluded, rendering visual analysis impossible. Furthermore, local lighting is not sufficient, as several important features like complex shapes, holes, rifts or filaments cannot be perceived well.


Computer Graphics Forum | 2011

Diffusion-Based Snow Cover Generation

Niels von Festenberg; Stefan Gumhold

We present a method to generate snow covers on complex scene geometries. Both volumetric snow shapes and photorealistic texturing are computed. We formulate snow accumulation as a diffusive distribution process on a ground scene. Our theoretical framework is motivated by models for granular material deposition. With the framework we can capture the most relevant features of natural snow cover geometries in a concise local computation scheme. Snow bridges and overhangs are also included. Snow surface texture coordinates are computed to create realistic ground-snow interfaces. Several example scenes and a supplementary snow cover growth animation demonstrate the method's efficiency.
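The "diffusive distribution process" can be illustrated on a 1D height field: deposit snow uniformly, then let the combined ground-plus-snow surface diffuse so that snow drains off peaks and collects in depressions. This toy is far simpler than the paper's scheme (no bridges, overhangs, or mass conservation under clipping), and all names and parameters are illustrative:

```python
import numpy as np

def accumulate_snow(ground, steps=200, flux=0.1, deposit=0.01):
    """Toy diffusive snow accumulation on a 1D ground profile: each step
    deposits a uniform layer, then relaxes the snow surface with a discrete
    Laplacian, loosely mimicking granular redistribution."""
    snow = np.zeros_like(ground)
    for _ in range(steps):
        snow += deposit                      # uniform snowfall
        surface = ground + snow
        lap = np.zeros_like(surface)
        lap[1:-1] = surface[:-2] - 2 * surface[1:-1] + surface[2:]
        snow += flux * lap                   # diffuse the combined surface
        np.clip(snow, 0.0, None, out=snow)   # snow height cannot go negative
    return snow
```

Diffusing the combined surface rather than the snow layer alone is what makes snow slide off a sharp ground peak and pile up beside it, which is the qualitative behavior of natural snow covers the abstract refers to.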

Collaboration


Dive into Stefan Gumhold's collaborations.

Top Co-Authors

Marcel Spehr, Dresden University of Technology
Eric Brachmann, Dresden University of Technology
Carsten Rother, Dresden University of Technology
Frank Michel, Dresden University of Technology
Joachim Staib, Dresden University of Technology
Nico Schertler, Dresden University of Technology
Stefan Hesse, Dresden University of Technology
Niels von Festenberg, Dresden University of Technology