Publication

Featured research published by Kai Ruhl.


Vision, Modeling, and Visualization | 2011

Markerless Motion Capture using multiple Color-Depth Sensors

Kai Berger; Kai Ruhl; Yannic Schroeder; Christian Bruemmer; Alexander Scholz; Marcus A. Magnor

With the advent of the Microsoft Kinect, renewed focus has been put on monocular depth-based motion capturing. However, this approach is limited in that an actor has to move facing the camera. Due to the active-light nature of the sensor, no more than one device has been used for motion capture so far. In effect, any pose estimation must fail for poses occluded from the depth camera. Our work investigates reducing or mitigating the detrimental effects of multiple active-light emitters, thereby allowing motion capture from all angles. We systematically evaluate the concurrent use of one to four Kinects, including calibration, error measures and analysis, and present a time-multiplexing approach.
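
The time-multiplexing idea can be pictured as a scheduler that keeps only one structured-light emitter active per time slot, so the projected patterns cannot interfere. A minimal sketch, assuming a hypothetical `DepthSensor` interface (real Kinect drivers such as libfreenect or the Kinect SDK expose emitter control differently, and the paper's calibration and analysis are not shown):

```python
import time

class DepthSensor:
    """Hypothetical stand-in for one color-depth sensor;
    purely illustrative, not a real driver API."""
    def __init__(self, sensor_id):
        self.sensor_id = sensor_id

    def set_emitter(self, on):
        pass  # enable or disable the IR pattern projector

    def grab_depth(self):
        return None  # fetch one depth frame

def time_multiplexed_capture(sensors, slot_s=0.015, rounds=100):
    """Round-robin scheduling: only one active-light emitter is on
    per slot. The price is a per-sensor frame rate divided by
    len(sensors)."""
    frames = {s.sensor_id: [] for s in sensors}
    for s in sensors:
        s.set_emitter(False)
    for _ in range(rounds):
        for s in sensors:
            s.set_emitter(True)
            time.sleep(slot_s)          # let the pattern settle
            frames[s.sensor_id].append(s.grab_depth())
            s.set_emitter(False)
    return frames
```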


Time-of-Flight and Depth Imaging | 2013

A Survey on Time-of-Flight Stereo Fusion

Rahul Nair; Kai Ruhl; Frank Lenzen; Stephan Meister; Henrik Schäfer; Christoph S. Garbe; Martin Eisemann; Marcus A. Magnor; Daniel Kondermann

Due to the demand for depth maps of higher quality than possible with a single depth imaging technique today, there has been an increasing interest in the combination of different depth sensors to produce a "super-camera" that is more than the sum of the individual parts. In this survey paper, we give an overview of methods for the fusion of Time-of-Flight (ToF) and passive stereo data, as well as applications of the resulting high-quality depth maps. Additionally, we provide a tutorial-based introduction to the principles behind ToF stereo fusion and the evaluation criteria used to benchmark these methods.
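
As a toy illustration of the fusion principle (not any specific surveyed method), one can upsample a low-resolution ToF depth map to stereo resolution and blend the two estimates by per-pixel confidence; all names below are assumptions:

```python
import numpy as np
import cv2

def fuse_tof_stereo(tof_depth, tof_conf, stereo_depth, stereo_conf):
    """Confidence-weighted blend of a low-resolution ToF depth map
    with a high-resolution stereo depth map. Illustrative only:
    the surveyed methods use far richer noise models and priors."""
    h, w = stereo_depth.shape
    tof_up = cv2.resize(tof_depth.astype(np.float32), (w, h),
                        interpolation=cv2.INTER_LINEAR)
    conf_up = cv2.resize(tof_conf.astype(np.float32), (w, h),
                         interpolation=cv2.INTER_LINEAR)
    weight = conf_up + stereo_conf + 1e-8   # avoid division by zero
    return (conf_up * tof_up + stereo_conf * stereo_depth) / weight
```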


International Conference on Computer Vision | 2011

The capturing of turbulent gas flows using multiple Kinects

Kai Berger; Kai Ruhl; Mark Albers; Yannic Schröder; Alexander Scholz; Jan Kokemüller; Stefan Guthe; Marcus A. Magnor

We introduce the Kinect as a tool for capturing gas flows around occluders, using objects of different aerodynamic properties. Previous approaches have been invasive or have required elaborate setups, including large printed sheets of complex noise patterns and carefully arranged lighting. Our method is easier to set up while still producing good results. We show that three Kinects are sufficient to qualitatively reconstruct nonstationary, time-varying gas flows in the presence of occluders.
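
The reason a depth sensor can observe gas at all is that the refracting gas perturbs the projected IR pattern, which appears as depth deviations and dropouts against the static background. A minimal sketch of that per-pixel detection step (illustrative; the paper's three-view reconstruction is far more involved, and the threshold is an assumption):

```python
import numpy as np

def gas_mask(background_frames, current, tau=20.0):
    """Flag pixels whose depth deviates from a static background
    model: refraction by the gas perturbs the structured-light
    pattern, showing up as deviations and dropouts."""
    background = np.median(np.stack(background_frames), axis=0)
    deviation = np.abs(current.astype(np.float64) - background)
    dropouts = current == 0          # pattern could not be matched
    return (deviation > tau) | dropouts
```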


Conference on Visual Media Production | 2011

Flowlab - An Interactive Tool for Editing Dense Image Correspondences

Felix Klose; Kai Ruhl; Christian Lipski; Marcus A. Magnor

Finding dense correspondences between two images is a well-researched but still unsolved problem. For various tasks in computer graphics, e.g. image interpolation, obtaining plausible correspondences is a vital component. We present an interactive tool that allows the user to modify and correct dense correspondence maps between two given images. Incorporating state-of-the-art algorithms for image segmentation, correspondence estimation and optical flow, our tool assists the user in selecting and correcting mismatched correspondences.
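
One elementary editing operation such a tool can offer (a hypothetical sketch, not Flowlab's actual interface) is overwriting the correspondence vectors inside a user-selected segment:

```python
import numpy as np

def correct_segment(flow, segment_mask, target_offset):
    """Overwrite the correspondence vectors (dx, dy) inside a
    user-selected segment with a user-specified offset."""
    corrected = flow.copy()              # flow: H x W x 2
    corrected[segment_mask] = target_offset
    return corrected
```

For example, `correct_segment(flow, mask, np.array([4.0, -2.0]))` would move every correspondence in the masked segment to a fixed offset; segmentation confines the edit to a coherent region.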


Conference on Visual Media Production | 2011

Making of Who Cares? HD Stereoscopic Free Viewpoint Video

Christian Lipski; Felix Klose; Kai Ruhl; Marcus A. Magnor

We present a detailed blueprint of our stereoscopic free-viewpoint video system. Using unsynchronized footage as input, we can render virtual camera paths in the post-production stage. The movement of the virtual camera also extends to the temporal domain, so that slow-motion and freeze-and-rotate shots are possible. As a proof of concept, a full-length stereoscopic HD music video has been produced using our approach.


Conference on Visual Media Production | 2012

Integrating approximate depth data into dense image correspondence estimation

Kai Ruhl; Felix Klose; Christian Lipski; Marcus A. Magnor

High-quality dense image correspondence estimation between two images is an essential prerequisite for many tasks in visual media production, one prominent example being view interpolation. Due to the ill-posed nature of the correspondence estimation problem, errors occur frequently for a number of problematic conditions, among them occlusions, large displacements and low-textured regions. In this paper, we propose to use approximate depth data from low-resolution depth sensors or coarse geometric proxies to guide the high-resolution image correspondence estimation. We counteract the effect of uncertainty in the prior by exploiting the coarse-to-fine image pyramid used in our estimation algorithm. Our results show that even with only approximate priors, visual quality improves considerably compared to an unguided algorithm or a pure depth-based interpolation.
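
One way to picture the integration (a sketch, not the paper's actual formulation): derive a flow prior from the approximate depth, and pull the estimate toward it with a weight that is strongest at the coarse pyramid levels, where the prior's low resolution is least harmful. OpenCV's Farneback flow stands in for the paper's own estimator:

```python
import numpy as np
import cv2

def flow_with_depth_prior(I0, I1, prior_flow, levels=4, lam0=1.0):
    """Coarse-to-fine sketch: blend the flow estimate toward a flow
    derived from approximate depth. The prior weight lam is largest
    at the coarsest level, so the uncertain prior mainly steers the
    coarse solution while image evidence dominates at fine levels.
    I0, I1: grayscale uint8 images; prior_flow: H x W x 2 float."""
    flow = None
    for level in range(levels - 1, -1, -1):
        scale = 0.5 ** level
        size = (int(I0.shape[1] * scale), int(I0.shape[0] * scale))
        i0 = cv2.resize(I0, size)
        i1 = cv2.resize(I1, size)
        p = cv2.resize(prior_flow, size) * scale  # displacements scale too
        if flow is None:
            flow = p.copy()                       # initialize from the prior
        else:
            flow = cv2.resize(flow, size) * 2.0   # upsample previous level
        flow = cv2.calcOpticalFlowFarneback(
            i0, i1, flow.astype(np.float32), pyr_scale=0.5, levels=1,
            winsize=15, iterations=3, poly_n=5, poly_sigma=1.1,
            flags=cv2.OPTFLOW_USE_INITIAL_FLOW)
        lam = lam0 * 2.0 ** level                 # strongest when coarse
        flow = (flow + lam * p) / (1.0 + lam)     # pull toward the prior
    return flow
```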


International Conference on Computer Graphics and Interactive Techniques | 2011

Integrating multiple depth sensors into the virtual video camera

Kai Ruhl; Kai Berger; Christian Lipski; Felix Klose; Yannic Schroeder; Alexander Scholz; Marcus A. Magnor

In this ongoing work, we present our efforts to incorporate depth sensors [Microsoft Corp 2010] into a multi-camera system for free-viewpoint video [Lipski et al. 2010]. Both the video cameras and the depth sensors are consumer-grade. Our free-viewpoint system, the Virtual Video Camera, uses image-based rendering to create novel views between widely spaced (up to 15 degrees) cameras, using dense image correspondences. Introducing multiple depth sensors into the system allows us to obtain approximate depth information for many pixels, thereby providing a valuable hint for estimating pixel correspondences between cameras.
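
The geometric hint itself is straightforward: a pixel with known depth in one calibrated camera projects to a unique position in another, which constrains the correspondence search. A minimal pinhole-model sketch (K1, K2, R, t are assumed calibration parameters, not values from the paper):

```python
import numpy as np

def depth_to_correspondence(u, v, depth, K1, K2, R, t):
    """Project pixel (u, v) with known depth from camera 1 into
    camera 2: this predicted position is the 'hint' that narrows
    down dense correspondence estimation."""
    ray = np.linalg.inv(K1) @ np.array([u, v, 1.0])
    X_cam1 = depth * ray                 # 3D point in camera 1 frame
    X_cam2 = R @ X_cam1 + t              # rigid transform into camera 2
    x = K2 @ X_cam2
    return x[0] / x[2], x[1] / x[2]      # pixel position in camera 2
```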


ACM Multimedia | 2015

Interactive Scene Flow Editing for Improved Image-based Rendering and Virtual Spacetime Navigation

Kai Ruhl; Martin Eisemann; Anna Hilsmann; Peter Eisert; Marcus A. Magnor

High-quality stereo and optical flow maps are essential for a multitude of tasks in visual media production, e.g. virtual camera navigation, disparity adaptation or scene editing. Rather than estimating stereo and optical flow separately, scene flow is a valid alternative, since it combines both spatial and temporal information and has recently surpassed the former two in terms of accuracy. However, since automated scene flow estimation is inaccurate in a number of situations, the resulting rendering artifacts have to be corrected manually in each output frame, an elaborate and time-consuming task. We propose a novel workflow to edit the scene flow itself, catching the problem at its source and yielding a more flexible instrument for further processing. By integrating user edits in the early stages of the optimization, we allow the use of approximate scribbles instead of accurate editing, thereby reducing interaction times. Our results show that editing the scene flow considerably improves the quality of visual results while requiring vastly less editing effort.
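
To illustrate why early integration tolerates approximate scribbles (a sketch under assumed names, not the paper's scene flow energy): a soft scribble constraint applied at a coarse level spreads across a region as the field is smoothed, before finer levels refine the result:

```python
import numpy as np

def constrained_smooth(flow, scribble, scribble_mask, w=5.0, iters=200):
    """Iteratively smooth a flow field (H x W x C) while softly
    pinning it to user scribbles: each sweep averages the four
    neighbors, then blends scribbled pixels toward the scribble
    value. At a coarse pyramid level, a rough scribble thus
    propagates over the whole region it touches."""
    f = flow.astype(np.float64).copy()
    for _ in range(iters):
        f = 0.25 * (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
                    np.roll(f, 1, 1) + np.roll(f, -1, 1))
        f[scribble_mask] = (f[scribble_mask] +
                            w * scribble[scribble_mask]) / (1.0 + w)
    return f
```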


ACM Multimedia | 2012

Improving dense image correspondence estimation with interactive user guidance

Kai Ruhl; Benjamin Hell; Felix Klose; Christian Lipski; Sören Petersen; Marcus A. Magnor

High-quality dense image correspondence estimation between two images is an essential prerequisite for view interpolation in visual media production. Due to the ill-posed nature of the problem, automated estimation approaches are prone to erroneous correspondences and subsequent quality degradation, e.g. in the presence of ambiguous movements that require human scene understanding to resolve. Where visually convincing results are essential, artifacts resulting from estimation errors must be repaired by hand with image editing tools. In this paper, we propose an alternative workflow: fixing the correspondences instead of fixing the interpolated images. We combine real-time interactive correspondence display, multi-level user guidance and algorithmic subpixel precision to counteract failure cases of automated estimation algorithms. Our results show that even a few interactions improve the visual quality considerably.
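
The payoff of fixing correspondences rather than rendered frames is that every interpolated in-between view is derived from the same map, so one correction propagates to all outputs. A minimal forward-warping sketch of view interpolation (nearest-pixel splatting for brevity; a production renderer would weight splats and fill holes):

```python
import numpy as np

def interpolate_view(I0, I1, flow, alpha):
    """Splat a blend of matched pixels to their position at
    interpolation parameter alpha in [0, 1]. flow maps I0 pixels
    to I1 (dx, dy); collisions and holes are ignored here."""
    h, w = flow.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    u, v = flow[..., 0], flow[..., 1]
    # destination of each I0 pixel at parameter alpha
    xa = np.clip(np.round(xs + alpha * u).astype(int), 0, w - 1)
    ya = np.clip(np.round(ys + alpha * v).astype(int), 0, h - 1)
    # matched pixel in I1
    xb = np.clip(np.round(xs + u).astype(int), 0, w - 1)
    yb = np.clip(np.round(ys + v).astype(int), 0, h - 1)
    out = np.zeros_like(I0, dtype=np.float64)
    out[ya, xa] = (1 - alpha) * I0[ys, xs] + alpha * I1[yb, xb]
    return out
```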


Image and Vision Computing | 2012

A loop-consistency measure for dense correspondences in multi-view video

Anita Sellent; Kai Ruhl; Marcus A. Magnor

Many applications in computer vision and computer graphics require dense correspondences between images of multi-view video streams. Most state-of-the-art algorithms estimate correspondences by considering pairs of images. However, in multi-view videos, several images capture nearly the same scene. In this article, we show that this redundancy can be exploited to estimate more robust and consistent correspondence fields. We use the multi-video data structure to establish a confidence measure based on the consistency of the correspondences in a loop of three images. This confidence measure can be applied after flow estimation terminates to find the pixels for which the estimate is reliable. However, including the measure directly in the estimation process yields dense and highly accurate correspondence fields. Additionally, the loop-consistency confidence measure allows us to include sparse feature matches directly in the dense optical flow estimation. With the confidence measure, spurious matches can be successfully suppressed during optical flow estimation, while correct matches help increase the accuracy of the flow.
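
The loop idea is easy to state: chain the correspondences A to B, B to C, and C back to A; for a reliable pixel, the composed displacement returns to its starting point. A minimal sketch, with a Gaussian mapping from loop-closure error to confidence as one plausible choice (the article defines its own measure):

```python
import numpy as np
from scipy.ndimage import map_coordinates

def sample_flow(flow, x, y):
    """Bilinearly sample a flow field (H x W x 2) at float positions."""
    u = map_coordinates(flow[..., 0], [y, x], order=1, mode='nearest')
    v = map_coordinates(flow[..., 1], [y, x], order=1, mode='nearest')
    return u, v

def loop_consistency(flow_ab, flow_bc, flow_ca, sigma=1.0):
    """Chain correspondences A->B->C->A; reliable pixels return to
    their start. Returns a per-pixel confidence map in (0, 1]."""
    h, w = flow_ab.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float64)
    xb = xs + flow_ab[..., 0]; yb = ys + flow_ab[..., 1]   # into B
    u_bc, v_bc = sample_flow(flow_bc, xb, yb)
    xc = xb + u_bc; yc = yb + v_bc                         # into C
    u_ca, v_ca = sample_flow(flow_ca, xc, yc)
    xa = xc + u_ca; ya = yc + v_ca                         # back to A
    err2 = (xa - xs) ** 2 + (ya - ys) ** 2                 # loop closure error
    return np.exp(-err2 / (2.0 * sigma ** 2))
```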

Collaboration

Dive into Kai Ruhl's collaborations.

Top Co-Authors

Marcus A. Magnor, Braunschweig University of Technology
Christian Lipski, Braunschweig University of Technology
Felix Klose, Braunschweig University of Technology
Alexander Scholz, Braunschweig University of Technology
Kai Berger, Braunschweig University of Technology
Martin Eisemann, Braunschweig University of Technology
Stefan Guthe, University of Tübingen
Yannic Schröder, Braunschweig University of Technology
Anita Sellent, Braunschweig University of Technology