
Publications


Featured research published by Felix Klose.


IEEE Transactions on Circuits and Systems for Video Technology | 2014

Correspondence and Depth-Image-Based Rendering: A Hybrid Approach for Free-Viewpoint Video

Christian Lipski; Felix Klose; Marcus A. Magnor

We present a novel approach to free-viewpoint video. Our main contribution is the formulation of a hybrid approach between image morphing and depth-image-based rendering. When rendering the scene from novel viewpoints, we use both dense pixel correspondences between image pairs and an underlying, view-dependent geometric model. Our novel reconstruction scheme iteratively refines geometric and correspondence information. By combining the strengths of both depth and correspondence estimation, our approach enables free-viewpoint video even for challenging scenes and for recordings that may violate typical constraints of multiview reconstruction. For example, our method is robust against inaccurate camera calibration, asynchronous capture, and imprecise depth reconstruction. Rendering results for different scenes and applications demonstrate the versatility and robustness of our approach.
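The hybrid idea the abstract describes can be pictured as producing two candidate renderings, one from dense correspondences and one from depth-based warping, and fusing them per pixel. The sketch below is a minimal illustration under invented names (`morph_interpolate`, `hybrid_render`, the confidence-weighted blend); it is not the authors' implementation.

```python
import numpy as np

def morph_interpolate(img_a, img_b, flow_ab, alpha):
    """Correspondence-based candidate: sample both images part-way
    along the dense flow field and cross-dissolve (nearest-neighbor
    sampling for brevity; alpha in [0, 1] selects the viewpoint)."""
    h, w = img_a.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    xa = np.clip(xs + alpha * flow_ab[..., 0], 0, w - 1).astype(int)
    ya = np.clip(ys + alpha * flow_ab[..., 1], 0, h - 1).astype(int)
    xb = np.clip(xs - (1 - alpha) * flow_ab[..., 0], 0, w - 1).astype(int)
    yb = np.clip(ys - (1 - alpha) * flow_ab[..., 1], 0, h - 1).astype(int)
    return (1 - alpha) * img_a[ya, xa] + alpha * img_b[yb, xb]

def hybrid_render(morph_img, dibr_img, depth_confidence):
    """Fuse the two candidates: trust the depth-image-based rendering
    where the geometry is reliable, fall back to the morph elsewhere."""
    c = np.clip(depth_confidence, 0.0, 1.0)
    return c * dibr_img + (1.0 - c) * morph_img
```

With zero flow and alpha = 0.5 the morph reduces to a plain cross-dissolve, which makes the blend easy to sanity-check.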


ACM Transactions on Graphics | 2014

Garment Replacement in Monocular Video Sequences

Lorenz Rogge; Felix Klose; Michael Stengel; Martin Eisemann; Marcus A. Magnor

We present a semi-automatic approach to exchanging the clothes of an actor for arbitrary virtual garments in conventional monocular video footage as a postprocess. We reconstruct the actor's body shape and motion from the input video using a parameterized body model. The reconstructed dynamic 3D geometry of the actor serves as an animated mannequin for simulating the virtual garment. It also aids in estimating the scene illumination, which is necessary to light the virtual garment realistically. An image-based warping technique ensures realistic compositing of the rendered virtual garment and the original video. We present results for eight real-world video sequences featuring complex test cases to evaluate performance for different types of motion, camera settings, and illumination conditions.


International Conference on Computer Vision | 2011

On performance analysis of optical flow algorithms

Daniel Kondermann; Steffen Abraham; Gabriel J. Brostow; Wolfgang Förstner; Stefan K. Gehrig; Atsushi Imiya; Bernd Jähne; Felix Klose; Marcus A. Magnor; Helmut Mayer; Rudolf Mester; Tomas Pajdla; Ralf Reulke; Henning Zimmer

Literally thousands of articles on optical flow algorithms have been published in the past thirty years. Only a small subset of the suggested algorithms have been analyzed with respect to their performance. These evaluations were based on black-box tests, mainly yielding information on the average accuracy on test-sequences with ground truth. No theoretically sound justification exists on why this approach meaningfully and/or exhaustively describes the properties of optical flow algorithms. In practice, design choices are often made based on unmotivated criteria or by trial and error. This article is a position paper questioning current methods in performance analysis. Without empirical results, we discuss more rigorous and theoretically sound approaches which could enable scientists and engineers alike to make sufficiently motivated design choices for a given motion estimation task.


Conference on Visual Media Production | 2011

Flowlab - An Interactive Tool for Editing Dense Image Correspondences

Felix Klose; Kai Ruhl; Christian Lipski; Marcus A. Magnor

Finding dense correspondences between two images is a well-researched but still unsolved problem. For various tasks in computer graphics, e.g. image interpolation, obtaining plausible correspondences is a vital component. We present an interactive tool that allows the user to modify and correct dense correspondence maps between two given images. Incorporating state-of-the-art algorithms in image segmentation, correspondence estimation, and optical flow, our tool assists the user in selecting and correcting mismatched correspondences.


Vision, Modeling, and Visualization | 2010

Reconstructing Shape and Motion from Asynchronous Cameras

Felix Klose; Christian Lipski; Marcus A. Magnor

We present an algorithm for scene flow reconstruction from multi-view data. The main contribution is its ability to cope with asynchronously captured videos. Our holistic approach simultaneously estimates depth, orientation, and 3D motion; as a result, we obtain a quasi-dense surface patch representation of the dynamic scene. The reconstruction starts with the generation of a sparse set of patches from the input views, which are then iteratively expanded along the object surfaces. We show that the approach performs well for scenes ranging from single objects to cluttered real-world scenarios.
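The iterative expansion step described above can be pictured as greedy region growing from sparse seed patches. The sketch below is schematic only: the `neighbors` and `photoconsistent` callbacks are hypothetical stand-ins for the paper's actual neighborhood structure and patch acceptance test.

```python
from collections import deque

def expand_patches(seeds, neighbors, photoconsistent):
    """Greedy expansion: starting from sparse, reliably matched seed
    patches, repeatedly try to instantiate new patches at neighboring
    cells that pass a photo-consistency check."""
    accepted = set(seeds)
    frontier = deque(seeds)
    while frontier:
        patch = frontier.popleft()
        for candidate in neighbors(patch):
            if candidate not in accepted and photoconsistent(candidate):
                accepted.add(candidate)
                frontier.append(candidate)
    return accepted
```

On a 1-D toy grid where every cell inside the range passes the check, a single seed grows to cover the whole range, which mirrors how surface patches flood outward along an object.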


Conference on Visual Media Production | 2011

Making of Who Cares? HD Stereoscopic Free Viewpoint Video

Christian Lipski; Felix Klose; Kai Ruhl; Marcus A. Magnor

We present a detailed blueprint of our stereoscopic free-viewpoint video system. Using unsynchronized footage as input, we can render virtual camera paths in the post-production stage. The movement of the virtual camera also extends to the temporal domain, so that slow-motion and freeze-and-rotate shots are possible. As a proof of concept, a full-length stereoscopic HD music video has been produced using our approach.


Proceedings of the 2010 International Conference on Video Processing and Computational Video | 2010

Towards plenoptic Raumzeit reconstruction

Martin Eisemann; Felix Klose; Marcus A. Magnor

The goal of image-based rendering is to evoke a visceral sense of presence in a scene using only photographs or videos. A huge variety of approaches has been developed during the last decade. Examining the underlying models, we find three main categories: view interpolation based on geometry proxies, pure image interpolation techniques, and complete scene flow reconstruction. In this paper, we present three approaches to free-viewpoint video, one for each of these categories, and discuss their individual benefits and drawbacks. We hope that studying the different approaches will help others make important design decisions when planning a free-viewpoint video system.


Conference on Visual Media Production | 2012

Integrating approximate depth data into dense image correspondence estimation

Kai Ruhl; Felix Klose; Christian Lipski; Marcus A. Magnor

High-quality dense image correspondence estimation between two images is an essential prerequisite for many tasks in visual media production, one prominent example being view interpolation. Due to the ill-posed nature of the correspondence estimation problem, errors occur frequently for a number of problematic conditions, among them occlusions, large displacements and low-textured regions. In this paper, we propose to use approximate depth data from low-resolution depth sensors or coarse geometric proxies to guide the high-resolution image correspondence estimation. We counteract the effect of uncertainty in the prior by exploiting the coarse-to-fine image pyramid used in our estimation algorithm. Our results show that even with only approximate priors, visual quality improves considerably compared to an unguided algorithm or a pure depth-based interpolation.
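The depth-guided idea above can be sketched as follows: convert approximate depth into an expected disparity (assuming rectified views), then blend that prior into the running flow estimate with a weight that decays toward the finer pyramid levels, so the coarse prior's uncertainty fades out. Function names, the disparity model, and the weight schedule are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

def depth_to_disparity(depth, focal, baseline):
    """Rectified-stereo assumption: disparity d = f * B / Z."""
    return focal * baseline / np.maximum(depth, 1e-6)

def fuse_with_prior(flow_estimate, prior, weight):
    """Convex blend of the current correspondence estimate with the
    depth-derived prior (weight in [0, 1])."""
    return (1.0 - weight) * flow_estimate + weight * prior

def prior_weight(level, num_levels, w_coarse=0.8):
    """Schedule: strongest influence at the coarsest pyramid level
    (level = num_levels - 1), none at the finest (level = 0)."""
    return w_coarse * level / max(num_levels - 1, 1)
```

At the finest level the prior's weight reaches zero, so the final estimate is driven entirely by the image data, with the approximate depth having only steered the coarse initialization.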


International Conference on Computer Graphics and Interactive Techniques | 2011

Integrating multiple depth sensors into the virtual video camera

Kai Ruhl; Kai Berger; Christian Lipski; Felix Klose; Yannic Schroeder; Alexander Scholz; Marcus A. Magnor

In this ongoing work, we present our efforts to incorporate depth sensors [Microsoft Corp 2010] into a multi-camera system for free-viewpoint video [Lipski et al. 2010]. Both the video cameras and the depth sensors are consumer grade. Our free-viewpoint system, the Virtual Video Camera, uses image-based rendering to create novel views between widely spaced (up to 15 degrees) cameras, using dense image correspondences. Introducing multiple depth sensors into the system allows us to obtain approximate depth information for many pixels, providing a valuable hint for estimating pixel correspondences between cameras.


ACM Multimedia | 2015

Web-based Interactive Free-Viewpoint Streaming: A Framework for High-Quality Interactive Free-Viewpoint Navigation

Matthias Ueberheide; Felix Klose; Tilak Varisetty; Markus Fidler; Marcus A. Magnor

Recent advances in free-viewpoint rendering techniques, together with continued improvements in Internet infrastructure, open the door for challenging new applications. In this paper, we present a framework for interactive free-viewpoint streaming built on open standards and software. Network bandwidth, encoding strategy, and codec support in open-source browsers are the key constraints for our interactive streaming application. Our framework is capable of real-time server-side rendering and of interactively streaming the output with open-source tools. To enable viewer interaction with the free-viewpoint rendering back-end in a standard browser, user events are captured with JavaScript and transmitted over WebSockets. The rendered video is streamed to the browser using the FFmpeg free software project. This paper discusses the applicability of open-source streaming and presents timing measurements for video-frame transmission over the network.
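The interaction loop the abstract describes (browser captures user events, server updates the virtual camera driving the renderer) can be sketched as a tiny JSON message protocol of the kind the client-side JavaScript might send over a WebSocket. The field names and camera representation below are invented for illustration; they are not the paper's actual protocol.

```python
import json

def encode_view_event(yaw, pitch, zoom, timestamp_ms):
    """Client side: pack a viewer interaction into a JSON message
    suitable for transmission over a WebSocket."""
    return json.dumps({"type": "view", "yaw": yaw, "pitch": pitch,
                       "zoom": zoom, "t": timestamp_ms})

def apply_event(camera, message):
    """Server side: decode an incoming event and update the virtual
    camera state that drives the free-viewpoint renderer."""
    event = json.loads(message)
    if event["type"] == "view":
        camera["yaw"] = event["yaw"]
        camera["pitch"] = event["pitch"]
        camera["zoom"] = event["zoom"]
    return camera
```

Keeping the messages as small self-describing JSON objects makes the protocol easy to extend (e.g. with playback controls) without changing the transport layer.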

Collaboration


Dive into Felix Klose's collaborations.

Top Co-Authors

Marcus A. Magnor, Braunschweig University of Technology
Christian Lipski, Braunschweig University of Technology
Kai Ruhl, Braunschweig University of Technology
Kai Berger, Braunschweig University of Technology
Christian Linz, Braunschweig University of Technology
Johannes Morgenroth, Braunschweig University of Technology
Kai Homeier, Braunschweig University of Technology
Lars C. Wolf, Braunschweig University of Technology