Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Kai Berger is active.

Publication


Featured research published by Kai Berger.


Vision, Modeling, and Visualization | 2011

Markerless Motion Capture using multiple Color-Depth Sensors

Kai Berger; Kai Ruhl; Yannic Schroeder; Christian Bruemmer; Alexander Scholz; Marcus A. Magnor

With the advent of the Microsoft Kinect, renewed focus has been put on monocular depth-based motion capturing. However, this approach is limited in that an actor has to move facing the camera. Because of the sensor's active-light nature, no more than one device has been used for motion capturing so far; in effect, any pose estimation must fail for poses occluded from the depth camera. Our work investigates reducing or mitigating the detrimental effects of multiple active light emitters, thereby allowing motion capture from all angles. We systematically evaluate the concurrent use of one to four Kinects, including calibration, error measures and analysis, and present a time-multiplexing approach.
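
The time-multiplexing idea can be sketched in a few lines: each sensor gets an exclusive emitter slot, so its structured-light pattern is not corrupted by the other projectors. The `KinectSensor` wrapper below is a hypothetical stand-in for a real driver, not the authors' implementation.

```python
# Minimal sketch of round-robin time-multiplexing for several active-light
# depth sensors. KinectSensor is a hypothetical placeholder for an actual
# driver interface, not the authors' code.
import time

class KinectSensor:
    """Hypothetical wrapper around one depth sensor."""
    def __init__(self, device_id):
        self.device_id = device_id
    def set_emitter(self, on):
        pass  # enable/disable the IR projector (driver-specific)
    def grab_depth(self):
        return None  # return the latest depth frame

def capture_round_robin(sensors, slot_s=1.0 / 30.0, n_rounds=100):
    """Give each sensor an exclusive time slot so the projected patterns
    of the other devices do not interfere with its measurement."""
    frames = []
    for _ in range(n_rounds):
        round_frames = []
        for s in sensors:
            s.set_emitter(True)   # only this projector is active now
            time.sleep(slot_s)    # let the sensor integrate undisturbed
            round_frames.append(s.grab_depth())
            s.set_emitter(False)
        frames.append(round_frames)
    return frames
```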


Southwest Symposium on Image Analysis and Interpretation | 2008

A Fast and Robust Approach to Lane Marking Detection and Lane Tracking

Christian Lipski; Björn Scholz; Kai Berger; Christian Linz; Timo Stich; Marcus A. Magnor

We present a lane detection algorithm that robustly detects and tracks various lane markings in real time. The first part is a feature detection algorithm that transforms several input images into a top-view perspective and analyzes local histograms; for this part we make use of state-of-the-art graphics hardware. The second part fits a very simple and flexible lane model to these lane-marking features. The algorithm was thoroughly tested on an autonomous vehicle that was one of the finalists in the 2007 DARPA Urban Challenge. In combination with other sensors, i.e., lidar-, radar-, and vision-based obstacle detection and surface classification, the autonomous vehicle is able to drive in an urban scenario at up to 15 mph.
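
A minimal sketch of the two-stage idea, assuming OpenCV and made-up road-plane correspondences: warp the camera image to a top view (inverse perspective mapping), then search per-band column histograms for marking candidates. This is an illustration of the approach, not the vehicle's actual code.

```python
# Illustrative two-stage lane-marking detector: top-view warp, then local
# column histograms. The four source points are calibration-dependent and
# made up here for illustration.
import cv2
import numpy as np

def top_view(img, src_pts, dst_size=(400, 600)):
    """Warp the road plane into a bird's-eye view (dst_size = width, height)."""
    dst_pts = np.float32([[0, dst_size[1]], [dst_size[0], dst_size[1]],
                          [dst_size[0], 0], [0, 0]])
    M = cv2.getPerspectiveTransform(np.float32(src_pts), dst_pts)
    return cv2.warpPerspective(img, M, dst_size)

def lane_candidates(bird_eye, n_bands=6):
    """Analyze local histograms band by band; bright columns are candidates."""
    gray = cv2.cvtColor(bird_eye, cv2.COLOR_BGR2GRAY)
    h = gray.shape[0] // n_bands
    peaks = []
    for i in range(n_bands):
        band = gray[i * h:(i + 1) * h, :]
        hist = band.sum(axis=0).astype(np.float64)  # per-column brightness
        peaks.append(int(np.argmax(hist)))          # marking candidate column
    return peaks
```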


Computer Graphics Forum | 2010

Virtual Video Camera: Image-Based Viewpoint Navigation Through Space and Time

Christian Lipski; Christian Linz; Kai Berger; Anita Sellent; Marcus A. Magnor

We present an image-based rendering system to viewpoint-navigate through space and time of complex real-world, dynamic scenes. Our approach accepts unsynchronized, uncalibrated multi-video footage as input. Inexpensive, consumer-grade camcorders suffice to acquire arbitrary scenes, for example in the outdoors, without elaborate recording setup procedures, also allowing for hand-held recordings. Instead of scene depth estimation, layer segmentation or 3D reconstruction, our approach is based on dense image correspondences, treating view interpolation uniformly in space and time: spatial viewpoint navigation, slow motion or freeze-and-rotate effects can all be created in the same way. Acquisition simplification, integration of moving cameras, generalization to difficult scenes and space-time symmetric interpolation amount to a widely applicable virtual video camera system.
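
The correspondence-based interpolation step can be illustrated as follows, assuming a dense displacement field `flow` from the first to the second image is already given: both images are warped toward the intermediate position and cross-blended. This is a simplified sketch, not the paper's renderer.

```python
# Sketch of correspondence-based view interpolation between two frames.
# flow: dense (H, W, 2) displacement field from img0 to img1 (assumed given).
import cv2
import numpy as np

def interpolate_view(img0, img1, flow, alpha):
    """Blend a virtual view at fraction alpha between img0 and img1."""
    flow = flow.astype(np.float32)
    h, w = flow.shape[:2]
    xs, ys = np.meshgrid(np.arange(w, dtype=np.float32),
                         np.arange(h, dtype=np.float32))
    # warp each image partway along the correspondence field
    # (backward sampling used as an approximation of forward warping)
    w0 = cv2.remap(img0, xs + alpha * flow[..., 0],
                   ys + alpha * flow[..., 1], cv2.INTER_LINEAR)
    w1 = cv2.remap(img1, xs - (1.0 - alpha) * flow[..., 0],
                   ys - (1.0 - alpha) * flow[..., 1], cv2.INTER_LINEAR)
    return cv2.addWeighted(w0, 1.0 - alpha, w1, alpha, 0.0)
```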


International Conference on Computer Vision | 2011

The capturing of turbulent gas flows using multiple Kinects

Kai Berger; Kai Ruhl; Mark Albers; Yannic Schröder; Alexander Scholz; Jan Kokemüller; Stefan Guthe; Marcus A. Magnor

We introduce the Kinect as a tool for capturing gas flows around occluders using objects of different aerodynamic properties. Previous approaches have been invasive or require elaborate setups, including large printed sheets of complex noise patterns and carefully controlled lighting. Our method is easier to set up while still producing good results. We show that three Kinects are sufficient to qualitatively reconstruct non-stationary, time-varying gas flows in the presence of occluders.
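
One plausible building block can be sketched under the assumption that refracted IR light perturbs the measured depth: pixels deviating from a gas-free background frame are flagged per view. This is only an illustration, not the authors' reconstruction pipeline.

```python
# Hedged sketch: flag per-view pixels whose depth deviates from a gas-free
# background frame; the threshold value is illustrative.
import numpy as np

def gas_mask(depth_mm, background_mm, thresh_mm=15.0):
    """Mark pixels where the measured depth differs from the background."""
    valid = (depth_mm > 0) & (background_mm > 0)  # 0 encodes missing data
    return valid & (np.abs(depth_mm - background_mm) > thresh_mm)

# Masks from the three calibrated Kinects would then be intersected in a
# common voxel grid, visual-hull style, to localize the flow in 3D.
```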


International Conference on Computer Graphics and Interactive Techniques | 2009

Virtual video camera: image-based viewpoint navigation through space and time

Christian Lipski; Christian Linz; Kai Berger; Marcus A. Magnor

We present an image-based rendering system to viewpoint-navigate through space and time of complex real-world, dynamic scenes. Our approach accepts unsynchronized, uncalibrated multi-video footage as input. Inexpensive, consumer-grade camcorders suffice to acquire arbitrary scenes, e.g., in the outdoors, without elaborate recording setup procedures. Instead of scene depth estimation, layer segmentation, or 3D reconstruction, our approach is based on dense image correspondences, treating view interpolation uniformly in space and time: spatial viewpoint navigation, slow motion, and freeze-and-rotate effects can all be created in the same fashion. Acquisition simplification, generalization to difficult scenes, and space-time symmetric interpolation amount to a widely applicable Virtual Video Camera system.


IEEE Computer Graphics and Applications | 2012

Modeling and Verifying the Polarizing Reflectance of Real-World Metallic Surfaces

Kai Berger; Andrea Weidlich; Alexander Wilkie; Marcus A. Magnor

Using measurements of real-world samples of metals, the proposed approach verifies predictions of bidirectional reflectance distribution function (BRDF) models. It employs ellipsometry to verify both the actual polarizing effect and the overall reflectance behavior of the metallic surfaces.
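
For reference, the polarized reflectance that such BRDF models predict follows directly from the Fresnel equations for a complex refractive index n + ik. The snippet below computes R_s and R_p; the gold value is approximate and only for illustration.

```python
# Worked example: polarized Fresnel reflectance of a metal from its complex
# refractive index n + ik (gold value at ~600 nm is approximate).
import numpy as np

def fresnel_rs_rp(n2, theta_i, n1=1.0):
    """Complex amplitude reflectances for s- and p-polarization."""
    ci = np.cos(theta_i)
    st = n1 * np.sin(theta_i) / n2   # Snell's law with a complex index
    ct = np.sqrt(1.0 - st ** 2)
    rs = (n1 * ci - n2 * ct) / (n1 * ci + n2 * ct)
    rp = (n2 * ci - n1 * ct) / (n2 * ci + n1 * ct)
    return rs, rp

n_gold = 0.27 + 2.95j                       # illustrative value only
rs, rp = fresnel_rs_rp(n_gold, np.deg2rad(60.0))
print(abs(rs) ** 2, abs(rp) ** 2)           # reflectances R_s and R_p
```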


International Conference on Computer Graphics and Interactive Techniques | 2011

Integrating multiple depth sensors into the virtual video camera

Kai Ruhl; Kai Berger; Christian Lipski; Felix Klose; Yannic Schroeder; Alexander Scholz; Marcus A. Magnor

In this ongoing work, we present our efforts to incorporate depth sensors [Microsoft Corp 2010] into a multi-camera system for free-viewpoint video [Lipski et al. 2010]. Both the video cameras and the depth sensors are consumer grade. Our free-viewpoint system, the Virtual Video Camera, uses image-based rendering to create novel views between widely spaced (up to 15 degrees) cameras, using dense image correspondences. The introduction of multiple depth sensors into the system allows us to obtain approximate depth information for many pixels, thereby providing a valuable hint for estimating pixel correspondences between cameras.
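
The depth hint can be sketched as a simple reprojection, assuming calibrated intrinsics and extrinsics: back-project a pixel with its measured depth, transform the 3D point into the second camera, and re-project. This is a minimal sketch of the geometric idea, not the system's code.

```python
# Seed a pixel correspondence between two calibrated cameras from one depth
# measurement. K0, K1: 3x3 intrinsics; R, t: camera-0 to camera-1 transform.
import numpy as np

def correspondence_hint(u, v, depth, K0, K1, R, t):
    p0 = depth * np.linalg.inv(K0) @ np.array([u, v, 1.0])  # 3D point, cam 0
    p1 = K1 @ (R @ p0 + t)                                  # project to cam 1
    return p1[:2] / p1[2]                                   # pixel hint (u', v')
```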


International Symposium on Consumer Electronics | 2010

A ghosting artifact detector for interpolated image quality assessment

Kai Berger; Christian Lipski; Christian Linz; Anita Sellent; Marcus A. Magnor

We present a no-reference image quality metric for image interpolation. The approach is capable of detecting blurry regions as well as ghosting artifacts, e.g., in image-based rendering scenarios. Based on the assumption that ghosting artifacts can be detected locally, perceived visual quality can be predicted from the amount of regions that are affected by ghosting. Because the approach does not require any reference image, it is well suited, e.g., to assessing the quality of image-based rendering techniques in general settings.
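
Under the paper's locality assumption, a ghost detector can be illustrated with a simple heuristic that is not the paper's metric: a ghosted block resembles a slightly shifted copy of itself, so its gradient autocorrelation peaks at a nonzero offset.

```python
# Illustrative heuristic only: score a grayscale block for doubled (ghosted)
# structure via horizontal gradient autocorrelation at small shifts.
import numpy as np

def ghost_score(block, max_shift=6):
    g = np.gradient(block.astype(np.float64), axis=1)  # horizontal edges
    g = (g - g.mean()) / (g.std() + 1e-8)              # normalize
    scores = []
    for d in range(2, max_shift + 1):
        a, b = g[:, :-d], g[:, d:]
        scores.append(float((a * b).mean()))           # correlation at shift d
    return max(scores)  # high value suggests a ghosting artifact
```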


Book chapter | 2009

Tomographic Reconstruction and Efficient Rendering of Refractive Gas Flows

Ivo Ihrke; Kai Berger; Bradley Atcheson; Marcus A. Magnor; Wolfgang Heidrich

This chapter introduces techniques for the capture and efficient display of dynamic three-dimensional non-stationary gas flows. We describe a flexible Schlieren-tomographic system consisting of multiple consumer camcorders. A special choice of background pattern for Background Oriented Schlieren (BOS) imaging provides for flexibility in the experimental setup. Optical flow techniques are used to measure image space deflections due to heated air flows from arbitrary camera positions. A specially tailored sparse-view algebraic reconstruction algorithm is employed to tomographically recover a refractive index gradient field. After robust integration of these gradient fields, time-varying, fully three-dimensional refractive index fields are obtained. These can be rendered efficiently using a ray-casting style algorithm that is suitable for graphics hardware acceleration. Additional optical properties can be rendered within the same computational framework.
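
The sparse-view algebraic reconstruction can be illustrated with the classic Kaczmarz/ART update for a linear system A x = b; the chapter's tailored solver differs in detail, so this is only a minimal sketch.

```python
# Minimal Kaczmarz/ART iteration for A x = b: A holds per-ray weights,
# b the measured BOS deflections, x the refractive-index-gradient voxels.
import numpy as np

def art(A, b, n_sweeps=20, relax=0.5):
    x = np.zeros(A.shape[1])
    for _ in range(n_sweeps):
        for i in range(A.shape[0]):        # one update per measurement ray
            ai = A[i]
            denom = ai @ ai
            if denom > 0:
                x += relax * (b[i] - ai @ x) / denom * ai
    return x
```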


Vision, Modeling, and Visualization | 2011

Measuring BRDFs of immersed materials

Kai Berger; Ilya Reshetouski; Marcus Magnor; Ivo Ihrke

We investigate the effect of immersing real-world materials into media of different refractive indices. We show that only some materials follow Fresnel-governed behaviour; in reality, many materials exhibit unexpected effects such as stronger localized highlights or a significant increase in glossy reflection due to microgeometry. In this paper, we propose a new measurement technique that allows for measuring the BRDFs of materials that are immersed in different media.
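
The Fresnel-predicted baseline against which the measurements deviate can be worked through in a couple of lines: normal-incidence reflectance F0 depends on the relative index between the surface and the surrounding medium. The values below are illustrative.

```python
# Worked example: normal-incidence Fresnel reflectance of a dielectric
# surface in air versus immersed in water (illustrative values).
def f0(n_surface, n_medium):
    r = (n_surface - n_medium) / (n_surface + n_medium)
    return r * r

n_s = 1.5                      # e.g. a glass-like surface
print(f0(n_s, 1.000))          # in air   -> 0.04
print(f0(n_s, 1.333))          # in water -> about 0.0035
```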

Collaboration


Dive into Kai Berger's collaborations.

Top Co-Authors

Marcus A. Magnor, Braunschweig University of Technology
Christian Lipski, Braunschweig University of Technology
Alexander Scholz, Braunschweig University of Technology
Kai Ruhl, Braunschweig University of Technology
Felix Klose, Braunschweig University of Technology
Stefan Guthe, Braunschweig University of Technology