Christian Riechert
Heinrich Hertz Institute
Publications
Featured research published by Christian Riechert.
Journal of Visual Communication and Image Representation | 2014
Frederik Zilly; Christian Riechert; Marcus Müller; Peter Eisert; Thomas Sikora; Peter Kauff
Content production for stereoscopic 3D-TV displays has matured in recent years, while huge progress has also been made in improving the image quality of glasses-free auto-stereoscopic displays and light-field displays. For the latter two display families, the content production workflow is less elaborated and more complex, as the number of required views not only differs considerably but is also likely to increase in the near future. As a co-existence of all 3D display families can be expected for the coming years, the aim is to establish an efficient content production workflow which yields high-quality content for all 3D-TV displays. Against this background, we present a real-time capable multi-view video plus depth (MVD) content production workflow based on a four-camera rig with mixed narrow and wide baselines. Results show the suitability of the approach for simultaneously producing high-quality MVD4 and native stereoscopic 3D content.
IEEE Transactions on Consumer Electronics | 2014
Martin Werner; Benno Stabernack; Christian Riechert
Disparity estimation is a common task in stereo vision and usually requires high computational effort. High-resolution disparity maps are necessary to provide good image quality on autostereoscopic displays, which deliver stereo content without the need for 3D glasses. In this paper, an FPGA architecture for a disparity estimation algorithm is proposed that is capable of processing high-definition content in real time. The resulting architecture is efficient in terms of power consumption and can easily be scaled to support higher resolutions.
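To give a feel for the task the FPGA accelerates, here is a minimal winner-takes-all block-matching disparity sketch in pure Python, operating on single 1-D image rows for clarity. The function name, window size, and disparity range are illustrative, not taken from the paper, and the paper's actual algorithm and hardware mapping differ.

```python
def disparity_row(left, right, max_disp=4, win=1):
    """For each pixel in `left`, find the horizontal shift into `right`
    that minimises the sum of absolute differences over a small window."""
    width = len(left)
    disps = []
    for x in range(width):
        best_d, best_cost = 0, float("inf")
        for d in range(min(max_disp, x) + 1):   # candidate shifts
            cost = 0
            for o in range(-win, win + 1):      # aggregate over a window
                xl = min(max(x + o, 0), width - 1)       # clamp to borders
                xr = min(max(x + o - d, 0), width - 1)
                cost += abs(left[xl] - right[xr])
            if cost < best_cost:                # winner takes all
                best_d, best_cost = d, cost
        disps.append(best_d)
    return disps

# A textured signal shifted by 2 pixels: interior pixels recover d = 2.
left  = [1, 2, 3, 4, 5, 6, 7, 8]
right = [3, 4, 5, 6, 7, 8, 9, 10]
print(disparity_row(left, right))
```

The triple loop makes the regular, data-parallel structure visible that an FPGA pipeline exploits; a real implementation streams windows through parallel SAD units instead of iterating.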
conference on visual media production | 2011
Frederik Zilly; Christian Riechert; Peter Eisert; Peter Kauff
This paper presents a new approach to feature description used in image processing and robust image recognition algorithms such as 3D camera tracking, view reconstruction, and 3D scene analysis. State-of-the-art feature detectors distinguish between interest point detection and description. The former is commonly performed in scale space, while the latter describes a normalized support region using histograms of gradients or similar derivatives of the grayscale image patch. This approach has proven very successful. However, the descriptors are usually of high dimensionality in order to achieve high distinctiveness. Against this background, we propose a binarized descriptor with low memory usage and good matching performance. The descriptor is composed of binarized responses resulting from a set of folding operations applied to the normalized support region. We demonstrate the real-time capabilities of the feature descriptor in a stereo matching environment.
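The core idea of a binarized descriptor can be sketched as follows: apply a small filter bank to a normalized patch, keep only the sign of each response as one bit, and compare descriptors with the Hamming distance. The toy 2x2 kernels and function names below are illustrative assumptions, not the paper's actual filter set.

```python
def filter_response(patch, kernel):
    """Inner product of a patch with a same-sized kernel."""
    return sum(p * k for row_p, row_k in zip(patch, kernel)
                     for p, k in zip(row_p, row_k))

# Toy 2x2 "folding" kernels: horizontal, vertical, diagonal structure.
KERNELS = [
    [[1, -1], [1, -1]],    # horizontal gradient
    [[1, 1], [-1, -1]],    # vertical gradient
    [[1, -1], [-1, 1]],    # diagonal / checkerboard
]

def describe(patch):
    """One bit per kernel: 1 if the response is positive, else 0."""
    return tuple(int(filter_response(patch, k) > 0) for k in KERNELS)

def hamming(a, b):
    """Number of differing bits; a small distance means a likely match."""
    return sum(x != y for x, y in zip(a, b))
```

With, say, 128 such bits a descriptor fits in 16 bytes and matching reduces to an XOR plus popcount, which is what gives binary descriptors their low memory footprint and fast matching.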
3dtv-conference: the true vision - capture, transmission and display of 3d video | 2011
Marcus Mueller; Frederik Zilly; Christian Riechert; Peter Kauff
The demand for high-quality depth maps from stereo and multi-camera videos is constantly increasing. The main application for these depth maps is rendering new perspectives of the captured scene by means of Depth Image Based Rendering (DIBR), and accurate depth maps are the linchpin of DIBR. On the basis of a four-camera set-up, we show that combining hybrid recursive matching with motion estimation, cross-bilateral post-processing, and mutual depth map fusion produces spatio-temporally consistent depth maps suitable for artifact-free view synthesis.
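The cross-bilateral post-processing step mentioned above can be illustrated in one dimension: each depth sample is smoothed by its neighbours, but neighbours are weighted by colour similarity in the guide image, so depth edges that coincide with colour edges survive the filtering. The radii and sigma values here are illustrative defaults, not the paper's parameters.

```python
import math

def cross_bilateral_1d(depth, guide, radius=2, sigma_s=1.0, sigma_c=10.0):
    """Smooth `depth` with weights from spatial distance AND from
    intensity similarity in the colour `guide` signal."""
    out = []
    n = len(depth)
    for i in range(n):
        num = den = 0.0
        for j in range(max(0, i - radius), min(n, i + radius + 1)):
            w = (math.exp(-((i - j) ** 2) / (2 * sigma_s ** 2))          # spatial
                 * math.exp(-((guide[i] - guide[j]) ** 2) / (2 * sigma_c ** 2)))  # colour
            num += w * depth[j]
            den += w
        out.append(num / den)
    return out
```

Because the weights come from the guide image rather than the depth map itself, noisy depth samples inside a uniformly coloured region are pulled towards their neighbours, while a depth discontinuity aligned with a colour edge is left essentially untouched.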
3dtv-conference: the true vision - capture, transmission and display of 3d video | 2012
Frederik Zilly; Christian Riechert; Marcus Müller; Peter Kauff
Content production for stereoscopic 3D-TV displays has matured in recent years. The content is usually shot with two cameras, as the glasses-based target devices require two views as input. Besides stereoscopic 3D-TVs, huge progress has also been made in improving the image quality of glasses-free auto-stereoscopic displays and light-field displays. For the latter two display families, the content production workflow is less elaborated and more complex, as the number of required views differs considerably and is likely to increase in the near future. As a co-existence of all 3D display families can be expected for the coming years, the aim is to establish an efficient content production workflow which yields high-quality content for all 3D-TV displays. Against this background, we present a content production workflow whose acquisition device is a four-camera rig with a central narrow-baseline pair and two wide-baseline satellite cameras, and propose a multi-view video plus depth generation workflow optimized for this four-camera setup.
3dtv-conference: the true vision - capture, transmission and display of 3d video | 2012
Christian Riechert; Frederik Zilly; Marcus Müller; Peter Kauff
Depth image based rendering (DIBR) is a powerful tool both for stereo repurposing in post-production and for allowing consumers to adjust the stereo impression of stereoscopic content at home. For this, the quality of the rendered view should be as high as possible. Up to now, most DIBR implementations use a forward mapping from source to target image and thus introduce artifacts through rounding errors and aliasing. In this paper, a two-step rendering method is proposed and evaluated which allows well-known interpolation filters to be used for rendering virtual camera images. By first forward-mapping the disparity map and then, in a second step, rendering the new image via backward mapping, aliasing and jagged lines, which are especially visible in finely textured image regions, can be reduced significantly by using advanced interpolation filters in the backward mapping process.
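The two-step idea described above can be sketched in one dimension: forward-map the disparity map into the virtual view, then synthesise the virtual image by backward look-up into the source with an interpolation filter (linear here; the paper argues for more advanced filters). Hole filling and occlusion handling are omitted, and all names and the `alpha` baseline factor are illustrative.

```python
def forward_map_disparity(disp, alpha=0.5):
    """Step 1: splat each source disparity to its target position.
    `alpha` scales the baseline of the virtual camera."""
    target = [None] * len(disp)
    for x, d in enumerate(disp):
        xt = round(x - alpha * d)
        if 0 <= xt < len(target):
            # on conflicts keep the larger disparity (the closer object)
            if target[xt] is None or d > target[xt]:
                target[xt] = d
    return target

def backward_render(src, target_disp, alpha=0.5):
    """Step 2: sample the source at sub-pixel positions via linear
    interpolation, driven by the forward-mapped disparity."""
    out = []
    for x, d in enumerate(target_disp):
        if d is None:
            out.append(None)          # hole: would need inpainting
            continue
        xs = x + alpha * d            # back into the source view
        x0 = int(xs)
        x1 = min(x0 + 1, len(src) - 1)
        f = xs - x0
        out.append((1 - f) * src[x0] + f * src[x1])
    return out
```

Only the low-resolution disparity map is rounded in step 1; the image samples themselves are fetched at sub-pixel accuracy in step 2, which is where the rounding and aliasing artifacts of pure forward warping are avoided.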
IVMSP 2013 | 2013
Oliver Schreer; M. Bertzen; Nicole Atzpadin; Christian Riechert; Wolfgang Waizenegger; Ingo Feldmann
Multi-view camera calibration is an essential task in the field of 3D reconstruction, especially for immersive media applications such as 3D video communication. Although the problem of multi-view calibration is basically solved, there is still room to improve the calibration process and to increase the accuracy of calibration pattern acquisition. It is commonly known that robust and accurate calibration requires feature points that are equally distributed in 3D space, covering the whole volume of interest. In this paper, we propose user-guided calibration based on a graphical user interface, which drastically simplifies the correct acquisition of calibration patterns. Based on an optimized selection of patterns and their corresponding feature points, multi-view calibration becomes much faster in terms of both data acquisition and computational effort, while reaching the same accuracy as standard unguided acquisition of calibration patterns.
Archive | 2017
Marcus Müller; Christian Riechert; Ingo Feldmann; Gao Shan; Lu Yadong
international conference on pattern recognition | 2012
Christian Riechert; Frederik Zilly; Marcus Müller; Peter Kauff
Archive | 2012
Peter Kauff; Christian Riechert; Frederik Zilly; Marcus Mueller