Christian Lipski
Braunschweig University of Technology
Publication
Featured research published by Christian Lipski.
Southwest Symposium on Image Analysis and Interpretation | 2008
Christian Lipski; Björn Scholz; Kai Berger; Christian Linz; Timo Stich; Marcus A. Magnor
We present a lane detection algorithm that robustly detects and tracks various lane markings in real time. The first part is a feature detection algorithm that transforms several input images into a top-view perspective and analyzes local histograms; for this part we make use of state-of-the-art graphics hardware. The second part fits a very simple and flexible lane model to these lane-marking features. The algorithm was thoroughly tested on an autonomous vehicle that was one of the finalists in the 2007 DARPA Urban Challenge. In combination with other sensors, i.e., lidar-, radar-, and vision-based obstacle detection and surface classification, the autonomous vehicle is able to drive in an urban scenario at up to 15 mph.
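The paper itself contains no code; the following minimal sketch only illustrates the two-stage idea of rectifying the camera image into a top view and then scanning for locally bright lane-marking candidates. The calibration points, window size, and threshold are hypothetical placeholders, and OpenCV/NumPy stand in for the GPU implementation described above.

```python
import cv2
import numpy as np

def top_view(img, src_pts, dst_size=(400, 600)):
    """Warp the camera image onto the road plane (bird's-eye view).
    src_pts: four image points on the road plane (hypothetical calibration)."""
    w, h = dst_size
    dst_pts = np.float32([[0, h], [w, h], [w, 0], [0, 0]])
    H = cv2.getPerspectiveTransform(np.float32(src_pts), dst_pts)
    return cv2.warpPerspective(img, H, dst_size)

def lane_marking_features(top, win=16, thresh=40):
    """Mark pixels that are clearly brighter than their local horizontal
    neighbourhood -- a stand-in for the local histogram analysis."""
    gray = cv2.cvtColor(top, cv2.COLOR_BGR2GRAY).astype(np.float32)
    local_mean = cv2.blur(gray, (win, 1))   # horizontal window average
    return (gray - local_mean) > thresh     # markings are brighter than asphalt

# usage (hypothetical road-plane calibration points):
# mask = lane_marking_features(top_view(frame, [[300, 720], [980, 720], [800, 450], [480, 450]]))
```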
Computer Graphics Forum | 2010
Christian Lipski; Christian Linz; Kai Berger; Anita Sellent; Marcus A. Magnor
We present an image-based rendering system to viewpoint-navigate through space and time of complex real-world, dynamic scenes. Our approach accepts unsynchronized, uncalibrated multi-video footage as input. Inexpensive, consumer-grade camcorders suffice to acquire arbitrary scenes, for example in the outdoors, without elaborate recording setup procedures, also allowing for hand-held recordings. Instead of scene depth estimation, layer segmentation or 3D reconstruction, our approach is based on dense image correspondences, treating view interpolation uniformly in space and time: spatial viewpoint navigation, slow motion or freeze-and-rotate effects can all be created in the same way. Acquisition simplification, integration of moving cameras, generalization to difficult scenes and space-time symmetric interpolation amount to a widely applicable virtual video camera system.
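As a rough, non-authoritative illustration of correspondence-based interpolation (not the authors' implementation), the sketch below forward-warps two frames along dense correspondence fields and cross-fades them at a parameter alpha; in the system described above the same mechanism serves for a step between cameras (space) and between frames (time).

```python
import numpy as np

def forward_warp(img, corr, alpha):
    """Splat each source pixel a fraction alpha along its correspondence
    vector (corr: HxWx2, in pixels). Nearest-neighbour splatting for brevity."""
    h, w = img.shape[:2]
    out = np.zeros_like(img)
    ys, xs = np.mgrid[0:h, 0:w]
    xt = np.clip(np.round(xs + alpha * corr[..., 0]).astype(int), 0, w - 1)
    yt = np.clip(np.round(ys + alpha * corr[..., 1]).astype(int), 0, h - 1)
    out[yt, xt] = img[ys, xs]
    return out

def interpolate(img_a, img_b, corr_ab, corr_ba, alpha):
    """Virtual frame between A (alpha=0) and B (alpha=1), usable uniformly
    for a step in space (another camera) or in time (another frame)."""
    warp_a = forward_warp(img_a, corr_ab, alpha)
    warp_b = forward_warp(img_b, corr_ba, 1.0 - alpha)
    return ((1.0 - alpha) * warp_a + alpha * warp_b).astype(img_a.dtype)
```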
IEEE Transactions on Circuits and Systems for Video Technology | 2014
Christian Lipski; Felix Klose; Marcus A. Magnor
We present a novel approach to free-viewpoint video. Our main contribution is the formulation of a hybrid approach between image morphing and depth-image based rendering. When rendering the scene from novel viewpoints, we use both dense pixel correspondences between image pairs as well as an underlying, view-dependent geometrical model. Our novel reconstruction scheme iteratively refines geometric and correspondence information. By combining the strengths of both depth and correspondence estimation, our approach enables free-viewpoint video also for challenging scenes as well as for recordings that may violate typical constraints in multiview reconstruction. For example, our method is robust against inaccurate camera calibration, asynchronous capture, and imprecise depth reconstruction. Rendering results for different scenes and applications demonstrate the versatility and robustness of our approach.
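To make the depth-image-based half of such a hybrid concrete, here is a small, hedged sketch that reprojects a source view into a novel viewpoint using per-pixel depth; the camera conventions (4x4 camera-to-world poses, 3x3 intrinsics) are assumptions for illustration, not the paper's formulation.

```python
import numpy as np

def reproject(depth, K_src, pose_src, K_dst, pose_dst):
    """Map every source pixel into a novel view via its depth value
    (the depth-image-based rendering component of a hybrid scheme)."""
    h, w = depth.shape
    ys, xs = np.mgrid[0:h, 0:w]
    pix = np.stack([xs, ys, np.ones_like(xs)], -1).reshape(-1, 3).T   # 3xN homogeneous pixels
    rays = np.linalg.inv(K_src) @ pix                                 # camera-space viewing rays
    pts_cam = rays * depth.reshape(1, -1)                             # scale rays by depth
    pts_world = pose_src @ np.vstack([pts_cam, np.ones((1, pts_cam.shape[1]))])
    pts_dst = np.linalg.inv(pose_dst) @ pts_world                     # into the novel camera
    proj = K_dst @ pts_dst[:3]
    return (proj[:2] / proj[2:3]).T.reshape(h, w, 2)                  # novel-view pixel coordinates
```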
Conference on Visual Media Production | 2010
Christian Lipski; C. Linz; Thomas Neumann; M. Wacker; Marcus A. Magnor
We present an algorithm for estimating dense image correspondences. Our versatile approach lends itself to various tasks typical for video post-processing, including image morphing, optical flow estimation, stereo rectification, disparity/depth reconstruction and baseline adjustment. We incorporate recent advances in feature matching, energy minimization, stereo vision and data clustering into our approach. At the core of our correspondence estimation we use Efficient Belief Propagation for energy minimization. While state-of-the-art algorithms only work on thumbnail-sized images, our novel feature downsampling scheme in combination with a simple yet efficient data term compression can cope with high-resolution data. The incorporation of SIFT features into data term computation further resolves matching ambiguities, making long-range correspondence estimation possible. We detect occluded areas by evaluating the correspondence symmetry and apply Geodesic matting to automatically inpaint these regions.
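The occlusion test mentioned at the end of the abstract can be illustrated with a simple forward-backward (symmetry) check; this is a hedged sketch with an assumed tolerance, not the paper's exact criterion.

```python
import numpy as np

def occlusion_mask(corr_ab, corr_ba, tol=1.0):
    """Flag pixels whose A->B correspondence is not confirmed by the
    B->A field; inconsistent pixels are treated as occluded and left
    to inpainting."""
    h, w = corr_ab.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    xt = np.clip(np.round(xs + corr_ab[..., 0]).astype(int), 0, w - 1)
    yt = np.clip(np.round(ys + corr_ab[..., 1]).astype(int), 0, h - 1)
    round_trip = corr_ab + corr_ba[yt, xt]        # ~0 where the mapping is symmetric
    return np.linalg.norm(round_trip, axis=-1) > tol
```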
International Conference on Computer Graphics and Interactive Techniques | 2008
Christoph Salge; Christian Lipski; Tobias Mahlmann; Brigitte Mathiak
Fun in computer games depends on many factors. While some factors like uniqueness and humor can only be measured by human subjects, in a strategy game the rule system is an important and measurable factor. Classics like chess and Go have a millennia-old history of success, based on clever rule design: they have only a few rules and are relatively easy to understand, yet they offer myriad possibilities. Testing the depth of a rule set is very hard, especially for a rule system as complex as in a classic strategic computer game. It is necessary, though, to ensure prolonged gaming fun. In our approach, we use artificial intelligence (AI) to simulate hours of beta-testing the given rules, tweaking the rules to provide more game-playing fun and depth. To avoid making the AI a mirror of its programmers' gaming preferences, we not only evolved the AI with a genetic algorithm, but also used three fundamentally different AI paradigms to find boring loopholes, inefficient game mechanisms and, last but not least, complex erroneous behavior.
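The abstract gives no implementation details; as a hedged illustration of the evolutionary part only, here is a minimal genetic-algorithm loop over parameter vectors. The fitness function, which would play out simulated matches, is a hypothetical placeholder.

```python
import random

def evolve(fitness, pop_size=20, genome_len=8, generations=50, mutation_rate=0.1):
    """Minimal GA: evolve real-valued genomes (e.g. AI or rule weights)
    against a user-supplied fitness function."""
    pop = [[random.random() for _ in range(genome_len)] for _ in range(pop_size)]
    for _ in range(generations):
        parents = sorted(pop, key=fitness, reverse=True)[:pop_size // 2]  # truncation selection
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, genome_len)
            child = a[:cut] + b[cut:]                                     # one-point crossover
            children.append([g + random.gauss(0, 0.1) if random.random() < mutation_rate else g
                             for g in child])                             # Gaussian mutation
        pop = parents + children
    return max(pop, key=fitness)

# usage: best = evolve(lambda genome: simulate_matches(genome))  # simulate_matches is hypothetical
```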
International Conference on Computer Graphics and Interactive Techniques | 2009
Christian Lipski; Christian Linz; Kai Berger; Marcus A. Magnor
We present an image-based rendering system to viewpoint-navigate through space and time of complex real-world, dynamic scenes. Our approach accepts unsynchronized, uncalibrated multi-video footage as input. Inexpensive, consumer-grade camcorders suffice to acquire arbitrary scenes, e.g., in the outdoors, without elaborate recording setup procedures. Instead of scene depth estimation, layer segmentation, or 3D reconstruction, our approach is based on dense image correspondences, treating view interpolation uniformly in space and time: spatial viewpoint navigation, slow motion, and freeze-and-rotate effects can all be created in the same fashion. Acquisition simplification, generalization to difficult scenes, and space-time symmetric interpolation amount to a widely applicable Virtual Video Camera system.
Conference on Visual Media Production | 2011
Felix Klose; Kai Ruhl; Christian Lipski; Marcus A. Magnor
Finding dense correspondences between two images is a well-researched but still unsolved problem. For various tasks in computer graphics, e.g. image interpolation, obtaining plausible correspondences is a vital component. We present an interactive tool that allows the user to modify and correct dense correspondence maps between two given images. Incorporating state-of-the-art algorithms in image segmentation, correspondence estimation and optical flow, our tool assists the user in selecting and correcting mismatched correspondences.
Vision, Modeling and Visualization | 2010
Felix Klose; Christian Lipski; Marcus A. Magnor
We present an algorithm for scene flow reconstruction from multi-view data. The main contribution is its ability to cope with asynchronously captured videos. Our holistic approach simultaneously estimates depth, orientation and 3D motion; as a result, we obtain a quasi-dense surface patch representation of the dynamic scene. The reconstruction starts with the generation of a sparse set of patches from the input views, which are then iteratively expanded along the object surfaces. We show that the approach performs well for scenes ranging from single objects to cluttered real-world scenarios.
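A hedged sketch of the patch representation and the iterative expansion described above; the photoconsistency test and the neighbour-proposal step are placeholders, not the paper's formulation.

```python
from collections import deque
from dataclasses import dataclass
import numpy as np

@dataclass
class Patch:
    center: np.ndarray   # 3D position
    normal: np.ndarray   # surface orientation
    motion: np.ndarray   # 3D motion over the patch's time window
    score: float         # multi-view photoconsistency

def expand(seeds, propose_neighbors, is_consistent, max_patches=100_000):
    """Grow a quasi-dense surface reconstruction from sparse seed patches:
    accepted patches spawn candidates along the surface; candidates that
    fail the (placeholder) photoconsistency test are discarded."""
    accepted, queue = [], deque(seeds)
    while queue and len(accepted) < max_patches:
        candidate = queue.popleft()
        if is_consistent(candidate):
            accepted.append(candidate)
            queue.extend(propose_neighbors(candidate))
    return accepted
```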
International Conference on Computer Graphics and Interactive Techniques | 2010
Christian Linz; Christian Lipski; Marcus A. Magnor
Multi-image interpolation in space and time has recently received considerable attention. Typically, the interpolated image is synthesized by adaptively blending several forward-warped images. Blending itself is a low-pass filtering operation: the interpolated images are prone to blurring and ghosting artifacts as soon as the underlying correspondence fields are imperfect. We address both issues and propose a multi-image interpolation algorithm that avoids blending. Instead, our algorithm decides for each pixel in the synthesized view from which input image to sample. Combined with a symmetrical long-range optical flow formulation for correspondence field estimation, our approach yields crisp interpolated images without ghosting artifacts.
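A minimal sketch of the "select instead of blend" idea, assuming the candidate views have already been forward-warped and that a per-pixel matching cost is available (both hypothetical inputs here):

```python
import numpy as np

def select_instead_of_blend(warped, costs):
    """warped: N x H x W x C candidate images; costs: N x H x W per-pixel costs.
    Sample every output pixel from the single lowest-cost candidate rather
    than averaging, so no low-pass blending blur or ghosting is introduced."""
    best = np.argmin(costs, axis=0)          # H x W index of the chosen input
    h, w = best.shape
    ys, xs = np.mgrid[0:h, 0:w]
    return warped[best, ys, xs]              # per-pixel gather
```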
Proceedings of the 1st international workshop on 3D video processing | 2010
Christian Linz; Christian Lipski; Lorenz Rogge; Christian Theobalt; Marcus A. Magnor
Space-time visual effects play an increasingly prominent role in recent motion picture productions as well as TV commercials. Currently, these effects must be meticulously planned before extensive, specialized camera equipment can be precisely positioned and aligned at the set; once recorded, the effect cannot be altered or edited anymore. In this paper, we present an alternative approach to space-time visual effects creation that allows flexible generation and interactive editing of a multitude of different effects during the post-production stage. The approach requires neither expensive, special recording equipment nor elaborate on-set alignment or calibration procedures. Rather, a handful of off-the-shelf camcorders positioned around a real-world scene suffice to record the input data. We synthesize various space-time visual effects from unsynchronized, sparse multi-view video footage by making use of recent advances in image interpolation. Based on a representation in a distinct navigation space, our space-time visual effects (STF/X) editor allows us to interactively create and edit, on the fly, various effects such as slow motion, stop motion, freeze-rotate, motion blur, multi-exposure, flash trail and motion distortion. As the input to our approach consists solely of video frames, various image-based artistic stylizations, such as speed lines and particle effects, are also integrated into the editor. Finally, different effects can be combined, enabling the creation of new visual effects that are impossible to record with the conventional on-set approach.
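To illustrate, in a simplified and assumed form, how such effects reduce to paths through a camera-time navigation space, the sketch below emits (camera, time) samples for two classic effects; each sample would then be synthesized by space-time image interpolation. The parameterization is an assumption for illustration, not the editor's actual representation.

```python
def slow_motion(camera, t_start, t_end, factor=4):
    """Slow motion: hold the camera fixed and sample time `factor` times
    more densely than it was recorded."""
    n = int((t_end - t_start) * factor)
    return [(camera, t_start + i / factor) for i in range(n)]

def freeze_and_rotate(cameras, t_freeze):
    """Freeze-rotate ('bullet time'): hold time fixed and sweep the
    viewpoint across the camera arc."""
    return [(c, t_freeze) for c in cameras]
```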