Kurt Cornelis
Katholieke Universiteit Leuven
Publication
Featured research published by Kurt Cornelis.
International Journal of Computer Vision | 2004
Marc Pollefeys; Luc Van Gool; Maarten Vergauwen; Frank Verbiest; Kurt Cornelis; Jan Tops; Reinhard Koch
In this paper, a complete system to build visual models from camera images is presented. The system can deal with uncalibrated image sequences acquired with a hand-held camera. Based on tracked or matched features, the relations between multiple views are computed. From these, both the structure of the scene and the motion of the camera are retrieved. The ambiguity of the reconstruction is restricted from projective to metric through self-calibration. A flexible multi-view stereo matching scheme is used to obtain a dense estimate of the surface geometry. From the computed data, different types of visual models are constructed. Besides the traditional geometry- and image-based approaches, a combined approach with view-dependent geometry and texture is presented. As an application, the fusion of real and virtual scenes is also shown.
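Once the motion of the camera is recovered, each matched feature seen in two views defines two rays whose intersection gives the scene point. The sketch below (my own minimal illustration, not the paper's implementation; all camera values are invented) estimates a 3D point as the midpoint of the shortest segment between two such rays:

```python
# Minimal two-view triangulation sketch: a matched feature defines a ray
# from each camera center; the 3D point is taken as the midpoint of the
# common perpendicular of the two rays. Pure Python, toy values.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def sub(a, b):
    return [x - y for x, y in zip(a, b)]

def add(a, b):
    return [x + y for x, y in zip(a, b)]

def scale(v, s):
    return [x * s for x in v]

def triangulate_midpoint(c1, d1, c2, d2):
    """Midpoint of the common perpendicular of rays c1 + t*d1 and c2 + s*d2."""
    w = sub(c1, c2)
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w), dot(d2, w)
    denom = a * c - b * b          # zero iff the rays are parallel
    t = (b * e - c * d) / denom    # parameter along ray 1
    s = (a * e - b * d) / denom    # parameter along ray 2
    p1 = add(c1, scale(d1, t))
    p2 = add(c2, scale(d2, s))
    return scale(add(p1, p2), 0.5)

# Two cameras one unit apart; both rays pass exactly through (0, 0, 5):
point = triangulate_midpoint([-0.5, 0, 0], [0.5, 0, 5],
                             [0.5, 0, 0], [-0.5, 0, 5])
print(point)  # -> [0.0, 0.0, 5.0]
```

With noisy feature matches the two rays no longer intersect, which is why the midpoint (rather than an exact intersection) is used.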
Computer Vision and Pattern Recognition (CVPR) | 2007
Bastian Leibe; Nico Cornelis; Kurt Cornelis; Luc Van Gool
In this paper, we present a system that integrates fully automatic scene geometry estimation, 2D object detection, 3D localization, trajectory estimation, and tracking for dynamic scene interpretation from a moving vehicle. Our only input consists of two video streams from a calibrated stereo rig on top of a car. From these streams, we estimate structure-from-motion (SfM) and scene geometry in real time. In parallel, we perform multi-view/multi-category object recognition to detect cars and pedestrians in both camera images. Using the SfM self-localization, 2D object detections are converted to 3D observations, which are accumulated in a world coordinate frame. A subsequent tracking module analyzes the resulting 3D observations to find physically plausible space-time trajectories. Finally, a global optimization criterion takes object-object interactions into account to arrive at accurate 3D localization and trajectory estimates for both cars and pedestrians. We demonstrate the performance of our integrated system on challenging real-world data showing car passages through crowded city areas.
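One step described above, converting a 2D detection into a 3D observation, amounts to intersecting the viewing ray through the detection's footpoint with the estimated ground plane. The sketch below is my own hedged illustration of that geometry; the camera pose and plane values are invented, not data from the paper:

```python
# Lift a 2D detection to a 3D observation by intersecting the viewing
# ray through the bounding-box footpoint with the ground plane n.x = d.

def backproject_to_ground(cam_center, ray_dir, plane_normal, plane_d):
    """Intersect ray cam_center + t*ray_dir (t >= 0) with plane n.x = d."""
    denom = sum(n * r for n, r in zip(plane_normal, ray_dir))
    if abs(denom) < 1e-9:
        return None                      # ray parallel to the ground plane
    num = plane_d - sum(n * c for n, c in zip(plane_normal, cam_center))
    t = num / denom
    if t < 0:
        return None                      # intersection behind the camera
    return [c + t * r for c, r in zip(cam_center, ray_dir)]

# Camera 1.5 m above the ground plane y = 0, ray pointing slightly down:
obs = backproject_to_ground([0.0, 1.5, 0.0],   # camera center (world frame)
                            [0.0, -0.3, 1.0],  # ray through detection footpoint
                            [0.0, 1.0, 0.0],   # ground-plane normal
                            0.0)               # plane offset: y = 0
print(obs)  # -> [0.0, 0.0, 5.0]
```

Accumulating such points over frames, in the world frame given by the SfM self-localization, yields the 3D observations that the tracker links into trajectories.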
International Journal of Computer Vision | 2008
Nico Cornelis; Bastian Leibe; Kurt Cornelis; Luc Van Gool
Supplying realistically textured 3D city models at ground level promises to be useful for pre-visualizing upcoming traffic situations in car navigation systems. Because this pre-visualization can be rendered from the expected future viewpoints of the driver, the required maneuver will be more easily understandable. 3D city models can be reconstructed from the imagery recorded by surveying vehicles. The vastness of image material gathered by these vehicles, however, puts extreme demands on vision algorithms to ensure their practical usability. Algorithms need to be as fast as possible and should result in compact, memory-efficient 3D city models for future ease of distribution and visualization. For the considered application, these are not contradictory demands. Simplified geometry assumptions can speed up vision algorithms while automatically guaranteeing compact geometry models. In this paper, we present a novel city modeling framework which builds upon this philosophy to create 3D content at high speed. Objects in the environment, such as cars and pedestrians, may however disturb the reconstruction, as they violate the simplified geometry assumptions, leading to visually unpleasant artifacts and degrading the visual realism of the resulting 3D city model. Unfortunately, such objects are prevalent in urban scenes. We therefore extend the reconstruction framework by integrating it with an object recognition module that automatically detects cars in the input video streams and localizes them in 3D. The two components of our system are tightly integrated and benefit from each other’s continuous input. 3D reconstruction delivers geometric scene context, which greatly helps improve detection precision. The detected car locations, on the other hand, are used to instantiate virtual placeholder models which augment the visual realism of the reconstructed city model.
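The claim that geometric scene context improves detection precision can be made concrete: with the ground plane known, every 2D detection implies a metric object height (pinhole relation H = h_px × depth / focal), and detections of implausible size can be rejected. The sketch below is my own illustration of this filtering idea; the numbers and the height range are assumptions, not values from the paper:

```python
# Plausibility filter: a detection's implied metric height must fall in
# a plausible range for the object class (here: cars). Toy values.

CAR_HEIGHT_RANGE_M = (1.2, 2.2)   # assumed plausible car heights, meters

def implied_height_m(h_pixels, depth_m, focal_px):
    """Pinhole model: metric height = pixel height * depth / focal length."""
    return h_pixels * depth_m / focal_px

def plausible_car(h_pixels, depth_m, focal_px):
    lo, hi = CAR_HEIGHT_RANGE_M
    return lo <= implied_height_m(h_pixels, depth_m, focal_px) <= hi

# A 150 px tall detection at 10 m with an 800 px focal length is ~1.9 m:
print(plausible_car(150, 10.0, 800.0))  # -> True
# The same box at 40 m would imply a 7.5 m tall "car" and is rejected:
print(plausible_car(150, 40.0, 800.0))  # -> False
```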
Computer Vision and Pattern Recognition (CVPR) | 2006
Nico Cornelis; Kurt Cornelis; L. Van Gool
Nowadays, GPS-based car navigation systems mainly use speech and aerial views of simplified road maps to guide drivers to their destination. However, drivers often experience difficulties in linking the simple 2D aerial map with the visual impression that they get from the real environment, which is inherently ground-level based. Therefore, supplying realistically textured 3D city models at ground level proves very useful for pre-visualizing an upcoming traffic situation. Because this pre-visualization can be rendered from the expected future viewpoints of the driver, the latter will more easily understand the required maneuver. 3D city models can be reconstructed from the imagery recorded by surveying vehicles. The vastness of image material gathered by these vehicles, however, puts extreme demands on vision algorithms to ensure their practical usability. Algorithms need to be as fast as possible and should result in compact, memory-efficient 3D city models for future ease of distribution and visualization. For the considered application, these are not contradictory demands. Simplified geometry assumptions can speed up vision algorithms while automatically guaranteeing compact geometry models. We present a novel city modeling framework which builds upon this philosophy to create 3D content at high speed, which could allow the pre-visualization of any conceivable traffic situation by car navigation modules.
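The "simplified geometry assumptions give compact models" idea can be illustrated with a toy example of my own (not the paper's actual representation): if each image column of a facade is assumed to have a single depth, a dense but noisy depth map collapses to one robust value per column, which is both fast to compute and tiny to store:

```python
# Collapse per-column depth samples to a single robust depth per column,
# yielding a compact, ruled-surface-like facade profile. Toy data.

from statistics import median

def facade_profile(depth_columns):
    """One median depth per image column -> compact facade geometry."""
    return [median(col) for col in depth_columns]

noisy = [
    [4.9, 5.1, 9.0],   # 9.0: outlier from a passing object, suppressed
    [5.0, 5.2, 5.1],
    [6.0, 5.9, 6.1],   # facade steps back here
]
print(facade_profile(noisy))  # -> [5.1, 5.1, 6.0]
```

The median keeps the model robust against objects (cars, pedestrians) that violate the single-depth assumption, which is exactly the failure mode the companion recognition module addresses.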
IEEE Computer Graphics and Applications | 2003
Marc Pollefeys; Luc Van Gool; Maarten Vergauwen; Kurt Cornelis; Frank Verbiest; Jan Tops
Until recently, archaeologists have had limited 3D recording options because of the complexity and expense of the necessary recording equipment. We outline a system that helps archaeologists acquire 3D models without using equipment more complex or delicate than a standard digital camera.
IEEE Transactions on Pattern Analysis and Machine Intelligence | 2004
Kurt Cornelis; Frank Verbiest; Luc Van Gool
In sequential structure-from-motion algorithms for extended image or video sequences, error build-up caused by drift poses a problem: feature tracks that should represent a single scene point end up with distinct 3D reconstructions. For the final bundle adjustment to remove this drift, the cost function must be modified to include these 3D-3D correspondences. However, because bundle adjustment is a nonlinear optimization technique, the drift must also be removed from the supplied initial solution to allow convergence to the true global optimum. Before drift can be removed, it has to be detected. This is accomplished by exploiting the long-term behavior of drift, which leaves 3D reconstructions from short subsequences intact. Drift detection then boils down to identifying reconstructions of the same scene part that differ only up to a projective transformation. After detection, the drift can be removed from subsequently processed images, and an Adapted Bundle Adjustment using the correspondences supplied by drift detection can remove the drift from previous images. Several experiments on real video sequences demonstrate the merit of drift detection and removal.
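The modified cost function can be sketched in a toy form (my own formulation, not the paper's exact cost): the usual sum of squared reprojection residuals is extended with a penalty that pulls together duplicate 3D reconstructions of the same scene point:

```python
# Toy adapted bundle-adjustment cost: standard reprojection error plus a
# weighted 3D-3D correspondence penalty on drift-induced duplicates.

def adapted_cost(reproj_residuals, duplicate_pairs, weight=1.0):
    """Sum of squared reprojection residuals + 3D-3D correspondence term."""
    cost = sum(r * r for r in reproj_residuals)
    for p, q in duplicate_pairs:   # p, q: two reconstructions of one point
        cost += weight * sum((a - b) ** 2 for a, b in zip(p, q))
    return cost

# Two duplicate reconstructions of the same corner, offset by drift:
pairs = [((1.0, 2.0, 10.0), (1.2, 2.0, 10.5))]
print(round(adapted_cost([0.5, -0.5], pairs), 6))  # -> 0.79
```

Minimizing this cost drives the duplicate pair together; as the abstract notes, the initial solution must already be de-drifted for the nonlinear optimizer to reach the global optimum rather than a nearby local one.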
Visual Analytics Science and Technology | 2001
Marc Pollefeys; Luc Van Gool; Maarten Vergauwen; Kurt Cornelis; Frank Verbiest; Jan Tops
In this paper, an approach is presented that obtains virtual models from sequences of images. The system can deal with uncalibrated image sequences acquired with a hand-held camera. Based on tracked or matched features, the relations between multiple views are computed. From these, both the structure of the scene and the motion of the camera are retrieved. The ambiguity of the reconstruction is restricted from projective to metric through auto-calibration. A flexible multi-view stereo matching scheme is used to obtain a dense estimate of the surface geometry. From the computed data, virtual models can be constructed or, inversely, virtual models can be included in the original images.
European Workshop on 3D Structure from Multiple Images of Large-Scale Environments | 2000
Kurt Cornelis; Marc Pollefeys; Maarten Vergauwen; Luc Van Gool
Augmented Reality (AR) aims at merging the real and the virtual in order to enrich a real environment with virtual information. Augmentations range from simple text annotations accompanying real objects to virtual mimics of real-life objects inserted into a real environment. In the latter case, the ultimate goal is to make it impossible to differentiate between real and virtual objects. Several problems need to be overcome before realizing this goal. Amongst them are the rigid registration of virtual objects into the real environment, the problem of mutual occlusion of real and virtual objects, and the extraction of the illumination distribution of the real environment in order to render the virtual objects with this illumination model. This paper describes how we implemented an Augmented Reality system that registers virtual objects into a completely uncalibrated video sequence of a real environment that may contain some moving parts. The remaining problems of occlusion and illumination are not discussed in this paper and are left as future research topics.
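The rigid registration step reduces, per frame, to projecting virtual 3D content with the camera state recovered for that frame. A minimal pinhole-projection sketch (my own illustration; rotation, translation, and intrinsics below are invented values, not the paper's data):

```python
# Project a virtual world point into a frame using a recovered camera
# state: rotation R, translation t, focal length f, principal point (cx, cy).

def project(point_w, R, t, f, cx, cy):
    """Pinhole projection of a world point into pixel coordinates."""
    # Transform into the camera frame: X_c = R * X_w + t
    xc = [sum(R[i][j] * point_w[j] for j in range(3)) + t[i] for i in range(3)]
    if xc[2] <= 0:
        return None                      # point behind the camera
    return (f * xc[0] / xc[2] + cx, f * xc[1] / xc[2] + cy)

I = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]    # identity rotation for simplicity
px = project([0.5, 0.0, 4.0], I, [0, 0, 1], 800.0, 320.0, 240.0)
print(px)  # -> (400.0, 240.0)
```

Because the input video is uncalibrated, the camera states (and the focal length) must themselves be estimated from the sequence before this projection can be applied.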
Virtual Reality Software and Technology (VRST) | 2001
Kurt Cornelis; Marc Pollefeys; Luc Van Gool
Augmented Reality (AR) can hardly be called uncharted territory. Much research in this area has revealed solutions to the three most prominent challenges of AR: accurate camera state retrieval, resolving occlusions between real and virtual objects, and extracting the illumination distribution of the environment. Solving these three challenges improves the illusion that virtual entities belong to our reality. This paper demonstrates a framework that recovers accurate camera states from a video sequence based on feature tracking. Without prior calibration knowledge, it can create AR video products with negligible or invisible jitter and drift of virtual entities, starting from general input video sequences. Together with the referenced papers, this work describes a readily implementable and robust AR system.
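The "negligible jitter" claim can be quantified. One simple metric (my own illustration, not the paper's evaluation protocol) is to track where a static virtual point lands in each rendered frame and measure the spread of its frame-to-frame displacements:

```python
# Jitter metric sketch: standard deviation of the frame-to-frame
# displacement magnitudes of a static virtual point's pixel position.

from statistics import pstdev

def jitter(pixel_track):
    """Std. dev. of per-frame displacement magnitudes, in pixels."""
    steps = [((x2 - x1) ** 2 + (y2 - y1) ** 2) ** 0.5
             for (x1, y1), (x2, y2) in zip(pixel_track, pixel_track[1:])]
    return pstdev(steps)

smooth = [(100.0, 100.0), (101.0, 100.0), (102.0, 100.0), (103.0, 100.0)]
shaky  = [(100.0, 100.0), (103.0, 100.0), (100.5, 100.0), (104.0, 100.0)]
print(jitter(smooth) < jitter(shaky))  # -> True
```

A smooth camera-state trajectory yields uniform displacements (zero spread), while tracking errors show up as visible frame-to-frame jumps.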
European Physical Journal A | 1978
M. Huyse; Kurt Cornelis; G. Dumont; G. Lhersonneau; J. Verplancke; W. B. Walters
The decays of the neutron-deficient nuclides 21-s 97Ag, 44.5-s 98Ag, 15-s 99mAg, and 124-s 99gAg have been investigated with the LISOL facility. Sources were produced by 92Mo(14N, ypxn) reactions on an enriched 92Mo target. Positron, x-ray, and γ-ray singles spectra were recorded on mass-separated samples. The results are consistent with a 9/2+ ground state in 99Ag and a high spin and parity (6+ or 7+) for 98Ag.
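The half-lives quoted above (e.g. 21 s for 97Ag, 44.5 s for 98Ag) fix the decay law N(t) = N0 · 2^(−t / T½). A small sketch of that relation (the timing values below are just the abstract's half-lives reused as examples):

```python
# Fraction of a radioactive sample remaining after t seconds, given its
# half-life: N(t)/N0 = 2 ** (-t / T_half).

def remaining_fraction(t_seconds, half_life_seconds):
    return 2.0 ** (-t_seconds / half_life_seconds)

# After one half-life, half of the 97Ag nuclei remain:
print(remaining_fraction(21.0, 21.0))   # -> 0.5
# After 89 s (two half-lives), a quarter of the 98Ag sample is left:
print(remaining_fraction(89.0, 44.5))   # -> 0.25
```

This short-half-life regime is why an on-line isotope separator such as LISOL is needed: the sources decay away within minutes of production.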