Publication


Featured research published by L. Van Gool.


International Conference on Computer Vision | 1998

Self-calibration and metric reconstruction in spite of varying and unknown internal camera parameters

Marc Pollefeys; Reinhard Koch; L. Van Gool

In this paper, the feasibility of self-calibration in the presence of varying internal camera parameters is investigated. A self-calibration method is presented which efficiently deals with all kinds of constraints on the internal camera parameters. Within this framework, a practical method is proposed which can retrieve metric reconstruction from image sequences obtained with uncalibrated zooming/focusing cameras. The feasibility of the approach is illustrated on real and synthetic examples.
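For readers unfamiliar with the machinery, the core constraint behind such self-calibration methods can be sketched in standard multiple-view-geometry notation (the symbols below are the conventional ones, not necessarily the paper's):

```latex
% Intrinsics K_i enter through the dual image of the absolute conic (DIAC):
\[
  \omega_i^{*} = K_i K_i^{\top},
  \qquad
  \omega_i^{*} \sim H^{\infty}_{1i}\,\omega_1^{*}\,\bigl(H^{\infty}_{1i}\bigr)^{\top},
\]
% where H^\infty_{1i} is the homography induced by the plane at infinity
% between views 1 and i. Assumptions such as zero skew or unit aspect
% ratio turn into equations on the entries of each \omega_i^*; once these
% are solved, K_i follows by Cholesky factorization and the projective
% reconstruction is upgraded to a metric one.
```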


Computer Vision and Pattern Recognition | 2008

On benchmarking camera calibration and multi-view stereo for high resolution imagery

Christoph Strecha; W. von Hansen; L. Van Gool; Pascal Fua; U. Thoennessen

In this paper we want to start the discussion on whether image-based 3D modelling techniques can possibly be used to replace LIDAR systems for outdoor 3D data acquisition. Two main issues have to be addressed in this context: (i) camera calibration (internal and external) and (ii) dense multi-view stereo. To investigate both, we have acquired test data from outdoor scenes with both LIDAR and cameras. Using the LIDAR data as a reference, we estimated the ground truth for several scenes. Evaluation sets are prepared to evaluate different aspects of 3D model building: (i) pose estimation and multi-view stereo with known internal camera parameters; (ii) camera calibration and multi-view stereo with the raw images as the only input; and (iii) multi-view stereo.
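As a rough illustration of how such an evaluation might score a dense reconstruction against the LIDAR reference, here is a minimal sketch; the completeness/accuracy split and the tolerance sigma are generic assumptions, not the paper's exact protocol:

```python
import numpy as np

def depth_error_stats(estimated, reference, sigma):
    """Compare an estimated depth map against a LIDAR reference.

    estimated, reference: float arrays of per-pixel depths (NaN = no data).
    sigma: error tolerance in the same units as the depths.
    Returns completeness (fraction of reference pixels with an estimate)
    and accuracy (fraction of those within sigma of the reference).
    """
    valid_ref = ~np.isnan(reference)
    valid_est = ~np.isnan(estimated) & valid_ref
    completeness = valid_est.sum() / max(valid_ref.sum(), 1)
    err = np.abs(estimated[valid_est] - reference[valid_est])
    accuracy = (err <= sigma).mean() if err.size else 0.0
    return completeness, accuracy
```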


International Conference on Computer Vision | 2009

You'll never walk alone: Modeling social behavior for multi-target tracking

Stefano Pellegrini; Andreas Ess; Konrad Schindler; L. Van Gool

Object tracking typically relies on a dynamic model to predict the object's location from its past trajectory. In crowded scenarios a strong dynamic model is particularly important, because more accurate predictions allow for smaller search regions, which greatly simplifies data association. Traditional dynamic models predict the location for each target solely based on its own history, without taking into account the remaining scene objects. Collisions are resolved only when they happen. Such an approach ignores important aspects of human behavior: people are driven by their future destination, take into account their environment, anticipate collisions, and adjust their trajectories at an early stage in order to avoid them. In this work, we introduce a model of dynamic social behavior, inspired by models developed for crowd simulation. The model is trained with videos recorded from a bird's-eye view at busy locations, and applied as a motion model for multi-people tracking from a vehicle-mounted camera. Experiments on real sequences show that accounting for social interactions and scene knowledge improves tracking performance, especially during occlusions.
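A minimal sketch of a social-force-style predictor in this spirit is given below; the attraction/repulsion terms and their parameters are generic crowd-simulation assumptions, not the authors' learned model:

```python
import numpy as np

def social_force_step(pos, vel, goals, dt=0.1, tau=0.5, a=2.0, b=1.0):
    """One Euler step of a social-force-style motion model.

    pos, vel, goals: (N, 2) float arrays of positions, velocities, goals.
    tau: relaxation time toward the desired (unit-speed) velocity.
    a, b: magnitude and range of the pairwise repulsion.
    """
    # Attraction: relax toward a unit-speed velocity pointing at the goal.
    to_goal = goals - pos
    desired = to_goal / (np.linalg.norm(to_goal, axis=1, keepdims=True) + 1e-9)
    force = (desired - vel) / tau
    # Repulsion: exponentially decaying pairwise avoidance forces.
    diff = pos[:, None, :] - pos[None, :, :]
    dist = np.linalg.norm(diff, axis=2) + 1e-9
    np.fill_diagonal(dist, np.inf)               # no self-repulsion
    force += (diff / dist[..., None] * (a * np.exp(-dist / b))[..., None]).sum(axis=1)
    vel = vel + dt * force
    return pos + dt * vel, vel
```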


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2011

Online Multiperson Tracking-by-Detection from a Single, Uncalibrated Camera

Michael D. Breitenstein; Fabian Reichlin; Bastian Leibe; Esther Koller-Meier; L. Van Gool

In this paper, we address the problem of automatically detecting and tracking a variable number of persons in complex scenes using a monocular, potentially moving, uncalibrated camera. We propose a novel approach for multiperson tracking-by-detection in a particle filtering framework. In addition to final high-confidence detections, our algorithm uses the continuous confidence of pedestrian detectors and online-trained, instance-specific classifiers as a graded observation model. Thus, generic object category knowledge is complemented by instance-specific information. The main contribution of this paper is to explore how these unreliable information sources can be used for robust multiperson tracking. The algorithm detects and tracks a large number of dynamically moving people in complex scenes with occlusions, does not rely on background modeling, requires no camera or ground plane calibration, and only makes use of information from the past. Hence, it imposes very few restrictions and is suitable for online applications. Our experiments show that the method yields good tracking performance in a large variety of highly dynamic scenarios, such as typical surveillance videos, webcam footage, or sports sequences. We demonstrate that our algorithm outperforms other methods that rely on additional information. Furthermore, we analyze the influence of different algorithm components on the robustness.
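A minimal sketch of one filtering cycle under these assumptions follows; it uses only a detector confidence map as the observation model and omits the paper's instance-specific online classifiers:

```python
import numpy as np

def particle_filter_step(particles, weights, confidence_map, motion_std=2.0):
    """One predict/update cycle of a per-target particle filter in which
    continuous detector confidence serves as a graded observation model.

    particles: (N, 2) array of image positions (x, y).
    weights:   (N,) normalized particle weights.
    confidence_map: 2D array of detector confidences over the image.
    """
    # Predict: diffuse particles with Gaussian motion noise.
    particles = particles + np.random.normal(0, motion_std, particles.shape)
    h, w = confidence_map.shape
    xs = np.clip(particles[:, 0].astype(int), 0, w - 1)
    ys = np.clip(particles[:, 1].astype(int), 0, h - 1)
    # Update: weight each particle by the detector confidence at its location.
    weights = weights * confidence_map[ys, xs]
    total = weights.sum()
    n = len(weights)
    weights = weights / total if total > 0 else np.full(n, 1.0 / n)
    # Resample when the effective sample size drops too low.
    if 1.0 / (weights ** 2).sum() < n / 2:
        idx = np.random.choice(n, n, p=weights)
        particles, weights = particles[idx], np.full(n, 1.0 / n)
    return particles, weights
```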


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2011

Hough Forests for Object Detection, Tracking, and Action Recognition

Juergen Gall; Angela Yao; Nima Razavi; L. Van Gool; Victor S. Lempitsky

The paper introduces Hough forests, which are random forests adapted to perform a generalized Hough transform in an efficient way. Compared to previous Hough-based systems such as implicit shape models, Hough forests improve the performance of the generalized Hough transform for object detection on a categorical level. At the same time, their flexibility permits extensions of the Hough transform to new domains such as object tracking and action recognition. Hough forests can be regarded as task-adapted codebooks of local appearance that allow fast supervised training and fast matching at test time. They achieve high detection accuracy since the entries of such codebooks are optimized to cast Hough votes with small variance and since their efficiency permits dense sampling of local image patches or video cuboids during detection. The efficacy of Hough forests for a set of computer vision tasks is validated through experiments on a large set of publicly available benchmark data sets and comparisons with the state-of-the-art.
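A minimal sketch of the voting stage may help make this concrete; it assumes a forest has already been trained and each patch has been routed to a leaf storing center offsets and a foreground probability (training is omitted):

```python
import numpy as np

def hough_votes(patch_centers, leaf_offsets, leaf_fg_probs, image_shape):
    """Accumulate Hough votes for object centers.

    patch_centers: (P, 2) positions of sampled patches (x, y).
    leaf_offsets:  list of (K_i, 2) arrays, the object-center offsets
                   stored in the leaf each patch reached.
    leaf_fg_probs: (P,) foreground probability of each patch's leaf.
    Returns a vote map whose peaks are object-center hypotheses.
    """
    h, w = image_shape
    votes = np.zeros(image_shape)
    for center, offsets, p_fg in zip(patch_centers, leaf_offsets, leaf_fg_probs):
        for x, y in center + offsets:              # voted object centers
            xi, yi = int(round(x)), int(round(y))
            if 0 <= xi < w and 0 <= yi < h:
                # Each stored offset casts a vote weighted by the leaf's
                # foreground probability, split evenly across its offsets.
                votes[yi, xi] += p_fg / len(offsets)
    return votes
```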


Computer Vision and Pattern Recognition | 2008

Action snippets: How many frames does human action recognition require?

Konrad Schindler; L. Van Gool

Visual recognition of human actions in video clips has been an active field of research in recent years. However, most published methods either analyse an entire video and assign it a single action label, or use relatively large look-ahead to classify each frame. Contrary to these strategies, human vision proves that simple actions can be recognised almost instantaneously. In this paper, we present a system for action recognition from very short sequences (“snippets”) of 1-10 frames, and systematically evaluate it on standard data sets. It turns out that even local shape and optic flow for a single frame are enough to achieve ≈90% correct recognition, and snippets of 5-7 frames (0.3-0.5 seconds of video) are enough to achieve a performance similar to the one obtainable with the entire video sequence.
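The snippet-level decision can be as simple as fusing per-frame scores; a minimal sum-rule sketch follows (the paper's classifier couples shape and optic-flow features, which is omitted here):

```python
import numpy as np

def classify_snippet(frame_scores):
    """Fuse per-frame action scores over a short snippet.

    frame_scores: (T, C) array of classifier scores for T frames
                  (T roughly in 1..10) and C action classes.
    Returns the index of the predicted action class.
    """
    # Sum-rule fusion: accumulate evidence over the snippet, then argmax.
    return int(np.argmax(frame_scores.sum(axis=0)))
```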


Computer Vision, Graphics, and Image Processing | 1985

Texture analysis anno 1983

L. Van Gool; Piet Dewaele; André Oosterlinck

In this paper the texture analysis methods being used at present are reviewed. Statistical as well as structural approaches are included and their performances are compared. Concerning the former approach, the gray level difference method, filter mask texture measures, Fourier power spectrum analysis, cooccurrence features, gray level run lengths, autocorrelation features, methods derived from texture models, relative extrema measures, and gray level profiles are discussed. Structural methods which describe texture by its primitives and some placement rules are treated as well. Attention is also paid to some essential preprocessing steps and to the influence of rotation and scale on the texture analysis methods. Finally, the problem of texture segmentation is briefly discussed.
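As one concrete instance of the statistical family, here is a minimal sketch of co-occurrence features for a single displacement; the quantization and the two feature choices are generic, not specific to the survey:

```python
import numpy as np

def cooccurrence_features(img, dx=1, dy=0, levels=8):
    """Gray-level co-occurrence features for displacement (dx, dy).

    img: 2D array of gray values, quantized to `levels` bins.
    Returns the contrast and energy of the co-occurrence matrix.
    """
    q = (img.astype(float) / (img.max() + 1e-9) * (levels - 1)).astype(int)
    glcm = np.zeros((levels, levels))
    h, w = q.shape
    # Count how often gray level i occurs displaced by (dx, dy) from j.
    for y in range(h - dy):
        for x in range(w - dx):
            glcm[q[y, x], q[y + dy, x + dx]] += 1
    glcm /= glcm.sum() + 1e-12                    # normalize to probabilities
    i, j = np.indices(glcm.shape)
    contrast = ((i - j) ** 2 * glcm).sum()        # local intensity variation
    energy = (glcm ** 2).sum()                    # uniformity of the texture
    return contrast, energy
```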


International Conference on Computer Vision | 2007

Depth and Appearance for Mobile Scene Analysis

Andreas Ess; Bastian Leibe; L. Van Gool

In this paper, we address the challenging problem of simultaneous pedestrian detection and ground-plane estimation from video while walking through a busy pedestrian zone. Our proposed system integrates robust stereo depth cues, ground-plane estimation, and appearance-based object detection in a principled fashion using a graphical model. Object-object occlusions lead to complex interactions in this model that make an exact solution computationally intractable. We therefore propose a novel iterative approach that first infers scene geometry using belief propagation and then resolves interactions between objects using a global optimization procedure. This approach leads to a robust solution in a few iterations, while allowing object detection to benefit from geometry estimation and vice versa. We quantitatively evaluate the performance of our proposed approach on several challenging test sequences showing strolls through busy shopping streets. Comparisons to various baseline systems show that it outperforms both a system using no scene geometry and one just relying on structure-from-motion without dense stereo.
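The coupling between ground plane and detections rests on a simple pinhole relation: under a flat-ground assumption, a detection's foot point fixes its distance. A minimal sketch with generic symbols, not the paper's notation:

```python
def depth_from_footpoint(v_foot, v_horizon, focal_px, cam_height):
    """Distance to a pedestrian from its foot point on a ground plane.

    v_foot:     image row of the detection's foot point (pixels).
    v_horizon:  image row of the horizon (pixels).
    focal_px:   focal length in pixels.
    cam_height: camera height above the ground plane (meters).
    """
    if v_foot <= v_horizon:
        raise ValueError("foot point must lie below the horizon")
    # Pinhole model: depth Z = f * h / (v_foot - v_horizon).
    return focal_px * cam_height / (v_foot - v_horizon)
```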


Computer Vision and Pattern Recognition | 2008

A mobile vision system for robust multi-person tracking

Andreas Ess; Bastian Leibe; Konrad Schindler; L. Van Gool

We present a mobile vision system for multi-person tracking in busy environments. Specifically, the system integrates continuous visual odometry computation with tracking-by-detection in order to track pedestrians in spite of frequent occlusions and egomotion of the camera rig. To achieve reliable performance under real-world conditions, it has long been advocated to extract and combine as much visual information as possible. We propose a way to closely integrate the vision modules for visual odometry, pedestrian detection, depth estimation, and tracking. The integration naturally leads to several cognitive feedback loops between the modules. Among others, we propose a novel feedback connection from the object detector to visual odometry which utilizes the semantic knowledge of detection to stabilize localization. Feedback loops always carry the danger that erroneous feedback from one module is amplified and causes the entire system to become unstable. We therefore incorporate automatic failure detection and recovery, allowing the system to continue when a module becomes unreliable. The approach is experimentally evaluated on several long and difficult video sequences from busy inner-city locations. Our results show that the proposed integration makes it possible to deliver stable tracking performance in scenes of previously infeasible complexity.
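The failure-handling idea can be sketched as a generic gating wrapper; the module, reliability predicate, and fallback below are caller-supplied placeholders, not the paper's actual components:

```python
def run_with_fallback(module, inputs, is_reliable, fallback):
    """Run a vision module and fall back when its output looks unreliable.

    module:      callable producing the module's output from `inputs`.
    is_reliable: predicate deciding whether to trust that output.
    fallback:    callable producing a safe substitute (e.g. a prediction
                 from the previous state) when the module fails.
    """
    output = module(inputs)
    return output if is_reliable(output) else fallback(inputs)

# Hypothetical usage: gate a visual-odometry estimate by its inlier ratio
# and fall back to a constant-velocity prediction when it drops too low.
pose = run_with_fallback(
    module=lambda x: {"pose": x["pose_guess"], "inlier_ratio": x["inliers"]},
    inputs={"pose_guess": "T_k", "inliers": 0.1},
    is_reliable=lambda out: out["inlier_ratio"] > 0.5,
    fallback=lambda x: {"pose": "constant-velocity prediction", "inlier_ratio": None},
)
```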


International Conference on Computer Vision | 2005

Modeling scenes with local descriptors and latent aspects

Pedro Quelhas; Florent Monay; Jean-Marc Odobez; Daniel Gatica-Perez; T. Tuytelaars; L. Van Gool

We present a new approach to model visual scenes in image collections, based on local invariant features and probabilistic latent space models. Our formulation provides answers to three open questions: (1) whether the invariant local features are suitable for scene (rather than object) classification; (2) whether unsupervised latent space models can be used for feature extraction in the classification task; and (3) whether the latent space formulation can discover visual co-occurrence patterns, motivating novel approaches for image organization and segmentation. Using a 9500-image dataset, our approach is validated on each of these issues. First, we show with extensive experiments on binary and multi-class scene classification tasks that a bag-of-visterms representation, derived from local invariant descriptors, consistently outperforms state-of-the-art approaches. Second, we show that probabilistic latent semantic analysis (PLSA) generates a compact scene representation, discriminative for accurate classification, and significantly more robust when less training data are available. Third, we have exploited the ability of PLSA to automatically extract visually meaningful aspects to propose new algorithms for aspect-based image ranking and context-sensitive image segmentation.
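A minimal sketch of the standard PLSA EM updates on a document-by-visterm count matrix follows; the initialization and stopping criterion are generic assumptions, and the paper's tempering/fold-in details are omitted:

```python
import numpy as np

def plsa(counts, n_aspects, n_iters=50, seed=0):
    """EM for probabilistic latent semantic analysis.

    counts: (D, W) matrix of visterm counts per image (document).
    Returns P(z|d) of shape (D, Z) and P(w|z) of shape (Z, W).
    """
    rng = np.random.default_rng(seed)
    D, W = counts.shape
    p_z_d = rng.random((D, n_aspects)); p_z_d /= p_z_d.sum(1, keepdims=True)
    p_w_z = rng.random((n_aspects, W)); p_w_z /= p_w_z.sum(1, keepdims=True)
    for _ in range(n_iters):
        # E-step: responsibilities P(z|d,w) for every (d, w) pair.
        joint = p_z_d[:, :, None] * p_w_z[None, :, :]        # (D, Z, W)
        joint /= joint.sum(1, keepdims=True) + 1e-12
        weighted = counts[:, None, :] * joint                # n(d,w) P(z|d,w)
        # M-step: re-estimate aspect mixtures and visterm distributions.
        p_w_z = weighted.sum(0); p_w_z /= p_w_z.sum(1, keepdims=True) + 1e-12
        p_z_d = weighted.sum(2); p_z_d /= p_z_d.sum(1, keepdims=True) + 1e-12
    return p_z_d, p_w_z
```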

Collaboration


Dive into L. Van Gool's collaborations.

Top Co-Authors

Tinne Tuytelaars (Katholieke Universiteit Leuven)
André Oosterlinck (Katholieke Universiteit Leuven)
Marc Proesmans (Katholieke Universiteit Leuven)
Theodoor Moons (Katholieke Universiteit Leuven)
Maarten Vergauwen (Katholieke Universiteit Leuven)
Eric Pauwels (Katholieke Universiteit Leuven)