
Publication


Featured research published by Philip Lenz.


Computer Vision and Pattern Recognition | 2012

Are we ready for autonomous driving? The KITTI vision benchmark suite

Andreas Geiger; Philip Lenz; Raquel Urtasun

Today, visual recognition systems are still rarely employed in robotics applications. Perhaps one of the main reasons for this is the lack of demanding benchmarks that mimic such scenarios. In this paper, we take advantage of our autonomous driving platform to develop novel challenging benchmarks for the tasks of stereo, optical flow, visual odometry/SLAM and 3D object detection. Our recording platform is equipped with four high resolution video cameras, a Velodyne laser scanner and a state-of-the-art localization system. Our benchmarks comprise 389 stereo and optical flow image pairs, stereo visual odometry sequences of 39.2 km length, and more than 200k 3D object annotations captured in cluttered scenarios (up to 15 cars and 30 pedestrians are visible per image). Results from state-of-the-art algorithms reveal that methods ranking high on established datasets such as Middlebury perform below average when moved outside the laboratory to the real world. Our goal is to reduce this bias by providing challenging benchmarks with novel difficulties to the computer vision community. Our benchmarks are available online at: www.cvlibs.net/datasets/kitti.
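As a rough illustration of how such a benchmark scores stereo methods, the sketch below computes the share of pixels whose disparity error exceeds a fixed threshold. The 3-pixel threshold and the convention that 0 marks pixels without ground truth are assumptions for this example, not details taken from the paper.

```python
import numpy as np

def stereo_outlier_rate(disp_est, disp_gt, tau=3.0):
    """Fraction of valid ground-truth pixels whose disparity error exceeds tau pixels.

    A KITTI-style stereo error measure; tau=3 px and the 0-means-invalid
    convention are assumptions for this sketch.
    """
    valid = disp_gt > 0                      # 0 marks pixels without ground truth
    err = np.abs(disp_est - disp_gt)[valid]  # per-pixel disparity error
    return float(np.mean(err > tau))         # share of outlier pixels

gt = np.array([[10.0, 0.0], [20.0, 15.0]])
est = np.array([[10.5, 3.0], [28.0, 15.2]])
print(stereo_outlier_rate(est, gt))  # 1 of the 3 valid pixels is off by > 3 px
```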


The International Journal of Robotics Research | 2013

Vision meets robotics: The KITTI dataset

Andreas Geiger; Philip Lenz; Christoph Stiller; Raquel Urtasun

We present a novel dataset captured from a VW station wagon for use in mobile robotics and autonomous driving research. In total, we recorded 6 hours of traffic scenarios at 10–100 Hz using a variety of sensor modalities such as high-resolution color and grayscale stereo cameras, a Velodyne 3D laser scanner and a high-precision GPS/IMU inertial navigation system. The scenarios are diverse, capturing real-world traffic situations, and range from freeways over rural areas to inner-city scenes with many static and dynamic objects. Our data is calibrated, synchronized and timestamped, and we provide the rectified and raw image sequences. Our dataset also contains object labels in the form of 3D tracklets, and we provide online benchmarks for stereo, optical flow, object detection and other tasks. This paper describes our recording platform, the data format and the utilities that we provide.
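For readers who want to inspect the raw data, a minimal sketch for reading one Velodyne scan is shown below, assuming the common KITTI binary layout of packed float32 (x, y, z, reflectance) tuples; the file path is a placeholder.

```python
import numpy as np

def load_velodyne_scan(path):
    """Read one raw Velodyne scan as an (N, 4) array of x, y, z, reflectance.

    Assumes the usual KITTI binary layout of packed float32 quadruples.
    """
    scan = np.fromfile(path, dtype=np.float32)
    return scan.reshape(-1, 4)

# Placeholder path; adjust to wherever the raw recordings are unpacked.
# points = load_velodyne_scan("velodyne_points/data/0000000000.bin")
# forward = points[points[:, 0] > 0]  # keep points in front of the vehicle
```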


International Conference on Computer Vision | 2015

FollowMe: Efficient Online Min-Cost Flow Tracking with Bounded Memory and Computation

Philip Lenz; Andreas Geiger; Raquel Urtasun

One of the most popular approaches to multi-target tracking is tracking-by-detection. Current min-cost flow algorithms which solve the data association problem optimally have three main drawbacks: they are computationally expensive, they assume that the whole video is given as a batch, and they scale badly in memory and computation with the length of the video sequence. In this paper, we address each of these issues, resulting in a computationally and memory-bounded solution. First, we introduce a dynamic version of the successive shortest-path algorithm which solves the data association problem optimally while reusing computation, resulting in faster inference than standard solvers. Second, we address the optimal solution to the data association problem when dealing with an incoming stream of data (i.e., online setting). Finally, we present our main contribution which is an approximate online solution with bounded memory and computation which is capable of handling videos of arbitrary length while performing tracking in real time. We demonstrate the effectiveness of our algorithms on the KITTI and PETS2009 benchmarks and show state-of-the-art performance, while being significantly faster than existing solvers.
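To make the tracking-by-detection formulation concrete, here is a toy min-cost flow data association sketch built with networkx. It uses a generic batch solver and made-up integer costs rather than the paper's dynamic successive shortest-path algorithm, so treat it only as an illustration of the graph structure: each detection contributes a unit-capacity observation edge, and tracks are units of flow pushed from a source to a sink.

```python
import networkx as nx

def associate(frames, num_tracks, det_cost=-100, link_scale=10,
              enter_cost=20, exit_cost=20):
    """Build a unit-capacity min-cost flow graph over detections and solve it.

    frames: list of per-frame lists of (x, y) detection centers.
    num_tracks: number of trajectories (units of flow) pushed source -> sink.
    All costs are illustrative integers; a negative detection cost rewards
    explaining a detection with some track.
    """
    G = nx.DiGraph()
    G.add_node("S", demand=-num_tracks)  # source emits num_tracks units of flow
    G.add_node("T", demand=num_tracks)   # sink absorbs them

    for t, dets in enumerate(frames):
        for i, (x, y) in enumerate(dets):
            u, v = ("u", t, i), ("v", t, i)
            G.add_edge(u, v, capacity=1, weight=det_cost)      # observation edge
            G.add_edge("S", u, capacity=1, weight=enter_cost)  # track birth
            G.add_edge(v, "T", capacity=1, weight=exit_cost)   # track death
            if t > 0:
                for j, (px, py) in enumerate(frames[t - 1]):
                    # transition edge: cost grows with displacement between frames
                    dist = int(link_scale * ((x - px) ** 2 + (y - py) ** 2) ** 0.5)
                    G.add_edge(("v", t - 1, j), u, capacity=1, weight=dist)

    # Network simplex; the graph is a DAG, so negative edge costs are safe.
    flow = nx.min_cost_flow(G)
    # An edge belongs to a track if one unit of flow passes through it.
    return [(a, b) for a in flow for b, f in flow[a].items()
            if f > 0 and a != "S" and b != "T"]

# Toy example: two detections moving right over three frames, two trajectories.
frames = [[(0, 0), (5, 5)], [(1, 0), (6, 5)], [(2, 0), (7, 5)]]
print(associate(frames, num_tracks=2))
```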


IEEE Intelligent Vehicles Symposium | 2010

Camera-based bidirectional reflectance measurement for road surface reflectivity classification

Martin Roser; Philip Lenz

In this paper we propose a novel framework for road reflectivity classification in cluttered traffic scenarios by measuring the bidirectional reflectance distribution function of road surfaces from inside a moving vehicle. The predominant restrictions in our application are a strongly limited field of observations and a weakly defined illumination environment. To overcome these problems, we estimate the parameters of an extended Oren-Nayar model that considers the diffuse and specular behavior of real-world surfaces and extrapolate the surface reflectivity measurements to unobservable angle combinations. Model ambiguities are decreased by utilizing standardized as well as customized reflection characteristics. In contrast to existing approaches that require special measurement setups, our approach can be implemented in vision-based driver assistance systems using radiometrically uncalibrated gray value cameras and GPS information. The effectiveness of our approach is demonstrated by a successful classification of the road surface reflectance of expressway scenes with low error rates.
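For intuition, a minimal sketch of the standard (diffuse-only) Oren-Nayar reflectance term is given below. The paper's extended model additionally handles specular behavior and fits its parameters from vehicle-mounted camera data, which this snippet does not attempt; the roughness values in the example are purely illustrative.

```python
import math

def oren_nayar(theta_i, theta_r, phi_diff, sigma, albedo=1.0):
    """Diffuse Oren-Nayar reflectance (first-order approximation, no specular lobe).

    theta_i, theta_r: incidence / viewing zenith angles in radians.
    phi_diff: azimuth difference between incidence and viewing directions.
    sigma: surface roughness (std. dev. of facet slopes, radians).
    """
    s2 = sigma ** 2
    A = 1.0 - 0.5 * s2 / (s2 + 0.33)
    B = 0.45 * s2 / (s2 + 0.09)
    alpha = max(theta_i, theta_r)
    beta = min(theta_i, theta_r)
    return (albedo / math.pi) * math.cos(theta_i) * (
        A + B * max(0.0, math.cos(phi_diff)) * math.sin(alpha) * math.tan(beta)
    )

# Rough (asphalt-like) vs. nearly smooth surface under the same geometry.
print(oren_nayar(0.6, 0.4, 0.2, sigma=0.5))
print(oren_nayar(0.6, 0.4, 0.2, sigma=0.05))
```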


International Conference on Smart Cities and Green ICT Systems | 2015

Laser Scanner and Camera Fusion for Automatic Obstacle Classification in ADAS Application

Aurelio Ponz; C. H. Rodríguez-Garavito; Fernando García; Philip Lenz; Christoph Stiller; José María Armingol

Reliability and accuracy are key in state-of-the-art driving assistance systems and autonomous driving applications. These applications make use of sensor fusion for reliable obstacle detection and classification in any meteorological and illumination condition. Laser scanner and camera are widely used as sensors to fuse because of their complementary capabilities. This paper presents novel techniques for automatic and unattended data alignment between the sensors, and artificial intelligence techniques are applied so that laser point clouds are used not only for obstacle detection but also for classification. Fusing classification information from both laser scanner and camera improves overall system reliability.
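As a hedged sketch of the alignment step, the snippet below projects laser points into the image plane given laser-to-camera extrinsics R, t and camera intrinsics K; all numerical values are hypothetical, and the paper's contribution is precisely the automatic estimation of such an alignment rather than this projection itself.

```python
import numpy as np

def project_laser_to_image(points, R, t, K):
    """Project 3D laser points (N x 3, laser frame) to pixel coordinates (u, v).

    R (3x3) and t (3,) are hypothetical laser-to-camera extrinsics;
    K (3x3) is the camera intrinsic matrix. Points behind the camera are dropped.
    """
    cam = points @ R.T + t           # laser frame -> camera frame
    cam = cam[cam[:, 2] > 0]         # keep points in front of the camera
    pix = cam @ K.T                  # perspective projection
    return pix[:, :2] / pix[:, 2:3]  # normalize by depth

K = np.array([[700.0, 0.0, 320.0],
              [0.0, 700.0, 240.0],
              [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.array([0.0, -0.1, 0.0])
pts = np.array([[1.0, 0.0, 5.0], [-0.5, 0.2, 8.0]])
print(project_laser_to_image(pts, R, t, K))
```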


Thesis | 2015

Efficient Min-cost Flow Tracking with Bounded Memory and Computation

Philip Lenz

This thesis is a contribution to solving multi-target tracking optimally for computer vision applications with real-time demands. We introduce a challenging benchmark, recorded with our autonomous driving platform AnnieWAY. Three main challenges of tracking are addressed: solving the data association (min-cost flow) problem faster than standard solvers, extending this approach to an online setting, and making it real-time capable through a tight approximation of the optimal solution.


International Conference on Vehicle Technology and Intelligent Transport Systems | 2015

Automatic Obstacle Classification using Laser and Camera Fusion

Aurelio Ponz; C. H. Rodríguez-Garavito; Fernando García; Philip Lenz; Christoph Stiller; José María Armingol

State-of-the-art driving assistance systems and autonomous driving applications employ sensor fusion in order to achieve reliable obstacle detection and classification under any meteorological and illumination condition. Fusion between laser and camera is widely used in ADAS applications in order to overcome the difficulties and limitations inherent to each of the sensors. In the system presented, novel techniques for automatic and unattended data alignment are used, and laser point clouds are exploited with artificial intelligence techniques to improve the reliability of the obstacle classification. New approaches to the problem of clustering sparse point clouds have been adopted, maximizing the information obtained from low-resolution lasers. After improving cluster detection, AI techniques have been used to classify the obstacle not only with vision but also with laser information. The fusion of the information acquired from both sensors, adding the classification capabilities of the laser, improves the reliability of the overall system.
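In its simplest form, clustering a sparse point cloud into obstacle candidates could look like the naive Euclidean flood-fill sketch below; the neighborhood radius and minimum cluster size are illustrative assumptions, not values from the paper.

```python
import numpy as np

def euclidean_cluster(points, radius=0.7, min_points=3):
    """Group points whose mutual gaps stay below `radius` (naive BFS flood fill).

    A toy stand-in for obstacle clustering on a sparse laser point cloud;
    radius and min_points are illustrative, not taken from the paper.
    """
    unassigned = set(range(len(points)))
    clusters = []
    while unassigned:
        seed = unassigned.pop()
        queue, members = [seed], [seed]
        while queue:
            i = queue.pop()
            near = [j for j in unassigned
                    if np.linalg.norm(points[i] - points[j]) < radius]
            for j in near:
                unassigned.remove(j)
                queue.append(j)
                members.append(j)
        if len(members) >= min_points:   # drop tiny clusters as clutter
            clusters.append(members)
    return clusters

pts = np.array([[0, 0, 0], [0.3, 0, 0], [0.5, 0.2, 0],    # one obstacle
                [5, 5, 0], [5.2, 5.1, 0], [5.4, 5.0, 0]])  # another obstacle
print(euclidean_cluster(pts))
```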


IEEE Intelligent Vehicles Symposium | 2011

Sparse scene flow segmentation for moving object detection in urban environments

Philip Lenz; Julius Ziegler; Andreas Geiger; Martin Roser


British Machine Vision Conference | 2018

CEREALS - Cost-Effective REgion-based Active Learning for Semantic Segmentation

Radek Mackowiak; Philip Lenz; Omair Ghori; Ferran Diego; Oliver Lange; Carsten Rother


IEEE Intelligent Vehicles Symposium | 2011

Novel two-stage algorithm for non-parametric cast shadow recognition

Martin Roser; Philip Lenz

Collaboration


Dive into Philip Lenz's collaborations.

Top Co-Authors

Christoph Stiller, Karlsruhe Institute of Technology
Martin Roser, Karlsruhe Institute of Technology
Carsten Rother, Dresden University of Technology
Julius Ziegler, Karlsruhe Institute of Technology
Ferran Diego, Autonomous University of Barcelona