Publications


Featured research published by Uwe Franke.


Computer Vision and Pattern Recognition | 2016

The Cityscapes Dataset for Semantic Urban Scene Understanding

Marius Cordts; Mohamed Omran; Sebastian Ramos; Timo Rehfeld; Markus Enzweiler; Rodrigo Benenson; Uwe Franke; Stefan Roth; Bernt Schiele

Visual understanding of complex urban street scenes is an enabling factor for a wide range of applications. Object detection has benefited enormously from large-scale datasets, especially in the context of deep learning. For semantic urban scene understanding, however, no current dataset adequately captures the complexity of real-world urban scenes. To address this, we introduce Cityscapes, a benchmark suite and large-scale dataset to train and test approaches for pixel-level and instance-level semantic labeling. Cityscapes comprises a large, diverse set of stereo video sequences recorded in streets from 50 different cities. 5000 of these images have high-quality pixel-level annotations; 20,000 additional images have coarse annotations to enable methods that leverage large volumes of weakly labeled data. Crucially, our effort exceeds previous attempts in terms of dataset size, annotation richness, scene variability, and complexity. Our accompanying empirical study provides an in-depth analysis of the dataset characteristics, as well as a performance evaluation of several state-of-the-art approaches based on our benchmark.
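
The pixel-level benchmark is scored per class with intersection-over-union. A minimal sketch of that metric, assuming integer label maps and the common convention of an ignore label of 255 (illustrative only, not the official Cityscapes evaluation scripts):

    import numpy as np

    def per_class_iou(pred, gt, num_classes, ignore_label=255):
        """Per-class intersection-over-union for pixel-level semantic labeling.
        pred, gt: integer label maps of identical shape; pixels whose ground
        truth equals ignore_label are excluded from the score."""
        valid = gt != ignore_label
        ious = []
        for c in range(num_classes):
            p = (pred == c) & valid
            g = (gt == c) & valid
            union = np.logical_or(p, g).sum()
            # a class absent from both maps contributes no score
            ious.append(np.nan if union == 0 else np.logical_and(p, g).sum() / union)
        return np.array(ious)

    # mean IoU over the classes that actually occur:
    # miou = np.nanmean(per_class_iou(pred, gt, num_classes=19))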


IEEE Intelligent Systems & Their Applications | 1998

Autonomous driving goes downtown

Uwe Franke; D. Gavrila; S. Görzig; F. Lindner; F. Puetzold; C. Wöhler

Most computer-vision systems for vehicle guidance target highway scenarios. Developing autonomous or driver-assistance systems for complex urban traffic poses new algorithmic and system-architecture challenges. To address these issues, the authors introduce their intelligent Stop&Go system and discuss appropriate algorithms and approaches for vision-module control.


IEEE Intelligent Transportation Systems Magazine | 2014

Making Bertha Drive: An Autonomous Journey on a Historic Route

Julius Ziegler; Philipp Bender; Markus Schreiber; Henning Lategahn; Tobias Strauss; Christoph Stiller; Thao Dang; Uwe Franke; Nils Appenrodt; Christoph Gustav Keller; Eberhard Kaus; Ralf Guido Herrtwich; Clemens Rabe; David Pfeiffer; Frank Lindner; Fridtjof Stein; Friedrich Erbs; Markus Enzweiler; Carsten Knöppel; Jochen Hipp; Martin Haueis; Maximilian Trepte; Carsten Brenk; Andreas Tamke; Mohammad Ghanaat; Markus Braun; Armin Joos; Hans Fritz; Horst Mock; Martin Hein

125 years after Bertha Benz completed the first overland journey in automotive history, the Mercedes-Benz S-Class S 500 INTELLIGENT DRIVE followed the same route from Mannheim to Pforzheim, Germany, in a fully autonomous manner. The autonomous vehicle was equipped with close-to-production sensor hardware and relied solely on vision and radar sensors in combination with accurate digital maps to obtain a comprehensive understanding of complex traffic situations. The historic Bertha Benz Memorial Route is particularly challenging for autonomous driving. The course taken by the autonomous vehicle had a length of 103 km and covered rural roads, 23 small villages, and major cities (e.g. downtown Mannheim and Heidelberg). The route posed a large variety of difficult traffic scenarios including intersections with and without traffic lights, roundabouts, and narrow passages with oncoming traffic. This paper gives an overview of the autonomous vehicle and presents details on vision and radar-based perception, digital road maps and video-based self-localization, as well as motion planning in complex urban scenarios.


DAGM Conference on Pattern Recognition | 2005

6D-Vision: fusion of stereo and motion for robust environment perception

Uwe Franke; Clemens Rabe; Hernán Badino; Stefan K. Gehrig

Obstacle avoidance is one of the most important challenges for mobile robots as well as future vision-based driver assistance systems. This task requires a precise extraction of depth and the robust and fast detection of moving objects. In order to reach these goals, this paper considers vision as a process in space and time. It presents a powerful fusion of depth and motion information for image sequences taken from a moving observer. 3D position and 3D motion for a large number of image points are estimated simultaneously by means of Kalman filters. There is no need for prior, error-prone segmentation. Thus, one obtains a rich 6D representation that allows the detection of moving obstacles even in the presence of partial occlusion of the foreground or background.
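
Conceptually, every tracked image point carries a six-dimensional state, 3D position plus 3D velocity, which a constant-velocity Kalman filter updates each time a new stereo triangulation of the point arrives. A minimal single-point sketch; the frame rate and noise values are assumptions for illustration, not the authors' implementation:

    import numpy as np

    dt = 0.04  # frame interval in seconds (an assumed 25 Hz camera)

    # State x = [X, Y, Z, Vx, Vy, Vz]: 3D position and 3D velocity.
    F = np.eye(6)                                  # constant-velocity model
    F[:3, 3:] = dt * np.eye(3)                     # position += dt * velocity
    H = np.hstack([np.eye(3), np.zeros((3, 3))])   # stereo measures position only
    Q = np.diag([1e-4] * 3 + [1e-2] * 3)           # process noise (illustrative)
    R = np.diag([0.05, 0.05, 0.5])                 # stereo depth noisiest along Z

    def kalman_step(x, P, z):
        """One predict/update cycle for a single tracked image point;
        z is the 3D position triangulated from the current stereo pair."""
        x = F @ x                       # predict state
        P = F @ P @ F.T + Q             # predict covariance
        S = H @ P @ H.T + R             # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)  # Kalman gain
        x = x + K @ (z - H @ x)         # correct with the measurement
        P = (np.eye(6) - K @ H) @ P
        return x, P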


IEEE Intelligent Vehicles Symposium | 2000

Real-time stereo vision for urban traffic scene understanding

Uwe Franke; A. Joos

This paper presents a precise correlation-based stereo vision approach that allows real-time interpretation of traffic scenes and autonomous Stop&Go on a standard PC. The high speed is achieved by means of a multiresolution analysis. The approach delivers stereo disparities with sub-pixel accuracy and allows precise distance estimates. Traffic applications using this method are described.
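
Sub-pixel accuracy is what makes the distance estimates precise: depth follows Z = f·b/d for focal length f, baseline b, and disparity d, so at long range even a fraction of a pixel of disparity error translates into a large depth error. A common sub-pixel scheme, sketched here under the assumption of correlation costs sampled at integer disparities (not necessarily the paper's exact interpolation), fits a parabola through the costs around the best match:

    def subpixel_disparity(cost, d_best):
        """Refine an integer disparity by fitting a parabola through the
        correlation costs at d_best - 1, d_best, d_best + 1."""
        c_m, c_0, c_p = cost[d_best - 1], cost[d_best], cost[d_best + 1]
        denom = c_m - 2.0 * c_0 + c_p
        if denom == 0.0:               # flat cost curve: keep the integer result
            return float(d_best)
        return d_best + 0.5 * (c_m - c_p) / denom

    def depth_from_disparity(d, f_px, baseline_m):
        """Z = f * b / d for a rectified stereo pair (d in pixels)."""
        return f_px * baseline_m / d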


European Conference on Computer Vision | 2008

Efficient Dense Scene Flow from Sparse or Dense Stereo Data

Andreas Wedel; Clemens Rabe; Tobi Vaudrey; Thomas Brox; Uwe Franke; Daniel Cremers

This paper presents a technique for estimating the three-dimensional velocity vector field that describes the motion of each visible scene point (scene flow). The technique presented uses two consecutive image pairs from a stereo sequence. The main contribution is to decouple the position and velocity estimation steps, and to estimate dense velocities using a variational approach. We enforce the scene flow to yield consistent displacement vectors in the left and right images. The decoupling strategy has two main advantages: first, the disparity estimation technique can be chosen independently and may yield either sparse or dense correspondences; second, frame rates of 5 fps can be achieved on standard consumer hardware. The approach provides dense velocity estimates with accurate results at distances up to 50 meters.
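
The decoupling is easy to picture point-wise: given a pixel's disparity at time t, its optical flow into the next frame, and its disparity at time t+1, the pinhole model yields its 3D position at both times, and the velocity is their difference over the frame interval. A per-pixel sketch with placeholder camera parameters (the paper estimates the dense fields variationally rather than point by point):

    def backproject(u, v, d, f, cx, cy, baseline):
        """Pinhole back-projection of pixel (u, v) with disparity d."""
        Z = f * baseline / d
        X = (u - cx) * Z / f
        Y = (v - cy) * Z / f
        return X, Y, Z

    def scene_flow_point(u, v, d0, flow_uv, d1, cam, dt):
        """3D velocity of one pixel from its disparity d0 at time t, its
        optical flow (du, dv), and its disparity d1 at time t + dt.
        cam = (f, cx, cy, baseline)."""
        du, dv = flow_uv
        p0 = backproject(u, v, d0, *cam)
        p1 = backproject(u + du, v + dv, d1, *cam)
        return tuple((b - a) / dt for a, b in zip(p0, p1))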


International Journal of Computer Vision | 2011

Stereoscopic Scene Flow Computation for 3D Motion Understanding

Andreas Wedel; Thomas Brox; Tobi Vaudrey; Clemens Rabe; Uwe Franke; Daniel Cremers

Building upon recent developments in optical flow and stereo matching estimation, we propose a variational framework for the estimation of stereoscopic scene flow, i.e., the motion of points in the three-dimensional world from stereo image sequences. The proposed algorithm takes into account image pairs from two consecutive times and computes both depth and a 3D motion vector associated with each point in the image. In contrast to previous works, we partially decouple the depth estimation from the motion estimation, which has many practical advantages. The variational formulation is quite flexible and can handle both sparse and dense disparity maps. The method is also very efficient: with the depth map computed on an FPGA and the scene flow computed on the GPU, the algorithm runs at 20 frames per second on QVGA images (320×240 pixels). Furthermore, we present solutions to two important problems in scene flow estimation: violations of intensity consistency between input images, and uncertainty measures for the scene flow result.
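
One ingredient of such uncertainty measures follows from first-order error propagation alone: because depth is inversely proportional to disparity, a disparity uncertainty σ_d maps to a depth uncertainty that grows quadratically with distance. A one-function sketch of this propagation (the paper's covariance treatment goes further):

    def depth_sigma(d, sigma_d, f_px, baseline_m):
        """First-order propagation of disparity noise into depth:
        Z = f * b / d  =>  sigma_Z ~= (f * b / d**2) * sigma_d."""
        return f_px * baseline_m / d ** 2 * sigma_d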


IEEE Intelligent Vehicles Symposium | 2010

Efficient representation of traffic scenes by means of dynamic stixels

David Pfeiffer; Uwe Franke

Correlation-based stereo vision has proven its power in commercially available driver assistance systems. Recently, real-time dense stereo vision has become available on inexpensive FPGA hardware. In order to manage the huge amount of data, a medium-level representation named the “Stixel World” has been proposed for further analysis. In this representation, the free space in front of the vehicle is limited by adjacent rectangular sticks of a certain width. Distance and height of each so-called stixel are determined by the parts of the obstacle it represents. The Stixel World is a compact but flexible representation of the three-dimensional traffic situation. The underlying model assumption is that objects stand on the ground and have an approximately vertical pose with a flat surface. So far, this representation has been static, since it is computed for each frame independently. Driver assistance, however, is most interested in the pose and motion of moving obstacles. For this reason, we introduce the tracking of stixels in this paper. Using the 6D-Vision Kalman filter framework, lateral as well as longitudinal motion is estimated for each stixel. That way, grouping stixels by similar motion and detecting moving obstacles become significantly simpler. The new dynamic Stixel World has proven to be well suited as a common basis for the scene understanding tasks of driver assistance and autonomous systems.
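
In its simplest static form, a stixel can be read off a dense disparity map column by column: find the base point where the column's disparity leaves the ground-plane model, then the height up to which the disparity stays near the object's. The published method estimates free space and height boundaries globally with dynamic programming; the heavily simplified per-column sketch below uses illustrative thresholds only:

    def column_stixel(disp_col, ground_disp, tol=1.0):
        """Extract one stixel from a single image column.
        disp_col:    measured disparities of the column, top row first.
        ground_disp: expected ground-plane disparity per row (same indexing).
        Returns (base_row, top_row, disparity) or None for free space."""
        rows = len(disp_col)
        base = None
        # base point: lowest row clearly above the ground-plane disparity
        for r in range(rows - 1, -1, -1):
            if disp_col[r] > ground_disp[r] + tol:
                base = r
                break
        if base is None:
            return None                # the whole column is free space
        d_obj = disp_col[base]
        # height: walk upward while the disparity stays near the object's
        top = base
        while top > 0 and abs(disp_col[top - 1] - d_obj) < tol:
            top -= 1
        return base, top, d_obj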


International Conference on Computer Vision | 2013

Making Bertha See

Uwe Franke; David Pfeiffer; Clemens Rabe; Carsten Knöppel; Markus Enzweiler; Fridtjof Stein; Ralf Guido Herrtwich

With the market introduction of the 2014 Mercedes-Benz S-Class vehicle equipped with a stereo camera system, autonomous driving has become a reality, at least in low-speed highway scenarios. This raises hope for a fast evolution of autonomous driving that also extends to rural and urban traffic situations. In August 2013, an S-Class vehicle with close-to-production sensors drove completely autonomously for about 100 km from Mannheim to Pforzheim, Germany, following the well-known historic Bertha Benz Memorial Route. Next-generation stereo vision was the main sensing component and as such formed the basis for the indispensable comprehensive understanding of complex traffic situations, which are typical for narrow European villages. This successful experiment has proved both the maturity and the significance of machine vision for autonomous driving. This paper presents details of the employed vision algorithms for object recognition and tracking, free-space analysis, traffic light recognition, lane recognition, as well as self-localization.


IEEE Intelligent Vehicles Symposium | 2000

Advanced lane recognition: fusing vision and radar

Axel Gern; Uwe Franke; Paul Levi

One major problem of common vision-based lane recognition systems is their susceptibility to weather. These problems mainly stem from the fact that such systems only look for road structures. The course of the road ahead, however, can also be estimated from the positions of the cars driving in front. This paper presents a fusion approach that takes into account leading vehicles detected by radar. The Kalman filter applied here delivers not only improved measurements of the road curvature but also a precise estimate of the lateral position of the observed cars. This information can be used to improve the lane assignment of ACC systems.
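
The fusion can be sketched as a Kalman filter over a simplified clothoid lane model whose state is updated both by vision measurements of the lane markings and by radar-tracked leading vehicles, which constrain the curvature through their lateral positions. All matrices and noise values below are assumptions for illustration, not the paper's parameterization:

    import numpy as np

    # Simplified clothoid lane state: [lateral offset y0, heading psi, curvature c0]
    x = np.zeros(3)
    P = np.eye(3)

    def kf_update(x, P, H, z, R):
        """Standard Kalman measurement update."""
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        return x + K @ (z - H @ x), (np.eye(len(x)) - K @ H) @ P

    # Vision measurement: lane markings observe all three state components.
    H_vision = np.eye(3)
    R_vision = np.diag([0.05, 0.01, 1e-5])          # illustrative noise values

    # Radar measurement: a leading vehicle at distance L, assumed to follow
    # its lane, constrains the state through its lateral position:
    #   y_vehicle ~ y0 + psi * L + 0.5 * c0 * L**2
    def H_radar(L):
        return np.array([[1.0, L, 0.5 * L ** 2]])

    # e.g. a leading car tracked by radar at 60 m, lateral position 0.8 m:
    x, P = kf_update(x, P, H_radar(60.0), np.array([0.8]), np.array([[0.25]]))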

Collaboration


Top co-authors of Uwe Franke:

Rudolf Mester (Goethe University Frankfurt)
Hernán Badino (Goethe University Frankfurt)