Benjamin Ranft
Center for Information Technology
Publications
Featured research published by Benjamin Ranft.
IEEE Transactions on Intelligent Transportation Systems | 2012
Andreas Geiger; Martin Lauer; Frank Moosmann; Benjamin Ranft; Holger H. Rapp; Christoph Stiller; Julius Ziegler
In this paper, we present the concepts and methods developed for the autonomous vehicle known as AnnieWAY, which was our winning entry in the 2011 Grand Cooperative Driving Challenge. We describe algorithms for sensor fusion, vehicle-to-vehicle communication, and cooperative control. Furthermore, we analyze the performance of the proposed methods and compare them with those of competing teams. We close with our results from the competition and lessons learned.
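Cooperative control of the kind described above typically builds on a spacing policy fed by V2V data. The sketch below shows a generic constant time-gap controller common in cooperative adaptive cruise control; all gains and parameters are illustrative assumptions, not AnnieWAY's actual controller.

```python
# Hypothetical constant time-gap spacing policy for cooperative ACC.
# Gains and parameters are illustrative, not taken from the paper.

def cacc_accel(gap, ego_speed, lead_speed, lead_accel,
               time_gap=0.7, standstill=2.0,
               k_gap=0.2, k_speed=0.6, k_ff=1.0):
    """Return a commanded acceleration [m/s^2].

    gap        -- measured distance to the leading vehicle [m]
    ego_speed  -- own speed [m/s]
    lead_speed -- leader speed, e.g. received via V2V [m/s]
    lead_accel -- leader acceleration, a feed-forward term from V2V
    """
    desired_gap = standstill + time_gap * ego_speed
    gap_error = gap - desired_gap          # positive if gap too large
    speed_error = lead_speed - ego_speed   # positive if leader is faster
    return k_gap * gap_error + k_speed * speed_error + k_ff * lead_accel

# At the desired gap with matched speeds, only the leader's
# acceleration feed-forward remains (here: zero).
print(cacc_accel(gap=16.0, ego_speed=20.0, lead_speed=20.0, lead_accel=0.0))
```

The V2V-transmitted leader acceleration is what distinguishes cooperative from purely sensor-based following: it lets the follower react before the gap or relative speed has changed.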
IEEE Transactions on Intelligent Vehicles | 2016
Benjamin Ranft; Christoph Stiller
Humans assimilate information from the traffic environment mainly through visual perception. Obviously, the dominant information required to conduct a vehicle can be acquired with visual sensors. However, in contrast to most other sensor principles, video signals contain relevant information in a highly indirect manner, and hence visual sensing requires sophisticated machine vision and image understanding techniques. This paper provides an overview of the state of research in the field of machine vision for intelligent vehicles. The functional spectrum addressed covers the range from advanced driver assistance systems to autonomous driving. The organization of the article adopts the typical order in image processing pipelines that successively condense the rich information and vast amount of data in video sequences. Data-intensive low-level “early vision” techniques first extract features that are later grouped and further processed to obtain information of direct relevance for vehicle guidance. Recognition and classification schemes allow the identification of specific objects in a traffic scene. Recently, semantic labeling techniques using convolutional neural networks have achieved impressive results in this field. High-level decisions of intelligent vehicles are often influenced by map data. The emerging role of machine vision in the mapping and localization process is illustrated using the example of autonomous driving. Scene representation methods are discussed that organize the information from all sensors and data sources and thus build the interface between perception and planning. Recently, vision benchmarks have been tailored to various tasks in traffic scene perception that provide a metric for the rich diversity of machine vision methods. Finally, the paper addresses computing architectures suited to real-time implementation. Throughout the paper, numerous specific examples and real-world experiments with prototype vehicles are presented.
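The pipeline order the survey describes, from data-intensive early vision through grouping to recognition, can be sketched as a chain of stages that successively condense the data. All stage functions below are toy placeholders, not real vision algorithms.

```python
# Toy skeleton mirroring the pipeline order discussed in the survey:
# each stage condenses the output of the previous one. The stages are
# hypothetical placeholders operating on plain numbers, not images.

def early_vision(pixels):
    # low-level feature extraction: keep only salient measurements
    return [p for p in pixels if abs(p) > 0.5]

def grouping(features):
    # group features into object hypotheses (here: by sign)
    pos = [f for f in features if f > 0]
    neg = [f for f in features if f < 0]
    return [g for g in (pos, neg) if g]

def recognition(groups):
    # classify each group; here a trivial label by average value
    return ["obstacle" if sum(g) / len(g) > 0 else "free" for g in groups]

def run_pipeline(pixels, stages=(early_vision, grouping, recognition)):
    data = pixels
    for stage in stages:       # successive condensation of the data
        data = stage(data)
    return data

labels = run_pipeline([0.9, -0.7, 0.1, 0.8, -0.6])
print(labels)
```

The point of the structure is that each stage's output is smaller and more semantic than its input, which is exactly the ordering principle the article uses to organize the field.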
International Conference on Intelligent Transportation Systems | 2010
Bernd Kitt; Benjamin Ranft; Henning Lategahn
In this paper we propose an approach for dynamic scene perception from a moving vehicle equipped with a stereo camera rig. The approach is based solely on visual information, hence it is applicable to a large class of autonomous robots working in indoor as well as outdoor environments. The proposed approach consists of egomotion estimation based on disparity and optical flow, using the Longuet-Higgins equations combined with an implicit extended Kalman filter. Based on this egomotion estimate, moving object detection and tracking are performed. Each tracked object is labeled with a unique ID while it is visible in the images. The proposed algorithm was evaluated on numerous challenging real-world image sequences.
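The core of egomotion-based moving object detection is a residual test: flow vectors that disagree with the flow predicted from the estimated egomotion belong to independently moving objects. The sketch below assumes the predicted flow is already available (in the paper it comes from the Kalman-filtered egomotion estimate); thresholds and values are illustrative.

```python
# Hypothetical residual test behind egomotion-based moving object
# detection: features whose measured optical flow deviates from the
# flow predicted by the egomotion estimate are flagged as moving.

def moving_mask(measured_flow, predicted_flow, threshold=1.0):
    """Per-feature flag: True where the flow residual exceeds threshold.

    measured_flow / predicted_flow -- lists of (u, v) pixel displacements
    threshold -- residual magnitude [px] above which a feature counts
                 as independently moving
    """
    mask = []
    for (mu, mv), (pu, pv) in zip(measured_flow, predicted_flow):
        residual = ((mu - pu) ** 2 + (mv - pv) ** 2) ** 0.5
        mask.append(residual > threshold)
    return mask

measured  = [(2.0, 0.0), (5.0, 3.0), (1.0, 1.0)]
predicted = [(2.1, 0.1), (1.0, 0.5), (0.9, 1.2)]
print(moving_mask(measured, predicted))  # only the second feature moves
```

Features flagged this way can then be clustered and handed to a tracker that assigns the per-object IDs mentioned in the abstract.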
International Conference on Intelligent Transportation Systems | 2014
Benjamin Ranft; Tobias Strauß
Stereo cameras enable a 3D reconstruction of viewed scenes and are therefore well-suited sensors for many advanced driver assistance systems and for autonomous driving. Modern algorithms for estimating distances for every image pixel achieve high-quality results, but their real-time capability is very limited. In contrast, window-based local methods can be implemented very efficiently but are more prone to errors. This is particularly true for spatial changes of distance within the matching window, most prominently on surfaces such as the road, which are not parallel to but rather slanted towards the image plane. In this paper we present a method to compensate for the impact of this effect for arbitrarily oriented sets of planes. It does not depend on any modifications to the actual distance estimation; instead, it only applies specific transformations to input images and intermediate results. By combining this approach with existing implementations which efficiently use either multi-core or graphics processors, we were able to significantly increase quality while maintaining real-time throughput on a compact target system.
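The idea of compensating a slanted plane before matching can be illustrated in one dimension: remove the disparity the plane would induce from each image row, let the local matcher find only the small fronto-parallel residual, then add the plane disparity back. The linear ground-plane model below is an assumption for illustration, not the paper's exact transformation.

```python
# 1-D sketch of plane compensation for local stereo matching:
# subtract the expected ground-plane disparity per row, match under a
# fronto-parallel assumption, then restore the plane component.
# The linear plane model (a, b) is a hypothetical example.

def plane_disparity(row, a=0.5, b=2.0):
    # for a road plane, disparity grows roughly linearly towards the
    # bottom image rows (larger row index = closer road surface)
    return a * row + b

def compensate_and_match(local_disparity_estimates):
    """local_disparity_estimates[row] holds the small residual disparity
    a local matcher finds after the plane-induced shift was removed."""
    return [plane_disparity(r) + d
            for r, d in enumerate(local_disparity_estimates)]

# after compensation the matcher only needs to recover small residuals,
# so its fixed matching window no longer straddles large depth changes
residuals = [0.0, -0.2, 0.1, 0.0]
print(compensate_and_match(residuals))
```

Because only the inputs and outputs are transformed, any off-the-shelf local matcher can be used unmodified, which is what makes the approach compatible with existing multi-core and GPU implementations.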
International Conference on Intelligent Transportation Systems | 2010
Bernd Kitt; Benjamin Ranft; Henning Lategahn
In this paper we propose a new block-matching-based approach for the estimation of nearly dense optical flow fields in image sequences. We focus on applications to autonomous vehicles, in which a dominant movement of the camera along its optical axis is present. The presented algorithm exploits the geometric relations between the two viewpoints induced by the epipolar geometry, hence it is applicable to the static parts of the scene. These relations are used to remap the images so that the resulting virtual images resemble images captured by an axial stereo camera setup. This alignment dramatically reduces the computational complexity of the correspondence search and avoids false correspondences caused, e.g., by repeated patterns. Experiments on challenging real-world sequences show the accuracy of the proposed approach.
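The constraint being exploited can be stated simply: under (near-)forward motion, a static point's image moves along the ray from the focus of expansion (FOE) through the point, so the 2D correspondence search collapses to a 1D search along that ray. The sketch below illustrates this geometry with assumed coordinates; it is not the paper's remapping itself.

```python
# Hypothetical illustration of the epipolar constraint for forward
# motion: static points flow radially away from the focus of expansion
# (FOE), so candidate correspondences lie on a 1-D ray per point.

def flow_direction(point, foe):
    """Unit direction of the expected flow for a static point."""
    dx, dy = point[0] - foe[0], point[1] - foe[1]
    norm = (dx * dx + dy * dy) ** 0.5
    return (dx / norm, dy / norm)

def predicted_position(point, foe, magnitude):
    """Candidate correspondence at a given 1-D search offset [px]."""
    ux, uy = flow_direction(point, foe)
    return (point[0] + magnitude * ux, point[1] + magnitude * uy)

foe = (320.0, 240.0)   # assumed image center for pure forward motion
p = (320.0, 340.0)     # a point straight below the FOE
print(predicted_position(p, foe, 5.0))   # expands radially downwards
```

Searching only along these rays is what reduces complexity and suppresses false matches from repeated patterns: a repeated texture off the ray is never even considered as a candidate.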
International Parallel and Distributed Processing Symposium | 2014
Benjamin Ranft; Oliver Denninger; Philip Pfaffe
Modern processors have the potential of executing compute-intensive programs quickly and efficiently, but require applications to be adapted to their ever-increasing parallelism. Here, heterogeneous systems add complexity by combining processing units with different characteristics. Scheduling should thus consider the performance of each processor as well as competing workloads and varying inputs. To assist programmers of stream processing applications in facing this challenge, we present libHawaii, an open-source library for cooperatively using all processors of heterogeneous systems easily and efficiently. It supports exploiting data-flow, data-element, and task parallelism via pipelining, partitioning, and demand-based allocation of consecutive work items. Scheduling is automatically adapted online to continuously optimize performance and energy efficiency. Our C++ library does not depend on specific hardware architectures or parallel computing frameworks. However, it facilitates maximizing the throughput of compatible GPUs by overlapping computations and memory transfers while maintaining low latencies. This paper describes the algorithms and implementation of libHawaii and demonstrates its usage in existing applications. We experimentally evaluate our library using two examples: general matrix multiplication (GEMM) is a simple yet important building block of many high-performance computing applications. Complementarily, the detection, extraction, and matching of sparse image features exhibits greater complexity, including nondeterministic memory access and synchronization.
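Demand-based allocation of consecutive work items, one of the patterns named in the abstract, can be sketched with a shared queue: each worker pulls the next item as soon as it becomes free, so faster processors automatically process more items. libHawaii itself is a C++ library; this Python threading version only illustrates the scheduling idea.

```python
# Toy sketch of demand-based work allocation (not libHawaii's API):
# workers pull items from a shared queue on demand, so faster workers
# automatically receive a larger share of the load.
import queue
import threading

def demand_based(items, n_workers, work_fn):
    todo = queue.Queue()
    items = list(items)
    for i, item in enumerate(items):
        todo.put((i, item))          # index keeps output order stable
    results = [None] * len(items)

    def worker():
        while True:
            try:
                i, item = todo.get_nowait()   # pull next item on demand
            except queue.Empty:
                return                        # no work left: terminate
            results[i] = work_fn(item)

    threads = [threading.Thread(target=worker) for _ in range(n_workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results

print(demand_based(range(8), n_workers=3, work_fn=lambda x: x * x))
```

Unlike a fixed up-front partitioning, this scheme needs no performance model of the devices: load balance emerges from the pull pattern, at the cost of per-item queue overhead.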
IEEE International Conference on High Performance Computing, Data, and Analytics | 2012
Benjamin Ranft; Oliver Denninger
Today's systems, from smartphones to workstations, are becoming increasingly parallel and heterogeneous: processing units not only consist of more and more identical cores; systems also commonly contain either a discrete general-purpose GPU alongside the CPU or even integrate both on a single chip. To benefit from this trend, software should utilize all available resources and adapt to varying configurations, including different CPU and GPU performance or competing processes. This paper investigates parallelization and adaptation strategies applied to the example application of dense stereo vision, which forms a basis for, among others, advanced driver assistance systems, robotics, and gesture recognition, and which represents a broad range of similar computer vision methods. For this problem, task-driven as well as data-element- and data-flow-driven parallelization approaches are feasible. To achieve real-time performance, we first utilize data-element parallelism individually on each device. On this basis, we develop and implement strategies for cooperation between heterogeneous processing units and for automatic adaptation to the hardware available at run-time. Each approach is described with regard to, among other aspects, the propagation of data to processors and its relation to established methods. An experimental evaluation on multiple test systems reveals the advantages and limitations of each strategy.
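One way to adapt a data-element split between two heterogeneous devices at run-time is to update each device's share from its measured throughput so both finish a frame at roughly the same time. The sketch below is a hypothetical feedback rule for illustration; the paper's actual adaptation strategies may differ.

```python
# Hypothetical run-time adaptation of a CPU/GPU data-element split:
# each frame, re-estimate per-device throughput from measured times and
# move the split part-way towards the balanced share (damped update).

def adapt_split(share, time_a, time_b, rate=0.5):
    """share: fraction of each frame assigned to device A (0..1).
    time_a / time_b: measured processing times for the last frame."""
    throughput_a = share / time_a            # items per second, device A
    throughput_b = (1.0 - share) / time_b
    balanced = throughput_a / (throughput_a + throughput_b)
    return share + rate * (balanced - share)  # damping absorbs noise

# Simulation: device A is actually 3x faster than device B, so the
# share should converge towards 3 / (3 + 1) = 0.75.
speed_a, speed_b = 3.0, 1.0
share = 0.5
for _ in range(10):
    time_a = share / speed_a                 # time A needs for its part
    time_b = (1.0 - share) / speed_b
    share = adapt_split(share, time_a, time_b)
print(round(share, 3))
```

The damping factor trades adaptation speed against robustness: with noisy timings (e.g. from competing processes, as mentioned in the abstract) a full jump to the balanced share each frame would oscillate.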
Archive | 2013
Benjamin Ranft
IEEE Intelligent Vehicles Symposium (IV) | 2011
Benjamin Ranft; Timo Schoenwald; Bernd Kitt
Conference on Design and Architectures for Signal and Image Processing | 2012
Alexander Koch; Benjamin Ranft; Alexander Viehl; Oliver Bringmann; Wolfgang Rosenstiel