Publication


Featured research published by Friedrich Fraundorfer.


IEEE Robotics & Automation Magazine | 2011

Visual Odometry [Tutorial]

Davide Scaramuzza; Friedrich Fraundorfer

Visual odometry (VO) is the process of estimating the egomotion of an agent (e.g., vehicle, human, or robot) using only the input of a single camera or multiple cameras attached to it. Application domains include robotics, wearable computing, augmented reality, and automotive. The term VO was coined in 2004 by Nister in his landmark paper. It was chosen for its similarity to wheel odometry, which incrementally estimates the motion of a vehicle by integrating the number of turns of its wheels over time. Likewise, VO operates by incrementally estimating the pose of the vehicle through examination of the changes that motion induces on the images of its onboard cameras. For VO to work effectively, there should be sufficient illumination in the environment and a static scene with enough texture to allow apparent motion to be extracted. Furthermore, consecutive frames should be captured so that they have sufficient scene overlap.
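To make the incremental-estimation idea concrete, here is a minimal illustrative sketch (not the tutorial's implementation) of how frame-to-frame motion estimates are chained into a trajectory; 2D rigid-body transforms are used for brevity:

```python
import numpy as np

def se2(theta, tx, ty):
    """Homogeneous 2D rigid-body transform: rotation theta, translation (tx, ty)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, tx],
                     [s,  c, ty],
                     [0,  0, 1.0]])

def integrate_odometry(relative_motions):
    """Chain frame-to-frame motions T_{k-1,k} into absolute poses T_{0,k}."""
    pose = np.eye(3)          # start at the origin
    trajectory = [pose]
    for dT in relative_motions:
        pose = pose @ dT      # incremental update, analogous to wheel odometry
        trajectory.append(pose)
    return trajectory

# Four 90-degree left turns with unit forward motion return to the start.
steps = [se2(np.pi / 2, 1.0, 0.0)] * 4
traj = integrate_odometry(steps)
print(np.allclose(traj[-1], np.eye(3), atol=1e-9))  # True: the loop closes
```

In a real VO pipeline the relative motions come from matching features between consecutive images, and the transforms live in SE(3) rather than SE(2); the chaining step is the same.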


IEEE Robotics & Automation Magazine | 2012

Visual Odometry: Part II: Matching, Robustness, Optimization, and Applications

Friedrich Fraundorfer; Davide Scaramuzza

Part II of the tutorial summarizes the remaining building blocks of the VO pipeline: how to detect and match salient and repeatable features across frames, robust estimation in the presence of outliers, and bundle adjustment. In addition, error propagation, applications, and links to publicly available code are included. VO is a well-understood and established part of robotics. It has reached a maturity that has allowed us to successfully use it for certain classes of applications: space, ground, aerial, and underwater. In the presence of loop closures, VO can be used as a building block for a complete SLAM algorithm to reduce motion drift. Challenges that still remain are to develop and demonstrate large-scale and long-term implementations, such as driving autonomous cars for hundreds of miles. Such systems have recently been demonstrated using lidar and radar sensors [86]. However, for VO to be used in such systems, technical issues regarding robustness and, especially, long-term stability have to be resolved. Eventually, VO has the potential to replace lidar-based systems for egomotion estimation, which currently lead the state of the art in accuracy, robustness, and reliability. VO offers a cheaper and mechanically easier-to-manufacture solution for egomotion estimation while additionally being fully passive. Furthermore, the ongoing miniaturization of digital cameras offers the possibility of developing smaller and smaller robotic systems capable of egomotion estimation.
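As an illustration of the robust-estimation step the tutorial covers, here is a generic RANSAC sketch on a toy line-fitting problem; the structure (minimal sample, inlier counting, refit on inliers) is the same when the model is camera motion. The functions and data are hypothetical, not from the paper:

```python
import random

def fit_line(pts):
    """Least-squares fit y = a*x + b; returns None for degenerate samples."""
    n = len(pts)
    sx = sum(x for x, _ in pts)
    sy = sum(y for _, y in pts)
    sxx = sum(x * x for x, _ in pts)
    sxy = sum(x * y for x, y in pts)
    den = n * sxx - sx * sx
    if den == 0:
        return None
    a = (n * sxy - sx * sy) / den
    return a, (sy - a * sx) / n

def line_error(model, pt):
    if model is None:
        return float("inf")
    a, b = model
    x, y = pt
    return abs(y - (a * x + b))

def ransac(data, sample_size=2, threshold=0.5, iterations=200, seed=0):
    """Keep the minimal-sample model with the largest inlier set,
    then refit on all inliers (standard RANSAC plus refinement)."""
    rng = random.Random(seed)
    best_inliers = []
    for _ in range(iterations):
        model = fit_line(rng.sample(data, sample_size))
        inliers = [d for d in data if line_error(model, d) < threshold]
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    return fit_line(best_inliers), best_inliers

# Points on y = 2x + 1 plus two gross outliers.
points = [(x, 2 * x + 1) for x in range(10)] + [(2.0, 40.0), (5.0, -30.0)]
model, inliers = ransac(points)
print(model, len(inliers))  # approximately (2.0, 1.0) with 10 inliers
```

The smaller the minimal sample size, the fewer iterations RANSAC needs to hit an all-inlier sample, which is exactly the motivation for reduced motion-model parameterizations in VO.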


Intelligent Robots and Systems | 2012

Vision-based autonomous mapping and exploration using a quadrotor MAV

Friedrich Fraundorfer; Lionel Heng; Dominik Honegger; Gim Hee Lee; Lorenz Meier; Petri Tanskanen; Marc Pollefeys

In this paper, we describe our autonomous vision-based quadrotor MAV system, which maps and explores unknown environments. All algorithms necessary for autonomous mapping and exploration run on-board the MAV. Using a front-looking stereo camera as the main exteroceptive sensor, our quadrotor achieves these capabilities with both the Vector Field Histogram+ (VFH+) algorithm for local navigation and the frontier-based exploration algorithm. In addition, we implement the Bug algorithm for autonomous wall-following, which can optionally be selected as a substitute exploration algorithm in sparse environments where frontier-based exploration underperforms. We incrementally build a 3D global occupancy map on-board the MAV. The map is used by the VFH+ and frontier-based exploration in dense environments, and by the Bug algorithm for wall-following in sparse environments. During the exploration phase, images from the front-looking camera are transmitted over Wi-Fi to the ground station. These images are input to a large-scale visual SLAM process running off-board on the ground station. SLAM is carried out with pose-graph optimization and loop closure detection using a vocabulary tree. We improve the robustness of the pose estimation by fusing optical flow and visual odometry. Optical flow data is provided by a customized downward-looking camera integrated with a microcontroller, while visual odometry measurements are derived from the front-looking stereo camera. We verify our approaches with experimental results.
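Frontier-based exploration steers the robot toward the boundary between mapped free space and unknown space. A minimal 2D occupancy-grid sketch of frontier detection (illustrative only; the paper maintains a 3D map on-board):

```python
FREE, OCCUPIED, UNKNOWN = 0, 1, -1

def find_frontiers(grid):
    """Frontier cells: free cells 4-adjacent to unknown space.
    These are the candidate goals for frontier-based exploration."""
    rows, cols = len(grid), len(grid[0])
    frontiers = []
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] != FREE:
                continue
            neighbours = [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]
            if any(0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == UNKNOWN
                   for nr, nc in neighbours):
                frontiers.append((r, c))
    return frontiers

# Small hypothetical map: left column explored, right edge still unknown.
grid = [
    [0, 0, -1],
    [0, 1, -1],
    [0, 0,  0],
]
print(find_frontiers(grid))  # [(0, 1), (2, 2)]
```

Exploration terminates naturally when no frontier cells remain, i.e., when every reachable part of the environment has been mapped.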


Autonomous Robots | 2012

PIXHAWK: A micro aerial vehicle design for autonomous flight using onboard computer vision

Lorenz Meier; Petri Tanskanen; Lionel Heng; Gim Hee Lee; Friedrich Fraundorfer; Marc Pollefeys

We describe a novel quadrotor Micro Air Vehicle (MAV) system that is designed to use computer vision algorithms within the flight control loop. The main contribution is a MAV system that is able to run both vision-based flight control and stereo-vision-based obstacle detection in parallel on an embedded computer onboard the MAV. The system design features the integration of a powerful onboard computer and the synchronization of IMU and vision measurements by hardware timestamping, which allows tight integration of IMU measurements into the computer vision pipeline. We evaluate the accuracy of marker-based visual pose estimation for flight control and demonstrate marker-based autonomous flight including obstacle detection using stereo vision. We also show the benefits of our IMU-vision synchronization for egomotion estimation in additional experiments where we use the synchronized measurements for pose estimation using the 2pt+gravity formulation of the PnP problem.
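Hardware timestamping puts IMU and camera measurements on a single time base, so aligning the two streams reduces to a nearest-timestamp lookup. A minimal sketch of that association step (illustrative, not the PIXHAWK code):

```python
import bisect

def nearest_imu(imu_stamps, image_stamp):
    """Return the IMU sample timestamp closest to an image timestamp.
    Assumes imu_stamps is sorted and shares the image clock, which is
    exactly what hardware timestamping provides."""
    i = bisect.bisect_left(imu_stamps, image_stamp)
    candidates = imu_stamps[max(0, i - 1):i + 1]
    return min(candidates, key=lambda t: abs(t - image_stamp))

imu = [0.000, 0.005, 0.010, 0.015, 0.020]   # e.g., a 200 Hz IMU stream
print(nearest_imu(imu, 0.012))  # 0.01
```

Without a shared hardware clock, the same lookup would first require estimating an unknown (and drifting) clock offset between the sensors, which is the failure mode hardware timestamping avoids.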


International Conference on Robotics and Automation | 2009

Real-time monocular visual odometry for on-road vehicles with 1-point RANSAC

Davide Scaramuzza; Friedrich Fraundorfer; Roland Siegwart

This paper presents a system capable of recovering the trajectory of a vehicle from the video input of a single camera at a very high frame rate. The overall frame rate is limited only by the feature extraction process, as the outlier removal and the motion estimation steps take less than 1 millisecond on a normal laptop computer. The algorithm relies on a novel way of removing the outliers of the feature matching process. We show that by exploiting the nonholonomic constraints of wheeled vehicles it is possible to use a restrictive motion model which allows us to parameterize the motion with only 1 feature correspondence. Using a single feature correspondence for motion estimation is the lowest model parameterization possible and results in the most efficient algorithms for removing outliers. Here we present two methods for outlier removal: one based on RANSAC and the other on histogram voting. We demonstrate the approach using an omnidirectional camera placed on a vehicle during a peak-time tour in the city of Zurich. We show that the proposed algorithm is able to cope with the large amount of clutter of the city (other moving cars, buses, trams, pedestrians, sudden stops of the vehicle, etc.). Using the proposed approach, we cover one of the longest trajectories ever reported in real-time from a single omnidirectional camera in cluttered urban scenes, up to 3 kilometers.
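The histogram-voting variant exploits the fact that, under the 1-point motion model, each correspondence yields a single estimate of the rotation angle, so the motion can be recovered from the histogram mode in one pass, with no random sampling. A minimal sketch with hypothetical values (not the paper's implementation):

```python
from collections import Counter

def histogram_voting(thetas, bin_width=0.01):
    """Each feature correspondence votes for one rotation-angle estimate;
    the mode of the histogram is taken as the motion estimate, and
    correspondences voting far from it can be rejected as outliers."""
    bins = Counter(round(t / bin_width) for t in thetas)
    best_bin, _ = bins.most_common(1)[0]
    return best_bin * bin_width

# 8 of 10 correspondences agree on theta ~ 0.10 rad; two are outliers.
votes = [0.10] * 8 + [0.35, -0.20]
estimate = histogram_voting(votes)
print(abs(estimate - 0.10) < 1e-9)  # True
```

Compared with 1-point RANSAC, which still iterates over random hypotheses, the voting scheme is deterministic and runs in time linear in the number of correspondences.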


International Conference on Robotics and Automation | 2011

PIXHAWK: A system for autonomous flight using onboard computer vision

Lorenz Meier; Petri Tanskanen; Friedrich Fraundorfer; Marc Pollefeys

We provide a novel hardware and software system for micro air vehicles (MAVs) that allows high-speed, low-latency onboard image processing. It uses up to four cameras in parallel on a miniature rotary-wing platform. The MAV navigates based on onboard-processed computer vision in GPS-denied indoor and outdoor environments. It can process images and inertial measurement information from multiple cameras in parallel for multiple purposes (localization, pattern recognition, obstacle avoidance) by distributing the images on a central, low-latency image hub. Furthermore, the system can utilize low-bandwidth radio links for communication and is designed and optimized to scale to swarm use. Experimental results show successful flight with a range of onboard computer vision algorithms, including localization, obstacle avoidance, and pattern recognition.


Intelligent Robots and Systems | 2007

Topological mapping, localization and navigation using image collections

Friedrich Fraundorfer; Christopher Engels; David Nister

In this paper we present a highly scalable vision-based localization and mapping method using image collections. A topological world representation is created online during robot exploration by adding images to a database and maintaining a link graph. An efficient image matching scheme allows real-time mapping and global localization. The compact image representation allows us to create image collections containing millions of images, which enables mapping of very large environments. A path planning method using graph search is proposed, and local geometric information is used to navigate in the topological map. Experiments show the strong performance of the image matching for global localization and demonstrate path planning and navigation.
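Path planning by graph search over a topological map can be sketched as breadth-first search on the image link graph, where nodes are database images and edges link images with sufficient matches. The node names and edges below are hypothetical:

```python
from collections import deque

def plan_path(link_graph, start, goal):
    """Breadth-first search over the topological link graph; returns the
    shortest sequence of images (by hop count) from start to goal, or
    None if the goal is unreachable."""
    queue = deque([[start]])
    visited = {start}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == goal:
            return path
        for nxt in link_graph.get(node, []):
            if nxt not in visited:
                visited.add(nxt)
                queue.append(path + [nxt])
    return None

# Hypothetical link graph: keys are images, values are matched neighbours.
graph = {"A": ["B"], "B": ["A", "C", "D"], "C": ["B", "E"],
         "D": ["B"], "E": ["C"]}
print(plan_path(graph, "A", "E"))  # ['A', 'B', 'C', 'E']
```

Because the map is purely topological, the planner returns a sequence of images to revisit; local geometric information is then needed only to move between consecutive images in the sequence.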


IEEE Robotics & Automation Magazine | 2014

Vision-Controlled Micro Flying Robots: From System Design to Autonomous Navigation and Mapping in GPS-Denied Environments

Davide Scaramuzza; Michael Achtelik; Lefteris Doitsidis; Friedrich Fraundorfer; Elias B. Kosmatopoulos; Agostino Martinelli; Markus W. Achtelik; Margarita Chli; Savvas A. Chatzichristofis; Laurent Kneip; Daniel Gurdan; Lionel Heng; Gim Hee Lee; Simon Lynen; Lorenz Meier; Marc Pollefeys; Alessandro Renzaglia; Roland Siegwart; Jan Stumpf; Petri Tanskanen; Chiara Troiani; Stephan Weiss

Autonomous microhelicopters will soon play a major role in tasks like search and rescue, environment monitoring, security surveillance, and inspection. If further miniaturized, they can also be used in narrow outdoor and indoor environments, posing only a limited risk to people. However, for such operations, navigating based only on global positioning system (GPS) information is not sufficient. Fully autonomous operation in cities or other dense environments requires microhelicopters to fly at low altitudes, where GPS signals are often shadowed, or indoors, and to actively explore unknown environments while avoiding collisions and creating maps. This involves a number of challenges on all levels of helicopter design, perception, actuation, control, and navigation, which still have to be solved. The Swarm of Micro Flying Robots (SFLY) project was a European Union-funded project with the goal of creating a swarm of vision-controlled microaerial vehicles (MAVs) capable of autonomous navigation, three-dimensional (3-D) mapping, and optimal surveillance coverage in GPS-denied environments. The SFLY MAVs do not rely on remote control, radio beacons, or motion-capture systems but can fly all by themselves using only a single onboard camera and an inertial measurement unit (IMU). This article describes the technical challenges that have been faced and the results achieved, from hardware design and embedded programming to vision-based navigation and mapping, with an overview of how all the modules work and how they have been integrated into the final system. Code, data sets, and videos are publicly available to the robotics community. Experimental results demonstrating three MAVs navigating autonomously in an unknown GPS-denied environment and performing 3-D mapping and optimal surveillance coverage are presented.


IEEE Intelligent Vehicles Symposium | 2013

Toward automated driving in cities using close-to-market sensors: An overview of the V-Charge Project

Paul Timothy Furgale; Ulrich Schwesinger; Martin Rufli; Wojciech Waclaw Derendarz; Hugo Grimmett; Peter Mühlfellner; Stefan Wonneberger; Julian Timpner; Stephan Rottmann; Bo Li; Bastian Schmidt; Thien-Nghia Nguyen; Elena Cardarelli; Stefano Cattani; Stefan Brüning; Sven Horstmann; Martin Stellmacher; Holger Mielenz; Kevin Köser; Markus Beermann; Christian Häne; Lionel Heng; Gim Hee Lee; Friedrich Fraundorfer; Rene Iser; Rudolph Triebel; Ingmar Posner; Paul Newman; Lars C. Wolf; Marc Pollefeys

Future requirements for drastic reduction of CO2 production and energy consumption will lead to significant changes in the way we see mobility in the years to come. However, the automotive industry has identified significant barriers to the adoption of electric vehicles, including reduced driving range and greatly increased refueling times. Automated cars have the potential to reduce the environmental impact of driving, and increase the safety of motor vehicle travel. The current state-of-the-art in vehicle automation requires a suite of expensive sensors. While the cost of these sensors is decreasing, integrating them into electric cars will increase the price and represent another barrier to adoption. The V-Charge Project, funded by the European Commission, seeks to address these problems simultaneously by developing an electric automated car, outfitted with close-to-market sensors, which is able to automate valet parking and recharging for integration into a future transportation system. The final goal is the demonstration of a fully operational system including automated navigation and parking. This paper presents an overview of the V-Charge system, from the platform setup to the mapping, perception, and planning sub-systems.


International Conference on Robotics and Automation | 2011

Autonomous obstacle avoidance and maneuvering on a vision-guided MAV using on-board processing

Lionel Heng; Lorenz Meier; Petri Tanskanen; Friedrich Fraundorfer; Marc Pollefeys

We present a novel stereo-based obstacle avoidance system on a vision-guided micro air vehicle (MAV) that is capable of fully autonomous maneuvers in unknown and dynamic environments. All algorithms run exclusively on the vehicle's on-board computer, at high frequencies that allow the MAV to react quickly to obstacles appearing in its flight trajectory. Our MAV platform is a quadrotor aircraft equipped with an inertial measurement unit and two stereo rigs. An obstacle mapping algorithm processes stereo images, producing a 3D map representation of the environment; at the same time, a dynamic anytime path planner plans a collision-free path to a goal point.
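The planner's core interaction with the obstacle map is a collision query. A minimal sketch of such a query against a sparse 3D occupancy map (illustrative only, not the paper's anytime planner; voxel coordinates and the clearance rule are assumptions):

```python
def path_is_collision_free(occupied_voxels, waypoints, clearance=1):
    """Reject a candidate path if any waypoint comes within `clearance`
    voxels (Chebyshev distance) of an occupied voxel in the 3D map."""
    for wx, wy, wz in waypoints:
        for ox, oy, oz in occupied_voxels:
            if max(abs(wx - ox), abs(wy - oy), abs(wz - oz)) <= clearance:
                return False
    return True

# One obstacle voxel; the first path skirts it, the second passes too close.
obstacles = {(5, 5, 2)}
print(path_is_collision_free(obstacles, [(0, 0, 1), (3, 3, 1), (8, 8, 1)]))  # True
print(path_is_collision_free(obstacles, [(0, 0, 1), (5, 6, 2)]))             # False
```

An anytime planner repeatedly runs queries of this kind against the continuously updated map, returning the best collision-free path found so far whenever a replanning deadline expires.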

Collaboration


Dive into Friedrich Fraundorfer's collaborations.

Top Co-Authors

Horst Bischof

Graz University of Technology

Gim Hee Lee

National University of Singapore

Christian Mostegel

Graz University of Technology

Thomas Holzmann

Graz University of Technology

Markus Rumpler

Graz University of Technology
