David Fischinger
Vienna University of Technology
Publications
Featured research published by David Fischinger.
Robotics and Autonomous Systems | 2016
David Fischinger; Peter Einramhof; Konstantinos E. Papoutsakis; Walter Wohlkinger; Peter Mayer; Paul Panek; Stefan Hofmann; Tobias Koertner; Astrid Weiss; Antonis A. Argyros; Markus Vincze
One option to address the challenge of demographic transition is to build robots that enable aging in place. Falling has been identified as the most relevant factor causing a move to a care facility. The Hobbit project combines research from robotics, gerontology, and human-robot interaction to develop a care robot capable of fall prevention and detection as well as emergency detection and handling. Moreover, to enable daily interaction with the robot, other functions are added, such as bringing objects, offering reminders, and entertainment. The interaction with the user is based on a multimodal user interface including automatic speech recognition, text-to-speech, gesture recognition, and a graphical touch-based user interface. We performed controlled laboratory user studies with a total of 49 participants (aged 70 plus) in three EU countries (Austria, Greece, and Sweden). The collected user responses on perceived usability, acceptance, and affordability of the robot demonstrate a positive reception of the robot by its target user group. This article describes the principles and system components for navigation and manipulation in domestic environments, the interaction paradigm and its implementation in a multimodal user interface, the core robot tasks, and the results from the user studies, which are also reflected in terms of lessons we learned and believe are useful to fellow researchers. Highlights: We present a care robot for aging in place by means of fall prevention/detection. Detailed description of the sensor set-up, hardware, and the multimodal user interface. Detailed description of major software components and implemented robot tasks. Proof-of-concept user study (49 users) on usability, acceptance, and affordability.
International Conference on Intelligent Robots and Systems | 2012
David Fischinger; Markus Vincze
This paper presents a novel approach to emptying a basket filled with a pile of objects. Form, size, position, orientation and constellation of the objects are unknown. Additional challenges are to localize the basket and treat it as an obstacle, and to cope with incomplete point cloud data. There are three key contributions. First, we introduce Height Accumulated Features (HAF) which provide an efficient way of calculating grasp related feature values. The second contribution is an extensible machine learning system for binary classification of grasp hypotheses based on raw point cloud data. Finally, a practical heuristic for selection of the most robust grasp hypothesis is introduced. We evaluate our system in experiments where a robot was required to autonomously empty a basket with unknown objects on a pile. Despite the challenging scenarios, our system succeeded each time.
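To make the Height Accumulated Features idea concrete, here is a minimal Python sketch (our own illustration, not the authors' code): a point cloud is discretized into a top-down height grid, and a feature value compares accumulated heights between a small inner region and a larger surrounding region. All function names and grid parameters are assumptions.

```python
import numpy as np

def height_grid(points, cell=0.01, size=40):
    """Discretize a point cloud (N x 3, metres, table frame) into a
    top-down grid where each cell keeps the maximum height above it."""
    grid = np.zeros((size, size))
    ix = np.clip((points[:, 0] / cell).astype(int) + size // 2, 0, size - 1)
    iy = np.clip((points[:, 1] / cell).astype(int) + size // 2, 0, size - 1)
    np.maximum.at(grid, (ix, iy), points[:, 2])
    return grid

def haf_value(grid, inner, outer):
    """One height-accumulated feature: the difference between the
    summed heights of an inner region and a larger outer region,
    each given as a (row, col, height, width) rectangle."""
    def acc(r):
        y, x, h, w = r
        return grid[y:y + h, x:x + w].sum()
    return acc(inner) - acc(outer)
```

A vector of such values over many region pairs would then feed the binary grasp classifier described in the abstract.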
International Conference on Robotics and Automation | 2013
David Fischinger; Markus Vincze; Yun Jiang
In this paper, we propose a method for grasping unknown objects from piles or cluttered scenes, given a point cloud from a single depth camera. We introduce a shape-based method - Symmetry Height Accumulated Features (SHAF) - that reduces the scene description complexity such that the use of machine learning techniques becomes feasible. We describe the basic Height Accumulated Features and the Symmetry Features and investigate their quality using an F-score metric. We discuss the gain from Symmetry Features for grasp classification and demonstrate the expressive power of Height Accumulated Features by comparing it to a simple height-based learning method. In robotic experiments of grasping single objects, we test 10 novel objects in 150 trials and show a significant improvement of 34% over a state-of-the-art method, achieving a success rate of 92%. An improvement of 29% over the competitive method was achieved for the task of clearing a table with 5 to 10 objects and 90 trials overall. Furthermore, we show that our approach is easily adaptable to different manipulators by running our experiments on a second platform.
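As a hedged illustration of the two ingredients named above, the sketch below mirrors a height grid about its centre axis to score local symmetry, and computes the F-score used to rank features; both functions are our own simplifications, not the paper's implementation.

```python
import numpy as np

def symmetry_score(grid):
    """Toy symmetry feature: mean absolute difference between the
    height grid and its mirror image about the vertical centre axis.
    Lower values indicate a shape more symmetric about that axis."""
    return np.abs(grid - grid[:, ::-1]).mean()

def f_score(tp, fp, fn):
    """F1 score for ranking how well a feature separates good from
    bad grasp hypotheses (assumes at least one true positive)."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)
```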
The International Journal of Robotics Research | 2015
David Fischinger; Astrid Weiss; Markus Vincze
We present a system for grasping unknown objects, even from piles or cluttered scenes, given a point cloud. Our method is based on the topography of a given scene and abstracts grasp-relevant structures to enable machine learning techniques for grasping tasks. We describe how Height Accumulated Features (HAF) and their extension, Symmetry Height Accumulated Features, extract grasp relevant local shapes. We investigate grasp quality using an F-score metric. We demonstrate the gain and the expressive power of HAF by comparing its trained classifier with one that resulted from training on simple height grids. An efficient way to calculate HAF is presented. We describe how the trained grasp classifier is used to explore the whole grasp space and introduce a heuristic to find the most robust grasp. We show how to use our approach to adapt the gripper opening width before grasping. In robotic experiments we demonstrate different aspects of our system on three robot platforms: a Schunk seven-degree-of-freedom arm, a PR2 and a Kuka LWR arm. We perform tasks to grasp single objects, autonomously unload a box and clear the table. Thereby we show that our approach is easily adaptable and robust with respect to different manipulators. As part of the experiments we compare our algorithm with a state-of-the-art method and show significant improvements. Concrete examples are used to illustrate the benefit of our approach compared with established grasp approaches. Finally, we show advantages of the symbiosis between our approach and object recognition.
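The robustness heuristic is not spelled out in the abstract; the following sketch shows one plausible reading under our own assumptions: among positively classified grasp hypotheses, prefer the one with the most positive neighbours in grasp space.

```python
def select_most_robust(hypotheses, radius=0.02):
    """hypotheses: list of (x, y, theta, label) tuples, where label is
    the binary output of the trained grasp classifier. Returns the
    positive hypothesis whose neighbourhood contains the most other
    positives, i.e. the centre of the largest cluster of good grasps."""
    positives = [h for h in hypotheses if h[3] > 0]

    def support(h):
        # count positive hypotheses within `radius` in x and y
        return sum(1 for g in positives
                   if abs(g[0] - h[0]) < radius and abs(g[1] - h[1]) < radius)

    return max(positives, key=support, default=None)
```

A grasp surrounded by other good hypotheses tolerates small execution errors better than an isolated one, which is the intuition behind preferring cluster centres.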
IFAC Proceedings Volumes | 2012
David Fischinger; Markus Vincze
This paper presents a novel approach to clearing a table with a heap of objects. Form, size, position, orientation and constellation of the objects are a priori unknown. Coping with incomplete point cloud data is an additional challenge. There are three key contributions. First, we introduce Height Accumulated Features (HAF), which provide an efficient way of calculating grasp-related feature values. The second contribution is an extensible machine learning system for binary classification of grasp hypotheses based on raw point cloud data. Finally, a practical heuristic for selection of the most robust grasp hypothesis is introduced. We evaluate our system in experiments where a robot was required to autonomously clear a table with a heap of unknown objects. Despite the complexity of the scenarios, our system cleared the table each time without human interaction and with a grasp failure rate below three percent.
European Conference on Computer Vision | 2016
Markus Vincze; Markus Bajones; Markus Suchi; Daniel Wolf; Astrid Weiss; David Fischinger; Paloma de la Puente
Older adults reported that a robot in their homes would be of great help if it could find objects that users regularly search for. We propose an interactive method to learn objects directly with the user and the robot and then use the RGB-D model to search for the object in the scene. The robot presents a turntable to the user for rotating the object in front of its camera to obtain a full 3D model. The user is then asked to turn the object upside down, and the two half-models are merged. The model is then used at predefined search locations to detect the object on tables or other horizontal surfaces. Experiments in three environments, with up to 14 objects and a total of 1080 scenes, indicate that present detection methods need to be improved considerably to provide a good service to users. We analyse the results and contribute to the discussion on how to overcome limited image quality and resolution by exploiting the robotic system.
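A rough sketch of the half-model merging step, assuming the Open3D library and an upside-down initial guess (both are our assumptions, not details from the paper):

```python
import numpy as np
import open3d as o3d

def merge_half_models(top_pcd, bottom_pcd, voxel=0.005):
    """Align the upside-down half model to the upright one with ICP,
    then fuse both point clouds into a single object model."""
    top = top_pcd.voxel_down_sample(voxel)
    bottom = bottom_pcd.voxel_down_sample(voxel)
    # initial guess: the second scan was taken upside down, so start
    # from a 180 degree rotation about the x axis
    init = np.eye(4)
    init[1, 1] = init[2, 2] = -1.0
    reg = o3d.pipelines.registration.registration_icp(
        bottom, top, max_correspondence_distance=0.02, init=init)
    bottom.transform(reg.transformation)
    return top + bottom
```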
Journal of Robotics | 2018
Markus Bajones; David Fischinger; Astrid Weiss; Daniel Wolf; Markus Vincze; Paloma de la Puente; Tobias Körtner; Markus Weninger; Konstantinos E. Papoutsakis; Damien Michel; Ammar Qammaz; Paschalis Panteleris; Michalis Foukarakis; Ilia Adami; Danai Ioannidi; Asterios Leonidis; Margherita Antona; Antonis A. Argyros; Peter Mayer; Paul Panek; Håkan Eftring; Susanne Frennert
We present the robot developed within the Hobbit project, a socially assistive service robot aiming at the challenge of enabling prolonged independent living of elderly people in their own homes. We present the second prototype (Hobbit PT2) in terms of hardware and functionality improvements following first user studies. Our main contribution lies within the description of all components developed within the Hobbit project, leading to 371 days of autonomous operation during field trials in Austria, Greece, and Sweden. In these field trials, we studied how 18 elderly users (aged 75 years and older) lived with the autonomously interacting service robot over multiple weeks. To the best of our knowledge, this is the first time a multifunctional, low-cost service robot equipped with a manipulator was studied and evaluated for several weeks under real-world conditions. We show that Hobbit’s adaptive approach towards the user increasingly eased the interaction between the users and Hobbit. We provide lessons learned regarding the need for adaptive behavior coordination, support during emergency situations, and clear communication of robotic actions and their consequences for fellow researchers who are developing an autonomous, low-cost service robot designed to interact with their users in domestic contexts. Our trials show the necessity of moving out into actual user homes, as only there can we encounter issues such as misinterpretation of actions during unscripted human-robot interaction.
International Conference on Intelligent Autonomous Systems | 2016
Ester Martinez-Martin; David Fischinger; Markus Vincze; Angel P. del Pobil
The ability to grasp is a fundamental requirement for service robots in order to perform meaningful tasks in ordinary environments. However, its robustness can be compromised by the inaccuracy (or lack) of tactile and proprioceptive sensing, especially in the presence of unforeseen slippage. As a solution, vision can be instrumental in detecting grasp errors. In this paper, we present an RGB-D visual application for discerning the success or failure in robot grasping of unknown objects, when poor proprioceptive information and/or a deformable gripper without tactile information is used. The proposed application is divided into two stages: visual gripper detection and recognition, and grasping assessment (i.e. checking whether a grasping error has occurred). For that, three different visual cues are combined: colour, depth, and edges. This development is supported by experimental results on the Hobbit robot, which is equipped with an elastically deformable gripper.
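To indicate how such cues might be combined, here is a minimal sketch (our own illustration; the thresholds and function names are hypothetical stand-ins for the paper's learned gripper model):

```python
import cv2
import numpy as np

def grasp_succeeded(rgb, depth, gripper_mask):
    """Check the region between the fingers (gripper_mask, detected in
    an earlier stage): if edge density and depth differ enough from an
    empty gripper, assume an object is being held."""
    gray = cv2.cvtColor(rgb, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    edge_density = edges[gripper_mask > 0].mean() / 255.0
    median_depth = np.median(depth[gripper_mask > 0])
    # empirical thresholds stand in for the learned empty-gripper model
    return edge_density > 0.05 and median_depth < 0.30
```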
Journal of Intelligent and Robotic Systems | 2018
Paloma de la Puente; Markus Bajones; Christian Reuther; Daniel Wolf; David Fischinger; Markus Vincze
Future home and service robots will require advanced navigation and interaction capabilities. In particular, domestic environments present open challenges that are hard to identify by conducting controlled tests in home-like settings: there is the need to test and evaluate navigation in the actual homes of users. This paper presents the experiences of operating a mobile robot with manipulation capabilities and an open set of tasks in extensive trials with real users, in their own homes. The main difficulties encountered are the requirement to move safely in cluttered 3D environments, the problems related to navigation in narrow spaces, and the need for an adaptive rather than fixed way to approach the users. We describe our solutions based on RGB-D perception and evaluate the integrated system for navigation in real home environments, pointing out remaining challenges towards more advanced commercial solutions.
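One common way to realize safe navigation in cluttered 3D environments from RGB-D data is to project the cloud into a 2D obstacle grid; the sketch below is our own generic version of that idea, not the system described in the paper.

```python
import numpy as np

def cloud_to_obstacle_grid(points, cell=0.05, size=100,
                           z_min=0.05, z_max=1.80):
    """Project a point cloud (robot frame, metres) onto a 2D grid:
    any point between floor level and robot height marks its cell as
    occupied, so table tops and other overhanging structures block
    the path even though a 2D laser scan would miss them."""
    grid = np.zeros((size, size), dtype=np.uint8)
    mask = (points[:, 2] > z_min) & (points[:, 2] < z_max)
    ix = np.clip((points[mask, 0] / cell).astype(int) + size // 2, 0, size - 1)
    iy = np.clip((points[mask, 1] / cell).astype(int) + size // 2, 0, size - 1)
    grid[ix, iy] = 1
    return grid
```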
Elektrotechnik und Informationstechnik | 2012
Markus Vincze; Walter Wohlkinger; Aitor Aldoma; Sven Olufs; Peter Einramhof; Kai Zhou; Ekaterina Potapova; David Fischinger; Michael Zillich
A main task for domestic robots is to recognize object categories. Image-based approaches learn from large databases but have no access to contextual knowledge such as is available to a robot navigating the rooms of a home. Consequently, we set out to exploit the knowledge available to the robot to constrain the task of object classification. Based on the estimation of free ground space in front of the robot, which is essential for safe navigation in a home setting, we show that we can greatly simplify self-localization, the detection of support surfaces, and the classification of objects. We further show that object classes can be efficiently acquired from 3D models of the Web if learned from automatically generated 2.5D views that mimic the robot's sensor view of the real world. We modelled 200 object classes (available from www.3d-net.org) and provide sample data of scenes for testing. Using the situated approach we can detect, e.g., chairs with a 93 per cent detection rate.
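The free-ground estimation that the situated approach builds on can be approximated by a plane fit; below is a minimal RANSAC sketch under our own assumptions (the paper's actual method may differ).

```python
import numpy as np

def estimate_ground_plane(points, iters=200, thresh=0.02):
    """Minimal RANSAC plane fit: returns (normal, d) of the plane
    n . x + d = 0 with the most inliers among the given 3D points."""
    rng = np.random.default_rng(0)
    best, best_inliers = None, 0
    for _ in range(iters):
        p = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p[1] - p[0], p[2] - p[0])
        norm = np.linalg.norm(n)
        if norm < 1e-9:
            continue  # degenerate sample, points nearly collinear
        n = n / norm
        d = -n.dot(p[0])
        inliers = np.count_nonzero(np.abs(points @ n + d) < thresh)
        if inliers > best_inliers:
            best, best_inliers = (n, d), inliers
    return best
```

Cells above the fitted plane would then be candidates for obstacles or support surfaces, which is the constraint the paper exploits to simplify classification.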