Walter Wohlkinger
Vienna University of Technology
Publications
Featured research published by Walter Wohlkinger.
IEEE Robotics & Automation Magazine | 2012
Aitor Aldoma; Zoltan-Csaba Marton; Federico Tombari; Walter Wohlkinger; Christian Potthast; Bernhard Zeisl; Radu Bogdan Rusu; Suat Gedikli; Markus Vincze
With the advent of new-generation depth sensors, the use of three-dimensional (3-D) data is becoming increasingly popular. As these sensors are commodity hardware sold at low cost, a rapidly growing group of people can acquire 3-D data cheaply and in real time.
Robotics and Biomimetics | 2011
Walter Wohlkinger; Markus Vincze
This work addresses the problem of real-time, 3D shape-based object class recognition, its scaling to many categories, and the reliable perception of those categories. A novel shape descriptor for partial point clouds based on shape functions is presented, capable of training on synthetic data and classifying objects from a depth sensor in a single partial view in a fast and robust manner. The classification task is stated as a 3D retrieval task: finding the nearest neighbors among synthetically generated views of CAD models for a point cloud sensed with a Kinect-style depth sensor. The presented shape descriptor shows that the combination of angle, point-distance, and area shape functions gives a significant boost in recognition rate over the baseline descriptor and outperforms state-of-the-art descriptors in our experimental evaluation on a publicly available dataset of real-world objects in table-scene contexts with up to 200 categories.
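The descriptor rests on simple shape functions computed over randomly sampled point tuples. As a rough sketch of that idea, assuming only numpy and illustrative bin counts (the authors' published ESF descriptor adds further functions and an in/out/mixed voxel test not reproduced here):

```python
import numpy as np

def shape_function_descriptor(points, n_samples=20000, bins=64, rng=None):
    """Simplified shape-function descriptor for a partial point cloud.

    Illustrative sketch only: histograms of point-pair distance (D2),
    three-point angle (A3), and triangle area (D3), concatenated.
    """
    rng = np.random.default_rng(rng)
    n = len(points)
    i, j, k = (rng.integers(0, n, n_samples) for _ in range(3))

    # D2: distances between random point pairs, normalised by the largest sample
    d2 = np.linalg.norm(points[i] - points[j], axis=1)
    h_d2, _ = np.histogram(d2 / d2.max(), bins=bins, range=(0, 1))

    # A3: angle at point j in the triangle (i, j, k)
    u, v = points[i] - points[j], points[k] - points[j]
    cosang = np.einsum('ij,ij->i', u, v) / (
        np.linalg.norm(u, axis=1) * np.linalg.norm(v, axis=1) + 1e-12)
    h_a3, _ = np.histogram(np.arccos(np.clip(cosang, -1, 1)),
                           bins=bins, range=(0, np.pi))

    # D3: square root of the triangle area spanned by (i, j, k)
    area = 0.5 * np.linalg.norm(np.cross(u, v), axis=1)
    d3 = np.sqrt(area)
    h_d3, _ = np.histogram(d3 / d3.max(), bins=bins, range=(0, 1))

    desc = np.concatenate([h_d2, h_a3, h_d3]).astype(float)
    return desc / desc.sum()  # normalise so clouds of any size compare
```

Because the histograms are normalised, a descriptor computed from a sensed partial view can be compared directly against descriptors computed from synthetic CAD views.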
Robotics and Autonomous Systems | 2016
David Fischinger; Peter Einramhof; Konstantinos E. Papoutsakis; Walter Wohlkinger; Peter Mayer; Paul Panek; Stefan Hofmann; Tobias Koertner; Astrid Weiss; Antonis A. Argyros; Markus Vincze
One option to address the challenge of demographic transition is to build robots that enable aging in place. Falling has been identified as the most relevant factor causing a move to a care facility. The Hobbit project combines research from robotics, gerontology, and human-robot interaction to develop a care robot capable of fall prevention and detection as well as emergency detection and handling. Moreover, to enable daily interaction with the robot, other functions are added, such as bringing objects, offering reminders, and entertainment. The interaction with the user is based on a multimodal user interface including automatic speech recognition, text-to-speech, gesture recognition, and a graphical touch-based user interface. We performed controlled laboratory user studies with a total of 49 participants (aged 70+) in three EU countries (Austria, Greece, and Sweden). The collected user responses on perceived usability, acceptance, and affordability demonstrate a positive reception of the robot by its target user group. This article describes the principles and system components for navigation and manipulation in domestic environments, the interaction paradigm and its implementation in a multimodal user interface, the core robot tasks, and the results from the user studies, which we also distill into lessons learned that we believe are useful to fellow researchers.
Highlights:
- A care robot for aging in place by means of fall prevention/detection.
- Detailed description of sensor set-up, hardware, and the multimodal user interface.
- Detailed description of major software components and implemented robot tasks.
- Proof-of-concept user study (49 users) on usability, acceptance, and affordability.
International Conference on Robotics and Automation | 2012
Walter Wohlkinger; Aitor Aldoma; Radu Bogdan Rusu; Markus Vincze
3D object and object class recognition gained momentum with the arrival of low-cost RGB-D sensors and enables robotics tasks that were not feasible years ago. Scaling object class recognition to hundreds of classes still requires extensive time and many objects for learning. To overcome the training issue, we introduce a methodology for learning 3D descriptors from synthetic CAD models and for classifying never-before-seen objects at first glance, with classification rates and speed suited to robotics tasks. We provide this in 3DNet (3d-net.org), a free resource for object class recognition and 6DOF pose estimation from point cloud data. 3DNet provides a large-scale hierarchical CAD model database with increasing numbers of classes and difficulty (10, 50, 100, and 200 object classes), together with evaluation datasets that contain thousands of scenes captured with an RGB-D sensor. 3DNet further provides an open-source framework based on the Point Cloud Library (PCL) for testing new descriptors and benchmarking state-of-the-art descriptors, together with pose estimation procedures that enable robotics tasks such as search and grasping.
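At its core, classifying a sensed object against the synthetic database is a nearest-neighbor retrieval over descriptors. A minimal sketch of that step, assuming numpy/scipy and precomputed descriptors (the actual 3DNet framework is built on PCL and also performs pose estimation):

```python
import numpy as np
from scipy.spatial import cKDTree

class ViewDatabase:
    """Nearest-neighbour classifier over descriptors of synthetic CAD views.

    Sketch of the retrieval idea only; class names and descriptor layout
    are up to the caller.
    """
    def __init__(self, descriptors, labels):
        # one descriptor per synthetic view, with its object-class label
        self.tree = cKDTree(np.asarray(descriptors))
        self.labels = list(labels)

    def classify(self, descriptor, k=5):
        # majority vote over the k closest synthetic views
        _, idx = self.tree.query(descriptor, k=k)
        votes = [self.labels[i] for i in np.atleast_1d(idx)]
        return max(set(votes), key=votes.count)
```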
19th International Workshop on Robotics in Alpe-Adria-Danube Region (RAAD 2010) | 2010
Walter Wohlkinger; Markus Vincze
Building knowledge for robots can be tedious, especially when focused on object class recognition in home environments, where hundreds of everyday objects, some with huge intra-class variability, can be found. Object recognition, and especially object class recognition, is a key capability in home robotics. Deployable results from state-of-the-art algorithms are not yet achievable when the number of classes increases and near real-time operation is the goal. Hence, we propose to exploit contextual knowledge by using sensor and hardware constraints from the robotics and home domains, and we show how to use the Internet as a source for obtaining the data required to build a fast, vision-based object categorization system for robotics. In this paper, we give an overview of the available constraints and the advantages of using a robot to set priors for object classification, and we propose a system that covers automated model acquisition from the Web, domain simulation, descriptor generation, 3D data processing from dense stereo, and classification for a not-too-distant robot scenario in an Internet-connected home environment. We show that this system enables fast and robust recognition of object classes commonly found in such environments, including but not limited to chairs and mugs. We also discuss challenges, missing pieces in the framework, and useful extensions.
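The domain-simulation step amounts to rendering each downloaded CAD model from many viewpoints so that synthetic training views resemble what the robot will see. As an illustrative sketch (function name and parameters are hypothetical, and the renderer itself is not shown), camera poses could be spread over a view sphere like this:

```python
import numpy as np

def view_sphere_cameras(n_views=80, radius=1.0):
    """Camera centres spread evenly on a sphere around a CAD model.

    Hypothetical helper: each centre would be fed to a renderer (depth
    buffer or simulated stereo) to produce one synthetic training view.
    Uses a Fibonacci spiral for near-uniform coverage.
    """
    golden = np.pi * (3.0 - np.sqrt(5.0))
    i = np.arange(n_views)
    z = 1.0 - 2.0 * (i + 0.5) / n_views           # height in [-1, 1]
    r = np.sqrt(1.0 - z * z)
    theta = golden * i
    centres = radius * np.stack([r * np.cos(theta), r * np.sin(theta), z], axis=1)

    poses = []
    for c in centres:
        fwd = -c / np.linalg.norm(c)               # look at the model's origin
        up = np.array([0.0, 0.0, 1.0])
        if abs(fwd @ up) > 0.99:                   # avoid a degenerate 'up'
            up = np.array([0.0, 1.0, 0.0])
        right = np.cross(up, fwd); right /= np.linalg.norm(right)
        true_up = np.cross(fwd, right)
        poses.append((c, np.stack([right, true_up, fwd])))  # (centre, rotation)
    return poses
```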
Machine Vision Applications | 2010
Georg Biegelbauer; Markus Vincze; Walter Wohlkinger
Fast detection of objects in a home or office environment is relevant for robotic service and assistance applications. In this work we present the automatic localization of a wide variety of differently shaped objects, scanned with a laser range sensor from one view in a cluttered setting. The daily-life objects are modeled using approximating superquadrics, which can be obtained by showing the object to the sensor or through another modeling process. Detection is based on a hierarchical RANSAC search, which yields fast detection results, and on the voting of sorted quality-of-fit criteria. The probabilistic search starts at low resolution and refines hypotheses at increasingly higher resolution levels. Criteria for object shape and the relationship of object parts, together with a ranking procedure and a ranked voting process, result in a combined ranking of hypotheses using a minimum number of parameters. The experimental evaluation of the method on cluttered table-top scenes demonstrates the effectiveness and robustness of the approach, which is feasible for real-world object localization and robot grasp planning.
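Scoring a superquadric hypothesis against the scanned points is the basic operation inside such a RANSAC search. A minimal sketch of one plausible quality-of-fit measure, assuming the standard superquadric inside-outside function (the paper's hierarchical multi-resolution search and ranked voting are not reproduced here):

```python
import numpy as np

def superquadric_inlier_score(points, a, e, pose_inv, tol=0.05):
    """Quality-of-fit of a superquadric hypothesis for RANSAC-style voting.

    Sketch only: evaluates the classic inside-outside function
        F(x) = ((|x/a1|^(2/e2) + |y/a2|^(2/e2))^(e2/e1) + |z/a3|^(2/e1))
    and counts points whose F^(e1/2) lies close to 1, i.e. near the
    surface. 'pose_inv' (4x4) maps sensor points into the superquadric
    frame; 'a' are the extents, 'e' the shape exponents.
    """
    p = (pose_inv[:3, :3] @ points.T).T + pose_inv[:3, 3]
    x, y, z = np.abs(p).T + 1e-12
    a1, a2, a3 = a
    e1, e2 = e
    f = ((x / a1) ** (2 / e2) + (y / a2) ** (2 / e2)) ** (e2 / e1) \
        + (z / a3) ** (2 / e1)
    radial = f ** (e1 / 2.0)        # equals 1 exactly on the surface
    return np.mean(np.abs(radial - 1.0) < tol)   # fraction of inliers
```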
IFAC Proceedings Volumes | 2012
Jeannette Bohg; Kai Welke; Beatriz León; Martin Do; Dan Song; Walter Wohlkinger; Marianna Madry; Aitor Aldoma; Markus Przybylski; Tamim Asfour; Higinio Martí; Danica Kragic; Antonio Morales; Markus Vincze
In this paper, we present an approach towards autonomous grasping of objects according to their category and a given task. We leverage recent advances in object segmentation, categorization, and task-based grasp inference by integrating them into one pipeline. This allows us to transfer task-specific grasp experience between objects of the same category. The effectiveness of the approach is demonstrated on the humanoid robot ARMAR-IIIa.
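The transfer idea can be pictured as a store of grasps indexed by category and task. A toy sketch (names and fields are hypothetical; the actual pipeline infers grasps from learned task constraints rather than a lookup table):

```python
from collections import defaultdict

class GraspExperience:
    """Toy store for transferring grasps between objects of one category.

    Grasps recorded for one object are indexed by (category, task) and
    proposed for any later object of the same category.
    """
    def __init__(self):
        self._store = defaultdict(list)

    def record(self, category, task, grasp):
        self._store[(category, task)].append(grasp)

    def propose(self, category, task):
        # every grasp seen on any instance of this category, for this task
        return self._store[(category, task)]

db = GraspExperience()
db.record('mug', 'pouring', {'approach': 'top', 'preshape': 'cylindrical'})
print(db.propose('mug', 'pouring'))   # reusable for a never-seen mug
```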
International Conference on Robotics and Automation | 2007
Georg Biegelbauer; Mario Richtsfeld; Walter Wohlkinger; Markus Vincze; Manuel Herkt
Nowadays, robust and lightweight parts used in the automobile and aeronautics industries are made of carbon fibres. To increase the mechanical toughness of the parts, the carbon fibres are stitched in the preforming process using a sewing robot. However, current systems lack flexibility and rely on manual programming of each part. The main target of this work is to develop an automatic system that autonomously sets the structure-strengthening seams. This requires rapid and flexible following of the carbon textile edges. Because the carbon fibres are black and reflective, a laser-stripe sensor is necessary, and processing the range data is a challenging task. The paper proposes a real-time approach in which different edge detection methodologies are combined in a voting scheme to increase edge tracking robustness. The experimental results demonstrate the feasibility of a fully automated, sensor-guided robotic sewing process. The seam can be located to within 0.65 mm at a detection rate of 99.3% for individual scans.
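The voting scheme can be pictured as several independent edge cues, each proposing a position along one range profile, with mutually supporting proposals winning. A simplified sketch, assuming numpy and generic cues rather than the paper's actual detectors:

```python
import numpy as np

def vote_edge_position(profile, tol=2):
    """Combine several 1-D edge cues on a laser-stripe range profile.

    Illustrative only: each cue proposes an edge index, proposals within
    'tol' samples of each other count as mutual support, and the
    best-supported position wins.
    """
    cues = []
    grad = np.gradient(profile)
    cues.append(int(np.argmax(np.abs(grad))))            # strongest jump
    curv = np.gradient(grad)
    cues.append(int(np.argmax(np.abs(curv))))            # strongest kink
    dev = profile - np.convolve(profile, np.ones(5) / 5, mode='same')
    cues.append(int(np.argmax(np.abs(dev))))             # deviation from local mean

    support = [(c, sum(abs(c - o) <= tol for o in cues)) for c in cues]
    best, votes = max(support, key=lambda s: s[1])
    return best, votes / len(cues)                       # index and confidence
```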
Leveraging Applications of Formal Methods | 2011
Markus Vincze; Walter Wohlkinger; Sven Olufs; Peter Einramhof; Robert Schwarz; Karthik Mahesh Varadarajan
A main task for domestic robots is to navigate safely at home, find places, and detect objects. We set out to exploit the knowledge available to the robot to constrain the task of understanding the structure of its environment, i.e., the ground for safe motion and the walls for localisation, and so simplify object detection and classification. We start by exploiting the known geometry and kinematics of the robot to obtain ground point disparities. This considerably improves robustness in combination with a histogram approach over patches in the disparity image. We then show that stereo data can be used for localisation and eventually for object detection and classification, and that this system approach improves object detection and classification rates considerably.
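Knowing the camera's height and tilt fixes, for every image row, the disparity a flat-floor pixel must have. A minimal per-pixel sketch under an assumed pinhole stereo model (the paper additionally aggregates histograms over image patches for robustness):

```python
import numpy as np

def ground_disparity_mask(disparity, f, cy, baseline, height, tilt, tol=1.0):
    """Flag pixels whose disparity matches the expected ground plane.

    For a stereo camera at 'height' above a flat floor, tilted down by
    'tilt', a ground pixel in image row v must satisfy
        d(v) = (baseline / height) * ((v - cy) * cos(tilt) + f * sin(tilt)),
    the classic v-disparity ground-plane line.
    """
    rows = np.arange(disparity.shape[0])[:, None]        # image row v
    expected = (baseline / height) * ((rows - cy) * np.cos(tilt)
                                      + f * np.sin(tilt))
    return np.abs(disparity - expected) < tol            # True where 'ground'
```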
Elektrotechnik und Informationstechnik | 2012
Markus Vincze; Walter Wohlkinger; Aitor Aldoma; Sven Olufs; Peter Einramhof; Kai Zhou; Ekaterina Potapova; David Fischinger; Michael Zillich
Summary: A main task for domestic robots is to recognize object categories. Image-based approaches learn from large databases but have no access to contextual knowledge such as is available to a robot navigating the rooms of a home. We therefore propose a situated-vision approach that exploits the knowledge available to the robot to constrain the task of object classification. Based on the estimation of free ground space in front of the robot, which is essential for safe navigation in a home setting, we show that we can greatly simplify self-localization, the detection of support surfaces, and the classification of objects. We further show that object classes can be efficiently acquired from 3D models from the Web when learning uses automatically generated virtual 2.5D views that approximate how the robot's sensors perceive the real world. We modelled 200 object classes (available from www.3d-net.org) and provide sample scenes for testing. Using the situated approach we can detect, e.g., chairs with a 93 per cent detection rate.