Publications


Featured research published by Peter Einramhof.


Robotics and Autonomous Systems | 2016

Hobbit, a care robot supporting independent living at home

David Fischinger; Peter Einramhof; Konstantinos E. Papoutsakis; Walter Wohlkinger; Peter Mayer; Paul Panek; Stefan Hofmann; Tobias Koertner; Astrid Weiss; Antonis A. Argyros; Markus Vincze

One option to address the challenge of demographic transition is to build robots that enable aging in place. Falling has been identified as the most relevant factor causing a move to a care facility. The Hobbit project combines research from robotics, gerontology, and human-robot interaction to develop a care robot capable of fall prevention and detection as well as emergency detection and handling. Moreover, to enable daily interaction with the robot, other functions are added, such as bringing objects, offering reminders, and entertainment. Interaction with the user is based on a multimodal user interface including automatic speech recognition, text-to-speech, gesture recognition, and a graphical touch-based user interface. We performed controlled laboratory user studies with a total of 49 participants (aged 70 plus) in three EU countries (Austria, Greece, and Sweden). The collected user responses on perceived usability, acceptance, and affordability demonstrate a positive reception of the robot by its target user group. This article describes the principles and system components for navigation and manipulation in domestic environments, the interaction paradigm and its implementation in a multimodal user interface, the core robot tasks, and the results of the user studies, which are also reflected in lessons learned that we believe are useful to fellow researchers. Highlights: a care robot for aging in place by means of fall prevention/detection; a detailed description of the sensor set-up, hardware, and multimodal user interface; a detailed description of the major software components and implemented robot tasks; a proof-of-concept user study (49 users) on usability, acceptance, and affordability.
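
As a rough illustration of the multimodal interaction paradigm described above, the sketch below shows a hypothetical dispatcher that maps recognized user intents (from speech, gesture, or the touch interface) to high-level robot tasks, with emergency requests preempting everything else. All names, thresholds, and task handlers are illustrative assumptions, not the Hobbit implementation.

    from dataclasses import dataclass
    from typing import Callable, Dict

    @dataclass
    class UserInput:
        modality: str      # "speech", "gesture", or "touch"
        intent: str        # e.g. "bring_object", "reminder", "help"
        confidence: float  # recognizer confidence in [0, 1]

    def bring_object() -> None:
        print("navigating to the object, grasping it, returning to the user")

    def give_reminder() -> None:
        print("announcing the reminder via text-to-speech")

    def handle_emergency() -> None:
        print("triggering emergency handling (contacting relatives / services)")

    TASKS: Dict[str, Callable[[], None]] = {
        "bring_object": bring_object,
        "reminder": give_reminder,
    }

    def dispatch(inp: UserInput, min_confidence: float = 0.6) -> None:
        # Emergency requests bypass the confidence check entirely.
        if inp.intent == "help":
            handle_emergency()
            return
        if inp.confidence < min_confidence:
            print("low recognizer confidence, asking the user to repeat")
            return
        TASKS.get(inp.intent, lambda: print("unknown request"))()

    dispatch(UserInput("speech", "bring_object", 0.8))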


Leveraging Applications of Formal Methods | 2011

Object Detection and Classification for Domestic Robots

Markus Vincze; Walter Wohlkinger; Sven Olufs; Peter Einramhof; Robert Schwarz; Karthik Mahesh Varadarajan

A main task for domestic robots is to navigate safely at home, find places, and detect objects. We set out to exploit the knowledge available to the robot to constrain the task of understanding the structure of its environment, i.e., the ground for safe motion and walls for localisation, and thereby to simplify object detection and classification. We start by exploiting the known geometry and kinematics of the robot to obtain ground point disparities. Combined with a histogram approach over patches in the disparity image, this considerably improves robustness. We then show that the stereo data can also be used for localisation and eventually for object detection and classification, and that this system-level approach improves detection and classification rates considerably.
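
A minimal sketch of the underlying idea, under simplifying assumptions of my own (rectified stereo, a level forward-looking camera at known height above a flat floor; all parameters invented): the disparity of a ground pixel can be predicted from the camera geometry alone, so pixels deviating from that prediction can be flagged as obstacles. This illustrates the principle only, not the authors' implementation.

    import numpy as np

    def expected_ground_disparity(rows, baseline_m, cam_height_m, v0):
        # For a level camera at height h, a floor point imaged at row v
        # (below the principal point row v0) lies at depth Z = f*h/(v - v0),
        # so its disparity is d = f*b/Z = b*(v - v0)/h (the focal length cancels).
        dv = np.maximum(rows - v0, 1e-6)     # rows at/above the horizon -> ~0
        return baseline_m * dv / cam_height_m

    def label_ground(disparity, baseline_m=0.12, cam_height_m=0.6,
                     v0=240.0, tol=2.0):
        n_rows = disparity.shape[0]
        rows = np.arange(n_rows, dtype=np.float64)[:, None]
        d_ground = expected_ground_disparity(rows, baseline_m, cam_height_m, v0)
        ground = np.abs(disparity - d_ground) < tol   # matches the floor model
        obstacle = disparity > d_ground + tol         # closer than the floor
        return ground, obstacle

    disp = np.random.uniform(0, 40, (480, 640))       # stand-in disparity map
    ground_mask, obstacle_mask = label_ground(disp)
    print(ground_mask.sum(), "ground pixels,", obstacle_mask.sum(), "obstacle pixels")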


International Conference on Image Analysis and Processing | 2007

Real-Time SLAM with a High-Speed CMOS Camera

Peter Gemeiner; Wolfgang Ponweiser; Peter Einramhof; Markus Vincze

A typical task in mobile robotics or augmented reality applications is self-localization in an unknown environment. In the robotics community, localizing in and mapping an unknown environment is called SLAM (simultaneous localization and mapping). Important constraints on SLAM with visual input are real-time processing and robustness against motion blur and jitter. The contribution of this paper is to enhance the performance of a well-known SLAM method using a high-speed CMOS camera. The benefits of this camera are fast image processing, little motion blur, and low localization uncertainty. SLAM performance with a high-speed camera is demonstrated on a robotic arm. The experiments show that for SLAM applications in robotics where the motion is not smooth, the high-speed CMOS camera is a more suitable sensor than a standard CCD camera.
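
A back-of-the-envelope illustration of why the higher frame rate helps (the numbers below are assumptions, not values from the paper): with a shorter inter-frame interval the image-space motion of a feature shrinks, so feature-matching search windows can be smaller and motion blur is reduced.

    import math

    def feature_motion_px(max_angular_rate_deg_s, focal_px, frame_rate_hz):
        # Approximate image-space motion per frame for a pure camera rotation:
        # delta_px ~= f * omega * dt  (small-angle approximation).
        omega = math.radians(max_angular_rate_deg_s)
        return focal_px * omega / frame_rate_hz

    for fps in (30, 100, 200):   # standard CCD rate vs. high-speed CMOS rates
        motion = feature_motion_px(max_angular_rate_deg_s=90,
                                   focal_px=400, frame_rate_hz=fps)
        print(f"{fps:3d} Hz -> expected feature motion of about {motion:.1f} px per frame")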


Elektrotechnik und Informationstechnik | 2008

Roboternavigation in Büros und Wohnungen (Robot navigation in offices and homes)

Markus Vincze; Sven Olufs; Peter Einramhof; Horst Wildenauer

Summary: This article presents developments that enable mobile robots to navigate in office and home environments. In particular, it is demonstrated that a vertically mounted stereo camera head improves the reliability of stereo data processing and is therefore particularly well suited for these applications. The stereo data is then used to realize the two most important functions of a mobile robot: obstacle detection and localization within rooms. Experiments in typical home environments and in a large furniture store show promising results with an accuracy in the range of about 10 cm. This is sufficient for most home and office applications, even though further improvement is desirable for cases such as traversing narrow doors.
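
A simplified sketch of how stereo data can feed obstacle detection (illustrative assumptions only, not the system described above): 3D points are expressed in the robot frame, and points above floor level are accumulated in a 2D grid that a motion planner can treat as occupied.

    import numpy as np

    def obstacle_grid(points_xyz, cell=0.05, extent=4.0,
                      floor_tol=0.05, max_height=1.5):
        # points_xyz: Nx3 array in the robot frame (x forward, y left, z up),
        # with z = 0 at floor level; the grid covers extent x extent metres
        # in front of the robot.
        n = int(extent / cell)
        grid = np.zeros((n, n), dtype=bool)
        x, y, z = points_xyz.T
        obst = (z > floor_tol) & (z < max_height)   # above floor, below head height
        ix = (x[obst] / cell).astype(int)
        iy = ((y[obst] + extent / 2) / cell).astype(int)
        ok = (ix >= 0) & (ix < n) & (iy >= 0) & (iy < n)
        grid[ix[ok], iy[ok]] = True
        return grid

    pts = np.random.uniform([-2, -2, 0], [4, 2, 2], size=(5000, 3))  # fake point cloud
    print("occupied cells:", obstacle_grid(pts).sum())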


Elektrotechnik und Informationstechnik | 2012

Situiertes Sehen für bessere Erkennung von Objekten und Objektklassen (Situated vision for better recognition of objects and object classes)

Markus Vincze; Walter Wohlkinger; Aitor Aldoma; Sven Olufs; Peter Einramhof; Kai Zhou; Ekaterina Potapova; David Fischinger; Michael Zillich

Summary: A main task for domestic robots is to recognize object categories. Image-based approaches learn from large databases but have no access to contextual knowledge such as is available to a robot navigating the rooms of a home. We therefore propose situated vision, an approach that exploits contextual knowledge about the task and the application to constrain and improve object classification. Based on the estimation of the free ground space in front of the robot, which is essential for safe navigation in a home setting, we show that self-localization, the detection of support surfaces, and the classification of objects can be greatly simplified. We further show that object classes can be efficiently acquired from 3D models on the Web if they are learned from automatically generated virtual 2.5D views that approximate what the robot's sensors see in the real world. We modelled 200 object classes (available from www.3d-net.org) and provide sample scene data for testing. Using the situated approach we can detect, for example, chairs with a 93 per cent detection rate.
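
The sketch below illustrates the general idea of using the supporting surface to constrain object classification, under simple assumptions of my own (a synthetic point cloud and a toy RANSAC plane fit); it is not the authors' pipeline. Once the dominant support plane is known, the points rising above it are the object candidates to be classified.

    import numpy as np

    def ransac_plane(pts, iters=200, thresh=0.01):
        # Tiny RANSAC: repeatedly fit a plane to three random points and keep
        # the model with the most inliers.
        rng = np.random.default_rng(0)
        best_count, best_model = 0, None
        for _ in range(iters):
            p = pts[rng.choice(len(pts), 3, replace=False)]
            normal = np.cross(p[1] - p[0], p[2] - p[0])
            norm = np.linalg.norm(normal)
            if norm < 1e-9:
                continue
            normal /= norm
            d = -normal @ p[0]
            count = (np.abs(pts @ normal + d) < thresh).sum()
            if count > best_count:
                best_count, best_model = count, (normal, d)
        return best_model

    # Synthetic scene: a horizontal table top at z = 0.75 m with a small
    # "object" cluster sitting on it.
    rng = np.random.default_rng(1)
    table = np.column_stack([rng.uniform(0, 1, 2000),
                             rng.uniform(0, 1, 2000),
                             np.full(2000, 0.75)])
    obj = rng.uniform([0.4, 0.4, 0.75], [0.5, 0.5, 0.95], (300, 3))
    pts = np.vstack([table, obj])

    normal, d = ransac_plane(pts)
    if normal[2] < 0:                     # make the plane normal point upwards
        normal, d = -normal, -d
    above = pts[pts @ normal + d > 0.02]  # points above the support surface
    print("plane normal:", np.round(normal, 2), "object-candidate points:", len(above))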


International Conference on Ubiquitous Robots and Ambient Intelligence | 2011

Range image analysis for controlling an adaptive 3D camera

Peter Einramhof; Robert Schwarz; Markus Vincze

Human vision is the reference when designing perception systems for cognitive service robots, especially its ability to quickly identify task-relevant regions in a scene and to foveate on these regions. An adaptive 3D camera currently under development aims at mimicking these properties to endow service robots with higher-level perception and interaction capabilities with respect to everyday objects and environments. A scene is coarsely scanned and analyzed. Based on the result of this analysis and the task at hand, relevant regions within the scene are identified and data acquisition is concentrated on the details of interest, allowing for higher-resolution 3D sampling of these details. To set the stage, we first briefly describe the sensor hardware and then focus on the analysis of the range images it captures. Two approaches, one based on saliency maps and the other on range image segmentation, are presented together with preliminary results.
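
A minimal, hypothetical sketch of the foveation idea (not the sensor's actual control software): analyze a coarse range image, score local windows with a crude "saliency" measure (here simply the depth-gradient magnitude), and return the window that should be rescanned at higher resolution.

    import numpy as np

    def most_salient_window(range_img, win=16):
        # Score win x win windows by summed depth-gradient magnitude, so
        # strong depth discontinuities attract the next, denser scan.
        gy, gx = np.gradient(range_img.astype(np.float64))
        saliency = np.hypot(gx, gy)
        best, best_score = (0, 0), -1.0
        h, w = saliency.shape
        for r in range(0, h - win + 1, win):
            for c in range(0, w - win + 1, win):
                score = saliency[r:r + win, c:c + win].sum()
                if score > best_score:
                    best, best_score = (r, c), score
        return best, win

    coarse = np.random.uniform(0.5, 4.0, (64, 80))   # stand-in coarse range scan
    (r0, c0), size = most_salient_window(coarse)
    print(f"rescan window: rows {r0}..{r0 + size}, cols {c0}..{c0 + size}")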


International Conference on Robotics and Automation | 2007

MOVEMENT - Modular Versatile Mobility Enhancement System

Peter Mayer; Georg Edelmayer; Gert Jan Gelderblom; Markus Vincze; Peter Einramhof; Marnix Nuttin; Thomas Fuxreiter; Gernot Kronreif


Proceedings ELMAR-2010 | 2010

Stereo-based real-time scene segmentation for a home robot

Peter Einramhof; Markus Vincze


Archive | 2011

Fast Range Image Segmentation for a Domestic Service Robot

Peter Einramhof; Robert Schwarz; Markus Vincze


German Conference on Robotics | 2010

Towards Bringing Robots into Homes

Markus Vincze; Walter Wohlkinger; Sven Olufs; Peter Einramhof; Robert Schwarz

Collaboration


Dive into Peter Einramhof's collaborations.

Top co-authors:

Markus Vincze - Vienna University of Technology
Sven Olufs - Vienna University of Technology
Robert Schwarz - Vienna University of Technology
Walter Wohlkinger - Vienna University of Technology
Peter Mayer - Vienna University of Technology
David Fischinger - Vienna University of Technology
Horst Wildenauer - Vienna University of Technology
Gert Jan Gelderblom - Zuyd University of Applied Sciences
Marnix Nuttin - Katholieke Universiteit Leuven
Aitor Aldoma - Vienna University of Technology