Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Andrea Scheidig is active.

Publication


Featured research published by Andrea Scheidig.


Robotics and Autonomous Systems | 2006

Multi-modal sensor fusion using a probabilistic aggregation scheme for people detection and tracking

Christian Märtin; Erik Schaffernicht; Andrea Scheidig; Horst-Michael Gross

Efficient and robust techniques for people detection and tracking are basic prerequisites when dealing with Human-Robot Interaction (HRI) in real-world scenarios. In this paper, we introduce a new approach for the integration of several sensor modalities and present a multi-modal, probability-based people detection and tracking system and its application using the different sensory systems of our mobile interaction robot HOROS. These include a laser range-finder, a sonar system, and a fisheye-based omni-directional camera. For each of these sensory systems, separate and specific Gaussian probability distributions are generated to model the belief in observing one or several persons. These probability distributions are further merged into a robot-centered map by means of a flexible probabilistic aggregation scheme based on Covariance Intersection (CI). The main advantages of this approach are the simple extensibility by the integration of further sensory channels, even with different update frequencies, and the usability in real-world HRI tasks. Finally, the first promising experimental results achieved for people detection and tracking in a real-world environment (our institute building) are presented.
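
The Covariance Intersection (CI) step named in this abstract can be illustrated with a short sketch. The code below is not taken from the paper; it is a minimal, generic CI implementation for fusing two Gaussian position estimates with unknown cross-correlation, as one might use to merge a laser-based and a camera-based person hypothesis into a common robot-centered estimate. Choosing the mixing weight by minimizing the trace of the fused covariance is one common heuristic, and the sensor values below are purely illustrative.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def covariance_intersection(mean_a, cov_a, mean_b, cov_b):
    """Fuse two Gaussian estimates with unknown cross-correlation via CI.

    The mixing weight omega is selected by minimizing the trace of the
    fused covariance; returns the fused mean and covariance.
    """
    inv_a, inv_b = np.linalg.inv(cov_a), np.linalg.inv(cov_b)

    def fused_cov_trace(omega):
        info = omega * inv_a + (1.0 - omega) * inv_b
        return np.trace(np.linalg.inv(info))

    omega = minimize_scalar(fused_cov_trace, bounds=(0.0, 1.0), method="bounded").x
    info = omega * inv_a + (1.0 - omega) * inv_b
    cov_f = np.linalg.inv(info)
    mean_f = cov_f @ (omega * inv_a @ mean_a + (1.0 - omega) * inv_b @ mean_b)
    return mean_f, cov_f

# Example: merge a laser-based and a camera-based person position hypothesis
# (coordinates in meters, robot-centered frame; values are illustrative only).
laser_mean = np.array([1.2, 0.4]);  laser_cov = np.diag([0.02, 0.02])
camera_mean = np.array([1.3, 0.5]); camera_cov = np.diag([0.10, 0.25])
mean, cov = covariance_intersection(laser_mean, laser_cov, camera_mean, camera_cov)
print("fused mean:", mean)
print("fused covariance:\n", cov)
```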


international conference on robotics and automation | 2007

Adaptive Noise Reduction and Voice Activity Detection for improved Verbal Human-Robot Interaction using Binaural Data

Robert Brueckmann; Andrea Scheidig; Horst-Michael Gross

Speech has become an important part of human-robot interaction (HRI), e.g. for person detection systems using localized sound sources or for applications in automatic speech recognition (ASR) systems. When using speech in HRI in real-world environments, we have to deal with mostly high and varying background noise, reverberation, and different sound sources superimposing speech and other noises. Therefore, suitable signal preprocessing is essential for real-world scenarios. In this paper, we present a part of the artificial auditory system implemented on the mobile interaction robot HOROS using only two low-cost microphones. We combined neural voice activity detection (VAD) and adaptive noise reduction, which are essential for HRI with mobile robot systems in changing and populated real-world environments. As a result, our system is able to react robustly to speech signals from its human interaction partner while ignoring other sound sources. Experiments show a significantly improved ASR performance in demanding environments, making the system suitable for use in real-world scenarios.
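
As a rough illustration of where the two components named above sit in a speech front end, the following sketch is a deliberately simplified stand-in: it replaces the paper's neural VAD and adaptive binaural noise reduction with a plain energy-threshold VAD and single-channel spectral subtraction on synthetic data. The frame length, thresholds, and the use of the first frames as a noise estimate are assumptions made only for this example.

```python
import numpy as np

def spectral_subtraction(frames, noise_mag, alpha=2.0, floor=0.02):
    """Very simple noise reduction: subtract a noise magnitude estimate from
    each frame's spectrum and resynthesize (no overlap-add refinements)."""
    spec = np.fft.rfft(frames, axis=1)
    mag, phase = np.abs(spec), np.angle(spec)
    cleaned = np.maximum(mag - alpha * noise_mag, floor * mag)
    return np.fft.irfft(cleaned * np.exp(1j * phase), n=frames.shape[1], axis=1)

def energy_vad(frames, noise_energy, threshold=3.0):
    """Flag frames whose short-time energy exceeds a multiple of the noise energy."""
    energy = np.mean(frames ** 2, axis=1)
    return energy > threshold * noise_energy

# Toy usage: a 1 s, 16 kHz signal split into 32 ms frames; the first 10 frames
# are assumed to contain only background noise and are used to estimate it.
fs, frame_len = 16000, 512
signal = np.random.randn(fs)  # stand-in for one recorded microphone channel
n_frames = len(signal) // frame_len
frames = signal[: n_frames * frame_len].reshape(n_frames, frame_len)

noise_mag = np.abs(np.fft.rfft(frames[:10], axis=1)).mean(axis=0)
noise_energy = np.mean(frames[:10] ** 2)

denoised = spectral_subtraction(frames, noise_mag)
speech_frames = energy_vad(denoised, noise_energy)
print(f"{speech_frames.sum()} of {n_frames} frames flagged as speech")
```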


intelligent robots and systems | 2015

Robot companion for domestic health assistance: Implementation, test and case study under everyday conditions in private apartments

Horst-Michael Gross; Steffen Mueller; Christof Schroeter; Michael Volkhardt; Andrea Scheidig; Klaus Debes; Katja Richter; Nicola Doering

This paper presents the implementation and evaluation results of the German research project SERROGA (2012 to mid-2015), which aimed at developing a robot companion for domestic health assistance for older people, helping to keep them physically and mentally fit so that they can continue living independently in their own homes for as long as possible. The paper gives an overview of the developed companion robot, its system architecture, and the essential skills, behaviors, and services required for a robotic health assistant. Moreover, it presents a new approach that allows a quantitative description and assessment of the navigation complexity of apartments, making them objectively comparable for function tests under real-life conditions. Based on this approach, the results of function tests executed in 12 apartments of project staff and seniors are described. Furthermore, the paper presents findings of a case study conducted with nine seniors (aged 68-92) in their own homes, investigating both instrumental and social-emotional functions of a robotic health assistant. The robot accompanied the seniors in their homes for up to three days, assisting with tasks of their daily schedule and health care, without any supervising person being present on-site. Results revealed that the seniors appreciated the robot's health-related instrumental functions and even built emotional bonds with it.


systems, man and cybernetics | 2014

Mobile Robotic Rehabilitation Assistant for walking and orientation training of Stroke Patients: A report on work in progress

Horst-Michael Gross; Klaus Debes; Erik Einhorn; S. Mueller; Andrea Scheidig; Ch. Weinrich; Andreas Bley; Ch. Martin

As a report on work in progress, this paper describes the objectives and the current state of implementation of the ongoing research project ROREAS (Robotic Rehabilitation Assistant for Stroke Patients), which aims at developing a robotic rehabilitation assistant for walking and orientation exercising during self-training in clinical stroke follow-up care. This requires strongly user-centered, polite, and attentive social navigation and interaction behaviors that can motivate the patients to start, continue, and regularly repeat their self-training. Against this background, the paper gives an overview of the constraints and requirements arising from the rehabilitation scenario and the operational environment, a heavily populated multi-level rehabilitation center, and presents the robot platform ROREAS, which is currently used for developing the demonstrators (walking coach and orientation coach). Moreover, it gives an overview of the robot's functional system architecture and presents selected advanced navigation and HRI functionalities required for a personal robotic trainer that can successfully operate in such a challenging real-world environment, together with results of the ongoing functionality tests and an outlook on the upcoming user studies.


International Journal of Advanced Robotic Systems | 2007

A Monocular Pointing Pose Estimator for Gestural Instruction of a Mobile Robot

Jan Richarz; Andrea Scheidig; Christian Märtin; Steffen Müller; Horst-Michael Gross

We present an important aspect of our human-robot communication interface, which is being developed in the context of our long-term research framework PERSES dealing with highly interactive mobile companion robots. Based on a multi-modal people detection and tracking system, we present a hierarchical neural architecture that estimates a target point on the floor indicated by a pointing pose, thus enabling a user to navigate a mobile robot to a specific target position in his local surroundings by means of pointing. In this context, we were especially interested in determining whether it is possible to realize such a target point estimator using only monocular images from low-cost cameras. The estimator has been implemented and experimentally investigated on our mobile robotic assistant HOROS. Although only monocular image data of relatively poor quality were used, the estimator achieves good estimation performance, with an accuracy better than that of a human viewer on the same data. The achieved recognition results demonstrate that it is in fact possible to realize user-independent pointing direction estimation using monocular images only, but further efforts are necessary to improve the robustness of this approach for everyday application.


robot and human interactive communication | 2006

Generating Persons Movement Trajectories on a Mobile Robot

Andrea Scheidig; Steffen Mueller; Christian Märtin; Horst-Michael Gross

For socially interactive robots, it is essential to be able to estimate the interest of people in interacting with them. Based on this estimation, the robot can adapt its dialog strategy to the behaviors of different people. Consequently, efficient and robust techniques for people detection and tracking are basic prerequisites when dealing with human-robot interaction (HRI) in real-world scenarios. In this paper, we introduce an approach for the integration of several sensor modalities and present a multimodal, probability-based people detection and tracking system and its application using the different sensory systems of our mobile interaction robot HOROS. For each of these sensory cues, separate and specific Gaussian-distributed hypotheses are generated and further merged into a robot-centered map by means of a flexible probabilistic aggregation scheme based on covariance intersection (CI). The main advantages of this approach are the simple extensibility by integration of further sensory channels, even with different update frequencies, and the usability in real-world HRI tasks. Finally, promising experimental results achieved for people tracking in a real-world environment, a university building, are presented.


Autonomous Robots | 2017

ROREAS: robot coach for walking and orientation training in clinical post-stroke rehabilitation--prototype implementation and evaluation in field trials

Horst-Michael Gross; Andrea Scheidig; Klaus Debes; Erik Einhorn; Markus Eisenbach; Steffen Mueller; Thomas Schmiedel; Thanh Q. Trinh; Christoph Weinrich; Tim Wengefeld; Andreas Bley; Christian Märtin

This paper describes the objectives and the state of implementation of the ROREAS project, which aims at developing a socially assistive robot coach for walking and orientation training of stroke patients in clinical rehabilitation. The robot coach is to autonomously accompany the patients during their exercises practicing their mobility skills. This requires strongly user-centered, polite, and attentive social navigation and interaction abilities that can motivate the patients to start, continue, and regularly repeat their self-training. The paper gives an overview of the training scenario and describes the constraints and requirements arising from the scenario and the operational environment. Moreover, it presents the mobile robot ROREAS and gives an overview of the robot's system architecture and the required human- and situation-aware navigation and interaction skills. Finally, it describes our three-stage approach to conducting function and user tests in the clinical environment: pre-tests with technical staff, followed by function tests with clinical staff and user trials with volunteers from the group of stroke patients, and presents the results of the tests conducted so far.


robot and human interactive communication | 2008

Whom to talk to? Estimating user interest from movement trajectories

Steffen Müller; Sven Hellbach; Erik Schaffernicht; Antje Ober; Andrea Scheidig; Horst-Michael Gross

Correctly identifying people who are interested in an interaction with a mobile robot is an essential task for smart Human-Robot Interaction. In this paper, an approach is presented for selecting suitable trajectory features in a task-specific manner from a large number of possible representations. Different sub-sampling techniques are proposed to generate trajectory sequences from which features are extracted. The trajectory data were generated in real-world experiments that include extensive user interviews to acquire information about user behaviors and intentions. Using these feature vectors in a classification method enables the robot to estimate the user's interaction interest. For generating low-dimensional feature vectors, a common method, Principal Component Analysis, is applied. The selection and combination of useful features out of a set of possible features is carried out by an information-theoretic approach based on the Mutual Information and Joint Mutual Information with respect to the user's interaction interest. The introduced procedure is evaluated with neural classifiers, which are trained with the extracted features of the trajectories and the user behavior gained by observation as well as user interviews. The results achieved indicate that an estimation of the user's interaction interest using trajectory information is feasible.
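
The feature-selection idea sketched in this abstract can be mimicked with off-the-shelf tools. The snippet below is not the authors' pipeline: it uses synthetic stand-in trajectory features, applies PCA for dimensionality reduction, ranks the resulting components by mutual information with a binary interaction-interest label (the paper additionally uses Joint Mutual Information, omitted here), and trains a small neural classifier on the selected components.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.feature_selection import mutual_info_classif
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Synthetic stand-in for trajectory features: 500 observed trajectories with
# 40 candidate features each and a binary "interested in interaction" label.
X = rng.normal(size=(500, 40))
y = (X[:, 3] + 0.5 * X[:, 17] + 0.3 * rng.normal(size=500) > 0).astype(int)

# Reduce dimensionality with PCA, then rank the components by their mutual
# information with the interaction-interest label and keep the best ones.
X_pca = PCA(n_components=15, random_state=0).fit_transform(X)
mi = mutual_info_classif(X_pca, y, random_state=0)
selected = np.argsort(mi)[::-1][:5]

# Train a small neural classifier on the selected components only.
X_train, X_test, y_train, y_test = train_test_split(
    X_pca[:, selected], y, test_size=0.3, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0)
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
```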


robot and human interactive communication | 2006

There You Go! - Estimating Pointing Gestures In Monocular Images For Mobile Robot Instruction

Jan Richarz; Christian Märtin; Andrea Scheidig; Horst-Michael Gross

In this paper, we present a neural architecture that is capable of estimating a target point from a pointing gesture, thus enabling a user to command a mobile robot to a specific position in his local surroundings by means of pointing. In this context, we were especially interested in determining whether it is possible to implement a target point estimator using only monocular images from low-cost webcams. The feature extraction is also quite straightforward: we use a Gabor jet to extract the feature vector from the normalized camera images, and a cascade of multi-layer perceptron (MLP) classifiers as the estimator. The system was implemented and tested on our mobile robotic assistant HOROS. The results indicate that it is in fact possible to realize a pointing estimator using monocular image data, but further efforts are necessary to improve the accuracy and robustness of our approach.
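
A rough sketch of this kind of pipeline is given below. It is not the paper's implementation: the Gabor kernel parameters, the 64x64 crop size, the sampling grid, and the training data are all invented for illustration, and a single multi-output MLP regressor stands in for the paper's cascade of MLP classifiers. The sketch only shows the general shape of the approach: Gabor jet features computed from a normalized image, fed to a neural estimator of a floor target point.

```python
import cv2
import numpy as np
from sklearn.neural_network import MLPRegressor

def gabor_jet(image, points, thetas=(0, np.pi / 4, np.pi / 2, 3 * np.pi / 4),
              lambdas=(8.0, 16.0)):
    """Compute a simple Gabor jet: magnitudes of Gabor filter responses at a
    few image locations, for several orientations and wavelengths."""
    responses = []
    for theta in thetas:
        for lambd in lambdas:
            kernel = cv2.getGaborKernel((31, 31), sigma=4.0, theta=theta,
                                        lambd=lambd, gamma=0.5, psi=0)
            filtered = cv2.filter2D(image.astype(np.float32), cv2.CV_32F, kernel)
            responses.append([abs(filtered[y, x]) for (y, x) in points])
    return np.concatenate(responses)

# Toy training setup: grayscale person-region crops (hypothetical 64x64
# normalization) with annotated floor target points (x, y in meters).
rng = np.random.default_rng(0)
grid = [(y, x) for y in range(8, 64, 16) for x in range(8, 64, 16)]
images = rng.integers(0, 255, size=(200, 64, 64), dtype=np.uint8)
targets = rng.uniform(-2.0, 2.0, size=(200, 2))

features = np.array([gabor_jet(img, grid) for img in images])
estimator = MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
estimator.fit(features, targets)
print("predicted floor target:", estimator.predict(features[:1]))
```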


Lecture Notes in Computer Science | 2005

A probabilistic multimodal sensor aggregation scheme applied for a mobile robot

Erik Schaffernicht; Christian Märtin; Andrea Scheidig; Horst-Michael Gross

When dealing with methods of human-robot interaction on a real mobile robot, stable methods for people detection and tracking are fundamental features of such a system and require information from different sensory systems. In this paper, we discuss a new approach for integrating several sensor modalities and present a multimodal people detection and tracking system and its application using the different sensory systems of our mobile interaction robot Horos working in a real office environment. These include a laser range-finder, a sonar system, and a fisheye-based omnidirectional camera. For each of these sensory inputs, a separate Gaussian probability distribution is generated to model the belief of the observation of a person. These probability distributions are further combined using a flexible probabilistic aggregation scheme. The main advantages of this approach are the simple integration of further sensory channels, even with different update frequencies, and the usability in real-world environments. Finally, promising experimental results achieved in a real office environment are presented.
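
To make the notion of per-sensor Gaussian beliefs in a robot-centered map concrete, here is a generic sketch (not the paper's code): each detection is rasterized as a 2D Gaussian over a robot-centered grid with a sensor-specific covariance, and two such maps are combined by a naive normalized product. The grid size, resolution, and covariances are illustrative assumptions; the paper's aggregation scheme is more flexible than this simple product (see the Covariance Intersection sketch further up).

```python
import numpy as np

def gaussian_belief_map(detection_xy, cov, grid_size=4.0, resolution=0.1):
    """Rasterize a single person detection as a 2D Gaussian belief over a
    robot-centered grid (robot at the origin, x forward, y left)."""
    coords = np.arange(-grid_size, grid_size, resolution)
    xx, yy = np.meshgrid(coords, coords, indexing="ij")
    diff = np.stack([xx - detection_xy[0], yy - detection_xy[1]], axis=-1)
    inv_cov = np.linalg.inv(cov)
    maha = np.einsum("...i,ij,...j->...", diff, inv_cov, diff)
    belief = np.exp(-0.5 * maha)
    return belief / belief.sum()

# One laser-based and one camera-based hypothesis for the same person,
# with sensor-specific uncertainties (values are illustrative only).
laser_map = gaussian_belief_map((1.5, 0.2), np.diag([0.02, 0.02]))
camera_map = gaussian_belief_map((1.6, 0.3), np.diag([0.15, 0.30]))

# Naive fusion by a normalized product of the two belief maps.
fused = laser_map * camera_map
fused /= fused.sum()
peak = np.unravel_index(np.argmax(fused), fused.shape)
print("fused belief peaks at grid cell:", peak)
```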

Collaboration


Dive into Andrea Scheidig's collaborations.

Top Co-Authors

Horst-Michael Gross
Technische Universität Ilmenau

Christian Märtin
Augsburg University of Applied Sciences

Steffen Mueller
Technische Universität Ilmenau

Klaus Debes
Technische Universität Ilmenau

Steffen Müller
Technische Universität Ilmenau

Thanh Q. Trinh
Technische Universität Ilmenau

Tim Wengefeld
Technische Universität Ilmenau

Markus Eisenbach
Technische Universität Ilmenau

Christof Schroeter
Technische Universität Ilmenau