Publication


Featured research published by Andre Gaschler.


international conference on multimodal interfaces | 2012

Two people walk into a bar: dynamic multi-party social interaction with a robot agent

Mary Ellen Foster; Andre Gaschler; Manuel Giuliani; Amy Isard; Maria Pateraki; Ronald P. A. Petrick

We introduce a humanoid robot bartender that is capable of dealing with multiple customers in a dynamic, multi-party social setting. The robot system incorporates state-of-the-art components for computer vision, linguistic processing, state management, high-level reasoning, and robot control. In a user evaluation, 31 participants interacted with the bartender in a range of social situations. Most customers successfully obtained a drink from the bartender in all scenarios, and the factors that had the greatest impact on subjective satisfaction were task success and dialogue efficiency.


international conference on multimodal interfaces | 2013

Comparing task-based and socially intelligent behaviour in a robot bartender

Manuel Giuliani; Ronald P. A. Petrick; Mary Ellen Foster; Andre Gaschler; Amy Isard; Maria Pateraki; Markos Sigalas

We address the question of whether service robots that interact with humans in public spaces must express socially appropriate behaviour. To do so, we implemented a robot bartender which is able to take drink orders from humans and serve drinks to them. By using a high-level automated planner, we explore two different robot interaction styles: in the task-only setting, the robot simply fulfils its goal of asking customers for drink orders and serving them drinks; in the socially intelligent setting, the robot additionally acts in a manner socially appropriate to the bartender scenario, based on the behaviour of humans observed in natural bar interactions. The results of a user study show that the interactions with the socially intelligent robot were somewhat more efficient, but the two implemented behaviour settings had only a small influence on the subjective ratings. However, there were objective factors that influenced participant ratings: the overall duration of the interaction had a positive influence on the ratings, while the number of system order requests had a negative influence. We also found a cultural difference: German participants gave the system higher pre-test ratings than participants who interacted in English, although the post-test scores were similar.


intelligent robots and systems | 2013

KVP: A knowledge of volumes approach to robot task planning

Andre Gaschler; Ronald P. A. Petrick; Manuel Giuliani; Markus Rickert; Alois Knoll

Robot task planning is an inherently challenging problem, as it covers both continuous-space geometric reasoning about robot motion and perception, as well as purely symbolic knowledge about actions and objects. This paper presents a novel “knowledge of volumes” framework for solving generic robot tasks in partially known environments. In particular, this approach (abbreviated, KVP) combines the power of symbolic, knowledge-level AI planning with the efficient computation of volumes, which serve as an intermediate representation for both robot action and perception. While we demonstrate the effectiveness of our framework in a bimanual robot bartender scenario, our approach is also more generally applicable to tasks in automation and mobile manipulation, involving arbitrary numbers of manipulators.
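The central idea of KVP, volumes as the bridge between symbolic preconditions and geometric tests, can be illustrated with a toy sketch. This is not the authors' implementation: the `Box` class, the axis-aligned-box geometry, and the `volume_free` predicate are simplified stand-ins for the swept-volume computations the paper describes.

```python
# Illustrative sketch: volumes as an intermediate representation between
# symbolic planning and geometry. A symbolic precondition such as
# "target volume is free" is grounded in a geometric intersection test.

from dataclasses import dataclass

@dataclass
class Box:
    """Axis-aligned bounding box, a crude stand-in for a swept volume."""
    lo: tuple  # (x, y, z) minimum corner
    hi: tuple  # (x, y, z) maximum corner

    def intersects(self, other: "Box") -> bool:
        # Boxes overlap iff their extents overlap on every axis.
        return all(self.lo[i] < other.hi[i] and other.lo[i] < self.hi[i]
                   for i in range(3))

def volume_free(swept: Box, obstacles: list) -> bool:
    """Symbolic predicate backed by a geometric test."""
    return not any(swept.intersects(o) for o in obstacles)

# A pick action is applicable only if the gripper's swept volume is free.
gripper_sweep = Box((0.0, 0.0, 0.0), (0.2, 0.2, 0.5))
obstacles = [Box((0.5, 0.5, 0.0), (0.7, 0.7, 0.3))]
print(volume_free(gripper_sweep, obstacles))  # True: no overlap
```

In the paper's framework, such predicates let a knowledge-level planner reason about actions without handling raw geometry directly.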


international conference on multimodal interfaces | 2013

How can I help you? Comparing engagement classification strategies for a robot bartender

Mary Ellen Foster; Andre Gaschler; Manuel Giuliani

A robot agent existing in the physical world must be able to understand the social states of the human users it interacts with in order to respond appropriately. We compared two implemented methods for estimating the engagement state of customers for a robot bartender based on low-level sensor data: a rule-based version derived from the analysis of human behaviour in real bars, and a trained version using supervised learning on a labelled multimodal corpus. We first compared the two implementations using cross-validation on real sensor data and found that nearly all classifier types significantly outperformed the rule-based classifier. We also carried out feature selection to see which sensor features were the most informative for the classification task, and found that the position of the head and hands were relevant, but that the torso orientation was not. Finally, we performed a user study comparing the ability of the two classifiers to detect the intended user engagement of actual customers of the robot bartender; this study found that the trained classifier was faster at detecting initial intended user engagement, but that the rule-based classifier was more stable.
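The flavor of the rule-based approach can be sketched as hand-tuned thresholds on low-level sensor features. The feature names and threshold values below are hypothetical illustrations, not the rules derived in the paper from real-bar behaviour analysis.

```python
# Hypothetical sketch of a rule-based engagement classifier: a customer is
# judged to seek engagement when sensor features cross hand-tuned thresholds.
# Features and thresholds here are invented for illustration.

def seeks_engagement(distance_to_bar_m: float,
                     head_yaw_to_robot_deg: float) -> bool:
    close_to_bar = distance_to_bar_m < 0.6            # within reach of the counter
    facing_robot = abs(head_yaw_to_robot_deg) < 20.0  # roughly looking at the robot
    return close_to_bar and facing_robot

print(seeks_engagement(0.4, 5.0))   # True: close and facing
print(seeks_engagement(1.5, 5.0))   # False: too far away
```

The trained alternative replaces such fixed thresholds with a classifier fitted to a labelled multimodal corpus, which the study found faster at detecting initial engagement.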


intelligent robots and systems | 2013

Virtual whiskers — Highly responsive robot collision avoidance

Thomas Schlegl; Torsten Kröger; Andre Gaschler; Oussama Khatib; Hubert Zangl

All mammals but humans use whiskers in order to rapidly acquire information about objects in the vicinity of the head. Collisions of the head and objects can be avoided as the contact point is moved from the body surface to the whiskers. Such behavior is also highly desirable in many robot tasks, such as human-robot interaction. Using novel capacitive proximity sensors, robots sense when they approach a human (or an object) and react before they actually collide with it. We propose a sensor and control concept that mimics the behavior of whiskers by means of capacitive sensors. Major advantages are the absence of physical whiskers, the absence of blind spots, and a very short response time. The sensors are flexible and thin, so that they feature skin-like properties and can be attached to various robotic link and joint shapes. In comparison to capacitive proximity sensors, the proposed virtual whiskers offer better sensitivity towards small conductive as well as non-conductive objects. Equipped with the new proximity sensors, a seven-joint robot for human-robot interaction tasks shows the efficiency and responsiveness of our concept.


Signal Processing | 2015

Combining unsupervised learning and discrimination for 3D action recognition

Guang Chen; Daniel Clarke; Manuel Giuliani; Andre Gaschler; Alois Knoll

Previous work on 3D action recognition has focused on using hand-designed features, either from depth videos or 2D videos. In this work, we present an effective way to combine unsupervised feature learning with discriminative feature mining. Unsupervised feature learning allows us to extract spatio-temporal features from unlabeled video data. With this, we can avoid the cumbersome process of designing feature extraction by hand. We propose an ensemble approach using a discriminative learning algorithm, where each base learner is a discriminative multi-kernel-learning classifier, trained to learn an optimal combination of joint-based features. Our evaluation includes a comparison to state-of-the-art methods on the MSRAction 3D dataset, where our method, abbreviated EnMkl, outperforms earlier methods. Furthermore, we analyze the efficiency of our approach in a 3D action recognition system. Highlights: We deal with recognizing 3D human actions by combining two ideas: unsupervised feature learning and discriminative feature mining. We are the first to use unsupervised learning to represent 3D depth video data. We propose an ensemble approach with a discriminative multi-kernel learning algorithm to model 3D human actions.
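The multi-kernel-learning building block amounts to scoring similarity with a weighted sum of base kernels, one per feature channel. A minimal sketch, assuming RBF base kernels and fixed weights (in the paper the weights are learned by the MKL classifier):

```python
# Minimal sketch of the multi-kernel idea: a combined kernel is a weighted
# sum of per-channel base kernels (here RBF), one per joint-based feature
# channel. The weights are fixed for illustration; MKL would learn them.

import math

def rbf(x, y, gamma=1.0):
    """Gaussian RBF kernel between two feature vectors."""
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, y)))

def combined_kernel(x_channels, y_channels, weights, gamma=1.0):
    """Weighted sum of per-channel RBF kernels; weights sum to 1."""
    return sum(w * rbf(x, y, gamma)
               for w, x, y in zip(weights, x_channels, y_channels))

# Two feature channels (e.g. two joints), equal weights.
x = [(0.0, 0.0), (1.0, 1.0)]
y = [(0.0, 0.0), (1.0, 1.0)]
print(combined_kernel(x, y, [0.5, 0.5]))  # 1.0: identical inputs
```

Because each weight gates one feature channel, the learned weights indicate which joints contribute most to discriminating an action class.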


international conference on robotics and automation | 2014

Action Recognition Using Ensemble Weighted Multi-Instance Learning

Guang Chen; Manuel Giuliani; Daniel Clarke; Andre Gaschler; Alois Knoll

This paper deals with recognizing human actions in depth video data. Current state-of-the-art action recognition methods use hand-designed features, which are difficult to produce and time-consuming to extend to new modalities. In this paper, we propose a novel 3.5D representation of a depth video for action recognition. A 3.5D graph of the depth video consists of a set of nodes that are the joints of the human body. Each joint is represented by a set of spatio-temporal features, which are computed by an unsupervised learning approach. However, if occlusions occur, the 3D positions of the joints are noisy, which increases the intra-class variations in action classes. To address this problem, we propose the Ensemble Weighted Multi-Instance Learning approach (EnwMi) for the action recognition task. It considers the class imbalance and intra-class variations. We formulate the action recognition task with depth videos as a weighted multi-instance problem. We further integrate an ensemble learning method into the weighted multi-instance learning framework. Our approach is evaluated on the Microsoft Research Action3D dataset, and the results show that it outperforms state-of-the-art methods.


intelligent robots and systems | 2012

Social behavior recognition using body posture and head pose for human-robot interaction

Andre Gaschler; Sören Jentzsch; Manuel Giuliani; Kerstin Huth; Jan de Ruiter; Alois Knoll

Robots that interact with humans in everyday situations need to be able to interpret the nonverbal social cues of their human interaction partners. We show that humans use body posture and head pose as social signals to initiate and terminate interaction when ordering drinks at a bar. To this end, we record and analyze 108 interactions of humans interacting with a human bartender. Based on these findings, we train a Hidden Markov Model (HMM) using automatic body posture and head pose estimation. With this model, the bartender robot of the project JAMES can recognize typical social behaviors of human customers. Evaluation shows a recognition rate of 82.9% for all implemented social behaviors and, in particular, a recognition rate of 91.2% for bartender attention requests, which allows the robot to interact with multiple humans in a robust and socially appropriate way.
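The recognition step of such an HMM can be sketched with the standard forward algorithm, which scores how likely an observation sequence is under a trained model. The states, observation symbols, and probabilities below are invented placeholders; the paper's model is trained on automatically estimated posture and head-pose data.

```python
# Toy sketch of HMM-based behavior scoring via the forward algorithm.
# States and observations are invented placeholders for illustration.

def forward_likelihood(obs, start, trans, emit):
    """P(obs | model): forward algorithm over a discrete HMM."""
    n = len(start)
    # Initialize with the start distribution and the first observation.
    alpha = [start[s] * emit[s][obs[0]] for s in range(n)]
    for o in obs[1:]:
        # Propagate through the transition matrix, then weight by emission.
        alpha = [sum(alpha[p] * trans[p][s] for p in range(n)) * emit[s][o]
                 for s in range(n)]
    return sum(alpha)

# 2 hidden states (e.g. "approaching", "ordering") and 2 observation
# symbols (e.g. 0 = facing away, 1 = facing the bartender).
start = [0.8, 0.2]
trans = [[0.7, 0.3], [0.2, 0.8]]
emit  = [[0.9, 0.1], [0.2, 0.8]]
print(forward_likelihood([1, 1, 1], start, trans, emit))
```

Classifying a behavior then amounts to running one HMM per behavior class and picking the model with the highest likelihood for the observed sequence.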


european conference on computer vision | 2014

Multi-modality Gesture Detection and Recognition with Un-supervision, Randomization and Discrimination

Guang Chen; Daniel Clarke; Manuel Giuliani; Andre Gaschler; Di Wu; David Weikersdorfer; Alois Knoll

We describe in this paper our gesture detection and recognition system for the 2014 ChaLearn Looking at People challenge (Track 3: Gesture Recognition), organized by ChaLearn in conjunction with the ECCV 2014 conference. The competition's task was to learn a vocabulary of 20 types of Italian gestures and detect them in sequences. Our system adopts a multi-modality approach for detecting as well as recognizing the gestures. The goal of our approach is to identify semantically meaningful contents from dense sampling spatio-temporal feature space for gesture recognition. To achieve this, we develop three concepts under the random forest framework: un-supervision, discrimination, and randomization. Un-supervision learns spatio-temporal features from two channels (grayscale and depth) of RGB-D video in an unsupervised way. Discrimination extracts the information in dense sampling spatio-temporal space effectively. Randomization explores the dense sampling spatio-temporal feature space efficiently. An evaluation of our approach shows that we achieve a mean Jaccard Index of 0.6489 and a mean average accuracy of 90.3% over the test dataset.


intelligent robots and systems | 2012

Single camera visual odometry based on Random Finite Set Statistics

Feihu Zhang; Hauke Stähle; Andre Gaschler; Christian Buckl; Alois Knoll

This paper presents a novel approach based on Random Finite Set (RFS) Statistics for estimating a vehicle's trajectory in complex urban environments by using a fixed single camera. For this, we extend our earlier work, which used Probability Hypothesis Density (PHD) filtering in a sensor fusion framework, and are among the first to apply this technique to visual odometry in real traffic scenes. We consider features acquired from the camera as a group of targets, use the PHD filter to update the overall group state, and then estimate the ego-motion vector of the camera. Compared to other approaches, ours is a recursive filtering algorithm that provides dynamic estimation of multiple-target states in the presence of clutter and avoids the data association problem. Experimental results show that this method provides good robustness in real traffic scenarios.
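The underlying idea of treating image features as a group of targets whose common motion reveals the camera's ego-motion can be sketched very naively. The paper does this robustly with a PHD filter; the version below just takes the median displacement of matched features between two frames, which already tolerates some clutter.

```python
# Naive sketch (not the paper's PHD filter): estimate the camera's image-space
# ego-motion as the median displacement of matched features, so that a few
# clutter matches do not corrupt the estimate.

import statistics

def ego_motion(matches):
    """matches: list of ((x0, y0), (x1, y1)) feature correspondences.
    Returns the median (dx, dy) image shift as a crude ego-motion estimate."""
    dxs = [x1 - x0 for (x0, _), (x1, _) in matches]
    dys = [y1 - y0 for (_, y0), (_, y1) in matches]
    return statistics.median(dxs), statistics.median(dys)

# Most features shift by (2, 0); the clutter match is ignored by the median.
matches = [((0, 0), (2, 0)), ((5, 5), (7, 5)), ((9, 1), (11, 1)),
           ((3, 3), (40, 40))]  # outlier
print(ego_motion(matches))  # (2.0, 0.0)
```

A PHD filter replaces the median with a recursive multi-target density update, which additionally handles target birth, death, and missed detections over time.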

Collaboration


Dive into Andre Gaschler's collaborations.

Top Co-Authors

Amy Isard

University of Edinburgh
