Publication


Featured research published by Sven Wachsmuth.


Intelligent Robots and Systems | 2002

Multi-modal human-machine communication for instructing robot grasping tasks

Patrick C. McGuire; Jannik Fritsch; Jochen J. Steil; Frank Röthling; Gernot A. Fink; Sven Wachsmuth; Gerhard Sagerer; Helge Ritter

A major challenge for the realization of intelligent robots is to supply them with cognitive abilities in order to allow ordinary users to program them easily and intuitively. One approach to such programming is teaching work tasks by interactive demonstration. To make this effective and convenient for the user, the machine must be capable of establishing a common focus of attention and be able to use and integrate spoken instructions, visual perception, and non-verbal clues like gestural commands. We report progress in building a hybrid architecture that combines statistical methods, neural networks, and finite state machines into an integrated system for instructing grasping tasks by man-machine interaction. The system combines the GRAVIS-robot for visual attention and gestural instruction with an intelligent interface for speech recognition and linguistic interpretation, and a modality fusion module to allow multi-modal task-oriented man-machine communication with respect to dextrous robot manipulation of objects.
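
A rough, hypothetical sketch of the kind of finite-state dialog control plus modality fusion the abstract describes (names, weights, and scores are illustrative and not taken from the GRAVIS system): a spoken object description and a pointing-gesture hypothesis are fused into one grasp target, and a small state machine asks for confirmation before grasping.

```python
from dataclasses import dataclass

@dataclass
class ObjectHypothesis:
    name: str            # label from visual recognition
    position: tuple      # (x, y) in table coordinates
    speech_score: float  # match against the spoken description
    gesture_score: float # proximity to the pointing direction

def fuse(hypotheses, w_speech=0.6, w_gesture=0.4):
    """Late fusion: weighted combination of speech and gesture evidence."""
    return max(hypotheses, key=lambda h: w_speech * h.speech_score
                                         + w_gesture * h.gesture_score)

class InstructionFSM:
    """Minimal dialog state machine: IDLE -> CONFIRM -> GRASP."""
    def __init__(self):
        self.state = "IDLE"
        self.target = None

    def on_instruction(self, hypotheses):
        if self.state == "IDLE" and hypotheses:
            self.target = fuse(hypotheses)
            self.state = "CONFIRM"
            return f"Do you mean the {self.target.name}?"

    def on_confirmation(self, confirmed):
        if self.state == "CONFIRM":
            self.state = "GRASP" if confirmed else "IDLE"
            return "Grasping." if confirmed else "Please repeat the instruction."

fsm = InstructionFSM()
print(fsm.on_instruction([ObjectHypothesis("red cube", (0.2, 0.4), 0.9, 0.7),
                          ObjectHypothesis("blue ball", (0.5, 0.1), 0.3, 0.8)]))
print(fsm.on_confirmation(True))
```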


International Conference on Robotics and Automation | 2010

The Bielefeld anthropomorphic robot head “Flobi”

Ingo Lütkebohle; Frank Hegel; Simon Schulz; Matthias Hackel; Britta Wrede; Sven Wachsmuth; Gerhard Sagerer

A robot's head is important both for directional sensors and, in human-directed robotics, as the single most visible interaction interface. However, designing a robot's head faces contradicting requirements when integrating powerful sensing with social expression. Further, reactions of the general public show that current head designs often cause negative user reactions and distract from the functional capabilities.


International Conference on Robotics and Automation | 2009

The curious robot - Structuring interactive robot learning

Ingo Lütkebohle; Julia Peltason; Lars Schillingmann; Britta Wrede; Sven Wachsmuth; Christof Elbrechter; Robert Haschke

If robots are to succeed in novel tasks, they must be able to learn from humans. To improve such human-robot interaction, a system is presented that provides dialog structure and engages the human in an exploratory teaching scenario. Thereby, we specifically target untrained users, who are supported by mixed-initiative interaction using verbal and non-verbal modalities. We present the principles of dialog structuring based on an object learning and manipulation scenario. System development follows an interactive evaluation approach, and we present both an extensible, event-based interaction architecture to realize mixed-initiative interaction and evaluation results based on a video study of the system. We show that users benefit from the provided dialog structure, resulting in predictable and successful human-robot interaction.
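
As a hypothetical illustration of an event-based architecture for mixed-initiative interaction (not the actual system or middleware described in the paper), the following sketch lets components communicate over a tiny publish/subscribe bus, so that either a robot-initiated curiosity event or a human utterance can drive the dialog.

```python
from collections import defaultdict

class EventBus:
    """Minimal publish/subscribe bus: components react to named events."""
    def __init__(self):
        self.handlers = defaultdict(list)

    def subscribe(self, event_type, handler):
        self.handlers[event_type].append(handler)

    def publish(self, event_type, payload=None):
        for handler in self.handlers[event_type]:
            handler(payload)

bus = EventBus()

# Robot-initiated: an unknown object triggers a clarification question.
bus.subscribe("object.unknown",
              lambda obj: bus.publish("dialog.say", f"What is this {obj}?"))
# Human-initiated: a spoken label is stored as a learning event.
bus.subscribe("speech.label",
              lambda label: bus.publish("memory.store", label))
bus.subscribe("dialog.say", lambda text: print("ROBOT:", text))
bus.subscribe("memory.store", lambda label: print("LEARNED:", label))

bus.publish("object.unknown", "yellow object")   # robot takes initiative
bus.publish("speech.label", "that is a banana")  # human takes initiative
```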


Computational Intelligence in Robotics and Automation | 2001

An integrated system for cooperative man-machine interaction

Christian Bauckhage; Gernot A. Fink; Jannik Fritsch; F. Kummert; Frank Lömker; Gerhard Sagerer; Sven Wachsmuth

To establish robotic applications in human environments such as offices or private homes, robotic systems must be instructable by ordinary users in a natural way. In interpersonal communication humans usually apply different kinds of sensory information and are capable of integrating all perceptual cues fast and consistently. Additionally, knowledge acquired during the communication process is directly used to resolve ambiguities. As a step towards realizing similar capabilities in automatic devices, this paper presents an integrated system combining automatic speech processing and image understanding. The system is intended to be an intelligent interface for a robot which manipulates objects in its surroundings according to the instructions of a human. The enhanced capabilities necessary for carrying out a multimodal man-machine dialog are realized by combining statistical and declarative methods for inference and knowledge representation. The effectiveness of this approach is demonstrated using an exemplary dialog from our construction task domain.
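
A much-simplified, hypothetical sketch of how dialog knowledge can resolve ambiguous instructions (this is not the paper's statistical/declarative inference machinery): candidate objects from vision are filtered by the current utterance and, if several remain, by attributes established earlier in the dialog.

```python
# Candidate objects as produced by a (hypothetical) vision module.
scene = [
    {"id": 1, "type": "cube", "color": "red"},
    {"id": 2, "type": "cube", "color": "blue"},
    {"id": 3, "type": "bar",  "color": "red"},
]

def resolve(utterance_attrs, dialog_context, objects):
    """Filter by the current utterance, then by attributes from earlier dialog."""
    candidates = [o for o in objects
                  if all(o.get(k) == v for k, v in utterance_attrs.items())]
    if len(candidates) > 1:
        candidates = [o for o in candidates
                      if all(o.get(k) == v for k, v in dialog_context.items())]
    return candidates

# "Take the cube" is ambiguous; "red" was established earlier in the dialog.
print(resolve({"type": "cube"}, {"color": "red"}, scene))  # -> object 1 only
```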


International Conference on Robotics and Automation | 2009

Laser-based navigation enhanced with 3D time-of-flight data

Fang Yuan; Agnes Swadzba; Roland Philippsen; Orhan Engin; Marc Hanheide; Sven Wachsmuth

Navigation and obstacle avoidance using planar laser scans have matured in robotics over the last decades. They enable robots to move smoothly through highly dynamic and populated spaces, such as people's homes. However, in an unconstrained environment the two-dimensional perceptual space of a fixed-mounted laser is not sufficient to ensure safe navigation. In this paper, we present an approach that combines a fast and reliable motion generation approach with modern 3D capturing techniques using a Time-of-Flight camera. Instead of attempting to implement full 3D motion control, which is computationally more expensive and simply not needed for the targeted scenario of a domestic robot, we introduce a “virtual laser”. The technique fuses real laser measurements and 3D point clouds into a continuous data stream that is fully compatible with, and transparent to, the originally laser-only motion generation. The paper covers the general concept, the necessary extrinsic calibration of two very different types of sensors, and illustrates the benefit with an example of avoiding obstacles that are not perceivable in the original laser scan.
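
The "virtual laser" idea can be illustrated with a simplified, hypothetical sketch (the sensor frames, calibration, and filtering used in the paper are more involved): 3D points from the Time-of-Flight camera are projected onto the laser's scan plane, binned by bearing angle, and merged with the real scan by keeping the minimum range per beam.

```python
import math

def virtual_laser(laser_ranges, angle_min, angle_inc, points_3d,
                  z_min=0.05, z_max=1.8):
    """Fuse a planar laser scan with 3D points into one 'virtual' scan.

    laser_ranges : list of ranges, one per beam
    points_3d    : iterable of (x, y, z) in the laser frame (already calibrated)
    Only points within the robot's height band [z_min, z_max] are considered.
    """
    fused = list(laser_ranges)
    n = len(fused)
    for x, y, z in points_3d:
        if not (z_min <= z <= z_max):
            continue
        angle = math.atan2(y, x)
        beam = int(round((angle - angle_min) / angle_inc))
        if 0 <= beam < n:
            r = math.hypot(x, y)
            fused[beam] = min(fused[beam], r)  # keep the closer obstacle
    return fused

# Example: a table edge at 1.0 m that the low-mounted 2D laser cannot see.
scan = [3.0] * 181                      # 180 deg scan, 1 deg resolution
points = [(1.0, 0.0, 0.75)]             # obstacle above the laser plane
fused = virtual_laser(scan, -math.pi / 2, math.radians(1.0), points)
print(fused[90])                        # -> 1.0 instead of 3.0
```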


International Conference on Computer Vision Systems | 2006

Integration and Coordination in a Cognitive Vision System

Sebastian Wrede; Marc Hanheide; Sven Wachsmuth; Gerhard Sagerer

In this paper, we present a case study that exemplifies general ideas of system integration and coordination. The application field of assistant technology provides an ideal test bed for complex computer vision systems including real-time components, human-computer interaction, dynamic 3-d environments, and information retrieval aspects. In our scenario the user is wearing an augmented reality device that supports her/him in everyday tasks by presenting information that is triggered by perceptual and contextual cues. The system integrates a wide variety of visual functions like localization, object tracking and recognition, action recognition, interactive object learning, etc. We show how different kinds of system behavior are realized using the Active Memory Infrastructure that provides the technical basis for distributed computation and a data- and event-driven integration approach.
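
As a hypothetical miniature of the data- and event-driven integration style described (the real Active Memory Infrastructure is a distributed middleware, not this in-process sketch), components insert documents into a shared memory, and other components registered for matching entries are notified automatically.

```python
class ActiveMemory:
    """Tiny in-process stand-in for an event-driven shared memory."""
    def __init__(self):
        self.documents = []
        self.listeners = []   # (predicate, callback) pairs

    def register(self, predicate, callback):
        self.listeners.append((predicate, callback))

    def insert(self, doc):
        self.documents.append(doc)
        for predicate, callback in self.listeners:
            if predicate(doc):
                callback(doc)

memory = ActiveMemory()
# The display component reacts whenever an object is recognized.
memory.register(lambda d: d.get("type") == "object.recognized",
                lambda d: print("AR display shows:", d["label"]))

# A vision component simply inserts its result; coordination is implicit.
memory.insert({"type": "object.recognized", "label": "coffee mug"})
```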


ACM Transactions on Accessible Computing | 2014

Automatic Task Assistance for People with Cognitive Disabilities in Brushing Teeth - A User Study with the TEBRA System

Christian Peters; Thomas Hermann; Sven Wachsmuth; Jesse Hoey

People with cognitive disabilities such as dementia and intellectual disabilities tend to have problems in coordinating steps in the execution of Activities of Daily Living (ADLs) due to limited capabilities in cognitive functioning. To successfully perform ADLs, these people are reliant on the assistance of human caregivers. This leads to a decrease of independence for care recipients and imposes a high burden on caregivers. Assistive Technology for Cognition (ATC) aims to compensate for decreased cognitive functions. ATC systems provide automatic assistance in task execution by delivering appropriate prompts which enable the user to perform ADLs without any assistance of a human caregiver. This leads to an increase of the user's independence and relieves the caregiver's burden. In this article, we describe the design, development, and evaluation of a novel ATC system. The TEBRA (TEeth BRushing Assistance) system supports people with moderate cognitive disabilities in the execution of brushing teeth. A main requirement for the acceptance of ATC systems is context awareness: explicit feedback from the user is not necessary to provide appropriate assistance. Furthermore, an ATC system needs to handle spatial and temporal variance in the execution of behaviors, such as different movement characteristics and different velocities. The TEBRA system handles spatial variance in a behavior recognition component based on a Bayesian network classifier. A dynamic timing model deals with temporal variance by adapting to different velocities of users during a trial. We evaluate a fully functioning prototype of the TEBRA system in a study with people with cognitive disabilities. The main aim of the study is to analyze the technical performance of the system and the user's behavior in the interaction with the system with regard to the main hypothesis: is the TEBRA system able to increase the user's independence in the execution of brushing teeth?
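
The dynamic timing model can be illustrated with a small hypothetical sketch (the update rule and parameters are illustrative, not those of TEBRA): the expected duration of each task step is adapted to the user's observed pace, and a prompt is only issued once the adapted time budget is clearly exceeded.

```python
class DynamicTiming:
    """Adapt per-step time budgets to the user's pace (exponential smoothing)."""
    def __init__(self, expected, alpha=0.3, slack=1.5):
        self.expected = dict(expected)  # step -> expected duration in seconds
        self.alpha = alpha              # adaptation rate
        self.slack = slack              # tolerance before prompting

    def observe(self, step, duration):
        # Blend the observed duration into the expectation for this step.
        self.expected[step] = ((1 - self.alpha) * self.expected[step]
                               + self.alpha * duration)

    def should_prompt(self, step, elapsed):
        return elapsed > self.slack * self.expected[step]

timing = DynamicTiming({"paste_on_brush": 10.0, "brush_teeth": 60.0})
timing.observe("paste_on_brush", 18.0)               # this user is slower
print(timing.should_prompt("paste_on_brush", 15.0))  # False: budget adapted
```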


International Conference on Computer Vision | 2007

Learning Structured Appearance Models from Captioned Images of Cluttered Scenes

Michael Jamieson; Afsaneh Fazly; Sven J. Dickinson; Suzanne Stevenson; Sven Wachsmuth

Given an unstructured collection of captioned images of cluttered scenes featuring a variety of objects, our goal is to learn both the names and appearances of the objects. Only a small number of local features within any given image are associated with a particular caption word. We describe a connected graph appearance model where vertices represent local features and edges encode spatial relationships. We use the repetition of feature neighborhoods across training images and a measure of correspondence with caption words to guide the search for meaningful feature configurations. We demonstrate improved results on a dataset to which an unstructured object model was previously applied. We also apply the new method to a more challenging collection of captioned images from the Web, detecting and annotating objects within highly cluttered realistic scenes.
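
A much-simplified, hypothetical sketch of a structured appearance model of this kind (the paper's model and search procedure are more elaborate): vertices are local features, edges encode pairwise spatial relations, and an image is scored by how many vertices and spatially consistent edges can be matched.

```python
import math

def match_model(model, features, dist_tol=0.2):
    """Greedy matching of a graph appearance model against image features.

    model    : {"vertices": {name: descriptor}, "edges": [(a, b, distance)]}
    features : {name: (descriptor, (x, y))} detected local features
    Returns a score counting matched vertices and spatially consistent edges.
    """
    # Match each model vertex to an unused image feature with the same descriptor.
    assignment = {}
    for v, desc in model["vertices"].items():
        for f, (fdesc, pos) in features.items():
            if fdesc == desc and f not in assignment.values():
                assignment[v] = f
                break
    score = len(assignment)
    # Reward edges whose observed distance agrees with the model.
    for a, b, d in model["edges"]:
        if a in assignment and b in assignment:
            pa = features[assignment[a]][1]
            pb = features[assignment[b]][1]
            if abs(math.dist(pa, pb) - d) <= dist_tol * d:
                score += 1
    return score

model = {"vertices": {"v1": "corner", "v2": "blob"}, "edges": [("v1", "v2", 1.0)]}
features = {"f1": ("corner", (0.0, 0.0)), "f2": ("blob", (1.05, 0.0))}
print(match_model(model, features))  # -> 3 (two vertices + one consistent edge)
```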


Robot and Human Interactive Communication | 2007

Classes of Applications for Social Robots: A User Study

Frank Hegel; Manja Lohse; Agnes Swadzba; Sven Wachsmuth; Katharina J. Rohlfing; Britta Wrede

The paper introduces an online user study on applications for social robots with 127 participants. The potential users proposed 570 application scenarios based on the appearance and functionality of four robots presented (AIBO, BARTHOC, BIRON, iCat). The items were grouped into 13 categories, which are interpreted and discussed by means of four dimensions: public vs. private use, intensity of interaction, complexity of the interaction model, and functional vs. human-like appearance. The interpretation led to three classes of applications for social robots according to the degree of social interaction: (1) Specialized Applications, where the robot has to perform clearly defined tasks which are delegated by a user, (2) Public Applications, which are directed at communication with many users, and (3) Individual Applications, which require a highly elaborated social model to handle a variety of situations with few people.


International Journal of Social Robotics | 2011

How Can I Help? - Spatial Attention Strategies for a Receptionist Robot

Patrick Holthaus; Karola Pitsch; Sven Wachsmuth

Social interaction between humans takes place in the spatial environment on a daily basis. We occupy space for ourselves and respect the dynamics of spaces that are occupied by others. In human-robot interaction, spatial models are commonly used for structuring relatively far-away interactions or passing-by scenarios. This work, instead, focuses on the transition between distant and close communication for opening an interaction. We applied a spatial model to a humanoid robot and implemented an attention system that is connected to it. The resulting behaviors have been verified in an online video study. The questionnaire revealed that these behaviors are applicable and result in a robot that is perceived as more interested in the human and that shows its attention and intentions earlier and to a higher degree than other strategies.
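
A minimal hypothetical sketch of a distance-based attention strategy of this kind (the zone boundaries and behaviors are illustrative, not the ones evaluated in the paper): the robot's signalled attention depends on which spatial zone the approaching person currently occupies.

```python
def attention_behavior(distance_m):
    """Map the interlocutor's distance to a (hypothetical) attention behavior."""
    if distance_m > 3.6:        # public zone: only peripheral awareness
        return "glance"
    elif distance_m > 1.2:      # social zone: signal openness for interaction
        return "turn head and track person"
    else:                       # personal zone: open the interaction
        return "face person and greet"

for d in (5.0, 2.0, 0.8):
    print(f"{d} m -> {attention_behavior(d)}")
```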
