Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Matthias Scheutz is active.

Publication


Featured research published by Matthias Scheutz.


international conference on robotics and automation | 2009

What to do and how to do it: Translating natural language directives into temporal and dynamic logic representation for goal management and action execution

Juraj Dzifcak; Matthias Scheutz; Chitta Baral; Paul W. Schermerhorn

Robots that can be given instructions in spoken language need to be able to parse a natural language utterance quickly, determine its meaning, generate a goal representation from it, check whether the new goal conflicts with existing goals, and if acceptable, produce an action sequence to achieve the new goal (ideally being sensitive to the existing goals). In this paper, we describe an integrated robotic architecture that can achieve the above steps by translating natural language instructions incrementally and simultaneously into formal logical goal description and action languages, which can be used both to reason about the achievability of a goal as well as to generate new action scripts to pursue the goal. We demonstrate the implementation of our approach on a robot taking spoken natural language instructions in an office environment.
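The pipeline the abstract describes — parse the utterance, build a goal representation, check it against existing goals, and if acceptable produce an action sequence — can be illustrated with a toy sketch. The directive grammar, goal encoding, and conflict rule below are invented for illustration; the paper's system translates into formal temporal and dynamic logic, not this simplified form.

```python
# Toy sketch of an instruction-to-goal pipeline: parse, check for goal
# conflicts, then plan. All representations here are illustrative only.

def parse_directive(utterance):
    """Extract an (action, target) goal from a simple 'go to X' directive."""
    words = utterance.lower().split()
    if words[:2] == ["go", "to"]:
        return ("at", " ".join(words[2:]))
    raise ValueError("unparsable directive: " + utterance)

def conflicts(new_goal, existing_goals):
    """Two 'at' goals conflict if they name different locations."""
    return any(g[0] == new_goal[0] and g[1] != new_goal[1]
               for g in existing_goals)

def plan(goal):
    """Produce a trivial action script for an accepted 'at' goal."""
    return ["navigate_to(%s)" % goal[1], "announce_arrival()"]

existing = [("at", "office")]
goal = parse_directive("go to office")
if not conflicts(goal, existing):
    print(plan(goal))  # action script pursuing the accepted goal
```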


human-robot interaction | 2008

Robot social presence and gender: do females view robots differently than males?

Paul W. Schermerhorn; Matthias Scheutz; Charles R. Crowell

Social-psychological processes in humans will play an important role in long-term human-robot interactions. This study investigates people's perceptions of social presence in robots during (relatively) short interactions. Findings indicate that males tend to think of the robot as more human-like and accordingly show some evidence of “social facilitation” on an arithmetic task as well as more socially desirable responding on a survey administered by a robot. In contrast, females saw the robot as more machine-like, exhibited less socially desirable responding to the robot's survey, and were not socially facilitated by the robot while engaged in the arithmetic tasks. Various alternative accounts of these findings are explored and the implications of these results for future work are discussed.


Autonomous Robots | 2007

First steps toward natural human-like HRI

Matthias Scheutz; Paul W. Schermerhorn; James F. Kramer; David Anderson

Natural human-like human-robot interaction (NHL-HRI) requires the robot to be skilled both at recognizing and producing many subtle human behaviors, often taken for granted by humans. We suggest a rough division of these requirements for NHL-HRI into three classes of properties: (1) social behaviors, (2) goal-oriented cognition, and (3) robust intelligence, and present the novel DIARC architecture for complex affective robots for human-robot interaction, which aims to meet some of those requirements. We briefly describe the functional properties of DIARC and its implementation in our ADE system. Then we report results from human subject evaluations in the laboratory as well as our experiences with the robot running ADE at the 2005 AAAI Robot Competition in the Open Interaction Event and Robot Exhibition.


human-robot interaction | 2007

Incremental natural language processing for HRI

Timothy Brick; Matthias Scheutz

Robots that interact with humans face-to-face using natural language need to be responsive to the way humans use language in those situations. We propose a psychologically-inspired natural language processing system for robots which performs incremental semantic interpretation of spoken utterances, integrating tightly with the robot's perceptual and motor systems.
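Incremental interpretation — updating the meaning after every word rather than waiting for the end of the utterance — can be caricatured in a few lines. The tiny lexicon and the partial semantic frame below are invented for illustration and are not the paper's representation:

```python
# Toy word-by-word (incremental) interpreter: each new word immediately
# updates a partial semantic frame instead of waiting for utterance end.
# The lexicon and frame slots are invented for this illustration.

LEXICON = {
    "pick": {"action": "grasp"},
    "up": {},
    "the": {},
    "red": {"color": "red"},
    "block": {"object": "block"},
}

def interpret_incrementally(utterance):
    frame = {}
    history = []  # snapshot of the partial frame after each word
    for word in utterance.lower().split():
        frame.update(LEXICON.get(word, {}))
        history.append(dict(frame))
    return frame, history

frame, history = interpret_incrementally("pick up the red block")
# After the fourth word ("red") the color is already available to the
# robot's other systems, before the utterance has finished:
assert history[3] == {"action": "grasp", "color": "red"}
```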


human-robot interaction | 2006

The utility of affect expression in natural language interactions in joint human-robot tasks

Matthias Scheutz; Paul W. Schermerhorn; James F. Kramer

Recognizing and responding to human affect is important in collaborative tasks in joint human-robot teams. In this paper we present an integrated affect and cognition architecture for HRI and report results from an experiment with this architecture that shows that expressing affect and responding to human affect with affect expressions can significantly improve team performance in a joint human-robot task.


intelligent robots and systems | 2004

Fast, reliable, adaptive, bimodal people tracking for indoor environments

Matthias Scheutz; John McRaven; Gyorgy Cserey

We present a real-time system for a mobile robot that can reliably detect and track people in uncontrolled indoor environments. The system uses a combination of leg detection based on distance information from a laser range sensor and visual face detection based on an analogical algorithm implemented on specialized hardware (the CNN universal machine). Results from tests in a variety of environments with different lighting conditions, a different number of appearing and disappearing people, and different obstacles are reported to demonstrate that the system can find and subsequently track several, possibly moving, people simultaneously in indoor environments. Applications of the system include, in particular, service robots for social events.
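The bimodal idea above — confirming a person when the laser-based leg detector and the vision-based face detector agree — can be sketched as a simple proximity fusion. The detection format, distance threshold, and averaging rule are invented for this illustration:

```python
# Toy fusion of two detection modalities: a person hypothesis is
# confirmed when a laser leg detection and a camera face detection
# fall within a small distance of each other (positions in meters).

def fuse_detections(leg_hits, face_hits, max_dist=0.5):
    """Pair leg detections (x, y) with face detections (x, y) by proximity."""
    people = []
    for lx, ly in leg_hits:
        for fx, fy in face_hits:
            if ((lx - fx) ** 2 + (ly - fy) ** 2) ** 0.5 <= max_dist:
                people.append(((lx + fx) / 2, (ly + fy) / 2))
                break
    return people

legs = [(1.0, 2.0), (4.0, 0.0)]
faces = [(1.2, 2.1)]  # only the first person's face is visible
print(fuse_detections(legs, faces))  # one confirmed person near (1.1, 2.05)
```

A real system would of course track confirmed hypotheses over time and tolerate either modality dropping out briefly; this sketch only shows the cross-modal confirmation step.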


I-perception | 2011

A mismatch in the human realism of face and voice produces an uncanny valley

Wade J. Mitchell; Kevin A Szerszen; Amy Shirong Lu; Paul W. Schermerhorn; Matthias Scheutz; Karl F. MacDorman

The uncanny valley has become synonymous with the uneasy feeling of viewing an animated character or robot that looks imperfectly human. Although previous uncanny valley experiments have focused on relations among a character's visual elements, the current experiment examines whether a mismatch in the human realism of a character's face and voice causes it to be evaluated as eerie. The results support this hypothesis.


human factors in computing systems | 2012

Brainput: enhancing interactive systems with streaming fnirs brain input

Erin Treacy Solovey; Paul W. Schermerhorn; Matthias Scheutz; Angelo Sassaroli; Sergio Fantini; Robert J. K. Jacob

This paper describes the Brainput system, which learns to identify brain activity patterns occurring during multitasking. It provides a continuous, supplemental input stream to an interactive human-robot system, which uses this information to modify its behavior to better support multitasking. This paper demonstrates that we can use non-invasive methods to detect signals coming from the brain that users naturally and effortlessly generate while using a computer system. If used with care, this additional information can lead to systems that respond appropriately to changes in the user's state. Our experimental study shows that Brainput significantly improves several performance metrics, as well as the subjective NASA Task Load Index scores, in a dual-task human-robot activity.


Minds and Machines | 1999

When Physical Systems Realize Functions...

Matthias Scheutz

After briefly discussing the relevance of the notions ‘computation’ and ‘implementation’ for cognitive science, I summarize some of the problems that have been found in their most common interpretations. In particular, I argue that standard notions of computation together with a ‘state-to-state correspondence view of implementation’ cannot overcome difficulties posed by Putnam's Realization Theorem and that, therefore, a different approach to implementation is required. The notion ‘realization of a function’, developed out of physical theories, is then introduced as a replacement for the notional pair ‘computation-implementation’. After gradual refinement, taking practical constraints into account, this notion gives rise to the notion ‘digital system’ which singles out physical systems that could be actually used, and possibly even built.


human-robot interaction | 2010

Robust spoken instruction understanding for HRI

Rehj Cantrell; Matthias Scheutz; Paul W. Schermerhorn; Xuan Wu

Natural human-robot interaction requires different and more robust models of language understanding (NLU) than non-embodied NLU systems. In particular, architectures are required that (1) process language incrementally in order to be able to provide early backchannel feedback to human speakers; (2) use pragmatic contexts throughout the understanding process to infer missing information; and (3) handle the underspecified, fragmentary, or otherwise ungrammatical utterances that are common in spontaneous speech. In this paper, we describe our attempts at developing an integrated natural language understanding architecture for HRI, and demonstrate its novel capabilities using challenging data collected in human-human interaction experiments.
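Point (2) above — using pragmatic context to infer information missing from fragmentary utterances — can be illustrated with a toy resolver. The context store, vocabulary, and slot-filling rule are invented for this illustration and do not reflect the paper's architecture:

```python
# Toy sketch of pragmatic gap-filling: a fragmentary utterance leaves
# slots empty, and the most recent dialogue context fills them in.
# Vocabulary and slots are invented for this illustration.

def understand(utterance, context):
    """Resolve a possibly fragmentary utterance against dialogue context."""
    words = utterance.lower().rstrip("?.!").split()
    command = {"action": None, "object": None}
    for w in words:
        if w in ("grab", "push", "lift"):
            command["action"] = w
        elif w in ("box", "ball", "door"):
            command["object"] = w
    # Fill any slot the speaker left out from the surrounding context.
    for slot, value in command.items():
        if value is None:
            command[slot] = context.get(slot)
    return command

context = {"action": "grab", "object": "box"}  # from the previous exchange
print(understand("the ball", context))  # fragment: action inferred as "grab"
```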

Collaboration


Dive into Matthias Scheutz's collaborations.

Top Co-Authors


Paul W. Schermerhorn

Indiana University Bloomington

View shared research outputs