Publications

Featured research published by Paul W. Schermerhorn.


International Conference on Robotics and Automation | 2009

What to do and how to do it: Translating natural language directives into temporal and dynamic logic representation for goal management and action execution

Juraj Dzifcak; Matthias Scheutz; Chitta Baral; Paul W. Schermerhorn

Robots that can be given instructions in spoken language need to be able to parse a natural language utterance quickly, determine its meaning, generate a goal representation from it, check whether the new goal conflicts with existing goals, and if acceptable, produce an action sequence to achieve the new goal (ideally being sensitive to the existing goals). In this paper, we describe an integrated robotic architecture that can achieve the above steps by translating natural language instructions incrementally and simultaneously into formal logical goal description and action languages, which can be used both to reason about the achievability of a goal as well as to generate new action scripts to pursue the goal. We demonstrate the implementation of our approach on a robot taking spoken natural language instructions in an office environment.
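
The abstract outlines a pipeline: parse a directive incrementally, build a temporal-logic goal formula, check it against already-adopted goals, and only then act. Below is a minimal sketch of that flow; the rule table, formula syntax, and conflict check are all invented stand-ins for illustration, not the paper's actual goal language or architecture.

    # Toy sketch of the directive-to-goal pipeline; all names are illustrative.
    import re

    RULES = {
        "go to":   lambda x: f"F(at({x}))",    # F = "eventually" (temporal logic)
        "stay in": lambda x: f"G(at({x}))",    # G = "always"
        "avoid":   lambda x: f"G(!at({x}))",
    }

    def translate(directive):
        """Incrementally map each clause of a directive to a goal formula."""
        for clause in directive.lower().split(" and "):
            for trigger, rule in RULES.items():
                if clause.startswith(trigger):
                    arg = clause[len(trigger):].strip().removeprefix("the ").replace(" ", "_")
                    yield rule(arg)

    def atom_and_sign(formula):
        """Extract (atom, polarity) from e.g. 'G(!at(lab_3))'."""
        m = re.search(r"(!?)(at\([^)]*\))", formula)
        return m.group(2), m.group(1) == ""

    def conflicts(new_goal, existing):
        """A new goal conflicts if an adopted goal asserts the opposite polarity."""
        atom, sign = atom_and_sign(new_goal)
        return any(atom_and_sign(g) == (atom, not sign) for g in existing)

    adopted = ["G(!at(lab_3))"]            # an existing goal: never enter lab 3
    for goal in translate("go to the breakroom and go to the lab 3"):
        if conflicts(goal, adopted):
            print(goal, "-> rejected, conflicts with existing goals")
        else:
            adopted.append(goal)
            print(goal, "-> adopted")

Here the second clause is rejected because an adopted goal asserts the opposite polarity over the same atom; the paper's architecture performs this kind of reasoning over far richer temporal and dynamic logic representations.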


Human-Robot Interaction | 2008

Robot social presence and gender: do females view robots differently than males?

Paul W. Schermerhorn; Matthias Scheutz; Charles R. Crowell

Social-psychological processes in humans will play an important role in long-term human-robot interactions. This study investigates people's perceptions of social presence in robots during (relatively) short interactions. Findings indicate that males tend to think of the robot as more human-like and accordingly show some evidence of “social facilitation” on an arithmetic task as well as more socially desirable responding on a survey administered by a robot. In contrast, females saw the robot as more machine-like, exhibited less socially desirable responding to the robot's survey, and were not socially facilitated by the robot while engaged in the arithmetic tasks. Various alternative accounts of these findings are explored and the implications of these results for future work are discussed.


Autonomous Robots | 2007

First steps toward natural human-like HRI

Matthias Scheutz; Paul W. Schermerhorn; James F. Kramer; David Anderson

Natural human-like human-robot interaction (NHL-HRI) requires the robot to be skilled both at recognizing and producing many subtle human behaviors, often taken for granted by humans. We suggest a rough division of these requirements for NHL-HRI into three classes of properties: (1) social behaviors, (2) goal-oriented cognition, and (3) robust intelligence, and present the novel DIARC architecture for complex affective robots for human-robot interaction, which aims to meet some of those requirements. We briefly describe the functional properties of DIARC and its implementation in our ADE system. Then we report results from human subject evaluations in the laboratory as well as our experiences with the robot running ADE at the 2005 AAAI Robot Competition in the Open Interaction Event and Robot Exhibition.


Human-Robot Interaction | 2006

The utility of affect expression in natural language interactions in joint human-robot tasks

Matthias Scheutz; Paul W. Schermerhorn; James F. Kramer

Recognizing and responding to human affect is important in collaborative tasks in joint human-robot teams. In this paper we present an integrated affect and cognition architecture for HRI and report results from an experiment with this architecture that shows that expressing affect and responding to human affect with affect expressions can significantly improve team performance in a joint human-robot task.
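
As a rough illustration of the detect-affect / express-affect loop the abstract describes, here is a minimal sketch; the stress cue and the response policy are invented stand-ins, not the paper's integrated affect and cognition architecture.

    def detect_stress(utterance):
        """Toy stand-in for prosody-based affect detection."""
        return utterance.isupper() or utterance.endswith("!")

    def respond(utterance):
        """Match the robot's expressed affect to the detected human affect."""
        if detect_stress(utterance):
            return "(calm, reassuring tone) Understood. Transmitting the data now."
        return "(neutral tone) Transmitting the data."

    for u in ["please send the data", "SEND THE DATA NOW!"]:
        print(repr(u), "->", respond(u))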


i-Perception | 2011

A mismatch in the human realism of face and voice produces an uncanny valley

Wade J. Mitchell; Kevin A. Szerszen; Amy Shirong Lu; Paul W. Schermerhorn; Matthias Scheutz; Karl F. MacDorman

The uncanny valley has become synonymous with the uneasy feeling of viewing an animated character or robot that looks imperfectly human. Although previous uncanny valley experiments have focused on relations among a character's visual elements, the current experiment examines whether a mismatch in the human realism of a character's face and voice causes it to be evaluated as eerie. The results support this hypothesis.


Human Factors in Computing Systems | 2012

Brainput: enhancing interactive systems with streaming fNIRS brain input

Erin Treacy Solovey; Paul W. Schermerhorn; Matthias Scheutz; Angelo Sassaroli; Sergio Fantini; Robert J. K. Jacob

This paper describes the Brainput system, which learns to identify brain activity patterns occurring during multitasking. It provides a continuous, supplemental input stream to an interactive human-robot system, which uses this information to modify its behavior to better support multitasking. This paper demonstrates that we can use non-invasive methods to detect signals coming from the brain that users naturally and effortlessly generate while using a computer system. If used with care, this additional information can lead to systems that respond appropriately to changes in the user's state. Our experimental study shows that Brainput significantly improves several performance metrics, as well as subjective NASA Task Load Index scores, in a dual-task human-robot activity.
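
The general pattern (a continuous classifier output used as a supplemental input stream that a robot system adapts to) can be sketched as below. The signal is simulated and the threshold classifier is an invented stand-in for Brainput's learned model; none of this is the authors' actual code.

    import random
    from collections import deque

    WINDOW = 20  # samples per classification window

    def fnirs_stream():
        """Simulated oxygenation signal; a real system would read the sensor."""
        while True:
            yield random.gauss(0.0, 1.0)

    def classify(window):
        """Stand-in for a learned multitasking detector."""
        mean = sum(window) / len(window)
        return "multitasking" if mean > 0.2 else "focused"

    def robot_policy(state):
        """Adapt the robot's behavior to the operator's cognitive state."""
        if state == "multitasking":
            return "raise autonomy, defer non-urgent requests"
        return "normal operation, ok to ask for input"

    buf = deque(maxlen=WINDOW)
    for t, sample in zip(range(100), fnirs_stream()):
        buf.append(sample)
        if t and t % WINDOW == 0:  # classify once per full window
            state = classify(buf)
            print(f"t={t:3d} state={state:12s} -> {robot_policy(state)}")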


Human-Robot Interaction | 2010

Robust spoken instruction understanding for HRI

Rehj Cantrell; Matthias Scheutz; Paul W. Schermerhorn; Xuan Wu

Natural human-robot interaction requires different and more robust models of natural language understanding (NLU) than those used in non-embodied NLU systems. In particular, architectures are required that (1) process language incrementally in order to provide early backchannel feedback to human speakers; (2) use pragmatic contexts throughout the understanding process to infer missing information; and (3) handle the underspecified, fragmentary, or otherwise ungrammatical utterances that are common in spontaneous speech. In this paper, we describe our attempts at developing an integrated natural language understanding architecture for HRI, and demonstrate its novel capabilities using challenging data collected in human-human interaction experiments.
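
A toy sketch of the three properties, assuming an invented frame lexicon and context store; a real incremental NLU architecture like the one described would use a proper parser rather than this keyword frame filler.

    CONTEXT = {"last_mentioned_object": "the medkit"}   # pragmatic context
    VERBS = {"bring": ["object", "recipient"], "go": ["destination"]}

    def understand_incrementally(tokens):
        """Yield feedback as early as possible, filling a frame token by token."""
        frame, slots = None, {}
        for tok in tokens:
            if frame is None and tok in VERBS:
                frame = tok
                yield f"[backchannel] mm-hmm (recognized a '{tok}' request)"
            elif frame and tok == "it":
                # underspecified referent: resolve from pragmatic context
                empty = [s for s in VERBS[frame] if s not in slots]
                if empty:
                    slots[empty[0]] = CONTEXT["last_mentioned_object"]
            elif frame and tok not in ("to", "the", "a"):
                empty = [s for s in VERBS[frame] if s not in slots]
                if empty:
                    slots[empty[0]] = tok
        yield f"[interpretation] {frame}: {slots}"

    for msg in understand_incrementally("bring it to the commander".split()):
        print(msg)

The backchannel is emitted as soon as the verb frame is recognized, before the utterance is complete, and the fragmentary referent "it" is filled from context rather than rejected.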


ACM Transactions on Intelligent Systems and Technology | 2010

Planning for human-robot teaming in open worlds

Kartik Talamadupula; J. Benton; Subbarao Kambhampati; Paul W. Schermerhorn; Matthias Scheutz

As the number of applications for human-robot teaming continues to rise, there is an increasing need for planning technologies that can guide robots in such teaming scenarios. In this article, we focus on adapting planning technology to Urban Search And Rescue (USAR) with a human-robot team. We start by showing that several aspects of state-of-the-art planning technology, including temporal planning, partial satisfaction planning, and replanning, can be gainfully adapted to this scenario. We then note that human-robot teaming also poses an additional critical challenge: enabling existing planners, which work under closed-world assumptions, to cope with the open worlds that are characteristic of teaming problems such as USAR. In response, we discuss the notion of conditional goals, and describe how we represent and handle a specific class of them called open world quantified goals. Finally, we describe how the planner, and its open world extensions, are integrated into a robot control architecture, and provide an empirical evaluation over USAR experimental runs to establish the effectiveness of the planning components.
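
A minimal sketch of what an open world quantified goal might look like as a data structure, with illustrative field names (not the planner's actual input language): the goal quantifies over a type whose instances are only discovered at run time, and each discovery may instantiate a new soft goal.

    from dataclasses import dataclass, field

    @dataclass
    class OpenWorldQuantifiedGoal:
        quantified_type: str   # e.g. quantify over every "room"
        condition: str         # fact that must hold for an instance
        goal_template: str     # goal instantiated per matching object
        reward: float          # soft-goal reward for each achievement
        instances: list = field(default_factory=list)

        def on_discovery(self, obj, facts):
            """Called when sensing reports a new object of the quantified type."""
            if self.condition.format(obj=obj) in facts:
                goal = self.goal_template.format(obj=obj)
                self.instances.append(goal)
                return goal  # hand to the planner as a new soft goal
            return None

    owqg = OpenWorldQuantifiedGoal(
        quantified_type="room",
        condition="has_injured_person({obj})",
        goal_template="reported({obj})",
        reward=100.0,
    )

    # Simulated USAR run: rooms are discovered one by one as the robot explores.
    facts = {"has_injured_person(room7)"}
    for room in ["room3", "room7"]:
        print(room, "->", owqg.on_discovery(room, facts) or "no goal instantiated")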


Human-Robot Interaction | 2012

Tell me when and why to do it!: run-time planner model updates via natural language instruction

Rehj Cantrell; J. Benton; Kartik Talamadupula; Subbarao Kambhampati; Paul W. Schermerhorn; Matthias Scheutz

Robots are currently being used in and developed for critical HRI applications such as search and rescue. In these scenarios, humans operating under changeable and high-stress conditions must communicate effectively with autonomous agents, necessitating that such agents be able to respond quickly and effectively to rapidly changing conditions and expectations. We demonstrate a robot planner that is able to utilize new information, specifically information originating in spoken input produced by human operators. We show that the robot is able to learn the pre- and postconditions of previously unknown action sequences from natural language constructions, and immediately update (1) its knowledge of the current state of the environment, and (2) its underlying world model, in order to produce new and updated plans that are consistent with this new information. While we demonstrate in detail the robot's successful operation with a specific example, we also discuss the dialogue module's inherent scalability, and investigate how well the robot is able to respond to natural language commands from untrained users.
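
A minimal sketch of a run-time model update, assuming a toy dictionary world model rather than the planner's actual representation: a parsed instruction such as "you can open a door if it is unlocked; afterwards it is open" becomes a new operator with pre- and postconditions.

    world_model = {"actions": {}}   # toy stand-in for the planning domain

    def add_action(name, preconditions, effects):
        """Extend the domain with an action taught via natural language."""
        world_model["actions"][name] = {"pre": set(preconditions),
                                        "eff": set(effects)}

    def applicable(name, state):
        return world_model["actions"][name]["pre"] <= state

    def apply_action(name, state):
        return state | world_model["actions"][name]["eff"]

    # Hypothetical output of the dialogue module's parse:
    add_action("open_door",
               preconditions={"unlocked(door)"},
               effects={"open(door)"})

    state = {"unlocked(door)", "at(robot, door)"}
    if applicable("open_door", state):
        state = apply_action("open_door", state)
    print(sorted(state))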


Human-Robot Interaction | 2010

Investigating multimodal real-time patterns of joint attention in an HRI word learning task

Chen Yu; Matthias Scheutz; Paul W. Schermerhorn

Joint attention - the idea that humans make inferences from observable behaviors of other humans by attending to the objects and events that those other humans attend to - has been recognized as a critical component in human-robot interactions. While various HRI studies have shown that having robots behave in ways that support human recognition of joint attention leads to better behavioral outcomes on the human side, no studies have investigated the detailed time course of interactive joint attention processes. In this paper, we present results from an HRI study that investigates the exact time course of human multimodal attentional processes during an HRI word learning task in unprecedented detail. Using novel data analysis techniques, we demonstrate that the temporal details of human attentional behavior are critical for understanding human expectations of joint attention in HRI, and that failing to account for them can force humans to adopt unnatural behaviors.
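
One way to make the "temporal details" concrete: estimate the lag at which the human's gaze stream best aligns with the robot's gaze target by sliding one stream against the other. The sketch below uses invented data and a deliberately simple overlap measure, not the paper's analysis techniques.

    # Invented gaze-target streams, one label per video frame.
    robot_gaze = ["cup", "cup", "ball", "ball", "ball", "cup", "cup", "cup"]
    human_gaze = ["face", "cup", "cup", "ball", "ball", "ball", "cup", "cup"]

    def alignment(lag):
        """Fraction of frames where the human looks where the robot
        looked `lag` frames earlier."""
        pairs = list(zip(robot_gaze, human_gaze[lag:]))
        return sum(r == h for r, h in pairs) / len(pairs)

    best = max(range(4), key=alignment)
    print(f"best alignment at a lag of {best} frame(s), "
          f"overlap={alignment(best):.2f}")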

Collaboration


Dive into Paul W. Schermerhorn's collaborations.

Top Co-Authors

J. Benton

Arizona State University
