Publications


Featured research published by Henny Admoni.


Annual Review of Biomedical Engineering | 2012

Robots for Use in Autism Research

Brian Scassellati; Henny Admoni; Maja J. Matarić

Autism spectrum disorders are a group of lifelong disabilities that affect people's ability to communicate and to understand social cues. Research into applying robots as therapy tools has shown that robots seem to improve engagement and elicit novel social behaviors from people (particularly children and teenagers) with autism. Robot therapy for autism has been explored as one of the first application domains in the field of socially assistive robotics (SAR), which aims to develop robots that assist people with special needs through social interactions. In this review, we discuss the past decade's work in SAR systems designed for autism therapy by analyzing robot design decisions, human-robot interactions, and system evaluations. We conclude by discussing challenges and future trends for this young but rapidly developing research area.


Human-Robot Interaction | 2014

Deliberate delays during robot-to-human handovers improve compliance with gaze communication

Henny Admoni; Anca D. Dragan; Siddhartha S. Srinivasa; Brian Scassellati

As assistive robots become popular in factories and homes, there is a greater need for natural, multi-channel communication during collaborative manipulation tasks. Non-verbal communication such as eye gaze can provide information without overloading more taxing channels like speech. However, certain collaborative tasks may draw attention away from these subtle communication modalities. For instance, robot-to-human handovers are primarily manual tasks, and human attention is therefore drawn to robot hands rather than to robot faces during handovers. In this paper, we show that a simple manipulation of a robot’s handover behavior can significantly increase both awareness of the robot’s eye gaze and compliance with that gaze. When eye gaze communication occurs during the robot’s release of an object, delaying object release until the gaze is finished draws attention back to the robot’s head, which increases conscious perception of the robot’s communication. Furthermore, the handover delay increases people’s compliance with the robot’s communication over a non-delayed handover, even when compliance results in counterintuitive behavior.
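
The delayed-release mechanism lends itself to a few lines of control logic. The sketch below is a minimal illustration of the idea rather than the authors' implementation; the robot interface (look_at, gaze_finished, release_gripper) is hypothetical.

import time

def handover_with_gaze_delay(robot, gaze_target, poll_interval=0.05):
    """Hold the object until the communicative gaze completes.

    Delaying release draws the partner's attention back to the
    robot's head, per the paper's finding. All robot methods used
    here (look_at, gaze_finished, release_gripper) are hypothetical.
    """
    robot.look_at(gaze_target)         # begin eye-gaze communication
    while not robot.gaze_finished():   # deliberate delay: keep holding
        time.sleep(poll_interval)
    robot.release_gripper()            # release only after gaze ends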


Human-Robot Interaction | 2017

Social eye gaze in human-robot interaction: a review

Henny Admoni; Brian Scassellati

This article reviews the state of the art in social eye gaze for human-robot interaction (HRI). It establishes three categories of gaze research in HRI, defined by differences in goals and methods: a human-centered approach, which focuses on people's responses to gaze; a design-centered approach, which addresses the features of robot gaze behavior and appearance that improve interaction; and a technology-centered approach, which is concentrated on the computational tools for implementing social eye gaze in robots. This paper begins with background information about gaze research in HRI and ends with a set of open questions.


Human-Robot Interaction | 2013

Are you looking at me?: perception of robot attention is mediated by gaze type and group size

Henny Admoni; Bradley Hayes; David J. Feil-Seifer; Daniel Ullman; Brian Scassellati

Studies in HRI have shown that people follow and understand robot gaze. However, only a few studies to date have examined the time-course of a meaningful robot gaze, and none have directly investigated what type of gaze is best for eliciting the perception of attention. This paper investigates two types of gaze behaviors-short, frequent glances and long, less frequent stares - to find which behavior is better at conveying a robots visual attention. We describe the development of a programmable research platform from MyKeepon toys, and the use of these programmable robots to examine the effects of gaze type and group size on the perception of attention. In our experiment, participants viewed a group of MyKeepon robots executing random motions, occasionally fixating on various points in the room or directly on the participant. We varied type of gaze fixations within participants and group size between participants. Results show that people are more accurate at recognizing shorter, more frequent fixations than longer, less frequent ones, and that their performance improves as group size decreases. From these results, we conclude that multiple short gazes are preferable for indicating attention over one long gaze, and that the visual search for robot attention is susceptible to group size effects.
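
For intuition, the two conditions can be expressed as fixation schedules that trade duration against frequency. This is an illustrative sketch only; the durations are placeholders, not the study's stimulus parameters.

def fixation_schedule(total_time, fixation_len, gap_len, target="participant"):
    """Build a toy gaze schedule alternating random motion and fixations.

    Short, frequent glances: small fixation_len and gap_len.
    Long, infrequent stares: large fixation_len and gap_len.
    All durations (seconds) are illustrative placeholders.
    """
    schedule, t = [], 0.0
    while t < total_time:
        schedule.append(("random_motion", gap_len))
        schedule.append(("fixate:" + target, fixation_len))
        t += gap_len + fixation_len
    return schedule

short_frequent = fixation_schedule(60.0, fixation_len=1.0, gap_len=3.0)
long_infrequent = fixation_schedule(60.0, fixation_len=5.0, gap_len=15.0)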


International Conference on Multimodal Interfaces | 2014

Data-Driven Model of Nonverbal Behavior for Socially Assistive Human-Robot Interactions

Henny Admoni; Brian Scassellati

Socially assistive robotics (SAR) aims to develop robots that help people through interactions that are inherently social, such as tutoring and coaching. For these interactions to be effective, socially assistive robots must be able to recognize and use nonverbal social cues like eye gaze and gesture. In this paper, we present a preliminary model for nonverbal robot behavior in a tutoring application. Using empirical data from teachers and students in human-human tutoring interactions, the model can be both predictive (recognizing the context of new nonverbal behaviors) and generative (creating new robot nonverbal behaviors based on a desired context) using the same underlying data representation.
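
One way to picture a single representation serving both directions is a joint table over (context, behavior) pairs: conditioning on a behavior gives the predictive direction, conditioning on a context the generative one. A minimal sketch under that assumption; the paper's actual representation is richer than this toy.

from collections import Counter

class NonverbalModel:
    """Toy joint model over (context, behavior) pairs.

    Predictive use: most likely context given an observed behavior.
    Generative use: most likely behavior given a desired context.
    Both directions query the same underlying counts, mirroring the
    paper's idea of one shared representation (this sketch is an
    assumption, not the authors' actual model).
    """
    def __init__(self):
        self.counts = Counter()

    def observe(self, context, behavior):
        self.counts[(context, behavior)] += 1

    def predict_context(self, behavior):
        pairs = [(c, n) for (c, b), n in self.counts.items() if b == behavior]
        return max(pairs, key=lambda p: p[1])[0] if pairs else None

    def generate_behavior(self, context):
        pairs = [(b, n) for (c, b), n in self.counts.items() if c == context]
        return max(pairs, key=lambda p: p[1])[0] if pairs else None

model = NonverbalModel()
model.observe("question", "gaze_at_student")
model.observe("explanation", "point_at_object")
print(model.generate_behavior("question"))   # -> "gaze_at_student"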


International Conference on Robotics and Automation | 2016

Modeling communicative behaviors for object references in human-robot interaction

Henny Admoni; Thomas Weng; Brian Scassellati

This paper presents a model that uses a robot's verbal and nonverbal behaviors to successfully communicate object references to a human partner. This model, which is informed by computer vision, human-robot interaction, and cognitive psychology, simulates how low-level and high-level features of the scene might draw a user's attention. It then selects the most appropriate robot behavior that maximizes the likelihood that a user will understand the correct object reference while minimizing the cost of the behavior. We present a general computational framework for this model, then describe a specific implementation in a human-robot collaboration. Finally, we analyze the model's performance in two human evaluations, one video-based (75 participants) and one in-person (20 participants), and demonstrate that the system predicts the correct behaviors to perform successful object references.
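
The selection step can be read as a utility maximization: choose the behavior that maximizes the chance the user resolves the reference, penalized by behavior cost. A hedged sketch of that trade-off; the scoring values and the behavior set are invented for illustration, not taken from the paper's implementation.

def select_reference_behavior(behaviors, p_understood, cost, weight=1.0):
    """Pick the behavior maximizing understanding minus weighted cost.

    behaviors:    candidate actions, e.g. ["speech", "gaze", "point",
                  "speech+point"] (illustrative set)
    p_understood: maps behavior -> estimated probability the user
                  resolves the correct object (a scene/saliency model
                  in the paper; a plain dict in this sketch)
    cost:         maps behavior -> execution cost
    """
    return max(behaviors,
               key=lambda b: p_understood[b] - weight * cost[b])

behaviors = ["speech", "gaze", "point", "speech+point"]
p = {"speech": 0.5, "gaze": 0.55, "point": 0.8, "speech+point": 0.95}
c = {"speech": 0.1, "gaze": 0.1, "point": 0.35, "speech+point": 0.45}
print(select_reference_behavior(behaviors, p, c))  # -> "speech+point"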


Intelligent Robots and Systems | 2016

Human-robot shared workspace collaboration via hindsight optimization

Stefania Pellegrinelli; Henny Admoni; Shervin Javdani; Siddhartha S. Srinivasa

Our human-robot collaboration research aims to improve the fluency and efficiency of interactions between humans and robots when executing a set of tasks in a shared workspace. During human-robot collaboration, a robot and a user must often complete a disjoint set of tasks that use an overlapping set of objects, without using the same object simultaneously. A key challenge is deciding what task the robot should perform next in order to facilitate fluent and efficient collaboration. Most prior work does so by first predicting the human's intended goal, and then selecting actions given that goal. However, it is often difficult, and sometimes impossible, to infer the human's exact goal in real time, and this serial predict-then-act method is not adaptive to changes in human goals. In this paper, we present a system for inferring a probability distribution over human goals, and producing assistance actions given that distribution in real time. The aim is to minimize the disruption inherent in sharing a workspace. We extend recent work utilizing Partially Observable Markov Decision Processes (POMDPs) for shared autonomy in order to provide assistance without knowing the exact goal. We evaluate our system in a study with 28 participants, and show that our POMDP model outperforms state-of-the-art predict-then-act models by producing fewer human-robot collisions and less human idling time.
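
At its core, the approach keeps a belief over the human's goal and scores each candidate robot action by its expected cost under that belief, rather than committing to a single predicted goal. A simplified sketch of that expectation step; the goal inference and the cost-to-go estimates are stand-ins for the paper's POMDP and hindsight-optimization machinery.

def expected_cost(action, belief, cost_to_go):
    """Expected cost of an action under a belief over human goals.

    belief:      maps goal -> probability (from an inference model)
    cost_to_go:  maps (action, goal) -> estimated remaining cost if
                 the human pursues that goal (in the paper this comes
                 from hindsight optimization; stubbed here)
    """
    return sum(p * cost_to_go[(action, g)] for g, p in belief.items())

def select_action(actions, belief, cost_to_go):
    # Act on the full distribution instead of a single predicted goal,
    # so assistance is useful even when no goal is confidently known.
    return min(actions, key=lambda a: expected_cost(a, belief, cost_to_go))

belief = {"goal_A": 0.5, "goal_B": 0.5}
cost = {("pick_shared", "goal_A"): 5.0, ("pick_shared", "goal_B"): 5.0,
        ("pick_free",   "goal_A"): 2.0, ("pick_free",   "goal_B"): 3.0}
print(select_action(["pick_shared", "pick_free"], belief, cost))  # pick_free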


Robotics and Autonomous Systems | 2010

Action selection and task sequence learning for hybrid dynamical cognitive agents

Eric Aaron; Henny Admoni

As a foundation for action selection and task-sequencing intelligence, the reactive and deliberative subsystems of a hybrid agent can be unified by a single, shared representation of intention. In this paper, we summarize a framework for hybrid dynamical cognitive agents (HDCAs) that incorporates a representation of dynamical intention into both reactive and deliberative structures of a hybrid dynamical system model, and we present methods for learning in these intention-guided agents. The HDCA framework is based on ideas from spreading activation models and belief-desire-intention (BDI) models. Intentions and other cognitive elements are represented as interconnected, continuously varying quantities, employed by both reactive and deliberative processes. HDCA learning methods, such as Hebbian strengthening of links between co-active elements and belief-intention learning of task-specific relationships, modify interconnections among cognitive elements, extending the benefits of reactive intelligence by enhancing high-level task sequencing without additional reliance on or modification of deliberation. We also present demonstrations of simulated robots that learned geographic and domain-specific task relationships in an office environment.
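
The Hebbian piece of the learning method has a standard form: links between cognitive elements that are active together get stronger. A minimal sketch, assuming elements carry continuous activation levels as in the framework; the multiplicative update rule and the rate/threshold values are placeholders, not the paper's exact formulation.

def hebbian_update(weights, activations, rate=0.01, threshold=0.5):
    """Strengthen links between co-active cognitive elements.

    weights:     maps (elem_i, elem_j) -> link strength
    activations: maps elem -> continuous activation in [0, 1],
                 matching the framework's continuously varying
                 intentions and beliefs
    Rule form, rate, and threshold are illustrative assumptions.
    """
    for (i, j), w in weights.items():
        if activations[i] > threshold and activations[j] > threshold:
            weights[(i, j)] = w + rate * activations[i] * activations[j]
    return weights

weights = {("intend_deliver", "believe_door_open"): 0.2}
acts = {"intend_deliver": 0.9, "believe_door_open": 0.8}
print(hebbian_update(weights, acts))  # link strengthened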


Human-Robot Interaction | 2016

Robot Nonverbal Behavior Improves Task Performance In Difficult Collaborations

Henny Admoni; Thomas Weng; Bradley Hayes; Brian Scassellati

Nonverbal behaviors increase task efficiency and improve collaboration between people and robots. In this paper, we introduce a model for generating nonverbal behavior and investigate whether the usefulness of nonverbal behaviors changes based on task difficulty. First, we detail a robot behavior model that accounts for top-down and bottom-up features of the scene when deciding when and how to perform deictic references (looking or pointing). Then, we analyze how a robot's deictic nonverbal behavior affects people's performance on a memorization task under differing difficulty levels. We manipulate difficulty in two ways: by adding steps to memorize, and by introducing an interruption. We find that when the task is easy, the robot's nonverbal behavior has little influence over recall and task completion. However, when the task is challenging, because the memorization load is high or because the task is interrupted, a robot's nonverbal behaviors mitigate the negative effects of these challenges, leading to higher recall accuracy and lower completion times. In short, nonverbal behavior may be even more valuable for difficult collaborations than for easy ones.
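
The decision of when and how to produce a deictic reference can be caricatured as comparing bottom-up scene salience with top-down task relevance: cue the partner only when the target does not already stand out. A toy sketch under those assumptions; the scores, thresholds, and behavior tiers are invented for illustration, not the paper's model.

def choose_deictic_behavior(bottom_up, top_down, ambiguity_threshold=0.3):
    """Decide whether and how to refer to an object nonverbally.

    bottom_up: scene salience of the target in [0, 1]
               (color, motion, contrast)
    top_down:  task relevance of the target in [0, 1]
    All values and cutoffs are illustrative assumptions.
    """
    ambiguity = top_down - bottom_up  # relevant but not salient -> help
    if ambiguity <= 0:
        return "none"        # target already stands out; no cue needed
    elif ambiguity < ambiguity_threshold:
        return "gaze"        # mild ambiguity: a glance suffices
    else:
        return "gaze+point"  # high ambiguity: strongest deictic cue

print(choose_deictic_behavior(bottom_up=0.2, top_down=0.9))  # gaze+point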


The International Journal of Robotics Research | 2018

Shared autonomy via hindsight optimization for teleoperation and teaming

Shervin Javdani; Henny Admoni; Stefania Pellegrinelli; Siddhartha S. Srinivasa; J. Andrew Bagnell

In shared autonomy, a user and an autonomous system work together to achieve shared goals. To collaborate effectively, the autonomous system must know the user’s goal. As such, most prior works follow a predict-then-act model, first predicting the user’s goal with high confidence, then assisting given that goal. Unfortunately, confidently predicting the user’s goal may not be possible until they have nearly achieved it, causing predict-then-act methods to provide little assistance. However, the system can often provide useful assistance even when confidence for any single goal is low (e.g. move towards multiple goals). In this work, we formalize this insight by modeling shared autonomy as a partially observable Markov decision process (POMDP), providing assistance that minimizes the expected cost-to-go with an unknown goal. As solving this POMDP optimally is intractable, we use hindsight optimization to approximate. We apply our framework to both shared-control teleoperation and human–robot teaming. Compared with predict-then-act methods, our method achieves goals faster, requires less user input, decreases user idling time, and results in fewer user–robot collisions.
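
In the teleoperation setting, the belief over goals is updated continuously from the user's own commands, so assistance can begin before any goal is confidently predicted. A simplified sketch of one such belief update; the observation model here (how well an input points at a goal) is an assumption of the sketch, while the paper minimizes expected cost-to-go over this belief via hindsight optimization.

def update_belief(belief, user_input, likelihood):
    """Bayes update of the goal distribution from one user command.

    belief:      maps goal -> prior probability
    likelihood:  maps (user_input, goal) -> P(input | goal), e.g. how
                 well the input direction points at the goal (this
                 observation model is an illustrative assumption)
    """
    posterior = {g: p * likelihood[(user_input, g)] for g, p in belief.items()}
    z = sum(posterior.values())
    return {g: p / z for g, p in posterior.items()}

belief = {"cup": 0.5, "plate": 0.5}
lik = {("move_left", "cup"): 0.8, ("move_left", "plate"): 0.2}
print(update_belief(belief, "move_left", lik))  # cup becomes more likely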

Collaboration


Dive into Henny Admoni's collaborations.

Top Co-Authors

Rosario Scalise, Carnegie Mellon University
Shen Li, Carnegie Mellon University