Publication


Featured research published by Sean Andrist.


Human-Robot Interaction | 2014

Conversational gaze aversion for humanlike robots

Sean Andrist; Xiang Zhi Tan; Michael Gleicher; Bilge Mutlu

Gaze aversion—the intentional redirection away from the face of an interlocutor—is an important nonverbal cue that serves a number of conversational functions, including signaling cognitive effort, regulating a conversation’s intimacy level, and managing the conversational floor. In prior work, we developed a model of how gaze aversions are employed in conversation to perform these functions. In this paper, we extend the model to apply to conversational robots, enabling them to achieve some of these functions in conversations with people. We present a system that addresses the challenges of adapting human gaze aversion movements to a robot with very different affordances, such as a lack of articulated eyes. This system, implemented on the NAO platform, autonomously generates and combines three distinct types of robot head movements with different purposes: face-tracking movements to engage in mutual gaze, idle head motion to increase lifelikeness, and purposeful gaze aversions to achieve conversational functions. The results of a human-robot interaction study with 30 participants show that gaze aversions implemented with our approach are perceived as intentional, and robots can use gaze aversions to appear more thoughtful and effectively manage the conversational floor.

Categories and Subject Descriptors: H.1.2 [Models and Principles]: User/Machine Systems — human factors, software psychology; H.5.2 [Information Interfaces and Presentation]: User Interfaces — evaluation/methodology, user-centered design
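
The abstract names three head-movement types (face tracking for mutual gaze, idle motion for lifelikeness, and purposeful aversions) but gives no implementation detail. As a hedged illustration of how such generators might be combined by a simple controller, here is a minimal Python sketch; all class, method, and parameter names are hypothetical and are not drawn from the paper or the NAO SDK.

```python
# Hypothetical sketch: arbitrating three head-movement generators for a
# conversational robot, loosely following the three movement types named in
# the abstract. Names and timing constants are illustrative assumptions.
import random
import time
from dataclasses import dataclass

@dataclass
class HeadPose:
    yaw: float    # radians, positive is left
    pitch: float  # radians, positive is up

class GazeAversionController:
    def __init__(self, aversion_interval=(4.0, 8.0), aversion_duration=1.5):
        self.aversion_interval = aversion_interval
        self.aversion_duration = aversion_duration
        self._next_aversion = time.time() + random.uniform(*aversion_interval)
        self._aversion_until = 0.0

    def face_tracking(self, face_pose: HeadPose) -> HeadPose:
        # Mutual gaze: orient the head directly toward the detected face.
        return face_pose

    def idle_motion(self) -> HeadPose:
        # Small random perturbation so the head never looks frozen.
        return HeadPose(yaw=0.03 * random.uniform(-1, 1),
                        pitch=0.02 * random.uniform(-1, 1))

    def aversion_offset(self) -> HeadPose:
        # Purposeful aversion: redirect the head away from the face.
        return HeadPose(yaw=random.choice([-0.3, 0.3]), pitch=-0.15)

    def step(self, face_pose: HeadPose) -> HeadPose:
        now = time.time()
        if now >= self._next_aversion:
            # Schedule the next aversion and start the current one.
            self._aversion_until = now + self.aversion_duration
            self._next_aversion = now + random.uniform(*self.aversion_interval)
        base = self.face_tracking(face_pose)
        idle = self.idle_motion()
        if now < self._aversion_until:
            off = self.aversion_offset()
            return HeadPose(base.yaw + off.yaw + idle.yaw,
                            base.pitch + off.pitch + idle.pitch)
        return HeadPose(base.yaw + idle.yaw, base.pitch + idle.pitch)

# Example: one control step while a face is detected slightly to the left.
controller = GazeAversionController()
print(controller.step(HeadPose(yaw=0.1, pitch=0.0)))
```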


Computer Graphics Forum | 2015

A Review of Eye Gaze in Virtual Agents, Social Robotics and HCI: Behaviour Generation, User Interaction and Perception

Kerstin Ruhland; Christopher E. Peters; Sean Andrist; Jeremy B. Badler; Norman I. Badler; Michael Gleicher; Bilge Mutlu; Rachel McDonnell

A person's emotions and state of mind are apparent in their face and eyes. As a Latin proverb states: ‘The face is the portrait of the mind; the eyes, its informers’. This presents a significant challenge for Computer Graphics researchers who generate artificial entities that aim to replicate the movement and appearance of the human eye, which is so important in human–human interactions. This review article provides an overview of the efforts made on tackling this demanding task. As with many topics in computer graphics, a cross‐disciplinary approach is required to fully understand the workings of the eye in the transmission of information to the user. We begin with a discussion of the movement of the eyeballs, eyelids and the head from a physiological perspective and how these movements can be modelled, rendered and animated in computer graphics applications. Furthermore, we present recent research from psychology and sociology that seeks to understand higher level behaviours, such as attention and eye gaze, during the expression of emotion or during conversation. We discuss how these findings are synthesized in computer graphics and can be utilized in the domains of Human–Robot Interaction and Human–Computer Interaction for allowing humans to interact with virtual agents and other artificial entities. We conclude with a summary of guidelines for animating the eye and head from the perspective of a character animator.


Intelligent Virtual Agents | 2013

Conversational Gaze Aversion for Virtual Agents

Sean Andrist; Bilge Mutlu; Michael Gleicher

In conversation, people avert their gaze from one another to achieve a number of conversational functions, including turn-taking, regulating intimacy, and indicating that cognitive effort is being put into planning an utterance. In this work, we enable virtual agents to effectively use gaze aversions to achieve these same functions in conversations with people. We extend existing social science knowledge of gaze aversion by analyzing video data of human dyadic conversations. This analysis yielded precise timings of speaker and listener gaze aversions, enabling us to design gaze aversion behaviors for virtual agents. We evaluated these behaviors for their ability to achieve positive conversational functions in a laboratory experiment with 24 participants. Results show that virtual agents employing gaze aversion are perceived as thinking, are able to elicit more disclosure from human interlocutors, and are able to regulate conversational turn-taking.


Human Factors in Computing Systems | 2015

Look Like Me: Matching Robot Personality via Gaze to Increase Motivation

Sean Andrist; Bilge Mutlu; Adriana Tapus

Socially assistive robots are envisioned to provide social and cognitive assistance where they will seek to motivate and engage people in therapeutic activities. Due to their physicality, robots serve as a powerful technology for motivating people. Prior work has shown that effective motivation requires adaptation to user needs and characteristics, but how robots might successfully achieve such adaptation is still unknown. In this paper, we present work on matching a robot's personality, expressed via its gaze behavior, to that of its users. We confirmed in an online study with 22 participants that the robot's gaze behavior can successfully express either an extroverted or introverted personality. In a laboratory study with 40 participants, we demonstrate the positive effect of personality matching on a user's motivation to engage in a repetitive task. These results have important implications for the design of adaptive robot behaviors in assistive human-robot interaction.


Human-Robot Interaction | 2013

Rhetorical robots: making robots more effective speakers using linguistic cues of expertise

Sean Andrist; Erin Spannan; Bilge Mutlu

Robots hold great promise as informational assistants such as museum guides, information booth attendants, concierges, shopkeepers, and more. In such positions, people will expect them to be experts on their area of specialty. Not only will robots need to be experts, but they will also need to communicate their expertise effectively in order to raise trust and compliance with the information that they provide. This paper draws upon literature in psychology and linguistics to examine cues in speech that would help robots not only to provide expert knowledge, but also to deliver this knowledge effectively. To test the effectiveness of these cues, we conducted an experiment in which participants created a plan to tour a fictional city based on suggestions by two robots. We manipulated the landmark descriptions along two dimensions of expertise: practical knowledge and rhetorical ability. We then measured which locations the participants chose to include in the tour based on their descriptions. Our results showed that participants were strongly influenced by both practical knowledge and rhetorical ability; they included more landmarks described using expert linguistic cues than those described using simple facts. Even when the overall level of practical knowledge was high, an increase in rhetorical ability resulted in significant improvements. These results have implications for the development of effective dialogue strategies for informational robots.


KSII Transactions on Internet and Information Systems | 2015

Gaze and Attention Management for Embodied Conversational Agents

Tomislav Pejsa; Sean Andrist; Michael Gleicher; Bilge Mutlu

To facilitate natural interactions between humans and embodied conversational agents (ECAs), we need to endow the latter with the same nonverbal cues that humans use to communicate. Gaze cues in particular are integral in mechanisms for communication and management of attention in social interactions, which can trigger important social and cognitive processes, such as establishment of affiliation between people or learning new information. The fundamental building blocks of gaze behaviors are gaze shifts: coordinated movements of the eyes, head, and body toward objects and information in the environment. In this article, we present a novel computational model for gaze shift synthesis for ECAs that supports parametric control over coordinated eye, head, and upper body movements. We employed the model in three studies with human participants. In the first study, we validated the model by showing that participants are able to interpret the agent’s gaze direction accurately. In the second and third studies, we showed that by adjusting the participation of the head and upper body in gaze shifts, we can control the strength of the attention signals conveyed, thereby strengthening or weakening their social and cognitive effects. The second study shows that manipulation of eye-head coordination in gaze enables an agent to convey more information or establish stronger affiliation with participants in a teaching task, while the third study demonstrates how manipulation of upper body coordination enables the agent to communicate increased interest in objects in the environment.
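
As a rough, hypothetical illustration of what parametric control over eye, head, and upper-body participation in a gaze shift could look like, the sketch below splits a single horizontal gaze-shift angle across the three body parts. The parameter names and the simple linear split are assumptions made for illustration, not the model described in the article.

```python
# Illustrative sketch only: distribute a desired gaze-shift amplitude across
# eyes, head, and torso according to "participation" parameters.
def split_gaze_shift(target_angle_deg: float,
                     head_participation: float = 0.5,
                     torso_participation: float = 0.2) -> dict:
    """Return per-body-part rotation contributions for one gaze shift.

    target_angle_deg: horizontal angle from current gaze direction to target.
    head_participation / torso_participation: 0.0 (eyes only) to 1.0.
    """
    torso = target_angle_deg * torso_participation
    head = (target_angle_deg - torso) * head_participation
    eyes = target_angle_deg - torso - head  # eyes cover whatever remains
    return {"eyes": eyes, "head": head, "torso": torso}

# A larger head/torso share conveys a stronger attention signal:
print(split_gaze_shift(40.0, head_participation=0.2, torso_participation=0.0))
print(split_gaze_shift(40.0, head_participation=0.9, torso_participation=0.5))
```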


Simulation, Modeling, and Programming for Autonomous Robots | 2010

Decision and coordination strategies for RoboCup rescue agents

Maitreyi Nanjanath; Alexander J. Erlandson; Sean Andrist; Aravind Ragipindi; Abdul A. Mohammed; Ankur S. Sharma; Maria L. Gini

We describe the decision processes of agents in the RoboCup Rescue Agent Simulation. Agents have to rescue civilians trapped in buildings and extinguish fires in a city which has been struck by an earthquake. Lack of information and limited communications hamper the rescue process. We examine how effective our strategies and algorithms are and compare their performance against the baseline agents and agents which competed in last year's competition.


Proceedings of the 4th Workshop on Eye Gaze in Intelligent Human Machine Interaction | 2012

A head-eye coordination model for animating gaze shifts of virtual characters

Sean Andrist; Tomislav Pejsa; Bilge Mutlu; Michael Gleicher

We present a parametric, computational model of head-eye coordination that can be used in the animation of directed gaze shifts for virtual characters. The model is based on research in human neurophysiology. It incorporates control parameters that allow for adapting gaze shifts to the characteristics of the environment, the gaze targets, and the idiosyncratic behavioral attributes of the virtual character. A user study confirms that the model communicates gaze targets as effectively as real humans do, while being preferred subjectively to state-of-the-art models.


Frontiers in Psychology | 2015

Look together: analyzing gaze coordination with epistemic network analysis

Sean Andrist; Wesley Collier; Michael Gleicher; Bilge Mutlu; David Williamson Shaffer

When conversing and collaborating in everyday situations, people naturally and interactively align their behaviors with each other across various communication channels, including speech, gesture, posture, and gaze. Having access to a partner's referential gaze behavior has been shown to be particularly important in achieving collaborative outcomes, but the process in which people's gaze behaviors unfold over the course of an interaction and become tightly coordinated is not well understood. In this paper, we present work to develop a deeper and more nuanced understanding of coordinated referential gaze in collaborating dyads. We recruited 13 dyads to participate in a collaborative sandwich-making task and used dual mobile eye tracking to synchronously record each participant's gaze behavior. We used a relatively new analysis technique—epistemic network analysis—to jointly model the gaze behaviors of both conversational participants. In this analysis, network nodes represent gaze targets for each participant, and edge strengths convey the likelihood of simultaneous gaze to the connected target nodes during a given time-slice. We divided collaborative task sequences into discrete phases to examine how the networks of shared gaze evolved over longer time windows. We conducted three separate analyses of the data to reveal (1) properties and patterns of how gaze coordination unfolds throughout an interaction sequence, (2) optimal time lags of gaze alignment within a dyad at different phases of the interaction, and (3) differences in gaze coordination patterns for interaction sequences that lead to breakdowns and repairs. In addition to contributing to the growing body of knowledge on the coordination of gaze behaviors in joint activities, this work has implications for the design of future technologies that engage in situated interactions with human users.
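
The abstract describes the network structure concretely enough to sketch: nodes are gaze targets for each participant, and edge strengths reflect how often two targets are fixated in the same time slice. The snippet below is a minimal, hypothetical version of that co-gaze network construction, not the authors' epistemic network analysis code; the gaze-target labels are invented for illustration.

```python
# Minimal sketch: build a co-gaze network from two synchronized streams of
# gaze-target labels. Edge weight = fraction of time slices in which the
# connected targets were fixated simultaneously.
from collections import Counter

def co_gaze_network(gaze_a, gaze_b):
    """gaze_a, gaze_b: per-time-slice gaze-target labels for two participants."""
    pair_counts = Counter()
    n = 0
    for a, b in zip(gaze_a, gaze_b):
        if a is None or b is None:   # skip slices with missing gaze data
            continue
        pair_counts[(f"A:{a}", f"B:{b}")] += 1
        n += 1
    if n == 0:
        return {}
    return {pair: count / n for pair, count in pair_counts.items()}

# Hypothetical gaze-target sequences from a sandwich-making interaction.
gaze_a = ["bread", "bread", "knife", "partner", "knife"]
gaze_b = ["bread", "knife", "knife", "partner", "knife"]
print(co_gaze_network(gaze_a, gaze_b))
```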


Human-Robot Interaction | 2015

Effects of Culture on the Credibility of Robot Speech: A Comparison between English and Arabic

Sean Andrist; Micheline Ziadee; Halim Boukaram; Bilge Mutlu; Majd F. Sakr

As social robots begin to enter our lives as providers of information, assistance, companionship, and motivation, it becomes increasingly important that these robots are capable of interacting effectively with human users across different cultural settings worldwide. A key capability in establishing acceptance and usability is the way in which robots structure their speech to build credibility and express information in a meaningful and persuasive way. Previous work has established that robots can use speech to improve credibility in two ways: expressing practical knowledge and using rhetorical linguistic cues. In this paper, we present two studies that build on prior work to explore the effects of language and cultural context on the credibility of robot speech. In the first study …

Collaboration


Dive into Sean Andrist's collaborations.

Top Co-Authors

Bilge Mutlu, University of Wisconsin-Madison
Michael Gleicher, University of Wisconsin-Madison
Tomislav Pejsa, University of Wisconsin-Madison
David Williamson Shaffer, University of Wisconsin-Madison
Jeremy B. Badler, University of Pennsylvania