Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Bilge Mutlu is active.

Publication


Featured research published by Bilge Mutlu.


IEEE-RAS International Conference on Humanoid Robots | 2006

A Storytelling Robot: Modeling and Evaluation of Human-like Gaze Behavior

Bilge Mutlu; Jodi Forlizzi; Jessica K. Hodgins

Engaging storytelling is a necessary skill for humanoid robots if they are to be used in education and entertainment applications. Storytelling requires that the humanoid robot be aware of its audience and able to direct its gaze in a natural way. In this paper, we explore how human gaze can be modeled and implemented on a humanoid robot to create a natural, human-like behavior for storytelling. Our gaze model integrates data collected from a human storyteller and a discourse structure model developed by Cassell and her colleagues for human-like conversational agents (1994). We used this model to direct the gaze of a humanoid robot, Honda's ASIMO, as he recited a Japanese fairy tale using a pre-recorded human voice. We assessed the efficacy of this gaze algorithm by manipulating the frequency of ASIMO's gaze between two participants and used pre- and post-questionnaires to assess whether participants evaluated the robot more positively and did better on a recall task when ASIMO looked at them more. We found that participants performed significantly better in recalling ASIMO's story when the robot looked at them more. Our results also showed significant differences in how men and women evaluated ASIMO based on the frequency of gaze they received from the robot. Our study adds to the growing evidence that there are many commonalities between human-human communication and human-robot communication.


Human-Robot Interaction | 2008

Robots in organizations: the role of workflow, social, and environmental factors in human-robot interaction

Bilge Mutlu; Jodi Forlizzi

Robots are becoming increasingly integrated into the workplace, impacting organizational structures and processes, and affecting products and services created by these organizations. While robots promise significant benefits to organizations, their introduction poses a variety of design challenges. In this paper, we use ethnographic data collected at a hospital using an autonomous delivery robot to examine how organizational factors affect the way its members respond to robots and the changes engendered by their use. Our analysis uncovered dramatic differences between the medical and post-partum units in how people integrated the robot into their workflow and their perceptions of and interactions with it. Different patient profiles in these units led to differences in workflow, goals, social dynamics, and the use of the physical environment. In medical units, low tolerance for interruptions, a discrepancy between the perceived costs and benefits of using the robot, and breakdowns due to high traffic and clutter in the robot's path meant that the robot negatively affected the workflow and met with staff resistance. In contrast, post-partum units integrated the robot into their workflow and social context. Based on our findings, we provide design guidelines for the development of robots for organizations.


Human Factors in Computing Systems | 2012

Pay attention!: designing adaptive agents that monitor and improve user engagement

Daniel Szafir; Bilge Mutlu

Embodied agents hold great promise as educational assistants, exercise coaches, and team members in collaborative work. These roles require agents to closely monitor the behavioral, emotional, and mental states of their users and provide appropriate, effective responses. Educational agents, for example, will have to monitor student attention and seek to improve it when student engagement decreases. In this paper, we draw on techniques from brain-computer interfaces (BCI) and knowledge from educational psychology to design adaptive agents that monitor student attention in real time using measurements from electroencephalography (EEG) and recapture diminishing attention levels using verbal and nonverbal cues. An experimental evaluation of our approach showed that an adaptive robotic agent employing behavioral techniques to regain attention during drops in engagement improved student recall abilities by 43% over the baseline regardless of student gender and significantly improved female motivation and rapport. Our findings offer guidelines for developing effective adaptive agents, particularly for educational settings.
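The closed-loop design described in this abstract (monitor an engagement signal, trigger attention-recapturing cues when it drops) can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the engagement index, the threshold value, and the cue name are all hypothetical.

```python
# Minimal sketch of an adaptive-agent loop: watch an engagement
# signal and emit an attention-recapturing cue when it drops.
# The 0..1 engagement index, threshold, and cue label below are
# illustrative assumptions, not the system described in the paper.

LOW_ENGAGEMENT = 0.4  # hypothetical threshold on a 0..1 index


def choose_cue(engagement):
    """Return a verbal/nonverbal cue when engagement falls too low."""
    if engagement < LOW_ENGAGEMENT:
        return "verbal_cue_and_gaze_shift"
    return None


def run_session(engagement_stream):
    """Scan a stream of engagement readings; record cues issued as
    (timestep, cue) pairs."""
    cues = []
    for t, engagement in enumerate(engagement_stream):
        cue = choose_cue(engagement)
        if cue is not None:
            cues.append((t, cue))
    return cues
```

For example, `run_session([0.8, 0.5, 0.3, 0.7])` issues a single cue at the third reading, where engagement dips below the threshold.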


Ubiquitous Computing | 2013

MACH: my automated conversation coach

Mohammed E. Hoque; Matthieu Courgeon; Jean-Claude Martin; Bilge Mutlu; Rosalind W. Picard

MACH--My Automated Conversation coacH--is a novel system that provides ubiquitous access to social skills training. The system includes a virtual agent that reads facial expressions, speech, and prosody and responds with verbal and nonverbal behaviors in real time. This paper presents an application of MACH in the context of training for job interviews. During the training, MACH asks interview questions, automatically mimics certain behaviors of the user, and exhibits appropriate nonverbal behaviors. Following the interaction, MACH provides visual feedback on the user's performance. The development of this application draws on data from 28 interview sessions, involving employment-seeking students and career counselors. The effectiveness of MACH was assessed through a weeklong trial with 90 MIT undergraduates. Students who interacted with MACH were rated by human experts to have improved in overall interview performance, while the ratings of students in control groups did not improve. Post-experiment interviews indicate that participants found the interview experience informative about their behaviors and expressed interest in using MACH in the future.


User Interface Software and Technology | 2007

Robust, low-cost, non-intrusive sensing and recognition of seated postures

Bilge Mutlu; Andreas Krause; Jodi Forlizzi; Carlos Guestrin; Jessica K. Hodgins

In this paper, we present a methodology for recognizing seated postures using data from pressure sensors installed on a chair. Information about seated postures could be used to help avoid adverse effects of sitting for long periods of time or to predict seated activities for a human-computer interface. Our system design displays accurate near-real-time classification performance on data from subjects on which the posture recognition system was not trained by using a set of carefully designed, subject-invariant signal features. By using a near-optimal sensor placement strategy, we keep the number of required sensors low, thereby reducing cost and computational complexity. We evaluated the performance of our technology using a series of empirical methods including (1) cross-validation (classification accuracy of 87% for ten postures using data from 31 sensors), and (2) a physical deployment of our system (78% classification accuracy using data from 19 sensors).
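The evaluation the abstract describes, testing on subjects the system was not trained on, amounts to leave-one-subject-out cross-validation. The sketch below illustrates that protocol with a toy nearest-centroid classifier over synthetic feature vectors; the actual features, classifier, and sensor data are those described in the paper, not these.

```python
# Sketch of leave-one-subject-out evaluation for posture recognition.
# The feature vectors, posture labels, and nearest-centroid classifier
# are illustrative assumptions, not the paper's actual pipeline.

def centroid(vectors):
    """Componentwise mean of a list of equal-length feature vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]


def nearest_centroid_predict(centroids, x):
    """Classify x as the posture whose centroid is closest (squared L2)."""
    def dist2(label):
        return sum((a - b) ** 2 for a, b in zip(centroids[label], x))
    return min(centroids, key=dist2)


def leave_one_subject_out(data):
    """data: {subject: [(feature_vector, posture_label), ...]}.
    For each subject, train centroids on all other subjects and
    test on the held-out one; return overall accuracy."""
    correct = total = 0
    for held_out in data:
        train = [s for subj in data if subj != held_out for s in data[subj]]
        by_label = {}
        for x, y in train:
            by_label.setdefault(y, []).append(x)
        centroids = {y: centroid(xs) for y, xs in by_label.items()}
        for x, y in data[held_out]:
            correct += nearest_centroid_predict(centroids, x) == y
            total += 1
    return correct / total
```

On well-separated synthetic data, e.g. "upright" samples near the origin and "slouch" samples near (1, 1) for three subjects, the held-out subject's samples are classified correctly in every fold.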


ACM Transactions on Interactive Intelligent Systems | 2012

Conversational gaze mechanisms for humanlike robots

Bilge Mutlu; Takayuki Kanda; Jodi Forlizzi; Jessica K. Hodgins; Hiroshi Ishiguro

During conversations, speakers employ a number of verbal and nonverbal mechanisms to establish who participates in the conversation, when, and in what capacity. Gaze cues and mechanisms are particularly instrumental in establishing the participant roles of interlocutors, managing speaker turns, and signaling discourse structure. If humanlike robots are to have fluent conversations with people, they will need to use these gaze mechanisms effectively. The current work investigates people's use of key conversational gaze mechanisms, how they might be designed for and implemented in humanlike robots, and whether these signals effectively shape human-robot conversations. We focus particularly on whether humanlike gaze mechanisms might help robots signal different participant roles, manage turn-exchanges, and shape how interlocutors perceive the robot and the conversation. The evaluation of these mechanisms involved 36 trials of three-party human-robot conversations. In these trials, the robot used gaze mechanisms to signal to its conversational partners their roles as either two addressees, an addressee and a bystander, or an addressee and a nonparticipant. Results showed that participants conformed to these intended roles 97% of the time. Their conversational roles affected their rapport with the robot, feelings of groupness with their conversational partners, and attention to the task.


Human-Robot Interaction | 2014

Conversational gaze aversion for humanlike robots

Sean Andrist; Xiang Zhi Tan; Michael Gleicher; Bilge Mutlu

Gaze aversion—the intentional redirection away from the face of an interlocutor—is an important nonverbal cue that serves a number of conversational functions, including signaling cognitive effort, regulating a conversation’s intimacy level, and managing the conversational floor. In prior work, we developed a model of how gaze aversions are employed in conversation to perform these functions. In this paper, we extend the model to apply to conversational robots, enabling them to achieve some of these functions in conversations with people. We present a system that addresses the challenges of adapting human gaze aversion movements to a robot with very different affordances, such as a lack of articulated eyes. This system, implemented on the NAO platform, autonomously generates and combines three distinct types of robot head movements with different purposes: face-tracking movements to engage in mutual gaze, idle head motion to increase lifelikeness, and purposeful gaze aversions to achieve conversational functions. The results of a human-robot interaction study with 30 participants show that gaze aversions implemented with our approach are perceived as intentional, and that robots can use gaze aversions to appear more thoughtful and to effectively manage the conversational floor.


Robot and Human Interactive Communication | 2006

Task Structure and User Attributes as Elements of Human-Robot Interaction Design

Bilge Mutlu; Steven Osman; Jodi Forlizzi; Jessica K. Hodgins; Sara Kiesler

Recent developments in humanoid robotics have made technologically advanced robots possible, along with a vision for their everyday use as assistants in the home and workplace. Nonetheless, little is known about how we should design interactions with humanoid robots. In this paper, we argue that adaptation to user attributes (in particular gender) and task structure (in particular a competitive vs. a cooperative structure) are key design elements. We experimentally demonstrate how these two elements affect users' social perceptions of ASIMO after playing an interactive video game with him.


Human-Robot Interaction | 2012

Robot behavior toolkit: generating effective social behaviors for robots

Chien-Ming Huang; Bilge Mutlu

Social interaction involves a large number of patterned behaviors that people employ to achieve particular communicative goals. To achieve fluent and effective humanlike communication, robots must seamlessly integrate the necessary social behaviors for a given interaction context. However, very little is known about how robots might be equipped with a collection of such behaviors and how they might employ these behaviors in social interaction. In this paper, we propose a framework that guides the generation of social behavior for humanlike robots by systematically using specifications of social behavior from the social sciences and contextualizing these specifications in an Activity-Theory-based interaction model. We present the Robot Behavior Toolkit, an open-source implementation of this framework as a Robot Operating System (ROS) module and a community-based repository for behavioral specifications, and an evaluation of the effectiveness of the Toolkit in using these specifications to generate social behavior in a human-robot interaction study, focusing particularly on gaze behavior. The results show that specifications from this knowledge base enabled the Toolkit to achieve positive social, cognitive, and task outcomes, such as improved information recall, collaborative work, and perceptions of the robot.


Human Factors in Computing Systems | 2005

DanceAlong: supporting positive social exchange and exercise for the elderly through dance

Pedram Keyani; Gary Hsieh; Bilge Mutlu; Matthew W. Easterday; Jodi Forlizzi

The elderly face serious social, environmental, and physical constraints that impact their well-being. Some of the most serious of these are shrinking social connections, limitations in building new relationships, and diminished health. To address these issues, we have designed an augmented dancing environment that allows elders to select dance sequences from well-known movies and dance along with them. The goal of DanceAlong is twofold: (1) to provide entertainment and exercise for each individual user and (2) to promote social engagement within the group. We deployed DanceAlong in a cultural celebration at a senior community center and conducted evaluations. In this paper, we present the design process of DanceAlong, evaluations of DanceAlong, and design guidelines for creating similar interactive systems for the elderly.

Collaboration


Dive into Bilge Mutlu's collaborations.

Top Co-Authors

Michael Gleicher
University of Wisconsin-Madison

Sean Andrist
University of Wisconsin-Madison

Melissa C. Duff
Vanderbilt University Medical Center

Jodi Forlizzi
Carnegie Mellon University

Daniel Rakita
University of Wisconsin-Madison

Allison Sauppé
University of Wisconsin–La Crosse

Chien-Ming Huang
University of Wisconsin-Madison

Lyn S. Turkstra
University of Wisconsin-Madison