Vasant Srinivasan
Texas A&M University
Publications
Featured research published by Vasant Srinivasan.
Human-Robot Interaction | 2011
Vasant Srinivasan; Robin R. Murphy
Based on a synthesis of eight major studies of social gaze in robotics involving six robots, this research proposes a novel behavioral definition of social gaze as a mapping G = E(C) from the perception of a social context C to a set of head, eye, and body patterns called gaze acts G that expresses the engagement E. This definition places social gaze within the behavior-based programming framework for robots and agents, providing a guide for principled future implementations. The research also identifies five social contexts, or functions, of social gaze (Establishing agency, Communicating social attention, Regulating the interaction process, Manifesting interaction content, and Projecting mental state) along with six discrete gaze acts for social gaze functions (Fixation, Short glance, Aversion, Concurrence, Confusion, and Scan) that have been employed by various robots or in simulation for these contexts. The research contributes to a computational understanding of social gaze that bridges the psychological, cognitive, and robotics communities.
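The definition G = E(C) amounts to a behavior-based lookup from perceived context to a set of gaze acts. Below is a minimal Python sketch of that mapping, enumerating the five contexts and six gaze acts named above; the specific context-to-act pairings are hypothetical placeholders, not the assignments from the paper.

```python
from enum import Enum, auto

class SocialContext(Enum):
    ESTABLISHING_AGENCY = auto()
    COMMUNICATING_SOCIAL_ATTENTION = auto()
    REGULATING_INTERACTION = auto()
    MANIFESTING_INTERACTION_CONTENT = auto()
    PROJECTING_MENTAL_STATE = auto()

class GazeAct(Enum):
    FIXATION = auto()
    SHORT_GLANCE = auto()
    AVERSION = auto()
    CONCURRENCE = auto()
    CONFUSION = auto()
    SCAN = auto()

# E: perceived social context -> gaze acts expressing engagement.
# These pairings are illustrative only.
ENGAGEMENT_MAP = {
    SocialContext.ESTABLISHING_AGENCY: {GazeAct.FIXATION, GazeAct.SCAN},
    SocialContext.COMMUNICATING_SOCIAL_ATTENTION: {GazeAct.FIXATION, GazeAct.SHORT_GLANCE},
    SocialContext.REGULATING_INTERACTION: {GazeAct.SHORT_GLANCE, GazeAct.AVERSION},
    SocialContext.MANIFESTING_INTERACTION_CONTENT: {GazeAct.CONCURRENCE, GazeAct.SCAN},
    SocialContext.PROJECTING_MENTAL_STATE: {GazeAct.CONFUSION, GazeAct.AVERSION},
}

def gaze_acts(context: SocialContext) -> set[GazeAct]:
    """G = E(C): return the gaze acts that express engagement for a context."""
    return ENGAGEMENT_MAP[context]
```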
Human Factors in Computing Systems | 2016
Vasant Srinivasan; Leila Takayama
Robots that can leverage help from people could accomplish much more than robots that cannot. We present the results of two experiments that examine how robots can more effectively request help from people. Study 1 is a video prototype experiment (N=354), investigating the effectiveness of four linguistic politeness strategies as well as the effects of social status (equal, low), size of request (large, small), and robot familiarity (high, low) on people's willingness to help a robot. The results of this study largely support Politeness Theory and the Computers as Social Actors paradigm. Study 2 is a physical human-robot interaction experiment (N=48), examining the impact of source orientation (autonomous, single operator, multiple operators) on people's behavioral willingness to help the robot. People were nearly 50% faster to help the robot if they perceived it to be autonomous rather than teleoperated. Implications for research design, theory, and methods are discussed.
IEEE Transactions on Human-Machine Systems | 2014
Zachary Henkel; Cindy L. Bethel; Robin R. Murphy; Vasant Srinivasan
This paper introduces and empirically evaluates two scaling functions that alter a robot's physical movements based on proximity to a human. Previous research has focused on individual aspects of proxemics, such as the appropriate distance to maintain from a human, but has not explored autonomous methods to adapt robot behavior as proximity changes. This paper proposes that robots in a social role should modify their behavior using a continuous function mapped to proximity. The method developed calculates a gain value from proximity readings, which is used to shape the execution of active behaviors on the robot. In order to identify the effects of different mappings from proximity to gain value, two different scaling functions were implemented on an affective search and rescue robot. The findings from a 72-participant study at a high-fidelity mock disaster site are examined, with attention given to a new measure of proxemic awareness. The results indicated that for attributes of intelligence, likability, proxemic awareness, and submissiveness, a logarithmic-based scaling function is preferred over a linear-based scaling function and over no scaling function. In areas of participant comfort and participant stress, the results indicated both logarithmic and linear scaling functions were preferred to no scaling.
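To make the gain idea concrete, here is a minimal Python sketch of linear and logarithmic proximity-to-gain mappings, assuming proximity in meters and gain clamped to [0, 1]; the constant and exact functional forms are illustrative, not the ones evaluated in the study.

```python
import math

MAX_RANGE_M = 3.0  # hypothetical distance at which behaviors run at full amplitude

def _clamp01(x: float) -> float:
    return min(max(x, 0.0), 1.0)

def linear_gain(proximity_m: float) -> float:
    """Gain grows in direct proportion to distance from the human."""
    return _clamp01(proximity_m / MAX_RANGE_M)

def log_gain(proximity_m: float) -> float:
    """Concave mapping: gain changes fastest at short range, then flattens."""
    return _clamp01(math.log1p(max(proximity_m, 0.0)) / math.log1p(MAX_RANGE_M))

def scale_behavior(amplitude: float, proximity_m: float, gain=log_gain) -> float:
    """Shape an active behavior's output amplitude by the proximity gain."""
    return amplitude * gain(proximity_m)
```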
IEEE Transactions on Human-Machine Systems | 2014
Vasant Srinivasan; Cindy L. Bethel; Robin R. Murphy
This study demonstrates that robots can achieve socially acceptable interactions using loosely synchronized head gaze-speech acts. Prior approaches use tightly synchronized head gaze-speech, which requires significant human effort and time to manually annotate synchronization events in advance, restricts interactive dialog, or requires that the operator act as a puppeteer. This paper describes how autonomous synchronization of head gaze can be achieved by exploiting affordances in the sentence structure and time delays. A 93-participant user study was conducted in a simulated disaster site. The rescue robot “Survivor Buddy” generated head gaze for a victim management scenario using a 911 dialog. The study used pre- and postinteraction questionnaires to compare the social acceptance level of loosely synchronized head gaze-speech against tightly synchronized head gaze-speech (manual annotation) and no head gaze-speech conditions. The results indicated that for the attributes of Arousal (Self-Assessment Manikin), Robot Likeability, Human-Like Behavior, Understanding Robot Behavior, Gaze-Speech Synchronization, Looking at Objects at Appropriate Times, and Natural Movement, loosely synchronized head gaze-speech is similar to tightly synchronized head gaze-speech and preferred to the no head gaze-speech case. This study contributes to a fundamental understanding of the role of social head gaze in social acceptance for human-machine interaction and of how social gaze can be produced, and it promotes practical implementation in social robots.
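A minimal Python sketch of the idea: gaze shifts are triggered from affordances in the sentence itself plus fixed time delays, rather than from hand-annotated events. The `tts` and `head` objects and the bracketed-referent convention are hypothetical, not the paper's implementation.

```python
import re
import time

GAZE_DELAY_S = 0.3  # illustrative delay between speech onset and each gaze shift

def speak_with_gaze(sentence: str, tts, head) -> None:
    """Speak a sentence, loosely shifting gaze to each bracketed referent.

    Example: speak_with_gaze("Hand me the [radio] by the [door].", tts, head)
    """
    referents = re.findall(r"\[(\w+)\]", sentence)
    tts.say(sentence.replace("[", "").replace("]", ""))  # assumed non-blocking
    for target in referents:
        time.sleep(GAZE_DELAY_S)  # loose timing derived from sentence structure
        head.gaze_at(target)      # look toward the named object
    head.gaze_at("speaker")       # return gaze to the human at sentence end
```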
International Conference on Robotics and Automation | 2011
Robin R. Murphy; Aaron Rice; Negar Rashidi; Zachary Henkel; Vasant Srinivasan
Designing and constructing affective robots on schedule and within costs is especially challenging because of the qualitative, artistic nature of affective expressions. Detailed affective design principles do not exist, forcing an iterative design process. This paper describes a three-step design process created for the Survivor Buddy project that engages artists in the design process and allows animation to guide physical implementation. The process combines creative design of believable agents unconstrained by costs with traditional design decision matrices. The paper provides a case study comparing the resulting design of the Survivor Buddy 2.0 robot with the original (Survivor Buddy 1.0). The multi-disciplinary methodology produced a more pleasing and expressive robot that was 50% less expensive, 78% lighter, and up to 700% faster within the same amount of design time. This methodology is expected to contribute to reducing risk in designing cost-effective affective robots and robots in general.
International Journal of Social Robotics | 2015
Vasant Srinivasan; Robin R. Murphy; Cindy L. Bethel
This article outlines a reference architecture for social head gaze generation in social robots. The architecture discussed here is grounded in human communication, based on behavioral robotics theory, and captures the commonalities, essence, and experience of 32 previous social robotics implementations of social head gaze. No such architecture currently exists, but such an architecture is needed to: (1) serve as a template for creating or re-engineering systems, (2) provide analyses and understanding of different systems, and (3) provide a common lexicon and taxonomy that facilitates communication across various communities. A constructed reference architecture and the Software Architecture Analysis Method (SAAM) are used to evaluate, improve, and re-engineer two existing head gaze system architectures (Human–Robot Collaboration architecture and Robot Behavior Toolkit architecture). SAAM shows that no existing architecture incorporated the summation of functionalities found in the 32 studies. SAAM suggests several architectural improvements so that the two existing architectures can better support adaptation to a new environment and extension of capability. The resulting reference architecture guides the implementation of social head gaze in a rescue robot for the purpose of victim management in urban search and rescue (US&R). Using the proposed reference architecture will benefit social robotics because it will simplify the principled implementations of head gaze generation and allow for comparisons between such implementations.
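As a rough illustration of what such a reference architecture might look like in code, the Python sketch below decomposes head gaze generation into three swappable stages (perceive the social context, select a gaze act, render it on the head). This decomposition is a plausible reading of the behavior-based framing, not the article's actual module breakdown.

```python
from abc import ABC, abstractmethod

class ContextPerceiver(ABC):
    @abstractmethod
    def perceive(self, sensor_data) -> str:
        """Classify the current social context (e.g., 'regulating')."""

class GazeActSelector(ABC):
    @abstractmethod
    def select(self, context: str) -> str:
        """Choose a gaze act (e.g., 'fixation') for the given context."""

class GazeRenderer(ABC):
    @abstractmethod
    def render(self, gaze_act: str) -> None:
        """Execute the gaze act on the robot's head hardware."""

class HeadGazePipeline:
    """Wires the stages together; replacing any one stage adapts the
    system to a new environment or extends its capability."""

    def __init__(self, perceiver: ContextPerceiver,
                 selector: GazeActSelector, renderer: GazeRenderer):
        self.perceiver = perceiver
        self.selector = selector
        self.renderer = renderer

    def step(self, sensor_data) -> None:
        context = self.perceiver.perceive(sensor_data)
        self.renderer.render(self.selector.select(context))
```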
IEEE International Conference on Technologies for Homeland Security | 2013
Robin R. Murphy; Vasant Srinivasan; Zachary Henkel; Jesus Suarez; Matthew Minson; J. C. Straus; Stanley Hempstead; Tim Valdez; Shinichi Egawa
The paper reports on a discovery field exercise used to examine how disaster responders can use an audio and video equipped robot to interact with a trapped victim. In the exercise, a small robot with two-way video and audio communication was inserted into a physically simulated building collapse next to a trapped victim, and was provided to a team of trained responders as a means for performing remote triage and victim monitoring. The interaction between the responders and the victim was examined, with emphasis on how the responders adapted to different video and audio capabilities, and how they might have responded to different populations and injuries that may limit communication. The ad hoc interaction protocols used by the responders were observed in the field exercise, and four interaction schemes were identified: Two-way Video with Two-way Audio, One-way Video (from Robot to Responders) with Two-way Audio, Two-way Video with no Audio, and One-way Video (from Robot to Responders) with no Audio. The interaction schemes are defined according to the minimum capabilities of the robot and victim, the requirements of the responders, and preliminary protocols required for each interaction scheme. From observations made about the exercise, the paper identifies minimalistic interfaces and transparency of robot state as key areas for improving a robot-mediated interaction between responders and victims.
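The four schemes can be read as a selection rule over the minimum capabilities available at the scene. The Python sketch below encodes that reading; the capability flags and fallback order are illustrative, not a protocol defined in the paper.

```python
from dataclasses import dataclass

@dataclass
class Capabilities:
    video_to_responders: bool  # robot's camera feed reaches the responders
    video_to_victim: bool      # responders' video reaches the victim's screen
    audio_two_way: bool        # victim can both hear and be heard

def interaction_scheme(c: Capabilities) -> str:
    """Pick the richest interaction scheme the available channels support."""
    if c.video_to_victim and c.audio_two_way:
        return "Two-way Video with Two-way Audio"
    if c.video_to_responders and c.audio_two_way:
        return "One-way Video (from Robot to Responders) with Two-way Audio"
    if c.video_to_victim:
        return "Two-way Video with no Audio"
    return "One-way Video (from Robot to Responders) with no Audio"
```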
Human-Robot Interaction | 2011
Vasant Srinivasan; Robin R. Murphy; Zachary Henkel; Victoria Groom; Clifford Nass
This paper describes an open source speech translator toolkit, created as part of the “Survivor Buddy” project, which allows the written or spoken words of multiple independent controllers to be rendered as a single synthetic voice, a distinct synthetic voice for each controller, or each controller's unchanged natural voice. The human controllers can work over the internet or be physically co-located with the Survivor Buddy. The toolkit is expected to be of use for exploring voice in general human-robot interaction.
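A minimal Python sketch of the three output-voice modes described above, assuming hypothetical synth(text, voice_id) and passthrough(audio) primitives; this is not the toolkit's actual API.

```python
from enum import Enum

class VoiceMode(Enum):
    SINGLE_SYNTHETIC = 1          # all controllers share one synthetic voice
    PER_CONTROLLER_SYNTHETIC = 2  # a distinct synthetic voice per controller
    NATURAL = 3                   # pass each controller's own voice through

def route_utterance(controller_id: str, text: str, audio: bytes,
                    mode: VoiceMode, synth, passthrough):
    """Render one controller's utterance according to the selected mode."""
    if mode is VoiceMode.SINGLE_SYNTHETIC:
        return synth(text, voice_id="survivor_buddy")   # shared robot voice
    if mode is VoiceMode.PER_CONTROLLER_SYNTHETIC:
        return synth(text, voice_id=f"controller_{controller_id}")
    return passthrough(audio)                           # VoiceMode.NATURAL
```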
Paladyn: Journal of Behavioral Robotics | 2016
Zachary Henkel; Jesus Suarez; Vasant Srinivasan; Robin R. Murphy
This article reports observations from a field study in which medical responders used a social telepresence robot to communicate with participants playing the role of a trapped victim in two search and rescue exercises. The interaction between the robot, victims, and responders suggests the coexistence of two distinct social identities for the robot: one in which it is a pure conduit for the remote medic, and another in which it is treated as an independent social actor. Participants acting as victims demonstrated fluidity in interacting with each identity. The social identity of a robot has important implications for the development of future telepresence systems, particularly in the healthcare domain. Since victims in the exercises gave attention to both the robot and the remote medic, it is possible that the robot's social actor role may divert attention from the remotely connected individual. The work provides a starting point for investigating role conflict between a remote medical professional and the robot they are using to assist a patient.
Human-Robot Interaction | 2011
Robin R. Murphy; Jessica Gonzales; Vasant Srinivasan
We have created a preliminary inference engine for generating gaze acts based on extracting the social context from conversational structure and timing in human-robot dialog.
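A minimal Python sketch of what one rule in such an inference engine might look like, mapping conversational structure and timing to a gaze act; the cue names, threshold, and rules below are hypothetical illustrations, not the engine's actual rules.

```python
SILENCE_THRESHOLD_S = 2.0  # illustrative pause length signalling confusion

def infer_gaze_act(utterance_type: str, pause_s: float) -> str:
    """Map dialog structure and timing to one of the discrete gaze acts."""
    if pause_s > SILENCE_THRESHOLD_S:
        return "confusion"     # a long silence warrants an uncertain look
    if utterance_type == "question":
        return "fixation"      # hold gaze while awaiting an answer
    if utterance_type == "acknowledgment":
        return "concurrence"   # brief affirmative gaze act
    return "short_glance"      # default turn-taking cue
```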