Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Ross Mead is active.

Publication


Featured research published by Ross Mead.


International Journal of Social Robotics | 2013

Automated Proxemic Feature Extraction and Behavior Recognition: Applications in Human-Robot Interaction

Ross Mead; Amin Atrash; Maja J. Matarić

In this work, we discuss a set of feature representations for analyzing human spatial behavior (proxemics) motivated by metrics used in the social sciences. Specifically, we consider individual, physical, and psychophysical factors that contribute to social spacing. We demonstrate the feasibility of autonomous real-time annotation of these proxemic features during a social interaction between two people and a humanoid robot in the presence of a visual obstruction (a physical barrier). We then use two different feature representations—physical and psychophysical—to train Hidden Markov Models (HMMs) to recognize spatiotemporal behaviors that signify transitions into (initiation) and out of (termination) a social interaction. We demonstrate that the HMMs trained on psychophysical features, which encode the sensory experience of each interacting agent, outperform those trained on physical features, which only encode spatial relationships. These results suggest a more powerful representation of proxemic behavior with particular implications in autonomous socially interactive and socially assistive robotics.
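
The recognition scheme described above lends itself to a simple per-class HMM setup. The sketch below is an illustrative assumption rather than the authors' implementation, using the hmmlearn library: one Gaussian HMM is trained per behavior (initiation, termination) on sequences of proxemic feature vectors, and a new sequence is labeled by the model with the highest log-likelihood. The feature encodings (physical vs. psychophysical) are assumed to be computed upstream.

```python
import numpy as np
from hmmlearn import hmm  # pip install hmmlearn

def train_behavior_hmm(sequences, n_states=4):
    """Fit one Gaussian HMM to a list of (T_i, n_features) proxemic feature sequences."""
    X = np.concatenate(sequences)
    lengths = [len(s) for s in sequences]
    model = hmm.GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=100)
    model.fit(X, lengths)
    return model

def classify(sequence, models):
    """Label a new sequence with the behavior whose HMM gives the highest log-likelihood."""
    return max(models, key=lambda label: models[label].score(sequence))

# Hypothetical usage: initiation_seqs / termination_seqs would hold per-frame feature
# vectors (physical or psychophysical encodings) segmented from the interaction corpus.
# models = {"initiation": train_behavior_hmm(initiation_seqs),
#           "termination": train_behavior_hmm(termination_seqs)}
# label = classify(new_sequence, models)
```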


Robot and Human Interactive Communication | 2010

An architecture for rehabilitation task practice in socially assistive human-robot interaction

Ross Mead; Eric Wade; Pierre Johnson; Aaron B. St. Clair; Shuya Chen; Maja J. Matarić

New approaches to rehabilitation and health care have developed due to advances in technology and human-robot interaction (HRI). Socially assistive robotics (SAR) is a subcategory of HRI that focuses on providing assistance through hands-off interactions. We have developed a SAR architecture that facilitates multiple task-oriented interactions between a user and a robot agent. The architecture accommodates a variety of inputs, tasks, and interaction modalities that are used to provide relevant, real-time feedback to the participant. We have implemented the architecture and validated its technological feasibility in a small pilot study in which a SAR agent led three post-stroke individuals through an exercise scenario. In the following, we present our architecture design and the results of the feasibility study.
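
As a rough illustration only (not the authors' architecture, and with every name below being a hypothetical stand-in), a task-practice loop of the kind described above might map sensed exercise events to task state and select spoken feedback in real time:

```python
from dataclasses import dataclass

@dataclass
class TaskState:
    repetitions: int = 0
    goal: int = 10
    form_ok: bool = True

def read_sensors():
    """Placeholder for sensed exercise events (e.g., from motion or button sensors)."""
    return {"repetition_done": True, "form_ok": True}

def feedback(state: TaskState) -> str:
    """Select a relevant, real-time utterance from the current task state."""
    if not state.form_ok:
        return "Try to extend your arm a little further this time."
    if state.repetitions >= state.goal:
        return "Great job, you finished the set!"
    return f"Nice work, that's {state.repetitions} of {state.goal}."

state = TaskState()
while state.repetitions < state.goal:
    event = read_sensors()                  # input modality
    if event["repetition_done"]:
        state.repetitions += 1              # task model update
        state.form_ok = event["form_ok"]
    print(feedback(state))                  # stand-in for robot speech output
```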


Human-Robot Interaction | 2011

Recognition of spatial dynamics for predicting social interaction

Ross Mead; Amin Atrash; Maja J. Matarić

We present a user study and dataset designed and collected to analyze how humans use space in face-to-face interactions. In a proof-of-concept investigation into human spatial dynamics, a Hidden Markov Model (HMM) was trained over a subset of features to recognize each of three interaction cues (initiation, acceptance, and termination) in both dyadic and triadic scenarios; these cues are useful in predicting transitions into, during, and out of multi-party social encounters. It is shown that the HMM approach performed twice as well as a weighted random classifier, supporting the feasibility of recognizing and predicting social behavior based on spatial features.
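
For context on the baseline mentioned above, a weighted random classifier guesses each label by sampling from the empirical class distribution. A minimal sketch (an assumption, not the paper's evaluation code) of such a baseline:

```python
import numpy as np

def weighted_random_accuracy(y_true, rng=np.random.default_rng(0)):
    """Accuracy of guessing labels by sampling from the empirical class distribution."""
    labels, counts = np.unique(y_true, return_counts=True)
    priors = counts / counts.sum()
    guesses = rng.choice(labels, size=len(y_true), p=priors)
    return float(np.mean(guesses == y_true))

# Hypothetical usage with ground-truth cue labels per segment:
# y_true = np.array(["initiation", "acceptance", "termination", ...])
# print("weighted-random baseline accuracy:", weighted_random_accuracy(y_true))
```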


International Symposium on Experimental Robotics | 2016

Perceptual Models of Human-Robot Proxemics

Ross Mead; Maja J. Matarić

To enable socially situated human-robot interaction, a robot must both understand and control proxemics—the social use of space—to employ communication mechanisms analogous to those used by humans. In this work, we considered how proxemic behavior is influenced by human speech and gesture production, and how this impacts robot speech and gesture recognition in face-to-face social interactions. We conducted a data collection to model these factors conditioned on distance. The resulting models of pose, speech, and gesture were consistent with related work in human-robot interactions, but were inconsistent with related work in human-human interactions: participants in our data collection positioned themselves much farther away than has been observed in human-human studies. These models have been integrated into a situated autonomous proxemic robot controller, in which the robot selects interagent pose parameters to maximize its expected recognition of natural human speech and body gestures during an interaction. This work contributes to the understanding of the underlying pan-cultural processes that govern human proxemic behavior, and has implications for the development of robust proxemic controllers for sociable and socially assistive robots situated in complex interactions (e.g., with multiple people or individuals with hearing/visual impairments) and environments (e.g., in which there is loud noise, reverberation, low lighting, or visual occlusion).
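
The controller's selection criterion described above can be illustrated with a toy example. The sketch below is an assumption, not the authors' model or code: the distance-conditioned recognition models are stand-in callables, and the robot picks the interagent distance that maximizes their product.

```python
import numpy as np

def expected_recognition(distance_m, p_speech, p_gesture):
    """Joint recognition probability at a given distance, assuming conditional
    independence of speech and gesture recognition given distance (an assumption)."""
    return p_speech(distance_m) * p_gesture(distance_m)

def best_distance(p_speech, p_gesture, candidates=np.linspace(0.5, 4.0, 36)):
    """Pick the interagent distance with the highest expected recognition."""
    scores = [expected_recognition(d, p_speech, p_gesture) for d in candidates]
    return float(candidates[int(np.argmax(scores))])

# Hypothetical stand-ins for distance-conditioned perceptual models: speech recognition
# degrades with distance; gesture recognition needs the body fully in frame.
p_speech = lambda d: np.exp(-0.4 * d)
p_gesture = lambda d: np.exp(-0.5 * (d - 2.0) ** 2)
print(best_distance(p_speech, p_gesture))
```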


International Conference on Social Robotics | 2011

Proxemic feature recognition for interactive robots: automating metrics from the social sciences

Ross Mead; Amin Atrash; Maja J. Matarić

In this work, we discuss a set of metrics for analyzing human spatial behavior (proxemics) motivated by work in the social sciences. Specifically, we investigate individual, attentional, interpersonal, and physiological factors that contribute to social spacing. We demonstrate the feasibility of autonomous real-time annotation of these spatial features during multi-person social encounters. We utilize sensor suites that are non-invasive to participants, are readily deployable in a variety of environments (ranging from an instrumented workspace to a mobile robot platform), and do not interfere with the social interaction itself. Finally, we provide a discussion of the impact of these metrics and their utility in autonomous socially interactive systems.
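
As a hedged illustration of the kinds of spatial features involved (the paper's feature set, sensors, and annotation pipeline are richer), pairwise proxemic features such as interpersonal distance, bearing, and relative body orientation can be computed from tracked 2-D poses:

```python
import math

def proxemic_features(pose_a, pose_b):
    """Each pose is (x, y, heading_radians) in a shared world frame."""
    xa, ya, tha = pose_a
    xb, yb, thb = pose_b
    dx, dy = xb - xa, yb - ya
    wrap = lambda a: math.atan2(math.sin(a), math.cos(a))   # wrap angle to [-pi, pi]
    return {
        "distance": math.hypot(dx, dy),                     # interpersonal distance (m)
        "bearing": wrap(math.atan2(dy, dx) - tha),          # where B lies in A's frame
        "relative_orientation": wrap(thb - tha),            # alignment of body headings
    }

print(proxemic_features((0.0, 0.0, 0.0), (1.2, 0.5, math.pi)))
```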


Paladyn | 2011

Socially Assistive Robotics for Guiding Motor Task Practice

Eric Wade; Avinash Parnandi; Ross Mead; Maja J. Matarić

Due to their quantitative nature, robotic systems are useful tools for systematically augmenting human behavior and performance in dynamic environments, such as therapeutic rehabilitation settings. The efficacy of human-robot interaction (HRI) in these settings will depend on the robot’s coaching style. Our goal was to investigate the influence of robot coaching styles designed to enhance motivation and encouragement on post-stroke individuals during motor task practice. We hypothesized that coaching styles incorporating user performance and preference would be preferred in a therapeutic HRI setting. We designed an evaluation study with seven individuals post stroke. A socially assistive robotics (SAR) system using three different coaching styles guided participants during performance of an upper extremity practice task. User preference was not significantly affected by the different robot coaching styles in our participant sample (H(2) = 2.638, p = 0.267). However, trends indicated differences in preference for the coaching styles. Our results provide insights into the design and use of SAR systems in therapeutic interactions aiming to influence user behavior.


Human-Robot Interaction | 2012

A probabilistic framework for autonomous proxemic control in situated and mobile human-robot interaction

Ross Mead; Maja J. Matarić

In this paper, we draw upon insights gained in our previous work on human-human proxemic behavior analysis to develop a novel method for human-robot proxemic behavior production. A probabilistic framework for spatial interaction has been developed that considers the sensory experience of each agent (human or robot) in a co-present social encounter. In this preliminary work, a robot attempts to maintain a set of human body features in its camera field-of-view. This methodology addresses the functional aspects of proxemic behavior in human-robot interaction, and provides an elegant connection between previous approaches.
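
A loose sketch of the field-of-view idea mentioned above, not the paper's probabilistic framework: candidate robot poses around the person are sampled and scored by how many tracked human body features fall inside the robot's horizontal camera field of view. All values here are hypothetical.

```python
import math, random

def in_fov(robot_pose, point, half_fov=math.radians(30)):
    """True if a world-frame point lies inside the robot's horizontal field of view."""
    rx, ry, rth = robot_pose
    angle = math.atan2(point[1] - ry, point[0] - rx) - rth
    angle = math.atan2(math.sin(angle), math.cos(angle))    # wrap to [-pi, pi]
    return abs(angle) <= half_fov

def score(robot_pose, human_features):
    """Count how many tracked body features the robot can keep in view."""
    return sum(in_fov(robot_pose, f) for f in human_features)

def sample_pose(human_xy, rng=random.Random(0)):
    """Sample a candidate robot pose on a ring around the person, facing them."""
    d, a = rng.uniform(1.0, 2.5), rng.uniform(-math.pi, math.pi)
    x, y = human_xy[0] + d * math.cos(a), human_xy[1] + d * math.sin(a)
    return (x, y, math.atan2(human_xy[1] - y, human_xy[0] - x))

# Hypothetical head/shoulder/hand feature points of one person (world frame, metres).
features = [(2.0, 0.0), (2.0, 0.3), (1.9, -0.3), (2.1, 0.5)]
best = max((sample_pose((2.0, 0.0)) for _ in range(200)), key=lambda p: score(p, features))
print(best)
```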


International Conference on Multimodal Interfaces | 2012

Space, speech, and gesture in human-robot interaction

Ross Mead

To enable natural and productive situated human-robot interaction, a robot must both understand and control proxemics, the social use of space, in order to employ communication mechanisms analogous to those used by humans: social speech and gesture production and recognition. My research focuses on answering these questions: How do social (auditory and visual) and environmental (noisy and occluding) stimuli influence spatially situated communication between humans and robots, and how should a robot dynamically adjust its communication mechanisms to maximize human perceptions of its social signals in the presence of extrinsic and intrinsic sensory interference?


Robot and Human Interactive Communication | 2011

Investigating the effects of visual saliency on deictic gesture production by a humanoid robot

Aaron B. St. Clair; Ross Mead; Maja J. Matarić

In many collocated human-robot interaction scenarios, robots are required to accurately and unambiguously indicate an object or point of interest in the environment. Realistic, cluttered environments containing many visually salient targets can present a challenge for the observer of such pointing behavior. In this paper, we describe an experiment and results detailing the effects of visual saliency and pointing modality on human perceptual accuracy of a robot's deictic gestures (head and arm pointing) and compare the results to the perception of human pointing.


Autonomous Robots | 2017

Autonomous human–robot proxemics: socially aware navigation based on interaction potential

Ross Mead; Maja J. Matarić

To enable situated human–robot interaction (HRI), an autonomous robot must both understand and control proxemics—the social use of space—to employ natural communication mechanisms analogous to those used by humans. This work presents a computational framework of proxemics based on data-driven probabilistic models of how social signals (speech and gesture) are produced (by a human) and perceived (by a robot). The framework and models were implemented as autonomous proxemic behavior systems for sociable robots, including: (1) a sampling-based method for robot proxemic goal state estimation with respect to human–robot distance and orientation parameters, (2) a reactive proxemic controller for goal state realization, and (3) a cost-based trajectory planner for maximizing automated robot speech and gesture recognition rates along a path to the goal state. Evaluation results indicate that the goal state estimation and realization significantly improve upon past work in human–robot proxemics with respect to “interaction potential”—predicted automated speech and gesture recognition rates as the robot enters into and engages in face-to-face social encounters with a human user—illustrating their efficacy to support richer robot perception and autonomy in HRI.
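
As a rough, assumption-laden sketch of the cost-based planning idea in item (3) above (not the authors' planner), candidate paths to the proxemic goal can be scored by the predicted recognition rate accumulated along their waypoints, with the predictive model stubbed out as a simple distance-based callable:

```python
import numpy as np

def straight_path(start, goal, n=10):
    """Evenly spaced waypoints on the segment from start to goal (2-D)."""
    return list(zip(np.linspace(start[0], goal[0], n), np.linspace(start[1], goal[1], n)))

def path_cost(waypoints, predicted_recognition):
    """Accumulated (1 - predicted recognition rate) over a path's waypoints."""
    return float(sum(1.0 - predicted_recognition(x, y) for x, y in waypoints))

def best_path(start, goal, via_points, predicted_recognition):
    """Among two-segment paths (start -> via -> goal), pick the lowest-cost one."""
    candidates = [straight_path(start, v) + straight_path(v, goal) for v in via_points]
    return min(candidates, key=lambda p: path_cost(p, predicted_recognition))

# Hypothetical recognition model: recognition is better when closer to the user at (2, 0).
user = (2.0, 0.0)
p_rec = lambda x, y: float(np.exp(-0.3 * np.hypot(x - user[0], y - user[1])))
path = best_path(start=(0.0, 2.0), goal=(1.5, 0.0),
                 via_points=[(0.5, 1.0), (1.5, 1.5), (2.5, 1.0)],
                 predicted_recognition=p_rec)
print(path[:3])
```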

Collaboration


Dive into Ross Mead's collaborations.

Top Co-Authors

Maja J. Matarić | University of Southern California
Jerry B. Weinberg | Southern Illinois University Edwardsville
Amin Atrash | University of Southern California
Aaron B. St. Clair | University of Southern California
Eric Wade | University of Tennessee
Brent Beer | Southern Illinois University Edwardsville
Jeffrey R. Croxell | Southern Illinois University Edwardsville
Robert Louis Long | Southern Illinois University Edwardsville
Tarik Tosun | University of Pennsylvania