Publication


Featured research published by Tracy Sanders.


Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 2011

A Model of Human-Robot Trust: Theoretical Model Development

Tracy Sanders; Kristin E. Oleson; Deborah R. Billings; Jessie Y. C. Chen; Peter A. Hancock

This work explores the theoretical foundations of trust that provide the framework for the development of our model of human-robot team trust. The pragmatic purpose of this model is to provide a greater understanding of the factors that facilitate the development of human operator trust in robotic teammates. We base the model's structure on findings from a quantitative meta-analysis we have completed. Our approach categorizes the dimensions influencing trust in human-robot interaction; to date, we have explored human-, robot-, and environment-based factors. We conclude by outlining our road map for model development and refinement.
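
By way of illustration only, and not as the published model itself, the three-way categorization described above could be sketched as a simple data structure; every factor name, and the toy scoring function, is a hypothetical placeholder:

```python
# Illustrative only: the three factor categories named in the abstract
# (human-, robot-, and environment-based). The factor names are
# hypothetical placeholders, not the published model's contents.
TRUST_FACTORS = {
    "human":       ["propensity to trust", "prior experience", "workload"],
    "robot":       ["reliability", "predictability", "physical form"],
    "environment": ["task risk", "team composition"],
}

def mean_trust_rating(ratings: dict[str, float]) -> float:
    """Average per-factor ratings (0-1) into a single toy trust estimate."""
    return sum(ratings.values()) / len(ratings)

print(mean_trust_rating({"reliability": 0.9, "physical form": 0.6, "task risk": 0.4}))
```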


Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 2012

Classification of Robot Form: Factors Predicting Perceived Trustworthiness

Kristin E. Schaefer; Tracy Sanders; Ryan Yordon; Deborah R. Billings; Peter A. Hancock

Many factors influence the perceived usability of robots, including attributes of the human user, the environment, and the robot itself. Traditionally, the primary focus of research has been on the performance-based characteristics of the robot for the purposes of classification, design, and understanding human-robot trust. In this work, we examine human perceptions of the aesthetic dimensions of robots across a variety of domains to gain insight into the impact of physical form on the perceived trustworthiness that forms prior to human-robot interaction. Results show that physical form does matter when predicting the initial trustworthiness of a robot, primarily through the perceived intelligence and classification of the robot.


Proceedings of SPIE | 2014

Determinants of system transparency and its influence on trust in and reliance on unmanned robotic systems

Scott Ososky; Tracy Sanders; Florian Jentsch; Peter A. Hancock; Jessie Y. C. Chen

Increasingly autonomous robotic systems are expected to play a vital role in aiding humans in complex and dangerous environments. It is unlikely, however, that such systems will be able to operate with perfect reliability at all times. Even systems that are less than 100% reliable can provide a significant benefit to humans, but that benefit depends on a human operator's ability to understand a robot's behaviors and states. The notion of system transparency is examined as a vital aspect of robotic design for maintaining humans' trust in, and reliance on, increasingly automated platforms. System transparency is described as the degree to which a system's action, or the intention of an action, is apparent to human operators and/or observers. While the physical designs of robotic systems have been shown to greatly influence humans' impressions of robots, the determinants of transparency between humans and robots are not solely robot-centric. Our approach considers transparency as an emergent property of the human-robot system. In this paper, we present insights from our interdisciplinary efforts to improve the transparency of teams made up of humans and unmanned robots: near-future teams in which robot agents will autonomously collaborate with humans to achieve task goals. The paper demonstrates how factors such as human-robot communication and human mental models of robots affect a human's ability to recognize the actions or states of an automated system. Furthermore, we discuss the implications of system transparency for other critical HRI factors such as situation awareness, operator workload, and perceptions of trust.


IEEE International Multi-Disciplinary Conference on Cognitive Methods in Situation Awareness and Decision Support (CogSIMA) | 2015

Fidelity & validity in robotic simulation

K. Elizabeth Schafer; Tracy Sanders; Theresa Kessler; Mitchell Dunfee; Tyler Wild; Peter A. Hancock

This work assesses the relationships among common theoretical constructs involved in simulation design and evaluation. Specifically, the degree to which realism is a desirable design goal is examined through a thorough review of the available literature. It was found that, especially for training simulations, high fidelity does not always beget improved outcomes, and this finding was corroborated by the results of an experiment involving a simulated robot. In the within-subjects experiment, participants rated their trust in both live and simulated versions of a robot performing in both reliable and unreliable scenarios. As predicted, strong correlations between the live and simulated conditions, in both the reliable and unreliable scenarios, validate the RIVET simulation engine as a model for trust in HRI and provide further evidence that relatively low-fidelity simulations can sometimes match or even exceed high-fidelity alternatives.
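
The validation logic described above hinges on trust ratings for the simulated robot tracking ratings for the live robot. A minimal sketch of that check, using fabricated placeholder ratings rather than the study's data, might look like this:

```python
# Illustrative sketch: if participants' trust in the simulated robot
# covaries strongly with their trust in the live robot, that supports
# the simulation's validity as a model for trust. Placeholder data only.
from scipy.stats import pearsonr

live_trust      = [72, 65, 80, 55, 90, 60, 78, 84]   # hypothetical 0-100 ratings
simulated_trust = [70, 62, 83, 58, 87, 57, 75, 88]

r, p = pearsonr(live_trust, simulated_trust)
print(f"r = {r:.2f}, p = {p:.4f}")  # a high r supports simulation validity
```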


Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 2012

Augmented Emotion and its Remote Embodiment: The Importance of Design from Fiction to Reality

Kristin E. Schaefer; Jacquelyn G. Cook; Jeffrey K. Adams; Jonathan Bell; Tracy Sanders; Peter A. Hancock

In this work, we address the under-emphasized need for attention to the emotional dynamics involved in human-robot interaction, a need that becomes more prominent as robots continue to transition from a tool-based role to that of a teammate or companion. A theoretical review of robotic design, drawing on both current technology and fictional media, provides a foundation for understanding the domains in which the remote embodiment of human emotions can be used. Current and prior research is discussed, along with limitations and open needs. Recommendations are established for an initial best-practices approach to design implementation that provides optimal benefit to the user.


Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 2016

Implicit Attitudes Toward Robots

Tracy Sanders; Kathryn E. Schafer; William Volante; Ashley Reardon; Peter A. Hancock

This study employs a measure of implicit attitudes to better understand attitudes toward, and trust in, robots. The work builds on an existing implicit measure, the Implicit Association Test, to compare attitudes toward humans with attitudes toward robots. Results are compared with explicit self-report measures, and future directions for this work are discussed.
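
For context, Implicit Association Tests are conventionally scored with a "D score" (Greenwald et al., 2003): the difference in mean response latency between the two pairing blocks, scaled by the pooled standard deviation. The sketch below is a simplified version with hypothetical reaction times; it omits the full algorithm's trial screening and error penalties:

```python
# Simplified IAT D score: latency difference between incongruent and
# congruent pairing blocks, divided by the pooled standard deviation.
# Reaction times below are hypothetical, not study data.
import statistics

def iat_d_score(congruent_ms: list[float], incongruent_ms: list[float]) -> float:
    pooled_sd = statistics.stdev(congruent_ms + incongruent_ms)
    return (statistics.mean(incongruent_ms) - statistics.mean(congruent_ms)) / pooled_sd

# A positive D suggests a stronger implicit association for the congruent pairing.
print(round(iat_d_score([650, 700, 620, 680], [820, 790, 860, 840]), 2))
```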


Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 2016

Specifying Influences that Mediate Trust in Human-Robot Interaction

William Volante; Tracy Sanders; D. Dodge; Valarie Yerdon; Peter A. Hancock

In this work we investigate the effects of robot appearance and reliability on users' trust through an experiment in which participants reacted to three different robot forms that behaved either reliably or unreliably across a series of trials. A final trial evaluated user choice by allowing participants to select their preferred robot and complete an additional trial with it. Results from this pilot experiment indicated differences in trust based on the reliability of the robot, as well as on whether the participant chose the robot they interacted with.


Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 2015

Trust in Multimodal Sensory Cueing Automation in a Target Detection Task

Timothy L. White; Julia L. Wright; Joe Mercado; Tracy Sanders; Peter A. Hancock

The goal of our work was twofold. The first was to examine the effects of dispositional trust on performance in a target detection task. The second was to examine the effects of performance on implicit and explicit trust in cueing modalities in that same target detection task. Fifty-four participants detected targets using four cueing modalities (non-cued, auditory cue alone, tactile cue alone, and combined auditory and tactile cueing). Participants monitored three screens for targets and responded as rapidly and accurately as possible when the presence of a target was perceived. Dispositional trust proved to be a significant predictor of performance for the auditory modality. Performance was a significant predictor of explicit trust in the tactile and combined conditions. Overall, participants reported preferring the tactile and combined cueing modalities for this target detection task. These findings suggest that measures of explicit trust should be employed early in system design to enhance eventual trust and system usability.
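
As an illustration of the "significant predictor" analyses reported above, and not the study's actual data or model, a simple linear regression of detection performance on dispositional trust could be sketched as follows:

```python
# Illustrative sketch: regressing a performance measure (hit rate) on
# dispositional trust scores. All numbers are placeholders, not study data.
from scipy.stats import linregress

dispositional_trust = [3.1, 4.2, 2.8, 3.9, 4.5, 3.3, 2.5, 4.0]  # hypothetical scale scores
hit_rate            = [0.71, 0.83, 0.64, 0.79, 0.88, 0.72, 0.60, 0.81]

result = linregress(dispositional_trust, hit_rate)
print(f"slope = {result.slope:.3f}, R^2 = {result.rvalue ** 2:.2f}, p = {result.pvalue:.4f}")
```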


Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 2015

The Influence of Robot Form on Trust

Tracy Sanders; William Volante; Kimberly Stowers; Theresa Kessler; Katharina Gabracht; Brandon Harpold; Paul Oppold; Peter A. Hancock

Assistive robotics is a rapidly progressing field of study that contains facets yet to be fully understood. Here we examine the effect of robot form on the level of trust users place in a robot. Form-based trust was evaluated by comparing participant trust ratings across four robot designs: Lego Mindstorms, Keepon, Sphero, and Ozzy. Trust was measured both at first view of the robot (pre) and after interacting with it (post). Sphero and Lego received consistently higher trust ratings than Keepon and Ozzy, and the pre/post measures reveal a difference between initial trust based on form and subsequent trust based on observed robot function.
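
A pre/post design of this kind is commonly analyzed with a paired-samples t-test. The sketch below illustrates that comparison with hypothetical ratings; it is not the study's analysis or data:

```python
# Illustrative sketch: comparing trust ratings taken before (form only)
# and after (observed function) interaction, per participant.
from scipy.stats import ttest_rel

trust_pre  = [60, 55, 72, 48, 65, 58]   # hypothetical ratings on first view
trust_post = [68, 50, 80, 43, 74, 66]   # hypothetical ratings after observing function

t, p = ttest_rel(trust_pre, trust_post)
print(f"t = {t:.2f}, p = {p:.3f}")  # a significant t indicates a pre/post shift
```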


Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 2013

Identifying the Role of Attributions in Human Perceptions of Robots

Julia L. Wright; Tracy Sanders; Peter A. Hancock

To date, research into the potential impact of fundamental design attributes, such as material and color, on human-robot trust has been limited. This study addresses how a human's perception of basic design features (i.e., the robot's physical appearance) may influence their attribution of anthropomorphic characteristics to the robot. Two experiments investigated the correlations between the color, texture, and material of a robot body and the perception of the robot's internal characteristics (i.e., intelligence, friendliness, robustness, reliability, personality, and integrity), as well as its appropriate uses and tasks. Experiment 1 found correlations between participants' basic attributions and fundamental design elements of the robot images. Experiment 2 evaluated combinations of the significant correlations from Experiment 1 to determine which of the competing characteristics would drive participants' attributions of the robots' internal characteristics. These correlations have implications for robot design and can inform design heuristics and guidelines that address human biases arising from robot appearance alone.
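
As a sketch of the correlational approach described above, and not the study's coding scheme or data, one might tabulate coded design features alongside participant ratings and inspect the correlation matrix:

```python
# Illustrative sketch: correlating coded design features (e.g., color
# warmth, metallic material) with rated internal characteristics.
# Feature names and values are hypothetical placeholders.
import pandas as pd

df = pd.DataFrame({
    "color_warmth": [0.2, 0.8, 0.5, 0.9, 0.1, 0.6],  # coded design feature
    "metallic":     [1, 0, 1, 0, 1, 0],              # coded design feature
    "intelligence": [4.1, 3.2, 4.5, 2.9, 4.0, 3.5],  # participant ratings
    "friendliness": [2.8, 4.4, 3.1, 4.7, 2.5, 3.9],  # participant ratings
})
print(df.corr().round(2))
```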

Collaboration


Dive into Tracy Sanders's collaborations.

Top Co-Authors

Peter A. Hancock, University of Central Florida
William Volante, University of Central Florida
Theresa Kessler, University of Central Florida
Ashley Reardon, University of Central Florida
Deborah R. Billings, University of Central Florida
K. Elizabeth Schafer, University of Central Florida
Kristin E. Schaefer, University of Central Florida
Jeffrey K. Adams, University of Central Florida
Timothy L. White, University of Central Florida
D. Dodge, University of Central Florida