
Publication


Featured research published by Paul Robinette.


human robot interaction | 2016

Overtrust of Robots in Emergency Evacuation Scenarios

Paul Robinette; Wenchen Li; Robert L. Allen; Ayanna M. Howard; Alan R. Wagner

Robots have the potential to save lives in emergency scenarios, but could have an equally disastrous effect if participants overtrust them. To explore this concept, we performed an experiment where a participant interacts with a robot in a non-emergency task to experience its behavior and then chooses whether or not to follow the robot's instructions in an emergency. Artificial smoke and fire alarms were used to add a sense of urgency. To our surprise, all 26 participants followed the robot in the emergency, despite half observing the same robot perform poorly in a navigation guidance task just minutes before. We performed additional exploratory studies investigating different failure modes. Even when the robot pointed to a dark room with no discernible exit, the majority of people did not choose to safely exit the way they entered.


international conference on robotics and automation | 2012

Information propagation applied to robot-assisted evacuation

Paul Robinette; Patricio A. Vela; Ayanna M. Howard

Inspired by large fatality rates due to fires in crowded areas and the increasing presence of robots in dangerous emergency situations, we have implemented a model of information propagation among evacuees. Information about the locations of exits, along with each individual's relative confidence in the location of the exit, was disseminated through a simulated crowd of people during an evacuation modeled after The Station Nightclub fire of 2003. True believers were added to this system as individuals who refused to accept exit information from others, instead preferring to head to their own exit. This system was then tested to find what percentage of true believers most likely existed in the actual fire. Using this true believer percentage, robots were added to the environment to guide evacuees to the nearest exit. The number of people who believed a robot's instructions was varied to find what percentage of people need to trust these robots in order to exploit information propagation and thus increase survivability. As a lower bound, we have found that 30% of the evacuees should believe a robot's instructions to significantly increase survival rates.
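The propagation mechanism described above can be illustrated with a toy agent-based sketch. This is a minimal illustration under assumed dynamics (confidence-weighted adoption on random pairwise encounters), not the model actually used in the paper; all names and parameters here are hypothetical.

```python
import random

def simulate(n_agents=100, n_believers=30, n_steps=50, meet_prob=0.1, seed=0):
    """Toy information-propagation model among evacuees.

    Each agent holds an exit choice and a confidence in it.  When two
    agents meet, the less confident one adopts the other's exit, except
    'true believers', who never accept exit information from others.
    """
    rng = random.Random(seed)
    agents = [
        {"exit": rng.choice(["A", "B"]),
         "conf": rng.random(),
         "stubborn": i < n_believers}   # first n_believers are true believers
        for i in range(n_agents)
    ]
    for _ in range(n_steps):
        for i in range(n_agents):
            if rng.random() < meet_prob:     # random pairwise encounter
                other = agents[rng.randrange(n_agents)]
                me = agents[i]
                # Confidence-weighted adoption; true believers never update.
                if not me["stubborn"] and other["conf"] > me["conf"]:
                    me["exit"], me["conf"] = other["exit"], other["conf"]
    # Tally which exit each agent is headed to after mixing.
    counts = {"A": 0, "B": 0}
    for a in agents:
        counts[a["exit"]] += 1
    return counts

print(simulate())
```

Varying `n_believers` in a sketch like this shows how a stubborn minority limits how far high-confidence exit information (e.g., from a robot) can spread.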


robot and human interactive communication | 2011

Incorporating a model of human panic behavior for robotic-based emergency evacuation

Paul Robinette; Ayanna M. Howard

Evacuating a building in an emergency situation can be very confusing and dangerous. Exit signs are static and thus have no ability to convey information about congestion or danger between the sign and the actual exit door. Emergency personnel may arrive too late to assist in an evacuation. Robots, however, can be stored inside buildings and can be used to guide evacuees to the best available exit. To enable this process, evacuation robots must have an understanding of how people react in emergency situations. By incorporating a model of human panic behavior, these robots can effectively guide crowds of people to zones of safety. In this paper, we discuss an initial design of these robots and their behaviors. Preliminary simulation results show that a significantly larger proportion of people are evacuated with robot assistance than without.


robot and human interactive communication | 2014

Assessment of robot guidance modalities conveying instructions to humans in emergency situations

Paul Robinette; Alan R. Wagner; Ayanna M. Howard

Motivated by the desire to mitigate human casualties in emergency situations, this paper explores various guidance modalities provided by a robotic platform for instructing humans to safely evacuate during an emergency. We focus on physical modifications of the robot, which enable visual guidance instructions, since auditory guidance instructions pose potential problems in a noisy emergency environment. Robotic platforms can convey visual guidance instructions through motion, static signs, dynamic signs, and gestures using single or multiple arms. In this paper, we discuss the different guidance modalities instantiated by different physical platform constructs and assess the abilities of the platforms to convey information related to evacuation. Human-robot interaction studies with 192 participants show that participants were able to understand the information conveyed by the various robotic constructs in 75.8% of cases when using dynamic signs with multi-arm gestures, as opposed to 18.0% when using static signs for visual guidance. Notably, dynamic signs had performance equivalent to single-arm gestures overall but drastically different performance at the two distance levels tested. Based on these studies, we conclude that dynamic signs are important for information conveyance when the robot is in close proximity to the human, but multi-arm gestures are necessary when information must be conveyed across a greater distance.


IEEE Transactions on Human-Machine Systems | 2017

Effect of Robot Performance on Human–Robot Trust in Time-Critical Situations

Paul Robinette; Ayanna M. Howard; Alan R. Wagner

Robots have the potential to save lives in high-risk situations, such as emergency evacuations. To realize this potential, we must understand how factors such as the robot's performance, the riskiness of the situation, and the evacuee's motivation influence his or her decision to follow a robot. In this paper, we developed a set of experiments that tasked individuals with navigating a virtual maze using different methods to simulate an evacuation. Participants chose whether or not to use the robot for guidance in each of two separate navigation rounds. The robot performed poorly in two of the three conditions. The participants' decision to use the robot and self-reported trust in the robot served as dependent measures. A 53% drop in self-reported trust was found when the robot performed poorly. Self-reports of trust were strongly correlated with the decision to use the robot for guidance (φ(90) = +0.745). We conclude that a mistake made by a robot will cause a person to have a significantly lower level of trust in it in later interactions.


international conference on social robotics | 2015

Timing Is Key for Robot Trust Repair

Paul Robinette; Ayanna M. Howard; Alan R. Wagner

Even the best robots will eventually make a mistake while performing their tasks. In our past experiments, we have found that even one mistake can cause a large loss in trust by human users. In this paper, we evaluate the effects of a robot apologizing for its mistake, promising to do better in the future, and providing additional reasons to trust it in a simulated office evacuation conducted in a virtual environment. In tests with 319 participants, we find that each of these techniques can be successful at repairing trust if they are used when the robot asks the human to trust it again, but are not successful when used immediately after the mistake. The implications of these results are discussed.


international symposium on safety, security, and rescue robotics | 2012

Trust in emergency evacuation robots

Paul Robinette; Ayanna M. Howard

Would you trust a robot to lead you to safety in an emergency? What design would best attract your attention in a smoke-filled environment? How should the robot behave to best increase your trust? To answer these questions, we have created a three-dimensional environment to simulate an emergency and determine to what degree an individual will follow a robot to a variety of exits. Survey feedback and quantitative scenario results were gathered on two different robot designs. Fifteen volunteers completed a total of seven scenarios each: one without a robot and one with each robot pointing to each of three exits in the environment. Robots were followed by each volunteer in at least two scenarios. One-third of all volunteers followed the robot in each robot-guided scenario.


Archive | 2016

Investigating Human-Robot Trust in Emergency Scenarios: Methodological Lessons Learned

Paul Robinette; Alan R. Wagner; Ayanna M. Howard

The word "trust" has many definitions that vary based on context and culture, so asking participants if they trust a robot is not as straightforward as one might think. The perceived risk involved in a scenario and the precise wording of a question can bias the outcome of a study in ways that the experimenter did not intend. This chapter presents the lessons we have learned about trust while conducting human-robot experiments with 770 human subjects. We discuss our work developing narratives that describe trust situations as well as interactive human-robot simulations. These experimental paradigms have guided our research exploring the meaning of trust, trust loss, and trust repair. By using crowdsourcing to locate and manage experiment participants, considerable diversity of opinion is found; there are, however, several considerations that must be included. Conclusions drawn from these experiments demonstrate the types of biases that participants are prone to as well as techniques for mitigating these biases.


Archive | 2017

Conceptualizing Overtrust in Robots: Why Do People Trust a Robot That Previously Failed?

Paul Robinette; Ayanna M. Howard; Alan R. Wagner


human robot interaction | 2018

Preliminary Interactions of Human-Robot Trust, Cognitive Load, and Robot Intelligence Levels in a Competitive Game

Michael Novitzky; Paul Robinette; Michael R. Benjamin; Danielle K. Gleason; Caileigh Fitzgerald; Henrik Schmidt
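The φ(90) = +0.745 correlation reported for trust and guidance decisions is a phi coefficient, the Pearson correlation between two binary variables. A minimal sketch of how it is computed from a 2x2 contingency table follows; the example counts are hypothetical and for illustration only, not the study's data.

```python
import math

def phi_coefficient(a, b, c, d):
    """Phi coefficient for a 2x2 contingency table with cells:

        a = trusted & followed        b = trusted & did not follow
        c = distrusted & followed     d = distrusted & did not follow
    """
    numerator = a * d - b * c
    denominator = math.sqrt((a + b) * (c + d) * (a + c) * (b + d))
    return numerator / denominator if denominator else 0.0

# Hypothetical counts chosen only to illustrate a strong positive association:
print(phi_coefficient(40, 5, 6, 41))
```

A phi of +1 means self-reported trust and the follow decision always agree; 0 means they are unrelated.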

Collaboration


Dive into Paul Robinette's collaborations.

Top Co-Authors

Ayanna M. Howard (Georgia Institute of Technology)
Alan R. Wagner (Georgia Tech Research Institute)
Caileigh Fitzgerald (Massachusetts Institute of Technology)
Danielle K. Gleason (Massachusetts Institute of Technology)
Henrik Schmidt (Massachusetts Institute of Technology)
Michael Novitzky (Massachusetts Institute of Technology)
Michael R. Benjamin (Massachusetts Institute of Technology)
Patricio A. Vela (Georgia Institute of Technology)
Robert L. Allen (Georgia Tech Research Institute)
Sergio García-Vergara (Georgia Institute of Technology)