Publication


Featured research published by Alan R. Wagner.


Proceedings of the IEEE | 2012

Moral Decision Making in Autonomous Systems: Enforcement, Moral Emotions, Dignity, Trust, and Deception

Ronald C. Arkin; Patrick D. Ulam; Alan R. Wagner

As humans are being progressively pushed further downstream in the decision-making process of autonomous systems, the need arises to ensure that moral standards, however defined, are adhered to by these robotic artifacts. While meaningful inroads have been made in this area regarding the use of ethical lethal military robots, including work by our laboratory, these needs transcend the warfighting domain and are pervasive, extending to eldercare, robot nannies, and other forms of service and entertainment robotic platforms. This paper presents an overview of the spectrum and specter of ethical issues raised by the advent of these systems, and various technical results obtained to date by our research group, geared towards managing ethical behavior in autonomous robots in relation to humanity. This includes: 1) the use of an ethical governor capable of restricting robotic behavior to predefined social norms; 2) an ethical adaptor which draws upon the moral emotions to allow a system to constructively and proactively modify its behavior based on the consequences of its actions; 3) the development of models of robotic trust in humans and its dual, deception, drawing on psychological models of interdependence theory; and 4) an approach toward the maintenance of dignity in human-robot relationships.


International Journal of Social Robotics | 2011

Acting Deceptively: Providing Robots with the Capacity for Deception

Alan R. Wagner; Ronald C. Arkin

Deception is utilized by a variety of intelligent systems ranging from insects to human beings. It has been argued that the use of deception is an indicator of theory of mind (Cheney and Seyfarth in Baboon Metaphysics: The Evolution of a Social Mind, 2008) and of social intelligence (Hauser in Proc. Natl. Acad. Sci. 89:12137–12139, 1992). We use interdependence theory and game theory to explore the phenomenon of deception from the perspective of robotics, and to develop an algorithm which allows an artificially intelligent system to determine if deception is warranted in a social situation. Using techniques introduced in Wagner (Proceedings of the 4th International Conference on Human-Robot Interaction (HRI 2009), 2009), we present an algorithm that bases a robot's deceptive action selection on its model of the individual it is attempting to deceive. We also discuss simulation and robot experiments that use these algorithms to investigate the nature of deception itself.
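The action-selection idea described in this abstract can be sketched in a few lines: decide that deception is warranted only when the outcome matrix for the interaction shows genuine conflict and a potential gain for the would-be deceiver. The actions, rewards, and decision criteria below are illustrative assumptions for a minimal sketch, not the authors' published algorithm.

```python
# Sketch: deciding whether deception is warranted from an outcome matrix
# mapping joint actions to (reward_self, reward_partner). All names and
# values here are invented for illustration.

def best_joint_outcome(matrix, who):
    """Return the joint action (a_self, a_partner) maximizing `who`'s reward."""
    return max(matrix, key=lambda joint: matrix[joint][who])

def deception_warranted(matrix):
    """Flag deception only when the situation involves conflict (the two
    agents prefer different joint outcomes) and the deceiver stands to
    gain by steering the partner toward its own preferred outcome."""
    own_best = best_joint_outcome(matrix, 0)
    partner_best = best_joint_outcome(matrix, 1)
    conflict = own_best != partner_best
    gain = matrix[own_best][0] - matrix[partner_best][0]
    return conflict and gain > 0

# A pure-conflict hide-and-seek situation: the hider wins exactly when
# the seeker searches the wrong spot.
hide_and_seek = {
    ("hide_left", "search_left"): (0, 5),
    ("hide_left", "search_right"): (5, 0),
    ("hide_right", "search_left"): (5, 0),
    ("hide_right", "search_right"): (0, 5),
}
print(deception_warranted(hide_and_seek))  # True
```

In a fully cooperative situation (both agents prefer the same joint outcome) the same test returns False, so the sketch captures the intuition that deception is pointless between aligned partners.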


Human-Robot Interaction | 2016

Overtrust of Robots in Emergency Evacuation Scenarios

Paul Robinette; Wenchen Li; Robert L. Allen; Ayanna M. Howard; Alan R. Wagner

Robots have the potential to save lives in emergency scenarios, but could have an equally disastrous effect if participants overtrust them. To explore this concept, we performed an experiment where a participant interacts with a robot in a non-emergency task to experience its behavior and then chooses whether or not to follow the robot's instructions in an emergency. Artificial smoke and fire alarms were used to add a sense of urgency. To our surprise, all 26 participants followed the robot in the emergency, despite half observing the same robot perform poorly in a navigation guidance task just minutes before. We performed additional exploratory studies investigating different failure modes. Even when the robot pointed to a dark room with no discernible exit, the majority of people did not choose to safely exit the way they entered.


International Conference on Robotics and Automation | 2004

Multi-robot communication-sensitive reconnaissance

Alan R. Wagner; Ronald C. Arkin

This paper presents a method for multi-robot communication-sensitive reconnaissance. This approach utilizes collections of precompiled vector fields in parallel to coordinate a team of robots in a manner that is responsive to communication failures. Collections of vector fields are organized at the task level for reusability and generality. Different team sizes, scenarios, and task management strategies are investigated. Results indicate an acceptable reduction in communication attenuation when compared to other related methods of navigation. Online management of tasks and potential scalability are discussed.
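The parallel combination of vector fields can be illustrated with a toy example: a goal-attraction field summed with a communication-maintenance field whose weight grows as link quality drops, bending the robot's path back toward a relay when the signal degrades. The field definitions, weights, and signal model are all assumptions for illustration, not the paper's implementation.

```python
import math

def unit(vx, vy):
    """Normalize a 2-D vector; zero vectors stay zero."""
    n = math.hypot(vx, vy)
    return (vx / n, vy / n) if n > 1e-9 else (0.0, 0.0)

def goal_field(pos, goal):
    """Constant-magnitude attraction toward the reconnaissance goal."""
    return unit(goal[0] - pos[0], goal[1] - pos[1])

def comm_field(pos, relay, signal):
    """Pull back toward a relay robot, growing as signal quality drops.
    `signal` is assumed to lie in [0, 1], with 1 meaning a perfect link."""
    ux, uy = unit(relay[0] - pos[0], relay[1] - pos[1])
    w = max(0.0, 1.0 - signal)
    return (w * ux, w * uy)

def combined_heading(pos, goal, relay, signal, w_goal=1.0, w_comm=2.0):
    """Sum the two fields in parallel and return the resulting heading."""
    gx, gy = goal_field(pos, goal)
    cx, cy = comm_field(pos, relay, signal)
    return unit(w_goal * gx + w_comm * cx, w_goal * gy + w_comm * cy)

# With a strong link the robot heads straight for the goal; as the link
# degrades, the communication field dominates and turns it back.
print(combined_heading((0, 0), (10, 0), (-10, 0), signal=1.0))  # (1.0, 0.0)
print(combined_heading((0, 0), (10, 0), (-10, 0), signal=0.2))
```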


International Conference on Robotics and Automation | 2007

Integrated Mission Specification and Task Allocation for Robot Teams - Design and Implementation

Patrick D. Ulam; Yoichiro Endo; Alan R. Wagner; Ronald C. Arkin

As the capabilities, range of missions, and the size of robot teams increase, the ability for a human operator to account for all the factors in these complex scenarios can become exceedingly difficult. Our previous research has studied the use of case-based reasoning (CBR) tools to assist a user in the generation of multi-robot missions. These tools, however, typically assume that the robots available for the mission are of the same type (i.e., homogeneous). We loosen this assumption through the integration of contract-net protocol (CNP) based task allocation coupled with a CBR-based mission specification wizard. Two alternative designs are explored for combining case-based mission specification and CNP-based team allocation as well as the tradeoffs that result from the selection of one of these approaches over the other.


Robot and Human Interactive Communication | 2014

Assessment of robot guidance modalities conveying instructions to humans in emergency situations

Paul Robinette; Alan R. Wagner; Ayanna M. Howard

Motivated by the desire to mitigate human casualties in emergency situations, this paper explores various guidance modalities provided by a robotic platform for instructing humans to safely evacuate during an emergency. We focus on physical modifications of the robot, which enable visual guidance instructions, since auditory guidance instructions pose potential problems in a noisy emergency environment. Robotic platforms can convey visual guidance instructions through motion, static signs, dynamic signs, and gestures using single or multiple arms. In this paper, we discuss the different guidance modalities instantiated by different physical platform constructs and assess the abilities of the platforms to convey information related to evacuation. Human-robot interaction studies with 192 participants show that participants were able to understand the information conveyed by the various robotic constructs in 75.8% of cases when using dynamic signs with multi-arm gestures, as opposed to 18.0% when using static signs for visual guidance. Of interest to note is that dynamic signs had equivalent performance to single-arm gestures overall but drastically different performances at the two distance levels tested. Based on these studies, we conclude that dynamic signs are important for information conveyance when the robot is in close proximity to the human but multi-arm gestures are necessary when information must be conveyed across a greater distance.


Human-Robot Interaction | 2009

Creating and using matrix representations of social interaction

Alan R. Wagner

This paper explores the use of an outcome matrix as a computational representation of social interaction suitable for implementation on a robot. An outcome matrix expresses the reward afforded to each interacting individual with respect to pairs of potential behaviors. We detail the use of the outcome matrix as a representation of interaction in social psychology and game theory, discuss the need for modeling the robot's interactive partner, and contribute an algorithm for creating outcome matrices from perceptual information. Experimental results explore the use of the algorithm with different types of partners and in different environments.
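A minimal rendering of the outcome-matrix representation described here: each pair of actions (one per interacting individual) maps to the outcome each individual receives, and a simple probabilistic partner model lets the robot pick its best action. The actions, rewards, and partner model are all assumed for illustration and are not taken from the paper.

```python
# Sketch of an outcome matrix as a data structure: rewards indexed by
# joint action pairs, queried against a model of the partner.

class OutcomeMatrix:
    def __init__(self, self_actions, partner_actions):
        self.self_actions = self_actions
        self.partner_actions = partner_actions
        # (self_action, partner_action) -> (reward_self, reward_partner)
        self.outcomes = {}

    def set_outcome(self, a_self, a_partner, r_self, r_partner):
        self.outcomes[(a_self, a_partner)] = (r_self, r_partner)

    def expected_reward(self, a_self, partner_model):
        """Own expected reward for `a_self`, given a probability
        distribution over the partner's actions (the partner model)."""
        return sum(p * self.outcomes[(a_self, a_p)][0]
                   for a_p, p in partner_model.items())

    def best_action(self, partner_model):
        return max(self.self_actions,
                   key=lambda a: self.expected_reward(a, partner_model))

m = OutcomeMatrix(["greet", "ignore"], ["greet", "ignore"])
m.set_outcome("greet", "greet", 3, 3)
m.set_outcome("greet", "ignore", 0, 1)
m.set_outcome("ignore", "greet", 1, 0)
m.set_outcome("ignore", "ignore", 1, 1)
print(m.best_action({"greet": 0.9, "ignore": 0.1}))  # greet
```

A partner model that mostly expects reciprocation makes greeting worthwhile; shift the probability mass toward "ignore" and the best action flips, which is the point of conditioning action selection on a partner model.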


Computational Intelligence in Robotics and Automation | 2009

Robot deception: Recognizing when a robot should deceive

Alan R. Wagner; Ronald C. Arkin

This article explores the possibility of developing robot control software capable of discerning when and if a robot should deceive. Exploration of this problem is critical for developing robots with deception capabilities and may lend valuable insight into the phenomenon of deception itself. In this paper we explore deception from an interdependence/game-theoretic perspective. Further, we develop and experimentally investigate an algorithm capable of indicating whether or not a particular social situation warrants deception on the part of the robot. Our qualitative and quantitative results provide evidence that, indeed, our algorithm recognizes situations which justify deception and that a robot capable of discerning these situations is better suited to act than one that does not.


Robot and Human Interactive Communication | 2011

Recognizing situations that demand trust

Alan R. Wagner; Ronald C. Arkin

This article presents an investigation into the theoretical and computational aspects of trust as applied to robots. It begins with an in-depth review of the trust literature in search of a definition for trust suitable for implementation on a robot. Next we apply the definition to our interdependence framework for social action selection and develop an algorithm for determining if an interaction demands trust on the part of the robot. Finally, we apply our algorithm to several canonical social situations and review the resulting indications of whether or not the situation demands trust.
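One way to make "a situation demands trust" concrete, in the spirit of this abstract: flag an interaction when the robot's payoff for an action swings widely depending on the partner's choice, so the robot is at risk and dependent on the partner. The matrix values, threshold, and criterion below are illustrative assumptions, not the authors' algorithm.

```python
# Sketch: testing whether choosing an action places the robot in a
# trust situation. `matrix` maps (self_action, partner_action) to the
# robot's own reward; all values here are invented for illustration.

def demands_trust(matrix, action, risk_threshold=1.0):
    """An action involves trust when its payoff depends substantially on
    the partner: the spread between the best and worst outcome the
    partner can cause exceeds `risk_threshold`."""
    rewards = [r for (a_self, _), r in matrix.items() if a_self == action]
    return max(rewards) - min(rewards) > risk_threshold

# Following a partner through smoke: the outcome hinges on whether the
# partner guides well, while self-evacuating yields the same payoff
# regardless of the partner.
follow = {
    ("follow", "guides_well"): 5,
    ("follow", "guides_poorly"): -5,
    ("self_evacuate", "guides_well"): 1,
    ("self_evacuate", "guides_poorly"): 1,
}
print(demands_trust(follow, "follow"))         # True
print(demands_trust(follow, "self_evacuate"))  # False
```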


IEEE Transactions on Human-Machine Systems | 2017

Effect of Robot Performance on Human–Robot Trust in Time-Critical Situations

Paul Robinette; Ayanna M. Howard; Alan R. Wagner

Robots have the potential to save lives in high-risk situations, such as emergency evacuations. To realize this potential, we must understand how factors such as the robot's performance, the riskiness of the situation, and the evacuee's motivation influence the decision to follow a robot. In this paper, we developed a set of experiments that tasked individuals with navigating a virtual maze using different methods to simulate an evacuation. Participants chose whether or not to use the robot for guidance in each of two separate navigation rounds. The robot performed poorly in two of the three conditions. The participants' decision to use the robot and self-reported trust in the robot served as dependent measures. A 53% drop in self-reported trust was found when the robot performed poorly. Self-reports of trust were strongly correlated with the decision to use the robot for guidance.

Collaboration


Dive into Alan R. Wagner's collaborations.

Top Co-Authors

Ronald C. Arkin (Georgia Institute of Technology)
Ayanna M. Howard (Georgia Institute of Technology)
Paul Robinette (Georgia Institute of Technology)
Patrick D. Ulam (Georgia Institute of Technology)
Yoichiro Endo (Georgia Institute of Technology)
Jason Borenstein (Georgia Institute of Technology)
Benjamin R. Fransen (United States Naval Research Laboratory)
Charlene K. Stokes (Air Force Research Laboratory)