Publication


Featured research published by Gordon Briggs.


Intelligent Robots and Systems | 2015

Planning for serendipity

Tathagata Chakraborti; Gordon Briggs; Kartik Talamadupula; Yu Zhang; Matthias Scheutz; David E. Smith; Subbarao Kambhampati

Recently there has been a lot of focus on human-robot co-habitation issues that are often orthogonal to many aspects of human-robot teaming, e.g., producing socially acceptable robot behaviors and de-conflicting the plans of robots and humans in shared environments. However, an interesting offshoot of these settings that has largely been overlooked is the problem of planning for serendipity, i.e., planning for stigmergic collaboration without explicit commitments between agents in co-habitation. In this paper we formalize this notion of planning for serendipity for the first time and provide an Integer Programming-based solution for this problem. Further, we illustrate the different modes of this planning technique on a typical Urban Search and Rescue scenario and show a real-life implementation of the ideas on the Nao robot interacting with a human colleague.
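
To make the Integer Programming idea concrete, the sketch below encodes a toy version of the choice the abstract describes: the robot selects at most one unsolicited helpful action, so the human's plan never depends on it. The action names, benefit and cost numbers, and the PuLP encoding are illustrative assumptions, not the paper's actual formulation.

```python
# A toy IP in the spirit of "planning for serendipity": pick unsolicited help
# that maximizes net benefit to the human without requiring any commitment.
# Action names, numbers, and the single constraint are illustrative only.
from pulp import LpBinary, LpMaximize, LpProblem, LpVariable, lpSum, value

actions = ["fetch_medkit", "clear_doorway", "idle"]
benefit = {"fetch_medkit": 5, "clear_doorway": 3, "idle": 0}  # savings to the human's plan
cost    = {"fetch_medkit": 2, "clear_doorway": 4, "idle": 0}  # cost to the robot

prob = LpProblem("serendipity", LpMaximize)
x = {a: LpVariable(f"x_{a}", cat=LpBinary) for a in actions}

# Objective: net expected benefit of the robot's unsolicited assistance.
prob += lpSum((benefit[a] - cost[a]) * x[a] for a in actions)

# The robot attempts at most one helpful action, so the human's own plan
# remains executable even if the help never materializes (no commitment).
prob += lpSum(x[a] for a in actions) <= 1

prob.solve()
print([a for a in actions if value(x[a]) == 1])  # -> ['fetch_medkit']
```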


International Journal of Social Robotics | 2014

How Robots Can Affect Human Behavior: Investigating the Effects of Robotic Displays of Protest and Distress

Gordon Briggs; Matthias Scheutz

The rise of military drones and other robots deployed in ethically sensitive contexts has fueled interest in developing autonomous agents that behave ethically. The ability of autonomous agents to reason independently about situational ethics will inevitably lead to confrontations between robots and human operators regarding the morality of issued commands. Ideally, a robot would be able to successfully convince a human operator to abandon a potentially unethical course of action. To investigate this issue, we conducted an experiment to measure how successfully a humanoid robot could dissuade a person from performing a task using verbal refusals and affective displays that conveyed distress. The results demonstrate a significant behavioral effect on task completion as well as significant effects on subjective metrics, such as how comfortable subjects felt ordering the robot to complete the task. We discuss the potential relationship between the level of perceived agency of the robot and the sensitivity of subjects to robotic confrontation, as well as the possible ethical pitfalls of utilizing robotic displays of affect to shape human behavior.


Affective Computing and Intelligent Interaction | 2013

Some Correlates of Agency Ascription and Emotional Value and Their Effects on Decision-Making

Megan Strait; Gordon Briggs; Matthias Scheutz

The prefrontal cortex (PFC) has been investigated extensively with functional magnetic resonance imaging (fMRI) and identified as a neural correlate of emotion regulation and decision-making, particularly in the context of moral utilitarian dilemmas. However, there are two limitations of previous work: (1) fMRI requires strict constraints on the physical experimental environment, and (2) experimental manipulations have yet to consider the role of agency on the dilemma outcome and the corresponding neural activity. In this paper, we extend previous work by first evaluating an alternative neuroimaging technique, functional near-infrared spectroscopy (NIRS), for observing decision-making processes in a less-constrained environment. We then examine the role of agency in deciding emotional (moral) and non-emotional dilemmas through a two-part, 20-subject preliminary investigation. Our findings are twofold: they suggest that (1) NIRS is a potential alternative to fMRI in this decision-making context and (2) agency shows some influence on prefrontal neural activity, making NIRS a promising method for objective evaluation of agency and emotional value in human-agent interactions.


Human-Robot Interaction | 2017

Enabling robots to understand indirect speech acts in task-based interactions

Gordon Briggs; Tom Williams; Matthias Scheutz

An important open problem for enabling truly taskable robots is the lack of task-general natural language mechanisms within cognitive robot architectures that enable robots to understand typical forms of human directives and generate appropriate responses. In this paper, we first provide experimental evidence that humans tend to phrase their directives to robots indirectly, especially in socially conventionalized contexts. We then introduce pragmatic and dialogue-based mechanisms to infer intended meanings from such indirect speech acts and demonstrate that these mechanisms can handle all indirect speech acts found in our experiment as well as other common forms of requests.
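
As a rough illustration of how conventionalized indirect requests can be mapped to intended meanings, consider the toy rule matcher below. The regex patterns and the interpret function are invented for illustration; the paper's mechanisms are pragmatic and dialogue-based, not bare pattern matching.

```python
# Toy pragmatic rules mapping conventionalized surface forms to directives.
# Patterns and coverage are illustrative assumptions, not the paper's system.
import re

ISA_RULES = [
    (re.compile(r"^(?:could|can|would|will) you (.+?)\??$", re.I), "request"),
    (re.compile(r"^i need (.+?)\.?$", re.I), "request"),
    (re.compile(r"^(.+?) would be great\.?$", re.I), "request"),
]

def interpret(utterance: str):
    """Return (speech_act, content), falling back to a literal statement."""
    for pattern, act in ISA_RULES:
        m = pattern.match(utterance.strip())
        if m:
            return act, m.group(1)
    return "statement", utterance

print(interpret("Could you bring me the wrench?"))  # ('request', 'bring me the wrench')
print(interpret("A coffee would be great."))        # ('request', 'A coffee')
```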


Ibero-American Conference on Artificial Intelligence | 2014

A Dempster-Shafer theoretic approach to understanding indirect speech acts

Tom Williams; Rafael C. Núñez; Gordon Briggs; Matthias Scheutz; Kamal Premaratne; Manohar N. Murthi

Understanding Indirect Speech Acts (ISAs) is an integral function of human understanding of natural language. Recent attempts at understanding ISAs have used rule-based approaches to map utterances to deep semantics. While these approaches have been successful in handling a wide range of ISAs, they do not take into account the uncertainty associated with the utterance’s context, or the utterance itself. We present a new approach for understanding ISAs using the Dempster-Shafer theory of evidence and show how this approach increases the robustness of ISA inference by (1) accounting for uncertain implication rules and context, (2) fluidly adapting rules given new information, and (3) enabling better modeling of the beliefs of other agents.
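
For readers unfamiliar with the formalism, the sketch below implements Dempster's rule of combination on a two-hypothesis frame, showing how uncommitted mass captures the uncertainty of an implication rule or its context. The frame and the mass values are made-up examples, not the paper's model.

```python
# A minimal Dempster's-rule-of-combination sketch on a two-hypothesis frame.
# The frame and mass values are made-up examples, not the paper's model.
from itertools import product

def combine(m1: dict, m2: dict) -> dict:
    """Combine two mass functions keyed by frozenset focal elements."""
    fused, conflict = {}, 0.0
    for (a, w1), (b, w2) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            fused[inter] = fused.get(inter, 0.0) + w1 * w2
        else:
            conflict += w1 * w2  # mass assigned to the empty set
    return {k: v / (1.0 - conflict) for k, v in fused.items()}

THETA = frozenset({"request", "non_request"})  # the frame of discernment
m_utterance = {frozenset({"request"}): 0.7, THETA: 0.3}  # "could you...?" form
m_context   = {frozenset({"request"}): 0.5, THETA: 0.5}  # task-based setting

print(combine(m_utterance, m_context))
# {frozenset({'request'}): 0.85, frozenset({'request', 'non_request'}): 0.15}
```

Note how the mass left on the whole frame (THETA) expresses how strongly each evidence source remains uncommitted; this is what lets the approach model uncertain rules and context rather than forcing a hard decision.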


Robot and Human Interactive Communication | 2015

Towards morally sensitive action selection for autonomous social robots

Matthias Scheutz; Bertram F. Malle; Gordon Briggs

Autonomous social robots embedded in human societies have to be sensitive to human social interactions, and thus to the moral norms and principles guiding those interactions. Actions that violate norms can lead to the violator being blamed. Robots thus need to be able to anticipate possible norm violations and attempt to prevent them while executing actions. If norm violations cannot be prevented (e.g., in a moral dilemma situation in which every action leads to a norm violation), then the robot needs to be able to justify the action to address any potential blame. In this paper, we present a first attempt at an action execution system for social robots that can (a) detect (some) norm violations, (b) consult an ethical reasoner for guidance in moral dilemma situations, and (c) keep track of execution traces and any resulting states that might have violated norms, in order to produce justifications.
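
A schematic sketch of how the (a)-(c) pipeline might fit together is shown below; the norm predicates, the ethical-reasoner hook, and the toy state transition are placeholders of our own, not the authors' architecture.

```python
# A schematic sketch of steps (a)-(c) above; the norm predicates, the
# ethical-reasoner hook, and the toy state transition are placeholders.
from typing import Callable, Dict, List

State = Dict[str, object]
Norm = Callable[[str, State], bool]  # True if the action violates the norm

def apply_action(action: str, state: State) -> State:
    new = dict(state)
    new["last_action"] = action  # toy transition: just record the action
    return new

def execute_plan(plan: List[str], state: State, norms: List[Norm],
                 ethical_reasoner: Callable[[str, List[str]], str]):
    trace = []  # (c) execution trace kept for post-hoc justification
    for action in plan:
        violated = [n.__name__ for n in norms if n(action, state)]  # (a)
        if violated:
            action = ethical_reasoner(action, violated)  # (b) dilemma guidance
        state = apply_action(action, state)
        trace.append((action, violated))
    return trace

def no_contact(action: str, state: State) -> bool:
    return action == "push_bystander"

defer = lambda action, violated: "ask_operator"  # least-blame fallback
print(execute_plan(["goto_door", "push_bystander"], {}, [no_contact], defer))
# [('goto_door', []), ('ask_operator', ['no_contact'])]
```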


International Conference on Social Robotics | 2015

When Robots Object: Evidence for the Utility of Verbal, but not Necessarily Spoken Protest

Gordon Briggs; Ian McConnell; Matthias Scheutz

Future autonomous robots will likely encounter situations in which humans command them to perform tasks to which the robots ought to object. A previous study showed that robot appearance does not seem to affect human receptiveness to robot protest produced in response to inappropriate human commands. However, that work used robots that communicated their objections in spoken natural language, allowing for the possibility that the spoken delivery, rather than the content of the objection and its justification, was responsible for human reactions. In this paper, we set out to answer this open question by comparing spoken robot protest with written robot protest.


Robot and Human Interactive Communication | 2014

Actions speak louder than looks: Does robot appearance affect human reactions to robot protest and distress?

Gordon Briggs; Bryce Gessell; Matt Dunlap; Matthias Scheutz

People will eventually be exposed to robotic agents that may protest their commands for a wide range of reasons. We present an experiment designed to determine whether a robot's appearance has a significant effect on the amount of agency people ascribe to it and on its ability to dissuade a human operator from forcing it to carry out a specific command. Participants engaged in a human-robot interaction (HRI) with either a small humanoid or a non-humanoid robot that verbally protested a command. Initial results indicate that humanoid appearance does not significantly affect the behavior of human operators in the task. Agency ratings given to the robots were also not significantly affected.


Annual Meeting of the Special Interest Group on Discourse and Dialogue | 2014

Modeling Blame to Avoid Positive Face Threats in Natural Language Generation

Gordon Briggs; Matthias Scheutz

Prior approaches to politeness modulation in natural language generation (NLG) often focus on manipulating factors such as the directness of requests that pertain to preserving the autonomy of the addressee (negative face threats), but do not have a systematic way of understanding potential impoliteness from inadvertently critical or blame-oriented communications (positive face threats). In this paper, we discuss ongoing work to integrate a computational model of blame to prevent inappropriate threats to positive face.
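
As a rough sketch of the idea, the toy filter below scores candidate utterances for projected blame and falls back to a face-saving formulation when the addressee is not responsible. The blame proxy, the phrase list, and the threshold are invented for illustration, not the paper's computational model.

```python
# A toy filter screening NLG candidates for positive-face threats; the
# blame proxy and threshold are illustrative assumptions only.
CRITICAL_PHRASES = ("you failed", "your fault", "you broke")

def blame_score(utterance: str, addressee_responsible: bool) -> float:
    """Crude proxy: critical wording aimed at a blameless addressee scores high."""
    critical = any(p in utterance.lower() for p in CRITICAL_PHRASES)
    if not critical:
        return 0.0
    return 1.0 if not addressee_responsible else 0.4

def select_utterance(candidates, addressee_responsible, threshold=0.5):
    for c in candidates:  # prefer candidates below the blame threshold
        if blame_score(c, addressee_responsible) < threshold:
            return c
    return "The task could not be completed."  # face-saving fallback

print(select_utterance(
    ["You failed to charge the battery.", "The battery was not charged."],
    addressee_responsible=False))
# -> 'The battery was not charged.'
```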


Emotions and Personality in Personalized Services | 2016

Reflections on the Design Challenges Prompted by Affect-Aware Socially Assistive Robots

Jason R. Wilson; Matthias Scheutz; Gordon Briggs

The rising interest in socially assistive robotics stems, at least in part, from the aging population around the world. A great deal of research and interest has gone into ensuring the safety of these robots. However, little has been done to consider the necessary role of emotion in these robots and the potential ethical implications of having affect-aware socially assistive robots. In this chapter we address some of the considerations that need to be taken into account in the research and development of robots assisting a vulnerable population. We use two fictional scenarios involving a robot assisting a person with Parkinson's disease to discuss five ethical issues relevant to affect-aware socially assistive robots.

Collaboration


Gordon Briggs's frequent collaborators and their affiliations.

Top Co-Authors

Paul Bello

Rensselaer Polytechnic Institute

Sangeet Khemlani

United States Naval Research Laboratory
