
Publication


Featured research published by Charlene K. Stokes.


Human Factors | 2012

Human-human reliance in the context of automation.

Joseph B. Lyons; Charlene K. Stokes

Objective: The current study examined human–human reliance during a computer-based scenario where participants interacted with a human aid and an automated tool simultaneously. Background: Reliance on others is complex, and few studies have examined human–human reliance in the context of automation. Past research found that humans are biased in their perceived utility of automated tools such that they view them as more accurate than humans. Prior reviews have postulated differences in human–human versus human–machine reliance, yet few studies have examined such reliance when individuals are presented with divergent information from different sources. Method: Participants (N = 40) engaged in the Convoy Leader experiment. They selected a convoy route based on explicit guidance from a human aid and information from an automated map. Subjective and behavioral human–human reliance indices were assessed. Perceptions of risk were manipulated by creating three scenarios (low, moderate, and high) that varied in the amount of vulnerability (i.e., potential for attack) associated with the convoy routes. Results: Results indicated that participants reduced their behavioral reliance on the human aid when faced with higher risk decisions (suggesting increased reliance on the automation); however, there were no reported differences in intentions to rely on the human aid relative to the automation. Conclusion: The current study demonstrated that when individuals are provided information from both a human aid and automation, their reliance on the human aid decreased during high-risk decisions. Application: This study adds to a growing understanding of the biases and preferences that exist during complex human–human and human–machine interactions.


Human-Robot Interaction | 2015

Emotional Storytelling in the Classroom: Individual versus Group Interaction between Children and Robots

Iolanda Leite; Marissa McCoy; Monika Lohani; Daniel Ullman; Nicole Salomons; Charlene K. Stokes; Susan E. Rivers; Brian Scassellati

Robot assistive technology is becoming increasingly prevalent. Despite the growing body of research in this area, the role of the type of interaction (i.e., small groups versus individual interactions) on the effectiveness of interventions is still unclear. In this paper, we explore a new direction for socially assistive robotics, where multiple robotic characters interact with children in an interactive storytelling scenario. We conducted a between-subjects repeated interaction study where a single child or a group of three children interacted with the robots in an interactive narrative scenario. Results show that although the individual condition increased participants' story recall abilities compared to the group condition, the emotional interpretation of the story content seemed more dependent on the difficulty level than on the study condition. Our findings suggest that, regardless of the type of interaction, interactive narratives with multiple robots are a promising approach to foster children's development of social skills.


Collaboration Technologies and Systems | 2010

Accounting for the human in cyberspace: Effects of mood on trust in automation

Charlene K. Stokes; Joseph B. Lyons; Kenneth Littlejohn; Joseph Natarian; Ellen Case; Nicholas Speranza

The present study examined the effects of mood on trust in automation over time. Participants (N = 72) were induced into either a positive or negative mood and then completed a computer-based task that involved the assistance of an automated aid. Results indicated that mood had a significant impact on initial trust formation, but this impact diminished as time and interaction with the automated aid increased. Implications regarding trust propensity and trustworthiness are discussed, as well as the dynamic effects of trust over time.


Team Performance Management | 2010

Adaptive performance: a criterion problem

Charlene K. Stokes; Tamera R. Schneider; Joseph B. Lyons

Purpose – The purpose of this paper is to present an empirical examination of the convergent validity of the two foremost measurement methods used to assess adaptive performance: subjective ratings and objective task scores. Predictors of adaptive performance have been extensively examined, but limited research attention has been directed at adaptability itself as a validated construct within the job performance domain. Due to this neglect, it is unclear if researchers can generalize findings across criterion measurement methods. Design/methodology/approach – Teams of five (275 individuals) performed a computer-based task that involved a series of disruptions requiring an adaptive response. In addition to post-disruption task scores, subjective self- and peer-ratings of adaptive performance were collected. Findings – Results did not indicate strong support for the convergent validity of subjective and objective measures. Although the measures were significantly related (r = 0.47, p < 0.001) and shared a relat...


Archive | 2017

A Framework for Human-Agent Social Systems: The Role of Non-technical Factors in Operation Success

Monika Lohani; Charlene K. Stokes; Natalia Dashan; Marissa McCoy; Christopher A. Bailey; Susan E. Rivers

We present a comprehensive framework that identifies a number of factors that impact human-agent team building, including human, agent, and environmental factors. This framework integrates existing empirical work in organization behavior, non-technical training, and human-agent interaction to support successful human-agent operations. We conclude by discussing implications and next steps to evaluate and expand our framework with the aim of guiding future attempts to create efficient human-agent teams and improve mission outcomes.


Human-Robot Interaction | 2016

Social Interaction Moderates Human-Robot Trust-Reliance Relationship and Improves Stress Coping

Monika Lohani; Charlene K. Stokes; Marissa McCoy; Christopher A. Bailey; Susan E. Rivers

Previous work with non-social human-robot interaction has found no link between trust and reliance [1]. The current study tested the question: can social interaction moderate the trust-reliance relationship? Human-robot interactions may share characteristics with social and emotional interactions between humans. We investigated how social and emotional human-robot interactions moderate the trust-reliance relationship and impact perceived stress-coping abilities. In the experimental condition, social and emotional interactions were used to guide the dialogue between a participant and a virtual robot in order to promote team building. In the matched control condition, the interactions were information-focused, without social or emotional content. We show that social interaction moderated the effect of trust on reliance such that higher trust led to greater reliance on the robot. The experimental condition also yielded higher perceived stress-coping abilities. These findings contribute to the existing literature and suggest that creating deeper social and emotional interactions with a robot teammate can facilitate human-robot partnership.


Frontiers in Robotics and AI | 2017

Narratives with Robots: The Impact of Interaction Context and Individual Differences on Story Recall and Emotional Understanding

Iolanda Leite; Marissa McCoy; Monika Lohani; Daniel Ullman; Nicole Salomons; Charlene K. Stokes; Susan E. Rivers; Brian Scassellati

Role-play scenarios have been considered a successful learning space for children to develop their social and emotional abilities. In this paper, we investigate whether socially assistive robots in role-playing settings are as effective with small groups of children as they are with a single child, and whether individual factors such as gender, grade level (first vs. second), perception of the robots (peer vs. adult), and empathy level (low vs. high) play a role in these two interaction contexts. We conducted a three-week repeated exposure experiment where 40 children interacted with socially assistive robotic characters that acted out interactive stories around words that contribute to expanding children's emotional vocabulary. Our results showed that although participants who interacted alone with the robots recalled the stories better than participants in the group condition, no significant differences were found in children's emotional interpretation of the narratives. With regard to individual differences, we found that a single-child setting appeared more appropriate for first graders than a group setting, that empathy level was an important predictor of emotional understanding of the narratives, and that children's performance varied depending on their perception of the robots (peer vs. adult) in the two conditions.


Robot and Human Interactive Communication | 2016

Autonomous disengagement classification and repair in multiparty child-robot interaction

Iolanda Leite; Marissa McCoy; Monika Lohani; Nicole Salomons; Kara McElvaine; Charlene K. Stokes; Susan E. Rivers; Brian Scassellati

As research on robotic tutors increases, it becomes more relevant to understand whether and how robots will be able to keep students engaged over time. In this paper, we propose an algorithm to monitor engagement in small groups of children and trigger disengagement repair interventions when necessary. We implemented this algorithm in a scenario where two robot actors play out interactive narratives around emotional words and conducted a field study where 72 children interacted with the robots three times in one of the following conditions: control (no disengagement repair), targeted (interventions addressing the child with the highest disengagement level) and general (interventions addressing the whole group). Surprisingly, children in the control condition had higher narrative recall than in the two experimental conditions, but no significant differences were found in the emotional interpretation of the narratives. When comparing the two different types of disengagement repair strategies, participants who received targeted interventions had higher story recall and emotional understanding, and their valence after disengagement repair interventions increased over time.


Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 2012

Trust in Computers and Robots: The Uses and Boundaries of the Analogy to Interpersonal Trust

David Atkinson; Peter A. Hancock; Robert R. Hoffman; John D. Lee; Ericka Rovira; Charlene K. Stokes; Alan R. Wagner

Trust is a complex concept with many meanings that implicates many variables; it is not a single concept, state, or continuum. Panelists will briefly argue their stances on concepts of trust in automation and on whether (or to what extent) our understanding of trust in automation should be addressed by analogy to interpersonal trust. There is considerable divergence of opinion on these matters, and on the question of whether it is possible for robots to engage in trustworthy relations with humans.


Intelligent Virtual Agents | 2017

Do We Need Emotionally Intelligent Artificial Agents? First Results of Human Perceptions of Emotional Intelligence in Humans Compared to Robots

Lisa Fan; Matthias Scheutz; Monika Lohani; Marissa McCoy; Charlene K. Stokes

Humans are adept at reading emotional signals in other humans and even in artificial agents, which raises the question of whether artificial agents need to be emotionally intelligent to ensure effective social interactions. Artificial agents without emotional intelligence might generate behavior that is misinterpreted, unexpected, and confusing to humans, violating human expectations and possibly causing emotional harm. Surprisingly, there is a dearth of investigations aimed at understanding the extent to which artificial agents need emotional intelligence for successful interactions. Here, we present the first study of the perception of emotional intelligence (EI) in robots versus humans. The objective was to determine whether people viewed robots as more or less emotionally intelligent when exhibiting the same behaviors as humans, and to investigate which verbal and nonverbal communication methods were most crucial for human observational judgments. Study participants were shown a scene in which either a robot or a human behaved with either high or low empathy, and were then asked to evaluate the agent's emotional intelligence and trustworthiness. The results showed that participants could consistently distinguish the high-EI condition from the low-EI condition regardless of which communication methods were observed, and that whether the agent was a robot or a human had no effect on this perception. We also found that, relative to low-EI conditions, high-EI conditions led to greater trust in the agent, which implies that we must design robots to be emotionally intelligent if we wish users to trust them.

Collaboration


Dive into Charlene K. Stokes's collaborations.

Top Co-Authors

Joseph B. Lyons

Air Force Research Laboratory


Daniel Schwartz

Air Force Research Laboratory
