Publication


Featured research published by Shan G. Lakhmani.


Theoretical Issues in Ergonomics Science | 2018

Situation awareness-based agent transparency and human-autonomy teaming effectiveness

Jessie Y. C. Chen; Shan G. Lakhmani; Kimberly Stowers; Anthony R. Selkowitz; Julia L. Wright; Michael J. Barnes

Effective collaboration between humans and agents depends on humans maintaining an appropriate understanding of and calibrated trust in the judgment of their agent counterparts. The Situation Awareness-based Agent Transparency (SAT) model was proposed to support human awareness in human–agent teams. As agents transition from tools to artificial teammates, an expansion of the model is necessary to support teamwork paradigms, which require bidirectional transparency. We propose that an updated model can better inform human–agent interaction in paradigms involving more advanced agent teammates. This paper describes the model's use in three programmes of research, which exemplify its utility in different contexts – an autonomous squad member, a mediator between a human and multiple subordinate robots, and a plan recommendation agent. Through this review, we show that the SAT model continues to be an effective tool for facilitating shared understanding and proper calibration of trust in human–agent teams.


Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 2014

System State Awareness: A Human-Centered Design Approach to Awareness in a Complex World

Nicholas Kasdaglis; Olivia B. Newton; Shan G. Lakhmani

Situation Awareness is a popular concept used to assess human agents' understanding of a system and to explain errors that arise from poor understanding. However, the popular conception of situation awareness retains assumptions better suited to linear, controlled systems. When assessing complex systems, rife with non-linear, emergent behaviors, current models of situation awareness frequently place much of the burden of system failure onto the human agent. We contend that the traditional concept of a fully controlled system is not the best fit for a complex system with networked loci of control, especially during abnormal system states. Instead, we recommend an approach that focuses on agents' adaptation to environmental cues. We discuss how the concept of situation awareness, when enmeshed in the assumption of linearity, insufficiently deals with extended cognition, reliability, adaptation, and system stability. We conclude that an approach focusing instead on System State Awareness (SSA) facilitates the adaptation of system goals during off-normal system states. Thus, SSA provides the theoretical underpinning for the design of distributed networked systems that improve human performance in complex environments.


Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 2016

Intelligent Agent Transparency: The Design and Evaluation of an Interface to Facilitate Human and Intelligent Agent Collaboration

Kimberly Stowers; Nicholas Kasdaglis; Olivia B. Newton; Shan G. Lakhmani; Ryan Wohleber; Jessie Y. C. Chen

We evaluated the usability and utility of an unmanned vehicle management interface that was developed based on the Situation awareness-based Agent Transparency (SAT) model. We sought to examine the effect of increasing levels of agent transparency on operator task performance and perceived usability of the agent. Usability and utility were assessed through flash testing, a focus group, and experimental testing. While perceived usability appeared to decrease with the portrayal of uncertainty, operator performance and reliance on key parts of the interface increased. Implications and next steps are discussed.


Archive | 2017

Displaying Information to Support Transparency for Autonomous Platforms

Anthony R. Selkowitz; Cintya A. Larios; Shan G. Lakhmani; Jessie Y. C. Chen

The purpose of this paper is to summarize display design techniques that are best suited for displaying information to support transparency of communication in autonomous systems interfaces. The principles include Ecological Interface Design, integrated displays, and pre-attentive cuing. Examples of displays from two recent experiments investigating how transparency affects operator trust, situation awareness, and workload are provided throughout the paper as an application of these techniques. Specifically, these interfaces were formatted using the Situation awareness-based Agent Transparency model as a method of organizing the information in displays for an autonomous robot—the Autonomous Squad Member (ASM). Overall, these methods were useful in creating usable interfaces for the ASM display.


Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 2016

Agent Transparency and the Autonomous Squad Member

Anthony R. Selkowitz; Shan G. Lakhmani; Cintya N. Larios; Jessie Y. C. Chen

The present study investigated the effect of including information to support transparency, based on the Situation awareness-based Agent Transparency (SAT) model, in the interface for an autonomous agent known as the Autonomous Squad Member (ASM). In four different SAT model-based display conditions, participants used the ASM's interface to gain information about the ASM's and the simulated squad's status as they completed a route containing obstacles. Results indicated that participants had greater trust in the ASM and were most effective at maintaining situation awareness when the ASM provided outcome predictions (SAT Level 3) in addition to planned actions (SAT Level 1) and a rationale for those actions (SAT Level 2). No differences in participant workload while monitoring the ASM were observed across the display conditions.
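As a rough illustration of the three SAT information levels enumerated above, the following sketch shows one way a transparency display update could be structured as data. This is a hypothetical example written for this summary; the class and field names are assumptions, not the interface actually used in the study.

from dataclasses import dataclass
from typing import Optional

@dataclass
class SATLevel1:
    # Level 1: the agent's current goal and planned action.
    goal: str
    current_action: str

@dataclass
class SATLevel2:
    # Level 2: the agent's rationale for its planned action.
    rationale: str

@dataclass
class SATLevel3:
    # Level 3: the agent's projected outcome, with its uncertainty.
    predicted_outcome: str
    confidence: float  # assumed 0.0-1.0 scale

@dataclass
class TransparencyUpdate:
    # One display update; lower-transparency conditions omit the higher levels.
    level1: SATLevel1
    level2: Optional[SATLevel2] = None
    level3: Optional[SATLevel3] = None

# A Level 1+2+3 update, analogous to the condition in which participants
# showed the greatest trust and situation awareness.
update = TransparencyUpdate(
    level1=SATLevel1(goal="reach next waypoint", current_action="reroute around obstacle"),
    level2=SATLevel2(rationale="debris detected on the planned route"),
    level3=SATLevel3(predicted_outcome="arrival delayed by ~2 minutes", confidence=0.8),
)
print(update)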


Intelligent User Interfaces | 2016

Human-Autonomy Teaming and Agent Transparency

Jessie Y. C. Chen; Michael J. Barnes; Anthony R. Selkowitz; Kimberly Stowers; Shan G. Lakhmani; Nicholas Kasdaglis

We developed the user interfaces for two Human-Robot Interaction (HRI) tasking environments: dismounted infantry interacting with a ground robot (Autonomous Squad Member) and a human operator interacting with an intelligent agent to manage a team of heterogeneous robotic vehicles (IMPACT). These user interfaces were developed based on the Situation awareness-based Agent Transparency (SAT) model. User testing showed that as agent transparency increased, so did overall human-agent team performance. Participants were able to calibrate their trust in the agent more appropriately as agent transparency increased.


Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 2015

The Effects of Agent Transparency on Human Interaction with an Autonomous Robotic Agent

Anthony R. Selkowitz; Shan G. Lakhmani; Jessie Y. C. Chen; Michael W. Boyce

We used the Situation awareness-based Agent Transparency model as a framework to design a user interface to support agent transparency. Participants were instructed to supervise an autonomous robotic agent as it traversed simulated urban environments. During this task, participants were exposed to one of three levels of information used to support agent transparency in the interface display. Our findings suggest that providing agent transparency information allows operators to properly calibrate trust without excess workload. However, the increased agent transparency information did not improve operator situation awareness.


Cognitive Systems Research | 2017

Using agent transparency to support situation awareness of the Autonomous Squad Member

Anthony R. Selkowitz; Shan G. Lakhmani; Jessie Y. C. Chen

Agent transparency has been proposed as a solution to the problem of facilitating operators' situation awareness in human-robot teams. Sixty participants performed a dual monitoring task in a virtual environment, monitoring an intelligent, autonomous robot teammate while also performing threat detection. The robot displayed four different interfaces, corresponding to information from the Situation awareness-based Agent Transparency (SAT) model. Participants' situation awareness of the robot, confidence in their situation awareness, trust in the robot, workload, cognitive processing, and perceived usability of the robot displays were assessed. Results indicate that participants using interfaces corresponding to higher SAT levels had greater situation awareness, cognitive processing, and trust in the robot than when they viewed lower-level SAT interfaces. No differences in workload or perceived usability of the display were detected. Based on these findings, we conclude that transparency has a significant effect on situation awareness, trust, and cognitive processing.


Human-Robot Interaction | 2015

Effects of Agent Transparency on Operator Trust

Michael W. Boyce; Jessie Y. C. Chen; Anthony R. Selkowitz; Shan G. Lakhmani

We conducted a human-in-the-loop robot simulation experiment to examine the effects of displaying transparency information, in the interface for an autonomous robot, on operator trust. Participants were assigned to one of three transparency conditions, and trust was measured both before and after they observed the autonomous robotic agent's progress. Results demonstrated that participants who received more transparency information reported higher trust in the autonomous robotic agent. Overall, the findings indicate that displaying SAT model-based transparency information on a robotic interface is effective for appropriate trust calibration in an autonomous robotic agent.


Simulation & Gaming | 2014

Opening Cinematics: Their Cost-Effectiveness in Serious Games

Katelyn Procci; Shan G. Lakhmani; Talib S. Hussain; Clint A. Bowers

For the development of serious games, it is necessary to articulate the specific features that lend themselves best to the creation of effective learning games. Given the limited resources of the typical serious games developer, time and money should be spent so that features with the greatest return on investment take priority. Opening cinematics, a popular feature of games, were examined through the lens of three major theoretical perspectives that promote learning: situated learning, emotional arousal, and goal orientation. A series of three experiments was conducted to determine whether the inclusion of opening cinematics could change the goal orientation of players and improve the effectiveness of a serious game used to train U.S. Navy recruits in shipboard damage control procedures. The data suggest that opening cinematics were not worth the immense development investment. Game design suggestions and potential topics for future research are provided.

Collaboration


Dive into Shan G. Lakhmani's collaboration network.

Top Co-Authors

Kimberly Stowers, University of Central Florida
Nicholas Kasdaglis, Florida Institute of Technology
Clint A. Bowers, University of Central Florida
Olivia B. Newton, University of Central Florida
Alicia Sanchez, Defense Acquisition University
Anthony R. Selkowitz, United States Army Research Laboratory
Cintya N. Larios, University of Central Florida
Daniel Barber, University of Central Florida
Elaine M. Raybourn, Sandia National Laboratories
James L. Szalma, University of Central Florida