
Publication


Featured research published by Stephanie J. Lackey.


International Journal of Human-Computer Interaction | 2004

A Paradigm Shift in Interactive Computing: Deriving Multimodal Design Principles from Behavioral and Neurological Foundations

Kay M. Stanney; Shatha N. Samman; Leah Reeves; Kelly S. Hale; Wendi L. Buff; Clint A. Bowers; Brian Goldiez; Denise Nicholson; Stephanie J. Lackey

As technology advances, systems are increasingly able to provide more information than a human operator can process accurately. Thus, a challenge for designers is to create interfaces that allow operators to process the optimal amount of data. It is herein proposed that this may be accomplished by creating multimodal display systems that augment or switch modalities to maximize user information processing. Such a system would ultimately be informed by a user's neurophysiological state. As a first step toward that goal, relevant literature is reviewed and a set of preliminary design guidelines for multimodal information systems is suggested.
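The guideline of augmenting or switching modalities to keep the operator within processing capacity can be illustrated with a small routing rule. This is a minimal sketch only: the channel names, capacity thresholds, and the `route_message` function are invented for illustration and are not part of the paper's design guidelines.

```python
# Hypothetical sketch: route an incoming message to the sensory channel
# with spare capacity, switching modalities when the preferred channel
# is overloaded. Capacities and load values are illustrative.

CAPACITY = {"visual": 1.0, "auditory": 1.0, "tactile": 1.0}

def route_message(preferred: str, load: dict) -> str:
    """Return the modality to use for the next message.

    load maps each channel to its current utilisation (0.0 and up).
    If the preferred channel has spare capacity, use it; otherwise
    fall back to the least-loaded channel (modality switching).
    """
    if load.get(preferred, 0.0) < CAPACITY[preferred]:
        return preferred
    return min(CAPACITY, key=lambda ch: load.get(ch, 0.0))
```

For example, with the visual channel saturated (load 1.2) the rule falls back to whichever remaining channel is least loaded, mirroring the paper's idea of offloading information to an under-used modality.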


Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 2011

Defining Next-Generation Multi-Modal Communication in Human Robot Interaction

Stephanie J. Lackey; Daniel Barber; Lauren Reinerman; Norman I. Badler; Irwin Hudson

With teleoperation being the contemporary standard for Human Robot Interaction (HRI), research into multi-modal communication (MMC) has focused on the development of advanced Operator Control Units (OCU) supporting control of one or more robots. However, with advances being made to improve the perception, intelligence, and mobility of robots, a need exists to revolutionize the ways in which Soldiers interact with robotic team members. Within this future vision, mixed-initiative Soldier-Robot (SR) teams will work collaboratively, sharing information back and forth in a fluid, natural manner using combinations of communication methods. Therefore, new definitions are required to focus research efforts supporting next-generation MMC. Drawing on a thorough survey of the literature and a scientific workshop on the topic, this paper operationally defines MMC, Explicit Communication, and Implicit Communication to encompass the shifting paradigm of HRI from a controller/controlled relationship to a cooperative teammate relationship. An illustrative scenario vignette provides context and specific examples of each communication type. Finally, future research efforts are summarized.


Proceedings of SPIE | 2013

Visual and tactile interfaces for bi-directional human robot communication

Daniel Barber; Stephanie J. Lackey; Lauren Reinerman-Jones; Irwin Hudson

Seamless integration of unmanned systems and Soldiers in the operational environment requires robust communication capabilities. Multi-Modal Communication (MMC) facilitates achieving this goal due to redundancy and levels of communication superior to single-mode interaction, using auditory, visual, and tactile modalities. Visual signaling using arm and hand gestures is a natural method of communication between people. Visual signals standardized within the U.S. Army Field Manual and in use by Soldiers provide a foundation for developing gestures for human-to-robot communication. Emerging technologies using Inertial Measurement Units (IMU) enable classification of arm and hand gestures for communication with a robot without the line-of-sight required by computer vision techniques. These devices improve the robustness of interpreting gestures in noisy environments and are capable of classifying signals relevant to operational tasks. Closing the communication loop between Soldiers and robots necessitates that robots have the ability to return equivalent messages. Existing visual signals from robots to humans typically require highly anthropomorphic features not present on military vehicles. Tactile displays tap into an unused modality for robot-to-human communication. Typically used for hands-free navigation and cueing, existing tactile display technologies are used here to deliver equivalents of the visual signals in the U.S. Army Field Manual. This paper describes ongoing research to collaboratively develop tactile communication methods with Soldiers, measure classification accuracy of visual signal interfaces, and provide an integration example including two robotic platforms.
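The IMU-based gesture classification described above can be illustrated with a minimal nearest-centroid classifier over accelerometer statistics. Everything below is an invented sketch: the gesture names, the per-axis mean features, and the template values are illustrative assumptions, not the paper's method, and fielded systems use far richer features and classifiers.

```python
import math

def features(window):
    """window: list of (ax, ay, az) accelerometer samples.
    Returns a per-axis mean feature vector (a deliberately crude feature)."""
    n = len(window)
    return [sum(sample[i] for sample in window) / n for i in range(3)]

def classify(window, templates):
    """templates: gesture name -> stored feature vector.
    Returns the gesture whose template is nearest (Euclidean) to the window."""
    f = features(window)
    def dist(t):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(f, t)))
    return min(templates, key=lambda g: dist(templates[g]))

# Invented templates for two illustrative gestures:
TEMPLATES = {"halt": [0.0, 0.0, 1.0], "rally": [1.0, 0.0, 0.0]}
```

A short window dominated by acceleration along the x-axis would then classify as "rally" under these made-up templates; the point is only the pipeline shape (windowed samples, features, template matching), not the numbers.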


Collaboration Technologies and Systems | 2010

Recommended roles for uninhabited team members within mixed-initiative combat teams

Sherry Ogreten; Stephanie J. Lackey; Denise Nicholson

Trust in automation is a well-researched topic that is particularly important when planning mixed initiative interaction. When working with teams comprised of both human and non-human team members, the amount of trust the operator places in the automation often determines which parts of the interaction can be automated and the optimal level of automation. The mixed-initiative community has created numerous systems that leverage trust in automation, but results have been inconclusive. After examining the primary factors that impact trust in automated systems, we make several recommendations regarding the assignment of roles for human and non-human mixed-initiative team members.


Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 2014

Impact of Instructional Strategies on Motivation and Engagement for Simulation-Based Training of Robot-Aided ISR Tasks

Julie Nanette Salcedo; Stephanie J. Lackey; Crystal S. Maraj; Lauren Reinerman-Jones

The U.S. Army projects a considerable increase in the number of operational Unmanned Ground Systems (UGS) within the next ten years. There is a need to enhance UGS capabilities to support remote Intelligence, Surveillance, Reconnaissance (ISR) operations involving the identification of High-Value Individuals (HVI). Emerging UGS capability requirements will inevitably result in new or revised training requirements. The U.S. Army identifies Simulation-Based Training (SBT) as a required training platform for robot-aided ISR tasks utilizing UGSs. In order to implement an effective SBT system, there are several factors to consider related to training systems design and trainee needs. Factors addressed in this study include the selection of effective SBT instructional strategies and the impact on trainee motivation and engagement. Results from this study contribute to design and future research recommendations regarding SBT for robot-aided ISR tasks.


International Conference on Virtual, Augmented and Mixed Reality | 2013

Assessing Engagement in Simulation-Based Training Systems for Virtual Kinesic Cue Detection Training

Eric Ortiz; Crystal S. Maraj; Julie Nanette Salcedo; Stephanie J. Lackey; Irwin Hudson

Combat Profiling techniques strengthen a Warfighter’s ability to quickly react to situations within the operational environment based upon observable behavioral identifiers. One significant domain-specific skill researched is kinesics, or the study of body language. A Warfighter’s ability to distinguish kinesic cues can greatly aid in the detection of possible threatening activities or individuals with harmful intent. This paper describes a research effort assessing the effectiveness of kinesic cue depiction within Simulation-Based Training (SBT) systems and the impact of engagement levels upon trainee performance. For this experiment, live training content served as the foundation for scenarios generated using Bohemia Interactive’s Virtual Battlespace 2 (VBS2). Training content was presented on a standard desktop computer or within a physically immersive Virtual Environment (VE). Results suggest that the utilization of a highly immersive VE is not critical to achieve optimal performance during familiarization training of kinesic cue detection. While there was not a significant difference in engagement between conditions, the data showed evidence to suggest decreased levels of engagement by participants using the immersive VE. Further analysis revealed that temporal dissociation, which was significantly lower in the immersive VE condition, was a predictor of simulation engagement. In one respect, this indicates that standard desktop systems are suited for transitioning existing kinesic familiarization training content from the classroom to a personal computer. However, interpretation of the results requires operational context that suggests the capabilities of high-fidelity immersive VEs are not fully utilized by existing training methodologies. Thus, this research serves as an illustration of technology advancements compelling the SBT community to evolve training methods in order to fully benefit from emerging technologies.


The International Journal of Learning | 2011

Training system impact assessment: a review, reconceptualisation, and extension

Timothy Kotnour; Rafael E. Landaeta; Stephanie J. Lackey

This research focuses on reviewing and extending the current literature on impact assessment of training systems. This research contributes a model of training system assessment that considers organisational aspects of the training system, the life cycle of training systems, and the different stakeholders of training systems. This investigation is based on the definition of training systems as socio-technical human resource development initiatives. The intent of the training system assessment model is to help organisations that provide and receive training, as well as organisations managing the research and development of training systems, evaluate the performance of a training system throughout its life cycle: from identifying programme needs, to selecting and conducting R&D, to implementation, to defining and measuring results. As proof of this concept, new views of training system impact assessment were developed and applied in the Next-generation Expeditionary Warfare Intelligent Training (NEW-IT) programme.


Spring Simulation Multiconference | 2010

The impact of unmanned weapon systems on individual and team performance

Eric Ortiz; Stephanie J. Lackey; MAJ Jonathan Stevens; Irwin Hudson

The U.S. military integrates unmanned systems within combat operations with greater regularity and scope each year. Warfighters currently conduct operations such as Improvised Explosive Device (IED) interrogation, and Unmanned Aerial System (UAS) surveillance and reconnaissance with various unmanned systems. Integration of unmanned weapon systems into human Fire Teams represents the next evolution in mixed-initiative teams. Such integration aims to improve individual and team performance; however, improved understanding and application of Human-Robot Interaction (HRI) principles within combat environments is required. The research presented investigates the impact upon individual and team performance when a non-autonomous unmanned weapon system is integrated into a human Fire Team. Studies were conducted at two U.S. Army installations involving 36 four-person Fire Teams. At the first location, participants were pre-deployed novice soldiers; at the second installation, participants were experienced soldiers. All soldiers had previous weapon experience using an M16 rifle and M240B machine gun. Two conditions were compared: Fire Teams fully manned by human Warfighters and Fire Teams where one human Warfighter was replaced by a remotely operated weapon. Each team consisted of four members: one M240B Gunner and three M16 Riflemen. The teams completed simulated missions utilizing the Engagement Skills Trainer 2000 (EST 2000), a virtual training simulator that executes various mission scenarios. The Fire Teams completed two different scenarios, each consisting of a manned and unmanned condition. In the unmanned condition, the Gunner operated a remotely operated weapon from a separate location. Performance was primarily measured by recording the total number of targets hit by each team member during scenario execution. Paired samples t-tests revealed significant differences in individual performance from the manned to unmanned conditions. Individual Riflemen improved performance from manned to unmanned scenarios. However, the Gunners significantly decreased in performance when operating the remote weapon system during the unmanned condition. Team performance did not reveal a significant difference across conditions. This paper describes the experimental plan and methodology, followed by a discussion of experimental results and recommendations for future mixed-initiative team research.
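The paired-samples t-test the study applies reduces to the standard formula t = mean(d) / (sd(d) / sqrt(n)) over per-participant score differences d. A minimal sketch of that computation follows; the hit counts are invented for illustration and are not the study's data.

```python
import math
from statistics import mean, stdev

def paired_t(before, after):
    """Paired-samples t statistic over per-participant differences.

    stdev() is the sample standard deviation (n - 1 denominator),
    which is what the paired t-test requires.
    """
    d = [a - b for a, b in zip(after, before)]
    n = len(d)
    return mean(d) / (stdev(d) / math.sqrt(n))

# Invented per-Rifleman target hits in each condition:
manned = [10, 12, 9, 11, 10]
unmanned = [12, 15, 10, 15, 12]

t = paired_t(manned, unmanned)  # t is about 4.71 for this invented sample
```

With n - 1 = 4 degrees of freedom, a t statistic of this size would be compared against a t distribution to judge significance, which is the procedure the abstract's "paired samples t-tests revealed significant differences" refers to.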


International Conference on Virtual, Augmented and Mixed Reality | 2016

Mixed Reality Training of Military Tasks: Comparison of Two Approaches Through Reactions from Subject Matter Experts

Roberto K. Champney; Julie Nanette Salcedo; Stephanie J. Lackey; Stephen R. Serge; Michelle Sinagra

This paper discusses a training-based comparison of two mixed reality military trainers utilizing simulation elements that fall on different areas of the virtuality continuum. Independent groups of subject matter experts (SMEs) interacted with each system while conducting expert evaluations, and independent groups of military officers experienced each system for call for fire/close air support training. Following these exposures, participants were queried on the constructs of simulator sickness, training utility, simulator fidelity, usability, and immersion. The results are contrasted and discussed. The outcomes of this comparison serve to promote discussion among the scientific community concerning the training tradeoffs affected by the virtuality continuum.


International Conference on Virtual, Augmented and Mixed Reality | 2016

Impact of Instructional Strategies on Workload, Stress, and Flow in Simulation-Based Training for Behavior Cue Analysis

Julie Nanette Salcedo; Stephanie J. Lackey; Crystal S. Maraj

The U.S. Army desires to improve Intelligence, Surveillance, Reconnaissance (ISR) abilities by incorporating Unmanned Ground Systems (UGS) to aid in the identification of High Value Individuals (HVI) through the analysis of human behavior cues from safer distances. This requires analysts to employ perceptual skills indirectly via UGS video surveillance displays and will also require training platforms tailored to address the perceptual skill needs of these robot-aided ISR tasks. The U.S. Army identifies Simulation-Based Training (SBT) as a necessary training medium for UGS technologies. Instructional strategies that may increase the effectiveness of SBT for robot-aided ISR tasks include Highlighting and Massed Exposure. This study compared the impact of each strategy on trainee workload, stress, and flow during SBT for a behavior cue analysis task. Ultimately, the goal of this research effort is to provide instructional design recommendations that will improve SBT development to support effective training for emerging UGS capabilities.

Collaboration

Top co-authors of Stephanie J. Lackey (all University of Central Florida): Julie Nanette Salcedo; Crystal S. Maraj; Daniel Barber; Eric Ortiz; Denise Nicholson; Roberto K. Champney; Kay M. Stanney; Stephen R. Serge