
Publication


Featured research published by Laura Milham.


Military Psychology | 2003

Training Evaluation in the Military: Misconceptions, Opportunities, and Challenges

Eduardo Salas; Laura Milham; Clint A. Bowers

Due to a number of misconceptions about training evaluation in the military, these evaluations are rarely done. In this article, we review recent findings that identify obstacles to training evaluation in the military and offer some alternatives for dealing with these problems. Further, we discuss the use of theoretically driven evaluation outcomes to provide evaluators with information that can feed back into the training system. Finally, we discuss future challenges for training evaluation in the military environment, such as the evolution from physical to cognitive tasks; training large, distributed teams; and the use of simulation in training design.


Theoretical Issues in Ergonomics Science | 2009

Multimodal sensory information requirements for enhancing situation awareness and training effectiveness

Kelly S. Hale; Kay M. Stanney; Laura Milham; M.A. Bell Carroll; D.L. Jones

Virtual training systems use multimodal technology to provide realistic training scenarios. To determine the benefits of adopting multimodal training strategies, it is essential to identify the critical knowledge, skills and attitudes that are being targeted in training and relate these to the multimodal human sensory systems that should be stimulated to support this acquisition. This paper focuses on trainee situation awareness and develops a multimodal optimisation of situation awareness conceptual model that outlines how multimodality may be used to optimise perception, comprehension and prediction of object recognition, spatial and temporal components of awareness. Specific multimodal design techniques are presented, which map desired training outcomes and supporting sensory requirements to training system design guidelines for optimal trainee situation awareness and human performance.


Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 2006

A Method to Determine Optimal Simulator Training Time: Examining Performance Improvement Across the Learning Curve

Roberto K. Champney; Laura Milham; Meredith Carroll; Kay M. Stanney; Joseph Cohn

Training simulators have become an integral part of training programs across both military and non-military domains. A pressing issue, however, is when and how these systems should be integrated into an existing training curriculum. Currently, time spent in simulator training is usually driven by the availability of the simulator or the planned class curriculum, rather than by any systematic evaluation of the incremental learning that occurs over time. One reason for this may be the lack of methodologies for evaluating the optimal training time spent in simulators. To address this, a methodology is presented which utilizes a continuous evaluation of performance across trials to identify a “plateau” in learning improvements as represented by a learning curve.
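
The plateau-identification idea can be sketched as a simple stopping rule over per-trial performance scores. This is only an illustrative reading of the abstract, not the paper's actual method: the window size, improvement threshold, and sample learning curve below are invented assumptions.

```python
def find_plateau(scores, window=3, min_gain=0.02):
    """Return the index of the first trial at which average per-trial
    improvement over the preceding `window` trials drops below `min_gain`.
    Assumes higher scores mean better performance. Both parameters are
    illustrative, not values from the paper."""
    for i in range(window, len(scores)):
        gain = (scores[i] - scores[i - window]) / window  # mean recent gain
        if gain < min_gain:
            return i
    return None  # learning still improving across all observed trials

# Invented learning curve: rapid early gains that level off
curve = [0.40, 0.55, 0.65, 0.72, 0.76, 0.78, 0.79, 0.79, 0.80]
print(find_plateau(curve))  # first trial where gains fall below threshold
```

In practice the threshold would be set from the cost of additional simulator time versus the value of the marginal performance gain, and noisy trial data would first be smoothed.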


Military Psychology | 2013

Deriving Training Strategies for Spatial Knowledge Acquisition From Behavioral, Cognitive, and Neural Foundations

Kay M. Stanney; Joseph Cohn; Laura Milham; Kelly S. Hale; Rudy Darken; Joseph Sullivan

While much has been made of the potential uses for virtual environment (VE) technologies as training aids, there are few guidelines and strategies to inform system development from the user’s perspective. Assumptions are that a human factors-based evaluation will ensure optimal performance, transferring training from virtual to real worlds; however, there are complex, yet unexplored, issues surrounding system optimization and employment. A comprehensive investigation into the foundations of training, traversing levels of performance analysis, from overt behavioral responses to the less explicit neuronal patterns, is proposed from which optimal training strategies can be inferred and system development guidelines deduced.


Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 2008

Sensory-Perceptual Objective Task (SPOT) Taxonomy: A Task Analysis Tool

Roberto K. Champney; Meredith Carroll; Laura Milham; Kelly S. Hale

Task analyses serve multiple purposes in system design, yet even given the resource-intensive nature of such processes, further insight is still needed for the design of multi-modal systems. A Sensory Task Analysis (STA) allows a very granular task decomposition into sensory information and interaction capability requirements, enabling the determination of how individuals in the domain gather information as well as act upon it in the operational environment. To complete this, however, knowledge of both the domain task and human information processing theory is required. Herein a sensory-perceptual task taxonomy is presented, a task analysis tool that facilitates the decomposition of domain tasks into generalizable sensory-perceptual and response task types and greatly eases this process. This paper discusses the development and components that make up this taxonomy and presents a proof-of-concept example of how it may be used in an operational military domain. A discussion of potential applications and how this taxonomy fits within a tool to define optimal system fidelity requirements is also presented.


Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 2004

When is VE Training Effective? A Framework and Two Case Studies

Laura Milham; Kelly S. Hale; Kay Stanney; Joseph Cohn; Rudy Darken; Joseph Sullivan

There are a number of studies that suggest that virtual environment (VE) systems can facilitate transfer of training (Darken & Peterson, 2002; Péruch, Belingard, & Thinus-Blanc, 2000; Rose et al., 2001). Yet this benefit is not universal, as Brooks et al. (2002) found that VE training was not beneficial for a recognition task. Thus, VE training may facilitate some types of training tasks, while not benefiting others. As technology pushes training more frequently into highly immersive environments, it is important to delineate and examine characteristics of VE trainers (such as egocentric perspective or multimodal interactivity) and consider the impact these characteristics have on training tasks and desired outcomes. Towards this end, a preliminary framework is herein presented which suggests that specific characteristics of VE systems impact specific training outcomes. This effort presents two operationally-based case studies that begin to examine different parts of the proposed framework. The first study examines how egocentric perspective was introduced into helicopter navigator training to impact spatial knowledge acquisition; the second study investigates how a VE was used to introduce interactivity into training to improve training outcomes for procedural knowledge. Given the low number of participants, no substantive conclusions can be made; rather, the studies are framed as an initial approach to VE training effectiveness within a theoretical model.


Proceedings of the 53rd Annual Conference of the Human Factors and Ergonomics Society | 2009

Developing impact assessment for training systems research & development: A case study of the strategic approach for the NEWIT system

Tim Kotnour; Kay Stanney; Rafael E. Landaeta; Laura Milham; Julie M. Drexler; Denise Nicholson

Impact assessment seeks to evaluate the effects of a new system on its target beneficiaries and is an essential process via which to tangibly demonstrate the operational and economic benefits of a research and development (R&D) program. This paper contributes a framework, Program-management Understanding, Measurement, and Assessment (PUMA), for developing an impact assessment approach for planning and evaluating R&D programs. The intent of the framework is to help an R&D organization provide traceability from the identification of program needs, to selecting and conducting R&D, to implementation, to defining and measuring results. The framework is demonstrated using an Office of Naval Research project.


International Journal of Learning Technology | 2017

An examination of virtual environment training fidelity on training effectiveness

Roberto K. Champney; Kay M. Stanney; Laura Milham; Meredith Carroll; Joseph Cohn

Live training is a vital component of military training. Unfortunately, it can be expensive, resource intensive, of limited accessibility, or impossible to conduct due to the risks involved. Virtual environment (VE) training systems can provide trainees with opportunities to practice key skills and work out performance issues in a more cost-effective environment, which may lead to more efficient use of live training time. The current study explored this premise by conducting a transfer-of-training study that examined whether pre-training in low- and/or high-fidelity VEs can lead to time savings and improved performance in live training environments. In this study, four-person teams received training on a room-clearing task in either a low-fidelity VE, a high-fidelity VE, or no pre-training at all, after receiving familiarisation on the task. After training, all groups transferred to a live shoothouse for 20 test trials. Results suggest that high-fidelity VE pre-training may facilitate both faster skill acquisition and better performance in a transfer environment. Although the sample size may have prevented the findings from reaching statistical significance, the effects were consistent and of moderate to high effect sizes, suggesting an effect is present.
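
The "moderate to high effect sizes" reported despite non-significance can be quantified with a standard measure such as Cohen's d (difference in group means divided by the pooled standard deviation). The room-clearing times below are invented for illustration; they are not data from the study.

```python
import math

def cohens_d(group_a, group_b):
    """Cohen's d for two independent groups, using the pooled
    standard deviation with sample variances (n - 1 denominator)."""
    na, nb = len(group_a), len(group_b)
    ma = sum(group_a) / na
    mb = sum(group_b) / nb
    va = sum((x - ma) ** 2 for x in group_a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in group_b) / (nb - 1)
    pooled_sd = math.sqrt(((na - 1) * va + (nb - 1) * vb) / (na + nb - 2))
    return (ma - mb) / pooled_sd

# Invented room-clearing times in seconds (lower is better)
no_pretrain = [47.0, 45.5, 49.0, 46.0]
high_fidelity = [42.0, 39.5, 41.0, 38.0]
print(round(cohens_d(no_pretrain, high_fidelity), 2))
```

With very small groups like these, even a large d can fail a significance test, which is exactly the situation the abstract describes: effect-size reporting preserves the signal that hypothesis testing alone would discard.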


Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 2006

Exploring eye tracking measures to understand operator performance

Dervon Chang; Sven Fuchs; Laura Milham; Meredith Bell-Carroll; Kay Stanney

Eye tracker technology is a growing and viable source for more sensitive, unobtrusive, and objective measures of operator performance and cognitive state. Several eye movement metrics have been validated in the empirical literature, but caution is advised when linking low level eye movements (e.g., fixations) to high level cognitive constructs (e.g., workload). Valid analysis of eye movement data is vulnerable to output interpretation, metric granularity, and incomplete views of operator performance. To address these issues, more research is needed to exploit contextual information from other performance measures, identify metric deficiencies, and develop useful composite measures. Individual eye movement metrics alone provide an insufficient picture of operator cognition and performance, but when purposefully combined with other metrics (e.g., other physiological sensor data), offer a more comprehensive look at operator performance. Understanding why operator errors occur can help researchers identify information-processing bottlenecks, possibly allowing designers to find ways to improve performance.
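
One way to build the composite measures the abstract calls for is to standardize each metric (so different units are comparable) and combine them with weights. This is a generic sketch of that idea, not the authors' method; the metric names, values, and equal weights below are assumptions for illustration.

```python
def zscore(values):
    """Standardize a metric (population SD) so metrics with
    different units can be combined on a common scale."""
    mean = sum(values) / len(values)
    sd = (sum((v - mean) ** 2 for v in values) / len(values)) ** 0.5
    return [(v - mean) / sd for v in values]

def composite(metric_sets, weights):
    """Weighted sum of z-scored metrics; one score per trial/operator."""
    standardized = [zscore(m) for m in metric_sets]
    n = len(metric_sets[0])
    return [sum(w * s[i] for w, s in zip(weights, standardized))
            for i in range(n)]

# Invented per-trial data: mean fixation duration (ms) paired with a
# second physiological channel, heart rate (bpm), as the abstract suggests
fixation_ms = [220, 260, 310, 400]
heart_bpm = [70, 74, 80, 92]
print(composite([fixation_ms, heart_bpm], weights=[0.5, 0.5]))
```

The open research question the abstract raises is precisely how to choose those weights and validate that the composite tracks the cognitive construct of interest, rather than treating any single channel as sufficient.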


Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 2005

Assessing and Enhancing Team Training Technologies: From the Lab to the Field

Joseph Cohn; Roy Stripling; Dylan Schmorrow; Kay Stanney; Laura Milham; Kelly Kingdon-Hale; Richard Schaffer; Eric Muth; Fred S. Switzer; Jared Freeman

The primary goal of any training system is to enhance performance on the real-world tasks it simulates (Lathan et al., 2002). The principal benefit that Virtual Environment (VE) systems have over field exercises and similar real-world training is that VEs present training situations that would be too hazardous or too costly to reproduce in the real world (Rose, Attree, Brooks, Parslow, Penn & Ambihaipahan, 2000). As compared with legacy-type systems or physical mockups, VEs also afford a smaller footprint and greater reconfigurability to support a variety of training tasks (Cohn, Helmick, Meyers & Burns, 2000). An additional, relatively unexplored benefit of VEs is the potential ease with which they can be employed to train team tasks. This panel will investigate the utility of using VEs for team training, addressing both theoretical and practical concerns.

Collaboration


Dive into Laura Milham's collaborations.

Top Co-Authors

Kay M. Stanney, University of Central Florida

Joseph Cohn, Office of Naval Research

Kelly S. Hale, University of Central Florida

Roberto K. Champney, University of Central Florida

David Jones, University of North Carolina at Chapel Hill

Eduardo Salas, University of Southern California

Joseph Sullivan, Naval Postgraduate School

Rudy Darken, Naval Postgraduate School

Clint A. Bowers, University of Central Florida