
Publications


Featured research published by W. Todd Nelson.


Journal of the Acoustical Society of America | 2000

A speech corpus for multitalker communications research

Robert S. Bolia; W. Todd Nelson; Mark A. Ericson; Brian D. Simpson

A database of speech samples from eight different talkers has been collected for use in multitalker communications research. Descriptions of the nature of the corpus, the data collection methodology, and the means for obtaining copies of the database are presented.


Human Factors | 1998

Effects of Localized Auditory Information on Visual Target Detection Performance Using a Helmet-Mounted Display

W. Todd Nelson; Lawrence J. Hettinger; James A. Cunningham; Bart J. Brickman; Michael W. Haas; Richard L. McKinley

An experiment was conducted to evaluate the effects of localized auditory information on visual target detection performance. Visual targets were presented on either a wide field-of-view dome display or a helmet-mounted display and were accompanied by either localized, nonlocalized, or no auditory information. The addition of localized auditory information resulted in significant increases in target detection performance and significant reductions in workload ratings as compared with conditions in which auditory information was either nonlocalized or absent. Qualitative and quantitative analyses of participants' head motions revealed that the addition of localized auditory information resulted in extremely efficient and consistent search strategies. Implications for the development and design of multisensory virtual environments are discussed. Actual or potential applications of this research include the use of spatial auditory displays to augment visual information presented in helmet-mounted displays, thereby leading to increases in performance efficiency, reductions in physical and mental workload, and enhanced spatial awareness of objects in the environment.


Human Factors | 2005

Target Acquisition with UAVs: Vigilance Displays and Advanced Cuing Interfaces

Daniel V. Gunn; Joel S. Warm; W. Todd Nelson; Robert S. Bolia; Donald A. Schumsky; Kevin J. Corcoran

Vigilance and threat detection are critical human factors considerations in the control of unmanned aerial vehicles (UAVs). Utilizing a vigilance task in which threat detections (critical signals) led observers to perform a subsequent manual target acquisition task, this study provides information that might have important implications for both of these considerations in the design of future UAV systems. A sensory display format resulted in more threat detections, fewer false alarms, and faster target acquisition times and imposed a lighter workload than did a cognitive display format. Additionally, advanced visual, spatial-audio, and haptic cuing interfaces enhanced acquisition performance over no cuing in the target acquisition phase of the task, and they did so to a similar degree. Thus, in terms of potential applications, this research suggests that a sensory format may be the best display format for threat detection by future UAV operators, that advanced cuing interfaces may prove useful in future UAV systems, and that these interfaces are functionally interchangeable.


The International Journal of Aviation Psychology | 2004

Evaluating Adaptive Multisensory Displays for Target Localization in a Flight Task

Robert S. Tannen; W. Todd Nelson; Robert S. Bolia; Joel S. Warm; William N. Dember

This study was designed to determine the efficacy of providing target location information via head-coupled visual and spatial audio displays presented in adaptive and nonadaptive configurations. Twelve United States Air Force pilots performed a simulated flight task in which they were instructed to maintain flight parameters while searching for ground and air targets. The integration of visual displays with spatial audio cueing enhanced performance efficiency, especially when targets were most difficult to detect. Several of the interface conditions were also associated with lower ratings of perceived mental workload. The benefits associated with multisensory cueing were equivalent in both adaptive and nonadaptive configurations.


Human Factors | 2001

Asymmetric Performance in the Cocktail Party Effect: Implications for the Design of Spatial Audio Displays

Robert S. Bolia; W. Todd Nelson; Rebecca M. Morley

An experiment was conducted to determine the extent to which hemispheric specialization is manifested in the performance of tasks in which listeners are required to attend to one of several simultaneously spoken speech communications. Speech intelligibility and response time were measured under factorial combinations of the number of simultaneous talkers, the target talker hemifield, and the spatial arrangement of talkers. Intelligibility was found to be mediated by all of the independent variables. Results are discussed in terms of their actual or potential applications, which include the design of adaptive spatial audio interfaces for speech communications.


Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 1999

Spatial Audio Displays for Speech Communications: A Comparison of Free Field and Virtual Acoustic Environments

W. Todd Nelson; Robert S. Bolia; Mark A. Ericson; Richard L. McKinley

The ability of listeners to detect, identify, and monitor multiple simultaneous speech signals was measured in free field and virtual acoustic environments. Factorial combinations of four variables, including audio condition, spatial condition, the number of speech signals, and the sex of the talker were employed using a within-subjects design. Participants were required to detect the presentation of a critical speech signal among a background of non-signal speech events. Results indicated that spatial separation increased the percentage of correctly identified critical speech signals as the number of competing messages increased. These outcomes are discussed in the context of designing binaural speech displays to enhance speech communication in aviation environments.


The International Journal of Aviation Psychology | 2001

Applying Adaptive Control and Display Characteristics to Future Air Force Crew Stations

Michael W. Haas; W. Todd Nelson; Daniel W. Repperger; Robert S. Bolia; Greg Zacharias

The Human Effectiveness Directorate of the Air Force Research Laboratory is developing and evaluating human-machine interface concepts to enhance overall weapon system performance by embedding knowledge of the operator's state inside the interface, enabling the interface to make informed, automated decisions regarding many of the interface's information management display characteristics. Some of these characteristics include information modality, spatial arrangement, and temporal organization. By increasing the ability of the interface to respond, or adapt, to the changing requirements of the human operator in real time (in essence, closing the loop), the interface provides the operator with intuitive information management and real-time human engineering.
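
The closed-loop adaptive interface described in this abstract can be pictured as a simple sense-decide-render cycle. The Python outline below is only an illustrative sketch of that idea; the operator-state fields, thresholds, and display options are assumptions introduced here for illustration and are not taken from the paper or the Laboratory's implementation.

# Illustrative sketch of an adaptive display manager: estimate the operator's
# state, then select display characteristics (modality, spatial arrangement,
# temporal organization). All names and thresholds are hypothetical.
from dataclasses import dataclass

@dataclass
class OperatorState:
    workload: float      # 0.0 (low) to 1.0 (high), e.g., from a workload estimator
    visual_load: float   # share of attention committed to visual channels

@dataclass
class DisplayConfig:
    modality: str             # "visual", "audio", or "multisensory"
    spatial_arrangement: str  # e.g., "head-coupled" vs. "fixed"
    update_rate_hz: float     # temporal organization of information delivery

def adapt_display(state: OperatorState) -> DisplayConfig:
    """Choose display characteristics from an estimate of the operator's state."""
    if state.workload > 0.8:
        # Heavy overall workload: slow and simplify the information stream.
        return DisplayConfig("visual", "fixed", update_rate_hz=1.0)
    if state.visual_load > 0.6:
        # Visual channel saturated: offload information to spatial audio.
        return DisplayConfig("audio", "head-coupled", update_rate_hz=4.0)
    # Spare capacity: present redundant multisensory cues at a higher rate.
    return DisplayConfig("multisensory", "head-coupled", update_rate_hz=8.0)

if __name__ == "__main__":
    # Closing the loop: each cycle, re-estimate operator state and re-select the display.
    for workload, visual_load in [(0.3, 0.2), (0.5, 0.7), (0.9, 0.9)]:
        config = adapt_display(OperatorState(workload, visual_load))
        print(workload, visual_load, config)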


Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 1995

An Initial Study of the Effects of 3-Dimensional Auditory Cueing on Visual Target Detection

Richard L. McKinley; William R. D'Angelo; Michael W. Haas; David R. Perrott; W. Todd Nelson; Lawrence J. Hettinger; Bart J. Brickman

Developments in virtual environment technology are enabling the rapid generation of systems that provide synthetic visual and auditory displays. The successful use of this technology in education, training, entertainment, and various other applications relies to a great extent on the effective combination of visual and auditory information. Little is known about the basic interactions between the auditory system and the visual system in real environments or virtual environments. Therefore, the purpose of the current study was to begin to assess the effectiveness of various combinations of visual-auditory information in supporting the performance of a common task (detecting targets) in a virtual environment.


Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 2002

Target Acquisition with UAVs: Vigilance Displays and Advanced Cueing Interfaces

Daniel V. Gunn; W. Todd Nelson; Robert S. Bolia; Joel S. Warm; Donald A. Schumsky; Kevin J. Corcoran

Future Uninhabited Aerial Vehicles (UAVs) will require operators to switch quickly and efficiently from supervisory to manual control. Utilizing a vigilance task in which threat detections (critical signals) led observers to perform a subsequent manual target acquisition task, the present investigation revealed that the type of vigilance display might have important design implications for future UAV systems. A sensory display format resulted in more threat detections, fewer false alarms, and faster target acquisition times and imposed a lighter workload than a cognitive display format. Thus, the former may be the best display arrangement for future UAV controllers. Additionally, advanced visual, spatial audio, and haptic cueing interfaces enhanced acquisition performance over no cueing in the target acquisition phase of the task, and did so to a similar degree. This finding suggests that advanced cueing interfaces may also prove useful in future UAV systems and that these interfaces are functionally interchangeable.


Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 1996

Effects of Virtually-Augmented Fighter Cockpit Displays on Pilot Performance, Workload, and Situation Awareness

Lawrence J. Hettinger; Bart J. Brickman; Merry M. Roe; W. Todd Nelson; Michael W. Haas

Virtually-augmented display concepts are being developed at the US Air Force Armstrong Laboratory's Synthesized Immersion Research Environment (SIRE) Facility at Wright-Patterson Air Force Base, Ohio, for use in future USAF crew stations. These displays incorporate aspects of virtual environment technology to provide users with intuitive, multisensory representations of operationally relevant information. This paper describes an evaluation that was recently conducted to contrast the effects of conventional F-15-style cockpit displays and virtually-augmented, multisensory cockpit displays on pilot-aircraft system performance, workload, and situation awareness in a simulated air combat task. Eighteen military pilots from the United States, France, and Great Britain served as test pilots. The results indicate a statistically significant advantage for the virtually-augmented cockpit configuration across all three classes of measures investigated. The results are discussed in terms of their relevance for the continuing evolution of advanced crew station design.

Collaboration


Dive into W. Todd Nelson's collaborations.

Top Co-Authors

Robert S. Bolia, Wright-Patterson Air Force Base
Joel S. Warm, Air Force Research Laboratory
Mark A. Ericson, Air Force Research Laboratory
Richard L. McKinley, Wright-Patterson Air Force Base
Michael W. Haas, Wright-Patterson Air Force Base
Michael A. Vidulich, Air Force Research Laboratory
Allen W. Dukes, Wright-Patterson Air Force Base
Brian D. Simpson, Air Force Research Laboratory