Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Douglas S. Brungart is active.

Publication


Featured research published by Douglas S. Brungart.


Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 2004

3D Audio Cueing for Target Identification in a Simulated Flight Task

Brian D. Simpson; Douglas S. Brungart; Robert H. Gilkey; Jeffrey L. Cowgill; Ronald C. Dallman; Randall F. Green; Kevin L. Youngblood; Thomas Moore

Modern Traffic Advisory Systems (TAS) can increase flight safety by providing pilots with real-time information about the locations of nearby aircraft. However, most current collision avoidance systems rely on non-intuitive visual and audio displays that may not allow pilots to take full advantage of this information. In this experiment, we compared the response times required for subjects participating in a fully-immersive simulated flight task to visually acquire and identify nearby targets under four different simulated TAS display conditions: 1) no display; 2) a visual display combined with a non-spatialized warning sound; 3) a visual display combined with a clock-coordinate speech signal; and 4) a visual display combined with a spatialized auditory warning sound. The results show that response times varied in an orderly fashion as a function of display condition, with the slowest times occurring in the no display condition and the fastest times occurring in the 3D audio display condition, where they were roughly 25% faster than those without the 3D audio cues.
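
For concreteness, the clock-coordinate speech condition verbally encodes a target's relative bearing as a clock position, with each "hour" spanning 30° of azimuth. A minimal sketch of that mapping follows; the helper below is hypothetical and not taken from the study.

    def azimuth_to_clock(rel_az_deg: float) -> str:
        """Convert a relative azimuth (0 deg = straight ahead, positive = right)
        to a clock-coordinate call such as '2 o'clock'."""
        hour = round((rel_az_deg % 360.0) / 30.0) % 12
        return f"{12 if hour == 0 else hour} o'clock"

    # azimuth_to_clock(60.0)  -> '2 o'clock'   (target ahead and to the right)
    # azimuth_to_clock(-90.0) -> '9 o'clock'   (target directly off the left wing)

By contrast, the spatialized (3D audio) warning in condition 4 conveys the same bearing implicitly, by rendering the sound so that it appears to come from the target's direction.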


IEEE Journal of Selected Topics in Signal Processing | 2015

Efficient Real Spherical Harmonic Representation of Head-Related Transfer Functions

Griffin D. Romigh; Douglas S. Brungart; Richard M. Stern; Brian D. Simpson

Several methods have recently been proposed for modeling spatially continuous head-related transfer functions (HRTFs) using techniques based on finite-order spherical harmonic expansion. These techniques inherently impart some amount of spatial smoothing to the measured HRTFs. However, the effect this spatial smoothing has on localization accuracy has not been analyzed. Consequently, the relationship between the order of a spherical harmonic representation for HRTFs and the maximum localization ability that can be achieved with that representation remains unknown. The present study investigates the effect that spatial smoothing has on virtual sound source localization by systematically reducing the order of a spherical-harmonic-based HRTF representation. Results of virtual localization tests indicate that accurate localization performance is retained with spherical harmonic representations as low as fourth order, and several important physical HRTF cues are shown to be present even in a first-order representation. These results suggest that listeners do not rely on the fine details in an HRTF's spatial structure and imply that some of the theoretically derived bounds for HRTF sampling may exceed perceptual requirements.
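
As a sketch of the kind of representation discussed above (the notation here is assumed, not quoted from the paper), a finite-order real spherical harmonic expansion of an HRTF measured at azimuth \(\theta\) and elevation \(\phi\) can be written as

\[ H(\theta,\phi,f) \;\approx\; \sum_{n=0}^{N}\sum_{m=-n}^{n} c_{nm}(f)\, Y_{nm}(\theta,\phi), \]

where the \(Y_{nm}\) are real spherical harmonic basis functions and the \(c_{nm}(f)\) are frequency-dependent coefficients, typically fit by least squares to HRTFs measured over a grid of directions. Truncating the sum at order \(N\) is what imparts the spatial smoothing described above; an order-\(N\) representation uses \((N+1)^2\) coefficients per ear and frequency bin, so a fourth-order representation needs only 25.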


Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 2005

Spatial Audio as a Navigation Aid and Attitude Indicator

Brian D. Simpson; Douglas S. Brungart; Ronald C. Dallman; Jacque M. Joffrion; Michael D. Presnar; Robert H. Gilkey

Most current display systems in general aviation (GA) environments employ, at best, relatively simple audio displays that do not fully exploit a pilot's auditory processing capabilities. Spatial audio displays, however, take advantage of the spatial processing capabilities of the auditory system and can provide the pilot, in an intuitive manner, with comprehensive information about the status of an aircraft. This paper describes a study conducted to assess the utility of spatial audio as (1) a navigation aid and (2) an attitude indicator in an actual flight environment. Performance was measured in tasks requiring pilots to fly in the direction of a spatial audio “navigation beacon” and to use an auditory artificial horizon display to detect changes in attitude and maintain straight and level flight when no visual cues were available. The results indicate that spatial audio displays can be used effectively by pilots for both navigation and attitude monitoring, and thus may be a valuable tool in supporting pilot situation awareness and improving overall safety in GA environments.
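
To make the auditory artificial horizon concept concrete, one hypothetical mapping from aircraft attitude to audio cue parameters is sketched below. It is purely illustrative and is not the encoding used in the study.

    # Hypothetical attitude-to-audio mapping for an auditory artificial horizon.
    # Illustrative only; not the encoding evaluated in the study above.

    def attitude_to_audio(roll_deg: float, pitch_deg: float,
                          max_roll: float = 60.0, max_pitch: float = 30.0):
        """Map bank angle to left/right pan and pitch angle to a tone frequency."""
        # Clamp to the assumed display range.
        roll = max(-max_roll, min(max_roll, roll_deg))
        pitch = max(-max_pitch, min(max_pitch, pitch_deg))

        # Pan: -1.0 (hard left) for a full left bank, +1.0 for a full right bank.
        pan = roll / max_roll

        # Frequency: a 440 Hz reference tone shifted up to one octave up (nose up)
        # or one octave down (nose down).
        freq_hz = 440.0 * 2.0 ** (pitch / max_pitch)

        return pan, freq_hz

    # Straight-and-level flight maps to a centered 440 Hz tone:
    # attitude_to_audio(0.0, 0.0) -> (0.0, 440.0)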


Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 2007

In-Flight Navigation Using Head-Coupled and Aircraft-Coupled Spatial Audio Cues

Brian D. Simpson; Douglas S. Brungart; Ronald C. Dallman; Richard J. Yasky; Griffin D. Romigh; John Raquet

A flight test was conducted to evaluate how effectively spatialized audio cues could be used to maneuver a general aviation aircraft through a complex navigation course. Two conditions were tested: a head-coupled condition, where audio cues were updated in response to changes in the orientation of the pilot's head, and an aircraft-coupled condition, where audio cues were updated in response to changes in the direction of the aircraft. Both cueing conditions resulted in excellent performance, with the pilots on average passing within 0.25 nm of the waypoints on the navigation course. However, overall performance was better in the aircraft-coupled condition than in the head-coupled condition. This result is discussed in terms of an alignment mismatch between the pilot's frame of reference and that of the aircraft, which is critical when using spatial audio to cue the desired orientation of the vehicle rather than the location of an object in space.
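
The distinction between the two cueing conditions comes down to the reference frame in which the waypoint bearing is expressed before the cue is rendered. The sketch below illustrates that difference; the flat-earth bearing math, yaw-only head model, and function names are assumptions for illustration, not details of the flight test.

    import math

    def bearing_deg(lat1, lon1, lat2, lon2):
        """Approximate initial great-circle bearing from point 1 to point 2, in degrees."""
        dlon = math.radians(lon2 - lon1)
        phi1, phi2 = math.radians(lat1), math.radians(lat2)
        x = math.sin(dlon) * math.cos(phi2)
        y = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
        return math.degrees(math.atan2(x, y)) % 360.0

    def cue_azimuth(waypoint_bearing, aircraft_heading, head_yaw, head_coupled):
        """Azimuth (degrees) at which to render the waypoint cue over the headphones."""
        if head_coupled:
            # Head-coupled: compensate for head yaw, so the cue stays fixed in the
            # world and sweeps across the headphone display as the head turns.
            rel = waypoint_bearing - aircraft_heading - head_yaw
        else:
            # Aircraft-coupled: ignore head yaw, so the cue stays fixed relative to
            # the aircraft nose and moves only as the aircraft turns.
            rel = waypoint_bearing - aircraft_heading
        return (rel + 180.0) % 360.0 - 180.0   # wrap to [-180, 180)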


Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 2008

An “Audio Annotation” Technique for Intuitive Communication in Spatial Tasks

Adrienne J. Ephrem; Victor Finomore; Robert H. Gilkey; Rihana M. Newton; Griffin D. Romigh; Brian D. Simpson; Douglas S. Brungart; Jeffrey L. Cowgill

Navigating in any environment can be a tedious task, and it becomes considerably more difficult when the environment is unfamiliar or contains threats that could jeopardize the success of the mission. Often in difficult navigation environments, external sensors in the area could provide critical information to the ground operator. The challenge is to find a way to transmit this information to the ground operator in an intuitive, timely, and unambiguous manner. In this study, we explore a technique called “audio annotation,” in which sensor information is transmitted to a remote observer who processes it and relays it verbally to an operator on the ground. Spatial information can be conveyed intuitively by using a spatial audio display to project the apparent location of the remote observer's voice to an arbitrary location relative to the ground operator. The current study compared the “audio annotation” technique to standard monaural communications in a task that required a remote observer with a high-level view of the environment to assist a ground operator in avoiding threats while locating a downed pilot in an urban environment. Overall performance in the audio annotation condition was found to be superior to performance in the standard monaural condition.


Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 2002

Segregation of Multiple Talkers in the Vertical Plane: Implications for the Design of a Multiple Talker Display

Ken I. McAnally; Robert S. Bolia; Russell L. Martin; Geoff Eberle; Douglas S. Brungart

Three experiments were conducted to evaluate the effect of spatial separation of multiple talkers in the vertical plane on speech intelligibility. The first experiment demonstrated a release from masking due to separation in the median plane, and that this release was not due to the presence of residual interaural time differences (ITDs). The second experiment showed that this release corresponded to an increase in signal level of 1.3 dB. The third experiment demonstrated that the increase in intelligibility due to separation in elevation and that due to separation in azimuth were not additive. Results are discussed in terms of their implications for the design of spatial audio displays.


Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 2002

Sound Localization with Hearing Protectors: Performance and Head Motion Analysis in a Visual Search Task

Brian D. Simpson; Robert S. Bolia; Richard L. McKinley; Douglas S. Brungart

The effects of hearing protection on sound localization were examined in the context of an auditory-cued visual search task. Participants were required to locate a visual target in a field of 5, 20, or 50 visual distractors randomly distributed throughout ±180° of azimuth and from approximately −70° to +90° in elevation. Four conditions were examined in which an auditory cue, spatially co-located with the visual target, was presented. In these conditions, participants wore (1) earplugs, (2) earmuffs, (3) both earplugs and earmuffs, or (4) no hearing protection. In addition, a control condition was examined in which no auditory cue was provided. Visual search times and head motion data suggest that the degree to which localization cues are disrupted with hearing protection devices varies with the type of device worn. Moreover, when both earplugs and earmuffs are worn, search times approach those found with no auditory cue, suggesting that sound localization cues are nearly completely eliminated in this condition.


Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 2012

Spatial Multisensory Cueing to Support Visual Target-Acquisition Performance

Julio C. Mateo; Brian D. Simpson; Robert H. Gilkey; Nandini Iyer; Douglas S. Brungart

The impact of spatial multisensory cues on target-acquisition performance was examined. Response times (RTs) obtained in the absence of spatial cues were compared to those obtained when tactile, auditory, or audiotactile cues indicated the target location. Visual scene complexity was manipulated by varying the number of visual distractors present. The results indicated that all these spatial cues effectively reduced RTs. The benefit of cueing was greater when more distractors were present and when targets were presented from more eccentric locations. Although the benefit was greatest for conditions containing auditory cues, tactile cues alone had a large benefit. No apparent advantage of audiotactile cues over auditory cues was observed, suggesting that the auditory cues provided sufficient information to support performance. Future research will explore whether audiotactile cues are more helpful when the auditory cues are degraded (e.g., when presented in noisy environments or in generic virtual auditory displays).


Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 2008

Flying by Ear: Blind Flight with a Music-Based Artificial Horizon

Brian D. Simpson; Douglas S. Brungart; Ronald C. Dallman; Richard J. Yasky; Griffin D. Romigh

Two experiments were conducted in actual flight operations to evaluate an audio artificial horizon display that imposed aircraft attitude information on pilot-selected music. The first experiment examined a pilot's ability to identify, with vision obscured, a change in aircraft roll or pitch, with and without the audio artificial horizon display. The results suggest that the audio horizon display improves the accuracy of attitude identification overall, but differentially affects response time across conditions. In the second experiment, subject pilots performed recoveries from displaced aircraft attitudes using either standard visual instruments or, with vision obscured, the audio artificial horizon display. The results suggest that subjects were able to maneuver the aircraft to within its safety envelope. Overall, pilots were able to benefit from the display, suggesting that such a display could help to improve overall safety in general aviation.


IEEE Journal of Selected Topics in Signal Processing | 2015

Free-Field Localization Performance With a Head-Tracked Virtual Auditory Display

Griffin D. Romigh; Douglas S. Brungart; Brian D. Simpson

Virtual auditory displays are systems that use signal processing techniques to manipulate the apparent spatial locations of sounds when they are presented to listeners over headphones. When the virtual audio display is limited to the presentation of stationary sounds at a finite number of source locations, it is possible to produce virtual sounds that are essentially indistinguishable from sounds presented by real loudspeakers in the free field. However, when the display is required to reproduce sound sources at arbitrary locations and respond in real time to the head motions of the listener, it becomes much more difficult to maintain localization performance that is equivalent to the free field. The purpose of this paper is to present the results of a study that used a virtual synthesis technique to produce head-tracked virtual sounds whose localization performance was comparable to that of real sound sources. The technique relied on an in-situ measurement and reproduction procedure that made it possible to switch between the head-related transfer function measurement and the psychoacoustic validation without removing the headset from the listener. The results demonstrate the feasibility of using head-tracked virtual auditory displays to generate both short and long virtual sounds with localization performance comparable to what can be achieved in the free field.
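
The core rendering operation in a head-tracked virtual auditory display is to select (or interpolate) a head-related impulse response pair for the source direction relative to the listener's current head orientation, then convolve it with the source signal. A minimal sketch follows; the data layout, nearest-neighbor lookup, and yaw-only head model are assumptions for illustration, not the measurement and reproduction system described in the paper.

    import numpy as np
    from scipy.signal import fftconvolve

    def render_block(mono_block, source_az, source_el, head_yaw, hrir_bank):
        """Spatialize one block of audio for the listener's current head orientation.

        hrir_bank maps (azimuth, elevation) in degrees to a (left, right) pair of
        measured head-related impulse responses (HRIRs).
        """
        # Source direction relative to the head (yaw only, for brevity).
        rel_az = (source_az - head_yaw + 180.0) % 360.0 - 180.0

        # Nearest-neighbor lookup of the closest measured direction.
        az, el = min(hrir_bank,
                     key=lambda k: (k[0] - rel_az) ** 2 + (k[1] - source_el) ** 2)
        hrir_l, hrir_r = hrir_bank[(az, el)]

        # Filter the block through each ear's impulse response.
        left = fftconvolve(mono_block, hrir_l, mode="full")
        right = fftconvolve(mono_block, hrir_r, mode="full")
        return np.stack([left, right])

    # In a real-time display this lookup and convolution runs per block, driven by
    # the head tracker, with HRIR interpolation and overlap-add across blocks.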

Collaboration


Dive into Douglas S. Brungart's collaborations.

Top Co-Authors

Brian D. Simpson (Air Force Research Laboratory)
Griffin D. Romigh (Air Force Research Laboratory)
Nandini Iyer (Air Force Research Laboratory)
Benjamin Sheffield (Walter Reed National Military Medical Center)
Dianne K. Popik (Air Force Research Laboratory)
John Ziriax (Naval Surface Warfare Center)
Robert S. Bolia (Wright-Patterson Air Force Base)
Victor Finomore (Air Force Research Laboratory)