Publications


Featured research published by Griffin D. Romigh.


IEEE Journal of Selected Topics in Signal Processing | 2015

Efficient Real Spherical Harmonic Representation of Head-Related Transfer Functions

Griffin D. Romigh; Douglas S. Brungart; Richard M. Stern; Brian D. Simpson

Several methods have recently been proposed for modeling spatially continuous head-related transfer functions (HRTFs) using techniques based on finite-order spherical harmonic expansion. These techniques inherently impart some amount of spatial smoothing to the measured HRTFs. However, the effect this spatial smoothing has on localization accuracy has not been analyzed. Consequently, the relationship between the order of a spherical harmonic representation for HRTFs and the maximum localization ability that can be achieved with that representation remains unknown. The present study investigates the effect that spatial smoothing has on virtual sound source localization by systematically reducing the order of a spherical-harmonic-based HRTF representation. Results of virtual localization tests indicate that accurate localization performance is retained with spherical harmonic representations as low as fourth-order, and several important physical HRTF cues are shown to be present even in a first-order representation. These results suggest that listeners do not rely on the fine details in an HRTF's spatial structure and imply that some of the theoretically derived bounds for HRTF sampling may exceed perceptual requirements.
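To make the expansion concrete, here is a minimal sketch (not the authors' code; all directions and magnitudes are synthetic stand-ins, and scipy's complex spherical harmonics are recombined into a real basis) of fitting a truncated real spherical harmonic representation to HRTF log-magnitudes and evaluating the smoothed result at a new direction:

```python
# A minimal sketch, not the authors' code: fit a truncated real
# spherical harmonic (SH) expansion to HRTF log-magnitudes measured at
# scattered directions, then evaluate the spatially smoothed HRTF at a
# new direction. All data here are synthetic stand-ins.
import numpy as np
from scipy.special import sph_harm  # complex SH; recombined into a real basis

def real_sh_basis(order, az, el):
    """Real SH basis matrix, shape (len(az), (order + 1) ** 2).
    az: azimuth in radians; el: elevation in radians (0 = horizontal plane)."""
    colat = np.pi / 2 - el  # sph_harm expects colatitude
    cols = []
    for l in range(order + 1):
        for m in range(-l, l + 1):
            Y = sph_harm(abs(m), l, az, colat)
            if m < 0:
                cols.append(np.sqrt(2) * Y.imag)
            elif m == 0:
                cols.append(Y.real)
            else:
                cols.append(np.sqrt(2) * Y.real)
    return np.stack(cols, axis=-1)

rng = np.random.default_rng(0)
n_dirs = 500
az = rng.uniform(0, 2 * np.pi, n_dirs)
el = np.arcsin(rng.uniform(-1, 1, n_dirs))       # roughly uniform on the sphere
hrtf_db = np.cos(3 * el) + 0.5 * np.sin(2 * az)  # stand-in log-magnitudes at one frequency

order = 4  # the order the study found sufficient for accurate localization
B = real_sh_basis(order, az, el)
coeffs, *_ = np.linalg.lstsq(B, hrtf_db, rcond=None)  # least-squares SH fit

b_new = real_sh_basis(order, np.array([0.3]), np.array([0.1]))
print("smoothed magnitude at a new direction (dB):", float(b_new @ coeffs))
```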


Workshop on Applications of Signal Processing to Audio and Acoustics | 2009

Spectral HRTF enhancement for improved vertical-polar auditory localization

Douglas S. Brungart; Griffin D. Romigh

Head-related transfer functions (HRTFs) can be a valuable tool for adding realistic spatial attributes to arbitrary sounds presented over stereo headphones. However, in practice, HRTF-based virtual audio displays are rarely able to approach the same level of localization accuracy that would be expected for listeners attending to real sound sources in the free field. In this paper, we present a novel HRTF enhancement technique that systematically increases the salience of the direction-dependent spectral cues that listeners use to determine the elevations of sound sources. The technique is shown to produce substantial improvements in localization accuracy in the vertical-polar dimension for individualized and non-individualized HRTFs, without negatively impacting performance in the left-right localization dimension.
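The abstract does not give the enhancement algorithm itself; one hedged sketch of the general idea, assuming the directional cues are each location's deviation from the across-location mean log-magnitude spectrum, is:

```python
# A hedged sketch of the general idea only, not the authors' exact
# algorithm: treat the mean log-magnitude spectrum across directions as
# the direction-independent part, and scale up each direction's
# deviation from it so the directional spectral cues become more salient.
import numpy as np

def enhance_hrtf_mag(hrtf_db, gain=2.0):
    """hrtf_db: array of shape (n_directions, n_freqs) in dB.
    gain > 1 exaggerates the direction-dependent spectral detail."""
    mean_db = hrtf_db.mean(axis=0, keepdims=True)  # shared spectral shape
    return mean_db + gain * (hrtf_db - mean_db)    # contrast-enhanced HRTFs

rng = np.random.default_rng(1)
hrtf_db = rng.normal(0.0, 3.0, size=(100, 256))    # synthetic stand-in HRTFs
enhanced = enhance_hrtf_mag(hrtf_db, gain=2.0)
# Directional variation doubles while the mean spectrum is unchanged:
print(hrtf_db.std(axis=0).mean(), enhanced.std(axis=0).mean())
```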


Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 2007

In-Flight Navigation Using Head-Coupled and Aircraft-Coupled Spatial Audio Cues

Brian D. Simpson; Douglas S. Brungart; Ronald C. Dallman; Richard J. Yasky; Griffin D. Romigh; John Raquet

A flight test was conducted to evaluate how effectively spatialized audio cues could be used to maneuver a general aviation aircraft through a complex navigation course. Two conditions were tested: a head-coupled condition, where audio cues were updated in response to changes in the orientation of the pilot's head, and an aircraft-coupled condition, where audio cues were updated in response to changes in the direction of the aircraft. Both cueing conditions resulted in excellent performance, with the pilots on average passing within 0.25 nm of the waypoints on the navigation course. However, overall performance was better in the aircraft-coupled condition than in the head-coupled condition. This result is discussed in terms of an alignment mismatch between the pilot's frame of reference and that of the aircraft, which is critical when using spatial audio to cue the desired orientation of the vehicle rather than the location of an object in space.
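The head-coupled versus aircraft-coupled distinction comes down to which frame the cue azimuth is computed in. A toy sketch with hypothetical angles (yaw only) of the two update rules:

```python
# A toy sketch with hypothetical angles (yaw only): the two conditions
# differ in whether the cue azimuth is referenced to the pilot's head
# or to the aircraft's nose.
def wrap_deg(a):
    """Wrap an angle to the interval [-180, 180) degrees."""
    return (a + 180.0) % 360.0 - 180.0

def cue_azimuth(waypoint_bearing, aircraft_heading, head_yaw, head_coupled):
    """All angles in degrees; head_yaw is the head's yaw relative to the nose."""
    rel_to_aircraft = wrap_deg(waypoint_bearing - aircraft_heading)
    if head_coupled:
        return wrap_deg(rel_to_aircraft - head_yaw)  # cue moves with the head
    return rel_to_aircraft                           # cue fixed to the airframe

# Waypoint 10 deg right of the nose while the pilot looks 40 deg left:
print(cue_azimuth(100, 90, -40, head_coupled=True))   # 50.0
print(cue_azimuth(100, 90, -40, head_coupled=False))  # 10.0
```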


Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 2008

An “Audio Annotation” Technique for Intuitive Communication in Spatial Tasks

Adrienne J. Ephrem; Victor Finomore; Robert H. Gilkey; Rihana M. Newton; Griffin D. Romigh; Brian D. Simpson; Douglas S. Brungart; Jeffrey L. Cowgill

Navigating in any environment can be a very tedious task, and it becomes considerably more difficult when the environment is unfamiliar or when it contains threats that could jeopardize the success of the mission. Often in difficult navigation environments there are external sensors present in the area that could provide critical information to the ground operator. The challenge is to find a way to transmit this information to the ground operator in an intuitive, timely, and unambiguous manner. In this study, we explore a technique called “audio annotation,” in which the sensor information is transmitted to a remote observer who processes it and relays it verbally to an operator on the ground. Spatial information can be conveyed intuitively by using a spatial audio display to project the apparent location of the remote observer's voice to an arbitrary location relative to the ground operator. The current study compared the “audio annotation” technique to standard monaural communications in a task that required a remote observer with a high-level view of the environment to assist a ground operator in avoiding threats while locating a downed pilot in an urban environment. Overall performance in the audio annotation condition was found to be superior to that in the standard monaural condition.
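As a self-contained illustration of the simplest possible voice spatialization (an assumption here; the actual display would use HRTFs), interaural time and level differences can place a talker's voice at a desired azimuth:

```python
# A self-contained illustration only (the actual display would use
# HRTFs): place a mono voice signal at an azimuth with crude interaural
# time and level differences. Head radius and ILD depth are assumptions.
import numpy as np

def pan_voice(mono, azimuth_deg, fs=44100, head_radius=0.0875, c=343.0):
    """Return an (n, 2) stereo array; positive azimuth_deg = source to the right."""
    az = np.deg2rad(azimuth_deg)
    itd = head_radius / c * (az + np.sin(az))      # Woodworth ITD approximation, s
    shift = int(round(abs(itd) * fs))              # far-ear delay in samples
    ild = 10.0 ** (-6.0 * abs(np.sin(az)) / 20.0)  # up to ~6 dB far-ear attenuation
    near = mono
    far = np.concatenate([np.zeros(shift), mono])[: len(mono)] * ild
    left, right = (far, near) if azimuth_deg >= 0 else (near, far)
    return np.stack([left, right], axis=1)

voice = np.random.default_rng(2).normal(size=44100)  # 1 s stand-in "voice"
stereo = pan_voice(voice, azimuth_deg=60)            # talker 60 deg to the right
print(stereo.shape)                                  # (44100, 2)
```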


Frontiers in Neuroscience | 2014

Do you hear where I hear?: isolating the individualized sound localization cues

Griffin D. Romigh; Brian D. Simpson

It is widely acknowledged that individualized head-related transfer function (HRTF) measurements are needed to adequately capture all of the 3D spatial hearing cues. However, many perceptual studies have shown that localization accuracy in the lateral dimension is only minimally decreased by the use of non-individualized HRTFs. This evidence supports the idea that the individualized components of an HRTF could be isolated from those that are more general in nature. In the present study we decomposed the HRTF at each location into average, lateral, and intraconic spectral components, along with an ITD, in an effort to isolate the sound localization cues that are responsible for the inter-individual differences in localization performance. HRTFs for a given listener were then reconstructed systematically from combinations of individualized and non-individualized components, and the effect of each modification was analyzed via a virtual localization test in which brief 250-ms noise bursts were rendered with the modified HRTFs. Results indicate that the cues important for individualization of HRTFs are contained almost exclusively in the intraconic portion of the HRTF spectra and that localization is only minimally affected by introducing non-individualized cues into the other HRTF components. These results provide new insights into which inter-individual differences in head-related acoustical features are most relevant to sound localization, and provide a framework for how future human-machine interfaces might be more effectively generalized and/or individualized.
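The decomposition itself is straightforward to express. A hedged sketch on synthetic data, assuming HRTF log-magnitudes laid out on a simplified [lateral angle x intraconic angle x frequency] grid:

```python
# A hedged sketch on synthetic data: split each HRTF log-magnitude into
# average, lateral, and intraconic parts by averaging over the grid axes.
import numpy as np

rng = np.random.default_rng(3)
n_lat, n_intra, n_freq = 13, 24, 128
hrtf_db = rng.normal(0.0, 3.0, size=(n_lat, n_intra, n_freq))  # stand-in data

avg = hrtf_db.mean(axis=(0, 1), keepdims=True)       # direction-independent average
lateral = hrtf_db.mean(axis=1, keepdims=True) - avg  # varies only across cones of confusion
intraconic = hrtf_db - avg - lateral                 # residual variation within each cone

# The decomposition is exact, so components can be swapped between
# listeners (e.g., keep one's own intraconic part, replace the rest)
# and recombined into a complete HRTF, as in the study's manipulations.
assert np.allclose(avg + lateral + intraconic, hrtf_db)
print(avg.shape, lateral.shape, intraconic.shape)
```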


Journal of the Acoustical Society of America | 2013

The role of spatial detail in sound-source localization: Impact on HRTF modeling and personalization.

Griffin D. Romigh; Douglas S. Brungart; Richard M. Stern; Brian D. Simpson

While current head-related transfer function (HRTF) personalization methods offer some ability to quickly customize spatial auditory displays, these techniques generally lack the realism and performance provided by fully individualized HRTF measurements. This poor performance is likely due to the vast amount of individual spectral and spatial variation contained in a measured HRTF. While some of this variation contains important directional information, Kulkarni and Colburn (1998) showed that perceptually irrelevant spectral variation could be eliminated by smoothing the HRTF magnitude with a truncated Fourier series expansion. The present study investigates a related method for smoothing the spatial variation contained in an HRTF magnitude by utilizing a truncated spherical harmonic expansion. The perceptual impact of various degrees of spatial smoothing was evaluated by comparing localization performance against that obtained with fully individualized HRTF measurements in a virtual localization task. Results indic...
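For reference, the Kulkarni and Colburn style of spectral smoothing cited above can be sketched as truncating a Fourier-series expansion of a single log-magnitude spectrum (details such as the expansion variable and term count are assumptions here):

```python
# A sketch of the cited smoothing idea, with details assumed: keep only
# the lowest-order terms of a Fourier-series expansion of one
# log-magnitude spectrum, discarding fine spectral detail.
import numpy as np

def smooth_spectrum(mag_db, n_terms):
    """Truncate the Fourier-series expansion of a log-magnitude spectrum."""
    coeffs = np.fft.rfft(mag_db)  # expansion coefficients of the spectrum
    coeffs[n_terms:] = 0.0        # discard fine spectral detail
    return np.fft.irfft(coeffs, n=len(mag_db))

rng = np.random.default_rng(4)
mag_db = 0.2 * np.cumsum(rng.normal(size=256))  # wiggly stand-in spectrum
smoothed = smooth_spectrum(mag_db, n_terms=16)
print("RMS detail removed (dB):", np.std(mag_db - smoothed))
```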


Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 2008

Flying by Ear: Blind Flight with a Music-Based Artificial Horizon

Brian D. Simpson; Douglas S. Brungart; Ronald C. Dallman; Richard J. Yasky; Griffin D. Romigh

Two experiments were conducted in actual flight operations to evaluate an audio artificial horizon display that imposed aircraft attitude information on pilot-selected music. The first experiment examined a pilot's ability to identify, with vision obscured, a change in aircraft roll or pitch, with and without the audio artificial horizon display. The results suggest that the audio horizon display improves the accuracy of attitude identification overall, but differentially affects response time across conditions. In the second experiment, subject pilots performed recoveries from displaced aircraft attitudes using either standard visual instruments or, with vision obscured, the audio artificial horizon display. The results suggest that subjects were able to maneuver the aircraft to within its safety envelope. Overall, pilots were able to benefit from the display, suggesting that such a display could help to improve overall safety in general aviation.
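The paper's abstract does not specify how attitude was encoded in the music; purely as an illustration, one plausible encoding (all parameters hypothetical) pans the music with roll and attenuates it with pitch magnitude:

```python
# Purely illustrative; the abstract does not describe the encoding, and
# every parameter here is hypothetical: pan the music with roll
# (equal-power panning) and attenuate it with pitch magnitude.
import numpy as np

def attitude_gains(roll_deg, pitch_deg, max_roll=60.0, max_pitch=30.0):
    """Return (left_gain, right_gain) for a naive roll/pitch encoding."""
    bank = np.clip(roll_deg / max_roll, -1.0, 1.0)
    left = np.sqrt(0.5 * (1.0 - bank))   # roll right -> music shifts right
    right = np.sqrt(0.5 * (1.0 + bank))
    level = 1.0 - 0.5 * abs(np.clip(pitch_deg / max_pitch, -1.0, 1.0))
    return level * left, level * right

print(attitude_gains(roll_deg=30.0, pitch_deg=-10.0))
```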


IEEE Journal of Selected Topics in Signal Processing | 2015

Free-Field Localization Performance With a Head-Tracked Virtual Auditory Display

Griffin D. Romigh; Douglas S. Brungart; Brian D. Simpson

Virtual auditory displays are systems that use signal processing techniques to manipulate the apparent spatial locations of sounds when they are presented to listeners over headphones. When the virtual audio display is limited to the presentation of stationary sounds at a finite number of source locations, it is possible to produce virtual sounds that are essentially indistinguishable from sounds presented by real loudspeakers in the free field. However, when the display is required to reproduce sound sources at arbitrary locations and respond in real time to the head motions of the listener, it becomes much more difficult to maintain localization performance that is equivalent to the free field. The purpose of this paper is to present the results of a study that used a virtual synthesis technique to produce head-tracked virtual sounds that were comparable, in terms of localization performance, to real sound sources. The technique made use of an in-situ measurement and reproduction method that made it possible to switch between the head-related transfer function measurement and the psychoacoustic validation without removing the headset from the listener. The results demonstrate the feasibility of using head-tracked virtual auditory displays to generate both short and long virtual sounds with localization performance comparable to what can be achieved in the free field.
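The core of any head-tracked display is re-expressing a world-fixed source direction in the listener's current head frame on every update. A simplified, yaw-only sketch (not the authors' in-situ system):

```python
# A simplified, yaw-only sketch of the head-tracking step (not the
# authors' in-situ system): each update, rotate the world-fixed source
# direction into the head frame before the HRTF lookup, so the virtual
# source stays put in the world as the head turns.
import numpy as np

def yaw_matrix(yaw_rad):
    """Rotation about the vertical axis; positive yaw = head turning left."""
    c, s = np.cos(yaw_rad), np.sin(yaw_rad)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def head_relative(source_xyz, head_yaw_rad):
    """World-frame unit vector -> head-frame vector (x forward, y left)."""
    return yaw_matrix(-head_yaw_rad) @ source_xyz

source = np.array([0.0, 1.0, 0.0])  # fixed 90 deg to the listener's left
for yaw_deg in (0.0, 45.0, 90.0):   # listener turns toward the source
    v = head_relative(source, np.deg2rad(yaw_deg))
    print(yaw_deg, round(np.degrees(np.arctan2(v[1], v[0])), 1))  # 90, 45, 0
```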


Journal of the Acoustical Society of America | 2009

Head‐related transfer function enhancement for improved vertical‐polar localization.

Douglas S. Brungart; Griffin D. Romigh; Brian D. Simpson

Under ideal laboratory conditions, individualized head‐related transfer functions (HRTFs) can produce virtual sound localization performance approaching the level achieved with real sound sources in the free field. However, in real‐world applications of virtual audio, practical issues such as fit‐refit variability in the headphone response and nonindividualized HRTFs generally lead to much worse localization performance, particularly in the up‐down and front‐back dimensions. Here we present a new technique that “enhances” the localizability of a virtual sound source by increasing the spectral contrast of the acoustic features that are relevant for spatial perception within a set of locations with nearly identical binaural cues (i.e., a “cone‐of‐confusion”). Validation experiments show that this enhancement technique can improve localization accuracy across a broad range of conditions, with as much as a 33% reduction in vertical‐polar localization error for nonindividualized HRTFs measured on a KEMAR manik...
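The error metric named here lives in the interaural-polar coordinate system. A small sketch of computing a vertical-polar error under the usual convention (an assumption, since the abstract does not define its coordinates):

```python
# A small sketch under the usual interaural-polar convention (an
# assumption): the vertical-polar angle locates a source around the
# interaural axis, and error is measured in that dimension.
import numpy as np

def vertical_polar_angle(az_deg, el_deg):
    """Interaural-polar angle in degrees from conventional azimuth/elevation."""
    az, el = np.deg2rad(az_deg), np.deg2rad(el_deg)
    x = np.cos(el) * np.cos(az)          # forward component
    z = np.sin(el)                       # upward component
    return np.degrees(np.arctan2(z, x))  # rotation around the interaural axis

response, target = vertical_polar_angle(10, 40), vertical_polar_angle(10, 30)
print("vertical-polar error (deg):", round(abs(response - target), 2))
```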


Archive | 2017

When to Interrupt: A Comparative Analysis of Interruption Timings Within Collaborative Communication Tasks

Nia Peters; Griffin D. Romigh; George Bradley; Bhiksha Raj

This study seeks to determine whether it is necessary for a software agent to monitor the communication channel between a human operator and human collaborators in order to effectively detect appropriate times to convey information, or “interrupt” the operator, in a collaborative communication task. The study explores overall task performance and task time of completion (TOC) at various delivery times of periphery-task interruptions. A collaborative, goal-oriented task is simulated via a dual-task paradigm in which an operator participates in a primary collaborative communication task and a secondary keeping-track task. User performance at three interruption timings (random, fixed, and human-determined (HD)) is evaluated to determine whether an intelligent form of interrupting users is less disruptive and benefits users’ overall interaction. There is a significant difference in task performance when HD interruptions are delivered in comparison with random and fixed-timed interruptions: overall task-performance accuracy is 54% with HD interruptions, compared to 33% for fixed interruptions and 38% for random interruptions. These results are promising and provide some indication that monitoring a communication channel, or otherwise adding intelligence to the interaction, can be useful for the exchange.
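In the study the HD timings came from human judges, so any code is only a proxy; a minimal sketch of a machine policy that monitors a per-frame speech-activity signal and interrupts at the first conversational pause, with a fixed-time fallback:

```python
# Illustrative proxy only: the study's human-determined timings came
# from people, not an algorithm. This toy policy monitors a per-frame
# speech-activity signal and interrupts at the first pause of at least
# `min_gap` frames, with a fixed-deadline fallback.
def interruption_frame(speech_active, min_gap=10, deadline=200):
    """speech_active: iterable of booleans, one per frame; returns a frame index."""
    silent = 0
    for t, active in enumerate(speech_active):
        silent = 0 if active else silent + 1
        if silent >= min_gap:  # a conversational pause: deliver the interruption
            return t
        if t >= deadline:      # fall back to fixed timing
            return t
    return None

# 50 frames of talk, a 15-frame pause, then more talk:
stream = [True] * 50 + [False] * 15 + [True] * 100
print(interruption_frame(stream))  # 59: the tenth consecutive silent frame
```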

Collaboration


Dive into Griffin D. Romigh's collaborations.

Top Co-Authors

Brian D. Simpson

Air Force Research Laboratory

Douglas S. Brungart

Walter Reed National Military Medical Center

Nandini Iyer

Air Force Research Laboratory

Richard M. Stern

Carnegie Mellon University

Eric S. Schwenker

Air Force Research Laboratory

Bhiksha Raj

Carnegie Mellon University
