Publications
Featured research published by Scott H. Foster.
Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 1988
Elizabeth M. Wenzel; Frederic L. Wightman; Scott H. Foster
A three-dimensional auditory display could take advantage of intrinsic sensory abilities like localization and perceptual organization by generating dynamic, multidimensional patterns of acoustic events that convey meaning about objects in the spatial world. Applications involve any context in which the user's situational awareness is critical, particularly when visual cues are limited or absent; e.g., air traffic control or telerobotic activities in hazardous environments. Such a display would generate localized cues in a flexible and dynamic manner. Whereas this can be readily achieved with an array of real sound sources or loudspeakers, the NASA-Ames prototype maximizes flexibility and portability by synthetically generating three-dimensional sound in real time for delivery through headphones. Psychoacoustic research suggests that perceptually veridical localization over headphones is possible if both the direction-dependent pinna cues and the better-understood cues of interaural time and intensity are adequately synthesized. Although the real-time device is not yet complete, recent studies at the University of Wisconsin have confirmed the perceptual adequacy of the basic approach to synthesis.
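The sketch below is not the NASA-Ames prototype; it is a minimal illustration of the two "better-understood" binaural cues the abstract names, interaural time difference (ITD) and interaural intensity difference, applied to a mono signal for headphone playback. A full synthesis would additionally convolve with measured, direction-dependent pinna (HRTF) filters. All parameter values (sample rate, head radius) are assumptions for the example.

```python
# Minimal ITD/ILD sketch, assuming a spherical-head (Woodworth-style) delay
# model and simple sine-law intensity panning; not the Convolvotron approach.
import numpy as np

FS = 44100              # sample rate in Hz (assumed)
HEAD_RADIUS = 0.0875    # approximate head radius in metres (assumed)
SPEED_OF_SOUND = 343.0  # m/s

def itd_ild_pan(mono, azimuth_deg):
    """Return a (left, right) pair carrying crude ITD and ILD cues.

    azimuth_deg: source angle, 0 = straight ahead, +90 = full right.
    """
    az = np.radians(azimuth_deg)
    # Woodworth-style ITD approximation for a rigid spherical head.
    itd = HEAD_RADIUS / SPEED_OF_SOUND * (az + np.sin(az))
    delay_samples = int(round(abs(itd) * FS))
    # Sine-law panning stands in for the interaural intensity cue.
    g_right = np.sin((az + np.pi / 2) / 2)
    g_left = np.cos((az + np.pi / 2) / 2)
    delayed = np.concatenate([np.zeros(delay_samples), mono])
    padded = np.concatenate([mono, np.zeros(delay_samples)])
    if itd >= 0:   # source on the right: the left-ear signal arrives later
        return g_left * delayed, g_right * padded
    return g_left * padded, g_right * delayed

if __name__ == "__main__":
    t = np.arange(FS) / FS
    tone_bursts = np.sin(2 * np.pi * 440 * t) * (t % 0.25 < 0.01)
    left, right = itd_ild_pan(tone_bursts, azimuth_deg=60)
```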
Journal of the Acoustical Society of America | 1998
Jonathan S. Abel; Scott H. Foster
A method and apparatus are capable of accurately deriving acoustic transfer functions such as head-related transfer functions (HRTFs) at low cost. Various aspects of the invention include constraining the reflection geometry of a measurement system to facilitate removal of reflection effects, establishing ambient noise level and ambient reverberation time to calibrate test signals, generating soundfields using Golay code test signals, invalidating measurements by detecting test subject movement and short-duration ambient sounds, deriving distance and/or interaural time difference (ITD) using minimum-phase forms of impulse responses, and deriving equalized HRTFs suitable for use in acoustic displays without knowing output or input transducer acoustical properties. Spatial resampling of derived HRTFs and spectral shaping of test signals are also discussed.
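As a minimal sketch of the Golay-code measurement idea mentioned in the abstract: a complementary Golay pair (a, b) has autocorrelations that sum to a perfect impulse, so a system's impulse response can be recovered by cross-correlating each recording with its own code and summing. The toy "room" response and noise level below are assumptions for illustration, not data from the patent.

```python
# Golay complementary-pair impulse-response measurement, illustrative only.
import numpy as np

def golay_pair(order):
    """Generate a complementary Golay code pair of length 2**order."""
    a, b = np.array([1.0]), np.array([1.0])
    for _ in range(order):
        a, b = np.concatenate([a, b]), np.concatenate([a, -b])
    return a, b

def measure_impulse_response(system, order=10):
    """Estimate a system's impulse response using a Golay pair."""
    a, b = golay_pair(order)
    n = len(a)
    y_a, y_b = system(a), system(b)
    # Cross-correlation implemented as convolution with the time-reversed code;
    # the complementary property makes the summed code autocorrelations a delta.
    h = (np.convolve(y_a, a[::-1]) + np.convolve(y_b, b[::-1])) / (2 * n)
    return h[n - 1:]        # keep the causal part

if __name__ == "__main__":
    true_h = np.zeros(64)
    true_h[[0, 7, 20]] = [1.0, 0.5, -0.25]   # toy response: direct path + echoes
    system = lambda x: np.convolve(x, true_h) + 1e-3 * np.random.randn(len(x) + len(true_h) - 1)
    est = measure_impulse_response(system)
    print(np.round(est[:25], 3))             # peaks near indices 0, 7, 20
```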
Journal of the Acoustical Society of America | 1988
Elizabeth M. Wenzel; Frederic L. Wightman; Doris J. Kistler; Scott H. Foster
Human listeners vary widely in their ability to localize unfamiliar sounds in an environment devoid of visual cues. Our research, in which blindfolded listeners give numerical estimates of apparent source azimuth and elevation, suggests that individual differences are greatest in judgments of source elevation; listeners are uniformly accurate when judging source azimuth. The pattern of individual differences is the same for free-field sources and for simulated free-field sources presented over headphones. Simulated free-field sources are produced by digital filtering techniques which incorporate the listener-specific, direction-dependent acoustic effects of the outer ears. Two features of these data bear on the question of the origin of individual differences in elevation accuracy: (1) a listener's accuracy in judging source elevation can be predicted from an analysis of the acoustic characteristics of the listener's outer ears; (2) the pattern of elevation errors made by one listener (A) can be transferr...
Journal of the Acoustical Society of America | 2000
Jonathan S. Abel; Scott H. Foster
Spatialization of soundfields is accomplished by filtering audio signals using filters having unvarying frequency response characteristics and amplifying signals using amplifier gains adapted in response to signals representing sound source location and/or listener position. The filters are derived using a singular value decomposition process which finds the best set of component impulse responses to approximate a given target set of impulse responses corresponding to head-related transfer functions. Efficient implementations for rendering reflection effects, air absorption losses and other ambient effects, and for spatializing multiple sound sources and/or generating multiple output signals are disclosed.
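Below is a minimal sketch, on synthetic data, of the decomposition idea the abstract describes: factor a bank of head-related impulse responses into a few fixed "component" FIR filters plus direction-dependent gains via the SVD, so runtime spatialization only re-weights fixed filters as the source moves. The array sizes and the stand-in HRIR data are assumptions; this is not the patented implementation.

```python
# SVD factoring of an HRIR bank into fixed component filters + per-direction gains.
import numpy as np

rng = np.random.default_rng(0)
n_directions, filter_len, n_components = 72, 128, 8

# Stand-in HRIR bank (assumed data): roughly low-rank plus measurement noise.
basis = rng.standard_normal((n_components, filter_len))
weights = rng.standard_normal((n_directions, n_components))
hrirs = weights @ basis + 0.05 * rng.standard_normal((n_directions, filter_len))

# hrirs ≈ gains @ components, with `components` shared by all directions.
U, s, Vt = np.linalg.svd(hrirs, full_matrices=False)
components = Vt[:n_components]                   # fixed FIR filters
gains = U[:, :n_components] * s[:n_components]   # direction-dependent gains

rel_err = np.linalg.norm(hrirs - gains @ components) / np.linalg.norm(hrirs)
print(f"relative approximation error with {n_components} components: {rel_err:.3f}")

def spatialize(mono, direction_index):
    """Filter a mono signal through the gain-weighted component filters."""
    combined = gains[direction_index] @ components
    # In a real renderer the gains would be interpolated as the source moves,
    # while the component filters stay fixed.
    return np.convolve(mono, combined)
```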
ACM Sigchi Bulletin | 1988
Elizabeth M. Wenzel; Frederic L. Wightman; Scott H. Foster
We propose that the most powerful method of auditory cueing takes direct advantage of human perceptual capabilities, providing a dynamic, multidimensional pattern of events which conveys meaning about objects in the spatial world. Applications of such a three-dimensional auditory display involve any context in which the user's situational awareness is critical, particularly when visual cues are limited or absent. Examples include air traffic control displays, advanced teleconferencing environments, and monitoring telerobotic activities in hazardous situations.
Journal of the Acoustical Society of America | 1992
Scott H. Foster; Elizabeth M. Wenzel
This demonstration illustrates some recent efforts to ‘‘render’’ in real time the complex acoustic field experienced by a listener within an environment, using a very high-speed signal processor, the Convolvotron, and headphone presentation. The current implementation follows conceptually from the image model. The filtering effects of multiple reflecting surfaces are modeled by a finite impulse response filter that can be changed in real time and is based on the superposition of the direct path from the source with the symmetrically located image sources coming from all the reflectors in the environment. Directional characteristics of the reflections are determined by filters based on head-related transfer functions. The demonstration scenario allows the listener to experience how sound quality is affected by the manipulation of various environmental characteristics. For example, while listening over headphones and ‘‘flying’’ through a three-dimensional visual scene, one can hear how the sound quality of ...
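The following is a minimal sketch of the image-model idea in the abstract: the direct path plus mirror-image sources behind each reflecting surface are superposed into a single FIR filter. This toy version assumes a shoebox room, first-order reflections only, and a uniform absorption coefficient, and it omits the per-reflection HRTF filtering the abstract mentions; all parameter values are illustrative.

```python
# First-order image-source FIR for a shoebox room, illustrative only.
import numpy as np

FS = 44100      # sample rate in Hz (assumed)
C = 343.0       # speed of sound, m/s

def first_order_image_fir(source, listener, room, absorption=0.3, length=4096):
    """Build an FIR from the direct path and six first-order image sources."""
    source, listener, room = map(np.asarray, (source, listener, room))
    images = [(source, 1.0)]                       # direct path, no reflection loss
    for axis in range(3):
        for wall in (0.0, room[axis]):             # mirror the source across each wall
            img = source.copy()
            img[axis] = 2.0 * wall - source[axis]
            images.append((img, 1.0 - absorption))
    fir = np.zeros(length)
    for pos, gain in images:
        dist = np.linalg.norm(pos - listener)
        delay = int(round(dist / C * FS))
        if delay < length:
            fir[delay] += gain / max(dist, 1e-3)   # 1/r spreading loss
    return fir

if __name__ == "__main__":
    fir = first_order_image_fir(source=[2.0, 3.0, 1.5],
                                listener=[4.0, 1.0, 1.5],
                                room=[6.0, 4.0, 3.0])
    print(np.nonzero(fir)[0][:8])   # sample indices of the direct path and echoes
```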
Journal of the Acoustical Society of America | 1992
Scott H. Foster
Recent advances in VLSI hardware have made possible the construction of real‐time systems that can ‘‘render’’ acoustics. These systems use tracking mechanisms to monitor the position and orientation of a listener and produce synthetic binaural sound for that listener using headphones. The most sophisticated of these systems utilize supercomputing performance to render complex acoustic scenarios including transmission loss effects, reflection, nonuniform radiation, Doppler effects, and more, resulting in a compelling and natural presentation. With the advances in algorithms and hardware to produce such simulations, there is a need to develop an extensible protocol for virtual audio. Such a protocol will need to encompass all the acoustic models in use today and those expected to be developed in the near future. It is hoped that this protocol will allow virtual reality developers to utilize virtual audio technology, even as that technology makes dramatic improvements in its capabilities.
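The abstract calls for an extensible virtual-audio protocol but does not define one; the sketch below is purely a hypothetical illustration of a scene description covering the acoustic models it names (transmission loss, reflection, nonuniform radiation, Doppler). Every field name here is invented for the example.

```python
# Hypothetical virtual-audio scene description; not a published protocol.
from dataclasses import dataclass, field

@dataclass
class SoundSource:
    position: tuple          # metres, scene coordinates
    velocity: tuple          # m/s, lets the renderer apply Doppler shift
    radiation_pattern: str   # e.g. "omni" or "cardioid" (nonuniform radiation)
    signal_uri: str          # where the renderer fetches the audio stream

@dataclass
class Reflector:
    vertices: list           # polygon describing the reflecting surface
    absorption: float        # fraction of energy lost per reflection

@dataclass
class Listener:
    position: tuple
    orientation: tuple       # e.g. quaternion from a head tracker

@dataclass
class AudioScene:
    listener: Listener
    sources: list = field(default_factory=list)
    reflectors: list = field(default_factory=list)
    air_absorption_db_per_m: float = 0.02   # transmission loss (illustrative value)
```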
Journal of the Acoustical Society of America | 1997
Jonathan S. Abel; Scott H. Foster
Workshop on Applications of Signal Processing to Audio and Acoustics | 1993
Elizabeth M. Wenzel; Scott H. Foster
Interactive 3D Graphics and Games | 1990
Elizabeth M. Wenzel; Scott H. Foster