
Publication


Featured research published by Elizabeth M. Wenzel.


Presence: Teleoperators & Virtual Environments | 1992

Localization in virtual acoustic displays

Elizabeth M. Wenzel

This paper discusses the development of a particular spatial display medium, the virtual acoustic display. Although the technology can stand alone, it is envisioned ultimately to be a component of a larger multisensory environment and will no doubt find its greatest utility in that context. A general philosophy of the project has been that the development of advanced computer interfaces should be driven first by an understanding of human perceptual requirements, and secondarily by technological capabilities or constraints. In expanding on this view, the paper addresses why virtual acoustic displays are useful, characterizes the abilities of such displays, reviews some recent approaches to their implementation and application, describes the research project at NASA Ames in some detail, and finally outlines some critical research issues for the future.


SAE Transactions | 1996

Taxiway Navigation and Situation Awareness (T-NASA) System: Problem, Design Philosophy, and Description of an Integrated Display Suite for Low-Visibility Airport Surface Operations

David C. Foyle; Anthony D. Andre; Robert S. McCann; Elizabeth M. Wenzel; Durand R. Begault; Vernol Battiste

An integrated cockpit display suite, the T-NASA (Taxiway Navigation and Situation Awareness) system, is under development for NASA's Terminal Area Productivity (TAP) Low-Visibility Landing and Surface Operations (LVLASO) program. This system has three integrated components: Moving Map -- track-up airport surface display with own-ship, traffic, and graphical route guidance; Scene-Linked Symbology -- route/taxi information virtually projected via a Head-up Display (HUD) onto the forward scene; and 3-D Audio Ground Collision Avoidance Warning (GCAW) system -- spatially localized auditory traffic alerts. In this paper, surface operations in low-visibility conditions, the design philosophy of the T-NASA system, and the T-NASA system display components are described.
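
As a rough illustration of the GCAW idea, the sketch below computes a traffic target's azimuth relative to own-ship heading, the quantity a 3-D audio stage would need to place an alert in the direction of the traffic. The flat-earth geometry and every name here are assumptions for illustration, not the T-NASA implementation.

```python
import math

def relative_bearing(own_lat, own_lon, own_heading_deg, tgt_lat, tgt_lon):
    """Approximate bearing of a traffic target relative to own-ship heading.

    Small-area flat-earth approximation (adequate on an airport surface);
    all names and the method are illustrative, not the T-NASA implementation.
    """
    # Local north/east offsets in meters (equirectangular approximation).
    d_north = (tgt_lat - own_lat) * 111_320.0
    d_east = (tgt_lon - own_lon) * 111_320.0 * math.cos(math.radians(own_lat))
    true_bearing = math.degrees(math.atan2(d_east, d_north)) % 360.0
    # Relative azimuth in (-180, 180]: 0 = dead ahead, +90 = off the right wing.
    return (true_bearing - own_heading_deg + 180.0) % 360.0 - 180.0

# A GCAW-style alert could then be rendered at this azimuth by the 3-D audio
# stage so the warning appears to come from the direction of the traffic.
print(relative_bearing(37.615, -122.375, 90.0, 37.616, -122.374))  # ~ -51.6
```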


Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 1988

A virtual display system for conveying three-dimensional acoustic information

Elizabeth M. Wenzel; Frederic L. Wightman; Scott H. Foster

A three-dimensional auditory display could take advantage of intrinsic sensory abilities like localization and perceptual organization by generating dynamic, multidimensional patterns of acoustic events that convey meaning about objects in the spatial world. Applications involve any context in which the user's situational awareness is critical, particularly when visual cues are limited or absent; e.g., air traffic control or telerobotic activities in hazardous environments. Such a display would generate localized cues in a flexible and dynamic manner. Whereas this can be readily achieved with an array of real sound sources or loudspeakers, the NASA-Ames prototype maximizes flexibility and portability by synthetically generating three-dimensional sound in real time for delivery through headphones. Psychoacoustic research suggests that perceptually veridical localization over headphones is possible if both the direction-dependent pinna cues and the better-understood cues of interaural time and intensity are adequately synthesized. Although the real-time device is not yet complete, recent studies at the University of Wisconsin have confirmed the perceptual adequacy of the basic approach to synthesis.
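
The interaural time and intensity cues mentioned above are simple to synthesize digitally. The following sketch lateralizes a mono signal over headphones using only those two cues, leaving out the direction-dependent pinna (HRTF) filtering; it is a minimal illustration of the cue types, not the NASA-Ames prototype.

```python
import numpy as np

FS = 44_100  # sample rate, Hz

def lateralize(mono, itd_s, iid_db):
    """Render a mono signal to stereo with interaural time (ITD) and
    intensity (IID) differences only; the direction-dependent pinna
    (HRTF) filtering is omitted for brevity. Illustrative only."""
    delay = int(round(abs(itd_s) * FS))    # ITD as a whole-sample delay
    gain = 10.0 ** (-abs(iid_db) / 20.0)   # attenuate the far ear
    near = mono
    far = np.concatenate([np.zeros(delay), mono])[: len(mono)] * gain
    # Positive itd_s / iid_db place the source toward the right ear.
    left, right = (far, near) if itd_s >= 0 else (near, far)
    return np.stack([left, right], axis=1)

# 100 ms noise burst placed to the right: 0.4 ms ITD, 6 dB IID.
burst = np.random.randn(int(0.1 * FS))
stereo = lateralize(burst, itd_s=0.0004, iid_db=6.0)
```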


Human Factors in Computing Systems | 1991

Localization with non-individualized virtual acoustic display cues

Elizabeth M. Wenzel; Frederic L. Wightman; Doris J. Kistler

A recent development in advanced interface technologies is the virtual acoustic display, a system that presents three-dimensional auditory information over headphones [20]. The utility of such a display depends on the accuracy with which listeners can localize the virtual, or simulated, sound sources. Synthesis of virtual sources involves the digital filtering of stimuli using filters based on acoustic Head-Related Transfer Functions (HRTFs) measured in human ear canals. In practice, measurement of the HRTFs of each potential user of a 3-D display may not be feasible. Thus, a critical research question is whether listeners from the general population can obtain adequate localization cues from stimuli based on non-individualized filters. In the present study, 16 inexperienced listeners judged the apparent spatial location (azimuth and elevation) of wideband noise bursts that were presented either over loudspeakers in the free field (an anechoic or non-reverberant environment) or over headphones. The headphone stimuli were synthesized using HRTFs from a representative subject in a previous study [23]. Localization of both free-field and virtual sources was quite accurate for 12 of the subjects, 2 showed poor elevation accuracy in both free-field and headphone conditions, and 2 showed degraded elevation accuracy only with virtual sources. High rates of confusion errors (reversals in judgments of azimuth and elevation) were also observed for some of the subjects and tended to increase for the virtual sources. In general, the data suggest that most listeners can obtain useful directional information from an auditory display without requiring the use of individually tailored HRTFs, particularly for the dimension of azimuth. However, the high rates of confusion errors remain problematic. Several stimulus characteristics which may help to minimize these errors are discussed.
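
A front-back reversal of the kind counted as a confusion error can be scored by checking whether a judged azimuth lies closer to the target's mirror image about the interaural axis than to the target itself. The sketch below uses that common scoring convention; the paper's exact criterion is not stated here, so treat the tolerance and geometry as assumptions.

```python
def is_front_back_confusion(target_az, judged_az, tol=45.0):
    """Flag a judged azimuth as a front-back reversal: closer to the
    target's mirror image about the interaural (left-right) axis than
    to the target itself. A common scoring convention; the paper's
    exact criterion may differ. Azimuths in degrees, 0 = front,
    90 = right, measured clockwise."""
    mirror_az = (180.0 - target_az) % 360.0  # reflect front <-> back

    def ang_err(a, b):
        # Smallest absolute angular difference, in [0, 180].
        return abs((a - b + 180.0) % 360.0 - 180.0)

    err_mirror = ang_err(judged_az, mirror_az)
    err_target = ang_err(judged_az, target_az)
    return err_mirror < err_target and err_mirror < tol

# A source at 30 deg (front-right) heard at 150 deg (back-right):
print(is_front_back_confusion(30.0, 150.0))  # True
```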


International Conference on Multimodal Interfaces | 2003

Sensitivity to haptic-audio asynchrony

Bernard D. Adelstein; Durand R. Begault; Mark R. Anderson; Elizabeth M. Wenzel

The natural role of sound in actions involving mechanical impact and vibration suggests the use of auditory display as an augmentation to virtual haptic interfaces. In order to budget available computational resources for sound simulation, the perceptually tolerable asynchrony between paired haptic-auditory sensations must be known. This paper describes a psychophysical study of detectable time delay between a voluntary hammer tap and its auditory consequence (a percussive sound of either 1, 50, or 200 ms duration). The results show Just Noticeable Differences (JNDs) for temporal asynchrony of 24 ms with insignificant response bias. The invariance of JND and response bias as a function of sound duration in this experiment indicates that observers cued on the initial attack of the auditory stimuli.
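
The 24 ms JND gives a concrete latency budget for a haptic-audio rendering pipeline: if the sound begins within that window of the tap, the asynchrony should go undetected. A minimal budgeting check, with the rule of thumb (not a perceptual model) as the only assumption:

```python
HAPTIC_AUDIO_JND_MS = 24.0  # asynchrony JND reported in the paper

def audio_budget_ok(haptic_tap_ms, audio_onset_ms, jnd_ms=HAPTIC_AUDIO_JND_MS):
    """True if the audio onset trails the haptic tap by less than the
    detectability threshold, i.e. the delay should go unnoticed.
    A simplistic budgeting rule of thumb, not a perceptual model."""
    return 0.0 <= (audio_onset_ms - haptic_tap_ms) < jnd_ms

# e.g. 5 ms block buffering + 12 ms synthesis = 17 ms: under the JND.
print(audio_budget_ok(0.0, 17.0))  # True
print(audio_budget_ok(0.0, 30.0))  # False
```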


IEEE Visualization | 1990

A system for three-dimensional acoustic 'visualization' in a virtual environment workstation

Elizabeth M. Wenzel; P.K. Stone; S.S. Fisher; S.H. Foster

The authors describe the real-time acoustic display capabilities developed for the virtual environment workstation (VIEW) project. The acoustic display is capable of generating localized acoustic cues in real time over headphones. An auditory symbology, a related collection of representational auditory objects or icons, can be designed using the auditory cue editor, which links both discrete and continuously varying acoustic parameters with information or events in the display. During a given display scenario, the symbology can be dynamically coordinated in real time with three-dimensional visual objects, speech, and gestural displays. The types of displays feasible with the system range from simple warnings and alarms to the acoustic representation of multidimensional data or events.
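
The cue editor's link between display variables and acoustic parameters amounts to a mapping function. Below is a minimal sketch of one such mapping, a data value driven onto pitch over a logarithmic (musical) scale; the particular mapping and names are illustrative, not VIEW's:

```python
def map_to_pitch(value, v_min, v_max, f_min=220.0, f_max=880.0):
    """Map a display variable onto an acoustic parameter, here pitch on
    a logarithmic (musical) scale, in the spirit of the cue editor's
    parameter links. The mapping choice is illustrative, not VIEW's."""
    t = (value - v_min) / (v_max - v_min)  # normalize to [0, 1]
    t = min(max(t, 0.0), 1.0)              # clamp out-of-range data
    return f_min * (f_max / f_min) ** t    # geometric interpolation

# Temperature 25..100 C rendered as a 220..880 Hz warning tone:
print(round(map_to_pitch(62.5, 25.0, 100.0), 1))  # 440.0 Hz at midpoint
```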


Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 1988

Virtual interface environment workstations

S.S. Fisher; Elizabeth M. Wenzel; C. Coler; M.W. McGreevy

A head-mounted, wide-angle, stereoscopic display system controlled by operator position, voice, and gesture has been developed at NASA's Ames Research Center for use as a multipurpose interface environment. This Virtual Interface Environment Workstation (VIEW) system provides a multisensory, interactive display environment in which a user can virtually explore a 360-degree synthesized or remotely sensed environment and can viscerally interact with its components. Primary applications of the system are in telerobotics, management of large-scale integrated information systems, and human factors research. System configuration, research scenarios, and research directions are described.


Journal of the Acoustical Society of America | 1988

Acoustic origins of individual differences in sound localization behavior

Elizabeth M. Wenzel; Frederic L. Wightman; Doris J. Kistler; Scott H. Foster

Human listeners vary widely in their ability to localize unfamiliar sounds in an environment devoid of visual cues. Our research, in which blindfolded listeners give numerical estimates of apparent source azimuth and elevation, suggests that individual differences are greatest in judgments of source elevation; listeners are uniformly accurate when judging source azimuth. The pattern of individual differences is the same for free-field sources and for simulated free-field sources presented over headphones. Simulated free-field sources are produced by digital filtering techniques which incorporate the listener-specific, direction-dependent acoustic effects of the outer ears. Two features of these data bear on the question of the origin of individual differences in elevation accuracy: (1) a listener's accuracy in judging source elevation can be predicted from an analysis of the acoustic characteristics of the listener's outer ears; (2) the pattern of elevation errors made by one listener (A) can be transferr...


Journal of the Acoustical Society of America | 1998

The impact of system latency on dynamic performance in virtual acoustic environments

Elizabeth M. Wenzel

Engineering constraints that may be encountered when implementing interactive virtual acoustic displays are examined. In particular, system parameters such as the update rate and total system latency are defined and the impact they may have on perception is discussed. For example, examination of the head motions that listeners used to aid localization in a previous study suggests that some head motions may be as fast as about 175°/s for short time periods. Analysis of latencies in virtual acoustic environments (VAEs) suggests that: (1) commonly specified parameters such as the audio update rate determine only the best‐case latency possible in a VAE, (2) total system latency and individual latencies of system components, including head‐trackers, are frequently not measured by VAE developers, and (3) typical system latencies may result in undersampling of relative listener‐source motion of 175°/s as well as positional instability in the simulated source. To clearly specify the dynamic performance of a parti...
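
The interaction of latency and head velocity is easy to quantify to first order: during a head turn, a source rendered from stale tracker data lags by roughly latency times angular velocity. A sketch using the 175°/s peak rate cited above; the listed latencies are hypothetical examples:

```python
def worst_case_angular_error(latency_ms, head_rate_deg_s=175.0):
    """During a head turn, a stale source position lags by roughly
    latency x angular velocity; 175 deg/s is the peak head-motion rate
    cited in the abstract. A first-order estimate only."""
    return head_rate_deg_s * latency_ms / 1000.0

for lat in (20, 50, 100):  # hypothetical end-to-end latencies, ms
    print(f"{lat} ms -> ~{worst_case_angular_error(lat):.1f} deg of lag")
# 20 ms -> ~3.5 deg, 50 ms -> ~8.8 deg, 100 ms -> ~17.5 deg
```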


Journal of the Acoustical Society of America | 1990

The combination of interaural time and intensity in the lateralization of high‐frequency complex signals

Ervin R. Hafter; Raymond H. Dye; Elizabeth M. Wenzel; Kitty Knecht

In an effort to examine the rules by which information arising from interaural differences of time (IDT) and interaural differences of intensity (IDI) is combined, d′ values were measured for trains of high-frequency clicks (4000 Hz, bandpass) possessing various combinations of IDT and IDI. The number of clicks was either 1 or 8, with the interclick interval either 2 or 10 ms. A 2-IFC task was employed in which the paired values of IDT and IDI favored one side during one interval and the other side during the other interval. Data obtained with the combined cues are compared to those obtained with IDTs or IDIs alone in order to determine the degree to which processing of the two cues is done independently. Results suggest that lateralization with such stimuli is based on the sum of the temporal and intensive differences and not on independent evaluations of their separate presences.
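
The summation account suggests a single decision variable formed from the two cues. A minimal sketch, where the time-intensity trading ratio is a stand-in value, not one taken from the paper:

```python
def lateral_position(idt_us, idi_db, trading_us_per_db=25.0):
    """Single decision variable combining interaural time (IDT, in
    microseconds) and intensity (IDI, in dB) differences, as the
    summation account suggests. The 25 us/dB trading ratio is a
    stand-in assumption, not a value from the paper."""
    return idt_us + trading_us_per_db * idi_db  # net cue, us-equivalents

# Cues favoring opposite sides can cancel:
print(lateral_position(idt_us=50.0, idi_db=-2.0))  # 0.0 -> centered
```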

Collaboration


Dive into Elizabeth M. Wenzel's collaboration.

Top Co-Authors

Doris J. Kistler

University of Wisconsin-Madison

Stephen R. Ellis

University of North Carolina at Chapel Hill