Mark A. Ericson
Air Force Research Laboratory
Publications
Featured research published by Mark A. Ericson.
Journal of the Acoustical Society of America | 2001
Douglas S. Brungart; Brian D. Simpson; Mark A. Ericson; Kimberly R. Scott
Although many researchers have examined the role that binaural cues play in the perception of spatially separated speech signals, relatively little is known about the cues that listeners use to segregate competing speech messages in a monaural or diotic stimulus. This series of experiments examined how variations in the relative levels and voice characteristics of the target and masking talkers influence a listener's ability to extract information from a target phrase in a 3-talker or 4-talker diotic stimulus. Performance in this speech perception task decreased systematically when the level of the target talker was reduced relative to the masking talkers. Performance also generally decreased when the target and masking talkers had similar voice characteristics: the target phrase was most intelligible when the target and masking phrases were spoken by different-sex talkers, and least intelligible when the target and masking phrases were spoken by the same talker. However, when the target-to-masker ratio was less than 3 dB, overall performance was usually lower with one different-sex masker than with all same-sex maskers. In most of the conditions tested, the listeners performed better when they were exposed to the characteristics of the target voice prior to the presentation of the stimulus. The results of these experiments demonstrate how monaural factors may play an important role in the segregation of speech signals in multitalker environments.
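As an illustration of the level manipulation described above, here is a minimal sketch (not from the paper) of scaling a target phrase against summed maskers at a specified target-to-masker ratio in dB; the signals and sampling rate below are placeholders.

```python
import numpy as np

def rms(x):
    """Root-mean-square level of a signal."""
    return np.sqrt(np.mean(np.square(x)))

def mix_at_tmr(target, maskers, tmr_db):
    """Scale `target` so its RMS level sits `tmr_db` dB relative to the
    summed maskers, then return the diotic (single-channel) mixture."""
    masker_sum = np.sum(maskers, axis=0)          # 3- or 4-talker background
    gain = 10 ** (tmr_db / 20) * rms(masker_sum) / rms(target)
    return gain * target + masker_sum

# Example: place the target 6 dB below a 3-talker background.
fs = 16000
t = np.arange(fs) / fs
target = np.sin(2 * np.pi * 220 * t)              # stand-in for a recorded phrase
maskers = [0.1 * np.random.randn(fs) for _ in range(3)]
mixture = mix_at_tmr(target, maskers, tmr_db=-6.0)
```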
Journal of the Acoustical Society of America | 2000
Robert S. Bolia; W. Todd Nelson; Mark A. Ericson; Brian D. Simpson
A database of speech samples from eight different talkers has been collected for use in multitalker communications research. Descriptions of the nature of the corpus, the data collection methodology, and the means for obtaining copies of the database are presented.
Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 1999
W. Todd Nelson; Robert S. Bolia; Mark A. Ericson; Richard L. McKinley
The ability of listeners to detect, identify, and monitor multiple simultaneous speech signals was measured in free field and virtual acoustic environments. Factorial combinations of four variables, including audio condition, spatial condition, the number of speech signals, and the sex of the talker were employed using a within-subjects design. Participants were required to detect the presentation of a critical speech signal among a background of non-signal speech events. Results indicated that spatial separation increased the percentage of correctly identified critical speech signals as the number of competing messages increased. These outcomes are discussed in the context of designing binaural speech displays to enhance speech communication in aviation environments.
Journal of the Acoustical Society of America | 1999
Robert S. Bolia; Mark A. Ericson; W. Todd Nelson; Richard L. McKinley; Brian D. Simpson
Recent research has been conducted on the effects of spatialized audio on a listener's ability to detect and identify a target speech signal when presented among nontarget speech signals in the horizontal plane [W. T. Nelson et al., Proceedings of the 1998 IMAGE Conference (1998), pp. 159–166]. However, the existence of a "cocktail party effect" in the median plane has not been addressed. The purpose of the present investigation was to determine whether or not the spatial separation of multiple simultaneous speech sources in the median plane leads to improved detection and intelligibility. Independent variables include the number and angular separation of the speech signals, the sex of the target talker, and the presence or absence of head motion cues. All speech signals—phrases from a modified Coordinate Response Measure [T. J. Moore, AGARD Conference Proceedings 311: Aural Communication in Aviation (1981), pp. 2.1–2.6]—were digitally filtered via nonindividualized HRTFs and presented over headphones. ...
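The abstract notes that the phrases were digitally filtered through nonindividualized HRTFs for headphone presentation. A minimal sketch of that kind of binaural rendering follows; the impulse responses and signal here are placeholders, not the HRTFs used in the study.

```python
import numpy as np
from scipy.signal import fftconvolve

def render_binaural(mono, hrir_left, hrir_right):
    """Convolve a mono speech signal with a left/right head-related impulse
    response pair to produce a two-channel headphone signal."""
    left = fftconvolve(mono, hrir_left)
    right = fftconvolve(mono, hrir_right)
    return np.stack([left, right], axis=-1)

# Placeholder HRIRs for a single direction; a real experiment would load a
# measured, nonindividualized HRTF set instead.
hrir_left = np.zeros(256)
hrir_left[0] = 1.0
hrir_right = np.zeros(256)
hrir_right[4] = 0.8                                # crude interaural delay/level difference

speech = np.random.randn(16000)                    # stand-in for a spoken phrase
binaural = render_binaural(speech, hrir_left, hrir_right)
```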
Journal of the Acoustical Society of America | 1998
W. Todd Nelson; Robert S. Bolia; Mark A. Ericson; Richard L. McKinley
The effect of spatial auditory information on listeners' ability to detect, identify, and monitor the simultaneous presentation of multiple speech signals was evaluated in the free field. Factorial combinations of four variables, including the number of localized speech signals, the angular separation of the speech signals, the location of the speech signals along the horizontal plane, and the sex of the speaker, were employed using a completely within-subjects design. Participants were required to detect the presentation of a critical speech signal against a background of nonsignal speech events. Speech stimuli were derived from a coordinated call sign test which consisted of a call sign ("Ringo"), a color ("red"), and a number ("five"). In addition to having high face validity for aviation communication tasks, this measure has been successfully employed in competing message experiments. The experiment was conducted at the USAF Armstrong Laboratory's Auditory Localization Facility—a 277-speaker geodesic sph...
Journal of the Acoustical Society of America | 1988
Richard L. McKinley; Mark A. Ericson
A laboratory demonstration prototype of a digital auditory localization cue synthesizer has been developed. This synthesizer uses a single audio input that is separately processed in real time for independent presentation to each ear using headphones. The headphone-presented acoustic signals are easy to localize and appear to be out of the head. The acoustic image is stabilized for head movement by use of a three-space head tracking device. The paper will describe the salient parameters of the design. A description of the psychoacoustic and electroacoustic measurements that led to the design will be presented. Human performance data on free field, simulated, and synthesized localization cues will be described and a real-time interactive demonstration will be available for interested listeners.
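The abstract describes stabilizing the acoustic image against head movement with a head tracker. One plausible core of that computation, sketched here as an assumption rather than the synthesizer's actual design, is converting the world-referenced source direction to a head-relative direction before selecting localization cues:

```python
def head_relative_azimuth(source_az_deg, head_yaw_deg):
    """Subtract the tracked head yaw from the world-referenced source azimuth
    so the rendered image stays fixed in the room as the head turns; wrap the
    result to [-180, 180) degrees."""
    return (source_az_deg - head_yaw_deg + 180.0) % 360.0 - 180.0

# A source at 30 deg heard by a listener who has turned 20 deg toward it
# should be re-rendered at 10 deg.
assert head_relative_azimuth(30.0, 20.0) == 10.0
```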
Journal of the Acoustical Society of America | 1999
Brian D. Simpson; Robert S. Bolia; Mark A. Ericson; Richard L. McKinley
Previous research has demonstrated that the spatial separation of multiple simultaneous talkers improves detection and intelligibility of a critical speech signal among non‐signal speech events [Nelson et al., J. Acoust. Soc. Am. 103, 2341–2342(A) (1998); M. L. Hawley et al., J. Acoust. Soc. Am. 99, 2596(A) (1996)]. However, few findings on the effects of varying phrase onsets have been reported [J. C. Webster and P. O. Thompson, J. Acoust. Soc. Am. 26, 396–402 (1954)]. The purpose of the present study was to investigate the effect of varying the interval between sentence onsets on call sign detection and message intelligibility. The relative onset times of two to eight temporally overlapping phrases were varied systematically in both spatially separated and nonspatially separated conditions on the horizontal plane. The phrases were presented virtually to five normal‐hearing listeners. All possible temporal positions of the target phrase were examined. Results will be discussed in the context of listening...
Journal of the Acoustical Society of America | 1994
Mark A. Ericson; William R. D’Angelo; Richard L. McKinley
Two response measures, verbal spherical angles and manual pointing, were compared in a free-field localization task. Verbal spherical angles refers to speaking the stimulus' azimuth and elevation angles with respect to the center of one's head. Manual pointing refers to pointing the stylus tip of an electromagnetic position digitizer onto a 12-in.-diam sphere and recording the azimuth and elevation of the tip with respect to the center of the sphere. Continuous visual and aural stimuli were presented from 272 directions around each of the five subjects, whose head motions were unrestrained. The accuracy and reaction times of the subjects were measured and compared. Pointing was more accurate than the verbal response (10 and 22 deg, respectively). Pointing was about three times faster than the verbal response. The manual pointing technique is being used in other experiments to compare free-field and headphone localization performance.
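A short sketch of how a stylus-tip position relative to the sphere's center could be converted to the reported azimuth and elevation angles; the axis convention is an assumption, not taken from the paper.

```python
import numpy as np

def cartesian_to_az_el(x, y, z):
    """Convert a point relative to the sphere's center to azimuth and elevation
    in degrees. Convention assumed here: +x straight ahead, +y to the left,
    +z up, with azimuth increasing counterclockwise from straight ahead."""
    azimuth = np.degrees(np.arctan2(y, x))
    elevation = np.degrees(np.arctan2(z, np.hypot(x, y)))
    return azimuth, elevation

# A stylus tip placed up and to the left on the response sphere:
az, el = cartesian_to_az_el(x=4.0, y=3.0, z=3.0)
print(f"azimuth {az:.1f} deg, elevation {el:.1f} deg")   # ~36.9 deg, ~31.0 deg
```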
Journal of the Acoustical Society of America | 1992
Richard L. McKinley; Mark A. Ericson; David R. Perrott; Robert H. Gilkey; Douglas S. Brungart; Frederic L. Wightman
Several methods can be used to synthetically generate auditory localization cues over headphones. Very little traditional auditory performance data have been presented for these types of synthesizers. The data were collected using stimuli similar to those used by Mills [J. Acoust. Soc. Am. 30, 237–246 (1958)]: 500-Hz tone, 1 s on, 1 s off, 1 s on, 70-ms ramps for on period, and 500 ms off after response before the next stimulus was presented. The paradigm used was the two-source two-interval experiment described by Hartmann and Rakerd [J. Acoust. Soc. Am. 85, 2031–2041 (1989)]. The minimum audible angle (MAA) was measured at seven locations in the horizontal plane for synthetic stimuli presented over headphones. MAA data will be presented for 10 normal-hearing subjects for each of the seven locations. The MAA data using headphones will be compared with free-field MAA data from the literature and with mean localization error data using headphones.
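The stimulus specification in the abstract (a 500-Hz tone, 1 s on, 1 s off, 1 s on, with 70-ms ramps) is concrete enough to sketch; the sampling rate and raised-cosine ramp shape below are assumptions.

```python
import numpy as np

def ramped_tone(freq_hz, dur_s, ramp_s, fs):
    """Tone burst with raised-cosine onset and offset ramps."""
    t = np.arange(int(dur_s * fs)) / fs
    tone = np.sin(2 * np.pi * freq_hz * t)
    n_ramp = int(ramp_s * fs)
    ramp = 0.5 * (1.0 - np.cos(np.pi * np.arange(n_ramp) / n_ramp))
    tone[:n_ramp] *= ramp
    tone[-n_ramp:] *= ramp[::-1]
    return tone

fs = 44100
burst = ramped_tone(500.0, dur_s=1.0, ramp_s=0.070, fs=fs)
gap = np.zeros(int(1.0 * fs))
# One observation interval as described: 1 s on, 1 s off, 1 s on.
interval = np.concatenate([burst, gap, burst])
```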
Journal of the Acoustical Society of America | 1992
Mark A. Ericson; Richard L. McKinley
An auditory localization cue synthesizer has been developed that can electronically encode directional information on various auditory signals and present the sounds over headphones. Performance of the synthesizer has been evaluated in several laboratory studies to validate its reproduction of free-field cues and its potential for various applications. Data were collected for localization in noise, binaural intelligibility level difference, and target acquisition experiments. Subjects were able to localize sounds in spectrally similar noise at low (−10 to −20 dB) signal-to-noise ratios. A 3- to 6-dB release from masking was observed in various single-talker and competing-message experiments. Directional audio information facilitated visual target acquisition under high visual workload conditions. A comparison between these data and free-field localization data indicates that the synthesizer is capable of reproducing the free-field cues necessary to localize sounds over headphones. The technology for gener...