Russell L. Martin
Defence Science and Technology Organisation
Publications
Featured research published by Russell L. Martin.
Journal of the Acoustical Society of America | 2000
Dexter R. F. Irvine; Russell L. Martin; Ester Ivonne Klimkeit; Rachael Smith
On a variety of visual tasks, improvement in perceptual discrimination with practice (perceptual learning) has been found to be specific to features of the training stimulus, including retinal location. This specificity has been interpreted as evidence that the learning reflects changes in neuronal tuning at relatively early processing stages. The aim of the present study was to examine the frequency specificity of human auditory perceptual learning in a frequency discrimination task. Difference limens for frequency (DLFs) were determined at 5 and 8 kHz, using a three-alternative forced choice method, for two groups of eight subjects before and after extensive training at one or the other frequency. Both groups showed substantial improvement at the training frequency, and much of this improvement generalized to the nontrained frequency. However, a small but statistically significant component of the improvement was specific to the training frequency. Whether this specificity reflects changes in neural frequency tuning or attentional changes remains unclear.
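The adaptive three-alternative forced-choice (3-AFC) procedure used to estimate difference limens for frequency can be sketched as follows. The simulated listener, the 2-down/1-up tracking rule, and the step sizes are illustrative assumptions; the abstract does not specify the tracking method used in the study.

```python
# Minimal sketch of a 3-AFC staircase for estimating a difference limen
# for frequency (DLF). The simulated listener and the 2-down/1-up rule
# are illustrative assumptions, not the study's actual procedure.
import random

def simulated_listener(delta_hz, true_dlf_hz):
    """Respond correctly with probability rising from chance (1/3)
    toward 1.0 as the frequency difference exceeds the listener's DLF."""
    p_correct = 1 / 3 + (2 / 3) * min(delta_hz / (2 * true_dlf_hz), 1.0)
    return random.random() < p_correct

def run_staircase(true_dlf_hz, start_delta=50.0, n_trials=200):
    delta, streak, reversals, direction = start_delta, 0, [], -1
    for _ in range(n_trials):
        if simulated_listener(delta, true_dlf_hz):
            streak += 1
            if streak == 2:                 # 2-down/1-up: shrink after 2 correct
                streak = 0
                if direction == +1:
                    reversals.append(delta)
                direction = -1
                delta *= 0.9
        else:
            streak = 0
            if direction == -1:
                reversals.append(delta)
            direction = +1
            delta /= 0.9
    # Threshold estimate: mean of the last few reversal points
    tail = reversals[-6:] if len(reversals) >= 6 else reversals
    return sum(tail) / len(tail)
```

The 2-down/1-up rule converges on roughly 71% correct; the study's actual convergence point and trial counts are not given in the abstract.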
Human Factors | 1998
Patrick Flanagan; Ken I. McAnally; Russell L. Martin; James W. Meehan; Simon R. Oldfield
We investigated the time participants took to perform a visual search task for targets outside the visual field of view using a helmet-mounted display. We also measured the effectiveness of visual and auditory cues to target location. The auditory stimuli used to cue location were noise bursts previously recorded from the ear canals of the participants and were either presented briefly at the beginning of a trial or continually updated to compensate for head movements. The visual cue was a dynamic arrow that indicated the direction and angular distance from the instantaneous head position to the target. Both visual and auditory spatial cues reduced search time dramatically, compared with unaided search. The updating audio cue was more effective than the transient audio cue and was as effective as the visual cue in reducing search time. These data show that both spatial auditory and visual cues can markedly improve visual search performance. Potential applications for this research include highly visual environments, such as aviation, where there is risk of overloading the visual modality with information.
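The head-coupled updating that the continually updated audio cue (and the dynamic arrow) requires can be sketched as recomputing the target's bearing relative to the instantaneous head orientation. The azimuth-only geometry and the function name are illustrative assumptions.

```python
# Sketch of the update step a head-coupled cue needs: the target's
# direction relative to the current head orientation. Azimuth-only
# geometry is an illustrative simplification.
def head_relative_azimuth(target_az_deg, head_yaw_deg):
    """Azimuth of the target relative to where the head is pointing,
    wrapped to (-180, 180] degrees."""
    rel = (target_az_deg - head_yaw_deg) % 360.0
    return rel - 360.0 if rel > 180.0 else rel
```

For example, a target at 90 degrees azimuth with the head turned to 30 degrees yields a head-relative bearing of 60 degrees, which would drive the spatialization of the next audio update.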
International Review of Neurobiology | 2005
Simon Carlile; Russell L. Martin; Ken I. McAnally
This chapter reviews the psychophysical evidence for the role of spectral cues in sound localization and the manner in which they are combined with the other binaural cues to sound location. In addition, some of the bioacoustic, psychophysical, and neurophysiological studies that have examined the mechanisms of encoding and processing of these cues are reviewed. There is strong evidence that the information used by listeners to resolve the ambiguity in interaural time difference (ITD) and interaural level difference (ILD) cues is spectral in nature. The possibility that spectral cues to sound location are specific to the lateral angle of the source has been examined in two studies. Several aspects of spatial cue processing examined neurophysiologically are particularly relevant to the processing of spectral information. The chapter focuses on results obtained using the mammalian auditory system. Recent human psychophysical work has shown that under appropriate conditions the auditory system has a remarkable ability to relearn to use modified spectral cues to sound location.
Human Factors | 2006
Karen Stephan; Sean E. Smith; Russell L. Martin; Simon Parker; Ken I. McAnally
Objective: This study examined the way in which the type and preexisting strength of association between an auditory icon and a warning event affects the ease with which the icon/event pairing can be learned and retained. Background: To be effective, an auditory warning must be audible, identifiable, interpretable, and heeded. Warnings consisting of familiar environmental sounds, or auditory icons, have potential to facilitate identification and interpretation. The ease with which pairings between auditory icons and warning events can be learned and retained is likely to depend on the type and strength of the preexisting icon/event association. Method: Sixty-three participants each learned eight auditory-icon/denotative-referent pairings and attempted to recall them 4 weeks later. Three icon/denotative-referent association types (direct, related, and unrelated) were employed. Participants rated the strength of the association for each pairing on a 7-point scale. Results: The number of errors made while learning pairings was greater for unrelated than for either related or direct associations, whereas the number of errors made while attempting to recall pairings 4 weeks later was greater for unrelated than for related associations and for related than for direct associations. Irrespective of association type, both learning and retention performance remained at very high levels, provided the strength of the association was rated greater than 5. Conclusion: This suggests that strong preexisting associations are used to facilitate learning and retention of icon/denotative-referent pairings. Application: The practical implication of this study is that auditory icons having either direct or strong, indirect associations with warning events should be preferred.
Jaro-journal of The Association for Research in Otolaryngology | 2004
Russell L. Martin; Miles Paterson; Ken I. McAnally
The contention that normally binaural listeners can localize sound under monaural conditions has been challenged by Wightman and Kistler (J. Acoust. Soc. Am. 101:1050–1063, 1997), who found that listeners are almost completely unable to localize virtual sources of sound when sound is presented to only one ear. Wightman and Kistler’s results raise the question of whether monaural spectral cues are used by listeners to localize sound under binaural conditions. We have examined the possibility that monaural spectral cues provide useful information regarding sound-source elevation and front–back hemifield when interaural time differences are available to specify sound-source lateral angle. The accuracy with which elevation and front–back hemifield could be determined was compared between a monaural condition and a binaural condition in which a wide-band signal was presented to the near ear and a version of the signal that had been lowpass-filtered at 2.5 kHz was presented to the far ear. It was found that accuracy was substantially greater in the latter condition, suggesting that information regarding sound-source lateral angle is required for monaural spectral cues to elevation and front–back hemifield to be correctly interpreted.
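The far-ear manipulation described above, low-pass filtering a wide-band signal at 2.5 kHz so that only low-frequency interaural time information remains, can be sketched as follows. The brick-wall FFT filter, sample rate, and noise stimulus are illustrative assumptions; the study does not specify its filter design here.

```python
# Sketch of low-pass filtering a wide-band signal at 2.5 kHz, as in the
# far-ear condition described above. A brick-wall FFT filter is an
# illustrative assumption, not the study's stated filter design.
import numpy as np

def lowpass_fft(signal, cutoff_hz, fs):
    """Zero out spectral components above cutoff_hz (brick-wall low-pass)."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    spectrum[freqs > cutoff_hz] = 0.0
    return np.fft.irfft(spectrum, n=len(signal))

fs = 44100
rng = np.random.default_rng(0)
wideband = rng.standard_normal(fs)           # 1 s of wide-band noise (near ear)
far_ear = lowpass_fft(wideband, 2500.0, fs)  # low-passed version (far ear)
```

The low-passed signal still conveys interaural time differences, which specify lateral angle, while leaving the high-frequency spectral cues present only at the near ear.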
The International Journal of Aviation Psychology | 2004
Simon Parker; Sean E. Smith; Karen Stephan; Russell L. Martin; Ken I. McAnally
We investigated the effectiveness of supplementing head-down displays (HDDs) with high-fidelity 3-dimensional (3-D) audio using a flight simulation task in which participants were required to visually acquire the image of a target aircraft. There were 3 conditions: a visual HDD providing azimuth information combined with a nonspatial audio cue, a visual HDD providing azimuth and elevation information combined with a nonspatial audio cue, and a visual HDD providing azimuth information combined with a 3-D audio cue. When 3-D audio was presented, the visual acquisition time was faster, perceived workload was reduced, and perceived situational awareness was improved. This performance improvement was attributed to the fact that participants were often able to perform the task without the need to refer to the HDD.
Journal of the Acoustical Society of America | 2000
Geoff Eberle; Ken I. McAnally; Russell L. Martin; Patrick Flanagan
This study investigated whether listeners can use interaural time differences (ITDs) in the amplitude envelope to localize high-frequency sounds in a free field. Localization accuracy was measured for high-frequency (7 to 14 kHz) noise with and without an imposed amplitude modulation (AM) at 20, 80 or 320 Hz. Only AM at 320 Hz led to more accurate localization relative to the nonmodulated condition. The results of a control experiment suggest that the improvement in localization accuracy was due to an increase in stimulus bandwidth, rather than the temporal cues provided by the modulation.
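Stimuli of the kind described above, band-limited high-frequency noise with a sinusoidal amplitude modulation imposed on its envelope, can be sketched as follows. The sample rate, duration, and modulation depth are illustrative assumptions not given in the abstract.

```python
# Sketch of 7-14 kHz band-limited noise with sinusoidal amplitude
# modulation (AM) imposed on the envelope. Sample rate, duration, and
# modulation depth are illustrative assumptions.
import numpy as np

def am_bandpass_noise(fs=44100, dur=0.5, lo=7000.0, hi=14000.0,
                      mod_hz=320.0, depth=1.0, seed=0):
    n = int(fs * dur)
    rng = np.random.default_rng(seed)
    spectrum = np.fft.rfft(rng.standard_normal(n))
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    spectrum[(freqs < lo) | (freqs > hi)] = 0.0   # band-limit to 7-14 kHz
    carrier = np.fft.irfft(spectrum, n=n)
    t = np.arange(n) / fs
    envelope = 1.0 + depth * np.sin(2 * np.pi * mod_hz * t)  # AM envelope
    return envelope * carrier
```

Note that the modulation adds sidebands at plus and minus the modulation frequency around each carrier component, which is why, as the control experiment addresses, AM also slightly increases stimulus bandwidth.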
Journal of the Acoustical Society of America | 2012
Russell L. Martin; Ken I. McAnally; Robert S. Bolia; Geoff Eberle; Douglas S. Brungart
Several studies have described a release from speech-on-speech masking associated with separation of target and masker sources in the median sagittal plane. Some have excluded the possibility that small differences between target and masker interaural time disparities can fully account for this release. This study explored the mechanisms underlying the spatial release from speech-on-speech masking that can be obtained in the absence of such differences. In one condition, interaural time disparities were removed from the nominal median-sagittal-plane, head-related impulse responses used to generate the virtual auditory space within which competing sentences were presented. In other conditions, interaural level and spectral disparities also were manipulated by presenting competing sentences monaurally or diotically after convolution with one ear's head-related impulse responses. It was found that substantial spatial release from masking can be obtained in the absence of any interaural disparities and that such disparities probably make a relatively minor contribution to spatial release from speech-on-speech masking in the median sagittal plane. It is argued that this release from masking is driven primarily by a reduction in informational masking that occurs when monaural information at one, or both, of the listener's ears facilitates differentiation of competing sentences that emanate from spatially separated sources.
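One manipulation described above, removing the interaural time disparity from a pair of head-related impulse responses, can be sketched as time-aligning the two responses at the lag of their peak cross-correlation. The alignment method and the toy impulse responses are illustrative assumptions; the study does not describe its exact procedure here.

```python
# Sketch of removing the interaural time disparity (ITD) from a pair of
# head-related impulse responses (HRIRs) by aligning them at the lag of
# peak cross-correlation. Toy HRIRs are invented for illustration.
import numpy as np

def remove_itd(hrir_left, hrir_right):
    """Shift the right HRIR so its best-aligned lag with the left is zero."""
    xcorr = np.correlate(hrir_left, hrir_right, mode="full")
    lag = int(np.argmax(xcorr)) - (len(hrir_right) - 1)
    return hrir_left, np.roll(hrir_right, lag)

# Toy HRIRs: identical responses, with the right ear delayed by 5 samples
left = np.zeros(64)
left[10], left[11] = 1.0, 0.5
right = np.roll(left, 5)
l, r = remove_itd(left, right)   # r is now time-aligned with l
```

After alignment the two ears differ only in level and spectrum, which is the condition needed to isolate the contribution of those cues to the release from masking.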
Human Factors | 2007
Ken I. McAnally; Russell L. Martin
Objective: The detection of target messages in a background of competing speech and the identification of the color/number combinations in those messages were examined in a continuous monitoring task. Background: Previous research has shown that if listeners know when and where to listen, speech-on-speech intelligibility is improved when signals are presented via a 3-D audio display as compared with a diotic display. However, the effect of display type on detection of infrequent target messages in a continuous monitoring task has not been examined. Method: Participants were required to monitor five communications channels conveying messages at random intervals under each of three audio display conditions: diotic, all channels in front, and channels separated in azimuth (3-D). Results: Message detection sensitivity was significantly higher for the 3-D condition than for the in-front condition but did not differ significantly between the in-front and the diotic conditions. There were no differences in response criteria across conditions. Color/number identification sensitivity also was significantly higher for the 3-D condition than for the in-front condition but did not differ significantly between the in-front and the diotic conditions. Conclusion: A 3-D audio display enhances both message detection and message identification in a continuous monitoring task. Application: Three-dimensional audio displays would be particularly beneficial in environments such as aviation, in which the information conveyed to operators via the auditory modality can be crucial to the safe and effective performance of their work.
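Detection sensitivity and response criterion of the kind reported above are conventionally computed from hit and false-alarm rates under signal detection theory: d' is the difference between the z-transformed rates, and the criterion c is their negative mean. The example counts below are invented for illustration; they are not data from the study.

```python
# Standard signal-detection computation of sensitivity (d') and response
# criterion (c) from trial counts. The example counts are invented.
from statistics import NormalDist

def d_prime_and_criterion(hits, misses, false_alarms, correct_rejections):
    z = NormalDist().inv_cdf                      # inverse normal CDF
    hit_rate = hits / (hits + misses)
    fa_rate = false_alarms / (false_alarms + correct_rejections)
    d_prime = z(hit_rate) - z(fa_rate)            # sensitivity
    criterion = -(z(hit_rate) + z(fa_rate)) / 2   # response bias
    return d_prime, criterion

dp, c = d_prime_and_criterion(45, 5, 10, 40)  # 90% hits, 20% false alarms
```

Equal criteria across display conditions, as reported above, mean that the sensitivity differences reflect genuine perceptual benefit of the 3-D display rather than a shift in response bias.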
Archive | 2011
Virginia Best; Douglas S. Brungart; Simon Carlile; Craig Jin; E. A. Macpherson; Russell L. Martin; Ken I. McAnally; A. T. Sabin; B. D. Simpson
This chapter briefly summarizes the results of a meta-analysis that examined auditory localization accuracy for more than 80,000 trials where brief broadband stimuli were presented anechoically in one of four different laboratories. The analyses were aimed at creating a comprehensive map of localization accuracy as a function of sound source location, and characterizing the distribution of responses along the “cone of confusion”. The results reveal trends in auditory localization whilst minimizing the influence of different experimental methodologies and response methods.