Network


Latest external collaborations at the country level. Dive into details by clicking on the dots.

Hotspot


Dive into the research topics where Ewan A. Macpherson is active.

Publications


Featured research published by Ewan A. Macpherson.


Journal of the Acoustical Society of America | 2000

Localization of brief sounds: Effects of level and background noise

Ewan A. Macpherson; John C. Middlebrooks

Listeners show systematic errors in vertical-plane localization of wide-band sounds when tested with brief-duration stimuli at high intensities, but long-duration sounds at any comfortable level do not produce such errors. Improvements in high-level sound localization associated with increased stimulus duration might result from temporal integration or from adaptation that allows reliable processing of later portions of the stimulus. Free-field localization judgments were obtained for clicks and for 3- and 100-ms noise bursts presented at sensation levels from 30 to 55 dB. For the brief (clicks and 3-ms) stimuli, listeners showed compression of elevation judgments and increased rates and unusual patterns of front/back confusion at sensation levels higher than 40-45 dB. At lower sensation levels, brief sounds were localized accurately. The localization task was repeated using 3-ms noise burst targets in a background of spatially diffuse, wide-band noise intended to pre-adapt the system prior to the target onset. For high-level targets, the addition of background noise afforded mild release from the elevation compression effect. Finally, a train of identical, high-level, 3-ms bursts was found to be localized more accurately than a single burst. These results support the adaptation hypothesis.
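A common way to quantify the "compression of elevation judgments" reported above is an elevation gain: the slope of a linear regression of response elevation on target elevation, with gains near 1 indicating veridical judgments and gains well below 1 indicating compression. The sketch below illustrates that metric; the metric choice and the data values are illustrative assumptions, not figures from the study.

```python
import numpy as np

def elevation_gain(target_el_deg, response_el_deg):
    """Slope of the least-squares fit of response elevation on target elevation.

    A gain near 1.0 indicates accurate elevation judgments; a gain well below
    1.0 indicates the kind of compression reported for brief, high-level stimuli.
    """
    slope, _intercept = np.polyfit(target_el_deg, response_el_deg, 1)
    return slope

# Hypothetical example: responses compressed toward the horizontal plane.
targets   = np.array([-40, -20, 0, 20, 40], dtype=float)
responses = np.array([-15,  -8, 2,  7, 16], dtype=float)
print(f"elevation gain = {elevation_gain(targets, responses):.2f}")  # well below 1
```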


Hearing Research | 2005

Human sound localization at near-threshold levels

Andrew T. Sabin; Ewan A. Macpherson; John C. Middlebrooks

Physiological studies of spatial hearing show that the spatial receptive fields of cortical neurons typically are narrow at near-threshold levels, broadening at moderate levels. The apparent loss of neuronal spatial selectivity at increasing sound levels conflicts with the accurate performance of human subjects localizing at moderate sound levels. In the present study, human sound localization was evaluated across a wide range of sensation levels, extending down to the detection threshold. Listeners reported whether they heard each target sound and, if the target was audible, turned their heads to face the apparent source direction. Head orientation was tracked electromagnetically. At near-threshold levels, the lateral (left/right) components of responses were highly variable and slightly biased towards the midline, and front vertical components consistently exhibited a strong bias towards the horizontal plane. Stimulus levels were specified relative to the detection threshold for a front-positioned source, so low-level rear targets often were inaudible. As the sound level increased, first lateral and then vertical localization neared asymptotic levels. The improvement of localization over a range of increasing levels, in which neural spatial receptive fields presumably are broadening, indicates that sound localization does not depend on narrow spatial receptive fields of cortical neurons.
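Head-pointing responses of this kind are often decomposed into a lateral (left/right) component and a polar (vertical and front/rear) component. Below is a minimal sketch of one common convention for that decomposition (x forward, y to the listener's left, z up); the specific convention is an assumption here, not taken from the paper.

```python
import numpy as np

def lateral_polar(azimuth_deg, elevation_deg):
    """Convert azimuth/elevation to lateral and polar angles.

    Convention assumed here: x points forward, y to the listener's left, z up;
    azimuth is measured leftward from straight ahead, elevation upward from
    the horizontal plane.
    """
    az = np.radians(azimuth_deg)
    el = np.radians(elevation_deg)
    x = np.cos(el) * np.cos(az)   # forward
    y = np.cos(el) * np.sin(az)   # left
    z = np.sin(el)                # up
    lateral = np.degrees(np.arcsin(y))     # -90 (right) .. +90 (left)
    polar = np.degrees(np.arctan2(z, x))   # 0 front-horizontal, 90 overhead, 180 rear
    return lateral, polar

print(lateral_polar(30.0, 0.0))   # purely lateral target
print(lateral_polar(0.0, 45.0))   # frontal, elevated target
```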


Hearing Research | 2008

Spatial sensitivity of neurons in the anterior, posterior, and primary fields of cat auditory cortex.

Ian A. Harrington; G. Christopher Stecker; Ewan A. Macpherson; John C. Middlebrooks

We assessed the spatial-tuning properties of units in the cat's anterior auditory field (AAF) and compared them with those observed previously in the primary (A1) and posterior auditory fields (PAF). Multi-channel, silicon-substrate probes were used to record single- and multi-unit activity from the right hemispheres of alpha-chloralose-anesthetized cats. Spatial tuning was assessed using broadband noise bursts that varied in azimuth or elevation. Response latencies were slightly, though significantly, shorter in AAF than in A1, and considerably shorter in both of those fields than in PAF. Compared to PAF, spike counts and latencies were more poorly modulated by changes in stimulus location in AAF and A1, particularly at higher sound pressure levels. Moreover, units in AAF and A1 demonstrated poorer level tolerance than units in PAF, with spike rates modulated as much by changes in stimulus intensity as by changes in stimulus location. Finally, spike-pattern-recognition analyses indicated that units in AAF transmitted less spatial information, on average, than did units in PAF, an observation consistent with recent evidence that PAF is necessary for sound-localization behavior, whereas AAF is not.
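The spike-pattern-recognition analyses referred to above classify stimulus location from response patterns and summarize performance as transmitted (mutual) information estimated from the resulting confusion matrix. Below is a generic sketch of that idea using leave-one-out nearest-centroid classification of binned spike counts; the classifier, binning, and demo data are illustrative assumptions rather than the authors' exact procedure.

```python
import numpy as np

def confusion_matrix(patterns, labels):
    """Leave-one-out nearest-centroid classification of spike patterns.

    patterns : (n_trials, n_bins) binned spike counts, one row per trial
    labels   : (n_trials,) integer stimulus-location labels
    Returns an (n_locations, n_locations) confusion matrix of trial counts.
    """
    patterns = np.asarray(patterns, dtype=float)
    labels = np.asarray(labels)
    locs = np.unique(labels)
    cm = np.zeros((locs.size, locs.size))
    for i in range(len(patterns)):
        keep = np.arange(len(patterns)) != i          # leave one trial out
        centroids = np.array([patterns[keep & (labels == loc)].mean(axis=0)
                              for loc in locs])
        guess = np.argmin(np.linalg.norm(centroids - patterns[i], axis=1))
        cm[np.searchsorted(locs, labels[i]), guess] += 1
    return cm

def transmitted_information(cm):
    """Mutual information (bits) between actual and classified location."""
    p = cm / cm.sum()
    px = p.sum(axis=1, keepdims=True)
    py = p.sum(axis=0, keepdims=True)
    with np.errstate(divide="ignore", invalid="ignore"):
        terms = np.where(p > 0, p * np.log2(p / (px * py)), 0.0)
    return float(terms.sum())

# Hypothetical demo: two locations producing different temporal response patterns.
rng = np.random.default_rng(0)
patterns = np.vstack([rng.poisson([5, 1, 1], (20, 3)),
                      rng.poisson([1, 1, 5], (20, 3))])
labels = np.repeat([0, 1], 20)
cm = confusion_matrix(patterns, labels)
print(cm)
print(f"transmitted information: {transmitted_information(cm):.2f} bits")
```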


The Neuroscientist | 2002

Book Review: Cortical Neurons That Localize Sounds

John C. Middlebrooks; Li Xu; Shigeto Furukawa; Ewan A. Macpherson

Efforts to locate a cortical map of auditory space generally have proven unsuccessful. At moderate sound levels, cortical neurons generally show large or unbounded spatial receptive fields. Within those large receptive fields, however, changes in sound location result in systematic changes in the temporal firing patterns such that single-neuron firing patterns can signal the locations of sound sources throughout as much as 360 degrees of auditory space. Neurons in the cat’s auditory cortex accurately signal the locations of broadband sounds, which human listeners localize accurately. Conversely, in response to filtered sounds that produce spatial illusions in human listeners, neurons signal systematically incorrect locations that can be predicted by a model that also predicts the listeners’ illusory reports. These results from the cat’s auditory cortex, as well as more limited results from nonhuman primates, suggest a model in which the location of any particular sound source is represented in a distributed fashion within individual auditory cortical areas and among multiple cortical areas.


Journal of the Acoustical Society of America | 2000

Psychophysical customization of directional transfer functions for virtual sound localization

John C. Middlebrooks; Ewan A. Macpherson; Zekiye A. Onsan

The accuracy of virtual localization when using nonindividualized external-ear transfer functions can be improved by scaling the transfer functions in frequency [Middlebrooks, J. Acoust. Soc. Am. 106, 1493–1510 (1999)]. The present letter describes a psychophysical procedure by which listeners identified appropriate scale factors. The procedure ran on nonspecialized equipment, took as little as 20 min, and could be used successfully by inexperienced listeners. Scale factors obtained from the psychophysical procedure approximated factors computed from acoustical measurements from individual listeners. Roughly equivalent virtual-localization accuracy was obtained using scale factors derived from acoustical measurements, from the psychophysical procedure, or from listeners’ physical dimensions.
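The frequency scaling referred to above shifts a directional transfer function's spectral features (peaks and notches) along the frequency axis by a single scale factor. Below is a minimal sketch of applying such a factor to a measured magnitude spectrum by interpolation; the function name, interpolation method, and example values are assumptions for illustration.

```python
import numpy as np

def scale_dtf_in_frequency(freqs_hz, magnitude_db, scale_factor):
    """Shift a DTF magnitude spectrum along the frequency axis by a scale factor.

    A factor > 1 moves spectral features (peaks/notches) upward in frequency;
    < 1 moves them downward. Values outside the measured range are held at
    the edge values.
    """
    scaled_freqs = freqs_hz * scale_factor
    return np.interp(freqs_hz, scaled_freqs, magnitude_db,
                     left=magnitude_db[0], right=magnitude_db[-1])

# Hypothetical example: a notch near 8 kHz moved upward by a 1.1 scale factor.
f = np.linspace(1000, 16000, 512)
mag = -10 * np.exp(-((f - 8000) / 500) ** 2)       # toy spectrum with one notch
mag_scaled = scale_dtf_in_frequency(f, mag, 1.1)   # notch now near 8.8 kHz
```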


Journal of the Acoustical Society of America | 1997

A comparison of spectral correlation and local feature‐matching models of pinna cue processing

Ewan A. Macpherson

Two models of spectral cue processing were compared to determine how the auditory system might extract directional information from pinna filtering. Previous experiments [E. Macpherson, J. Acoust. Soc. Am. 99, 2515 (1996)] suggest that the processing is robust to irregularity in source spectra, making effective use of pinna cues without utilizing prior knowledge of the source. The two candidate models attempt to achieve this by different means. The first uses correlation to find the best match between eardrum spectra (or their derivatives) and a set of stored templates associated with particular directions. The second makes the match based on prominent local spectral features such as peaks and notches. The models’ predictions were compared to listeners’ responses in localization experiments involving free‐field presentation of stimuli with a variety of nonflat spectra. Correlational matching of the spectral gradient gave the best agreement overall, while feature‐matching proved unrealistically sensitive t...
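To make the contrast concrete: the correlational model compares the observed eardrum spectrum, or its gradient across frequency, against stored direction-specific templates and selects the best-matching direction, whereas the feature-matching model compares only the frequencies of prominent peaks and notches. The sketch below implements only the correlational, spectral-gradient variant; the template format and scoring details are assumptions for illustration.

```python
import numpy as np

def localize_by_spectral_gradient(eardrum_db, templates_db):
    """Pick the direction whose stored template best matches the observed spectrum.

    eardrum_db   : (n_freq,) observed eardrum magnitude spectrum in dB
    templates_db : dict mapping direction label -> (n_freq,) stored DTF in dB
    Matching is done on the spectral gradient (first difference across
    frequency), which discounts smooth, direction-independent source slopes.
    """
    obs_grad = np.diff(np.asarray(eardrum_db, dtype=float))
    scores = {}
    for direction, template in templates_db.items():
        tmp_grad = np.diff(np.asarray(template, dtype=float))
        scores[direction] = np.corrcoef(obs_grad, tmp_grad)[0, 1]  # Pearson r
    return max(scores, key=scores.get), scores
```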


Journal of the Acoustical Society of America | 1996

Effects of source spectrum irregularity and uncertainty on sound localization.

Ewan A. Macpherson

Localization performance has been shown to deteriorate when source spectra are variable, but the few studies investigating this have confounded trial‐to‐trial variability with spectral irregularity present in each stimulus. In this experiment these factors were varied independently. Blindfolded listeners seated in an anechoic chamber reported the apparent locations of wideband noise bursts having either smoothly sloping spectra or 1/3‐oct scrambled spectra. The spectra were either varied from trial to trial or held constant throughout an experimental session. Localization performance was compared to that obtained in a baseline condition using flat‐spectrum stimuli. Despite trial‐to‐trial uncertainty, performance with the smoothly varying spectra was equivalent to baseline. The responses to the majority of the scrambled spectrum stimuli were similar to baseline, but pronounced bias and elevated variability were observed at certain pairings of source spectrum and location that varied between listeners. The ...
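The 1/3-octave scrambled stimuli described above apply independent random level offsets to contiguous 1/3-octave bands of a wideband noise. Below is a minimal frequency-domain sketch of constructing such a stimulus; the band edges, perturbation range, and sampling parameters are illustrative assumptions.

```python
import numpy as np

def third_octave_scrambled_noise(fs=44100, dur=0.25, f_lo=500.0, f_hi=16000.0,
                                 scramble_db=20.0, rng=None):
    """Band-limited noise with a random level offset applied per 1/3-octave band."""
    rng = np.random.default_rng() if rng is None else rng
    n = int(fs * dur)
    spectrum = (rng.standard_normal(n // 2 + 1)
                + 1j * rng.standard_normal(n // 2 + 1))
    freqs = np.fft.rfftfreq(n, 1.0 / fs)

    # 1/3-octave band edges from f_lo up to (at least) f_hi
    edges = [f_lo]
    while edges[-1] < f_hi:
        edges.append(edges[-1] * 2 ** (1 / 3))

    gain = np.zeros_like(freqs)
    for lo, hi in zip(edges[:-1], edges[1:]):
        band = (freqs >= lo) & (freqs < hi)
        offset_db = rng.uniform(-scramble_db / 2, scramble_db / 2)
        gain[band] = 10 ** (offset_db / 20)

    signal = np.fft.irfft(spectrum * gain, n)
    return signal / np.max(np.abs(signal))
```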


Journal of the Acoustical Society of America | 2011

Accurate sound localization via head movements in listeners with precipitous high‐frequency hearing loss.

Ewan A. Macpherson; M. Alasdair Cumming; Robert W. Quelch

Information about sound source location in the vertical plane is available via the direction‐dependent filtering performed by the outer ears, but errors of localization such as front/rear reversals can occur when stimuli contain a limited range of frequencies or when high frequencies are inaudible due to hearing impairment. Information about front/rear sound source location is also available in the relationship between the rotation of the head and the resulting changes in interaural time and level differences. We have shown previously [Macpherson, J. Acoust. Soc. Am. 125, 2691(A) (2009)] that in normally hearing listeners, a minimum head movement angle (MHMA) of 5–10 deg is sufficient for accurate front/rear localization of low‐frequency (0.5–1 kHz) noise‐band targets. In the present study, we measured MHMAs for low‐frequency and wideband (0.5–16 kHz) targets in listeners with near‐normal low‐frequency thresholds but precipitous hearing loss above 1–2 kHz. Neither stimulus could be localized accurately by...
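The dynamic cue at work here is the covariation between head rotation and the resulting change in interaural time difference (ITD): a rightward head turn shifts the ITD toward the left ear for a frontal source but toward the right ear for a rear source. Below is a minimal sketch of that disambiguation rule under a simple spherical-head (Woodworth-style) ITD approximation; the head radius and turn angle are assumed values.

```python
import numpy as np

HEAD_RADIUS_M = 0.0875    # assumed spherical-head radius
SPEED_OF_SOUND = 343.0    # m/s

def itd_spherical(azimuth_deg):
    """Woodworth-style ITD approximation for a far-field source.

    Positive azimuth = source to the right; positive ITD = right ear leads.
    Front and rear azimuths are folded onto the same lateral angle, since
    ITD depends mainly on laterality.
    """
    az = np.radians(azimuth_deg)
    lateral = np.arcsin(np.sin(az))
    return HEAD_RADIUS_M / SPEED_OF_SOUND * (lateral + np.sin(lateral))

def front_or_rear(source_azimuth_deg, head_turn_deg=5.0):
    """Classify front vs rear from the ITD change produced by a head turn.

    A rightward turn (positive head_turn_deg) moves a frontal source toward
    the left of the head-relative frame and a rear source toward the right,
    so the sign of the ITD change relative to the turn direction
    disambiguates front from rear.
    """
    itd_before = itd_spherical(source_azimuth_deg)
    itd_after = itd_spherical(source_azimuth_deg - head_turn_deg)
    delta = itd_after - itd_before
    return "front" if np.sign(delta) == -np.sign(head_turn_deg) else "rear"

print(front_or_rear(0.0))     # source straight ahead  -> "front"
print(front_or_rear(180.0))   # source directly behind -> "rear"
```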


Journal of the Acoustical Society of America | 2013

Cue weighting and vestibular mediation of temporal dynamics in sound localization via head rotation

Ewan A. Macpherson

Our studies have quantified the salience and weighting of dynamic acoustic cues in sound localization via head rotation. Results support three key findings: (1) low-frequency interaural time-difference (ITD) is the dominant dynamic binaural difference cue; (2) when available, high-frequency spectral cues dominate front/rear localization; and (3) the temporal dynamics of dynamic cue processing are particular to auditory-vestibular integration. ITD dominance is shown indirectly in findings that head movements are highly effective for localizing low-frequency targets but not narrow-band high-frequency targets and that only normal low-frequency hearing is required to localize via dynamic cues. Direct evidence comes from manipulation of dynamic binaural cues in spherical-head simulations lacking spectral cues. If the stimulus provides access to dominant high-frequency spectral cues, location illusions involving head-coupled source motion fail. For low-frequency targets, localization performance improves with i...
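The spherical-head simulations mentioned above present only interaural time and level differences, with no pinna (spectral) cues. Below is a minimal sketch of the ITD-only part of such a rendering using a whole-sample channel delay; a dynamic version would update the delay block by block as the head turns. The function name, sampling rate, and the omission of ILD and fractional-delay interpolation are simplifying assumptions.

```python
import numpy as np

def render_itd_only(mono, itd_s, fs=44100):
    """Binaural rendering that carries only an interaural time difference.

    Positive itd_s delays the left channel so the right ear leads, as for a
    source on the right; no pinna (spectral) cues are applied, mimicking a
    spherical-head simulation. The delay is rounded to whole samples.
    """
    delay = int(round(abs(itd_s) * fs))
    delayed = np.concatenate([np.zeros(delay), mono])[: len(mono)]
    left, right = (delayed, mono) if itd_s >= 0 else (mono, delayed)
    return np.column_stack([left, right])

# Hypothetical use: a white-noise carrier with a 300-microsecond ITD
# (band-limiting to 0.5-1 kHz omitted for brevity).
rng = np.random.default_rng(1)
noise = rng.standard_normal(44100)
stereo = render_itd_only(noise, itd_s=300e-6)
```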


Journal of the Acoustical Society of America | 2009

Some brain mechanisms for auditory scene analysis.

John C. Middlebrooks; Chen-Chung Lee; Ewan A. Macpherson

Detection and discrimination of sounds in complex auditory environments is facilitated by knowledge of the locations of sound sources and by spatial separation of signals and maskers. In studies of the spatial sensitivity of auditory cortical neurons in anesthetized cats, we find that location‐dependent variation in temporal spike patterns can signal sound‐source locations throughout up to 360 deg of space. In awake cats, spatial receptive fields can be similarly broad, although cortical neurons show a greater diversity of temporal patterns than is seen in anesthetized conditions. Our recent results demonstrate that spatial tuning tends to sharpen when a cat is engaged in an auditory task, compared to idle conditions, and that tuning can sharpen even further when the task involves sound localization. In human psychophysical studies, spatial hearing permits perceptual “stream segregation” of multiple temporally interleaved sequences of sounds. In parallel studies of the auditory cortex in anesthetized cats...

Collaboration


Dive into Ewan A. Macpherson's collaboration.

Top Co-Authors

G. Christopher Stecker
Kresge Hearing Research Institute

Andrew T. Sabin
Kresge Hearing Research Institute

Chen-Chung Lee
Kresge Hearing Research Institute

G.C. Stecker
University of Washington