Network


Latest external collaborations at the country level. Dive into details by clicking on the dots.

Hotspot


Dive into the research topics where Ewen N. MacDonald is active.

Publication


Featured research published by Ewen N. MacDonald.


The Journal of Neuroscience | 2003

Activation of the TrkB Neurotrophin Receptor Is Induced by Antidepressant Drugs and Is Required for Antidepressant-Induced Behavioral Effects

Tommi Saarelainen; Panu Hendolin; Guilherme Lucas; Eija Koponen; Mikko Sairanen; Ewen N. MacDonald; Karin Agerman; Annakaisa Haapasalo; Hiroyuki Nawa; Raquel Aloyz; Patrik Ernfors; Eero Castrén

Recent studies have indicated that exogenously administered neurotrophins produce antidepressant-like behavioral effects. We have here investigated the role of endogenous brain-derived neurotrophic factor (BDNF) and its receptor trkB in the mechanism of action of antidepressant drugs. We found that trkB.T1-overexpressing transgenic mice, which show reduced trkB activation in brain, as well as heterozygous BDNF null (BDNF+/−) mice, were resistant to the effects of antidepressants in the forced swim test, indicating that normal trkB signaling is required for the behavioral effects typically produced by antidepressants. In contrast, neurotrophin-3+/− mice showed a normal behavioral response to antidepressants. Furthermore, acute as well as chronic antidepressant treatment induced autophosphorylation and activation of trkB in cerebral cortex, particularly in the prefrontal and anterior cingulate cortex and hippocampus. Tyrosines in the trkB autophosphorylation site were phosphorylated in response to antidepressants, but phosphorylation of the shc binding site was not observed. Nevertheless, phosphorylation of cAMP response element-binding protein was increased by antidepressants in the prefrontal cortex concomitantly with trkB phosphorylation and this response was reduced in trkB.T1-overexpressing mice. Our data suggest that antidepressants acutely increase trkB signaling in a BDNF-dependent manner in cerebral cortex and that this signaling is required for the behavioral effects typical of antidepressant drugs. Neurotrophin signaling increased by antidepressants may induce formation and stabilization of synaptic connectivity, which gradually leads to the clinical antidepressive effects and mood recovery.


Hearing Research | 2007

Temporal jitter disrupts speech intelligibility: A simulation of auditory aging

M. Kathleen Pichora-Fuller; Bruce A. Schneider; Ewen N. MacDonald; Hollis Pass; Sasha Brown

We disrupted periodicity cues by temporally jittering the speech signal to explore how such distortion might affect word identification. Jittering distorts the fine structure of the speech signal with negligible alteration of either its long-term spectral or amplitude envelope characteristics. In Experiment 1, word identification in noise was significantly reduced in young, normal-hearing adults when sentences were temporally jittered at frequencies below 1.2 kHz. The accuracy of the younger adults in identifying jittered speech in noise was similar to that found previously for older adults with good audiograms when they listened to intact speech in noise. In Experiment 2, to rule out the possibility that the reductions in word identification were due to spectral distortion, we also tested a simulation of cochlear hearing loss that produced spectral distortion equivalent to that produced by jittering, but this simulation had significantly less temporal distortion than was produced by jittering. There was no significant reduction in the accuracy of word identification when only the frequency region below 1.2 kHz was spectrally distorted. Hence, it is the temporal distortion rather than the spectral distortion of the low-frequency components that disrupts word identification.
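The jittering manipulation described in this abstract can be sketched as a band split followed by a random time-warp of the low band. This is only a minimal illustration of the idea; the cutoff, filter order, jitter magnitude, and smoothing choices below are assumptions, not the published parameters.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def jitter_low_band(signal, fs, cutoff=1200.0, max_jitter_ms=0.5, seed=0):
    """Illustrative temporal jitter: split the signal at `cutoff`,
    warp the low band's time axis with a smoothed random delay
    trajectory, then recombine with the untouched high band."""
    sos_lo = butter(4, cutoff, btype="low", fs=fs, output="sos")
    sos_hi = butter(4, cutoff, btype="high", fs=fs, output="sos")
    low = sosfiltfilt(sos_lo, signal)
    high = sosfiltfilt(sos_hi, signal)

    rng = np.random.default_rng(seed)
    n = len(signal)
    # Random per-sample delays (in samples), smoothed so the warp is gradual
    delay = rng.normal(0.0, max_jitter_ms * 1e-3 * fs, n)
    delay = np.convolve(delay, np.ones(64) / 64, mode="same")

    t = np.arange(n, dtype=float)
    jittered_low = np.interp(t + delay, t, low)  # resample low band on warped axis
    return jittered_low + high
```

Because only the low band's fine structure is warped, the long-term spectrum and the amplitude envelope are left largely intact, which is the property the abstract emphasizes.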


Journal of the Acoustical Society of America | 2010

Compensations in response to real-time formant perturbations of different magnitudes

Ewen N. MacDonald; Robyn Goldberg; Kevin G. Munhall

Previous auditory perturbation studies have demonstrated that talkers spontaneously compensate for real-time formant-shifts by altering formant production in a manner opposite to the perturbation. Here, two experiments were conducted to examine the effect of amplitude of perturbation on the compensatory behavior for the vowel /ε/. In the first experiment, 20 male talkers received three step-changes in acoustic feedback: F1 was increased by 50, 100, and 200 Hz, while F2 was simultaneously decreased by 75, 125, and 250 Hz. In the second experiment, 21 male talkers received acoustic feedback in which the shifts in F1 and F2 were incremented by +4 and -5 Hz on each utterance to a maximum of +350 and -450 Hz, respectively. In both experiments, talkers altered production of F1 and F2 in a manner opposite to that of the formant-shift perturbation. Compensation was approximately 25%-30% of the perturbation magnitude for shifts in F1 and F2 up to 200 and 250 Hz, respectively. As larger shifts were applied, compensation reached a plateau and then decreased. The similarity of results across experiments suggests that the compensatory response is dependent on the perturbation magnitude but not on the rate at which the perturbation is introduced.
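The incremental ramp used in the second experiment can be written out directly from the numbers in the abstract (+4 Hz F1 and -5 Hz F2 per utterance, saturating at +350 and -450 Hz). The function name and interface are illustrative, not from the paper.

```python
def perturbation_schedule(n_utterances, f1_step=4.0, f2_step=-5.0,
                          f1_max=350.0, f2_min=-450.0):
    """Return the (F1, F2) feedback shift in Hz applied on each utterance,
    ramping linearly and clamping at the maximum shifts."""
    shifts = []
    for i in range(1, n_utterances + 1):
        f1_shift = min(i * f1_step, f1_max)   # F1 raised, capped at +350 Hz
        f2_shift = max(i * f2_step, f2_min)   # F2 lowered, capped at -450 Hz
        shifts.append((f1_shift, f2_shift))
    return shifts

# By roughly the 90th utterance the ramp has reached its ceiling:
print(perturbation_schedule(90)[-1])  # (350.0, -450.0)
```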


Journal of Occupational and Environmental Hygiene | 2004

Noise exposure of music teachers.

Alberto Behar; Ewen N. MacDonald; Jason Y. Lee; Jie Cui; Hans Kunov; Willy Wong

A noise exposure survey was performed to assess the risk of hearing loss to school music teachers during the course of their activities. Noise exposure of 18 teachers from 15 schools was measured using noise dosimeters. The equivalent continuous noise level (Leq) of each teacher was recorded during single activities (classes) as well as for the entire day, and a normalized 8-hour exposure, termed the noise exposure level (Lex), was also computed. The measured Leq exceeded the 85-dBA limit for 78% of the teachers. Lex exceeded 85 dBA for 39% of the teachers. Limited recommendations on how to reduce the noise exposures are provided. The need for a hearing conservation program is also emphasized.
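The 8-hour normalization mentioned here follows the standard energy-equivalent relation Lex = Leq + 10·log10(T/8), with T the measurement duration in hours. A minimal sketch (the function name and example values are illustrative, not figures from the survey):

```python
import math

def lex_from_leq(leq_dba, duration_hours, reference_hours=8.0):
    """Normalize a measured Leq (dBA) over `duration_hours` to the
    equivalent 8-hour exposure level Lex."""
    return leq_dba + 10.0 * math.log10(duration_hours / reference_hours)

# A 4-hour teaching day at Leq = 88 dBA is energy-equivalent to
# an 8-hour day at about 85 dBA:
print(round(lex_from_leq(88.0, 4.0), 1))  # 85.0
```

Halving the exposure duration lowers Lex by about 3 dB, which is why a short but loud class can still push a teacher's normalized daily exposure over the 85-dBA limit.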


Ear and Hearing | 2012

Word recognition for temporally and spectrally distorted materials: the effects of age and hearing loss.

Sherri L. Smith; Margaret K. Pichora-Fuller; Richard H. Wilson; Ewen N. MacDonald

Objectives: The purpose of Experiment 1 was to measure word recognition in younger adults with normal hearing when speech or babble was temporally or spectrally distorted. In Experiment 2, older listeners with near-normal hearing and with hearing loss (for pure tones) were tested to evaluate their susceptibility to changes in speech level and distortion types. The results across groups and listening conditions were compared to assess the extent to which the effects of the distortions on word recognition resembled the effects of age-related differences in auditory processing or pure-tone hearing loss. Design: In Experiment 1, word recognition was measured in 16 younger adults with normal hearing using Northwestern University Auditory Test No. 6 words in quiet and the Words-in-Noise test distorted by temporal jittering, spectral smearing, or combined jittering and smearing. Another 16 younger adults were evaluated in four conditions using the Words-in-Noise test in combinations of unaltered or jittered speech and unaltered or jittered babble. In Experiment 2, word recognition in quiet and in babble was measured in 72 older adults with near-normal hearing and 72 older adults with hearing loss in four conditions: unaltered, jittered, smeared, and combined jittering and smearing. Results: For the listeners in Experiment 1, word recognition was poorer in the distorted conditions compared with the unaltered condition. The signal to noise ratio at 50% correct word recognition was 4.6 dB for the unaltered condition, 6.3 dB for the jittered, 6.8 dB for the smeared, 6.9 dB for the double-jitter, and 8.2 dB for the combined jitter-smear conditions. Jittering both the babble and speech signals did not significantly reduce performance compared with jittering only the speech. 
In Experiment 2, the older listeners with near-normal hearing and hearing loss performed best in the unaltered condition, followed by the jitter and smear conditions, with the poorest performance in the combined jitter-smear condition in both quiet and noise. Overall, listeners with near-normal hearing performed better than listeners with hearing loss by ~30% in quiet and ~6 dB in noise. In the quiet distorted conditions, when the level of the speech was increased, performance improved for the hearing loss group, but decreased for the older group with near-normal hearing. Recognition performance of younger listeners in the jitter-smear condition and the performance of older listeners with near-normal hearing in the unaltered conditions were similar. Likewise, the performance of older listeners with near-normal hearing in the jitter-smear condition and the performance of older listeners with hearing loss in the unaltered conditions were similar. Conclusions: The present experiments advance our understanding regarding how spectral or temporal distortions of the fine structure of speech affect word recognition in older listeners with and without clinically significant hearing loss. The Speech Intelligibility Index was able to predict group differences, but not the effects of distortion. Individual differences in performance were similar across all distortion conditions with both age and hearing loss being implicated. The speech materials needed to be both spectrally and temporally distorted to mimic the effects of age-related differences in auditory processing and hearing loss.


Journal of the Acoustical Society of America | 2011

A cross-language study of compensation in response to real-time formant perturbation.

Takashi Mitsuya; Ewen N. MacDonald; David W. Purcell; Kevin G. Munhall

Past studies have shown that when formants are perturbed in real time, speakers spontaneously compensate for the perturbation by changing their formant frequencies in the opposite direction to the perturbation. Further, the pattern of these results suggests that the processing of auditory feedback error operates at a purely acoustic level. This hypothesis was tested by comparing the response of three language groups to real-time formant perturbations: (1) native English speakers producing an English vowel /ε/, (2) native Japanese speakers producing a Japanese vowel (/e̞/), and (3) native Japanese speakers learning English, producing /ε/. All three groups showed similar production patterns when F1 was decreased; however, when F1 was increased, the Japanese groups did not compensate as much as the native English speakers. Due to this asymmetry, the hypothesis that the compensatory production for formant perturbation operates at a purely acoustic level was rejected. Rather, some level of phonological processing influences the feedback processing behavior.


The Journal of Neuroscience | 2013

Multivoxel patterns reveal functionally differentiated networks underlying auditory feedback processing of speech.

Zane Z. Zheng; Alejandro Vicente-Grabovetsky; Ewen N. MacDonald; Kevin G. Munhall; Rhodri Cusack; Ingrid S. Johnsrude

The everyday act of speaking involves the complex processes of speech motor control. An important component of control is monitoring, detection, and processing of errors when auditory feedback does not correspond to the intended motor gesture. Here we show, using fMRI and converging operations within a multivoxel pattern analysis framework, that this sensorimotor process is supported by functionally differentiated brain networks. During scanning, a real-time speech-tracking system was used to deliver two acoustically different types of distorted auditory feedback or unaltered feedback while human participants were vocalizing monosyllabic words, and to present the same auditory stimuli while participants were passively listening. Whole-brain analysis of neural-pattern similarity revealed three functional networks that were differentially sensitive to distorted auditory feedback during vocalization, compared with during passive listening. One network of regions appears to encode an “error signal” regardless of acoustic features of the error: this network, including right angular gyrus, right supplementary motor area, and bilateral cerebellum, yielded consistent neural patterns across acoustically different, distorted feedback types, only during articulation (not during passive listening). In contrast, a frontotemporal network appears sensitive to the speech features of auditory stimuli during passive listening; this preference for speech features was diminished when the same stimuli were presented as auditory concomitants of vocalization. A third network, showing a distinct functional pattern from the other two, appears to capture aspects of both neural response profiles. Together, our findings suggest that auditory feedback processing during speech motor control may rely on multiple, interactive, functionally differentiated neural systems.


PLOS ONE | 2011

Perceiving a Stranger's Voice as Being One's Own: A ‘Rubber Voice’ Illusion?

Zane Z. Zheng; Ewen N. MacDonald; Kevin G. Munhall; Ingrid S. Johnsrude

We describe an illusion in which a stranger's voice, when presented as the auditory concomitant of a participant's own speech, is perceived as a modified version of their own voice. When the congruence between utterance and feedback breaks down, the illusion is also broken. Compared to a baseline condition in which participants heard their own voice as feedback, hearing a stranger's voice induced robust changes in the fundamental frequency (F0) of their production. Moreover, the shift in F0 appears to be feedback dependent, since shift patterns depended reliably on the relationship between the participant's own F0 and the stranger-voice F0. The shift in F0 was evident both when the illusion was present and after it was broken, suggesting that auditory feedback from production may be used separately for self-recognition and for vocal motor control. Our findings indicate that self-recognition of voices, like other body attributes, is malleable and context dependent.


Hearing Research | 2010

Effects on speech intelligibility of temporal jittering and spectral smearing of the high-frequency components of speech.

Ewen N. MacDonald; M. Kathleen Pichora-Fuller; Bruce A. Schneider

In a previous study, we demonstrated that word recognition performance was reduced when the low-frequency components of speech (0-1.2 kHz) were distorted by temporal jittering, but not when they were distorted by spectral smearing (Pichora-Fuller et al., 2007). Temporal jittering distorts the fine structure of the speech signal with negligible alteration of either its long-term spectral or amplitude envelope characteristics. Spectral smearing simulates the effects of broadened auditory filters that occur with cochlear hearing loss (Baer and Moore, 1993). In the present study, the high-frequency components of speech (1.2-7 kHz) were distorted with jittering and smearing. Word recognition in noise for both distortion conditions was poorer than in the intact condition. However, unlike our previous study, no significant difference was found in word recognition performance in the two distorted conditions. Whereas temporal distortion seems to have a deleterious effect that cannot be attributed to spectral distortion when only the lower frequencies are distorted, when the higher frequencies are distorted both temporal and spectral distortion reduce speech intelligibility.


Ear and Hearing | 2016

Exploring the Relationship Between Working Memory, Compressor Speed, and Background Noise Characteristics

Barbara Ohlenforst; Pamela E. Souza; Ewen N. MacDonald

Objectives: Previous work has shown that individuals with lower working memory demonstrate reduced intelligibility for speech processed with fast-acting compression amplification. This relationship has been noted in fluctuating noise, but the extent of noise modulation that must be present to elicit such an effect is unknown. This study expanded on previous work by exploring the effect of background noise modulations in relation to compression speed and working memory ability, using a range of signal to noise ratios. Design: Twenty-six older participants between ages 61 and 90 years were grouped by high or low working memory according to their performance on a reading span test. Speech intelligibility was measured for low-context sentences presented in background noise, where the noise varied in the extent of amplitude modulation. Simulated fast- or slow-acting compression amplification combined with individual frequency-gain shaping was applied to compensate for the individual’s hearing loss. Results: Better speech intelligibility scores were observed for participants with high working memory when fast compression was applied than when slow compression was applied. The low working memory group behaved in the opposite way and performed better under slow compression compared with fast compression. There was also a significant effect of the extent of amplitude modulation in the background noise, such that the magnitude of the score difference (fast versus slow compression) depended on the number of talkers in the background noise. The presented signal to noise ratios were not a significant factor in the measured intelligibility performance. Conclusion: In agreement with earlier research, high working memory allowed better speech intelligibility when fast compression was applied in modulated background noise. In the present experiment, that effect was present regardless of the extent of background noise modulation.
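The fast-versus-slow distinction in this abstract comes down to the compressor's attack and release time constants: short constants let the gain track moment-to-moment level fluctuations, long constants smooth over them. A generic gain-smoothing sketch of that idea (not the study's simulated amplification; threshold, ratio, and time constants are illustrative assumptions):

```python
import math

def compress_gain_track(envelope_db, fs, threshold_db=50.0, ratio=3.0,
                        attack_ms=5.0, release_ms=50.0):
    """Return the smoothed gain reduction (in dB) a simple compressor
    would apply to a per-sample level envelope given in dB.
    Short attack/release -> fast-acting; long -> slow-acting."""
    a_att = math.exp(-1.0 / (fs * attack_ms / 1000.0))
    a_rel = math.exp(-1.0 / (fs * release_ms / 1000.0))
    gain = 0.0
    gains = []
    for level in envelope_db:
        over = max(0.0, level - threshold_db)
        target = -over * (1.0 - 1.0 / ratio)  # static gain reduction in dB
        # Attack when more reduction is needed, release when less
        a = a_att if target < gain else a_rel
        gain = a * gain + (1.0 - a) * target
        gains.append(gain)
    return gains
```

With a 3:1 ratio, a steady 70-dB input against a 50-dB threshold settles at about -13.3 dB of gain reduction; what differs between fast and slow compression is only how quickly the gain reaches that value when the envelope fluctuates.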

Collaboration


Dive into Ewen N. MacDonald's collaborations.

Top Co-Authors

Torsten Dau
Technical University of Denmark

Michal Fereczkowski
Technical University of Denmark

Borys Kowalewski
Technical University of Denmark

Takashi Mitsuya
University of Western Ontario

Sébastien Santurette
Technical University of Denmark

David W. Purcell
University of Western Ontario

Olaf Strelcyk
Technical University of Denmark