Publications


Featured research published by Matthew B. Fitzgerald.


Proceedings of the National Academy of Sciences of the United States of America | 2001

Different patterns of human discrimination learning for two interaural cues to sound-source location

Beverly A. Wright; Matthew B. Fitzgerald

Two of the primary cues used to localize the sources of sounds are interaural level differences (ILDs) and interaural time differences (ITDs). We conducted two experiments to explore how practice affects the human discrimination of values of ILDs and ongoing ITDs presented over headphones. We measured discrimination thresholds of 13 to 32 naive listeners in a variety of conditions during a pretest and again, 2 weeks later, during a posttest. Between those two tests, we trained a subset of listeners 1 h per day for 9 days on a single ILD or ITD condition. Listeners improved on both ILD and ITD discrimination. Improvement was initially rapid for both cue types and appeared to generalize broadly across conditions, indicating conceptual or procedural learning. A subsequent slower-improvement stage, which occurred solely for the ILD cue, only affected conditions with the trained stimulus frequency, suggesting that stimulus processing had fundamentally changed. These different learning patterns indicate that practice affects the attention to, or low-level encoding of, ILDs and ITDs at sites at which the two cue types are processed separately. Thus, these data reveal differences in the effect of practice on ILD and ITD discrimination, and provide insight into the encoding of these two cues to sound-source location in humans.
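For orientation, the two cues studied here can be imposed on a mono signal to build a stereo headphone stimulus in a few lines. The sketch below is illustrative only; the function name, the symmetric dB split of the ILD, and the whole-sample delay are assumptions, not the study's stimulus code:

```python
import numpy as np

def apply_ild_itd(mono, fs, ild_db=0.0, itd_s=0.0):
    """Impose an interaural level difference (ILD, in dB) and an
    interaural time difference (ITD, in seconds) on a mono signal,
    returning a (2, n) array of (left, right) headphone channels.

    Toy version: the ILD is split symmetrically between the ears, and
    the ITD is quantized to whole samples and applied as a circular
    shift, which is acceptable for short zero-padded stimuli only.
    """
    left = mono * 10 ** (+ild_db / 40.0)
    right = mono * 10 ** (-ild_db / 40.0)
    right = np.roll(right, int(round(itd_s * fs)))  # positive ITD: right ear lags
    return np.stack([left, right])
```

A 6 dB ILD then yields a left/right amplitude ratio of 10^(6/20) ≈ 2, while a 500 µs ITD at a 44.1 kHz sampling rate corresponds to a 22-sample lag.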


Journal of the Acoustical Society of America | 2005

A perceptual learning investigation of the pitch elicited by amplitude-modulated noise.

Matthew B. Fitzgerald; Beverly A. Wright

Noise that is amplitude modulated at rates ranging from 40 to 850 Hz can elicit a sensation of pitch. Here, the processing of this temporally based pitch was investigated using a perceptual-learning paradigm. Nine listeners were trained (1 hour per day for 6-8 days) to discriminate a standard rate of sinusoidal amplitude modulation (SAM) from a faster rate in a single condition (150 Hz SAM rate, 5 kHz low-pass carrier). All trained listeners improved significantly on that condition. These trained listeners subsequently showed no more improvement than nine untrained controls on pure-tone and rippled-noise discrimination with the same pitch, and on SAM-rate discrimination with a 30 Hz rate, although they did show some improvement with a 300 Hz rate. In addition, most trained, but not control, listeners were worse at detecting SAM at 150 Hz after, compared to before training. These results indicate that listeners can learn to improve their ability to discriminate SAM rate with multiple-hour training and that the mechanism that is modified by learning encodes (1) the pitch of SAM noise but not that of pure tones and rippled noise, (2) different SAM rates separately, and (3) differences in SAM rate more effectively than cues for SAM detection.
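The stimulus class at issue can be generated in a few lines. This is a minimal sketch (the function name and parameters are illustrative, and the study's 5 kHz low-pass filtering of the carrier is omitted):

```python
import numpy as np

def sam_noise(rate_hz=150.0, dur_s=0.5, fs=44100, depth=1.0, seed=0):
    """Sinusoidally amplitude-modulated (SAM) noise: a Gaussian-noise
    carrier multiplied by a raised sinusoidal envelope at rate_hz.
    With depth=1 the envelope reaches zero once per modulation cycle.
    """
    rng = np.random.default_rng(seed)
    t = np.arange(int(dur_s * fs)) / fs
    envelope = 1.0 + depth * np.sin(2.0 * np.pi * rate_hz * t)
    return envelope * rng.standard_normal(t.size)
```

At modulation rates in roughly the 40-850 Hz range mentioned above, such noise elicits a pitch at the modulation rate, which is the percept the discrimination task targets.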


The Journal of Neuroscience | 2006

Perceptual-Learning Evidence for Separate Processing of Asynchrony and Order Tasks

Julia Mossbridge; Matthew B. Fitzgerald; Erin S. O'Connor; Beverly A. Wright

Normal perception depends, in part, on accurate judgments of the temporal relationships between sensory events. Two such relative-timing skills are the ability to detect stimulus asynchrony and to discriminate stimulus order. Here we investigated the neural processes contributing to the performance of auditory asynchrony and order tasks in humans, using a perceptual-learning paradigm. In each of two parallel experiments, we tested listeners on a pretest and a posttest consisting of auditory relative-timing conditions. Between these two tests, we trained a subset of listeners ∼1 h/d for 6–8 d on a single relative-timing condition. The trained listeners practiced asynchrony detection in one experiment and order discrimination in the other. Both groups were trained at sound onset with tones at 0.25 and 4.0 kHz. The remaining listeners in each experiment, who served as controls, did not receive multihour training during the 8–10 d between the pretest and posttest. These controls improved even without intervening training, adding to evidence that a single session of exposure to perceptual tasks can yield learning. Most importantly, each of the two groups of trained listeners learned more on their respective trained conditions than controls, but this learning occurred only on the two trained conditions. Neither group of trained listeners generalized their learning to the other task (order or asynchrony), an untrained temporal position (sound offset), or untrained frequency pairs. Thus, it appears that multihour training on relative-timing skills affects task-specific neural circuits that are tuned to a given temporal position and combination of stimulus components.


The Journal of Neuroscience | 2010

Enhancing Perceptual Learning by Combining Practice with Periods of Additional Sensory Stimulation

Beverly A. Wright; Andrew T. Sabin; Yuxuan Zhang; Nicole Marrone; Matthew B. Fitzgerald

Perceptual skills can be improved even in adulthood, but this learning seldom occurs by stimulus exposure alone. Instead, it requires considerable practice performing a perceptual task with relevant stimuli. It is thought that task performance permits the stimuli to drive learning. A corresponding assumption is that the same stimuli do not contribute to improvement when encountered separately from relevant task performance because of the absence of this permissive signal. However, these ideas are based on only two types of studies, in which the task was either always performed or not performed at all. Here we demonstrate enhanced perceptual learning on an auditory frequency-discrimination task in human listeners when practice on that target task was combined with additional stimulation. Learning was enhanced regardless of whether the periods of additional stimulation were interleaved with or provided exclusively before or after target-task performance, and even though that stimulation occurred during the performance of an irrelevant (auditory or written) task. The additional exposures were only beneficial when they shared the same frequency with, though they did not need to be identical to, those used during target-task performance. Their effectiveness also was diminished when they were presented 15 min after practice on the target task and was eliminated when that separation was increased to 4 h. These data show that exposure to an acoustic stimulus can facilitate learning when encountered outside of the time of practice on a perceptual task. By properly using additional stimulation one may markedly improve the efficiency of perceptual training regimens.


Attention, Perception, & Psychophysics | 2004

The time course of attention in a simple auditory detection task

Beverly A. Wright; Matthew B. Fitzgerald

What is the time course of human attention in a simple auditory detection task? To investigate this question, we determined the detectability of a 20-msec, 1000-Hz tone presented at expected and unexpected times. Twelve listeners who expected the tone to occur at a specific time after a 300-msec narrowband noise rarely detected signals presented 150–375 msec before or 100–200 msec after that expected time. The shape of this temporal-attention window depended on the expected presentation time of the tone and the temporal markers available in the trials. Further, though expecting the signal to occur in silence, listeners often detected signals presented at unexpected times during the noise. Combined with previous data, these results further clarify the listening strategy humans use when trying to detect an expected sound: Humans seem to listen specifically for that sound, while ignoring the background in which it is presented, around the time when the sound is expected to occur.


Acta Oto-laryngologica | 2007

The effect of perimodiolar placement on speech perception and frequency discrimination by cochlear implant users

Matthew B. Fitzgerald; William H. Shapiro; Paulette D. McDonald; Heidi S. Neuburger; Sara Ashburn-Reed; Sara Immerman; Daniel Jethanamest; J. Thomas Roland; Mario A. Svirsky

Conclusion. Neither speech understanding nor frequency discrimination ability was better in Nucleus Contour™ users than in Nucleus 24 straight electrode users. Furthermore, perimodiolar electrode placement does not result in better frequency discrimination. Objectives. We addressed three questions related to perimodiolar electrode placement. First, do patients implanted with the Contour™ electrode understand speech better than with an otherwise identical device that has a straight electrode? Second, do these groups have different frequency discrimination abilities? Third, is the distance of the electrode from the modiolus related to frequency discrimination ability? Subjects and methods. Contour™ and straight electrode users were matched on four important variables. We then tested these listeners on CNC word and HINT sentence identification tasks, and on a formant frequency discrimination task. We also examined X-rays and measured the distance of the electrodes from the modiolus to determine whether there is a relationship between this factor and frequency discrimination ability. Results. Both speech understanding and frequency discrimination abilities were similar for listeners implanted with the Contour™ vs a straight electrode. Furthermore, there was no linear relationship between electrode–modiolus distance and frequency discrimination ability. However, we did note a second-order relationship between these variables, suggesting that frequency discrimination is worse when the electrodes are either too close or too far away from the modiolus.


Journal of the Acoustical Society of America | 2011

Perceptual learning and generalization resulting from training on an auditory amplitude-modulation detection task

Matthew B. Fitzgerald; Beverly A. Wright

Fluctuations in sound amplitude provide important cues to the identity of many sounds including speech. Of interest here was whether the ability to detect these fluctuations can be improved with practice, and if so whether this learning generalizes to untrained cases. To address these issues, normal-hearing adults (n = 9) were trained to detect sinusoidal amplitude modulation (SAM; 80-Hz rate, 3-4 kHz bandpass carrier) 720 trials/day for 6-7 days and were tested before and after training on related SAM-detection and SAM-rate-discrimination conditions. Controls (n = 9) only participated in the pre- and post-tests. The trained listeners improved more than the controls on the trained condition between the pre- and post-tests, but different subgroups of trained listeners required different amounts of practice to reach asymptotic performance, ranging from 1 (n = 6) to 4-6 (n = 3) sessions. This training-induced learning did not generalize to detection with two untrained carrier spectra (5 kHz low-pass and 0.5-1.5 kHz bandpass) or to rate discrimination with the trained rate and carrier spectrum, but there was some indication that it generalized to detection with two untrained rates (30 and 150 Hz). Thus, practice improved the ability to detect amplitude modulation, but the generalization of this learning to untrained cases was somewhat limited.


Acta Oto-laryngologica | 2015

Bilateral cochlear implants with large asymmetries in electrode insertion depth: implications for the study of auditory plasticity

Mario A. Svirsky; Matthew B. Fitzgerald; Elad Sagi; E. Katelyn Glassman

Abstract Conclusion: The human frequency-to-place map may be modified by experience, even in adult listeners. However, such plasticity has limitations. Knowledge of the extent and the limitations of human auditory plasticity can help optimize parameter settings in users of auditory prostheses. Objectives: To what extent can adults adapt to sharply different frequency-to-place maps across ears? This question was investigated in two bilateral cochlear implant users who had a full electrode insertion in one ear, a much shallower insertion in the other ear, and standard frequency-to-electrode maps in both ears. Methods: Three methods were used to assess adaptation to the frequency-to-electrode maps in each ear: (1) pitch matching of electrodes in opposite ears, (2) listener-driven selection of the most intelligible frequency-to-electrode map, and (3) speech perception tests. Based on these measurements, one subject was fitted with an alternative frequency-to-electrode map, which sought to compensate for her incomplete adaptation to the standard frequency-to-electrode map. Results: Both listeners showed remarkable ability to adapt, but such adaptation remained incomplete for the ear with the shallower electrode insertion, even after extended experience. The alternative frequency-to-electrode map that was tested resulted in substantial increases in speech perception for one subject in the short insertion ear.


Cochlear Implants International | 2013

Factors influencing consistent device use in pediatric recipients of bilateral cochlear implants

Matthew B. Fitzgerald; Janet Green; Yixin Fang; Susan B. Waltzman

Abstract Objectives To determine which demographic or performance variables are associated with inconsistent use of a second implant in pediatric recipients of sequential bilateral cochlear implants (CIs). Methods A retrospective chart review was conducted on pediatric recipients of sequential bilateral CIs. Children were divided into two age groups, 5–9 and 10–17 years of age. For each group, we examined whether inconsistent use of the second implant (CI-2) was associated with a variety of demographic variables or with speech-perception scores. Results In children aged 5–9 years, inconsistent use of CI-2 was not significantly associated with any demographic variable, but was related to both the word-recognition score with CI-2, and the difference in word-recognition scores between the first implant (CI-1) and CI-2. In children aged 10–17 years, these relationships were not significant due to the smaller number of subjects. Finally, CI-2 word-recognition scores across all children were significantly correlated with the age of implantation for both CI-1 and CI-2, and the time between CI-1 and CI-2 surgeries. Discussion Speech-recognition scores obtained with CI-2, and the extent to which they differ from those with CI-1, are most closely related to inconsistent use of CI-2 in pediatric sequential implantees. These results are consistent with similar data previously reported by other investigators. While children implanted with CI-2 at a later age generally perform more poorly, most children still use both implants, and benefit from CI-2 even when receiving the implant as an adolescent. Conclusion In pediatric recipients of sequential bilateral CIs, inconsistent use of CI-2 is related to the speech-recognition scores with CI-2, and the difference in speech-recognition scores between CI-1 and CI-2. In addition, speech-recognition scores with CI-2 are related to the amount of time between CI-1 and CI-2 surgeries, and the age of implantation for both CI-1 and CI-2.


Ear and Hearing | 2013

Feasibility of Real-Time Selection of Frequency Tables in an Acoustic Simulation of a Cochlear Implant

Matthew B. Fitzgerald; Elad Sagi; Tasnim A. Morbiwala; Chin-Tuan Tan; Mario A. Svirsky

Objectives: Perception of spectrally degraded speech is particularly difficult when the signal is also distorted along the frequency axis. This might be particularly important for post-lingually deafened recipients of cochlear implants (CIs), who must adapt to a signal where there may be a mismatch between the frequencies of an input signal and the characteristic frequencies of the neurons stimulated by the CI. However, there is a lack of tools that can be used to identify whether an individual has adapted fully to a mismatch in the frequency-to-place relationship and if so, to find a frequency table that ameliorates any negative effects of an unadapted mismatch. The goal of the proposed investigation is to test the feasibility of whether real-time selection of frequency tables can be used to identify cases in which listeners have not fully adapted to a frequency mismatch. The assumption underlying this approach is that listeners who have not adapted to a frequency mismatch will select a frequency table that minimizes any such mismatches, even at the expense of reducing the information provided by this frequency table. Design: Thirty-four normal-hearing adults listened to a noise-vocoded acoustic simulation of a CI and adjusted the frequency table in real time until they obtained a frequency table that sounded “most intelligible” to them. The use of an acoustic simulation was essential to this study because it allowed the authors to explicitly control the degree of frequency mismatch present in the simulation. None of the listeners had any previous experience with vocoded speech, in order to test the hypothesis that the real-time selection procedure could be used to identify cases in which a listener has not adapted to a frequency mismatch. 
After obtaining a self-selected table, the authors measured consonant-nucleus-consonant (CNC) word-recognition scores with that self-selected table and two other frequency tables: a “frequency-matched” table that matched the analysis filters with the noisebands of the noise-vocoder simulation, and a “right-information” table that is similar to that used in most CI speech processors, but in this simulation results in a frequency shift equivalent to 6.5 mm of cochlear space. Results: Listeners tended to select a table that was very close to, but shifted slightly lower in frequency than, the frequency-matched table. The real-time selection process took on average 2 to 3 min for each trial, and the between-trial variability was comparable with that previously observed with closely related procedures. The word-recognition scores with the self-selected table were clearly higher than with the right-information table and slightly higher than with the frequency-matched table. Conclusions: Real-time self-selection of frequency tables may be a viable tool for identifying listeners who have not adapted to a mismatch in the frequency-to-place relationship, and to find a frequency table that is more appropriate for them. Moreover, the small but significant improvements in word-recognition ability observed with the self-selected table suggest that these listeners based their selections on intelligibility rather than some other factor. The within-subject variability in the real-time selection procedure was comparable with that of a genetic algorithm, and the speed of the real-time procedure appeared to be faster than either a genetic algorithm or a simplex procedure.
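The acoustic simulation described above is a noise vocoder whose frequency table maps analysis bands onto synthesis (carrier) bands. The following sketch is a deliberately crude stand-in (FFT brick-wall filters and a rectify-and-smooth envelope, all assumptions rather than the study's implementation); passing different analysis and synthesis edge lists produces the kind of frequency mismatch the study manipulated:

```python
import numpy as np

def noise_vocode(signal, fs, analysis_edges, synthesis_edges, seed=0):
    """Crude noise vocoder: the envelope of each analysis band
    modulates noise filtered into the corresponding synthesis band.
    Unequal edge lists simulate a frequency-to-place mismatch.
    Brick-wall FFT filtering and a ~16 ms moving-average smoother
    stand in for a realistic filterbank and envelope extractor.
    """
    n = signal.size
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    spec = np.fft.rfft(signal)
    rng = np.random.default_rng(seed)
    noise_spec = np.fft.rfft(rng.standard_normal(n))
    kernel = np.ones(max(1, int(0.016 * fs)))
    kernel /= kernel.size
    out = np.zeros(n)
    for (alo, ahi), (slo, shi) in zip(
        zip(analysis_edges[:-1], analysis_edges[1:]),
        zip(synthesis_edges[:-1], synthesis_edges[1:]),
    ):
        # envelope of the analysis band (rectify, then smooth)
        band = np.fft.irfft(np.where((freqs >= alo) & (freqs < ahi), spec, 0), n)
        env = np.convolve(np.abs(band), kernel, mode="same")
        # noise carrier restricted to the synthesis band
        carrier = np.fft.irfft(
            np.where((freqs >= slo) & (freqs < shi), noise_spec, 0), n
        )
        out += env * carrier
    return out
```

Identical edge lists correspond to the study's frequency-matched table; shifting the analysis edges downward relative to the synthesis edges mimics the upward spectral shift that an unadapted listener would try to undo when self-selecting a table.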
