Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Michael F. Dorman is active.

Publication


Featured research published by Michael F. Dorman.


Hearing Research | 2005

The influence of a sensitive period on central auditory development in children with unilateral and bilateral cochlear implants

Anu Sharma; Michael F. Dorman; Andrej Kral

We examined the longitudinal development of the cortical auditory evoked potential (CAEP) in 21 children who were fitted with unilateral cochlear implants and in two children who were fitted with bilateral cochlear implants either before age 3.5 years or after age 7 years. The age cut-offs (<3.5 years for early-implanted and >7 years for late-implanted) were based on the sensitive period for central auditory development described in [Ear Hear. 23 (6), 532]. Our results showed a fundamentally different pattern of development of CAEP morphology and P1 cortical response latency for early- and late-implanted children. Early-implanted children and one child who received bilateral implants by age 3.5 years showed rapid development in CAEP waveform morphology and P1 latency. Late-implanted children showed aberrant waveform morphology and significantly slower decreases in P1 latency postimplantation. In the case of a child who received his first implant by age 3.5 years and his second implant after age 7 years, CAEP responses elicited by the second implant were similar to those of late-implanted children. Our results are consistent with animal models of central auditory development after implantation and confirm the presence of a relatively brief sensitive period for central auditory development in young children.


Journal of the Acoustical Society of America | 1997

Speech intelligibility as a function of the number of channels of stimulation for signal processors using sine-wave and noise-band outputs

Michael F. Dorman; Philipos C. Loizou; Dawne Rainey

Vowels, consonants, and sentences were processed through software emulations of cochlear-implant signal processors with 2-9 output channels. The signals were then presented to normal-hearing listeners for identification, either as the sum of sine waves at the center frequencies of the channels or as the sum of noise bands spanning the widths of the channels. The results indicate, as previous investigations have suggested, that high levels of speech understanding can be obtained using signal processors with a small number of channels. The number of channels needed for high levels of performance varied with the nature of the test material. For the most difficult material--vowels produced by men, women, and girls--no statistically significant differences in performance were observed when the number of channels was increased beyond 8. For the least difficult material--sentences--no statistically significant differences in performance were observed when the number of channels was increased beyond 5. The nature of the output signal, noise bands or sine waves, made only a small difference in performance. The mechanism mediating the high levels of speech recognition achieved with only a few channels of stimulation may be the same one that mediates the recognition of signals produced by speakers with a high fundamental frequency, i.e., the levels of adjacent channels are used to determine the frequency of the input signal. The results of an experiment in which frequency information was altered but temporal information was not indicate that vowel recognition is based on information in the frequency domain even when the number of channels of stimulation is small.
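
As a rough illustration of the two output options described above, the sketch below generates a sine-wave carrier at a channel's center frequency and a noise-band carrier spanning the channel, then imposes a channel envelope on either one. It is a minimal sketch in Python assuming NumPy/SciPy; the filter order and function names are illustrative, not taken from the paper.

    import numpy as np
    from scipy.signal import butter, sosfilt

    def sine_carrier(center_hz, n_samples, fs):
        # Sine wave at the channel's center frequency (sine-wave output).
        t = np.arange(n_samples) / fs
        return np.sin(2 * np.pi * center_hz * t)

    def noise_carrier(lo_hz, hi_hz, n_samples, fs):
        # White noise bandpass-filtered to the channel's width (noise-band output).
        sos = butter(4, [lo_hz, hi_hz], btype="bandpass", fs=fs, output="sos")
        return sosfilt(sos, np.random.randn(n_samples))

    def vocode_channel(envelope, carrier):
        # Impose the extracted channel envelope on the chosen carrier;
        # summing such channels yields the processed signal.
        return envelope * carrier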


Journal of Rehabilitation Research and Development | 2008

Cochlear implants: current designs and future possibilities.

Blake S. Wilson; Michael F. Dorman

The cochlear implant is the most successful of all neural prostheses developed to date. It is the most effective prosthesis in terms of restoration of function, and the people who have received a cochlear implant outnumber the recipients of other types of neural prostheses by orders of magnitude. The primary purpose of this article is to provide an overview of contemporary cochlear implants from the perspective of two designers of implant systems. That perspective includes the anatomical situation presented by the deaf cochlea and how the different parts of an implant system (including the user's brain) must work together to produce the best results. In particular, we present the design considerations just mentioned and then describe in detail how the current levels of performance have been achieved. We also describe two recent advances in implant design and performance. In concluding sections, we first present strengths and limitations of present systems and then offer some possibilities for further improvements in this technology. In all, remarkable progress has been made in the development of cochlear implants, but much room still remains for improvements, especially for patients presently at the low end of the performance spectrum.


Journal of the Acoustical Society of America | 1998

The recognition of sentences in noise by normal-hearing listeners using simulations of cochlear-implant signal processors with 6–20 channels

Michael F. Dorman; Philipos C. Loizou; Jeanette Fitzke; Zhemin Tu

Sentences were processed through simulations of cochlear-implant signal processors with 6, 8, 12, 16, and 20 channels and were presented to normal-hearing listeners at +2 dB S/N and at -2 dB S/N. The signal-processing operations included bandpass filtering, rectification, and smoothing of the signal in each band, estimation of the rms energy of the signal in each band (computed every 4 ms), and generation of sinusoids with frequencies equal to the center frequencies of the bands and amplitudes equal to the rms levels in each band. The sinusoids were summed and presented to listeners for identification. At issue was the number of channels necessary to reach maximum performance on tests of sentence understanding. At +2 dB S/N, the performance maximum was reached with 12 channels of stimulation. At -2 dB S/N, the performance maximum was reached with 20 channels of stimulation. These results, in combination with the outcome that, in quiet, asymptotic performance is reached with five channels of stimulation, demonstrate that more channels are needed in noise than in quiet to reach a high level of sentence understanding and that, as the S/N becomes poorer, more channels are needed to achieve a given level of performance.
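
The processing chain in this abstract is explicit enough to sketch end to end. The Python below (assuming NumPy/SciPy) follows the described steps: bandpass filtering, rectification and smoothing, rms estimation every 4 ms, and synthesis of one sinusoid per band at the band's center frequency. Band edges, filter orders, and the smoothing cutoff are assumptions for illustration; the abstract does not specify them.

    import numpy as np
    from scipy.signal import butter, sosfilt

    def simulate_processor(x, fs, band_edges, frame_ms=4.0):
        # band_edges: list of (lo_hz, hi_hz) pairs, one per channel (assumed layout).
        frame = int(fs * frame_ms / 1000)        # rms window: 4 ms of samples
        n = len(x) - len(x) % frame              # trim to a whole number of frames
        t = np.arange(n) / fs
        out = np.zeros(n)
        smooth = butter(2, 400, btype="low", fs=fs, output="sos")  # assumed 400 Hz envelope smoother
        for lo, hi in band_edges:
            bp = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
            band = sosfilt(bp, x)[:n]            # bandpass filtering
            env = sosfilt(smooth, np.abs(band))  # rectification and smoothing
            rms = np.sqrt(np.mean(env.reshape(-1, frame) ** 2, axis=1))  # rms every 4 ms
            amp = np.repeat(rms, frame)          # hold each estimate for its frame
            fc = np.sqrt(lo * hi)                # geometric center frequency of the band
            out += amp * np.sin(2 * np.pi * fc * t)  # sinusoid with rms-matched amplitude
        return out                               # sum of the per-band sinusoids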


Neuroreport | 2002

Rapid development of cortical auditory evoked potentials after early cochlear implantation

Anu Sharma; Michael F. Dorman; Anthony J. Spahr

The aim of our research was to estimate the time course of development and plasticity of the human central auditory pathways following cochlear implantation. We recorded cortical auditory evoked potentials in 3-year-old congenitally deaf children after they were fitted with cochlear implants. Immediately after implantation, cortical response latencies resembled those of normal-hearing newborns. Over the next few months, the cortical evoked responses showed rapid changes in morphology and latency that resulted in age-appropriate latencies by 8 months after implantation. Overall, the development of cortical response latencies for the implanted children was more rapid than for their normal-hearing age-matched peers. Our results demonstrate a high degree of central auditory system plasticity during early human development.


Journal of Communication Disorders | 2009

Cortical development, plasticity and re-organization in children with cochlear implants

Anu Sharma; Amy Nash; Michael F. Dorman

A basic tenet of developmental neurobiology is that certain areas of the cortex will re-organize if appropriate stimulation is withheld for long periods. Stimulation must be delivered to a sensory system within a narrow window of time (a sensitive period) if that system is to develop normally. In this article, we will describe age cut-offs for a sensitive period for central auditory development in children who receive cochlear implants. We will review de-coupling and re-organization of cortical areas, which are presumed to underlie the end of the sensitive period in congenitally deaf humans and cats. Finally, we present two clinical cases which demonstrate the use of the P1 cortical auditory evoked potential as a biomarker for central auditory system development and re-organization in congenitally deaf children fitted with cochlear implants.

Learning outcomes: Readers of this article should be able to (i) describe the importance of the sensitive period as it relates to development of central auditory pathways in children with cochlear implants; (ii) discuss the hypothesis of de-coupling of primary from higher-order auditory cortex as it relates to the end of the sensitive period; (iii) discuss cross-modal re-organization which may occur after long periods of auditory deprivation; and (iv) understand the use of the P1 response as a biomarker for development of central auditory pathways.


Attention Perception & Psychophysics | 1977

Stop-consonant recognition: Release bursts and formant transitions as functionally equivalent, context-dependent cues

Michael F. Dorman; Michael Studdert-Kennedy; Lawrence J. Raphael

Three experiments assessed the roles of release bursts and formant transitions as acoustic cues to place of articulation in syllable-initial voiced stop consonants by systematically removing them from American English /b,d,g/, spoken before nine different vowels by two speakers, and by transposing the bursts across all vowels for each class of stop consonant. The results showed that bursts were largely invariant in their effect, but carried significant perceptual weight in only one syllable out of 27 for Speaker 1 and in only 13 syllables out of 27 for Speaker 2. Furthermore, bursts and transitions tended to be reciprocally related: where the perceptual weight of one increased, the weight of the other declined. They were thus shown to be functionally equivalent, context-dependent cues, each contributing to the rapid spectral changes that follow consonantal release. The results are interpreted as pointing to the possible role of the front-cavity resonance in signaling place of articulation.
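
The burst-transposition manipulation is, at bottom, waveform splicing. A minimal Python sketch of one such cross-splice is shown below; the 10 ms burst duration, the file names, and the use of the soundfile library are assumptions for illustration only.

    import numpy as np
    import soundfile as sf  # assumed audio I/O library

    def cross_splice(donor, host, fs, burst_ms=10.0):
        # Excise the release burst from the donor syllable and prepend it
        # to the burstless remainder of the host syllable.
        n_burst = int(fs * burst_ms / 1000)
        return np.concatenate([donor[:n_burst], host[n_burst:]])

    da, fs = sf.read("da.wav")          # hypothetical recordings
    gi, _ = sf.read("gi.wav")
    hybrid = cross_splice(da, gi, fs)   # /d/ burst before the /g/-vowel transitions
    sf.write("hybrid.wav", hybrid, fs)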


International Journal of Audiology | 2007

Deprivation-induced cortical reorganization in children with cochlear implants

Anu Sharma; Phillip M. Gilley; Michael F. Dorman; Robert Baldwin

A basic finding in developmental neurophysiology is that some cortical areas will reorganize following a period of stimulus deprivation. In this review, we discuss mainly electroencephalography (EEG) studies of normal and deprivation-induced abnormal development of the central auditory pathways in children and in animal models. We describe age cut-offs for sensitive periods for central auditory development in congenitally deaf children who are fitted with a cochlear implant. We speculate on mechanisms of decoupling and reorganization which may underlie the end of the sensitive period. Finally, we describe new magnetoencephalography (MEG) evidence of somatosensory cross-modal plasticity following long-term auditory deprivation.


Journal of the Acoustical Society of America | 2000

Neurophysiologic correlates of cross-language phonetic perception.

Anu Sharma; Michael F. Dorman

This study examined neurophysiologic correlates of the perception of native and nonnative phonetic categories. Behavioral and electrophysiologic responses were obtained from Hindi and English listeners in response to a stimulus continuum of naturally produced, bilabial CV stimuli that differed in VOT from -90 to 0 ms. These speech sounds constitute phonemically relevant categories in Hindi but not in English. As expected, the native Hindi listeners identified the stimuli as belonging to two distinct phonetic categories (/ba/ and /pa/) and were easily able to discriminate a stimulus pair across these categories. On the other hand, English listeners discriminated the same stimulus pair at a chance level. In the electrophysiologic experiment, N1 and MMN cortical evoked potentials (considered neurophysiologic indices of stimulus processing) were measured. The changes in N1 latency, which reflected the duration of pre-voicing across the stimulus continuum, were not significantly different for Hindi and English listeners. On the other hand, in response to the /ba/-/pa/ stimulus contrast, a robust MMN was seen only in Hindi listeners and not in English listeners. These results suggest that neurophysiologic levels of stimulus processing reflected by the MMN and N1 are differentially altered by linguistic experience.


Journal of the Acoustical Society of America | 1999

On the number of channels needed to understand speech

Philipos C. Loizou; Michael F. Dorman; Zhemin Tu

Recent studies have shown that high levels of speech understanding could be achieved when the speech spectrum was divided into four channels and then reconstructed as a sum of four noise bands or sine waves with frequencies equal to the center frequencies of the channels. In these studies speech understanding was assessed using sentences produced by a single male talker. The aim of experiment 1 was to assess the number of channels necessary for a high level of speech understanding when sentences were produced by multiple talkers. In experiment 1, sentences produced by 135 different talkers were processed through n channels (2 ≤ n ≤ 16), synthesized as a sum of n sine waves with frequencies equal to the center frequencies of the filters, and presented to normal-hearing listeners for identification. A minimum of five channels was needed to achieve a high level (90%) of speech understanding. Asymptotic performance was achieved with eight channels, at least for the speech material used in this study. The outcome of experiment 1 demonstrated that the number of channels needed to reach asymptotic performance varies as a function of the recognition task and/or the need for listeners to attend to fine phonetic detail. In experiment 2, sentences were processed through 6 and 16 channels and quantized into a small number of steps. The purpose of this experiment was to investigate whether listeners use across-channel differences in amplitude to code frequency information, particularly when speech is processed through a small number of channels. For sentences processed through six channels there was a significant reduction in speech understanding when the spectral amplitudes were quantized into a small number (<8) of steps. High levels (92%) of speech understanding were maintained for sentences processed through 16 channels and quantized into only 2 steps. The findings of experiment 2 suggest an inverse relationship between the importance of spectral amplitude resolution (number of steps) and spectral resolution (number of channels).
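
The amplitude quantization in experiment 2 can be sketched as snapping each channel's envelope to the nearest of a small set of output levels. The Python below assumes uniform spacing in dB and a 60 dB range; neither is specified in the abstract.

    import numpy as np

    def quantize_envelope(env, n_steps, range_db=60.0):
        # Snap a channel envelope (linear amplitude) to the nearest of
        # n_steps levels spaced uniformly in dB below its peak.
        peak = env.max()
        db = 20 * np.log10(np.maximum(env, 1e-12) / peak)
        db = np.clip(db, -range_db, 0.0)
        levels = np.linspace(-range_db, 0.0, n_steps)   # allowed output levels
        idx = np.argmin(np.abs(db[:, None] - levels[None, :]), axis=1)
        return peak * 10 ** (levels[idx] / 20)          # back to linear amplitude

With n_steps=2, each channel is effectively just on or off, which is the condition the abstract reports as still yielding 92% understanding with 16 channels.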

Collaboration


Dive into Michael F. Dorman's collaborations.

Top Co-Authors

Philipos C. Loizou

University of Texas at Dallas

Anu Sharma

University of Colorado Boulder

Sarah Cook

Arizona State University
