
Publication


Featured research published by Carl L. Thompson.


Journal of the Acoustical Society of America | 1973

Dichotic speech perception: An interpretation of right‐ear advantage and temporal offset effects

Charles I. Berlin; Sena S. Lowe‐Bell; John K. Cullen; Carl L. Thompson; Carl F. Loovis

In two experiments on normals we presented CV nonsense syllables both dichotically and monotically, with onsets of the syllables separated by 0, 15, 30, 60, and 90 msec (first experiment) and 0, 90, 180, 250, and 500 msec (second experiment). We found that when one of the CVs trailed the other by 30–60 msec, the trailing CV became more intelligible than when it was given simultaneously; the leading syllable's intelligibility dropped from its “simultaneous” level when leading by 15 and 30 msec. The leading message was more intelligible between 15 and 250 msec when the two channels were mixed monotically. In the dichotic simultaneous condition, voiceless consonants were more intelligible than voiced, especially in voiced‐voiceless pairs. When the voiced CV trailed the voiceless CV, the former became almost as intelligible as its voiceless counterpart. A left hemisphere “speech processor” was postulated, with suppression of information from ipsilateral sources during contralateral stimulation. The postulated...


Journal of the Acoustical Society of America | 1972

Is Speech “Special”? Perhaps the Temporal Lobectomy Patient Can Tell Us

Charles I. Berlin; Sena S. Lowe‐Bell; John K. Cullen; Carl L. Thompson; Marion R. Stafford

When dichotic nonsense syllables are presented to temporal lobectomy patients at equal intensities, the ear contralateral to the site of the lesion performs more poorly than the ipsilateral ear; if the ipsilateral ear is stimulated below threshold, the contralateral ear performs near 100%. However, if the intensity of speech in the ipsilateral ear is increased above SRT, the contralateral scores drop markedly as intelligibility increases in the ipsilateral ear. This “trade off” does not occur when noise is the competing stimulus. This phenomenon is interpreted as a sign of “speech identification” and is suggested as a potential technique for differentiating speech from nonspeech elements.


Journal of the Acoustical Society of America | 1974

The Effect of Varied Bandwidth, Signal‐To‐Noise Ratio, and Intensity on the Perception of Consonant‐Vowels in a Dichotic Context: Additivity of Central Processing

Carl L. Thompson; Diane Samson; John K. Cullen; Larry F. Hughes

Normal subjects were presented simultaneously aligned consonant‐vowel stimuli in three forced‐choice dichotic experiments in which the information content of one signal was changed by (1) varying the intensity, (2) low‐pass filtering, and (3) alterations in signal‐to‐noise ratios. The results of all three experiments were complementary. As signal characteristics were varied to produce a reduction of information to one ear, and thus a decrement in performance, an increment in performance of the unaltered ear was observed. The trade‐off ratios of decrement and increment were such that total performance (summed ear scores) remained virtually constant. Dichotic ear effects (e.g., right ear outperforming left ear) appear to be orthogonal to this phenomenon. These observations indicate that perception of dichotically presented consonant‐vowels depends upon the central additivity of information from two quasi‐independent channels, overall performance being limited by the capacity of the central processor. [Work s...


Journal of the Acoustical Society of America | 1977

“Fox‐box illusion”: Simultaneous presentation of conflicting auditory and visual CV's

A. Yonovitz; J. T. Lozar; Carl L. Thompson; Dianne R. Ferrell; Mark A. Ross

Eight CVs (with /a/) were audio/video color recorded. The consonants included p, b, k, g, s, z, f, and v. Using video‐editing techniques the audio from these consonants was dubbed such that all possible pairings of each visual consonant occurred with each audio consonant. Thus 64 stimuli were constructed that consisted of 56 conflicting visual and auditory stimuli and eight nonconflicting stimuli. Subjects were seated in front of a video monitor with earphones and asked to write their response to each stimulus item. The auditory‐alone mode confirmed unequivocally correct intelligibility of each consonant. However, when conflicting visual speech reading cues were present confusions were reliable and consistent. Confusions within the fricative class (f, v, s, and z) occurred as well as confusions within the stop class (p, b, k, and g). Interclass confusions were also prominent. Especially noteworthy was the perception of consonants not present acoustically (e.g., θ, ð, t, l). An extremely robust confusion oc...


Journal of the Acoustical Society of America | 1972

Interaural Intensity Differences in Dichotic Speech Perception

Carl L. Thompson; M. Stafford; John K. Cullen; Larry F. Hughes; Sena S. Lowe‐Bell; Charles I. Berlin

Three studies are reported in which the intensities of dichotic stimuli were varied. In Study 1, one signal remained at 80 dB SPL while the other was varied from 30 to 80 dB in 10‐dB steps. In Study 2, one ear was held at 50 dB while the other ear's signal was varied from 30 to 80 dB in 5‐dB steps. In Study 3, the signals were varied from 30 to 80 dB SPL in 10‐dB steps but with equal intensities to the two ears. Eleven subjects were used in each of the former studies and 12 in the latter. Study 1 shows asymmetry in favor of the right‐ear messages even when they are 10 dB less intense than the left. Data of Studies 2 and 3 also show asymmetries but in a more complex fashion.


Journal of the Acoustical Society of America | 1971

Size of the Dichotic Right‐Ear Effect as a Function of Alignment Criteria

J. E. Hannah; Carl L. Thompson; John K. Cullen; Larry F. Hughes; Charles I. Berlin

Synthetic CVs with V‐O‐Ts ranging from −30 to +90 msec were aligned for dichotic presentation according to onset, transition, V‐O‐T, “boundary,” or combinations thereof. Twenty‐four female right‐handed subjects, who listened to messages aligned as “simultaneous” according to those landmarks, generated the following superiority of right ear over left ear: (1) onset (16.3%); (2) onset +V‐O‐T (12.8%); (3) onset +transition (12.5%); (4) transition (10.7%); (5) “boundary” +V‐O‐T (10.6%); (6) all factors aligned (9.0%); (7) V‐O‐T (9.0%); (8) “boundary” (7.6%); (9) “boundary” +transition (5.0%). Over‐all intelligibility and laterality effects were not correlated. Unvoiced CVs dominated V‐U pairings unless “boundary” and/or V‐O‐Ts were aligned; then voiced consonants were more intelligible. “Lag effect,” where the trailing member of a dichotic pair is more intelligible, was redefined; lag effects were largest when “boundary” was taken as the reference for simultaneity, and one transition lagged behind ...


Journal of the Acoustical Society of America | 1970

Dichotic and Monotic Simultaneous and Time‐Staggered Speech

S. S. Lowe; John K. Cullen; Carl L. Thompson; Charles I. Berlin; L. Kirkpatrick; J. T. Ryan

Twelve female normal listeners, all right‐handed, heard rhyming pairs of (stop+/a/) nonsense syllables dichotically and monotically. Word onsets were simultaneous, then shifted by 15, 30, 60, and 90 msec. Each subject heard 600 pairs. Monotically, the lag‐syllable discriminations were poor at all delays with differences leveling off at 30 msec (93% lead vs 19% lag). Dichotically, lag‐ear discrimination was roughly 22% better for all lag times when the right ear lagged. [First found by Shankweiler and Studdert‐Kennedy, personal communication.] Left‐ear lag scores improved and overcame the right‐ear advantage only after 30‐msec delay. Total right‐ear scores for entire dichotic portion of experiment (M=77%) exceeded total left‐ear scores (M=66%) thus maintaining an over‐all right‐ear laterality effect. Thus, previous experimenters who allowed as much as 90‐msec delay to be randomly distributed among their so‐called “simultaneous pairs” might still have expected to find a right‐ear laterality effect. [Supported in part by NINDS.]


Journal of the Acoustical Society of America | 1970

Voiceless‐Versus‐Voiced Consonant‐Vowel Perception in Dichotic and Monotic Listening

Charles I. Berlin; M. E. Willet; Carl L. Thompson; John K. Cullen; S. S. Lowe

We had previously reported [J. Acoust. Soc. Amer. 45, 299 (A) (1969)] that in dichotic listening to consonant‐vowel (CV) utterances in natural speech, more voiceless consonants were correctly perceived than voiced consonants. In that experiment, the voiceless CVs had a slightly higher fundamental frequency than the voiced; therefore, synthetic CVs with uniform fundamental frequency and duration were used in the present experiment. Twenty normal right‐handed females listened to simultaneous (±2½ msec) (stop+/a/) synthetic nonsense syllables both monotically and dichotically. In addition to the expected right‐ear laterality effect in dichotic listening, we confirmed our previous finding: dichotically, voiceless consonants predominated (73% vs 48%). Monotically, voiced consonants were most often heard correctly (60% vs 47%). An explanation related to onset of change from aperiodic to periodic portions of voiceless vs voiced utterances is presented. [We are grateful to Arthur Abramson and Lee Lisker for the ...


Journal of the Acoustical Society of America | 1976

Masking level differences: Auditory evoked responses with homophasic and antiphasic signal and noise

Carl L. Thompson; A. Yonovitz; J. T. Lozar

Two studies were devised to determine whether objective quantification of the masking level difference is possible using the auditory evoked response (AER). In the first study, click stimuli were presented under three conditions: both the stimulus and masker in phase (SoNo); stimulus in phase, masker antiphasic (SoNπ); and stimulus antiphasic with masker in phase (SπNo). In the second study, 1000‐Hz pure‐tone stimuli were presented under SoNo and SπNo phasic conditions. AERs were obtained at various intensity levels for each condition. The AER demonstrated differences in N1–P2 amplitudes evoked by the homophasic and antiphasic conditions at threshold and suprathreshold levels.


Journal of the Acoustical Society of America | 1973

Phonetic Errors in Dichotic Listening to Simultaneous and Time‐Staggered CVs

Charles I. Berlin; Larry F. Hughes; Carl L. Thompson; John K. Cullen; Sena S. Lowe‐Bell

These data represent further analyses of experiments previously reported [C. I. Berlin, C. L. Loovis, S. S. Lowe, J. K. Cullen, Jr., and C. L. Thompson, J. Acoust. Soc. Amer. 48, 70–71 (1970)]. Analysis of the errors made by normal subjects reveals that at simultaneity unvoiced consonants are better perceived than voiced consonants, especially when the unvoiced compete with the voiced consonants. When two voiced consonants compete, the V‐V competition yields intelligibility about as good as the U‐U competition; however, when U and V compete, the intelligibility of the unvoiced syllable goes up while the intelligibility of the voiced syllable drops. As time asynchronies are introduced, the intelligibility of the unvoiced consonants remains constant while the intelligibility of the voiced consonant in the competing pair increases until, by 90‐msec temporal offset, intelligibility of both voiceless and voiced CVs is the same. A preliminary hypothesis on switching during analysis time will be presented.

Collaboration


Dive into Carl L. Thompson's collaborations.

Top Co-Authors

Charles I. Berlin, Louisiana State University
John K. Cullen, LSU Health Sciences Center New Orleans
Larry F. Hughes, Southern Illinois University School of Medicine
S. S. Lowe, Louisiana State University
A. Yonovitz, University of Texas Health Science Center at Houston
Harriet L. Berlin, Louisiana State University