
Publication


Featured research published by Sheila E. Blumstein.


Journal of the Acoustical Society of America | 1978

Invariant cues for place of articulation in stop consonants

Kenneth N. Stevens; Sheila E. Blumstein

In a series of experiments, identification responses for place of articulation were obtained for synthetic stop consonants in consonant-vowel syllables with different vowels. The acoustic attributes of the consonants were systematically manipulated, the selection of stimulus characteristics being guided in part by theoretical considerations concerning the expected properties of the sound generated in the vocal tract as place of articulation is varied. Several stimulus series were generated with and without noise bursts at the onset, and with and without formant transitions following consonantal release. Stimuli with transitions only, and with bursts plus transitions, were consistently classified according to place of articulation, whereas stimuli with bursts only and no transitions were not consistently identified. The acoustic attributes of the stimuli were examined to determine whether invariant properties characterized each place of articulation independent of vowel context. It was determined that the gross shape of the spectrum sampled at the consonantal release showed a distinctive shape for each place of articulation: a prominent midfrequency spectral peak for velars, a diffuse-rising spectrum for alveolars, and a diffuse-falling spectrum for labials. These attributes are evident for stimuli containing transitions only, but are enhanced by the presence of noise bursts at the onset.


Journal of the Acoustical Society of America | 1979

Acoustic invariance in speech production: Evidence from measurements of the spectral characteristics of stop consonants

Sheila E. Blumstein; Kenneth N. Stevens

On the basis of theoretical considerations and the results of experiments with synthetic consonant-vowel syllables, it has been hypothesized that the short-time spectrum sampled at the onset of a stop consonant should exhibit gross properties that uniquely specify the consonantal place of articulation independent of the following vowel. The aim of this paper is to test this hypothesis by measuring the spectrum sampled at the onsets and offsets of a large number of consonant-vowel (CV) and vowel-consonant (VC) syllables containing both voiced and voiceless stops produced by several speakers. Templates were devised in an attempt to capture three classes of spectral shapes: diffuse-rising, diffuse-falling, and compact, corresponding to alveolar, labial, and velar consonants, respectively. Spectra were derived from the utterances by sampling at the consonantal release of CV syllables and at the implosion and burst release of VC syllables, and these spectra (smoothed by a linear prediction algorithm) were matched against the templates. It was found that about 85% of the spectra at initial consonant release and at final burst release were correctly classified by the templates, although there was some variability across vowel contexts. The spectra sampled at the implosion were not consistently classified. A preliminary examination of spectra sampled at the release of nasal consonants in CV syllables showed a somewhat lower accuracy of classification by the same templates. Overall, the results support the hypothesis that, in natural speech, the acoustic characteristics of stop consonants, specified in terms of the gross spectral shape sampled at the discontinuity in the acoustic signal, show invariant properties independent of the adjacent vowel or of the voicing characteristics of the consonant. The implication is that the auditory system is endowed with detectors that are sensitive to these kinds of gross spectral shapes, and that the existence of these detectors helps the infant to organize the sounds of speech into their natural classes.
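The template-matching idea described in this abstract can be sketched as a toy classifier over a smoothed onset spectrum. The function name, thresholds, and decision rules below are illustrative inventions, not the paper's actual templates; they only mimic the three gross shapes (diffuse-rising, diffuse-falling, compact):

```python
import numpy as np

def classify_onset_spectrum(freqs_hz, spectrum_db):
    """Toy classifier for the gross spectral shapes described above.

    alveolar -> diffuse-rising (energy tilts upward with frequency)
    labial   -> diffuse-falling (energy tilts downward with frequency)
    velar    -> compact (one prominent mid-frequency peak)

    Thresholds are illustrative, not the paper's templates.
    """
    # Overall spectral tilt: slope of a least-squares line, in dB per kHz.
    slope = np.polyfit(freqs_hz / 1000.0, spectrum_db, 1)[0]

    # "Compactness": how far the strongest peak stands above the mean level.
    peak_prominence = spectrum_db.max() - spectrum_db.mean()

    if peak_prominence > 20.0:           # one dominant mid-frequency peak
        return "velar (compact)"
    elif slope > 0:
        return "alveolar (diffuse-rising)"
    else:
        return "labial (diffuse-falling)"

# Synthetic example: a spectrum rising steadily from 200 Hz to 5 kHz.
freqs = np.linspace(200, 5000, 50)
rising = np.linspace(-30.0, 0.0, 50)           # dB values rising with frequency
print(classify_onset_spectrum(freqs, rising))  # alveolar (diffuse-rising)
```

In the study itself the spectra came from real utterances smoothed by linear prediction; here the synthetic dB vectors merely stand in for those smoothed spectra.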


Journal of Cognitive Neuroscience | 2000

The Role of Segmentation in Phonological Processing: An fMRI Investigation

Martha W. Burton; Steven L. Small; Sheila E. Blumstein

Phonological processes map sound information onto higher levels of language processing and provide the mechanisms by which verbal information can be temporarily stored in working memory. Despite a strong convergence of data suggesting both left lateralization and distributed encoding in the anterior and posterior perisylvian language areas, the nature and brain encoding of phonological subprocesses remain ambiguous. The present study used functional magnetic resonance imaging (fMRI) to investigate the conditions under which anterior (lateral frontal) areas are activated during speech-discrimination tasks that differ in segmental processing demands. In two experiments, subjects performed same/different judgments on the first sound of pairs of words. In the first experiment, the speech stimuli did not require overt segmentation of the initial consonant from the rest of the word, since the different pairs only varied in the phonetic voicing of the initial consonant (e.g., dip-tip). In the second experiment, the speech stimuli required segmentation since different pairs both varied in initial consonant voicing and contained different vowels and final consonants (e.g., dip-ten). These speech conditions were compared to a tone-discrimination control condition. Behavioral data showed that subjects were highly accurate in both experiments, but revealed different patterns of reaction-time latencies between the two experiments. The imaging data indicated that whereas both speech conditions showed superior temporal activation when compared to tone discrimination, only the second experiment showed consistent evidence of frontal activity. Taken together, the results of Experiments 1 and 2 suggest that phonological processing per se does not necessarily recruit frontal areas. We postulate that frontal activation is a product of segmentation processes in speech perception, or alternatively, working memory demands required for such processing.


Brain and Language | 1981

Lexical decision and aphasia: evidence for semantic processing.

William P. Milberg; Sheila E. Blumstein

Wernicke's and Broca's-Conduction aphasics and a Global aphasic were presented with a lexical-decision task in which English words and pronounceable nonwords were preceded by semantically related, unrelated, or nonword primes. The patients were also given a simple semantic-judgment task using the word pairs from the lexical-decision task. Wernicke's aphasics performed similarly to normals and Broca's-Conduction aphasics, showing significantly shorter latencies in making real-word identifications when preceded by a semantically related word. In addition, both superordinate and coordinate associates showed semantic-priming effects. Performance on the semantic-judgment task showed significantly more impairment in the aphasic group than in the normal controls. These results suggest that aphasics with even severe language impairments retain stored semantic information that may be automatically activated, yet is inaccessible to conscious semantic decision during metalinguistic tasks.


Cortex | 1974

Hemispheric Processing of Intonation Contours

Sheila E. Blumstein; William E. Cooper

Two dichotic experiments were conducted to investigate the lateralization of intonation contours. In the first experiment, intonation contours that had been filtered from real speech exemplars of four English sentence types yielded a significant left ear advantage when subjects were given a perceptual matching task. This left ear advantage was maintained when subjects had to identify the same stimuli by their English sentence types. In the second experiment, non-filtered versions of four intonation contours superimposed on a nonsense syllable medium, as well as their filtered equivalents, were presented to subjects, again in a matching task. For both sets of stimuli, a left ear advantage was obtained. Thus, neither the requirements of a linguistic response nor the presence of a phonetic medium succeeded in altering the left ear advantages obtained in the perceptual matching tests. Results from the two experiments suggest that the right hemisphere is directly involved in the perception of intonation contours, and that normal language perception involves the active participation of both cerebral hemispheres.


Journal of Cognitive Neuroscience | 2003

An Event-Related fMRI Investigation of Implicit Semantic Priming

Jesse Rissman; James C. Eliassen; Sheila E. Blumstein

The neural basis underlying implicit semantic priming was investigated using event-related fMRI. Prime-target pairs were presented auditorily for lexical decision (LD) on the target stimulus, which was either semantically related or unrelated to the prime, or was a nonword. A tone task was also administered as a control. Behaviorally, all participants demonstrated semantic priming in the LD task. fMRI results showed that for all three conditions of the LD task, activation was seen in the superior temporal gyrus (STG), the middle temporal gyrus (MTG), and the inferior parietal lobe, with greater activation in the unrelated and nonword conditions than in the related condition. Direct comparisons of the related and unrelated conditions revealed foci in the left STG, left precentral gyrus, left and right MTGs, and right caudate, exhibiting significantly lower activation levels in the related condition. The reduced activity in the temporal lobe suggests that the perception of the prime word activates a lexical semantic network that shares common elements with the target word, and, thus, the target can be recognized with enhanced neural efficiency. The frontal lobe reductions most likely reflect the increased efficiency in monitoring the activation of lexical representations in the temporal lobe, making a decision, and planning the appropriate motor response.


Journal of the Acoustical Society of America | 1976

Perceptual invariance and onset spectra for stop consonants in different vowel environments.

Sheila E. Blumstein; Kenneth N. Stevens

A series of listening tests with brief synthetic consonant-vowel syllables was carried out to determine whether the initial part of a syllable can provide cues to place of articulation for voiced stop consonants independent of the remainder of the syllable. The data show that stimuli as short as 10-20 ms sampled from the onset of a consonant-vowel syllable can be reliably identified for consonantal place of articulation, whether the second and higher formants contain moving or straight transitions and whether or not an initial burst is present. In most instances, these brief stimuli also contain sufficient information for vowel identification. Stimulus continua in which formant transitions ranged from values appropriate to [b], [d], [g] in various vowel environments, and in which stimulus durations were 20 and 46 ms, yielded categorical labeling functions with a few exceptions. These results are consistent with a theory of speech perception in which consonant place of articulation is cued by invariant properties derived from the spectrum sampled in a 10-20 ms time window adjacent to consonantal onset or offset.


Brain and Language | 1982

Semantic processing in aphasia: Evidence from an auditory lexical decision task

Sheila E. Blumstein; William P. Milberg; Robin Shrier

W. Milberg and S. E. Blumstein (1981, Brain and Language, 14, 371-385) demonstrated semantic facilitation effects in a visual lexical decision task administered to Wernicke's and other aphasics with severe comprehension deficits. In an attempt to explore the generalizability of these findings in a task where the acoustic-phonetic system could not be bypassed to access meaning, Wernicke's, Global, Broca's, and Conduction aphasics were administered a lexical decision task in the auditory modality. The patients were also given a simple semantic judgment task using the word pairs from the lexical decision task. The aphasic patients showed evidence of semantic facilitation whether they were categorized by diagnostic group or comprehension level. Performance on the semantic judgment task correlated with the severity of auditory comprehension deficits, whereas the consistency of the semantic facilitation effect did not. Even patients with severe comprehension deficits showed semantic facilitation. These results decrease the likelihood that auditory comprehension deficits are due to a disruption of semantic organization per se and increase the likelihood that the deficits lie in one of the many processes involved in access to that information.


Brain and Language | 1980

Production deficits in aphasia: A voice-onset time analysis

Sheila E. Blumstein; William E. Cooper; Harold Goodglass; Sheila Statlender; Jonathan Gottlieb

An experimental study was conducted to examine phonetic and phonemic deficits in the speech production of aphasics. Subjects included four Broca's aphasics, four Conduction aphasics, five Wernicke's aphasics, one nonaphasic dysarthric patient, and four normal controls. The subjects read a list of words containing word-initial stop consonants, which were subsequently measured acoustically for voice-onset time. The results showed that Broca's aphasics exhibit a more severe production disorder than Conduction aphasics, who in turn exhibit a more severe disorder than Wernicke's aphasics, in accord with clinical observations. In addition, although Broca's aphasics produced both phonetic and phonemic errors, the results showed that they have a pervasive phonetic disorder which affects their correct target productions as well as the total number of phonetic errors produced. This deficit, however, seems to be a speech deficit rather than a low-level motor control problem. In contrast, the Wernicke's aphasics show a deficit characterized by isolated phonemic mistargeting errors. Finally, the pattern of productions for the Conduction aphasics indicates that some patients show a predominantly phonetic disorder similar to the Broca's aphasics, and others show a predominantly phonemic disorder similar to the Wernicke's aphasics.
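Voice-onset time itself is a simple interval measurement: the time from the stop's burst release to the onset of vocal-fold vibration. A minimal sketch, with hypothetical helper names and a commonly cited (not study-specific) ~25 ms English voiced/voiceless boundary:

```python
def voice_onset_time(burst_release_ms, voicing_onset_ms):
    """VOT: interval from burst release to voicing onset, in ms.
    Negative values mean voicing leads the release (prevoicing)."""
    return voicing_onset_ms - burst_release_ms

def classify_vot(vot_ms, boundary_ms=25.0):
    """Toy voiced/voiceless split. The ~25 ms short-lag/long-lag
    boundary for English stops is a textbook value, not one taken
    from this study's measurements."""
    return "voiceless" if vot_ms >= boundary_ms else "voiced"

# Hypothetical token: burst at 512 ms, voicing begins at 582 ms.
vot = voice_onset_time(512, 582)
print(vot, classify_vot(vot))  # 70 voiceless
```

In practice the burst and voicing onsets would be located on a waveform or spectrogram; here they are given as numbers to keep the arithmetic explicit.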


Brain and Language | 1975

The reliability of ear advantage in dichotic listening.

Sheila E. Blumstein; Harold Goodglass; Vivien C. Tartter

Test-retest reliability of dichotic listening performance on consonants, vowels, and music was investigated in a population of 42 right-handed subjects. Pearson product-moment correlations between (R - L)/(R + L) ear scores on first and second tests were .74 for consonants, .21 for vowels, and .46 for music. Twenty-nine percent of subjects reversed ear advantage for consonants on retesting, 19% reversed for music, and 46% for vowels. Each type of stimulus reveals a significant subgroup (15% for consonants) who retain a deviant ear advantage on retest. In any sample, subjects whose ear advantage scores are on the deviant side are more likely to reverse ear advantages on retest than subjects who score in the modal direction. These findings are interpreted and discussed in relation to the validity of dichotic listening as an index of cerebral functional asymmetry.
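The (R - L)/(R + L) ear-advantage score used in this abstract is straightforward to compute. A minimal sketch with hypothetical per-ear correct-report counts:

```python
def laterality_index(right_correct, left_correct):
    """Ear-advantage score (R - L) / (R + L): positive values indicate
    a right-ear (left-hemisphere) advantage, negative a left-ear one.
    The counts below are invented for illustration."""
    return (right_correct - left_correct) / (right_correct + left_correct)

# Hypothetical subject: 45 correct right-ear reports vs. 30 left-ear.
print(laterality_index(45, 30))  # 0.2
```

Because the score is normalized by total correct reports, subjects with different overall accuracy can be compared on the same -1 to +1 scale.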

Collaboration


Dive into Sheila E. Blumstein's collaborations.

Top Co-Authors

Kenneth N. Stevens
Massachusetts Institute of Technology

Emily B. Myers
University of Connecticut

William E. Cooper
Massachusetts Institute of Technology