Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Alvin M. Liberman is active.

Publication


Featured research published by Alvin M. Liberman.


Cognition | 1985

The motor theory of speech perception revised.

Alvin M. Liberman; Ignatius G. Mattingly

A motor theory of speech perception, initially proposed to account for results of early experiments with synthetic speech, is now extensively revised to accommodate recent findings, and to relate the assumptions of the theory to those that might be made about other perceptual modes. According to the revised theory, phonetic information is perceived in a biologically distinct system, a ‘module’ specialized to detect the intended gestures of the speaker that are the basis for phonetic categories. Built into the structure of this module is the unique but lawful relationship between the gestures and the acoustic patterns in which they are variously overlapped. In consequence, the module causes perception of phonetic structure without translation from preliminary auditory impressions. Thus, it is comparable to such other modules as the one that enables an animal to localize sound. Peculiar to the phonetic module are the relation between perception and production it incorporates and the fact that it must compete with other modules for the same stimulus variations.


Journal of the Acoustical Society of America | 1954

Acoustic Loci and Transitional Cues for Consonants

Pierre Delattre; Alvin M. Liberman; Franklin S. Cooper

Previous studies with synthetic speech have shown that second‐formant transitions are cues for the perception of the stop and nasal consonants. The results of those experiments can be simplified if it is assumed that each consonant has a characteristic and fixed frequency position, or locus, for the second formant, corresponding to the relatively fixed place of production of the consonant. On that basis, the transitions may be regarded as “movements” from the locus to the steady state of the vowel. The experiments reported in this paper provide additional evidence concerning the existence and positions of these second‐formant loci for the voiced stops, b, d, and g. There appears to be a locus for d at 1800 cps and for b at 720 cps. A locus for g can be demonstrated only when the adjoining vowel has its second formant above about 1200 cps; below that level no g locus was found. The results of these experiments indicate that, for the voiced stops, the transition cannot begin at the locus and go from there to ...


Attention Perception & Psychophysics | 1975

An effect of linguistic experience: The discrimination of [r] and [l] by native speakers of Japanese and English

Kuniko Miyawaki; James J. Jenkins; Winifred Strange; Alvin M. Liberman; Robert R. Verbrugge; Osamu Fujimura

To test the effect of linguistic experience on the perception of a cue that is known to be effective in distinguishing between [r] and [l] in English, 21 Japanese and 39 American adults were tested on discrimination of a set of synthetic speech-like stimuli. The 13 “speech” stimuli in this set varied in the initial stationary frequency of the third formant (F3) and its subsequent transition into the vowel over a range sufficient to produce the perception of [ra] and [la] for American subjects and to produce [ra] (which is not in phonemic contrast to [la]) for Japanese subjects. Discrimination tests of a comparable set of stimuli consisting of the isolated F3 components provided a “nonspeech” control. For Americans, the discrimination of the speech stimuli was nearly categorical, i.e., comparison pairs which were identified as different phonemes were discriminated with high accuracy, while pairs which were identified as the same phoneme were discriminated relatively poorly. In comparison, discrimination of speech stimuli by Japanese subjects was only slightly better than chance for all comparison pairs. Performance on nonspeech stimuli, however, was virtually identical for Japanese and American subjects; both groups showed highly accurate discrimination of all comparison pairs. These results suggest that the effect of linguistic experience is specific to perception in the “speech mode.”


Trends in Cognitive Sciences | 2000

On the relation of speech to language

Alvin M. Liberman; Doug H. Whalen

There are two widely divergent theories about the relation of speech to language. The more conventional view holds that the elements of speech are sounds that rely for their production and perception on two wholly separate processes, neither of which is distinctly linguistic. Accordingly, the primary motor and perceptual representations are inappropriate for linguistic purposes until a cognitive process of some sort has connected them to language and to each other. The less conventional theory takes the speech elements to be articulatory gestures that are the primary objects of both production and perception. Those gestures form a natural class that serves a linguistic function and no other. Therefore, their representations are immediately linguistic, requiring no cognitive intervention to make them appropriate for use by the other components of the language system. The unconventional view provides the more plausible answers to three important questions: (1) How was the necessary parity between speaker and listener established in evolution, and how maintained? (2) How does speech meet the special requirements that underlie its ability, unique among natural communication systems, to encode an indefinitely large number of meanings? (3) What biological properties of speech make it easier than the reading and writing of its alphabetic transcription?


Journal of the Acoustical Society of America | 1952

Some Experiments on the Perception of Synthetic Speech Sounds

Franklin S. Cooper; Pierre Delattre; Alvin M. Liberman; John M. Borst; Louis J. Gerstman

Synthetic methods applied to isolated syllables have permitted a systematic exploration of the acoustic cues to the perception of some of the consonant sounds. Methods, results, and working hypotheses are discussed.


Psychological Science | 2000

The Angular Gyrus in Developmental Dyslexia: Task-Specific Differences in Functional Connectivity Within Posterior Cortex

Kenneth R. Pugh; W. Einar Mencl; Bennett A. Shaywitz; Sally E. Shaywitz; Robert K. Fulbright; R. Todd Constable; Pawel Skudlarski; Karen E. Marchione; Annette R. Jenner; Jack M. Fletcher; Alvin M. Liberman; Donald Shankweiler; Leonard Katz; Cheryl Lacadie; John C. Gore

Converging evidence from neuroimaging studies of developmental dyslexia reveals dysfunction at posterior brain regions centered in and around the angular gyrus in the left hemisphere. We examined functional connectivity (covariance) between the angular gyrus and related occipital and temporal lobe sites, across a series of print tasks that systematically varied demands on phonological assembly. Results indicate that for dyslexic readers a disruption in functional connectivity in the language-dominant left hemisphere is confined to those tasks that make explicit demands on assembly. In contrast, on print tasks that do not require phonological assembly, functional connectivity is strong for both dyslexic and nonimpaired readers. The findings support the view that neurobiological anomalies in developmental dyslexia are largely confined to the phonological-processing domain. In addition, the findings suggest that right-hemisphere posterior regions serve a compensatory role in mediating phonological performance in dyslexic readers.


Journal of the Acoustical Society of America | 1957

Some Results of Research on Speech Perception

Alvin M. Liberman

Recent experiments with synthetic speech have succeeded in isolating some of the acoustic cues which underlie the perception of speech. This paper describes, and attempts to interpret, some of the research in that area.


Attention Perception & Psychophysics | 1979

Some effects of later-occurring information on the perception of stop consonant and semivowel

Joanne L. Miller; Alvin M. Liberman

In three experiments, we determined how perception of the syllable-initial distinction between the stop consonant [b] and the semivowel [w], when cued by duration of formant transitions, is affected by parts of the sound pattern that occur later in time. For the first experiment, we constructed four series of syllables, similar in that each had initial formant transitions ranging from one short enough for [ba] to one long enough for [wa], but different in overall syllable duration. The consequence in perception was that, as syllable duration increased, the [b-w] boundary moved toward transitions of longer duration. Then, in the second experiment, we increased the duration of the sound by adding a second syllable, [da], (thus creating [bada-wada]), and observed that lengthening the second syllable also shifted the perceived [b-w] boundary in the first syllable toward transitions of longer duration; however, this effect was small by comparison with that produced when the first syllable was lengthened equivalently. In the third experiment, we found that altering the structure of the syllable had an effect that is not to be accounted for by the concomitant change in syllable duration: lengthening the syllable by adding syllable-final transitions appropriate for the stop consonant [d] (thus creating [bad-wad]) caused the perceived [b-w] boundary to shift toward transitions of shorter duration, an effect precisely opposite to that produced when the syllable was lengthened to the same extent by adding steady-state vowel. We suggest that, in all these cases, the later-occurring information specifies rate of articulation and that the effect on the earlier-occurring cue reflects an appropriate perceptual normalization.


Annals of Dyslexia | 1990

Whole Language vs. Code Emphasis: Underlying assumptions and their implications for reading instruction

I. Y. Liberman; Alvin M. Liberman

Promoters of Whole Language hew to the belief that learning to read and write can be as natural and effortless as learning to perceive and produce speech. From this it follows that there is no special key to reading and writing, no explicit principle to be taught that, once learned, makes the written language transparent to a child who can speak. Lacking such a principle, Whole Language falls back on a method that encourages children to get from print just enough information to provide a basis for guessing at the gist. A very different method, called Code Emphasis, presupposes that learning the spoken language is, indeed, perfectly natural and seemingly effortless, but only because speech is managed, as reading and writing are not, by a biological specialization that automatically spells or parses all the words the child commands. Hence, a child normally learns to use words without ever becoming explicitly aware that each one is formed by the consonants and vowels that an alphabet represents. Yet it is exactly this awareness that must be taught if the child is to grasp the alphabetic principle and so understand how the artifacts of an alphabet transcribe the natural units of language. There is evidence that preliterate children do not, in fact, have much of this awareness; that the amount they do have predicts their reading achievement; that the awareness can be taught; and that the relative difficulty of learning it that some children have may be a reflection of a weakness in the phonological component of their natural capacity for language.


Cognitive Psychology | 1970

The grammars of speech and language

Alvin M. Liberman

The conversion between phonetic message and acoustic signal, i.e., speech, is a grammatical code, similar in interesting ways to syntax and phonology. Being more accessible to experiment, speech should, therefore, be an inviting object of study for those interested in the psychology of grammar. Experiments on speech have already provided some information about the psychological processes associated with the use of grammatical codes.

Collaboration


Dive into Alvin M. Liberman's collaborations.

Top Co-Authors

D. H. Whalen
City University of New York

Terry Halwes
University of Connecticut