
Publications


Featured research published by Michael Studdert-Kennedy.


Brain and Language | 1975

A continuum of lateralization for speech perception

Donald Shankweiler; Michael Studdert-Kennedy

A group of 22 unselected adults and a group of 30 right-handed male adults were tested on a series of handedness measures and on a dichotic CV-syllable test. Multiple regression methods were used to determine a correlation coefficient between handedness measures and dichotic ear advantages of .69 (p < .05) for the first group and of .54 (p < .01) for the second group. Implications of these findings for the concept of cerebral dominance are discussed.
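The multiple-regression method described in this abstract can be sketched in a few lines. The sketch below uses invented, randomly generated scores (not the study's data) purely to illustrate how a multiple correlation coefficient between several handedness measures and a single dichotic ear-advantage score is obtained: regress the ear advantage on the handedness measures, then correlate observed with predicted scores.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic scores for illustration only (NOT the study's data):
# three handedness measures per subject, plus a dichotic right-ear
# advantage score loosely related to them.
n_subjects = 30
handedness = rng.normal(size=(n_subjects, 3))
ear_advantage = handedness @ np.array([0.6, 0.3, 0.1]) \
    + rng.normal(scale=0.8, size=n_subjects)

# Multiple regression: least-squares fit of ear advantage on the
# handedness measures (with an intercept column prepended).
X = np.column_stack([np.ones(n_subjects), handedness])
beta, *_ = np.linalg.lstsq(X, ear_advantage, rcond=None)
predicted = X @ beta

# The multiple correlation coefficient R is the Pearson correlation
# between observed and fitted ear-advantage scores.
R = np.corrcoef(ear_advantage, predicted)[0, 1]
print(f"multiple R = {R:.2f}")
```

With real data, R would be tested against a null distribution to obtain the significance levels the abstract reports.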


Language and Speech | 1973

Auditory and linguistic processes in the perception of intonation contours.

Michael Studdert-Kennedy; Kerstin Hadding

The fundamental frequency contour of a 700-msec. vocoded utterance, November [novεmb ], was systematically varied to produce 72 contours, different in f0 at the stress and over the terminal glide. The contours were recorded (1) carried on the speech wave, (2) as modulated sine waves. Swedish and American subjects classified (1) both speech and sine-wave contours as either terminally rising or terminally falling (psychophysical judgments), (2) speech contours as questions or statements (linguistic judgments). For both groups, two factors acted in complementary relation to govern linguistic judgments: perceived terminal glide and f0 at the stress. Listeners tended to classify contours with an apparent terminal rise and/or high stress as questions, contours with an apparent terminal fall and/or low stress as statements. For both speech and sine waves psychophysical judgments of terminal glide were influenced by earlier sections of the contour, but the effects were reduced for sine-wave contours, and there were several instances in which speech psychophysical judgments followed the linguistic more closely than the sine wave judgments. It is suggested that these instances may reflect the control exerted by linguistic decision over perceived auditory shape.


Brain and Language | 1985

Extending formant transitions may not improve aphasics' perception of stop consonant place of articulation

Karen Riedel; Michael Studdert-Kennedy

Synthetic speech stimuli were used to investigate whether aphasics' ability to perceive stop consonant place of articulation was enhanced by the extension of initial formant transitions in CV syllables. Phoneme identification and discrimination tests were administered to 12 aphasic patients, 5 fluent and 7 nonfluent. There were no significant differences in performance due to the extended transitions, and no systematic pattern of performance due to aphasia type. In both groups, discrimination was generally high and significantly better than identification, demonstrating that auditory capacity was retained, while phonetic perception was impaired; this result is consistent with repeated demonstrations that auditory and phonetic processes may be dissociated in normal listeners. Moreover, significant rank order correlations between performances on the Token Test and on both perceptual tasks suggest that impairment on these tests may reflect a general cognitive rather than a language-specific deficit.


Cognition | 1980

Language by hand and by eye: A Review of Edward S. Klima and Ursula Bellugi's The Signs of Language

Michael Studdert-Kennedy

Language is form, not substance. Yet every semiotic system is surely constrained by its mode of expression. Communication by odor, for example, is limited by the relatively slow rates at which volatile chemicals disperse and smell receptors adapt. By the same token, we might suppose that the nature of sound, temporally distributed and rapidly fading, has shaped the structure of language. But it is not obvious how. What properties of language reflect its expressive mode? What properties reflect general cognitive constraints necessary to any imaginable expression of human language? How far are those constraints themselves a function of the mode in which language has evolved? Until recently, such questions would hardly have been addressed, because we had no unequivocal example of language in another mode, and because there are grounds for believing that language and speech form a tight anatomical and physiological nexus. Specialized structures and functions have evolved to meet the needs of spoken communication: vocal tract morphology, lip, jaw and tongue innervation, mechanisms of breath control, and perhaps even matching perceptual mechanisms (Lenneberg, 1967; Lieberman, 1972; Du Brul, 1977). Moreover, language processes are controlled by the left cerebral hemisphere in over 95% of the population, and this lateralization is correlated with left-side enlargement of the posterior planum temporale (Geschwind and Levitsky, 1968), a portion of Wernicke’s area, adjacent to the primary auditory area of the cortex and known to be involved in language representation. Wernicke’s area is itself linked to Broca’s area, a portion of the frontal lobes, adjacent to the area of the motor cortex that controls muscles important for speech, including those of the pharynx, tongue, jaw, lips and face; damage to Broca’s area may cause loss of the ability to speak grammatically, or even to


Cognition | 1981

The emergence of phonetic structure

Michael Studdert-Kennedy

To explain the unique efficiency of speech as an acoustic carrier of linguistic information and to resolve the paradox that units corresponding to phonetic segments are not to be found in the signal, consonants and vowels were said to be encoded into syllabic units. This approach stimulated a decade of research into the nature of the speech code and of its presumably specialized perceptual decoding mechanisms, but began to lose force as its implicit circularity became apparent. An alternative resolution of the paradox proposes that the signal carries no message: it carries information concerning its source. The message, that is, the phonetic structure, emerges from the peculiar relation between the source and the listener, as a human and as a speaker of a particular language. This approach, like its predecessor and like much recent work in child phonology and phonetic theory, takes the study of speech to be a promising entry into the biology of language. The earliest claim for the special status of speech as an acoustic signal sprang from the difficulty of devising an effective alternative code to use in reading machines for the blind. Many years of sporadic, occasionally concentrated effort have still yielded no acoustic system by which blind (or sighted) users can follow a text much more quickly than the 35 words a minute of skilled Morse code operators. Given the very high rates at which we handle an optical transform of language, in reading and writing, this failure with acoustic codes is particularly striking. Evidently, the advantage of speech lies not in the modality itself, but in the particular way it exploits the modality. What acoustic properties set speech in this privileged relation to language? The concept of encodedness was an early attempt to answer this question (Liberman, Cooper, Shankweiler, & Studdert-Kennedy, 1967).
Liberman and his colleagues embraced the paradox that, although speech carries a linguistic message, units corresponding to those of the message are not to be found in the signal. They proposed that speech should be viewed not as a cipher on linguistic structure, offering the listener a signal isomorphic, unit for unit, with the message, but as a code. The code collapsed the phonemic segments (consonants and vowels) into acoustic syllables, so that cues to the


Archive | 1986

Development of the Speech Perceptuomotor System

Michael Studdert-Kennedy

The intent of the present paper is to reflect on the development of the speech perceptuomotor system in light of the infant's evident capacity for intermodal (or, better, amodal) perception, discussed by Meltzoff and by Kuhl at this meeting. The central issue is imitation. How does a child (or, for that matter, an adult) transform a pattern of light or sound into a pattern of muscular controls that serves to reproduce a structure functionally equivalent to the model? The hypothesis to be outlined is that imitation is a specialized mode of action, in which the structure of an amodal percept directly specifies the structure of the action to be performed (cf. Meltzoff and Moore, 1983).


Attention Perception & Psychophysics | 1980

Cross-series adaptation using song and string

Robert E. Remez; James E. Cutting; Michael Studdert-Kennedy

The acoustic-auditory feature "risetime" has been claimed to underlie both the phonetic affricate-fricative distinction and the nonphonetic plucked-string/bowed-string distinction. We used the perceptual adaptation technique to determine whether the risetime differences of the [d3a]-[3a] distinction would therefore be registered by the same mechanism that mediates risetime differences for the plucked-bowed distinction. Two continua were used, one of digitally modified natural speech and one of synthetic violin sounds, in which the risetime was varied across each set of tokens from 0 msec to 80 msec in steps of 10 msec. The speech was sung and the violin notes were synthesized with the same fundamental frequency, 294 Hz. Adaptation of the category boundaries was observed only when speech adaptors were tested with the speech continuum and when violin adaptors were tested with the violin continuum. When cross-series tests were performed (violin adaptors tested with the speech series, and speech adaptors tested with the violin series), no effect of adaptation was observed. This finding indicates that these speech and violin sounds, despite obvious acoustic similarities, do not share the same feature detectors.


Journal of the Acoustical Society of America | 1964

Reaction Time during the Discrimination of Synthetic Stop Consonants

Michael Studdert-Kennedy; A. M. Liberman; Kenneth N. Stevens

The purpose was to determine how quickly and accurately listeners could discriminate between pairs of synthetic stop consonants distributed along an acoustic continuum. The stimuli, synthesized on OVE II at the Royal Institute of Technology, Stockholm, were 13 consonant‐vowel syllables in which the second‐ and third‐formant transitions varied so as to produce /b,d,g/. Stimuli were presented in ABX format and listeners responded by pressing a button. Reaction time was measured from the onset of the third member of each stimulus triad to the onset of the response. The percentage of correct discriminations rose to a peak in the vicinity of the phoneme boundaries, while reaction times fell. These results contrast with those of a previous study on the identification of the same stimuli: in that study, consistency of identification fell in the vicinity of the phoneme boundaries, while reaction times rose. Some implications for an understanding of the processes of identification and discrimination are discussed.


Journal of the Acoustical Society of America | 1964

Crosslinguistic Study of Vowel Discrimination

Kenneth N. Stevens; S. E. G. Öhman; Michael Studdert-Kennedy; A. M. Liberman

The objective of this experiment is to compare the identification and discrimination of a series of rounded and unrounded vowels by Swedish and American English listeners. All stimuli were produced with the OVE II synthesizer at the Royal Institute of Technology in Stockholm. The unrounded vowels are in the phonemic system of both Swedish and American English, and encompass the three vowels /i/, /ɪ/, and /ɛ/. The rounded vowels encompass the Swedish vowels /i/, /y/, /u/, but are not in the phonemic system of American English. Each series consists of 13 vowels, for which the first three formant frequencies vary in approximately equal logarithmic steps through the indicated range. The discrimination tests showed that both series of vowels gave essentially identical discrimination data for both groups of listeners. Identification functions for the unrounded vowels were similar for the two groups of listeners, but American English listeners were less consistent than Swedish listeners in the identification of t...


Annals of the New York Academy of Sciences | 1983

Limits on alternative auditory representations of speech.

Michael Studdert-Kennedy

The history of attempts to construct reading machines for the blind may guide us in our attempts to devise speech-listening aids for the deaf. The goal of the early reading machine work was to construct an acoustic alphabet, that is, to find a set of discrete, discriminable acoustic patterns that might substitute for the visual alphabet. To devise such a set (of tones, chords, bursts of filtered noise, perhaps varying in amplitude or duration) was not difficult, and, if the patterns were presented one at a time in a comfortable test format, listeners readily learned to identify them. However, if patterns were presented in rapid sequence, listener performance dropped precipitously. In fact, no one has yet devised an acoustic alphabet more effective than the dots and dashes of Morse code, with which highly skilled operators may reach a rate of 30-40 words a minute, roughly one-fifth of the rate at which we typically follow spoken language. Not surprisingly, the search for an acoustic alphabet for reading machines has been largely abandoned in favor of synthetic (or compiled) speech. The failure to devise a viable acoustic alphabet seems all the more surprising when we consider the success of the visual alphabet. On the face of it, one might have expected that transposition into another sensory modality would have been more damaging (certainly more "unnatural") than remaining within the biologically given modality of sound. But, as we all know, that is far from the case. Even without benefit of special "speed-reading" instruction, reading rates of 300-400 words a minute are commonplace. What are we to make of this paradox? Why do speech and a visual alphabet succeed where a sound alphabet fails? The answer seems to lie, first, in certain properties of the visual and auditory systems, and second, in the different types of information that speech and alphabets convey.
Although the temporal distribution of light carries important information for the perception of an event as it develops in time, the spatial distribution of light at any instant is sufficient to specify a stationary object. In fact, the eye is particularly sensitive to spatial patterns of contour and it is this sensitivity that writing systems exploit: we read by a series of static fixations during which information from many points in the visual field is gathered simultaneously. By contrast, sound is never static; we perceive auditorily by virtue of the way in which events structure the spectrum over time. Recent studies of the response to speech of auditory nerve fibers in the cat [1-3] have demonstrated that the nerve

Collaboration


Top co-authors of Michael Studdert-Kennedy.

Peter F. MacNeilage | University of Texas at Austin

Kenneth N. Stevens | Massachusetts Institute of Technology

Barbara L. Davis | University of Texas at Austin

Keith R. Kluender | University of Wisconsin-Madison

Lori L. Holt | Carnegie Mellon University