
Publications


Featured research published by Peter Ladefoged.


Journal of the Acoustical Society of America | 1957

Information Conveyed by Vowels

Peter Ladefoged; Donald E. Broadbent

Most speech sounds may be said to convey three kinds of information: linguistic information which enables the listener to identify the words that are being used; socio‐linguistic information, which enables him to appreciate something about the background of the speaker; and personal information which helps to identify the speaker. An experiment has been carried out which shows that the linguistic information conveyed by a vowel sound does not depend on the absolute values of its formant frequencies, but on the relationship between the formant frequencies for that vowel and the formant frequencies of other vowels pronounced by that speaker. Six versions of the sentence "Please say what this word is" were synthesized on a Parametric Artificial Talking device. Four test words of the form b‐(vowel)‐t were also synthesized. It is shown that the identification of the test word depends on the formant structure of the introductory sentence. Some psychological implications of this experiment are discussed, and hypot...
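
The finding that listeners interpret a vowel's formants relative to the rest of the speaker's formant range, rather than as absolute values, can be illustrated with a simple speaker-relative normalization. The sketch below is only an illustration of that idea, not the paper's procedure; the normalization scheme and all numbers in it are assumptions chosen for clarity.

    # Minimal sketch: interpret a formant frequency relative to the speaker's own
    # range (speaker-relative normalization) rather than as an absolute value.
    # The ranges and the 500 Hz test value are invented for illustration.

    def normalize(formant_hz, speaker_min_hz, speaker_max_hz):
        """Map an absolute formant frequency onto the speaker's 0-1 range."""
        return (formant_hz - speaker_min_hz) / (speaker_max_hz - speaker_min_hz)

    # Two hypothetical speakers with different absolute F1 ranges.
    speaker_a_f1_range = (250.0, 750.0)
    speaker_b_f1_range = (350.0, 1050.0)

    # The same absolute F1 of 500 Hz sits at different relative positions in the
    # two ranges, so a listener calibrated to each speaker may report a different vowel.
    print(normalize(500.0, *speaker_a_f1_range))  # 0.5: mid of speaker A's range
    print(normalize(500.0, *speaker_b_f1_range))  # ~0.21: low in speaker B's range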


Journal of Phonetics | 2001

Phonation types: a cross-linguistic overview

Matthew Gordon; Peter Ladefoged

Differences in phonation type signal important linguistic information in many languages, including contrasts between otherwise identical lexical items and boundaries of prosodic constituents. Phonation differences can be classified along a continuum ranging from voiceless, through breathy voiced, to regular, modal voicing, and then on through creaky voice to glottal closure. Cross-linguistic investigation suggests that this phonation continuum can be defined in terms of a recurring set of articulatory, acoustic, and timing properties. Nevertheless, there exist several languages whose phonation contrasts do not neatly fall within the phonation categories defined by other languages.
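
For readers skimming the list, the continuum described in this abstract can be written out as an ordered sequence; the snippet below simply restates the abstract's categories and carries no analysis of its own.

    # The phonation continuum as described in the abstract, ordered from an
    # open glottis to a fully closed one. Purely a restatement, not a proposal.
    PHONATION_CONTINUUM = [
        "voiceless",
        "breathy voice",
        "modal voice",
        "creaky voice",
        "glottal closure",
    ]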


Journal of the Acoustical Society of America | 1963

Loudness, Sound Pressure, and Subglottal Pressure in Speech

Peter Ladefoged; Norris P. McKinney

Three experiments using one speaker and 30 listeners provided correlated data on subglottal pressure, average volume velocity through the glottis, fundamental frequency, effective sound pressure in front of the speaker, and judgments of loudness. Experiment 1: In 12 repetitions each of bee, bay, bar, bore, and boo spoken at various loudnesses, it was found that in the middle range (subglottal pressure) ∝ (effective sound pressure)^0.6; and the level of the vowel in bar was about 5 dB greater than in bee or boo produced with the same subglottal pressure. Experiment 2: In 32 repetitions of the vowel /ɑ/ uttered at various loudnesses with the pitch uncontrolled, it was found that an increase of subglottal pressure of about 6.5 cm aq accompanied an increase in frequency of half an octave; and to a first approximation (rate of flow) ∝ (subglottal pressure). It follows that (rate of work done on the air) ∝ (subglottal pressure)^2 ∝ (effective sound pressure)^1.2. The exponent 1.2 compares well with the exponent 1....
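
The chain of proportionalities reported above can be spelled out. Writing P_s for subglottal pressure, U for the rate of flow through the glottis, p_e for effective sound pressure, and W for the rate of work done on the air (the symbols are mine; the relations are the abstract's):

    \begin{align*}
      P_s &\propto p_e^{\,0.6} && \text{(Experiment 1, middle range)}\\
      U   &\propto P_s         && \text{(Experiment 2, first approximation)}\\
      W   &\propto U\,P_s \propto P_s^{2} \propto \bigl(p_e^{\,0.6}\bigr)^{2} = p_e^{\,1.2}
    \end{align*}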


Journal of the Acoustical Society of America | 1993

Individual differences in vowel production

Keith Johnson; Peter Ladefoged; Mona Lindau

It is often assumed that a relatively small set of articulatory features are universally used in language sound systems. This paper presents a study which tests this assumption. The data are x-ray microbeam pellet trajectories during the production of the vowels of American English by five speakers. Speakers were consistent with themselves from one production of a word to the next, but the articulatory patterns manifested by this homogeneous group were speaker specific. Striking individual differences were found in speaking rate, the production of the tense/lax distinction of English, and in global patterns of articulation. In terms of a task-dynamic model of speech production, these differences suggested that the speakers used different gestural target and stiffness values, and employed different patterns of interarticulator coordination to produce the vowels of American English. The data thus suggest that, at some level of speech motor control, speech production tasks are specified in terms of acoustic output rather than spatiotemporal targets or gestures.


Journal of the Acoustical Society of America | 1957

On the Fusion of Sounds Reaching Different Sense Organs

Donald E. Broadbent; Peter Ladefoged

A simple place theory of hearing raises the problem that several mixed harmonics may be attributed by the listener to their appropriate fundamental frequencies: the recognition of a vowel sound in the presence of other sounds requires that the formants of the vowel be detected as such and not classified with the other sounds. Thus, the neural message from a particular part of the basilar membrane probably conveys in some way information on the fundamental frequency, to a harmonic of which that part of the membrane is responding. The problem of fusion of sounds on the two ears is merely an extension of the problem of fusion of different frequencies in one ear. It is shown that synthetically produced speech will fuse when the first formant is presented to one ear and the second to the other, but it will not do so if the formants are given different fundamental frequencies. Even when both formants are presented to the same ear, they fail to fuse if they are given different fundamental frequencies. A further experiment with sustained formants shows...


Language | 1980

What are linguistic sounds made of?

Peter Ladefoged

Linguistic phonetic aspects of languages can be described in terms of about 17 articulatory parameters, and/or a similar number of acoustic parameters. Descriptions of phonological patterns in languages involve features that are not in a one-to-one relationship with these phonetic parameters, and that cannot account for some linguistic phonetic differences among languages. Speakers and listeners producing and interpreting linguistic events probably use something like the proposed phonetic parameters. There is no necessity for most phonological features to be part of mental representations.


Journal of the Acoustical Society of America | 1978

Generating vocal tract shapes from formant frequencies

Peter Ladefoged; Richard A. Harshman; Louis Goldstein; Lloyd Rice

An algorithm that uses only the first three formant frequencies has been devised for generating vocal tract shapes as seen on midsagittal x-ray diagrams of most English vowels. The shape of the tongue is characterized in terms of the sum of two factors derived from PARAFAC analysis: a front raising component and a back raising component. Stepwise multiple regression techniques were used to show that the proportions of these two components, and of a third parameter corresponding to the distance between the lips, are highly correlated with the formant frequencies in 50 vowels. The recovery algorithm developed from these correlations was tested on a number of published sets of tracings from x-ray diagrams, and appears to be generalizable to other speakers.
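
A hedged sketch of the kind of mapping this abstract describes: a linear regression from the first three formant frequencies to three articulatory parameters (front raising, back raising, and lip aperture). All data and coefficients below are placeholders; the paper's PARAFAC-derived factors and its actual regression weights are not reproduced here.

    import numpy as np

    # Hypothetical training data: one row per vowel, columns F1, F2, F3 in Hz.
    # In the paper the correlations were established over a set of 50 vowels.
    formants = np.array([
        [300.0, 2300.0, 3000.0],
        [600.0, 1900.0, 2800.0],
        [750.0, 1100.0, 2500.0],
        [450.0,  900.0, 2400.0],
        [350.0,  800.0, 2300.0],
    ])

    # Hypothetical articulatory parameters for the same vowels:
    # columns are front-raising weight, back-raising weight, lip aperture.
    articulation = np.array([
        [0.9, 0.1, 0.8],
        [0.6, 0.2, 0.7],
        [0.2, 0.3, 0.9],
        [0.1, 0.8, 0.3],
        [0.1, 0.9, 0.2],
    ])

    # Fit a linear map (with intercept) from formants to articulatory parameters,
    # in the spirit of the stepwise regressions mentioned in the abstract.
    X = np.hstack([formants, np.ones((len(formants), 1))])
    coeffs, *_ = np.linalg.lstsq(X, articulation, rcond=None)

    # "Recover" an articulatory configuration from a new set of formants.
    new_vowel = np.array([500.0, 1500.0, 2600.0, 1.0])
    print(new_vowel @ coeffs)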


Journal of the International Phonetic Association | 1998

Phonetic Structures of Scottish Gaelic

Peter Ladefoged; Jenny Ladefoged; Alice Turk; Kevin Hind; St. John Skilton

Scottish Gaelic is an endangered language with very few fluent speakers under 60. Recordings were collected in the neighbourhood of Greater Bernera, Lewis, from 11 native speakers. Aerodynamic and palatographic data were collected from one 70-year-old male speaker. Palatographic data made in 1955 by Frederick Macaulay, a Gaelic speaker from South Uist, provided additional information. Analysis showed that all the stops were voiceless unaspirated or aspirated, with the aspirated stops being preaspirated intervocalically. Spectra of various consonants were also determined. Vowel analyses showed the nature of the 7 long and short vowels. Special attention was paid to the back unrounded vowels. Problems of syllabicity were examined and shown to affect pitch contours.


Phonetica | 1992

Stops in the world's languages

Caroline Henton; Peter Ladefoged; Ian Maddieson

This account of the great variety of stops in the world's languages shows that, apart from their place of articulation, these sounds can be described principally in terms of the activities that occur at three phases: onset, closure, and release. Other potentially contrastive features discussed include length and the use of the glottalic airstream mechanism (other airstream mechanisms are not considered here). Phonologically, only two phases (closure and release) are exploited; independent distinctions of features such as phonation type or articulatory manner cannot be found in the onset phase. We examine the combinatorial possibilities of the features that are used and discuss implications for phonological feature systems.
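
As a reading aid, the descriptive framework in this abstract can be set down as a small data structure: a stop is characterized by its place of articulation plus what happens at the onset, closure, and release phases, with only the last two exploited phonologically. The field names and the example are mine, chosen purely for illustration.

    from dataclasses import dataclass

    @dataclass
    class Stop:
        """Illustrative record of a stop, following the abstract's three phases."""
        place: str    # e.g. "bilabial", "alveolar", "velar"
        onset: str    # per the abstract, not used for independent contrasts
        closure: str  # e.g. "voiced", "voiceless", "long"
        release: str  # e.g. "plain", "aspirated", "ejective"

    aspirated_t = Stop(place="alveolar", onset="plain",
                       closure="voiceless", release="aspirated")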


Language and Speech | 1963

Some Physiological Parameters in Speech

Peter Ladefoged

A series of experiments has shown that: (1) Variations in articulation do not affect the subglottal pressure as long as the vocal cords are vibrating; but in voiceless sounds in which there is a high rate of flow the subglottal pressure is lower than in corresponding phrases with voiced sounds. (2) There is a clear physiological correlate of stress such that nouns of the form 'insult' always have an extra increase in subglottal pressure which occurs earlier than in the corresponding verb forms, irrespective of the intonation pattern. (3) If the set of the vocal cords is not adjusted, an increase of subglottal pressure of about 7 cm. aq. will produce an increase in the frequency of vibration of about half an octave. (4) The variations in the laryngeal factors controlling the frequency of vibration of the vocal cords in utterances are much simpler than the variations in the fundamental frequency; at times the ‘tension of the vocal cords’ may be increasing, although the fundamental frequency is decreasing.
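
For finding (3), "half an octave" corresponds to a frequency ratio of the square root of two; the worked line below makes the ratio explicit, with the 100 Hz starting frequency being an arbitrary illustration rather than a value from the paper.

    \[
      \frac{f_2}{f_1} = 2^{1/2} \approx 1.41,
      \qquad
      f_1 = 100\ \text{Hz} \;\Rightarrow\; f_2 \approx 141\ \text{Hz}
      \quad \text{for } \Delta P_{\text{sub}} \approx 7\ \text{cm H}_2\text{O}.
    \]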

Collaboration


Dive into Peter Ladefoged's collaborations.

Top Co-Authors

Ian Maddieson, University of California

Mona Lindau, University of California

Richard A. Harshman, University of Western Ontario

Louis Goldstein, University of Southern California

Kenneth N. Stevens, Massachusetts Institute of Technology