Publication


Featured research published by Franklin S. Cooper.


Journal of the Acoustical Society of America | 1954

Acoustic Loci and Transitional Cues for Consonants

Pierre Delattre; Alvin M. Liberman; Franklin S. Cooper

Previous studies with synthetic speech have shown that second‐formant transitions are cues for the perception of the stop and nasal consonants. The results of those experiments can be simplified if it is assumed that each consonant has a characteristic and fixed frequency position, or locus, for the second formant, corresponding to the relatively fixed place of production of the consonant. On that basis, the transitions may be regarded as “movements” from the locus to the steady state of the vowel. The experiments reported in this paper provide additional evidence concerning the existence and positions of these second‐formant loci for the voiced stops, b, d, and g. There appears to be a locus for d at 1800 cps and for b at 720 cps. A locus for g can be demonstrated only when the adjoining vowel has its second formant above about 1200 cps; below that level no g locus was found. The results of these experiments indicate that, for the voiced stops, the transition cannot begin at the locus and go from there to ...
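
The locus idea lends itself to a small numerical illustration. The sketch below (Python) traces a second‐formant frequency track moving from a consonant locus toward the vowel's steady‐state F2; the locus values for d and b come from the abstract, while the vowel target, frame count, and linear shape are assumptions made only for illustration.

# Minimal sketch: a second-formant (F2) track running from a consonant
# locus toward the vowel's steady-state F2. Locus values for /d/ (1800 cps)
# and /b/ (720 cps) are from the abstract; everything else is illustrative.

import numpy as np

LOCI_HZ = {"d": 1800.0, "b": 720.0}  # second-formant loci reported in the paper

def f2_transition(consonant: str, vowel_f2_hz: float, frames: int = 50) -> np.ndarray:
    """Frequency track (Hz) moving linearly from the consonant locus
    toward the vowel's steady-state second formant."""
    locus = LOCI_HZ[consonant]
    t = np.linspace(0.0, 1.0, frames)
    return locus + (vowel_f2_hz - locus) * t

# Example: /d/ followed by a vowel with F2 near 1200 Hz (illustrative value).
track = f2_transition("d", vowel_f2_hz=1200.0)
print(track[0], track[-1])  # starts at the locus, ends at the vowel F2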


Journal of the Acoustical Society of America | 1952

Some Experiments on the Perception of Synthetic Speech Sounds

Franklin S. Cooper; Pierre Delattre; Alvin M. Liberman; John M. Borst; Louis J. Gerstman

Synthetic methods applied to isolated syllables have permitted a systematic exploration of the acoustic cues to the perception of some of the consonant sounds. Methods, results, and working hypotheses are discussed.


Language and Speech | 1958

Some Cues for the Distinction Between Voiced and Voiceless Stops in Initial Position

Alvin M. Liberman; Pierre Delattre; Franklin S. Cooper

Experiments with synthetic speech produced by the Pattern Playback indicated that the voiced stops in initial position could be made to sound like their voiceless counterparts by cutting back the beginning of the first-formant transition. Normally, the cutback of the first formant raises its starting frequency and also delays the time at which it begins relative to the other two formants. Both these changes appeared, in general, to be necessary. With certain combinations of stop and back vowel, however, a delay in the onset of the first formant was by itself sufficient to produce a strong voiceless effect. Substituting noise for harmonics in the transitions of the second and third formants (for the duration of the first-formant cutback) increased the impression of voicelessness over that obtained with cutback alone. In judging stimuli that lay near phoneme boundaries, many listeners demonstrated what appeared to be a very high degree of acuity. It is possible that this is the result of long experience in the use of the language and thus represents an effect of learning on perception.
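
The cutback manipulation can be made concrete with a small parameter sketch. The code below (Python) models only the two effects named in the abstract, a delayed F1 onset and a raised F1 starting frequency; all formant frequencies, the 50‐ms transition length, and the linear interpolation are illustrative assumptions, not the Pattern Playback settings.

# Minimal sketch of the "F1 cutback" manipulation described above.
# Formant start/target frequencies and timings are illustrative; only the
# two reported effects of cutback are modelled: (1) a later F1 onset
# relative to F2/F3 and (2) a higher F1 starting frequency.

from dataclasses import dataclass

@dataclass
class FormantTrack:
    onset_ms: float   # time at which the formant transition begins
    start_hz: float   # frequency at onset
    target_hz: float  # vowel steady-state frequency

def make_stop_stimulus(cutback_ms: float) -> dict:
    """Formant tracks for a synthetic stop + vowel. cutback_ms = 0 gives the
    fully voiced pattern; larger values remove the start of the F1 transition."""
    f1_full_start, f1_target = 240.0, 700.0   # illustrative values
    transition_ms = 50.0                      # assumed transition duration
    frac = min(cutback_ms / transition_ms, 1.0)
    # Cutting back the transition both delays F1 and raises its starting point.
    f1_start = f1_full_start + frac * (f1_target - f1_full_start)
    return {
        "F1": FormantTrack(onset_ms=cutback_ms, start_hz=f1_start, target_hz=f1_target),
        "F2": FormantTrack(onset_ms=0.0, start_hz=1800.0, target_hz=1200.0),
        "F3": FormantTrack(onset_ms=0.0, start_hz=2600.0, target_hz=2400.0),
    }

voiced = make_stop_stimulus(cutback_ms=0.0)      # tends to be heard as voiced
voiceless = make_stop_stimulus(cutback_ms=40.0)  # cutback pushes it toward voiceless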


Journal of the Acoustical Society of America | 1974

Effect of Speaking Rate on Labial Consonant‐Vowel Articulation

Tatsujiro Ushijima; Hajime Hirose; Franklin S. Cooper

This paper describes the results of a study of the effect of speaking rate on the articulation of the consonants /p,w/, in combination with the vowels /i,a,u/. Two subjects read a list of nonsense syllables containing /p,w/, in all possible VCV combinations with /i,a,u/ at both moderate and fast speaking rates. EMG recordings from muscles that control movements of the lips, tongue, and jaw were recorded simultaneously with high‐speed lateral‐view x‐ray films of the tongue and jaw, and high‐speed full‐face motion pictures of the lips. For labial consonant production, an increase in speaking rate is accompanied by an increase in the activity level of the muscle (orbicularis oris) and slightly faster rates of lip movement (both closing and opening). Vowel production, however, shows opposite effects: an increase in speaking rate is accompanied by a decrease in the activity level of the genioglossus muscle and, as shown by the x‐ray films, evidence of target undershoot. Jaw movement data show more variable, c...


Journal of the Acoustical Society of America | 1957

Effect of Third‐Formant Transitions on the Perception of the Voiced Stop Consonants

Katherine S. Harris; Howard S. Hoffman; Alvin M. Liberman; Pierre Delattre; Franklin S. Cooper

The pattern playback was used to generate synthetic syllables consisting of a wide range of second‐ and third‐formant transitions in initial position with the vowels i and ae. It was found that third‐formant transitions affected the perception of the stop consonants. (The effects of the second‐formant transitions were as previously reported.) When the third formants and their associated transitions were added at the frequency levels appropriate to each of the two vowels, positive transitions strengthened the perception of d at the expense of b or g, while negative transitions had the opposite effect. There was some evidence that variations of the steady‐state level of the third formant changed the effect of a given third‐formant transition. The results suggest that the effect of third‐formant transitions is largely independent of the second‐formant transitions with which they are combined. (This work was supported in part by the Carnegie Corporation of New York, and in part by the Department of Defense in...


Journal of the Acoustical Society of America | 1959

Minimal Rules for Synthesizing Speech

Alvin M. Liberman; Frances Ingemann; Leigh Lisker; Pierre Delattre; Franklin S. Cooper

In attempting to synthesize speech by rule, one must take account of the fact that the perceptually discrete phonemes are typically encoded at the acoustic level into segments of approximately syllabic length. It is, therefore, not possible to synthesize speech by stringing together prefabricated phonemes. By taking advantage of knowledge about the acoustic cues for speech perception, however, one can write rules for synthesis in terms of the constituent phonemes plus a few rules of combination. Thus, the number of rules can approximate the number of phonemes rather than the number of syllables. Indeed, one can reduce the number of rules still further by writing them in terms of subphonemic dimensions, viz., place and manner of articulation and voicing. Several complicating factors make it impossible to achieve an ideal minimum. First, rules must be added to take care of certain prosodic and positional variations. Failure to do so not only affects naturalness, but also impairs intelligibility, even at the...
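
The economy being argued for, rules on the order of the number of phonemes (or of place, manner, and voicing features) rather than of syllables, can be suggested with a toy table‐plus‐combination‐rule sketch; every numeric value and the combination rule itself are invented for illustration.

# Toy sketch of synthesis-by-rule organised the way the abstract argues:
# one entry per phoneme plus a small set of combination rules, instead of
# one stored pattern per syllable. All values here are invented.

PHONEME_RULES = {
    "b": {"f2_locus": 720.0,  "manner": "stop",  "voiced": True},
    "d": {"f2_locus": 1800.0, "manner": "stop",  "voiced": True},
    "a": {"f1": 700.0, "f2": 1200.0, "manner": "vowel", "voiced": True},
    "i": {"f1": 280.0, "f2": 2300.0, "manner": "vowel", "voiced": True},
}

def synthesize_cv(consonant: str, vowel: str) -> dict:
    """Combine a consonant rule with a vowel rule to get control
    parameters for one CV syllable (a stand-in for real synthesis)."""
    c, v = PHONEME_RULES[consonant], PHONEME_RULES[vowel]
    return {
        "f2_transition": (c["f2_locus"], v["f2"]),  # from locus toward vowel F2
        "f1_target": v["f1"],
        "voiced": c["voiced"],
    }

# The same handful of rules covers every CV combination of these phonemes:
for cons in ("b", "d"):
    for vow in ("a", "i"):
        print(cons + vow, synthesize_cv(cons, vow))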


Journal of the Acoustical Society of America | 1966

Transillumination of the Larynx in Running Speech

Leigh Lisker; Arthur S. Abramson; Franklin S. Cooper; Malcolm H. Schvey

A fundamental distinction between speech sounds depends on whether the excitation is a noise source or a quasiperiodic one. The noisy or voiceless sounds are presumably produced with an opened and quiescent larynx, while for voiced sounds the larynx is closed down and in rapid oscillation. Direct evidence of this has come from motion pictures recorded through the open mouth, a method obviously limited to particular sounds. Running speech requires different methods, and is being studied by a transillumination technique. A fiber‐optics bundle introduced into the laryngeal vestibule through the nose is used to illuminate the glottis, while a photocell placed below the thyroid cartilage registers the variable light transmitted through the glottis and the tissues of the neck. The “glottograms” so obtained are compared with acoustic waveforms simultaneously recorded with air and throat microphones to determine how the voiced‐voiceless distinction correlates with closed versus open states of the larynx.
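
A minimal sketch of the kind of comparison described, assuming an invented open‐glottis threshold and frame length (the paper reports the method, not a decision rule): frame the photocell trace and flag frames where the transmitted light is high, which can then be set against voiced and voiceless stretches of the simultaneously recorded audio.

# Minimal sketch: frame a transillumination ("glottogram") trace and mark
# frames where the mean photocell level suggests an open glottis. The
# threshold and frame length are assumptions, not values from the paper.

import numpy as np

def open_glottis_frames(glottogram: np.ndarray, sample_rate: int,
                        frame_ms: float = 10.0, open_threshold: float = 0.5) -> np.ndarray:
    """Boolean array, one value per frame: True where the mean photocell
    level exceeds the (assumed) open-glottis threshold."""
    frame_len = int(sample_rate * frame_ms / 1000.0)
    n_frames = len(glottogram) // frame_len
    frames = glottogram[: n_frames * frame_len].reshape(n_frames, frame_len)
    return frames.mean(axis=1) > open_threshold

# Illustrative use with synthetic data: 1 s of "closed" then 1 s of "open" glottis.
sr = 1000
trace = np.concatenate([np.full(sr, 0.1), np.full(sr, 0.9)])
flags = open_glottis_frames(trace, sr)
print(flags[:3], flags[-3:])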


Journal of the Acoustical Society of America | 1970

Speaker Identification by Speech Spectrograms: A Scientists' View of its Reliability for Legal Purposes

Richard H. Bolt; Franklin S. Cooper; Edward E. David; Peter B. Denes; James M. Pickett; Kenneth N. Stevens

Can you reliably identify a person by examining the spectrographic patterns of his speech sounds? This is a scientific problem of social consequence because of the interest of the courts in this question. The Technical Committee on Speech Communication of the Acoustical Society of America has asked some of its members to review the matter from a scientific point of view. The topics they considered included the nature of speech information as it relates to speaker identification, a comparison of voice patterns and fingerprint patterns, experimental evidence on voice identification, and requirements for validation of such identification methods. Findings and conclusions are reported; supporting information is given in appendixes.


Journal of the Acoustical Society of America | 1969

Computer‐Controlled PCM System for Investigation of Dichotic Speech Perception

Franklin S. Cooper; Ignatius G. Mattingly

To facilitate study of dichotic speech perception, a computer‐controlled PCM system has been devised for preparation of dichotic tests from natural speech. A test is a series of paired utterances, presented one at each ear, simultaneously or with interaural delay. In compiling a test, each of several tape‐recorded utterances is digitized at an 8‐kHz rate (10 bits/sample) and read into computer memory. The stored utterance is played back, and its waveform simultaneously displayed on a storage oscilloscope. The experimenter defines the exact beginning and end of the utterance and adjusts its intensity. The edited utterance is saved with others on a disk file. According to the experimenter's test order, the two utterances of each pair are retrieved, converted with a specified relative delay, and recorded each on a separate audio tape track. The system is a useful tool for investigation of lateralization and fusion in speech perception. [Support from the Office of Naval Research and the National Institute of H...
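
The pairing‐with‐delay step can be sketched in a few lines, assuming the edited utterances are available as sample arrays; the 8‐kHz rate comes from the abstract, while the zero‐padding scheme, float samples, and function name are assumptions.

# Minimal sketch of the test-assembly step: take two edited utterances,
# apply a specified interaural delay, and place one on each channel
# (each "tape track"). Sample rate matches the abstract; the rest is assumed.

import numpy as np

SAMPLE_RATE = 8000  # Hz, as in the abstract

def dichotic_pair(left: np.ndarray, right: np.ndarray, delay_ms: float = 0.0) -> np.ndarray:
    """Return a 2-column array (left, right). A positive delay_ms delays
    the right-ear utterance relative to the left-ear one."""
    delay = int(round(delay_ms * SAMPLE_RATE / 1000.0))
    right = np.concatenate([np.zeros(delay), right])
    n = max(len(left), len(right))
    left = np.pad(left, (0, n - len(left)))
    right = np.pad(right, (0, n - len(right)))
    return np.column_stack([left, right])

# Illustrative use with two short synthetic "utterances" and a 50-ms lag.
utt_a = np.random.uniform(-1, 1, SAMPLE_RATE // 2)  # 0.5 s of noise as a stand-in
utt_b = np.random.uniform(-1, 1, SAMPLE_RATE // 2)
stereo = dichotic_pair(utt_a, utt_b, delay_ms=50.0)
print(stereo.shape)  # (number of samples, 2 channels)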


Journal of the Acoustical Society of America | 1973

Speaker identification by speech spectrograms: some further observations

Richard H. Bolt; Franklin S. Cooper; Edward E. David; Peter B. Denes; James M. Pickett; Kenneth N. Stevens

This letter reviews recent research on speaker identification by means of comparisons of speech spectrograms by human observers. Various factors affecting the reliability of identification are discussed, particularly those that would be present in practical forensic situations. Our interpretations of the new data lead us to reiterate our previous conclusion: that the degree of reliability of identification under practical conditions has not been scientifically established.

Collaboration


Dive into Franklin S. Cooper's collaboration.

Top Co-Authors

Leigh Lisker (University of Pennsylvania)
Paul A. Zahl (Memorial Hospital of South Bend)
Kenneth N. Stevens (Massachusetts Institute of Technology)