Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Michael M. Cohen is active.

Publication


Featured research published by Michael M. Cohen.


Attention, Perception, & Psychophysics | 1977

Voice onset time and fundamental frequency as cues to the /zi/-/si/ distinction

Dominic W. Massaro; Michael M. Cohen

The present series of experiments used factorial designs to evaluate which acoustic features are primarily responsible for the voicing distinction in the syllables /zi/ and /si/. Increases in frication duration tend to make the syllable more voiceless only if vocal cord vibration is absent or at a very low level during the frication period. Increasing the period between the onset of frication and the onset of vocal cord vibration changes the syllable from a predominantly voiced to a predominantly voiceless sound. This period, called voice onset time, can account for the change in perception regardless of simultaneous changes in the total frication duration or the relative duration of the frication period that contains vocal cord vibration. Changes in fundamental frequency had a large influence on the voicing judgments. With low fundamental frequencies, the judgments were predominantly voiced, whereas with high fundamental frequencies, voiceless judgments were predominant. The quantitative judgments of individual observers were described by a ratio-rule model that assumes a multiplicative combination of the independent cues, voice onset time and fundamental frequency. The model also provided a good description of previous studies of the acoustic cues used in the perception of voicing of fricatives.
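As a rough illustration of the form such a multiplicative ratio rule takes (a sketch assuming two cue-support values scaled to the unit interval; the exact parameterization reported in the paper may differ):

```latex
% Sketch of a multiplicative ratio rule combining two independent cues.
% t = support for "voiced" given the voice onset time, f = support given the
% fundamental frequency, each assumed to lie in [0, 1]; the paper's exact
% parameterization may differ.
P(\text{voiced} \mid \text{VOT}, F_0)
  = \frac{t \, f}{t \, f + (1 - t)(1 - f)}
```

In a rule of this form a cue value near 0.5 is effectively neutral, so the decision is carried by the other cue, which is consistent with treating voice onset time and fundamental frequency as independent sources of support.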


Behavior Research Methods | 1976

Real-time speech synthesis

Michael M. Cohen; Dominic W. Massaro

This paper describes how a speech synthesizer can be controlled by a small computer in real time. The synthesizer allows precise control of the speech output that is necessary for experimental purposes. The control information is computed in real time during synthesis in order to reduce data storage. The properties of the synthesizer and the control program are presented, along with an example of the speech synthesis.
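As a hedged, modern sketch of the idea of computing control information during synthesis rather than storing it (the function, parameter names, and values below are illustrative assumptions, not the authors' control program or synthesizer interface), one might interpolate synthesizer parameters frame by frame from a compact list of segment targets:

```python
# Hedged sketch: generating synthesizer control parameters frame by frame from a
# compact list of segment targets, rather than storing a full parameter track.

def control_frames(segments, frame_ms=5):
    """segments: list of (duration_ms, {parameter: target_value}).
    Yields one interpolated parameter dictionary per synthesis frame."""
    current = dict(segments[0][1])            # start at the first segment's targets
    for duration_ms, target in segments:
        n_frames = max(1, duration_ms // frame_ms)
        for i in range(1, n_frames + 1):
            t = i / n_frames
            # Linear interpolation from the previous values toward this target.
            yield {p: current[p] + t * (target[p] - current[p]) for p in target}
        current = dict(target)

# Illustrative formant targets only (values are assumptions, not from the paper).
segments = [
    (50, {"F1": 300.0, "F2": 2300.0}),   # /i/-like configuration
    (80, {"F1": 700.0, "F2": 1200.0}),   # /a/-like configuration
]

for frame in control_frames(segments):
    pass  # each frame would be sent to the synthesizer as it is computed
```

Only the short target list needs to be stored; every intermediate frame is computed on the fly, which is the storage-saving idea the abstract describes.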


Archive | 1975

Preperceptual Auditory Storage in Speech Recognition

Dominic W. Massaro; Michael M. Cohen

Given that the acoustic stimulus for speech perception is extended in time and that perception cannot be immediate, it seems necessary to postulate a preperceptual auditory storage that holds the first part of the sound pattern until it is complete and perception has occurred. The duration of this preperceptual auditory storage places an upper time limit on the sound patterns functional in speech recognition. A recognition masking task has been developed to study the properties of preperceptual auditory storage and the temporal course of the speech perception process. In this task, a short speech stimulus is preceded or followed after some variable silent interval by a second sound. Both sounds are presented at a normal listening intensity. A number of studies have shown that the second sound interferes with the perception of the first if the second sound is presented before recognition of the first is complete. Backward masking results have shown that speech perception is not immediate but requires time for the synthesis of the sound pattern held in preperceptual auditory storage. The present studies evaluate some of the properties of preperceptual auditory storage and the primary recognition process. The fact that a second sound can interfere with perception of a first sound even if the sounds are presented to opposite ears locates preperceptual auditory storage at a central rather than a peripheral level. A first sound can interfere with a second sound if the sounds occur within roughly 80 msec, whereas the second interferes with the first out to an intersound interval of roughly 250 msec.
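To make the trial structure concrete, here is a hedged sketch of how the forward- and backward-masking trials described above could be laid out; the stimulus durations and the exact set of silent intervals (beyond the roughly 80 and 250 msec mentioned in the text) are assumptions.

```python
# Hedged sketch of the recognition-masking trial layout described above; target and
# mask durations and the precise interval set are assumptions for illustration.

from itertools import product

TARGET_MS = 20                          # brief speech stimulus (assumed duration)
MASK_MS = 20                            # second, masking sound (assumed duration)
INTERVALS_MS = [0, 40, 80, 160, 250]    # variable silent inter-sound intervals (assumed)

def trial_timeline(order, isi_ms):
    """Return the sequence of events for one trial.
    order = "forward": the mask precedes the target; "backward": it follows."""
    if order == "forward":
        return [("mask", MASK_MS), ("silence", isi_ms), ("target", TARGET_MS)]
    return [("target", TARGET_MS), ("silence", isi_ms), ("mask", MASK_MS)]

# Cross masking order with interval; both sounds would be played at a normal listening level.
trials = [trial_timeline(order, isi)
          for order, isi in product(("forward", "backward"), INTERVALS_MS)]
```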


Archive | 1997

Section Introduction. Talking Heads in Speech Synthesis

Dominic W. Massaro; Michael M. Cohen

This book documents that "Progress in Speech Synthesis" is indeed being made. Just a little experience with the requirements of speech synthesis converts even the most optimistic to the realization of the tremendous endeavor that is required. Both a highly interdisciplinary approach and an almost unlimited supply of technological and human resources are necessary for starters. We can also expect that progress, although cumulative, will necessarily be gradual and too slow for many of us. We applaud the authors for their significant contributions and look forward to the continued progress of their promising research programs.


Archive | 1992

On the Similarity of Categorization Models

Michael M. Cohen; Dominic W. Massaro


Archive | 1988

Visible language in speech perception: Lipreading and reading

Dominic W. Massaro; Lynn Thompson; Michael M. Cohen


Archive | 1987

Process and connectionist models of pattern recognition

Dominic W. Massaro; Michael M. Cohen


AVSP | 1997

Audiovisual speech perception in dyslexics: impaired unimodal perception but no audiovisual integration deficit.

Ruth Campbell; A. Whittingham; U. Frith; Dominic W. Massaro; Michael M. Cohen


AVSP | 2005

Visual contribution to speech perception: measuring the intelligibility of talking heads.

Slim Ouni; Michael M. Cohen; Hope Ishak; Dominic W. Massaro


Archive | 1999

Demonstrations of Dialogue Design Tools in the CSLU Toolkit

Ron Cole; Jacques de Villiers; Kal Shobaki; Dominic W. Massaro; Jonas Beskow; Michael M. Cohen

Collaboration


Dive into Michael M. Cohen's collaborations.

Top Co-Authors

Hope Ishak
University of California

Slim Ouni
University of Lorraine

Jonas Beskow
Royal Institute of Technology