Publication


Featured research published by Howard C. Nusbaum.


Speech Communication | 1985

Speech Perception, Word Recognition and the Structure of the Lexicon

David B. Pisoni; Howard C. Nusbaum; Paul A. Luce; Louisa M. Slowiaczek

This paper reports the results of three projects concerned with auditory word recognition and the structure of the lexicon. The first project was designed to experimentally test several specific predictions derived from MACS, a simulation model of the Cohort Theory of word recognition. Using a priming paradigm, evidence was obtained for acoustic-phonetic activation in word recognition in three experiments. The second project describes the results of analyses of the structure and distribution of words in the lexicon using a large lexical database. Statistics about similarity spaces for high- and low-frequency words were applied to previously published data on the intelligibility of words presented in noise. Differences in identification were shown to be related to structural factors about the specific words and the distribution of similar words in their neighborhoods. Finally, the third project describes efforts at developing a new theory of word recognition known as Phonetic Refinement Theory. The theory is based on findings from human listeners and was designed to incorporate some of the detailed acoustic-phonetic and phonotactic knowledge that human listeners have about the internal structure of words and the organization of words in the lexicon, and how they use this knowledge in word recognition. Taken together, the results of these projects demonstrate a number of new and important findings about the relation between speech perception and auditory word recognition, two areas of research that have traditionally been approached from quite different perspectives.
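The "similarity spaces" and "neighborhoods" the abstract refers to are commonly operationalized as the set of lexicon entries whose phonemic transcription differs from a target word by a single substitution, insertion, or deletion. A minimal sketch of that computation, with a hypothetical toy lexicon (not data from the paper):

```python
def one_edit_apart(a: str, b: str) -> bool:
    """True if phoneme strings a and b differ by exactly one
    substitution, insertion, or deletion."""
    if abs(len(a) - len(b)) > 1 or a == b:
        return False
    if len(a) == len(b):
        # Same length: exactly one substituted position.
        return sum(x != y for x, y in zip(a, b)) == 1
    short, long_ = (a, b) if len(a) < len(b) else (b, a)
    # Lengths differ by one: deleting some single symbol from the
    # longer string must yield the shorter string.
    return any(long_[:i] + long_[i + 1:] == short for i in range(len(long_)))

def neighborhood(word: str, lexicon: list[str]) -> list[str]:
    """All lexicon entries one edit away from `word` (its neighbors)."""
    return [w for w in lexicon if one_edit_apart(word, w)]

# Hypothetical toy lexicon in a crude phonemic spelling.
lexicon = ["kat", "bat", "kab", "kart", "at", "dog"]
print(neighborhood("kat", lexicon))  # ['bat', 'kab', 'kart', 'at']
```

A word with many such neighbors sits in a dense similarity space; the abstract's finding is that identification in noise varies with this kind of structural property, not just word frequency.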


Human Factors | 1985

Some effects of training on the perception of synthetic speech.

Eileen C. Schwab; Howard C. Nusbaum; David B. Pisoni

The present study was conducted to determine the effects of training on the perception of synthetic speech. Three groups of subjects were tested with synthetic speech using the same tasks before and after training. One group was trained with synthetic speech. A second group went through the identical training procedures using natural speech. The third group received no training. Although performance of the three groups was the same prior to training, significant differences on the post-test measures of word recognition were observed: the group trained with synthetic speech performed much better than the other two groups. A six-month follow-up indicated that the group trained with synthetic speech displayed long-term retention of the knowledge and experience gained with prior exposure to synthetic speech generated by a text-to-speech system.


Journal of Experimental Psychology: Learning, Memory, and Cognition | 1988

Perceptual Learning of Synthetic Speech Produced by Rule

Steven L. Greenspan; Howard C. Nusbaum; David B. Pisoni

To examine the effects of stimulus structure and variability on perceptual learning, we compared transcription accuracy before and after training with synthetic speech produced by rule. Subjects were trained with either isolated words or fluent sentences of synthetic speech that were either novel stimuli or a fixed list of stimuli that was repeated. Subjects who were trained on the same stimuli every day improved as much as did the subjects who were given novel stimuli. In a second experiment, the size of the repeated stimulus set was reduced. Under these conditions, subjects trained with repeated stimuli did not generalize to novel stimuli as well as did subjects trained with novel stimuli. Our results suggest that perceptual learning depends on the degree to which the training stimuli characterize the underlying structure of the full stimulus set. Furthermore, we found that training with isolated words only increased the intelligibility of isolated words, whereas training with sentences increased the intelligibility of both isolated words and sentences.


Human Factors | 1985

Effects of Speech Rate and Pitch Contour on the Perception of Synthetic Speech

Louisa M. Slowiaczek; Howard C. Nusbaum

The increased use of voice-response systems has resulted in a greater need for systematic evaluation of the role of segmental and suprasegmental factors in determining the intelligibility of synthesized speech. Two experiments were conducted to examine the effects of pitch contour and speech rate on the perception of synthetic speech. In Experiment 1, subjects transcribed sentences that were either syntactically correct and meaningful or syntactically correct but semantically anomalous. In Experiment 2, subjects transcribed sentences that varied in length and syntactic structure. In both experiments a text-to-speech system generated synthetic speech at either 150 or 250 words/min. Half of the test sentences were generated with a flat pitch (monotone) and half were generated with normally inflected clausal intonation. The results indicate that the identification of words in fluent synthetic speech is influenced by speaking rate, meaning, length, and, to a lesser degree, pitch contour. The results suggest that in many applied situations the perception of the segmental information in the speech signal may be more critical to the intelligibility of synthesized speech than are suprasegmental factors.


Behavior Research Methods, Instruments, & Computers | 1985

Constraints on the perception of synthetic speech generated by rule.

Howard C. Nusbaum; David B. Pisoni

Within the next few years, there will be an extensive proliferation of various types of voice response devices in human-machine communication systems. Unfortunately, at present, relatively little basic or applied research has been carried out on the intelligibility, comprehension, and perceptual processing of synthetic speech produced by these devices. On the basis of our research, we identify five factors that must be considered in studying the perception of synthetic speech: (1) the specific demands imposed by a particular task, (2) the inherent limitations of the human information processing system, (3) the experience and training of the human listener, (4) the linguistic structure of the message set, and (5) the structure and quality of the speech signal.


International Conference on Acoustics, Speech, and Signal Processing | 1985

Some acoustic-phonetic correlates of speech produced in noise

David B. Pisoni; Robert H. Bernacki; Howard C. Nusbaum; Moshe Yuchtman

Acoustical analyses were carried out on the digits 0-9 spoken by two male talkers in quiet and in 90 dB SPL of masking noise in their headphones. The results replicated previous studies demonstrating reliable increases in amplitude, duration and vocal pitch while talking in noise. We also found reliable differences in the tilt of the short-term spectrum of consonants and vowels. The results are discussed in terms of: (1) the development of algorithms for recognition of speech in noise; (2) the nature of the acoustic changes that take place when talkers produce speech under adverse conditions such as noise, stress or high cognitive load; and (3) the role of training and feedback in controlling and modifying a talker's speech to improve performance of current speech recognizers.


Attention, Perception, & Psychophysics | 1979

Contextual effects in vowel perception I: anchor-induced contrast effects.

James R. Sawusch; Howard C. Nusbaum

Results from recent experiments using a selective adaptation paradigm with vowels have been interpreted as the result of the fatigue of a set of feature detectors. These results could also be interpreted, however, as resulting from changes in auditory memory (auditory contrast) or changing response criteria (response bias). In the present studies, subjects listened to vowels under two conditions: an equiprobable control, with each of the stimuli occurring equally often, and an anchoring condition, with one vowel occurring more often than any of the others. Contrast effects were found in that vowel category boundaries tended to shift toward the category of the anchor, relative to the equiprobable control. Results from these experiments were highly similar to previous selective adaptation results and suggest that neither feature detector fatigue nor response criterion changes can adequately account for the adaptation/anchoring results found with vowels.


Attention, Perception, & Psychophysics | 1980

Contextual effects in vowel perception II: Evidence for two processing mechanisms

James R. Sawusch; Howard C. Nusbaum; Eileen C. Schwab

Recent experiments have indicated that contrast effects can be obtained with vowels by anchoring a test series with one of the endpoint vowels. These contextual effects cannot be attributed to feature detector fatigue or to the induction of an overt response bias. In the present studies, anchored ABX discrimination functions and signal detection analyses of identification data (before and after anchoring) for an [i]-[I] vowel series were used to demonstrate that [i] and [I] anchoring produce contrast effects by affecting different perceptual mechanisms. The effects of [i] anchoring were to increase within-[i] category sensitivity, while [I] anchoring shifted criterion placements. When vowels were placed in CVC syllables to reduce available auditory memory, there was a significant decrease in the size of the [I]-anchor contrast effects. The magnitude of the [i]-anchor effect was unaffected by the reduction in vowel information available in auditory memory. These results suggest that [i] and [I] anchors affect mechanisms at different levels of processing. The [i] anchoring results may reflect normalization processes in speech perception that operate at an early level of perceptual processing, while the [I] anchoring results represent changes in response criterion mediated by auditory memory for vowel information.
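The "signal detection analyses" the abstract leans on separate a change in sensitivity from a shift in criterion placement. A minimal sketch of the two standard quantities, d' and criterion c, computed from hit and false-alarm rates (the rates below are illustrative values, not data from the paper):

```python
from statistics import NormalDist

def dprime_and_criterion(hit_rate: float, fa_rate: float) -> tuple[float, float]:
    """Sensitivity d' = z(H) - z(FA); criterion c = -(z(H) + z(FA)) / 2.
    Rates must lie strictly between 0 and 1."""
    z = NormalDist().inv_cdf
    d_prime = z(hit_rate) - z(fa_rate)
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))
    return d_prime, criterion

d, c = dprime_and_criterion(0.84, 0.16)
print(round(d, 2), round(c, 2))  # 1.99 0.0
```

In these terms, the abstract's claim is that [i] anchoring changes d' (within-category sensitivity) while [I] anchoring moves c (criterion placement), which is why the two effects are attributed to different levels of processing.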


Attention, Perception, & Psychophysics | 1983

The role of “chirp” identification in duplex perception

Howard C. Nusbaum; Eileen C. Schwab; James R. Sawusch

Duplex perception occurs when the phonetically distinguishing transitions of a syllable are presented to one ear and the rest of the syllable (the “base”) is simultaneously presented to the other ear. Subjects report hearing both a nonspeech “chirp” and a speech syllable correctly cued by the transitions. In two experiments, we compared phonetic identification of intact syllables, duplex percepts, isolated transitions, and bases. In both experiments, subjects were able to identify the phonetic information encoded into isolated transitions in the absence of an appropriate syllabic context. Also, there was no significant difference in phonetic identification of isolated transitions and duplex percepts. Finally, in the second experiment, the category boundaries from identification of isolated transitions and duplex percepts were not significantly different from each other. However, both boundaries were statistically different from the category boundary for intact syllables. Taken together, these results suggest that listeners do not need to perceptually integrate F2 transitions or F2 and F3 transition pairs with the base in duplex perception. Rather, it appears that listeners identify the chirps as speech without reference to the base.


Attention, Perception, & Psychophysics | 1983

Auditory and phonetic processes in place perception for stops

James R. Sawusch; Howard C. Nusbaum

Use of the selective adaptation procedure with speech stimuli has led to a number of theoretical positions with regard to the level or levels of processing affected by adaptation. Recent experiments (i.e., Sawusch & Jusczyk, 1981) have, however, yielded strong evidence that only auditory coding processes are affected by selective adaptation. In the present experiment, a test series that varied along the phonetic dimension of place of articulation for stops ([da]-[ga]) was used in conjunction with a [ska] syllable that shared the phonetic value of velar with the [ga] end of the test series but had a spectral structure that closely matched a stimulus from the [da] end of the series. As an adaptor, the [ska] and [da] stimuli produced identical effects, whereas in a paired-comparison procedure, the [ska] produced effects consistent with its phonetic label. These results offer further support for the contention that selective adaptation affects only the auditory coding of speech, whereas the paired-comparison procedure affects only the phonetic coding of speech. On the basis of these results and previous place-adaptation results, a process model of speech perception is described.

Collaboration

Top co-authors of Howard C. Nusbaum:

David B. Pisoni (Indiana University Bloomington)
Eileen C. Schwab (Indiana University Bloomington)
James R. Sawusch (State University of New York System)
Louisa M. Slowiaczek (Indiana University Bloomington)
Paul A. Luce (State University of New York System)
Michael J. Dedina (Indiana University Bloomington)
Moshe Yuchtman (Indiana University Bloomington)
Robert H. Bernacki (Indiana University Bloomington)
Steven L. Greenspan (Indiana University Bloomington)