Network


Latest external collaborations at the country level.

Hotspot


Research topics in which John W. Mullennix is active.

Publication


Featured research published by John W. Mullennix.


Journal of Experimental Psychology: General | 1995

The special role of rimes in the description, use, and acquisition of English orthography.

Rebecca Treiman; John W. Mullennix; Ranka Bijeljac-Babic; E. Daylene Richmond-Welty

The links between spellings and sounds in a large set of English words with consonant-vowel-consonant phonological structure were examined. Orthographic rimes, or units consisting of a vowel grapheme and a final consonant grapheme, had more stable pronunciations than either individual vowels or initial consonant-plus-vowel units. In 2 large-scale studies of word pronunciation, the consistency of pronunciation of the orthographic rime accounted for variance in latencies and errors beyond that contributed by the consistency of pronunciation of the individual graphemes and by other factors. In 3 experiments, children and adults likewise made more errors on words with less consistently pronounced orthographic rimes than on words with more consistently pronounced orthographic rimes. Relations between spellings and sounds in the simple monomorphemic words of English are more predictable when the level of onsets and rimes is taken into account than when only graphemes and phonemes are considered.


Journal of the Acoustical Society of America | 1989

Some effects of talker variability on spoken word recognition

John W. Mullennix; David B. Pisoni; Christopher S. Martin

The perceptual consequences of trial-to-trial changes in the voice of the talker on spoken word recognition were examined. The results from a series of experiments using perceptual identification and naming tasks demonstrated that perceptual performance decreases when the voice of the talker changes from trial to trial compared to performance when the voice on each trial remains the same. In addition, the effects of talker variability on word recognition appeared to be more robust and less dependent on task than the effects of word frequency and lexical structure. Possible hypotheses regarding the nature of the processes giving rise to these effects are discussed, with particular attention to the idea that the processing of information about the talker's voice is intimately related to early perceptual processes that extract acoustic-phonetic information from the speech signal.


Attention Perception & Psychophysics | 1990

Stimulus variability and processing dependencies in speech perception

John W. Mullennix; David B. Pisoni

Processing dependencies in speech perception between voice and phoneme were investigated using the Garner (1974) speeded classification procedure. Variability in the voice of the talker and in the cues to word-initial consonants were manipulated. The results showed that the processing of a talker’s voice and the perception of voicing are asymmetrically dependent. In addition, when stimulus variability was increased in each dimension, the amount of orthogonal interference obtained for each dimension became significantly larger. The processing asymmetry between voice and phoneme was interpreted in terms of a parallel-contingent relationship of talker normalization processes to auditory-to-phonetic coding processes. The processing of voice information appears to be qualitatively different from the encoding of segmental phonetic information, although they are not independent. Implications of these results for current theories of speech perception are discussed.


Journal of Experimental Psychology: Learning, Memory, and Cognition | 1989

Effects of talker variability on recall of spoken word lists.

Christopher S. Martin; John W. Mullennix; David B. Pisoni; W. V. Summers

Three experiments were conducted to investigate recall of lists of words containing items spoken either by a single talker or by different talkers. In each experiment, recall of early list items was better for lists spoken by a single talker than for lists of the same words spoken by different talkers. A memory preload procedure demonstrated that recall of visually presented preload digits was superior when the words in a subsequent list were spoken by a single talker than when they were spoken by different talkers. In addition, a retroactive interference task demonstrated that the effects of talker variability on the recall of early list items were not due to the use of talker-specific acoustic cues in working memory at the time of recall. Taken together, the results suggest that word lists produced by different talkers require more processing resources in working memory than do lists produced by a single talker. The findings are discussed in terms of the role that active rehearsal plays in the transfer of spoken items into long-term memory and the factors that may affect the efficiency of rehearsal.


Journal of the Acoustical Society of America | 1995

The perceptual representation of voice gender

John W. Mullennix; Keith Johnson; Meral Topcu‐Durgun; Lynn M. Farnsworth

The perceptual representation of voice gender was examined with two experimental paradigms: identification/discrimination and selective adaptation. The results from the identification and discrimination of a synthetic male-female voice continuum indicated that voice gender perception was not categorical. In addition, results from selective adaptation experiments with natural and synthetic voice stimuli indicated that the representation of voice that undergoes adaptation is auditory based. Overall, these findings suggest that the perceptual representation of voice gender is auditory based and is qualitatively different from the representation of phonetic information.


Human Factors | 1991

Comprehension of synthetic speech produced by rule: word monitoring and sentence-by-sentence listening times

James V. Ralston; David B. Pisoni; Scott E. Lively; Beth G. Greene; John W. Mullennix

Previous comprehension studies using postperceptual memory tests have often reported negligible differences in performance between natural speech and several kinds of synthetic speech produced by rule, despite large differences in segmental intelligibility. The present experiments investigated the comprehension of natural and synthetic speech using two different on-line tasks: word monitoring and sentence-by-sentence listening. On-line task performance was slower and less accurate for passages of synthetic speech than for passages of natural speech. Recognition memory performance in both experiments was less accurate following passages of synthetic speech than of natural speech. Monitoring performance, sentence listening times, and recognition memory accuracy all showed moderate correlations with intelligibility scores obtained using the Modified Rhyme Test. The results suggest that poorer comprehension of passages of synthetic speech is attributable in part to the greater encoding demands of synthetic speech. In contrast to earlier studies, the present results demonstrate that on-line tasks can be used to measure differences in comprehension performance between natural and synthetic speech.


Computers in Human Behavior | 2003

Social perception of male and female computer synthesized speech

John W. Mullennix; Steven E. Stern; Stephen J. Wilson; Corrie-lynn Dyson

The present study addressed the issue of whether social perception of human speech and computerized text-to-speech (TTS) is affected by gender of voice and gender of listener. Listeners were presented with a persuasive argument in either a male or female human or synthetic voice and were assessed on attitude change and their ratings of various speech qualities. The results indicated that female human speech was rated as preferable to female synthetic speech, and that male synthetic speech was rated as preferable to female synthetic speech. Degree of persuasion did not differ across human and synthetic speech; however, female listeners were persuaded more by the argument than male listeners were. Patterns of ratings across male and female listeners were fairly similar across human and synthetic speech, suggesting that gender stereotyping for human voices and computerized voices may occur in a similar fashion.


Human Factors | 1999

The persuasiveness of synthetic speech versus human speech

Steven E. Stern; John W. Mullennix; Corrie-lynn Dyson; Stephen J. Wilson

Is computer-synthesized speech as persuasive as the human voice when presenting an argument? After completing an attitude pretest, 193 participants were randomly assigned to listen to a persuasive appeal under one of three conditions: a high-quality synthesized speech system (DECtalk Express), a low-quality synthesized speech system (Monologue), or a tape recording of a human voice. Following the appeal, participants completed a posttest attitude survey and a series of questionnaires designed to assess perceptions of speech qualities, perceptions of the speaker, and perceptions of the message. The human voice was generally perceived more favorably than the computer-synthesized voice, and the speaker was perceived more favorably when the voice was human than when it was computer synthesized. There was, however, no evidence that computerized speech, as compared with the human voice, affected persuasion or perceptions of the message. Actual or potential applications of this research include issues that should be considered when designing synthetic speech systems.


Journal of the Acoustical Society of America | 1987

Integral processing of phonemes: evidence for a phonetic mode of perception

Gail R. Tomiak; John W. Mullennix; James R. Sawusch

To investigate the extent and locus of integral processing in speech perception, a speeded classification task was utilized with a set of noise-tone analogs of the fricative-vowel syllables /fæ/, /ʃæ/, /fu/, and /ʃu/. Unlike the stimuli used in previous studies of selective perception of syllables, these stimuli did not contain consonant-vowel transitions. Subjects were asked to classify on the basis of one of the two syllable components. Some subjects were told that the stimuli were computer-generated noise-tone sequences. These subjects processed the noise and tone separably. Irrelevant variation of the noise did not affect reaction times (RTs) for the classification of the tone, and vice versa. Other subjects were instructed to treat the stimuli as speech. For these subjects, irrelevant variation of the fricative increased RTs for the classification of the vowel, and vice versa. A second experiment employed naturally spoken fricative-vowel syllables with the same task. Classification RTs showed a pattern of integrality in that irrelevant variation of either component increased RTs to the other. These results indicate that knowledge of coarticulation (or its acoustic consequences) is a basic element of speech perception. Furthermore, the use of this knowledge in phonetic coding is mandatory, even in situations where the stimuli do not contain coarticulatory information.


Journal of Applied Psychology | 2002

Effects of perceived disability on persuasiveness of computer-synthesized speech

Steven E. Stern; John W. Mullennix; Stephen J. Wilson

Are perceptions of computer-synthesized speech altered by the belief that the person using this technology is disabled? In a 2 x 2 factorial design, participants completed an attitude pretest and were randomly assigned to watch an actor deliver a persuasive appeal under 1 of the following 4 conditions: disabled or nondisabled using normal speech and disabled or nondisabled using computer-synthesized speech. Participants then completed a posttest survey and a series of questionnaires assessing perceptions of voice, speaker, and message. Natural speech was perceived more favorably and was more persuasive than computer-synthesized speech. When the speaker was perceived to be speech-disabled, however, this difference diminished. This finding suggests that negatively viewed assistive technologies will be perceived more favorably when used by people with disabilities.

Collaboration


An overview of John W. Mullennix's collaborations.

Top Co-Authors

David B. Pisoni | Indiana University Bloomington
Christopher S. Martin | Indiana University Bloomington
Beth G. Greene | Indiana University Bloomington
James V. Ralston | Indiana University Bloomington
Rebecca Treiman | Washington University in St. Louis
Scott E. Lively | Indiana University Bloomington
Stephen J. Wilson | Pennsylvania State University