Publication


Featured research published by Lynne C. Nygaard.


Psychological Science | 1994

Speech Perception as a Talker-Contingent Process

Lynne C. Nygaard; Mitchell S. Sommers; David B. Pisoni

To determine how familiarity with a talker's voice affects perception of spoken words, we trained two groups of subjects to recognize a set of voices over a 9-day period. One group then identified novel words produced by the same set of talkers at four signal-to-noise ratios. Control subjects identified the same words produced by a different set of talkers. The results showed that the ability to identify a talker's voice improved intelligibility of novel words produced by that talker. The results suggest that speech perception may involve talker-contingent processes whereby perceptual learning of aspects of the vocal source facilitates the subsequent phonetic analysis of the acoustic signal.


Attention, Perception, & Psychophysics | 1998

Talker-specific learning in speech perception

Lynne C. Nygaard; David B. Pisoni

The effects of perceptual learning of talker identity on the recognition of spoken words and sentences were investigated in three experiments. In each experiment, listeners were trained to learn a set of 10 talkers’ voices and were then given an intelligibility test to assess the influence of learning the voices on the processing of the linguistic content of speech. In the first experiment, listeners learned voices from isolated words and were then tested with novel isolated words mixed in noise. The results showed that listeners who were given words produced by familiar talkers at test showed better identification performance than did listeners who were given words produced by unfamiliar talkers. In the second experiment, listeners learned novel voices from sentence-length utterances and were then presented with isolated words. The results showed that learning a talker’s voice from sentences did not generalize well to identification of novel isolated words. In the third experiment, listeners learned voices from sentence-length utterances and were then given sentence-length utterances produced by familiar and unfamiliar talkers at test. We found that perceptual learning of novel voices from sentence-length utterances improved speech intelligibility for words in sentences. Generalization and transfer from voice learning to linguistic processing was found to be sensitive to the talker-specific information available during learning and test. These findings demonstrate that increased sensitivity to talker-specific information affects the perception of the linguistic properties of speech in isolated words and sentences.


Attention, Perception, & Psychophysics | 1999

Effects of talker, rate, and amplitude variation on recognition memory for spoken words

Ann R. Bradlow; Lynne C. Nygaard; David B. Pisoni

This study investigated the encoding of the surface form of spoken words using a continuous recognition memory task. The purpose was to compare and contrast three sources of stimulus variability—talker, speaking rate, and overall amplitude—to determine the extent to which each source of variability is retained in episodic memory. In Experiment 1, listeners judged whether each word in a list of spoken words was “old” (had occurred previously in the list) or “new.” Listeners were more accurate at recognizing a word as old if it was repeated by the same talker and at the same speaking rate; however, there was no recognition advantage for words repeated at the same overall amplitude. In Experiment 2, listeners were first asked to judge whether each word was old or new, as before, and then they had to explicitly judge whether it was repeated by the same talker, at the same rate, or at the same amplitude. On the first task, listeners again showed an advantage in recognition memory for words repeated by the same talker and at the same speaking rate, but no advantage occurred for the amplitude condition. However, in all three conditions, listeners were able to explicitly detect whether an old word was repeated by the same talker, at the same rate, or at the same amplitude. These data suggest that although information about all three properties of spoken words is encoded and retained in memory, each source of stimulus variation differs in the extent to which it affects episodic memory for spoken words.


Journal of Language and Social Psychology | 2002

Gender Differences in Vocal Accommodation: The Role of Perception

Laura L. Namy; Lynne C. Nygaard; Denise Sauerteig

This study examined how perceptual sensitivity contributes to gender differences in vocal accommodation. Male and female shadowers repeated isolated words presented over headphones by male and female speakers, and male and female listeners evaluated whether accommodation occurred. Female shadowers accommodated more than male shadowers did, and accommodated more to male than to female speakers, although some speakers elicited greater accommodation than others. Gender differences in accommodation emerged even when immediate social motives were minimized, suggesting that accommodation may be due, in part, to differences in perceptual sensitivity to vocal characteristics.


Journal of the Acoustical Society of America | 1994

Stimulus variability and spoken word recognition. I. Effects of variability in speaking rate and overall amplitude

Mitchell S. Sommers; Lynne C. Nygaard; David B. Pisoni

The present experiments investigated how several different sources of stimulus variability within speech signals affect spoken-word recognition. The effects of varying talker characteristics, speaking rate, and overall amplitude on identification performance were assessed by comparing spoken-word recognition scores for contexts with and without variability along a specified stimulus dimension. Identification scores for word lists produced by single talkers were significantly better than for the identical items produced in multiple-talker contexts. Similarly, recognition scores for words produced at a single speaking rate were significantly better than for the corresponding mixed-rate condition. Simultaneous variations in both speaking rate and talker characteristics produced greater reductions in perceptual identification scores than variability along either dimension alone. In contrast, variability in the overall amplitude of test items over a 30-dB range did not significantly alter spoken-word recognition scores. The results provide evidence for one or more resource-demanding normalization processes which function to maintain perceptual constancy by compensating for acoustic-phonetic variability in speech signals that can affect phonetic identification.


Journal of the Acoustical Society of America | 2009

Perceptual learning of systematic variation in Spanish-accented speech

Sabrina K. Sidaras; Jessica E. D. Alexander; Lynne C. Nygaard

Spoken language is characterized by an enormous amount of variability in how linguistic segments are realized. In order to investigate how speech perceptual processes accommodate to multiple sources of variation, adult native speakers of American English were trained with English words or sentences produced by six Spanish-accented talkers. At test, listeners transcribed utterances produced by six familiar or unfamiliar Spanish-accented talkers. With only brief exposure, listeners perceptually adapted to accent-general regularities in spoken language, generalizing to novel accented words and sentences produced by unfamiliar accented speakers. Acoustic properties of vowel production and their relation to identification performance were assessed to determine if the English listeners were sensitive to systematic variation in the realization of accented vowels. Vowels that showed the most improvement after Spanish-accented training were distinct from nearby vowels in terms of their acoustic characteristics. These findings suggest that the speech perceptual system dynamically adjusts to the acoustic consequences of changes in a talker's voice and accent.


Journal of Experimental Psychology: Human Perception and Performance | 2008

Communicating Emotion: Linking Affective Prosody and Word Meaning

Lynne C. Nygaard; Jennifer S. Queen

The present study investigated the role of emotional tone of voice in the perception of spoken words. Listeners were presented with words that had either a happy, sad, or neutral meaning. Each word was spoken in a tone of voice (happy, sad, or neutral) that was congruent, incongruent, or neutral with respect to affective meaning, and naming latencies were collected. Across experiments, tone of voice was either blocked or mixed with respect to emotional meaning. The results suggest that emotional tone of voice facilitated linguistic processing of emotional words in an emotion-congruent fashion. These findings suggest that information about emotional tone is used in the processing of linguistic content, influencing the recognition and naming of spoken words in an emotion-congruent manner.


Attention, Perception, & Psychophysics | 1995

Effects of stimulus variability on perception and representation of spoken words in memory

Lynne C. Nygaard; Mitchell S. Sommers; David B. Pisoni

A series of experiments was conducted to investigate the effects of stimulus variability on the memory representations for spoken words. A serial recall task was used to study the effects of changes in speaking rate, talker variability, and overall amplitude on the initial encoding, rehearsal, and recall of lists of spoken words. Interstimulus interval (ISI) was manipulated to determine the time course and nature of processing. The results indicated that at short ISIs, variations in both talker and speaking rate imposed a processing cost that was reflected in poorer serial recall for the primacy portion of word lists. At longer ISIs, however, variation in talker characteristics resulted in improved recall in initial list positions, whereas variation in speaking rate had no effect on recall performance. Amplitude variability had no effect on serial recall across all ISIs. Taken together, these results suggest that encoding of stimulus dimensions such as talker characteristics, speaking rate, and overall amplitude may be the result of distinct perceptual operations. The effects of these sources of stimulus variability in speech are discussed with regard to perceptual saliency, processing demands, and memory representation for spoken words.


Memory & Cognition | 2002

Resolution of lexical ambiguity by emotional tone of voice.

Lynne C. Nygaard; Erin R. Lunders

In the present study, the effects of emotional tone of voice on the perception of word meaning were investigated. In two experiments, listeners were presented with emotional homophones that had one affective meaning (happy or sad) and one neutral meaning. In both experiments, the listeners were asked to transcribe the emotional homophones presented in three different affective tones—happy, neutral, and sad. In the first experiment, trials were blocked by tone of voice, and in the second experiment, tone of voice varied from trial to trial. The results showed that the listeners provided more affective than neutral transcriptions when the tone of voice was congruent with the emotional meaning of the homophone. These findings suggest that emotional tone of voice affects the processing of lexically ambiguous words by biasing the selection of word meaning.


Cognitive Science | 2009

The semantics of prosody: acoustic and perceptual evidence of prosodic correlates to word meaning.

Lynne C. Nygaard; Debora S. Herold; Laura L. Namy

This investigation examined whether speakers produce reliable prosodic correlates to meaning across semantic domains and whether listeners use these cues to derive word meaning from novel words. Speakers were asked to produce phrases in infant-directed speech in which novel words were used to convey one of two meanings from a set of antonym pairs (e.g., big/small). Acoustic analyses revealed that some acoustic features were correlated with overall valence of the meaning. However, each word meaning also displayed a unique acoustic signature, and semantically related meanings elicited similar acoustic profiles. In two perceptual tests, listeners either attempted to identify the novel words with a matching meaning dimension (picture pair) or with mismatched meaning dimensions. Listeners inferred the meaning of the novel words significantly more often when prosody matched the word meaning choices than when prosody mismatched. These findings suggest that speech contains reliable prosodic markers to word meaning and that listeners use these prosodic cues to differentiate meanings. That prosody is semantic suggests a reconceptualization of traditional distinctions between linguistic and nonlinguistic properties of spoken language.

Collaboration


Dive into Lynne C. Nygaard's collaborations.

Top Co-Authors


David B. Pisoni

Indiana University Bloomington

Mitchell S. Sommers

Washington University in St. Louis


Alexandra Jesse

University of Massachusetts Amherst


Dawn M. Behne

Norwegian University of Science and Technology
