Publication


Featured research published by Dawn M. Behne.


Applied Psycholinguistics | 2004

The role of linguistic experience in the hemispheric processing of lexical tone

Yue Wang; Dawn M. Behne; Allard Jongman; Joan A. Sereno

This study investigated hemispheric lateralization of Mandarin tone. Four groups of listeners were examined: native Mandarin listeners, English‐Mandarin bilinguals, Norwegian listeners with experience with Norwegian tone, and American listeners with no tone experience. Tone pairs were dichotically presented and listeners identified which tone they heard in each ear. For the Mandarin listeners, 57% of the total errors occurred in the left ear, indicating a right-ear (left-hemisphere) advantage. The English‐Mandarin bilinguals exhibited nativelike patterns, with 56% left-ear errors. However, no ear advantage was found for the Norwegian or American listeners (48% and 47% left-ear errors, respectively). Results indicate left-hemisphere dominance of Mandarin tone by native and proficient bilingual listeners, whereas nonnative listeners show no evidence of lateralization, regardless of their familiarity with lexical tone.


Journal of the Acoustical Society of America | 2008

Linguistic experience and audio-visual perception of non-native fricatives

Yue Wang; Dawn M. Behne; Haisheng Jiang

This study examined the effects of linguistic experience on audio-visual (AV) perception of non-native (L2) speech. Canadian English natives and Mandarin Chinese natives differing in degree of English exposure [long and short length of residence (LOR) in Canada] were presented with English fricatives of three visually distinct places of articulation: interdentals nonexistent in Mandarin and labiodentals and alveolars common in both languages. Stimuli were presented in quiet and in a cafe-noise background in four ways: audio only (A), visual only (V), congruent AV (AVc), and incongruent AV (AVi). Identification results showed that overall performance was better in the AVc than in the A or V condition and better in quiet than in cafe noise. While the Mandarin long LOR group approximated the native English patterns, the short LOR group showed poorer interdental identification, more reliance on visual information, and greater AV-fusion with the AVi materials, indicating the failure of L2 visual speech category formation with the short LOR non-natives and the positive effects of linguistic experience with the long LOR non-natives. These results point to an integrated network in AV speech processing as a function of linguistic background and provide evidence to extend auditory-based L2 speech learning theories to the visual domain.


Journal of Phonetics | 2009

Influence of native language phonetic system on audio-visual speech perception

Yue Wang; Dawn M. Behne; Haisheng Jiang

This study examines how native language (L1) experience affects auditory–visual (AV) perception of nonnative (L2) speech. Korean, Mandarin and English perceivers were presented with English CV syllables containing fricatives with three places of articulation: labiodentals nonexistent in Korean, interdentals nonexistent in Korean and Mandarin, and alveolars occurring in all three L1s. The stimuli were presented as auditory-only, visual-only, congruent AV and incongruent AV. Results show that for the labiodentals which are nonnative in Korean, the Koreans had lower accuracy for the visual domain than the English and the Mandarin perceivers, but they nevertheless achieved native-level perception in the auditory and AV domains. For the interdentals nonexistent in Korean and Mandarin, while both nonnative groups had lower accuracy in the auditory domain than the native English group, they benefited from the visual information with improved performance in AV perception. Comparing the two nonnative groups, the Mandarin perceivers showed poorer auditory and AV identification for the interdentals and greater AV-fusion with the incongruent AV material than did the Koreans. These results indicate that nonnative perceivers are able to use visual speech information in L2 perception, although acquiring accurate use of the auditory and visual domains may not be similarly achieved across native groups, a process influenced by L1 experience.


Journal of the Acoustical Society of America | 2009

Audio-visual identification of place of articulation and voicing in white and babble noise

Magnus Alm; Dawn M. Behne; Yue Wang; Ragnhild Eg

Research shows that noise and phonetic attributes influence the degree to which auditory and visual modalities are used in audio-visual speech perception (AVSP). Research has, however, mainly focused on white noise and single phonetic attributes, thus neglecting the more common babble noise and possible interactions between phonetic attributes. This study explores whether white and babble noise differentially influence AVSP and whether these differences depend on phonetic attributes. White and babble noise of 0 and -12 dB signal-to-noise ratio were added to congruent and incongruent audio-visual stop consonant-vowel stimuli. The audio (A) and video (V) of incongruent stimuli differed either in place of articulation (POA) or voicing. Responses from 15 young adults show that, compared to white noise, babble resulted in more audio responses for POA stimuli, and fewer for voicing stimuli. Voiced syllables received more audio responses than voiceless syllables. Results can be attributed to discrepancies in the acoustic spectra of both the noise and speech target. Voiced consonants may be more auditorily salient than voiceless consonants which are more spectrally similar to white noise. Visual cues contribute to identification of voicing, but only if the POA is visually salient and auditorily susceptible to the noise type.
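The study above mixes noise into speech at fixed signal-to-noise ratios (0 and -12 dB). As an illustration of how noise is typically scaled to hit a target SNR (a minimal sketch, not the authors' actual stimulus-preparation procedure; function and variable names are invented for the example):

```python
import math

def rms(samples):
    """Root-mean-square amplitude of a signal."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def mix_at_snr(speech, noise, snr_db):
    """Scale `noise` so that 20*log10(rms(speech) / rms(scaled_noise))
    equals `snr_db`, then add it sample-by-sample to `speech`.
    Both inputs are equal-length lists of samples."""
    gain = rms(speech) / (rms(noise) * 10 ** (snr_db / 20.0))
    return [s + gain * n for s, n in zip(speech, noise)]
```

At 0 dB SNR the speech and scaled noise have equal power; at -12 dB the noise carries roughly 16 times the power of the speech, which is why identification accuracy drops sharply in that condition.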


Journal of the Acoustical Society of America | 2013

Audio-visual speech experience with age influences perceived audio-visual asynchrony in speech.

Magnus Alm; Dawn M. Behne

Previous research indicates that perception of audio-visual (AV) synchrony changes in adulthood. Possible explanations for these age differences include a decline in hearing acuity, a decline in cognitive processing speed, and increased experience with AV binding. The current study aims to isolate the effect of AV experience by comparing synchrony judgments from 20 young adults (20 to 30 yrs) and 20 normal-hearing middle-aged adults (50 to 60 yrs), an age range for which a decline of cognitive processing speed is expected to be minimal. When presented with AV stop consonant syllables with asynchronies ranging from 440 ms audio-lead to 440 ms visual-lead, middle-aged adults showed significantly less tolerance for audio-lead than young adults. Middle-aged adults also showed a greater shift in their point of subjective simultaneity than young adults. Natural audio-lead asynchronies are arguably more predictable than natural visual-lead asynchronies, and this predictability may render audio-lead thresholds more prone to experience-related fine-tuning.
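The point of subjective simultaneity (PSS) mentioned above is usually estimated by fitting a psychometric curve (e.g., a Gaussian) to the proportion of "simultaneous" responses across asynchronies and taking its peak. As a rough, assumption-light illustration of the idea (not the authors' analysis; the centroid below is a stand-in for a fitted peak):

```python
def estimate_pss(soas_ms, p_simultaneous):
    """Crude PSS estimate: the centroid of the tested stimulus-onset
    asynchronies (audio-lead negative, visual-lead positive, in ms),
    weighted by the proportion of 'simultaneous' responses at each
    asynchrony. A fitted Gaussian peak is the standard estimator;
    this weighted mean only illustrates the directional shift."""
    total = sum(p_simultaneous)
    return sum(s * p for s, p in zip(soas_ms, p_simultaneous)) / total
```

With a response profile that is symmetric around true synchrony the estimate is 0 ms; a profile skewed toward visual-lead trials yields a positive PSS, the kind of shift reported for the middle-aged group.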


Journal of the Acoustical Society of America | 1991

Effects of alcohol on speech: I. Durations of isolated words, sentences, and passages in fluent speech.

Dawn M. Behne; Susan M. Rivera; David B. Pisoni

Although the effects of alcohol on speech production have not been widely investigated, previous research has suggested that utterances produced while a talker is intoxicated may be longer than those produced while the talker is sober [e.g., Sobell et al., Folia Phonetica 34, 316–323 (1982); D. B. Pisoni and C. S. Martin, Alcoholism: Clinical Exp. Res. 13, 577–587 (1989)]. As part of a larger investigation of the effects of alcohol on speech, nine talkers were recorded while sober and intoxicated. Talkers produced isolated monosyllabic words, isolated spondees, isolated sentences, and passages of fluent speech. Two questions of utterance duration were addressed: (1) Does alcohol affect the duration of utterances? (2) Does alcohol affect the duration of different utterance types in the same way? The results revealed that isolated sentences and sentences from within passages produced in the intoxicated condition were reliably longer than those produced in the sober condition. However, for isolated monosyllabic words and spondees, utterance durations were not reliably different between the sober and intoxicated conditions. Results are discussed in terms of the effects of alcohol on speech motor control.


Multimedia Tools and Applications | 2015

Audiovisual robustness: exploring perceptual tolerance to asynchrony and quality distortion

Ragnhild Eg; Carsten Griwodz; Pål Halvorsen; Dawn M. Behne

Rules-of-thumb for noticeable and detrimental asynchrony between audio and video streams have long since been established from the contributions of several studies. Although these studies share similar findings, none have made any discernible assumptions regarding audio and video quality. Considering the use of active adaptation in present and upcoming streaming systems, audio and video will continue to be delivered in separate streams; consequently, the assumption that the rules-of-thumb hold independent of quality needs to be challenged. To put this assumption to the test, we focus on the detection, not the appraisal, of asynchrony at different levels of distortion. Cognitive psychologists use the term temporal integration to describe the failure to detect asynchrony. The term refers to a perceptual process with an inherent buffer for short asynchronies, where corresponding auditory and visual signals are merged into one experience. Accordingly, this paper discusses relevant causes and concerns with regard to asynchrony, it introduces research on audiovisual perception, and it moves on to explore the impact of audio and video quality on the temporal integration of different audiovisual events. Three content types are explored: speech from a news broadcast, music presented by a drummer, and physical action in the form of a chess game. Within these contexts, we found temporal integration to be very robust to quality discrepancies between the two modalities. In fact, asynchrony detection thresholds varied considerably more between the different content than they did between distortion levels. Nevertheless, our findings indicate that the assumption concerning the independence of asynchrony and audiovisual quality may have to be reconsidered.


Journal of the Acoustical Society of America | 2006

The effects of musical experience on linguistic pitch perception: A comparison of Norwegian professional singers and instrumentalists

Maren Helene Ro; Dawn M. Behne; Yue Wang

Speech prosody and music share tonal attributes well suited for studying cross‐domain transfer effects. The present study investigates whether the specific pitch experience acquired by professional singers and instrumentalists transfers to the perception of corresponding prosodic elements in a native and non‐native language. Norwegian and Mandarin words with tonal distinctions, together with corresponding hummed tones, were presented dichotically in a forced attention listening test to three groups of native Norwegian listeners: professional singers, professional instrumentalists, and nonmusicians. While instrumentalists and singers were both more accurate (higher percent correct for both ears) than nonmusicians for Mandarin linguistic and hummed tones, only instrumentalists showed positive transfer to corresponding native Norwegian stimuli. Results indicate a pattern of perceiving tonal distinctions that mirrors the pitch experience acquired through professional vocal and instrumental training: Instrumen...


Journal of the Acoustical Society of America | 1998

Perceived vowel quantity in Swedish: Effects of postvocalic voicing

Dawn M. Behne; Peter E. Czigler; Kirk P. H. Sullivan

Swedish is described as having a distinction between phonologically long and short vowels. This distinction is realized primarily through the duration of the vowels, but in some cases also through resonance characteristics of the vowels. In Swedish, as in many languages, vowel duration is also longer preceding a voiced postvocalic consonant than a voiceless one. This study examines the perceptual weight of vowel duration and the first and second formant (F1 and F2) frequencies in distinguishing phonologically long and short vowels before a voiceless consonant (experiment 1) and before a voiced consonant (experiment 2). For three pairs of Swedish vowels ([i:]‐[ɪ], [o:]‐[ɔ], [ɑ:]‐[a]), 100 /kVt/ (experiment 1) and 100 /kVd/ (experiment 2) words were resynthesized having ten degrees of vowel duration and ten degrees of F1 and F2 adjustment. In both experiments listeners decided whether presented words contained a phonologically long or short vowel. Reaction times were also recorded. Results show that vowel duratio...


Journal of the Acoustical Society of America | 1999

Perceived vowel quantity in Swedish: Native and British listeners

Dawn M. Behne; Kirk P. H. Sullivan; Peter E. Czigler

In many languages, vowels are characterized by their use of contrastive phonological vowel quantity and vowel quality. In Swedish, vowels have traditionally been described as being distinct in quality as well as having a phonological distinction between short and long vowel quantities. In English, however, phonological distinctions among vowels are described as primarily qualitative. This investigation examines the perceptual use of vowel duration and the first two vowel formant frequencies in distinguishing Swedish vowel pairs by three groups of listeners: native Swedish listeners (SS), British English listeners who do not know Swedish (EE), and British listeners who know Swedish well (ES). For each of three pairs of Swedish vowels ([i:]‐[ɪ], [o:]‐[ɔ], [ɑ:]‐[a]), /kVt/ words were resynthesized having ten degrees of vowel duration and ten degrees of F1 and F2 adjustment. Listeners’ responses and reaction times in a rhyming task show that unlike native listen...

Collaboration

Top co-authors of Dawn M. Behne:

Yue Wang (Simon Fraser University)
Peter E. Czigler (Norwegian University of Science and Technology)
Magnus Alm (Norwegian University of Science and Technology)
Ragnhild Eg (Simula Research Laboratory)