Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Kristin J. Van Engen is active.

Publication


Featured research published by Kristin J. Van Engen.


Language and Speech | 2010

The Wildcat Corpus of native- and foreign-accented English: Communicative efficiency across conversational dyads with varying language alignment profiles

Kristin J. Van Engen; Melissa Baese-Berk; Rachel E. Baker; Arim Choi; Midam Kim; Ann R. Bradlow

This paper describes the development of the Wildcat Corpus of native- and foreign-accented English, a corpus containing scripted and spontaneous speech recordings from 24 native speakers of American English and 52 non-native speakers of English. The core element of this corpus is a set of spontaneous speech recordings, for which a new method of eliciting dialogue-based, laboratory-quality speech recordings was developed (the Diapix task). Dialogues between two native speakers of English, between two non-native speakers of English (with either shared or different L1s), and between one native and one non-native speaker of English are included and analyzed in terms of general measures of communicative efficiency. The overall finding was that pairs of native talkers were most efficient, followed by mixed native/non-native pairs and non-native pairs with shared L1. Non-native pairs with different L1s were least efficient. These results support the hypothesis that successful speech communication depends both on the alignment of talkers to the target language and on the alignment of talkers to one another in terms of native language background.


Journal of the Acoustical Society of America | 2007

The Wildcat corpus of native‐ and foreign‐accented English

Ann R. Bradlow; Rachel E. Baker; Arim Choi; Midam Kim; Kristin J. Van Engen

This paper describes the development of the Wildcat Corpus of native- and foreign-accented English, a corpus containing scripted and spontaneous speech recordings from 24 native speakers of American English and 52 non-native speakers of English. The core element of this corpus is a set of spontaneous speech recordings, for which a new method of eliciting dialogue-based, laboratory-quality speech recordings was developed (the Diapix task). Dialogues between two native speakers of English, between two non-native speakers of English (with either shared or different L1s), and between one native and one non-native speaker of English are included and analyzed in terms of general measures of communicative efficiency. The overall finding was that pairs of native talkers were most efficient, followed by mixed native/non-native pairs and non-native pairs with shared L1. Non-native pairs with different L1s were least efficient. These results support the hypothesis that successful speech communication depends both on the alignment of talkers to the target language and on the alignment of talkers to one another in terms of native language background.


Language and Cognitive Processes | 2012

Speech-in-speech recognition: A training study

Kristin J. Van Engen

This study aims to identify aspects of speech-in-noise recognition that are susceptible to training, focusing on whether listeners can learn to adapt to target talkers (“tune in”) and learn to better cope with various maskers (“tune out”) after short-term training. Listeners received training on English sentence recognition in speech-shaped noise (SSN), Mandarin babble, or English babble. Results from a speech-in-babble posttest showed evidence of both tuning in and tuning out: (1) listeners were able to take advantage of target talker familiarity; (2) training with babble was more effective than SSN training; and (3) after babble training, listeners improved most in coping with the babble in which they were trained. In general, the results show that processes related both to tuning in to speech targets and tuning out speech maskers can be improved with auditory training.


Journal of the Acoustical Society of America | 2007

A methodological note on signal‐to‐noise ratios in speech research

Kristin J. Van Engen

In speech‐in‐noise tests that utilize a range of signal‐to‐noise ratios (SNRs), SNR is usually manipulated either by changing the noise level relative to a constant signal level or by changing the signal level relative to a constant noise level. In the HINT, for example, noise level is held constant while target sentence level changes. Such manipulations entail differences in overall energy delivered to the ear across SNR conditions. It is possible, then, that overall energy changes are partially responsible for behavioral differences observed across SNRs. For example, [Van Engen and Bradlow (2007)] showed that English 2‐talker babble was more detrimental than Mandarin 2‐talker babble for English sentence recognition by native English listeners at difficult SNRs, where SNR was varied by raising the noise level with respect to the target sentence level. In the present study, it is shown that releveling the mixed signal and noise files to a single output level across SNRs yields similar results. Thus, the i...
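The energy confound and the releveling fix can be made concrete with a minimal Python sketch (NumPy only; the function names, the `hold` convention, and the target output level are illustrative assumptions, not taken from the study). Setting an SNR by scaling either component changes the mixture's overall energy, whereas applying a single gain to the already-mixed file equalizes output level without altering the SNR:

```python
import numpy as np

def rms(x):
    """Root-mean-square amplitude of a signal."""
    return np.sqrt(np.mean(np.square(x)))

def mix_at_snr(signal, noise, snr_db, hold="noise"):
    """Mix signal and noise at the requested SNR (dB).

    hold="noise":  noise level fixed, signal rescaled (as in the HINT).
    hold="signal": signal level fixed, noise rescaled.
    Either way, the overall energy of the mixture differs across SNRs.
    """
    ratio = 10.0 ** (snr_db / 20.0)  # desired rms(signal) / rms(noise)
    if hold == "noise":
        signal = signal * (ratio * rms(noise) / rms(signal))
    else:
        noise = noise * (rms(signal) / (ratio * rms(noise)))
    return signal + noise

def relevel(mixture, target_rms=0.05):
    """Rescale an already-mixed file to a single output level.

    One gain applied to the sum scales signal and noise together,
    so the SNR is preserved while overall energy is equated."""
    return mixture * (target_rms / rms(mixture))
```

Because `relevel` scales both components by the same factor, the SNR survives the releveling step, which is what lets output energy be equated across SNR conditions.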


Journal of the Acoustical Society of America | 2017

A relationship between processing speech in noise and dysarthric speech

Stephanie A. Borrie; Melissa Baese-Berk; Kristin J. Van Engen; Tessa Bent

There is substantial individual variability in understanding speech in adverse listening conditions. This study examined whether a relationship exists between processing speech in noise (environmental degradation) and dysarthric speech (source degradation), with regard to intelligibility performance and the use of metrical stress to segment the degraded speech signals. Ninety native speakers of American English transcribed speech in noise and dysarthric speech. For each type of listening adversity, transcriptions were analyzed for proportion of words correct and lexical segmentation errors indicative of stress cue utilization. Consistent with the hypotheses, intelligibility performance for speech in noise was correlated with intelligibility performance for dysarthric speech, suggesting similar cognitive-perceptual processing mechanisms may support both. The segmentation results also support this postulation. While stress-based segmentation was stronger for speech in noise relative to dysarthric speech, listeners utilized metrical stress to parse both types of listening adversity. In addition, reliance on stress cues for parsing speech in noise was correlated with reliance on stress cues for parsing dysarthric speech. Taken together, the findings demonstrate a preference to deploy the same cognitive-perceptual strategy in conditions where metrical stress offers a route to segmenting degraded speech.


Journal of the Acoustical Society of America | 2010

Speech‐in‐speech recognition: Phonetic distance and semantic content

Susanne Brouwer; Kristin J. Van Engen; Lauren Calandruccio; Ann R. Bradlow

Previous research has shown that monolingual English listeners receive a release from informational masking if the competing speech is foreign‐language versus native‐language noise [e.g., Van Engen & Bradlow (2007)]. This study examines whether speech‐in‐speech recognition varies even across typologically close target and noise languages (English versus Dutch) as well as across variation in the semantic content of the masker (meaningful versus semantically anomalous sentences) and the listener status (native versus nonnative listeners). The English and Dutch listeners’ task was to repeat target sentences in noise. The results for the Dutch listeners on Dutch targets showed a release from masking in English versus Dutch noise and in semantically anomalous versus meaningful sentences. Unexpectedly, the results for both listener groups on English targets only showed a release from masking for English anomalous noise relative to all other conditions. However, after normalization of the long term average spect...


Journal of the Acoustical Society of America | 2009

Informational masking in first‐ and second‐language speech recognition

Kristin J. Van Engen

Human speech recognition in noisy conditions is modulated by cognitive factors such as language background. For example, noise is more detrimental to non‐native listeners than native listeners (e.g., van Wijngaarden et al., 2002), and when noise is a speech signal, native‐language noise is more detrimental than foreign‐language noise for listeners attending to native‐language speech targets (e.g., Van Engen and Bradlow, 2007). It is not clear, however, whether this increased interference is primarily due to the native status of the noise or to the greater similarity between target and noise. To address this issue, English speech recognition in the presence of English and Mandarin babble was assessed for monolingual English listeners and L2 English listeners whose L1 is Mandarin. Results showed that intelligibility for both groups was lower in English versus Mandarin babble; that is, L2 listeners experienced more difficulty in same‐language noise versus native‐language noise. However, monolingual English l...


Journal of the Acoustical Society of America | 2017

Clear speech and lexical competition in younger and older adult listeners

Kristin J. Van Engen

This study investigated whether clear speech reduces the cognitive demands of lexical competition by crossing speaking style with lexical difficulty. Younger and older adults identified more words in clear versus conversational speech and more easy words than hard words. An initial analysis suggested that the effect of lexical difficulty was reduced in clear speech, but more detailed analyses within each age group showed this interaction was significant only for older adults. The results also showed that both groups improved over the course of the task and that clear speech was particularly helpful for individuals with poorer hearing: for younger adults, clear speech eliminated hearing-related differences that affected performance on conversational speech. For older adults, clear speech was generally more helpful to listeners with poorer hearing. These results suggest that clear speech affords perceptual benefits to all listeners and, for older adults, mitigates the cognitive challenge associated with identifying words with many phonological neighbors.


Journal of the Acoustical Society of America | 2017

Effects of talker intelligibility and noise on judgments of accentedness

Sarah Gittleman; Kristin J. Van Engen

The damaging effect of background noise on the intelligibility of foreign-accented speech has been well documented, but little is known about the effect of noise on listeners’ subjective judgments of accents. Noise adds distortion to speech, which may cause it to sound more “foreign.” On the other hand, noise may reduce perceived foreignness by masking cues to the accent. In this study, 40 native English speakers listened to 14 English-speaking native Mandarin speakers in four conditions: SNRs of -4 dB, 0 dB, and +4 dB, plus quiet. Participants judged each speaker on a scale from 1 (native-like) to 9 (foreign). The results showed a significant decrease in perceived accentedness as noise level increased. They also showed a significant interaction between noise and intelligibility: intelligibility (which had been measured for the same talkers in a previous study) had the greatest effect on perceived accentedness in quiet, and a reduced effect with increasing noise levels. These findings indicate that listeners’ de...


Journal of the Acoustical Society of America | 2017

Individual variation in the perception of different types of speech degradation

Drew J. McLaughlin; Melissa Baese-Berk; Tessa Bent; Stephanie A. Borrie; Kristin J. Van Engen

Both environmental noise and talker-related variation (e.g., accented speech) can create adverse listening conditions for speech communication. Individuals recruit additional cognitive, linguistic, or perceptual resources when faced with such challenges, and they vary in their ability to understand degraded speech. However, it is unclear whether listeners employ the same additional resources when encountering different types of challenging listening conditions. In the present study, we compare individuals’ abilities on a variety of skills (including vocabulary, selective attention, rhythm perception, and working memory) with transcription accuracy (i.e., intelligibility scores) of speech degraded by the addition of speech-shaped noise or multi-talker babble and/or talker variation (i.e., a non-native speaker). Initial analyses show that intelligibility scores across degradations of the same class (i.e., either environmental or talker-related) significantly correlate, but correlations of intelligibility score...
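As a rough illustration of the correlational analysis described above, here is a minimal Python sketch (the score vectors are fabricated for demonstration only and imply nothing about the study's actual data or effect sizes) computing the listener-wise Pearson correlation between intelligibility scores for two degradations of the same class:

```python
import numpy as np
from scipy import stats

# Hypothetical per-listener proportion-correct transcription scores for two
# environmental degradations: speech-shaped noise and multi-talker babble.
rng = np.random.default_rng(1)
ability = rng.normal(0.70, 0.10, size=30)               # shared listener ability
ssn = np.clip(ability + rng.normal(0, 0.05, 30), 0, 1)  # speech-shaped noise
babble = np.clip(ability + rng.normal(0, 0.05, 30), 0, 1)

# Degradations of the same class are expected to correlate across listeners.
r, p = stats.pearsonr(ssn, babble)
print(f"SSN vs. babble intelligibility: r = {r:.2f}, p = {p:.3g}")
```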

Collaboration


Dive into Kristin J. Van Engen's collaborations.

Top Co-Authors

Midam Kim
Northwestern University

Arim Choi
Northwestern University

Jonathan E. Peelle
Washington University in St. Louis

Rajka Smiljanic
University of Texas at Austin