Tessa Bent
Indiana University
Publications
Featured research published by Tessa Bent.
Journal of the Acoustical Society of America | 2002
Ann R. Bradlow; Tessa Bent
Previous work has established that naturally produced clear speech is more intelligible than conversational speech for adult hearing-impaired listeners and normal-hearing listeners under degraded listening conditions. The major goal of the present study was to investigate the extent to which naturally produced clear speech is an effective intelligibility enhancement strategy for non-native listeners. Thirty-two non-native and 32 native listeners were presented with naturally produced English sentences. Factors that varied were speaking style (conversational versus clear), signal-to-noise ratio (-4 versus -8 dB) and talker (one male versus one female). Results showed that while native listeners derived a substantial benefit from naturally produced clear speech (an improvement of about 16 rau units on a keyword-correct count), non-native listeners exhibited only a small clear speech effect (an improvement of only 5 rau units). This relatively small clear speech effect for non-native listeners is interpreted as a consequence of the fact that clear speech is essentially native-listener oriented, and therefore is only beneficial to listeners with extensive experience with the sound structure of the target language.
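The "rau units" above are rationalized arcsine units, a standard transform that places keyword-correct scores on a more nearly interval scale before conditions are compared. A minimal Python sketch of that transform follows; the keyword counts in the example are hypothetical and are not taken from the study.

```python
import math

def rau(correct, total):
    """Rationalized arcsine transform: maps a keyword-correct count
    onto a roughly interval scale running from about -23 to 123."""
    theta = math.asin(math.sqrt(correct / (total + 1))) \
          + math.asin(math.sqrt((correct + 1) / (total + 1)))
    return (146.0 / math.pi) * theta - 23.0

# Hypothetical scores out of 100 keywords for one listener group.
conversational = rau(55, 100)   # about 54.6 rau
clear = rau(71, 100)            # about 69.9 rau
print(round(clear - conversational, 1))   # about 15.3, a clear-speech benefit on the rau scale
```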
Journal of Experimental Psychology: Human Perception and Performance | 2006
Tessa Bent; Ann R. Bradlow; Beverly A. Wright
In the present experiment, the authors tested Mandarin and English listeners on a range of auditory tasks to investigate whether long-term linguistic experience influences the cognitive processing of nonspeech sounds. As expected, Mandarin listeners identified Mandarin tones significantly more accurately than English listeners; however, performance did not differ across the listener groups on a pitch discrimination task requiring fine-grained discrimination of simple nonspeech sounds. The crucial finding was that cross-language differences emerged on a nonspeech pitch contour identification task: The Mandarin listeners more often misidentified flat and falling pitch contours than the English listeners in a manner that could be related to specific features of the sound structure of Mandarin, which suggests that the effect of linguistic experience extends to nonspeech processing under certain stimulus and task conditions.
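Nonspeech pitch-contour stimuli of the kind described (flat versus falling contours) can be synthesized by sweeping the instantaneous frequency of a pure tone. The sketch below illustrates the general technique; the frequencies, duration, and sample rate are illustrative assumptions, not the study's parameters.

```python
import numpy as np

def pitch_contour_tone(f_start, f_end, dur=0.5, sr=22050, amp=0.3):
    """Synthesize a pure tone whose frequency moves linearly from
    f_start to f_end Hz (a flat contour when f_start == f_end)."""
    t = np.arange(int(dur * sr)) / sr
    inst_freq = np.linspace(f_start, f_end, t.size)    # Hz at each sample
    phase = 2 * np.pi * np.cumsum(inst_freq) / sr      # integrate frequency to get phase
    return amp * np.sin(phase)

flat = pitch_contour_tone(200, 200)      # level pitch contour
falling = pitch_contour_tone(250, 150)   # falling pitch contour
```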
Journal of the Acoustical Society of America | 2004
Janet B. Pierrehumbert; Tessa Bent; Benjamin Munson; Ann R. Bradlow; J. Michael Bailey
Vowel production in gay, lesbian, bisexual (GLB), and heterosexual speakers was examined. Differences in the acoustic characteristics of vowels were found as a function of sexual orientation. Lesbian and bisexual women produced less fronted /u/ and /ɑ/ than heterosexual women. Gay men produced a more expanded vowel space than heterosexual men. However, the vowels of GLB speakers were not generally shifted toward vowel patterns typical of the opposite sex. These results are inconsistent with the conjecture that innate biological factors have a broadly feminizing influence on the speech of gay men and a broadly masculinizing influence on the speech of lesbian/bisexual women. They are consistent with the idea that innate biological factors influence GLB speech patterns indirectly by causing selective adoption of certain speech patterns characteristic of the opposite sex.
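One common way to quantify how "expanded" a vowel space is takes each talker's mean F1/F2 values for the corner vowels and computes the area of the resulting polygon (fronting of /u/ and /ɑ/ surfaces as higher F2). The sketch below applies the shoelace formula to hypothetical formant values; it illustrates the measure, not the study's analysis.

```python
def polygon_area(points):
    """Shoelace formula: area of a polygon from (F2, F1) vertices in Hz."""
    area = 0.0
    n = len(points)
    for i in range(n):
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % n]
        area += x1 * y2 - x2 * y1
    return abs(area) / 2.0

# Hypothetical mean (F2, F1) values in Hz for the corner vowels;
# a more expanded vowel space yields a larger quadrilateral area.
talker = {"i": (2300, 300), "ae": (1700, 750), "a": (1200, 800), "u": (900, 350)}
print(polygon_area([talker[v] for v in ("i", "ae", "a", "u")]))
```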
Linguistics | 2005
Saundra K. Wright; Jennifer Hay; Tessa Bent
In pairs of names, male names often precede female names (e.g. Romeo and Juliet). We investigate this bias and argue that preferences for name ordering are constrained by a combination of gender, phonology, and frequency. First, various phonological constraints condition the optimal ordering of binomial pairs, and findings from our corpus investigations show that male names contain those features which lend them to be preferred in first position, while female names contain features which lend them to be preferred in second position. Thus, phonology predicts that male names are more likely to precede female names than follow them. Results from our name-ordering experiments provide further evidence that this “gendered phonology” plays a role in determining ordering preferences but also that an independent gender bias exists: when phonology is controlled (i.e. when two names are “phonologically equal”), subjects prefer male names first. Finally, frequency leads to another tendency to place male names first. Further investigation shows that frequent names are ordered before less frequent names and that male names are overall more “frequent” than female names. Together, all of these factors conspire toward an overwhelming tendency to place male names before female names.
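At its simplest, a corpus investigation of binomial name ordering reduces to counting how often "X and Y" patterns place a male name first. The sketch below shows that counting step with hypothetical name sets and example text; the study's corpora and name inventories are not reproduced here.

```python
import re
from collections import Counter

MALE = {"Romeo", "John", "David"}      # illustrative name sets, not the study's
FEMALE = {"Juliet", "Mary", "Susan"}

def ordering_counts(text):
    """Count how often a male name precedes vs. follows a female name
    in 'X and Y' binomials."""
    counts = Counter()
    for first, second in re.findall(r"\b([A-Z][a-z]+) and ([A-Z][a-z]+)\b", text):
        if first in MALE and second in FEMALE:
            counts["male_first"] += 1
        elif first in FEMALE and second in MALE:
            counts["female_first"] += 1
    return counts

print(ordering_counts("Romeo and Juliet met; later Mary and John arrived."))
# Counter({'male_first': 1, 'female_first': 1})
```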
Phonetica | 2008
Tessa Bent; Ann R. Bradlow; Bruce L. Smith
Two experiments examined production and perception of English temporal patterns by native and non-native participants. Experiment 1 indicated that native and non-native (L1 = Chinese) talkers differed significantly in their production of one English duration pattern (i.e., vowel lengthening before voiced versus voiceless consonants) but not another (i.e., tense versus lax vowels). Experiment 2 tested native and non-native listener identification of words that differed in voicing of the final consonant by the native and non-native talkers whose productions were substantially different in Experiment 1. Results indicated that differences in native and non-native intelligibility may be partially explained by temporal pattern differences in vowel duration, although other cues such as presence of stop releases and burst duration may also contribute. Additionally, speech intelligibility depends on shared phonetic knowledge between talkers and listeners rather than only on accuracy relative to idealized production norms.
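The pre-voicing vowel-lengthening pattern from Experiment 1 is typically summarized as a ratio of vowel durations before voiced versus voiceless final consonants. A minimal sketch with hypothetical duration measurements follows; real values would come from acoustic segmentation of the recordings.

```python
from statistics import mean

# Hypothetical vowel durations (seconds) measured before voiced vs. voiceless
# final consonants (e.g., "bead" vs. "beat").
before_voiced = [0.210, 0.195, 0.230, 0.205]
before_voiceless = [0.150, 0.160, 0.145, 0.155]

ratio = mean(before_voiced) / mean(before_voiceless)
print(f"voiced/voiceless duration ratio: {ratio:.2f}")  # > 1 indicates pre-voiced lengthening
```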
Journal of the Acoustical Society of America | 2009
Tessa Bent; Adam Buchwald; David B. Pisoni
Talker intelligibility and perceptual adaptation under cochlear implant (CI)-simulation and speech in multi-talker babble were compared. The stimuli consisted of 100 sentences produced by 20 native English talkers. The sentences were processed to simulate listening with an eight-channel CI or were mixed with multi-talker babble. Stimuli were presented to 400 listeners in a sentence transcription task (200 listeners in each condition). Perceptual adaptation was measured for each talker by comparing intelligibility in the first 20 sentences of the experiment to intelligibility in the last 20 sentences. Perceptual adaptation patterns were also compared across the two degradation conditions by comparing performance in blocks of ten sentences. The most intelligible talkers under CI-simulation also tended to be the most intelligible talkers in multi-talker babble. Furthermore, listeners demonstrated a greater degree of perceptual adaptation in the CI-simulation condition compared to the multi-talker babble condition although the extent of adaptation varied widely across talkers. Listeners reached asymptote later in the experiment in the CI-simulation condition compared with the multi-talker babble condition. Overall, these two forms of degradation did not differ in their effect on talker intelligibility, although they did result in differences in the amount and time-course of perceptual adaptation.
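Mixing sentences with multi-talker babble at a fixed signal-to-noise ratio is usually done by scaling the babble relative to the RMS level of the speech. A minimal sketch of that scaling follows; the function name and the RMS-based level definition are assumptions for illustration, not the study's exact procedure.

```python
import numpy as np

def mix_at_snr(speech, babble, snr_db):
    """Scale `babble` so the RMS speech-to-babble level difference equals
    `snr_db` dB, then add it to the speech. Assumes waveforms at the same
    sample rate; truncates the babble to the speech length."""
    speech = np.asarray(speech, dtype=float)
    babble = np.asarray(babble, dtype=float)[: len(speech)]
    rms = lambda s: np.sqrt(np.mean(s ** 2))
    gain = rms(speech) / (rms(babble) * 10 ** (snr_db / 20.0))
    return speech + gain * babble

# Usage (hypothetical arrays): mixed = mix_at_snr(sentence, babble, snr_db=0)
```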
Journal of the Acoustical Society of America | 2008
Jeremy L. Loebach; Tessa Bent; David B. Pisoni
A listener's ability to utilize indexical information in the speech signal can enhance their performance on a variety of speech perception tasks. It is unclear, however, whether such information plays a similar role for spectrally reduced speech signals, such as those experienced by individuals with cochlear implants. The present study compared the effects of training on linguistic and indexical tasks when adapting to cochlear implant simulations. Listening to sentences processed with an eight-channel sinewave vocoder, three separate groups of subjects were trained on a transcription task (transcription), a talker identification task (talker ID), or a gender identification task (gender ID). Pre- to posttest comparisons demonstrated that training produced significant improvement for all groups. Moreover, subjects from the talker ID and transcription training groups performed similarly at posttest and generalization, and significantly better than the subjects from the gender ID training group. These results suggest that training on an indexical task that requires high levels of controlled attention can provide equivalent benefits to training on a linguistic task. When listeners selectively focus their attention on the extralinguistic information in the speech signal, they still extract linguistic information; the degree to which they do so, however, appears to be task dependent.
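An eight-channel sinewave vocoder of the general kind described splits the signal into band-pass channels, extracts each channel's amplitude envelope, and uses the envelopes to modulate sine carriers at the channel center frequencies. The sketch below illustrates that pipeline; the filter orders, band edges, and envelope cutoff are illustrative choices, not the study's exact parameters.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def sinewave_vocode(x, sr, n_channels=8, lo=100.0, hi=7000.0, env_cut=50.0):
    """Minimal sinewave-vocoder sketch (assumes sr of at least ~16 kHz):
    log-spaced band-pass analysis, envelope extraction by rectification and
    low-pass filtering, and sine carriers at the band center frequencies."""
    x = np.asarray(x, dtype=float)
    edges = np.geomspace(lo, hi, n_channels + 1)       # log-spaced band edges
    out = np.zeros_like(x)
    t = np.arange(len(x)) / sr
    b_env, a_env = butter(2, env_cut, btype="low", fs=sr)
    for k in range(n_channels):
        b, a = butter(4, [edges[k], edges[k + 1]], btype="bandpass", fs=sr)
        band = filtfilt(b, a, x)
        env = filtfilt(b_env, a_env, np.abs(band))     # rectify, then smooth
        env = np.maximum(env, 0.0)
        fc = np.sqrt(edges[k] * edges[k + 1])          # geometric center frequency
        out += env * np.sin(2 * np.pi * fc * t)
    return out
```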
Journal of Child Language | 2014
Tessa Bent
The acoustic-phonetic realizations of words can vary dramatically depending on a variety of within- and across-talker characteristics such as regional dialect, native language, age, and gender. Robust word learning requires that children are able to recognize words amidst this substantial variability. In the current study, perception of foreign-accented words was assessed in four- to seven-year-old children to test how one form of variability influences word recognition in children. Results demonstrated that children had less accurate word recognition than adults for both native- and foreign-accented words. Both adults and children were less accurate at identifying foreign-accented words compared to native-accented words, with children and adults showing similar decrements. For children, age and lexicon size contributed to accurate word recognition.
Journal of the Acoustical Society of America | 2013
Tessa Bent; Rachael Frush Holt
In spoken word identification and memory tasks, stimulus variability from numerous sources impairs performance. In the current study, the influence of foreign-accent variability on spoken word identification was evaluated in two experiments. Experiment 1 used a between-subjects design to test word identification in noise in single-talker and two multiple-talker conditions: multiple talkers with the same accent and multiple talkers with different accents. Identification performance was highest in the single-talker condition, but there was no difference between the single-accent and multiple-accent conditions. Experiment 2 further explored word recognition for multiple talkers in single-accent versus multiple-accent conditions using a mixed design. A detriment to word recognition was observed in the multiple-accent condition compared to the single-accent condition, but the effect differed across the language backgrounds tested. These results demonstrate that the processing of foreign-accent variation may influence word recognition in ways similar to other sources of variability (e.g., speaking rate or style) in that the inclusion of multiple foreign accents can result in a small but significant performance decrement beyond the multiple-talker effect.
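Word identification in designs like this is typically scored as the proportion of exact word matches per condition. A minimal scoring sketch follows; the condition labels and trials are hypothetical and serve only to illustrate the single-accent versus multiple-accent comparison.

```python
from collections import defaultdict

def accuracy_by_condition(trials):
    """trials: iterable of (condition, target_word, response_word).
    Returns the proportion of exact word matches per condition."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for condition, target, response in trials:
        total[condition] += 1
        correct[condition] += int(response.strip().lower() == target.lower())
    return {c: correct[c] / total[c] for c in total}

# Hypothetical trials illustrating a small multiple-accent decrement.
trials = [
    ("single_accent", "boat", "boat"), ("single_accent", "ship", "ship"),
    ("multiple_accent", "boat", "vote"), ("multiple_accent", "ship", "ship"),
]
print(accuracy_by_condition(trials))   # {'single_accent': 1.0, 'multiple_accent': 0.5}
```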
Journal of Phonetics | 2016
Tessa Bent; Eriko Atagi; Amal Akbik; Emma C. Bonifield
Talkers' regions of origin and native languages significantly shape their speech production patterns. Previous results suggest that listeners are highly sensitive to whether a talker is a native or nonnative speaker of the language. Listeners also have some ability to categorize or classify talkers by regional dialect or nonnative accent. However, most previous studies have included variability in only one of these categories (many nonnative accents or many regional dialects). The present study simultaneously examined listeners' perceptual organization of regional dialects and nonnative accents. Talkers representing six United States regional dialects, six international native dialects, and twelve nonnative accents were included in two tasks: an auditory free classification task, in which listeners grouped talkers based on perceived region of origin, and a ladder task, in which listeners arranged talkers based on their perceived distance from standard American English. Listeners were sensitive to the distinction between native and nonnative accents, even when presented with a very wide range of dialects and accents. Further, subgroups within the native and nonnative clusters in the free classification task suggested several organizing factors, including perceived distance from the local standard, specific acoustic-phonetic talker characteristics, and speaking rate.
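Free classification data of this kind are often analyzed by building a talker-by-talker co-occurrence matrix (how often listeners placed two talkers in the same group) and then clustering it. The sketch below shows one such analysis using scipy's hierarchical clustering; the listener groupings are hypothetical, and this is a generic analysis illustration rather than the study's own method.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def cooccurrence_matrix(classifications, n_talkers):
    """classifications: one entry per listener, each a list of groups,
    each group a list of talker indices. Returns the proportion of
    listeners who placed each talker pair in the same group."""
    counts = np.zeros((n_talkers, n_talkers))
    for grouping in classifications:
        for group in grouping:
            for i in group:
                for j in group:
                    counts[i, j] += 1
    return counts / len(classifications)

# Hypothetical data: two listeners, four talkers (0-1 native, 2-3 nonnative).
data = [
    [[0, 1], [2, 3]],
    [[0, 1, 2], [3]],
]
sim = cooccurrence_matrix(data, n_talkers=4)
dist = 1.0 - sim                          # similarity -> distance
iu = np.triu_indices(4, k=1)              # condensed upper-triangle distances
clusters = fcluster(linkage(dist[iu], method="average"), t=2, criterion="maxclust")
print(clusters)                           # cluster labels; talkers 0 and 1 share a cluster
```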