Susannah V. Levi
New York University
Publications
Featured research published by Susannah V. Levi.
Journal of the International Phonetic Association | 2005
Susannah V. Levi
Phonologists have long discussed the properties of ‘stress’ in Turkish, though some authors suggest that Turkish is actually a pitch-accent language. As a first step in determining whether Turkish should be grouped with pitch-accent languages such as Basque and Japanese or with stress-accent languages such as English, this experimental study provides a detailed phonetic examination of word-level accent in Turkish. Seven female native speakers of Turkish were recorded producing three repetitions of 40 target words. The results show that there are significant differences in duration, peak amplitude, and peak F0 between words with final vs. non-final lexical accent. The magnitude of the duration differences, however, is not likely to be sufficiently perceptible to be a reliable cue to accent placement. The F0 peaks were dramatically different between lexically accented and unaccented syllables. A discriminant analysis confirms that F0 is the most robust cue, followed by intensity and then duration. The current study also provides a method for determining lexical accent placement in words where accent placement has been disputed: a marked drop in F0 signals that the syllable preceding the drop contains the lexical accent.
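The discriminant analysis described above combines the three acoustic cues (F0 peak, peak amplitude, duration) into a single classifier of accent placement. As a minimal sketch of the idea, and not the study's actual model or data, the toy script below fits a Gaussian linear discriminant with independent features and labels a word as having final or non-final accent; all measurement values are invented for illustration.

```python
import math

# Hypothetical toy measurements (F0 peak in Hz, peak amplitude in dB,
# vowel duration in ms) for words with final vs. non-final accent.
# These values are illustrative only, not data from the study.
final = [(220, 72, 95), (215, 70, 90), (225, 73, 98)]
nonfinal = [(180, 66, 80), (175, 65, 78), (185, 67, 82)]

def mean_sd(samples):
    """Per-feature mean and sample standard deviation for one class."""
    n = len(samples)
    means = [sum(x[i] for x in samples) / n for i in range(3)]
    sds = [math.sqrt(sum((x[i] - means[i]) ** 2 for x in samples) / (n - 1))
           for i in range(3)]
    return means, sds

def log_likelihood(x, means, sds):
    """Gaussian log-likelihood, assuming independent features."""
    return sum(-0.5 * ((x[i] - means[i]) / sds[i]) ** 2 - math.log(sds[i])
               for i in range(3))

def classify(x):
    """Assign an accent class by comparing class log-likelihoods."""
    mf, sf = mean_sd(final)
    mn, sn = mean_sd(nonfinal)
    return ("final" if log_likelihood(x, mf, sf) > log_likelihood(x, mn, sn)
            else "non-final")

print(classify((218, 71, 93)))  # near the final-accent cluster: prints "final"
```

A full linear discriminant analysis would also model correlations between cues via a pooled covariance matrix; the independence assumption here simply keeps the sketch short.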
Journal of the Acoustical Society of America | 2006
Cynthia G. Clopper; Susannah V. Levi; David B. Pisoni
Previous research on the perception of dialect variation has measured the perceptual similarity of talkers based on regional dialect using only indirect methods. In the present study, a paired comparison similarity ratings task was used to obtain direct measures of perceptual similarity. Naive listeners were asked to make explicit judgments about the similarity of a set of talkers based on regional dialect. The talkers represented four regional varieties of American English and both genders. Results revealed an additive effect of gender and dialect on mean similarity ratings and two primary dimensions of perceptual dialect similarity: geography (northern versus southern varieties) and dialect markedness (many versus few characteristic properties). The present findings are consistent with earlier research on the perception of dialect variation, as well as recent speech perception studies which demonstrate the integral role of talker gender in speech perception.
Attention, Perception, & Psychophysics | 2010
Rebecca E. Ronquest; Susannah V. Levi; David B. Pisoni
Our goal in the present study was to examine how observers identify English and Spanish from visual-only displays of speech. First, we replicated the recent findings of Soto-Faraco et al. (2007) with Spanish and English bilingual and monolingual observers using different languages and a different experimental paradigm (identification). We found that prior linguistic experience affected response bias but not sensitivity (Experiment 1). In two additional experiments, we investigated the visual cues that observers use to complete the language-identification task. The results of Experiment 2 indicate that some lexical information is available in the visual signal but that it is limited. Acoustic analyses confirmed that our Spanish and English stimuli differed acoustically with respect to linguistic rhythmic categories. In Experiment 3, we tested whether this rhythmic difference could be used by observers to identify the language when the visual stimuli are temporally reversed, thereby eliminating lexical information but retaining rhythmic differences. The participants performed above chance even in the backward condition, suggesting that the rhythmic differences between the two languages may aid language identification in visual-only speech signals. The results of Experiments 3A and 3B also confirm previous findings that increased stimulus length facilitates language identification. Taken together, the results of these three experiments replicate earlier findings and also show that prior linguistic experience, lexical information, rhythmic structure, and utterance length influence visual-only language identification.
Phonetica | 2015
Susannah V. Levi
The current study explores the question of how an auditory category is learned by having school-age listeners learn to categorize speech not in terms of linguistic categories, but instead in terms of talker categories (i.e., who is talking). Findings from visual-category learning indicate that working memory skills affect learning, but the literature is equivocal: sometimes better working memory is advantageous, and sometimes not. The current study examined the role of different components of working memory to test which component skills benefit, and which hinder, learning talker categories. Results revealed that the short-term storage component positively predicted learning, but that the central executive and episodic buffer negatively predicted learning. As with visual categories, better working memory is not always an advantage.
Language and Speech | 2015
Susannah V. Levi
The current study examined whether fine-grained phonetic detail (voice onset time (VOT)) of one segment (/p/ or /k/) generalizes to a different segment (/t/) within the same natural class. Two primes were constructed to exploit the natural variation of VOT: a velar stop followed by a high vowel (keen) resulting in a naturally long VOT and a labial stop followed by a low vowel (pan) resulting in a naturally shorter VOT. Two experiments were conducted, one in which the speakers produced both the prime and the target, and a second in which the speakers heard the primes and then produced the targets. In Experiment 1, VOTs for initial /t/ were shorter following pan than following keen. In Experiment 2 where participants heard the primes, priming was found only when the primes had unexpected relative VOT values (short for keen and long for pan). These results provide evidence for cross-segmental generalization of phonetic detail and also suggest that natural, within-category variability is encoded during language processing.
Bilingualism: Language and Cognition | 2017
Susannah V. Levi
A bilingual advantage has been found in both cognitive and social tasks. In the current study, we examine whether there is a bilingual advantage in how children process information about who is talking (talker-voice information). Younger and older groups of monolingual and bilingual children completed the following talker-voice tasks with bilingual speakers: a discrimination task in English and German (an unfamiliar language), and a talker-voice learning task in which they learned to identify the voices of three unfamiliar speakers in English. Results revealed effects of age and bilingual status. Across the tasks, older children performed better than younger children and bilingual children performed better than monolingual children. Improved talker-voice processing by the bilingual children suggests that a bilingual advantage exists in a social aspect of speech perception, where the focus is not on processing the linguistic information in the signal, but instead on processing information about who is talking.
Journal of the Acoustical Society of America | 2008
Susannah V. Levi; Stephen J. Winters; David B. Pisoni
Previous research has shown that familiar talkers are more intelligible than unfamiliar talkers. In the current study, we tested the source of this familiar talker advantage by manipulating the type of talker information available to listeners. Two groups of native English listeners were familiarized with the voices of five German‐English bilingual talkers; one group learned the voices from German stimuli and the other from English stimuli. Thus, English‐trained listeners had access to both language‐independent and English‐specific talker information, while German‐trained listeners had access to language‐independent and German‐specific talker information. After three days of voice learning, all listeners performed a word recognition task in English. Consistent with previous findings, English‐trained listeners found the speech of familiar talkers to be more intelligible than unfamiliar talkers, as measured by whole words and phonemes correct. In contrast, German‐trained listeners showed no familiar talker ...
Wiley Interdisciplinary Reviews: Cognitive Science | 2018
Susannah V. Levi
The Language Familiarity Effect (LFE), where listeners are better at processing talker-voice information in their native language than in an unfamiliar language, has received renewed attention in the past 10 years. Numerous studies have sought to probe the underlying causes of this advantage by cleverly manipulating aspects of the stimuli (using phonologically related languages, backwards speech, nonwords) and by examining individual differences across listeners (testing reading ability and pitch perception). Most of these studies find evidence for the importance of phonological information or phonological processing as a supporting mechanism for the LFE. What has not been carefully examined, however, is how other methodological considerations such as task effects and stimulus length can change performance on talker-voice processing tasks. In this review, I provide an overview of the literature on the LFE and examine how methodological decisions affect the presence or absence of the LFE. This article is categorized under: Linguistics > Language in Mind and Brain; Psychology > Language.
Journal of the Acoustical Society of America | 2018
Grechen Go; Heather Campbell; Susannah V. Levi
The current study tests production accuracy of a non-native vowel contrast following a modified perceptual learning paradigm. Previous research has found that adults can learn non-native sound categories with sensitivity to distributional properties of their input. In the current study, 34 native-English adults heard stimuli from an /o/-/oe/ continuum. Half of the participants heard stimuli drawn from a bimodal distribution and half from a unimodal distribution. To support learning, we incorporated active learning with feedback, lexical support in the form of two images, and overnight consolidation. Production of this contrast was assessed through a repetition task at baseline and at the end of the experiment. Production accuracy was measured as the Euclidean distance between /o/ and /oe/ at baseline and post-training. Preliminary analyses suggest that the distance between these two vowels increased for both groups, indicating that listeners in both conditions were able to transfer perceptual learning to production. These results suggest a way to mitigate the disadvantages previously found for participants in the unimodal condition, by incorporating active engagement with the target stimuli using lexical support, accuracy feedback, and overnight consolidation.
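The Euclidean-distance measure used as the production outcome can be illustrated concretely. Assuming the two vowels are compared in a two-dimensional F1-F2 formant space (the study does not specify the dimensions here, and all formant values below are invented for illustration), a larger post-training distance than baseline distance indicates transfer of perceptual learning to production:

```python
from math import dist  # Euclidean distance, Python 3.8+

# Hypothetical mean formant values (F1, F2, in Hz) for each vowel;
# these numbers are invented for illustration, not taken from the study.
baseline_o, baseline_oe = (480, 900), (470, 1050)
post_o, post_oe = (480, 880), (440, 1450)

def vowel_distance(v1, v2):
    """Euclidean distance between two vowels in F1-F2 space."""
    return dist(v1, v2)

before = vowel_distance(baseline_o, baseline_oe)
after = vowel_distance(post_o, post_oe)
print(round(before, 1), round(after, 1))  # separation grows after training
```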
Journal of the Acoustical Society of America | 2018
Ashley Quinto; Kylee Kim; Susannah V. Levi
The current study aimed to replicate a recent study by Narayan, Mak, & Bialystok (2016) that found effects of top-down linguistic information on a talker discrimination task by comparing four conditions: compounds (day-dream), rhymes (day-bay), reverse compounds (dream-day), and unrelated words (day-bee). The original study used both within- and across-gender pairs, and same and different trials were analyzed separately, obscuring possible response biases. Narayan et al. found graded performance across the four conditions, but some results were likely influenced by the use of across-gender trials in the different-talker condition. In the current study, only female speakers were used and results were analyzed with signal detection theory (sensitivity and bias measures). Results revealed that participants were faster to respond to rhyming pairs than to the three other conditions. In addition, participants were significantly more sensitive to talker differences in rhyming pairs than in unrelated pairs, but no other conditions differed. Participants were more biased to respond “same” in the rhyme and compound conditions than in the unrelated condition. These results demonstrate a partial replication of the Narayan, Mak, & Bialystok (2016) findings, suggesting an interaction between linguistic and talker information during speech perception.
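The signal-detection measures mentioned above have standard definitions: sensitivity d′ = z(H) − z(F) and criterion c = −(z(H) + z(F)) / 2, where H and F are the hit and false-alarm rates and z is the inverse standard-normal CDF. The sketch below computes both from illustrative rates (not values from the study):

```python
from statistics import NormalDist

def dprime_and_criterion(hit_rate, fa_rate):
    """Signal detection measures for a same/different task:
    sensitivity d' = z(H) - z(F) and criterion c = -(z(H) + z(F)) / 2.
    A positive c indicates a conservative response bias."""
    z = NormalDist().inv_cdf  # inverse standard-normal CDF
    zh, zf = z(hit_rate), z(fa_rate)
    return zh - zf, -(zh + zf) / 2

# Illustrative rates only: 80% hits, 20% false alarms.
d, c = dprime_and_criterion(0.80, 0.20)
print(round(d, 2), round(c, 2))  # → 1.68 0.0
```

In practice, hit and false-alarm rates of exactly 0 or 1 are first adjusted (e.g., with a log-linear correction), since z is undefined at those extremes.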