Publication


Featured research published by Melissa Baese-Berk.


Language and Cognitive Processes | 2009

Mechanisms of interaction in speech production

Melissa Baese-Berk; Matthew Goldrick

Many theories predict the presence of interactive effects involving information represented by distinct cognitive processes in speech production. There is considerably less agreement regarding the precise cognitive mechanisms that underlie these interactive effects. For example, are they driven by purely production-internal mechanisms (e.g., Dell, 1986) or do they reflect the influence of perceptual monitoring mechanisms on production processes (e.g., Roelofs, 2004)? Acoustic analyses reveal the phonetic realisation of words is influenced by their word-specific properties – supporting the presence of interaction between lexical-level and phonetic information in speech production. A second experiment examines what mechanisms are responsible for this interactive effect. The results suggest the effect occurs online and is not purely driven by listener modelling. These findings are consistent with the presence of an interactive mechanism that is online and internal to the production system.


Language and Speech | 2010

The Wildcat Corpus of native- and foreign-accented English: Communicative efficiency across conversational dyads with varying language alignment profiles

Kristin J. Van Engen; Melissa Baese-Berk; Rachel E. Baker; Arim Choi; Midam Kim; Ann R. Bradlow

This paper describes the development of the Wildcat Corpus of native- and foreign-accented English, a corpus containing scripted and spontaneous speech recordings from 24 native speakers of American English and 52 non-native speakers of English. The core element of this corpus is a set of spontaneous speech recordings, for which a new method of eliciting dialogue-based, laboratory-quality speech recordings was developed (the Diapix task). Dialogues between two native speakers of English, between two non-native speakers of English (with either shared or different L1s), and between one native and one non-native speaker of English are included and analyzed in terms of general measures of communicative efficiency. The overall finding was that pairs of native talkers were most efficient, followed by mixed native/non-native pairs and non-native pairs with shared L1. Non-native pairs with different L1s were least efficient. These results support the hypothesis that successful speech communication depends both on the alignment of talkers to the target language and on the alignment of talkers to one another in terms of native language background.
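
As one way to make the "communicative efficiency" measure concrete, here is a minimal Python sketch under assumed definitions: it scores each Diapix dialogue by the number of picture differences the pair finds per minute of task time. The Dialogue fields, pair labels, and numbers are hypothetical illustrations, not the Wildcat Corpus annotation scheme.

# Minimal sketch of a Diapix-style communicative-efficiency measure.
# All field names and values below are hypothetical illustrations.
from dataclasses import dataclass

@dataclass
class Dialogue:
    pair_type: str          # e.g. "native-native", "mixed", "nonnative-sharedL1"
    differences_found: int  # picture differences located by the pair
    duration_s: float       # total task time in seconds

def efficiency(d: Dialogue) -> float:
    """Differences found per minute of dialogue (higher = more efficient)."""
    return d.differences_found / (d.duration_s / 60.0)

dialogues = [
    Dialogue("native-native", 10, 420.0),
    Dialogue("mixed", 10, 560.0),
    Dialogue("nonnative-differentL1", 9, 780.0),
]
for d in dialogues:
    print(f"{d.pair_type}: {efficiency(d):.2f} differences/min")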


Journal of the Acoustical Society of America | 2013

Accent-independent adaptation to foreign-accented speech

Melissa Baese-Berk; Ann R. Bradlow; Beverly A. Wright

Foreign-accented speech can be difficult to understand but listeners can adapt to novel talkers and accents with appropriate experience. Previous studies have demonstrated talker-independent but accent-dependent learning after training on multiple talkers from a single language background. Here, listeners instead were exposed to talkers from five language backgrounds during training. After training, listeners generalized their learning to novel talkers from language backgrounds both included and not included in the training set. These findings suggest that generalization of foreign-accent adaptation is the result of exposure to systematic variability in accented speech that is similar across talkers from multiple language backgrounds.


Journal of Cognitive Neuroscience | 2011

Phonological neighborhood effects in spoken word production: An fMRI study

Dasun Peramunage; Sheila E. Blumstein; Emily B. Myers; Matthew Goldrick; Melissa Baese-Berk

The current study examined the neural systems underlying lexically conditioned phonetic variation in spoken word production. Participants were asked to read aloud singly presented words, which either had a voiced minimal pair (MP) neighbor (e.g., cape) or lacked a minimal pair (NMP) neighbor (e.g., cake). The voiced neighbor never appeared in the stimulus set. Behavioral results showed longer voice-onset time for MP target words, replicating earlier behavioral results [Baese-Berk, M., & Goldrick, M. Mechanisms of interaction in speech production. Language and Cognitive Processes, 24, 527–554, 2009]. fMRI results revealed reduced activation for MP words compared to NMP words in a network including left posterior superior temporal gyrus, the supramarginal gyrus, inferior frontal gyrus, and precentral gyrus. These findings support cascade models of spoken word production and show that neural activation at the lexical level modulates activation in those brain regions involved in lexical selection, phonological planning, and, ultimately, motor plans for production. The facilitatory effects for words with MP neighbors suggest that competition effects reflect the overlap inherent in the phonological representation of the target word and its MP neighbor.
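
A minimal sketch of the behavioral voice-onset-time comparison the study replicates: VOT for words with a voiced minimal-pair neighbor (MP) versus words without one (NMP), compared with an independent-samples t-test. The millisecond values are invented; only the direction of the effect (longer VOT for MP words) comes from the abstract.

# Sketch of the MP vs. NMP voice-onset-time comparison; values invented.
from scipy import stats

vot_mp = [82.1, 79.4, 85.0, 88.2, 76.9, 84.3]   # VOT (ms) for MP words, e.g. "cape"
vot_nmp = [74.0, 71.8, 77.5, 69.9, 75.2, 72.6]  # VOT (ms) for NMP words, e.g. "cake"

t, p = stats.ttest_ind(vot_mp, vot_nmp)
print(f"t = {t:.2f}, p = {p:.3f}")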


Cognition | 2011

Interaction and Representational Integration: Evidence from Speech Errors

Matthew Goldrick; H. Ross Baker; Amanda Murphy; Melissa Baese-Berk

We examine the mechanisms that support interaction between lexical, phonological and phonetic processes during language production. Studies of the phonetics of speech errors have provided evidence that partially activated lexical and phonological representations influence phonetic processing. We examine how these interactive effects are modulated by lexical frequency. Previous research has demonstrated that during lexical access, the processing of high frequency words is facilitated; in contrast, during phonetic encoding, the properties of low frequency words are enhanced. These contrasting effects provide the opportunity to distinguish two theoretical perspectives on how interaction between processing levels can be increased. A theory in which cascading activation is used to increase interaction predicts that the facilitation of high frequency words will enhance their influence on the phonetic properties of speech errors. Alternatively, if interaction is increased by integrating levels of representation, the phonetics of speech errors will reflect the retrieval of enhanced phonetic properties for low frequency words. Utilizing a novel statistical analysis method, we show that in experimentally induced speech errors low lexical frequency targets and outcomes exhibit enhanced phonetic processing. We sketch an interactive model of lexical, phonological and phonetic processing that accounts for the conflicting effects of lexical frequency on lexical access and phonetic processing.


Psychological Science | 2014

Long-Term Temporal Tracking of Speech Rate Affects Spoken-Word Recognition

Melissa Baese-Berk; Christopher C. Heffner; Laura C. Dilley; Mark A. Pitt; Tuuli H. Morrill; J. Devin McAuley

Humans unconsciously track a wide array of distributional characteristics in their sensory environment. Recent research in spoken-language processing has demonstrated that the speech rate surrounding a target region within an utterance influences which words, and how many words, listeners hear later in that utterance. On the basis of hypotheses that listeners track timing information in speech over long timescales, we investigated the possibility that the perception of words is sensitive to speech rate over such a timescale (e.g., an extended conversation). Results demonstrated that listeners tracked variation in the overall pace of speech over an extended duration (analogous to that of a conversation that listeners might have outside the lab) and that this global speech rate influenced which words listeners reported hearing. The effects of speech rate became stronger over time. Our findings are consistent with the hypothesis that neural entrainment by speech occurs on multiple timescales, some lasting more than an hour.
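
One way to picture long-term rate tracking is as a running estimate of global speech rate that accumulates over a session. The sketch below, on hypothetical utterance data, keeps a rolling mean of per-utterance rates in syllables per second; the study's actual stimuli and rate measure may differ.

# Rolling estimate of global speech rate over a session; data hypothetical.
from collections import deque

def rolling_rate(utterances, window=50):
    """Yield the mean rate (syllables/s) over the last `window` utterances."""
    recent = deque(maxlen=window)
    for syllables, duration_s in utterances:
        recent.append(syllables / duration_s)
        yield sum(recent) / len(recent)

session = [(12, 3.1), (9, 2.0), (15, 4.2), (11, 2.6)]  # (syllables, seconds)
for i, rate in enumerate(rolling_rate(session, window=3), start=1):
    print(f"after utterance {i}: global rate = {rate:.2f} syll/s")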


Journal of the Acoustical Society of America | 2015

Speaking rate consistency in native and non-native speakers of English

Melissa Baese-Berk; Tuuli H. Morrill

Non-native speech differs from native speech in multiple ways. Previous research has described segmental and suprasegmental differences between native and non-native speech in terms of group averages. For example, average speaking rate for non-natives is slower than for natives. However, it is unknown whether non-native speech is also more variable than native speech. This study introduces a method of comparing rate change across utterances, demonstrating that non-native speaking rate is more variable than native speaking rate. These results suggest that future work examining non-native speech perception and production should investigate both mean differences and variability in the signal.
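
A minimal sketch of one way such rate variability could be quantified: the coefficient of variation (standard deviation over mean) of utterance-level speaking rates per group. The rates are invented, and the study's own measure of rate change across utterances may be defined differently.

# Coefficient of variation of per-utterance speaking rates; rates invented.
import statistics

def rate_cv(rates):
    """SD/mean of utterance-level speaking rates (syllables/second)."""
    return statistics.stdev(rates) / statistics.mean(rates)

native = [4.8, 5.1, 4.9, 5.0, 4.7]
non_native = [3.2, 4.1, 2.9, 3.8, 3.1]

print(f"native CV: {rate_cv(native):.3f}")
print(f"non-native CV: {rate_cv(non_native):.3f}")  # expected to be larger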


Journal of the Acoustical Society of America | 2017

A relationship between processing speech in noise and dysarthric speech

Stephanie A. Borrie; Melissa Baese-Berk; Kristin J. Van Engen; Tessa Bent

There is substantial individual variability in understanding speech in adverse listening conditions. This study examined whether a relationship exists between processing speech in noise (environmental degradation) and dysarthric speech (source degradation), with regard to intelligibility performance and the use of metrical stress to segment the degraded speech signals. Ninety native speakers of American English transcribed speech in noise and dysarthric speech. For each type of listening adversity, transcriptions were analyzed for proportion of words correct and lexical segmentation errors indicative of stress cue utilization. Consistent with the hypotheses, intelligibility performance for speech in noise was correlated with intelligibility performance for dysarthric speech, suggesting similar cognitive-perceptual processing mechanisms may support both. The segmentation results also support this postulation. While stress-based segmentation was stronger for speech in noise relative to dysarthric speech, listeners utilized metrical stress to parse both types of listening adversity. In addition, reliance on stress cues for parsing speech in noise was correlated with reliance on stress cues for parsing dysarthric speech. Taken together, the findings demonstrate a preference to deploy the same cognitive-perceptual strategy in conditions where metrical stress offers a route to segmenting degraded speech.
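
A minimal sketch of the two intelligibility scores and their correlation across listeners, on invented data: proportion of target words correctly transcribed for each adversity type, then a Pearson correlation. The paper's transcription scoring and segmentation-error coding are more elaborate than this.

# Proportion-correct scoring and cross-condition correlation; data invented.
from scipy.stats import pearsonr

def prop_correct(transcribed, target):
    """Proportion of target words that appear in the transcription."""
    return sum(w in transcribed for w in target) / len(target)

print(prop_correct(["the", "cat", "sat"], ["the", "black", "cat", "sat"]))  # 0.75

# One mean score per listener, per adversity type:
noise_scores = [0.62, 0.48, 0.71, 0.55, 0.66, 0.59]
dysarthria_scores = [0.51, 0.39, 0.63, 0.47, 0.58, 0.50]

r, p = pearsonr(noise_scores, dysarthria_scores)
print(f"r = {r:.2f}, p = {p:.3f}")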


Phonetica | 2018

Re-Examining Phonetic Variability in Native and Non-Native Speech

Charlotte Vaughn; Melissa Baese-Berk; Kaori Idemaru

Background/Aims: Non-native speech is frequently characterized as being more variable than native speech. However, the few studies that have directly investigated phonetic variability in the speech of second language learners have considered a limited subset of native/non-native language pairings and few linguistic features. Methods: The present study examines group-level within-speaker variability and central tendencies in acoustic properties of vowels and stops produced by learners of Japanese from two native language backgrounds, English and Mandarin, as well as native Japanese speakers. Results: Results show that non-native speakers do not always exhibit more phonetic variability than native speakers, but rather that patterns of variability are specific to individual linguistic features and their instantiations in L1 and L2. Conclusion: Adopting this more nuanced approach to variability offers important enhancements to several areas of linguistic theory.
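
To illustrate the feature-specific view of variability, the sketch below computes each speaker's standard deviation for a given acoustic feature and averages it by L1 group. Group labels, speaker IDs, feature names, and values are illustrative only, not the study's measurements.

# Per-feature within-speaker variability, averaged by group; data illustrative.
import statistics
from collections import defaultdict

# (group, speaker, feature) -> token measurements
tokens = {
    ("L1-English", "s1", "VOT_ms"): [28.0, 35.5, 31.2, 40.1],
    ("L1-Mandarin", "s2", "VOT_ms"): [30.4, 31.0, 29.8, 30.9],
    ("L1-Japanese", "s3", "VOT_ms"): [29.5, 33.2, 30.8, 31.7],
}

group_sds = defaultdict(list)
for (group, speaker, feature), values in tokens.items():
    group_sds[(group, feature)].append(statistics.stdev(values))

for (group, feature), sds in sorted(group_sds.items()):
    print(f"{group} / {feature}: mean within-speaker SD = {statistics.mean(sds):.2f}")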


Linguistics Vanguard | 2018

Predictability and perception for native and non-native listeners

Melissa Baese-Berk; Tuuli H. Morrill; Laura C. Dilley

Phonological knowledge is influenced by a variety of cues that reflect predictability (e.g. semantic predictability). Listeners utilize various aspects of predictability when determining what they have heard. In the present paper, we ask how aspects of the acoustic phonetic signal (e.g. speaking rate) interact with other knowledge reflecting predictability (e.g. lexical frequency and collocation strength) to influence how speech is perceived. Specifically, we examine perception of function words by native and non-native speakers. Our results suggest that both native and non-native speakers are sensitive to factors that influence the predictability of the signal, including speaking rate, frequency, and collocation strength, when listening to speech, and use these factors to predict the phonological structure of stretches of ambiguous speech. However, reliance on these cues differs as a function of their experience and proficiency with the target language. Non-native speakers are less sensitive to some aspects of the acoustic phonetic signal (e.g. speaking rate). However, they appear to be quite sensitive to other factors, including frequency. We discuss how these results inform our understanding of the interplay between predictability and speech perception by different listener populations and how use of features reflecting predictability interacts with recovery of phonological structure of spoken language.
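
Collocation strength is often operationalized as pointwise mutual information (PMI) between adjacent words; the sketch below computes PMI from raw corpus counts. The counts are invented, and the paper does not specify PMI as its measure.

# Pointwise mutual information for a two-word sequence; counts invented.
import math

def pmi(count_xy, count_x, count_y, n_tokens):
    """PMI = log2( P(x,y) / (P(x) * P(y)) )."""
    p_xy = count_xy / n_tokens
    p_x = count_x / n_tokens
    p_y = count_y / n_tokens
    return math.log2(p_xy / (p_x * p_y))

# How strongly "sort of" coheres in a hypothetical million-word corpus:
print(f"PMI('sort', 'of') = {pmi(1500, 2200, 30000, 1_000_000):.2f}")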

Collaboration


Dive into Melissa Baese-Berk's collaborations.

Top Co-Authors

Laura C. Dilley
Michigan State University

Midam Kim
Northwestern University