Publications


Featured research published by Valerie Freeman.


Journal of Phonetics | 2014

Hyperarticulation as a signal of stance

Valerie Freeman

This study analyzes an episode of a televised political talk show for evidence that speakers hyperarticulate concepts about which they express stances, a use of hyperarticulation that interacts with the discourse function of signaling new information. Using content analysis, utterances were coded on two dimensions: Evaluation (presence or absence of stance-expression) and Novelty (new or given information). To compare the resulting groups, four measures indicating hyperarticulation were used: speech rate of phrases, and the duration, pitch, and vowel space expansion (first and second formant values) of stressed vowels in the phrases. Group results showed significant effects for both Evaluation and Novelty, and an interaction between them. Stance-expressing items were hyperarticulated compared to a control group of neutral phrases, and within each group, new information was hyperarticulated compared to given information. Speech rate showed these effects most reliably, with vowel duration showing effects for Evaluation. Vowel space expansion showed the same patterns without statistical significance; pitch was not a reliable indicator. These findings provide acoustic correlates to stance-expression, which have not been extensively investigated previously and which can be applied in future work on the identification of specific types of stance.
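As an illustration of the fourth measure, here is a minimal Python sketch of one common way to quantify vowel space expansion from F1/F2 values: mean Euclidean distance from the speaker's vowel-space centroid. The data and the exact operationalization are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def vowel_space_expansion(f1, f2):
    """Mean Euclidean distance of (F1, F2) tokens from the speaker's
    vowel-space centroid; larger values suggest a more expanded
    (hyperarticulated) vowel space.

    f1, f2: arrays of formant values (Hz) for one speaker's stressed vowels.
    """
    points = np.column_stack([f1, f2])
    centroid = points.mean(axis=0)
    return np.linalg.norm(points - centroid, axis=1).mean()

# Hypothetical comparison: stance-expressing vs. neutral tokens.
stance_f1, stance_f2 = np.array([650, 320, 700]), np.array([1700, 2300, 1200])
neutral_f1, neutral_f2 = np.array([600, 380, 640]), np.array([1650, 2150, 1300])
print(vowel_space_expansion(stance_f1, stance_f2))
print(vowel_space_expansion(neutral_f1, neutral_f2))
```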


Journal of Deaf Studies and Deaf Education | 2017

Speech intelligibility and psychosocial functioning in deaf children and teens with cochlear implants

Valerie Freeman; David B. Pisoni; William G. Kronenberger; Irina Castellanos

Deaf children with cochlear implants (CIs) are at risk for psychosocial adjustment problems, possibly due to delayed speech-language skills. This study investigated associations between speech intelligibility (a core component of spoken-language ability) and the psychosocial development of prelingually deaf CI users. Audio-transcription measures of speech intelligibility and parent reports of psychosocial behaviors were obtained for two age groups (preschool, school-age/teen). CI users in both age groups scored more poorly than typically hearing peers on speech intelligibility and several psychosocial scales. Among preschool CI users, five scales were correlated with speech intelligibility: functional communication, attention problems, atypicality, withdrawal, and adaptability. Among school-age/teen CI users, these five scales were again correlated with speech intelligibility, along with four more: leadership, activities of daily living, anxiety, and depression. Results suggest that speech intelligibility may be an important contributing factor underlying several domains of psychosocial functioning in children and teens with CIs, particularly involving socialization, communication, and emotional adjustment.
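The reported scale-intelligibility associations are correlational; below is a minimal sketch of that kind of analysis. The scores are hypothetical and the scale names are taken from the abstract, not from the study's data.

```python
import pandas as pd
from scipy.stats import pearsonr

# Hypothetical table: one row per child, with a transcription-based
# intelligibility score and parent-report scale scores (values invented).
df = pd.DataFrame({
    "intelligibility": [0.42, 0.77, 0.55, 0.90, 0.63],
    "functional_communication": [38, 55, 47, 60, 50],
    "attention_problems": [62, 48, 57, 45, 52],
})

for scale in ["functional_communication", "attention_problems"]:
    r, p = pearsonr(df["intelligibility"], df[scale])
    print(f"{scale}: r={r:.2f}, p={p:.3f}")
```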


Spoken Language Technology Workshop | 2014

Recognition of stance strength and polarity in spontaneous speech

Gina-Anne Levow; Valerie Freeman; Alena Hrynkevich; Mari Ostendorf; Richard Wright; Julian Chan; Yi Luan; Trang Tran

From activities as simple as scheduling a meeting to those as complex as balancing a national budget, people take stances in negotiations and decision making. While the related areas of subjectivity and sentiment analysis have received significant attention, work has focused almost exclusively on text, and much stance-taking activity is carried out verbally. This paper investigates automatic recognition of stance-taking in spontaneous speech. It first presents a new annotated corpus of spontaneous, conversational speech designed to elicit high densities of stance-taking at different strengths. Speaker spurts are annotated both for strength of stance-taking behavior and polarity of stance. Based on this annotated corpus, we develop classifiers for automatic recognition of stance-taking behavior in speech. We employ a range of lexical, speaking style, and prosodic features in a boosting framework. The classifiers achieve strong accuracies on both binary detection of stance and four-way recognition of stance strength, well above most common class assignment. Finally, we classify the polarity of stance-taking spurts, obtaining accuracies around 80%. The best classifiers rely primarily on word unigram features, with speaking style and prosodic features yielding lower accuracies but still well above common class assignment.
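A minimal sketch of the kind of boosting classifier the paper describes, combining word-unigram and prosodic features. The data, feature values, and scikit-learn setup are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.sparse import csr_matrix, hstack
from sklearn.ensemble import AdaBoostClassifier
from sklearn.feature_extraction.text import CountVectorizer

# Toy spurts with binary stance labels (illustrative; the real corpus is
# annotated conversational speech with a much richer feature set).
texts = ["i really think we should pick the generator",
         "um okay",
         "no that is a terrible idea",
         "sure whatever you want"]
prosody = np.array([[5.1, 210.0],   # speech rate (syll/s), median pitch (Hz)
                    [3.2, 180.0],
                    [5.8, 230.0],
                    [4.0, 190.0]])
labels = [1, 0, 1, 0]  # 1 = stance-taking, 0 = no stance

# Word unigrams plus prosodic features, echoing the paper's feature mix.
vec = CountVectorizer()
X = hstack([vec.fit_transform(texts), csr_matrix(prosody)]).tocsr()

clf = AdaBoostClassifier(n_estimators=100).fit(X, labels)
print(clf.predict(X))
```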


Journal of the Acoustical Society of America | 2014

Phonetic correlates of stance-taking

Valerie Freeman; Richard Wright; Gina-Anne Levow; Yi Luan; Julian Chan; Trang Tran; Victoria Zayats; Maria Antoniak; Mari Ostendorf

Stance, or a speaker’s attitudes or opinions about the topic of discussion, has been investigated textually in conversation and discourse analysis and in computational models, but little work has focused on its acoustic-phonetic properties. This is a difficult problem, given that stance is a complex activity that must be expressed along with several other types of meaning (informational, social, etc.) using the same acoustic channels. In this presentation, we begin to identify some acoustic indicators of stance in natural speech using a corpus of collaborative conversational tasks which have been hand-annotated for stance strength (none, weak, moderate, and strong) and polarity (positive, negative, and neutral). A preliminary analysis of 18 dyads completing two tasks suggests that increases in stance strength are correlated with increases in speech rate and pitch and intensity medians and ranges. Initial results for polarity also suggest correlations with speech rate and intensity. Current investigations...
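A minimal sketch of the kind of analysis described, relating per-spurt pitch medians and ranges to annotated stance strength. The values and the choice of Spearman rank correlation are illustrative assumptions, since strength is an ordinal annotation.

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical pitch tracks (Hz) for four spurts and their annotated
# stance strength: 0=none, 1=weak, 2=moderate, 3=strong.
pitch_tracks = [np.array([180, 190, 200]), np.array([170, 175, 172]),
                np.array([200, 240, 210]), np.array([160, 165, 158])]
strength = [2, 1, 3, 0]

medians = [np.median(t) for t in pitch_tracks]
ranges = [t.max() - t.min() for t in pitch_tracks]

print(spearmanr(strength, medians))  # does stance strength track pitch median?
print(spearmanr(strength, ranges))   # ...and pitch range?
```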


Journal of the Acoustical Society of America | 2018

Salience of cochlear implant users’ speech rate

Valerie Freeman

Speech rate-matching is a form of rapid accommodation in which speakers adapt their speech rates to match their interlocutor’s previous utterance. Such responsiveness may contribute to rhythmic convergence between speakers throughout a conversation. However, little is known about interactions between typical speakers and those with speech or hearing difficulties. In recent work, prelingually deaf cochlear implant (CI) users were poorer rate-matchers than their peers, but it was unclear whether interlocutors accommodated toward CI users’ speech rates, which can vary widely between individuals. A follow-up study adapted procedures from work in which participants rate-matched toward fast- and slow-talking Parkinson’s patients. Surprisingly, people did not rate-match to either CI users or controls. This study explores one possible explanation: that differences in speech rate were not salient enough for participants to modify their own rates in response. Following previous procedures, participants (a) alternated hearing CI users’ sentences and reading other sentences, (b) rated CI users’ utterances as fast, slow, or neither, and (c) repeated the first task with different stimuli. Results will show whether participants were able to identify differences in speech rate (when prompted) and whether they improved in speech rate-matching after speech rate was brought to their attention.
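One simple way to operationalize rate-matching is to correlate the rate of each heard sentence with the rate of the sentence the participant reads next; higher correlation suggests stronger matching. The sketch below uses that measure with hypothetical rates and is not necessarily the study's scoring method.

```python
import numpy as np

def rate_matching_score(model_rates, response_rates):
    """Pearson correlation between the rates (syllables/sec) of heard model
    sentences and the rates of the sentences the participant reads next.
    One simple operationalization of rate-matching, for illustration only.
    """
    return np.corrcoef(model_rates, response_rates)[0, 1]

# Hypothetical trial data: rates of heard CI-user sentences vs. read responses.
heard = np.array([3.2, 5.1, 4.0, 5.8, 3.6])
read = np.array([4.4, 4.6, 4.5, 4.7, 4.4])   # nearly flat: little rate-matching
print(rate_matching_score(heard, read))
```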


Proceedings of the Annual Meeting of the Berkeley Linguistics Society | 2015

Perceptual Distribution of Merging Phonemes

Valerie Freeman

This study seeks to map the perceptual vowel space of front vowel phonemes undergoing merger before voiced velars in Pacific Northwest English (PNWE). In production, most speakers spectrally merge /ɛg, eg/ at a point between their non-prevelar counterparts /ɛ, e/, but the height of /æg/ is more variable. With variable production in the speech community, a question of perception arises: do Northwesterners maintain the same category boundaries for prevelar front vowels as non-prevelars, or are the prevelars merged in perception as they often are in production? This study addresses the question by mapping the perceptual space of front vowels in prevelar vs. precoronal contexts. Stimuli were created to synthesize an initial /b/ followed by 24 front-vowel formant value combinations (F1, F2) with no offglide or coda transitions. Twenty Northwestern subjects were told that each stimulus was the first part of a word that had been cut off in the middle, and they indicated which word they heard with a button press. In the first three blocks of randomized stimulus presentation, the word choices were in the shape /b_d/: bad, bid, bayed, bed, bead; the second three blocks used the same randomly-presented stimuli (unbeknownst to subjects), but the word choices were in the shape /b_g/: bag, big, bagel, beg, beagle. This design forces lexical access during the task, as subjects must imagine they are hearing words, not contextless phonemes. The paper is organized as follows: Section 2 presents background information on the merger in production, followed by predictions for perception. Section 3 describes the experimental design, stimuli creation, and response procedures. Section 4 presents results, and Section 5 concludes with discussion and future work.
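A minimal sketch of generating a 24-point F1/F2 stimulus grid like the one described. The step values are hypothetical placeholders spanning the front-vowel region, since the paper's exact formant values are not given here.

```python
import itertools

# Hypothetical grid: 4 F1 steps x 6 F2 steps = 24 front-vowel stimuli (Hz),
# roughly spanning the /i/-/ae/ region; actual study values may differ.
f1_steps = [300, 430, 560, 690]
f2_steps = [1700, 1860, 2020, 2180, 2340, 2500]

stimuli = list(itertools.product(f1_steps, f2_steps))
assert len(stimuli) == 24
for i, (f1, f2) in enumerate(stimuli, 1):
    print(f"stimulus {i:2d}: F1={f1} Hz, F2={f2} Hz")
```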


Journal of the Acoustical Society of America | 2015

Prosodic features of stance acts

Valerie Freeman

While textual aspects of stance (attitudes/opinions) have been well studied in conversation analysis and computational models, acoustic-phonetic properties have received less attention. Recent work (2014 and 2015) has found that variations in prosodic measures (speech rate, vowel duration, pitch, and intensity) are correlated with stance presence and strength in unscripted speech, and stances with different discourse functions may be distinguishable by the shapes of their pitch and intensity contours. Building on these early findings, this presentation investigates prosodic properties of various stance-act types in spontaneous conversation (e.g., opinion-offering and soliciting, (dis)agreement, persuasion, rapport-building). The dataset contains over 32,000 stressed vowels from content words spoken by 40 speakers drawn from an audio corpus of dyads engaged in collaborative tasks. Speaker-normalized vowel duration, pitch, and intensity are automatically extracted from time-aligned transcriptions that have been hand-annotated for stance strength, polarity, and act type. Results show that changes in the prosodic measures combine to distinguish several notable stance-act types, including: weak-positive agreement, rapport-building agreement, reluctance to accept a stance, stance-softening, and backchanneling. Pitch and intensity contours over vowel duration are particularly illustrative, suggesting a future avenue in examining contours over whole stance acts.
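A minimal sketch of the speaker normalization step: z-scoring each prosodic measure within speaker so that duration, pitch, and intensity are comparable across speakers. This is one common normalization, not necessarily the one used in the study; the data are hypothetical.

```python
import pandas as pd

# Hypothetical token table: one row per stressed vowel from a content word.
tokens = pd.DataFrame({
    "speaker": ["s01", "s01", "s02", "s02"],
    "duration": [0.12, 0.20, 0.15, 0.09],   # seconds
    "pitch": [190.0, 230.0, 110.0, 125.0],  # Hz
    "intensity": [62.0, 70.0, 58.0, 65.0],  # dB
})

# Z-score each measure within speaker.
measures = ["duration", "pitch", "intensity"]
tokens[measures] = (tokens.groupby("speaker")[measures]
                    .transform(lambda x: (x - x.mean()) / x.std()))
print(tokens)
```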


Journal of the Acoustical Society of America | 2014

Phonetic marking of stance in a collaborative-task spontaneous-speech corpus

Valerie Freeman; Gina A. Levow; Richard Wright

While stance-taking has been examined qualitatively within conversation and discourse analysis and modeled using text-based approaches in computational linguistics, there has been little quantification of its acoustic-phonetic correlates. One reason for this is the relative sparsity of stance-taking behavior in spontaneous conversations. Another is that stance marking is embedded into a highly variable signal that encodes many other channels of information (prosody, word entropy, audience, etc.). To address these issues, we draw on varying subfields to build a corpus of stance-dense conversation and develop methods for identification and analysis of stance-related cues in the speech signal. In the corpus, dyads are engaged in three collaborative tasks designed to elicit increasing levels of investment. In these imaginary store inventory, survival, and budget-balancing scenarios, participants solve problems, but the conversation is otherwise unscripted. Based on limited previous work (Freeman, under review...


Journal of the Acoustical Society of America | 2010

Using hyperarticulation to quantify interaction between discourse functions

Valerie Freeman

Social factors are known to affect speech production, but in discourse and conversation analytic branches of sociolinguistics, quantitative measures are not as common as qualitative observations. This study uses acoustic measures of hyperarticulation to quantify the effects of two interacting discourse functions: new-information signaling and stance expression. For each of five speakers in an hour-long political talk show, content analysis was performed on all phrases repeated three or more times to separate neutral from stance-expressing tokens and new from given repetitions of those tokens. Word, syllable, and vowel duration were measured from spectrograms; formant (LPC) and pitch (autocorrelation) values were measured at onset, 20%, 50%, 80%, and offset of stressed vowels. Preliminary results from repeated measures analysis of variance suggest that stance is indeed a significant predictor of hyperarticulation which interacts with newness for at least some speakers. This work shows one way that acoustic...
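The measurement procedure (pitch by autocorrelation, formants by LPC, at five points in each stressed vowel) can be sketched with the parselmouth Python interface to Praat. The file name and vowel interval below are hypothetical placeholders.

```python
import parselmouth  # Python interface to Praat

def vowel_measures(wav_path, start, end):
    """Pitch and first two formants at onset, 20%, 50%, 80%, and offset of a
    stressed vowel, mirroring the measurement points described above.
    wav_path, start, and end are placeholders for a real recording and
    a vowel interval (in seconds) from a time-aligned transcription.
    """
    snd = parselmouth.Sound(wav_path)
    pitch = snd.to_pitch()               # autocorrelation pitch tracking
    formants = snd.to_formant_burg()     # LPC (Burg) formant estimation
    rows = []
    for frac in (0.0, 0.2, 0.5, 0.8, 1.0):
        t = start + frac * (end - start)
        rows.append({
            "time": t,
            "f0": pitch.get_value_at_time(t),
            "F1": formants.get_value_at_time(1, t),
            "F2": formants.get_value_at_time(2, t),
        })
    return rows

# Example call (hypothetical file and vowel interval):
# print(vowel_measures("talkshow_speaker1.wav", 12.34, 12.46))
```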


Conference of the International Speech Communication Association | 2014

Manipulating stance and involvement using collaborative tasks: an exploratory comparison

Valerie Freeman; Julian Chan; Gina-Anne Levow; Richard Wright; Mari Ostendorf; Victoria Zayats

Collaboration


Dive into Valerie Freeman's collaborations.

Top Co-Authors

Richard Wright (University of Washington)
Mari Ostendorf (University of Washington)
Julian Chan (University of Washington)
Yi Luan (University of Washington)
David B. Pisoni (Indiana University Bloomington)