Publications


Featured research published by Jennifer J. Venditti.


Conference of the International Speech Communication Association | 2003

Classifying Subject Ratings of Emotional Speech Using Acoustic Features

Jackson Liscombe; Jennifer J. Venditti; Julia Hirschberg

This paper presents results from a study examining emotional speech using acoustic features and their use in automatic machine learning classification. In addition, we propose a classification scheme for the labeling of emotions on continuous scales. Our findings support those of previous research as well as indicate possible future directions utilizing spectral tilt and pitch contour to distinguish emotions in the valence dimension. Speech is a rich source of information, not only about what a speaker says, but also about what the speaker’s attitude is toward the listener and toward the topic under discussion, as well as the speaker’s own current state of mind. Until recently, most research on spoken language systems has focused on propositional content: what words is the speaker producing? Currently there is considerable interest in going beyond mere words to discover the semantic content of utterances. However, we believe it is important to go beyond semantic content as well, in order to fully interpret what human listeners infer from listening to other humans. In this paper we present results from some recent and ongoing experiments in the study of emotional speech, designed to elicit subjective judgments of tokens of emotional speech and to identify acoustic and prosodic correlates of such speech based on these classifications. We discuss previous research as well as show results from correlation and machine learning experiments, and conclude with the implications of this study.
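
The abstract names the overall pipeline (utterance-level acoustic features feeding a machine learning classifier) without fixing a toolchain. Below is a minimal sketch of that pipeline in Python, assuming librosa and scikit-learn; the feature set, the classifier, and the paths/labels variables are illustrative stand-ins, not the paper's actual setup.

```python
# Sketch: utterance-level acoustic features feeding a classifier.
# Feature set and model are illustrative, not the paper's.
import numpy as np
import librosa
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def acoustic_features(wav_path):
    y, sr = librosa.load(wav_path, sr=None)
    # F0 contour via probabilistic YIN; unvoiced frames come back as NaN.
    f0, _, _ = librosa.pyin(y, fmin=75, fmax=500, sr=sr)
    f0 = f0[~np.isnan(f0)]
    rms = librosa.feature.rms(y=y)[0]
    return np.array([
        f0.mean(), f0.std(), f0.max() - f0.min(),  # pitch level and range
        rms.mean(), rms.std(),                     # energy statistics
        len(y) / sr,                               # duration in seconds
    ])

# Hypothetical inputs: one wav file and one emotion label per token.
# X = np.vstack([acoustic_features(p) for p in paths])
# print(cross_val_score(RandomForestClassifier(), X, labels, cv=5).mean())
```

For the continuous-scale labeling the paper proposes, a regressor fit against mean subject ratings would replace the classifier.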


Conference of the International Speech Communication Association | 2005

Detecting Certainness in Spoken Tutorial Dialogues

Jackson Liscombe; Julia Hirschberg; Jennifer J. Venditti

What role does affect play in spoken tutorial systems and is it automatically detectable? We investigated the classification of student certainness in a corpus collected for ITSPOKE, a speech-enabled Intelligent Tutoring System (ITS). Our study suggests that tutors respond to indications of student uncertainty differently from student certainty. Results of machine learning experiments indicate that acoustic-prosodic features can distinguish student certainness from other student states. A combination of acoustic-prosodic features extracted at two levels of intonational analysis (breath groups and turns) achieves 76.42% classification accuracy, a 15.8% relative improvement over baseline performance. Our results suggest that student certainness can be automatically detected and utilized to create better spoken dialog ITSs.
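
The two-level feature combination is the abstract's key design point. Here is a sketch of how breath-group-level and turn-level statistics might be pooled into one fixed-length vector per turn; the segmentation into breath groups is assumed to exist already, and the statistics chosen are illustrative.

```python
# Sketch: combine prosodic statistics from two levels of analysis.
# Breath-group segmentation (e.g. by pause detection) is assumed done;
# everything here operates on F0 contours as NumPy arrays.
import numpy as np

def contour_stats(f0):
    f0 = f0[~np.isnan(f0)]
    return np.array([f0.mean(), f0.std(), f0.min(), f0.max()])

def turn_vector(turn_f0, breath_group_f0s):
    # Turn level: statistics over the whole turn's contour.
    turn_vec = contour_stats(turn_f0)
    # Breath-group level: per-group statistics, mean-pooled so vector
    # length does not depend on how many groups the turn contains.
    bg_vecs = np.vstack([contour_stats(bg) for bg in breath_group_f0s])
    return np.concatenate([turn_vec, bg_vecs.mean(axis=0)])
```

A classifier trained on such vectors would then separate certain turns from uncertain, neutral, or mixed ones.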


Proceedings of Computer Animation 2002 (CA 2002) | 2002

Making discourse visible: coding and animating conversational facial displays

Douglas DeCarlo; Corey Revilla; Matthew Stone; Jennifer J. Venditti

People highlight the intended interpretation of their utterances within a larger discourse by a diverse set of nonverbal signals. These signals represent a key challenge for animated conversational agents because they are pervasive, variable, and need to be coordinated judiciously in an effective contribution to conversation. In this paper we describe a freely-available cross-platform real-time facial animation system, RUTH, that animates such high-level signals in synchrony with speech and lip movements. RUTH adopts an open, layered architecture in which fine-grained features of the animation can be derived by rule from inferred linguistic structure, allowing us to use RUTH, in conjunction with annotation of observed discourse, to investigate the meaningful high-level elements of conversational facial movement for American English speakers.
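
RUTH's layered architecture derives fine-grained animation features by rule from inferred linguistic structure. The toy rule layer below is in that spirit only: the annotation labels, display names, and command format are hypothetical, not RUTH's actual specification.

```python
# Toy rule layer mapping discourse/intonation annotations to facial
# display commands. Labels and command names are hypothetical.
RULES = {
    "pitch_accent":  [("brow_raise", 0.6)],   # highlight accented word
    "segment_start": [("head_nod", 0.4)],     # mark new discourse segment
    "question_rise": [("brow_raise", 0.8), ("head_tilt", 0.3)],
}

def facial_displays(annotated_words):
    """annotated_words: list of (word, start_time_sec, tags) tuples."""
    commands = []
    for word, t, tags in annotated_words:
        for tag in tags:
            for display, intensity in RULES.get(tag, []):
                commands.append((t, display, intensity))
    return sorted(commands)

# A pitch-accented word at 1.2 s yields a brow raise timed to it:
print(facial_displays([("really", 1.2, ["pitch_accent"])]))
```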


Computer Animation and Virtual Worlds | 2004

Specifying and animating facial signals for discourse in embodied conversational agents

Douglas DeCarlo; Matthew Stone; Corey Revilla; Jennifer J. Venditti

People highlight the intended interpretation of their utterances within a larger discourse by a diverse set of non-verbal signals. These signals represent a key challenge for animated conversational agents because they are pervasive, variable, and need to be coordinated judiciously in an effective contribution to conversation. In this paper, we describe a freely available cross-platform real-time facial animation system, RUTH, that animates such high-level signals in synchrony with speech and lip movements. RUTH adopts an open, layered architecture in which fine-grained features of the animation can be derived by rule from inferred linguistic structure, allowing us to use RUTH, in conjunction with annotation of observed discourse, to investigate the meaningful high-level elements of conversational facial movement for American English speakers.


Archive | 2003

Intonation and discourse processing

Julia Hirschberg; Jennifer J. Venditti

This paper describes intonational cues to discourse structure, and the role that intonation plays in spoken discourse processing. We begin by discussing two main structures in discourse that one must consider when doing research on discourse processing: segmentation and information status. We then review a number of key studies from the phonetics literature which have investigated the intonational marking of these structures. Next, we discuss in detail the psycholinguistic research to date that has examined the role intonation can play in facilitating or inhibiting the processing of discourse in English and related languages. We conclude by outlining directions for future research on the role of intonation in discourse processing.


Conference of the International Speech Communication Association | 2006

Detecting Question-Bearing Turns in Spoken Tutorial Dialogues

Jackson Liscombe; Jennifer J. Venditti; Julia Hirschberg

Current speech-enabled Intelligent Tutoring Systems do not model student question behavior the way human tutors do, despite evidence indicating the importance of doing so. Our study examined a corpus of spoken tutorial dialogues collected for development of ITSpoke, an Intelligent Tutoring Spoken Dialogue System. We extracted prosodic, lexical, syntactic, and student- and task-dependent information from student turns. Results of 5-fold cross-validation machine learning experiments using AdaBoosted C4.5 decision trees show prediction of student question-bearing turns at a rate of 79.7%. The most useful features were prosodic, especially the pitch slope of the last 200 milliseconds of the student turn. Student pre-test score was the most-used feature. Findings indicate that using turn-based units is acceptable for incorporating question detection capability into practical Intelligent Tutoring Systems.
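
scikit-learn has no C4.5 implementation, so the sketch below stands in an entropy-criterion CART tree under AdaBoost, with the same 5-fold cross-validation protocol; the turn-final pitch slope is computed as a linear fit over the last 200 ms. The feature matrix X and labels y are assumed to exist.

```python
# Sketch: AdaBoosted decision trees with 5-fold cross-validation.
# An entropy-criterion CART tree stands in for C4.5 here.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

def final_pitch_slope(f0, times, window=0.2):
    """Slope (Hz/s) of a line fit to the last `window` seconds of the F0
    contour: the turn-final rise feature the paper found most useful."""
    mask = (times >= times[-1] - window) & ~np.isnan(f0)
    return np.polyfit(times[mask], f0[mask], 1)[0]

# X: feature matrix over student turns (prosodic, lexical, syntactic, ...);
# y: whether each turn bears a question. Both assumed to exist.
# Note: scikit-learn >= 1.2 uses `estimator`; older versions, `base_estimator`.
# clf = AdaBoostClassifier(estimator=DecisionTreeClassifier(criterion="entropy"))
# print(cross_val_score(clf, X, y, cv=5).mean())
```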


International Conference on Spoken Language Processing | 1996

Intonational cues to discourse structure in Japanese

Jennifer J. Venditti; Marc Swerts

The study examines the extent to which intonation plays a role in the structuring of information in a Japanese monologue. The role of pitch accent in the intonational system of Japanese is very different from that in languages like Dutch or English: in Japanese, pitch accent is a lexical property of words and cannot be used to lend prominence to words at the sentence level. Therefore, we wondered if (and how) intonation can cue discourse structure in Japanese, comparable to how it is used in Dutch and English. Results show that fundamental frequency (F0), amplitude, and duration of the final accents in each sentence did not serve to cue the boundaries of discourse segments, contrary to our expectation. However, pitch range variations on NPs, examined in terms of their position in a discourse segment and their information status, did show a correlation with discourse structure.
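
The reported finding is a correlational one: pitch range on an NP varies with its position in the discourse segment. A minimal sketch of that kind of comparison follows, assuming pitch ranges have already been measured per NP and grouped by position; the data layout and the choice of a t-test are illustrative, not the paper's exact analysis.

```python
# Sketch: compare NP pitch ranges by discourse position.
# Per-NP F0 contours and position labels are assumed to be measured already.
import numpy as np
from scipy import stats

def pitch_range(f0):
    f0 = f0[~np.isnan(f0)]
    return f0.max() - f0.min()  # Hz

# Hypothetical groups: NPs that open a discourse segment vs. all others.
# ranges_initial = np.array([pitch_range(c) for c in initial_np_contours])
# ranges_other   = np.array([pitch_range(c) for c in other_np_contours])
# t, p = stats.ttest_ind(ranges_initial, ranges_other)
```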


Conference of the International Speech Communication Association | 2006

Intonational Cues to Student Questions in Tutoring Dialogs

Jennifer J. Venditti; Julia Hirschberg; Jackson Liscombe

Successful Intelligent Tutoring Systems (ITSs) must be able to recognize when their students are asking a question. They must identify question form as well as function in order to respond appropriately. Our study examines whether intonational features, specifically F0 height and rise range, are useful cues to student question type in a corpus of 643 American English questions. Results show a quantitative effect of both form and function. In addition, among clarification-seeking questions, we observed differences based on the type of clarification being sought.
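
Both cues can be read off an F0 contour. The sketch below gives one plausible operationalization (mean F0 for height; max minus min over the utterance-final voiced stretch for rise range); these definitions are for illustration and are not necessarily the paper's exact measurements.

```python
# Sketch: two intonational cues computed from an F0 contour (NumPy array,
# one value per frame, NaN where unvoiced). Definitions are illustrative.
import numpy as np

def question_cues(f0, final_frames=20):
    voiced = f0[~np.isnan(f0)]
    height = voiced.mean()                  # overall F0 height (Hz)
    tail = voiced[-final_frames:]           # utterance-final stretch
    rise_range = tail.max() - tail.min()    # extent of any final rise (Hz)
    return height, rise_range
```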


Archive | 2008

Prominence Marking in the Japanese Intonation System

Jennifer J. Venditti; Kikuo Maekawa; Mary E. Beckman


Archive | 2010

Tone and Intonation

Mary E. Beckman; Jennifer J. Venditti
