Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Scott Prevost is active.

Publication


Featured research published by Scott Prevost.


International Conference on Computer Graphics and Interactive Techniques | 1994

Animated conversation: rule-based generation of facial expression, gesture & spoken intonation for multiple conversational agents

Justine Cassell; Catherine Pelachaud; Norman I. Badler; Mark Steedman; Brett Achorn; Tripp Becket; Brett Douville; Scott Prevost; Matthew Stone

We describe an implemented system which automatically generates and animates conversations between multiple human-like agents with appropriate and synchronized speech, intonation, facial expressions, and hand gestures. Conversation is created by a dialogue planner that produces the text as well as the intonation of the utterances. The speaker/listener relationship, the text, and the intonation in turn drive facial expressions, lip motions, eye gaze, head motion, and arm gestures generators. Coordinated arm, wrist, and hand motions are invoked to create semantically meaningful gestures. Throughout we will use examples from an actual synthesized, fully animated conversation.
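The pipeline the abstract describes, where a dialogue planner produces text plus intonation, which in turn drive the nonverbal channels, can be illustrated with a toy Python sketch. All names, the example sentence, and the gesture/gaze rules here are hypothetical simplifications, not the paper's actual implementation:

```python
# Toy sketch of the described pipeline: planner -> text + intonation,
# which then drive gesture and gaze generators. All names hypothetical.
from dataclasses import dataclass, field

@dataclass
class Utterance:
    text: str
    intonation: list                       # one pitch-accent slot per word
    gestures: list = field(default_factory=list)
    gaze: str = ""

def plan_dialogue(goal):
    # Stand-in for the dialogue planner: it would choose both the words
    # and their intonation from the discourse state.
    return Utterance(text="I have fifty dollars",
                     intonation=["H*", None, "H*", None])

def add_nonverbal(utt, speaker_is_holding_turn):
    # Simplified rules: accented words get co-occurring beat gestures;
    # a speaker holding the turn looks away, otherwise at the listener.
    utt.gestures = [w for w, acc in zip(utt.text.split(), utt.intonation) if acc]
    utt.gaze = "away" if speaker_is_holding_turn else "at-listener"
    return utt
```

The point of the sketch is the data flow: the nonverbal generators consume the planner's output rather than being scripted independently, which is what keeps speech, gesture, and gaze synchronized.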


Speech Communication | 1994

Specifying intonation from context for speech synthesis

Scott Prevost; Mark Steedman

This paper presents a theory and a computational implementation for generating prosodically appropriate synthetic speech in response to database queries. Proper distinctions of contrast and emphasis are expressed in an intonation contour that is synthesized by rule under the control of a grammar, a discourse model and a knowledge base. The theory is based on Combinatory Categorial Grammar, a formalism which easily integrates the notions of syntactic constituency, semantics, prosodic phrasing and information structure. Results from our current implementation demonstrate the system's ability to generate a variety of intonational possibilities for a given sentence depending on the discourse context.
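The information-structural control of intonation described above can be illustrated with a minimal sketch. The tune assignments (L+H* LH% for themes, H* LL% for rhemes, in Pierrehumbert-style notation) follow the general approach associated with this line of work; the function names and the example utterance are hypothetical:

```python
# Toy illustration: mapping information-structural units to tunes.
# Tune inventory follows the theme/rheme treatment described above;
# all function names and data are hypothetical.

THEME_TUNE = ("L+H*", "LH%")   # (pitch accent, boundary tone)
RHEME_TUNE = ("H*", "LL%")

def assign_tune(unit_role):
    """Return (pitch_accent, boundary_tone) for a theme or rheme unit."""
    if unit_role == "theme":
        return THEME_TUNE
    if unit_role == "rheme":
        return RHEME_TUNE
    raise ValueError(f"unknown role: {unit_role}")

# A query like "Which editor did Anna use?" makes "Anna used X" the theme;
# the answer's new material is the rheme.
utterance = [("Anna", "theme"), ("used", "theme"), ("Emacs", "rheme")]
annotated = [(word, assign_tune(role)) for word, role in utterance]
```

The same sentence receives different contours under different queries, which is exactly the context sensitivity the abstract claims.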


Conference of the European Chapter of the Association for Computational Linguistics | 1993

Generating contextually appropriate intonation

Scott Prevost; Mark Steedman

One source of unnaturalness in the output of text-to-speech systems stems from the involvement of algorithmically generated default intonation contours, applied under minimal control from syntax and semantics. It is a tribute both to the resilience of human language understanding and to the ingenuity of the inventors of these algorithms that the results are as intelligible as they are. However, the result is very frequently unnatural, and may on occasion mislead the hearer. This paper extends earlier work on the relation between syntax and intonation in language understanding in Combinatory Categorial Grammar (CCG). A generator with a simple and domain-independent discourse model can be used to direct synthesis of intonation contours for responses to database queries, to convey distinctions of contrast and emphasis determined by the discourse model.


Meeting of the Association for Computational Linguistics | 1996

An Information Structural Approach To Spoken Language Generation

Scott Prevost

This paper presents an architecture for the generation of spoken monologues with contextually appropriate intonation. A two-tiered information structure representation is used in the high-level content planning and sentence planning stages of generation to produce efficient, coherent speech that makes certain discourse relationships, such as explicit contrasts, appropriately salient. The system is able to produce appropriate intonational patterns that cannot be generated by other systems which rely solely on word class and given/new distinctions.


Communications and Mobile Computing | 1995

Synthesizing cooperative conversation

Catherine Pelachaud; Justine Cassell; Norman I. Badler; Mark Steedman; Scott Prevost; Matthew Stone

We describe an implemented system which automatically generates and animates conversations between multiple human-like agents with appropriate and synchronized speech, intonation, facial expressions, and hand gestures. Conversations are created by a dialogue planner that produces the text as well as the intonation of the utterances. The speaker/listener relationship, the text, and the intonation in turn drive facial expressions, lip motions, eye gaze, head motion, and arm gesture generators.


Human Language Technology | 1994

Information based intonation synthesis

Scott Prevost; Mark Steedman

This paper presents a model for generating prosodically appropriate synthesized responses to database queries using Combinatory Categorial Grammar (CCG - cf. [22]), a formalism which easily integrates the notions of syntactic constituency, prosodic phrasing and information structure. The model determines accent locations within phrases on the basis of contrastive sets derived from the discourse structure and a domain-independent knowledge base.
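The accent-placement idea, accenting exactly the material that distinguishes the current referent from its alternatives in a contrastive set, can be sketched as follows. This is a simplified toy with hypothetical names, not the paper's CCG-based implementation:

```python
# Simplified sketch of accent placement driven by contrastive sets.
# A property is accented if at least one alternative in the contrast
# set differs from the current item on it. Names are hypothetical.

def accent_words(item, alternatives):
    """Return the property names on which `item` must be accented,
    i.e. those distinguishing it from some alternative."""
    accented = set()
    for prop, value in item.items():
        if any(alt.get(prop) != value for alt in alternatives):
            accented.add(prop)
    return accented

# "the AMERICAN amplifier" vs. "the British amplifier": nationality is
# contrastive and gets the accent; the shared head noun does not.
current = {"nationality": "American", "head": "amplifier"}
others = [{"nationality": "British", "head": "amplifier"}]
```

With an empty or fully matching alternative set nothing is contrastive, so no contrastive accent is placed, mirroring the role the discourse structure plays in the model.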


Eurographics | 1995

Coordinating vocal and visual parameters for 3D virtual agents

Catherine Pelachaud; Scott Prevost

This paper presents an implemented system for automatically producing prosodically appropriate speech and corresponding facial expressions for animated, three-dimensional agents that respond to simple database queries in a 3D virtual environment. Unlike previous text-to-facial animation approaches, the system described here produces synthesized speech and facial animations entirely from scratch, starting with semantic representations of the message to be conveyed, which are based in turn on a discourse model and a small database of facts about the modeled world.


European Conference on Machine Learning | 1998

Synthesizing Cooperative Conversation

Catherine Pelachaud; Justine Cassell; Norman I. Badler; Mark Steedman; Scott Prevost; Matthew Stone

We describe an implemented system which automatically generates and animates conversations between multiple human-like agents with appropriate and synchronized speech, intonation, facial expressions, and hand gestures. Conversations are created by a dialogue planner that produces the text as well as the intonation of the utterances. The speaker/listener relationship, the text, and the intonation in turn drive facial expressions, lip motions, eye gaze, head motion, and arm gesture generators.


Springer Berlin Heidelberg | 1995

Multimodal Human-Computer Communication

Catherine Pelachaud; Justine Cassell; Norman I. Badler; Mark Steedman; Scott Prevost; Matthew Stone

We describe an implemented system which automatically generates and animates conversations between multiple human-like agents with appropriate and synchronized speech, intonation, facial expressions, and hand gestures. Conversations are created by a dialogue planner that produces the text as well as the intonation of the utterances. The speaker/listener relationship, the text, and the intonation in turn drive facial expressions, lip motions, eye gaze, head motion, and arm gesture generators.


Collaboration


Dive into Scott Prevost's collaborations.

Top Co-Authors

Justine Cassell, Carnegie Mellon University
Catherine Pelachaud, Centre national de la recherche scientifique
Norman I. Badler, University of Pennsylvania
Brett Achorn, University of Pennsylvania
Brett Douville, University of Pennsylvania
Obed E. Torres, Massachusetts Institute of Technology
Tripp Becket, University of Pennsylvania
Laurie Hiyakumoto, Massachusetts Institute of Technology