Matthew Stone
Rutgers University
Publications
Featured research published by Matthew Stone.
International Conference on Computer Graphics and Interactive Techniques | 1994
Justine Cassell; Catherine Pelachaud; Norman I. Badler; Mark Steedman; Brett Achorn; Tripp Becket; Brett Douville; Scott Prevost; Matthew Stone
We describe an implemented system that automatically generates and animates conversations between multiple human-like agents with appropriate and synchronized speech, intonation, facial expressions, and hand gestures. Conversation is created by a dialogue planner that produces the text as well as the intonation of the utterances. The speaker/listener relationship, the text, and the intonation in turn drive the generators for facial expressions, lip motions, eye gaze, head motion, and arm gestures. Coordinated arm, wrist, and hand motions are invoked to create semantically meaningful gestures. Throughout we use examples from an actual synthesized, fully animated conversation.
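To make the architecture concrete, here is a minimal Python sketch of the kind of pipeline the abstract describes, in which a dialogue planner's output (text plus intonation) drives separate nonverbal-behavior generators. The function names, fields, and rules are illustrative assumptions, not the paper's implementation.

    def plan_dialogue(goal):
        """Stand-in dialogue planner: returns utterances with intonation marks."""
        return [{"speaker": "A", "text": "Hello!", "pitch_accents": [0]},
                {"speaker": "B", "text": "Hi, how can I help?", "pitch_accents": [1, 4]}]

    def drive_channels(utterance):
        """Derive synchronized nonverbal behavior from the text and intonation."""
        return {
            "lip_motion": utterance["text"],                    # phoneme-level lip sync stand-in
            "gaze": "toward listener",                          # from the speaker/listener relationship
            "facial": ["brow raise"] * len(utterance["pitch_accents"]),  # one movement per pitch accent
            "arm_gesture": "beat" if utterance["pitch_accents"] else None,
        }

    if __name__ == "__main__":
        for utt in plan_dialogue("greet"):
            print(utt["speaker"], "->", drive_channels(utt))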
International Conference on Computer Graphics and Interactive Techniques | 1998
Douglas DeCarlo; Dimitris N. Metaxas; Matthew Stone
We describe a system that automatically generates varied geometric models of human faces. A collection of random measurements of the face is generated according to anthropometric statistics for likely face measurements in a population. These measurements are then treated as constraints on a parameterized surface. Variational modeling is used to find a smooth surface that satisfies these constraints while using a prototype shape as a reference.
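The sampling step the abstract outlines can be illustrated in a few lines of Python: draw each face measurement from a normal distribution whose mean and standard deviation come from anthropometric tables, then hand the sampled values to the surface-fitting stage as constraints. The measurement names and statistics below are placeholders, not the values used in the paper.

    import random

    # Placeholder anthropometric statistics: (mean, standard deviation) in millimetres.
    FACE_STATS = {
        "bizygomatic_breadth": (139.1, 5.5),
        "nose_length": (51.5, 3.5),
        "mouth_width": (53.0, 3.3),
    }

    def sample_measurements(stats, seed=None):
        """Draw one random set of face measurements, each from its own normal distribution."""
        rng = random.Random(seed)
        return {name: rng.gauss(mean, sd) for name, (mean, sd) in stats.items()}

    if __name__ == "__main__":
        constraints = sample_measurements(FACE_STATS, seed=42)
        for name, value in constraints.items():
            print(f"{name}: {value:.1f} mm")
        # In the full system these values become constraints on a parameterized face
        # surface, and variational modeling finds a smooth surface that meets them.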
Computational Linguistics | 2003
Bonnie Webber; Matthew Stone; Aravind K. Joshi; Alistair Knott
We argue in this article that many common adverbial phrases generally taken to signal a discourse relation between syntactically connected units within discourse structure instead work anaphorically to contribute relational meaning, with only indirect dependence on discourse structure. This allows a simpler discourse structure to provide scaffolding for compositional semantics and reveals multiple ways in which the relational meaning conveyed by adverbial connectives can interact with that associated with discourse structure. We conclude by sketching out a lexicalized grammar for discourse that facilitates discourse interpretation as a product of compositional rules, anaphor resolution, and inference.
International Conference on Computer Graphics and Interactive Techniques | 2004
Matthew Stone; Douglas DeCarlo; Insuk Oh; Christian Rodriguez; Adrian Stere; Alyssa Lees; Christoph Bregler
We describe a method for using a database of recorded speech and captured motion to create an animated conversational character. People's utterances are composed of short, clearly delimited phrases; in each phrase, gesture and speech go together meaningfully and synchronize at a common point of maximum emphasis. We develop tools for collecting and managing performance data that exploit this structure. The tools help create scripts for performers, help annotate and segment performance data, and structure specific messages for characters to use within application contexts. Our animations then reproduce this structure. They recombine motion samples with new speech samples to recreate coherent phrases, and blend segments of speech and motion together phrase-by-phrase into extended utterances. By framing problems for utterance generation and synthesis so that they can draw closely on a talented performance, our techniques support the rapid construction of animated characters with rich and appropriate expression.
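As a rough illustration of the phrase structure the abstract relies on, the Python sketch below pairs a new speech segment with a recorded motion sample so that the gesture stroke lines up with the speech segment's point of maximum emphasis, then strings such phrases together into an utterance. The classes and timing scheme are simplified assumptions, not the tools described in the paper.

    from dataclasses import dataclass, replace
    from typing import List, Tuple

    @dataclass
    class SpeechSegment:
        text: str
        duration: float       # seconds
        emphasis_time: float  # offset of the point of maximum emphasis

    @dataclass
    class MotionSample:
        label: str
        duration: float
        stroke_time: float    # offset of the gesture stroke

    def align_phrase(speech: SpeechSegment, motion: MotionSample) -> Tuple[SpeechSegment, MotionSample]:
        """Shift the motion sample so its stroke coincides with the speech emphasis point."""
        offset = speech.emphasis_time - motion.stroke_time
        return speech, replace(motion, stroke_time=motion.stroke_time + offset)

    def build_utterance(pairs: List[Tuple[SpeechSegment, MotionSample]]):
        """Blend aligned phrases one after another into an extended utterance."""
        return [align_phrase(s, m) for s, m in pairs]

    if __name__ == "__main__":
        phrase = build_utterance([(SpeechSegment("over THERE", 1.2, 0.8),
                                   MotionSample("point-right", 1.0, 0.4))])
        print(phrase)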
Meeting of the Association for Computational Linguistics | 1997
Matthew Stone; Christine Doran
We present an algorithm for simultaneously constructing both the syntax and semantics of a sentence using a Lexicalized Tree Adjoining Grammar (LTAG). This approach captures naturally and elegantly the interaction between pragmatic and syntactic constraints on descriptions in a sentence, and the inferential interactions between multiple descriptions in a sentence. At the same time, it exploits linguistically motivated, declarative specifications of the discourse functions of syntactic constructions to make contextually appropriate syntactic choices.
Computational Intelligence | 2003
Matthew Stone; Christine Doran; Bonnie Webber; Tonia Bleam; Martha Palmer
The process of microplanning in natural language generation (NLG) encompasses a range of problems in which a generator must bridge underlying domain-specific representations and general linguistic representations. These problems include constructing linguistic referring expressions to identify domain objects, selecting lexical items to express domain concepts, and using complex linguistic constructions to concisely convey related domain facts.
International Conference on Natural Language Generation | 2000
Justine Cassell; Matthew Stone; Hao Yan
We describe the generation of communicative actions in an implemented embodied conversational agent. Our agent plans each utterance so that multiple communicative goals may be realized opportunistically by a composite action including not only speech but also coverbal gesture that fits the context and the ongoing speech in ways representative of natural human conversation. We accomplish this by reasoning from a grammar which describes gesture declaratively in terms of its discourse function, semantics and synchrony with speech.
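The following Python fragment gives a toy picture of what a declarative gesture entry of this kind might look like: each entry pairs a gesture with a discourse function, a semantic contribution, and a synchrony constraint, and selection simply checks which entries help realize the current communicative goals. The field names and entries are invented for illustration; the paper's grammar is far richer.

    # Invented, illustrative gesture entries; not the grammar from the paper.
    GESTURE_GRAMMAR = [
        {"gesture": "right-hand point",
         "discourse_function": "identify referent",
         "semantics": "locate(x)",
         "synchrony": "stroke on the accented referring expression"},
        {"gesture": "two-hand spread",
         "discourse_function": "convey extent",
         "semantics": "size(x, large)",
         "synchrony": "stroke on the accented adjective"},
    ]

    def select_gestures(goals):
        """Return every entry whose semantic contribution matches a communicative goal."""
        return [entry for entry in GESTURE_GRAMMAR
                if any(goal in entry["semantics"] for goal in goals)]

    if __name__ == "__main__":
        print(select_gestures(["locate"]))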
Meeting of the Association for Computational Linguistics | 1999
Bonnie Webber; Alistair Knott; Matthew Stone; Aravind K. Joshi
We show that discourse structure need not bear the full burden of conveying discourse relations by showing that many of them can be explained nonstructurally in terms of the grounding of anaphoric presuppositions (Van der Sandt, 1992). This simplifies discourse structure, while still allowing the realisation of a full range of discourse relations. This is achieved using the same semantic machinery used in deriving clause-level semantics.
International Conference on Natural Language Generation | 2000
Matthew Stone
A range of research has explored the problem of generating referring expressions that uniquely identify a single entity from the shared context. But what about expressions that identify sets of entities? In this paper, I adapt recent semantic research on plural descriptions (using covers to abstract collective and distributive readings, and using sets of assignments to represent dependencies among references) to describe a search problem for set-identifying expressions that largely mirrors the search problem for singular referring expressions. By structuring the search space only in terms of the words that can be added to the description, the proposal defuses potential combinatorial explosions that might otherwise arise with reference to sets.
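A stripped-down version of this search, ignoring the covers and assignment-set machinery, can be written as an incremental loop in Python: keep adding words that shrink the description's extension without excluding any target entity, and stop when the extension equals the target set. The toy domain and vocabulary below are assumptions for illustration only.

    # Toy domain: each entity is listed with the words (properties) that apply to it.
    DOMAIN = {
        "d1": {"dog", "small", "brown"},
        "d2": {"dog", "small", "black"},
        "d3": {"dog", "large", "brown"},
        "c1": {"cat", "small", "black"},
    }

    def extension(words, domain):
        """Entities whose properties include every word in the description."""
        return {e for e, props in domain.items() if words <= props}

    def describe_set(target, domain, vocabulary):
        """Greedily add words until the description picks out exactly the target set."""
        words = set()
        while extension(words, domain) != target:
            # Candidate words keep every target entity but rule out at least one distractor.
            candidates = [w for w in vocabulary - words
                          if target <= extension(words | {w}, domain)
                          and extension(words | {w}, domain) < extension(words, domain)]
            if not candidates:
                return None  # no distinguishing description with this vocabulary
            words.add(max(candidates,
                          key=lambda w: len(extension(words, domain))
                                        - len(extension(words | {w}, domain))))
        return words

    if __name__ == "__main__":
        vocab = {"dog", "cat", "small", "large", "brown", "black"}
        print(describe_set({"d1", "d2"}, DOMAIN, vocab))  # e.g. {'dog', 'small'}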
Proceedings of Computer Animation 2002 (CA 2002) | 2002
Douglas DeCarlo; Corey Revilla; Matthew Stone; Jennifer J. Venditti
People highlight the intended interpretation of their utterances within a larger discourse by a diverse set of nonverbal signals. These signals represent a key challenge for animated conversational agents because they are pervasive, variable, and need to be coordinated judiciously in an effective contribution to conversation. In this paper we describe a freely-available cross-platform real-time facial animation system, RUTH, that animates such high-level signals in synchrony with speech and lip movements. RUTH adopts an open, layered architecture in which fine-grained features of the animation can be derived by rule from inferred linguistic structure, allowing us to use RUTH, in conjunction with annotation of observed discourse, to investigate the meaningful high-level elements of conversational facial movement for American English speakers.
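As a hint of what a layered, rule-based mapping of this kind might look like in code, the Python sketch below expands annotated high-level signals into lower-level animation commands aligned with the words they attach to. The signal names and rules are made up for the example and are not RUTH's actual inventory or API.

    # Invented rules mapping high-level conversational signals to animation features.
    HIGH_LEVEL_RULES = {
        "theme_accent": ["brow raise", "small head nod"],
        "rheme_accent": ["brow raise", "head tilt"],
        "turn_end": ["gaze to listener"],
    }

    def expand_signals(annotated_utterance):
        """Expand annotated high-level signals into low-level animation commands,
        keeping each command aligned with the word index it attaches to."""
        commands = []
        for word_index, word, signals in annotated_utterance:
            for signal in signals:
                for feature in HIGH_LEVEL_RULES.get(signal, []):
                    commands.append((word_index, word, feature))
        return commands

    if __name__ == "__main__":
        utterance = [(0, "RUTH", ["theme_accent"]),
                     (1, "animates", []),
                     (2, "faces", ["rheme_accent", "turn_end"])]
        for command in expand_signals(utterance):
            print(command)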