Publication


Featured research published by Mark Steedman.


Cognition | 1988

Interaction with context during human sentence processing.

Gerry Altmann; Mark Steedman

Psychological theories of natural language processing have usually assumed that the sentence processor resolves local syntactic ambiguities by selecting a single analysis on the basis of structural criteria such as Frazier's (1978) "minimal attachment." According to such theories, alternative analyses will only be attempted if the initial analysis subsequently proves inconsistent with the context. (See also Ferreira & Clifton, 1986; Ford, Bresnan, & Kaplan, 1982; Rayner, Carlson, & Frazier, 1983.) An alternative hypothesis exists, however: if sentences are understood incrementally, more or less word by word (Marslen-Wilson, 1973, 1975), then syntactic processing can in principle exploit the fact that interpretations are available, using them "interactively" to select among alternative syntactic analyses on the basis of their plausibility with respect to the context. The present paper considers possible architectures for such incremental and interactive sentence processors, and argues for an architecture …


International Conference on Computer Graphics and Interactive Techniques (SIGGRAPH) | 1994

Animated conversation: rule-based generation of facial expression, gesture & spoken intonation for multiple conversational agents

Justine Cassell; Catherine Pelachaud; Norman I. Badler; Mark Steedman; Brett Achorn; Tripp Becket; Brett Douville; Scott Prevost; Matthew Stone

We describe an implemented system which automatically generates and animates conversations between multiple human-like agents with appropriate and synchronized speech, intonation, facial expressions, and hand gestures. Conversation is created by a dialogue planner that produces the text as well as the intonation of the utterances. The speaker/listener relationship, the text, and the intonation in turn drive the facial expression, lip motion, eye gaze, head motion, and arm gesture generators. Coordinated arm, wrist, and hand motions are invoked to create semantically meaningful gestures. Throughout we use examples from an actual synthesized, fully animated conversation.


Linguistic Inquiry | 2000

Information Structure and the Syntax-Phonology Interface

Mark Steedman

The article proposes a theory of grammar relating syntax, discourse semantics, and intonational prosody. The full range of English intonational tunes distinguished by Beckman and Pierrehumbert (1986) and their semantic interpretation in terms of focus and information structure are discussed, including discontinuous themes and rhemes. The theory extends an earlier account based on Combinatory Categorial Grammar, which directly pairs phonological and logical forms without intermediary representational levels.


Computational Linguistics | 2007

CCGbank: A Corpus of CCG Derivations and Dependency Structures Extracted from the Penn Treebank

Julia Hockenmaier; Mark Steedman

This article presents an algorithm for translating the Penn Treebank into a corpus of Combinatory Categorial Grammar (CCG) derivations augmented with local and long-range word-word dependencies. The resulting corpus, CCGbank, includes 99.4% of the sentences in the Penn Treebank. It is available from the Linguistic Data Consortium, and has been used to train wide-coverage statistical parsers that obtain state-of-the-art rates of dependency recovery. In order to obtain linguistically adequate CCG analyses, and to eliminate noise and inconsistencies in the original annotation, an extensive analysis of the constructions and annotations in the Penn Treebank was called for, and a substantial number of changes to the Treebank were necessary. We discuss the implications of our findings for the extraction of other linguistically expressive grammars from the Treebank, and for the design of future treebanks.
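The derivations in CCGbank are built from a small set of combinatory rules over slash categories. As a rough illustration of how such rules work, here is a minimal sketch in Python; the category representation and names are illustrative assumptions, not the CCGbank extraction code.

```python
# Toy CCG categories and three standard combinatory rules.
# Illustrative sketch only; not the CCGbank implementation.

class Cat:
    """A category: atomic (e.g. NP) or a slash category X/Y or X\\Y."""
    def __init__(self, result=None, slash=None, arg=None, atom=None):
        self.result, self.slash, self.arg, self.atom = result, slash, arg, atom

    @staticmethod
    def atomic(name):
        return Cat(atom=name)

    def __eq__(self, other):
        return repr(self) == repr(other)

    def __repr__(self):
        if self.atom:
            return self.atom
        return f"({self.result}{self.slash}{self.arg})"

def forward_apply(left, right):
    """X/Y  Y  =>  X   (forward application, >)"""
    if left.slash == "/" and left.arg == right:
        return left.result
    return None

def backward_apply(left, right):
    """Y  X\\Y  =>  X   (backward application, <)"""
    if right.slash == "\\" and right.arg == left:
        return right.result
    return None

def forward_compose(left, right):
    """X/Y  Y/Z  =>  X/Z   (forward composition, >B)"""
    if left.slash == "/" and right.slash == "/" and left.arg == right.result:
        return Cat(result=left.result, slash="/", arg=right.arg)
    return None

# A transitive verb such as "saw" has category (S\NP)/NP: applying it to
# an object NP yields S\NP, which backward-applies to the subject NP.
NP, S = Cat.atomic("NP"), Cat.atomic("S")
saw = Cat(result=Cat(result=S, slash="\\", arg=NP), slash="/", arg=NP)
vp = forward_apply(saw, NP)          # S\NP
sentence = backward_apply(NP, vp)    # S

# Forward composition combines two incomplete categories:
A, B, C = Cat.atomic("A"), Cat.atomic("B"), Cat.atomic("C")
ac = forward_compose(Cat(result=A, slash="/", arg=B),
                     Cat(result=B, slash="/", arg=C))   # A/C
```

Composition (>B) is what gives CCG its flexible constituency, and it is also what creates the long-range dependencies that CCGbank makes explicit.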


Cognitive Science | 1996

Generating facial expressions for speech

Catherine Pelachaud; Norman I. Badler; Mark Steedman

This article reports results from a program that produces high-quality animation of facial expressions and head movements as automatically as possible in conjunction with meaning-based speech synthesis, including spoken intonation. The goal of the research is as much to test and define our theories of the formal semantics for such gestures, as to produce convincing animation. Towards this end, we have produced a high-level programming language for three-dimensional (3-D) animation of facial expressions. We have been concerned primarily with expressions conveying information correlated with the intonation of the voice: this includes the differences of timing, pitch, and emphasis that are related to such semantic distinctions of discourse as "focus," "topic," and "comment," "theme" and "rheme," or "given" and "new" information. We are also interested in the relation of affect or emotion to facial expression. Until now, systems have not embodied such rule-governed translation from spoken utterance meaning to facial expressions. Our system embodies rules that describe and coordinate these relations: intonation/information, intonation/affect, and facial expressions/affect. A meaning representation includes discourse information: what is contrastive/background information in the given context, and what is the "topic" or "theme" of the discourse? The system maps the meaning representation into how accents and their placement are chosen, how they are conveyed over facial expression, and how speech and facial expressions are coordinated. This determines a sequence of functional groups: lip shapes, conversational signals, punctuators, regulators, and manipulators. Our algorithms then impose synchrony, create coarticulation effects, and determine affectual signals, eye and head movements. The lowest level representation is the Facial Action Coding System (FACS), which makes the generation system portable to other facial models.
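Using FACS as the lowest-level representation means higher-level conversational signals compile down to numbered action units (AUs). The sketch below illustrates that idea: the AU numbers and names are standard FACS codes, but the signal names and their AU groupings are illustrative assumptions, not the paper's actual rule set.

```python
# Compiling high-level conversational signals down to FACS action units.
# AU numbers/names are standard FACS codes (Ekman & Friesen); the
# signal-to-AU mapping below is an illustrative assumption.

# A few standard FACS action units.
ACTION_UNITS = {
    1: "inner brow raiser",
    2: "outer brow raiser",
    4: "brow lowerer",
    5: "upper lid raiser",
    6: "cheek raiser",
    12: "lip corner puller",
    26: "jaw drop",
}

# Hypothetical mapping from conversational signals to AU combinations.
SIGNAL_TO_AUS = {
    "eyebrow_flash_on_accent": [1, 2],   # raised brows on a pitch accent
    "smile": [6, 12],
    "surprise": [1, 2, 5, 26],
}

def compile_signal(signal):
    """Expand a high-level signal into named FACS action units."""
    return [(au, ACTION_UNITS[au]) for au in SIGNAL_TO_AUS[signal]]
```

Because the output is plain AU codes, any facial model that interprets FACS can render the same signal, which is the portability point the abstract makes.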


Linguistics and Philosophy | 1982

On the order of words

Anthony E. Ades; Mark Steedman

Conclusions: There is no doubt that the model presented here is incomplete. Many important categories, particularly negation and the adverbials, have been entirely ignored, and the treatment of Tense and the affixes is certainly inadequate. It also remains to be seen how the many constructions that have been ignored here are to be accommodated within the framework that has been outlined. However, the fact that a standard categorial lexicon, plus the four rule schemata, seems to come close to exhaustively specifying the main clause constructions of English, and also seems to explain a number of major constraints on transformations, encourages us to compare the theory with certain alternatives, and to examine its broader implications.


Springer Berlin Heidelberg | 2004

APML, a Markup Language for Believable Behavior Generation

Berardina De Carolis; Catherine Pelachaud; Isabella Poggi; Mark Steedman

Developing an embodied conversational agent able to exhibit a humanlike behavior while communicating with other virtual or human agents requires enriching the dialogue of the agent with non-verbal information. Our agent, Greta, is defined as two components: a Mind and a Body. Her mind reflects her personality, her social intelligence, as well as her emotional reaction to events occurring in the environment. Her body corresponds to her physical appearance able to display expressive behaviors. We designed a Mind—Body interface that takes as input a specification of a discourse plan in an XML language (DPML) and enriches this plan with the communicative meanings that have to be attached to it, by producing an input to the Body in a new XML language (APML). Moreover we have developed a language to describe facial expressions. It combines basic facial expressions with operators to create complex facial expressions. The purpose of this chapter is to describe these languages and to illustrate our approach to the generation of behavior of an agent able to act consistently with her goals and with the context of the interaction.
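The Mind produces a discourse plan and the Mind-Body interface enriches it with communicative meanings encoded in XML before handing it to the Body. The fragment below gestures at that pipeline; the tag and attribute names are assumptions chosen for illustration, not the actual APML or DPML schemas.

```python
# Parsing an APML-like annotated dialogue move. Tag/attribute names
# here are hypothetical illustrations, not the real APML schema.
import xml.etree.ElementTree as ET

fragment = """
<apml>
  <performative type="inform" affect="joy">
    <theme>The seminar</theme>
    <rheme>starts at noon</rheme>
  </performative>
</apml>
"""

root = ET.fromstring(fragment)
perf = root.find("performative")

# The Body would read off the communicative functions and select
# matching expressive behaviors (gaze, brows, intonation).
moves = {
    "type": perf.get("type"),
    "affect": perf.get("affect"),
    "theme": perf.findtext("theme"),
    "rheme": perf.findtext("rheme"),
}
```

The design point is the separation of concerns: the Mind commits only to *meanings* (performative, affect, information structure), and the Body decides *how* to realize them on the face and voice.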


Cognitive Psychology | 1978

The psychology of syllogisms

Philip N. Johnson-Laird; Mark Steedman

Two experiments were carried out in which subjects had to draw conclusions from syllogistic premises. The nature of their responses showed that the figure of the syllogisms exerted a strong effect on the accuracy of performance and on the nature of the conclusions that were drawn. For example, premises such as "Some of the parents are scientists; All of the scientists are drivers" tend to elicit the conclusion, "Some of the parents are drivers" rather than its equally valid converse, "Some of the drivers are parents". In general, premises of the form A-B, B-C created a bias towards conclusions of the form A-C, whereas premises of the form B-A, C-B created a bias towards conclusions of the form C-A. The data cast doubt on current theories of syllogistic inference; a new theory was accordingly developed and implemented as a computer program. The theory postulates that quantified assertions receive an analogical mental representation which captures their logical properties structurally. A simple heuristic generates putative conclusions from the combined representations of premises, and such conclusions are put to logical tests which, if exhaustively conducted, invariably yield a correct response. Erroneous responses may occur if there is a failure to test exhaustively.
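The "exhaustive logical test" in the theory amounts to checking that no model of the premises falsifies a putative conclusion. A brute-force version of that check over a small domain can be sketched as follows; this is an illustration of the logic, not the paper's analogical-representation program.

```python
# Exhaustive validity check for syllogisms: a conclusion follows iff it
# holds in every model satisfying the premises. Brute-force illustration.
from itertools import product

DOMAIN = range(4)  # a small universe of individuals

def models():
    """Enumerate all assignments of individuals to sets A, B, C."""
    for bits in product(range(8), repeat=len(DOMAIN)):
        A = {i for i, m in zip(DOMAIN, bits) if m & 1}
        B = {i for i, m in zip(DOMAIN, bits) if m & 2}
        C = {i for i, m in zip(DOMAIN, bits) if m & 4}
        yield A, B, C

def some(x, y): return bool(x & y)   # "Some X are Y"
def all_(x, y): return x <= y        # "All X are Y"

def valid(premise, conclusion):
    """True iff the conclusion holds in every model of the premise."""
    return all(conclusion(a, b, c) for a, b, c in models() if premise(a, b, c))

# "Some parents (A) are scientists (B); all scientists are drivers (C)."
premises = lambda a, b, c: some(a, b) and all_(b, c)

valid(premises, lambda a, b, c: some(a, c))   # "Some A are C": valid
valid(premises, lambda a, b, c: some(c, a))   # converse: equally valid
valid(premises, lambda a, b, c: all_(a, c))   # "All A are C": invalid
```

Both "Some of the parents are drivers" and its converse pass the test, which is exactly why the figural *bias* toward one of them, rather than logic alone, is what the theory must explain.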


North American Chapter of the Association for Computational Linguistics (NAACL) | 2003

Example selection for bootstrapping statistical parsers

Mark Steedman; Rebecca Hwa; Stephen Clark; Miles Osborne; Anoop Sarkar; Julia Hockenmaier; Paul Ruhlen; Steven Baker; Jeremiah Crim



International Conference on Computational Linguistics (COLING) | 2004

Wide-coverage semantic representations from a CCG parser

Johan Bos; Stephen Clark; Mark Steedman; James R. Curran; Julia Hockenmaier


Collaboration


Dive into Mark Steedman's collaborations.

Top Co-Authors

Norman I. Badler
University of Pennsylvania

Scott Prevost
University of Pennsylvania

Catherine Pelachaud
Centre national de la recherche scientifique

Kira Mourao
University of Edinburgh