Publication


Featured research published by Ronald M. Kaplan.


Meeting of the Association for Computational Linguistics | 2002

Parsing the Wall Street Journal using a Lexical-Functional Grammar and Discriminative Estimation Techniques

Stefan Riezler; Tracy Holloway King; Ronald M. Kaplan; Richard S. Crouch; John T. Maxwell; Mark Johnson

We present a stochastic parsing system consisting of a Lexical-Functional Grammar (LFG), a constraint-based parser and a stochastic disambiguation model. We report on the results of applying this system to parsing the UPenn Wall Street Journal (WSJ) treebank. The model combines full and partial parsing techniques to reach full grammar coverage on unseen data. The treebank annotations are used to provide partially labeled data for discriminative statistical estimation using exponential models. Disambiguation performance is evaluated by measuring matches of predicate-argument relations on two distinct test sets. On a gold standard of manually annotated f-structures for a subset of the WSJ treebank, this evaluation reaches 79% F-score. An evaluation on a gold standard of dependency relations for Brown corpus data achieves 76% F-score.
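The exponential model the abstract mentions can be illustrated with a minimal log-linear sketch: each candidate parse is scored by exponentiating a weighted feature sum and normalizing over the candidate set. The feature names and weights below are invented for illustration and are not from the paper.

```python
# Hedged sketch of log-linear (exponential-model) disambiguation:
# score each candidate parse by exp(w . f(parse)), normalize over the set.
import math

def probs(candidates, weights):
    """candidates: list of feature dicts; return normalized probabilities."""
    scores = [math.exp(sum(weights.get(k, 0.0) * v for k, v in c.items()))
              for c in candidates]
    z = sum(scores)
    return [s / z for s in scores]

parses = [{"attach_high": 1.0}, {"attach_low": 1.0}]
w = {"attach_high": 0.5, "attach_low": -0.5}
p = probs(parses, w)
print(p)  # the higher-weighted parse gets the larger probability
```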


Artificial Intelligence | 1977

GUS, a frame-driven dialog system

Daniel G. Bobrow; Ronald M. Kaplan; Martin Kay; Donald A. Norman; Henry Thompson; Terry Winograd

GUS is the first of a series of experimental computer systems that we intend to construct as part of a program of research on language understanding. In large measure, these systems will fill the role of periodic progress reports, summarizing what we have learned, assessing the mutual coherence of the various lines of investigation we have been following, and suggesting where more emphasis is needed in future work. GUS (Genial Understander System) is intended to engage a sympathetic and highly cooperative human in an English dialog, directed towards a specific goal within a very restricted domain of discourse. As a starting point, GUS was restricted to the role of a travel agent in a conversation with a client who wants to make a simple return trip to a single city in California.

There is good reason for restricting the domain of discourse for a computer system which is to engage in an English dialog. Specializing the subject matter that the system can talk about permits it to achieve some measure of realism without encompassing all the possibilities of human knowledge or of the English language. It also provides the user with specific motivation for participating in the conversation, thus narrowing the range of expectations that GUS must have about the user's purposes. A system restricted in this way will be more able to guide the conversation within the boundaries of its competence.
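The frame-driven control strategy the abstract describes can be sketched in a few lines: the system maintains a frame of slots for the trip and prompts for whichever slot is still unfilled. The slot names here are illustrative assumptions, not taken from the GUS implementation.

```python
# Minimal sketch of frame-driven dialog control in the spirit of GUS:
# prompt for the first unfilled slot of the trip frame. Slot names invented.
TRIP_FRAME = ["destination", "travel_date", "return_date"]

def next_prompt(frame):
    """Return a question for the first unfilled slot, or None when done."""
    for slot in TRIP_FRAME:
        if slot not in frame:
            return f"What is your {slot.replace('_', ' ')}?"
    return None

frame = {"destination": "San Diego"}
print(next_prompt(frame))   # asks for the travel date next
frame["travel_date"] = "May 28"
frame["return_date"] = "June 1"
print(next_prompt(frame))   # None: frame complete, dialog goal reached
```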


Archive | 1982

Cross-Serial Dependencies in Dutch

Joan Bresnan; Ronald M. Kaplan; Stanley Peters; Annie Zaenen

Chomsky’s argument that natural languages are not finite state languages puts a lower bound on the weak generative capacity of grammars for natural languages (Chomsky (1956)). Arguments based on weak generative capacity are useful in excluding classes of formal devices as characterizations of natural language, but they are not the only formal considerations by which this can be done. Generative grammars may also be excluded because they cannot assign the correct structural descriptions to the terminal strings of a language; in this case, the grammars are excluded on grounds of strong generative capacity. Thus, the deterministic subclasses of context-free grammars (Knuth (1965)) can be rejected because they cannot assign alternative phrase structures to represent natural language ambiguities.


Language | 1998

Formal issues in lexical-functional grammar

Kim Honeyford; Mary Dalrymple; Ronald M. Kaplan; John T. Maxwell; Annie Zaenen

Preface
Part I. Formal Architecture: 1. The formal architecture of Lexical Functional Grammar (Ronald M. Kaplan) 2. Lexical Functional Grammar: a formal system for grammatical representation (Ronald M. Kaplan and Joan Bresnan)
Part II. Nonlocal Dependencies: 3. Long-distance dependencies, constituent structure, and functional uncertainty (Ronald M. Kaplan and Annie Zaenen) 4. Modeling syntactic constraints on anaphoric binding (Mary Dalrymple, John T. Maxwell III, and Annie Zaenen) 5. An algorithm for functional uncertainty (Ronald M. Kaplan and John T. Maxwell III) 6. Constituent coordination in lexical functional grammar (Ronald M. Kaplan and John T. Maxwell III)
Part III. Word Order: 7. Formal devices for linguistic generalizations: West Germanic word order in lexical functional grammar (Annie Zaenen and Ronald M. Kaplan) 8. Linear order, syntactic rank, and empty categories: on weak crossover (Joan Bresnan)
Part IV. Semantics and Translation: 9. Projections and semantic description in lexical-functional grammar (Per-Kristian Halvorsen and Ronald M. Kaplan) 10. Situation semantics and semantic interpretation in constraint-based grammars (Per-Kristian Halvorsen) 11. Translation by structural correspondences (Ronald M. Kaplan, Klaus Netter, Jürgen Wedekind, and Annie Zaenen)
Part V. Mathematical and Computational Issues: 12. Three seductions of computational psycholinguistics (Ronald M. Kaplan) 13. Logic and feature structures (Mark Johnson) 14. A method for disjunctive constraint satisfaction (John T. Maxwell III and Ronald M. Kaplan) 15. The interface between phrasal and functional constraints (John T. Maxwell III and Ronald M. Kaplan).


International Conference on Computational Linguistics | 1992

Two-level morphology with composition

Lauri Karttunen; Ronald M. Kaplan; Annie Zaenen

Two-Level Morphology with Composition. Lauri Karttunen, Ronald M. Kaplan, and Annie Zaenen. Xerox Palo Alto Research Center; Center for the Study of Language and Information, Stanford University.

1. Limitations of Kimmo systems. The advent of two-level morphology (Koskenniemi [1], Karttunen [2], Antworth [3], Ritchie et al. [4]) has made it relatively easy to develop adequate morphological (or at least morphographical) descriptions for natural languages, clearly superior to earlier cut-and-paste approaches to morphology. Most of the existing Kimmo systems developed within this paradigm consist of:
- linked lexicons stored as annotated letter trees
- morphological information on the leaf nodes of trees
- transducers that encode morphological alternations

An analysis of an inflected word form is produced by mapping the input form to a sequence of lexical forms through the transducers and by composing some output from the annotations on the leaf nodes of the lexical paths that were traversed. Comprehensive morphological descriptions of this type have been developed for several languages including Finnish, Swedish, Russian, English, Swahili, and Arabic. Although they have several good features, these Kimmo systems also have some limitations. The ones we want to address in this paper are the following:

(1) Lexical representations tend to be arbitrary. Because it is difficult to write and test two-level systems that map between pairs of radically dissimilar forms, lexical representations in existing two-level analyzers tend to stay close to the surface forms. This is not a problem for morphologically simple languages like English because, for most words, inflected forms are very similar to the canonical dictionary entry. Except for a small number of irregular verbs and nouns, it is not difficult to create a two-level description for English in which lexical forms coincide with the canonical citation forms found in a dictionary. However, current analyzers for morphologically more complex languages (Finnish and Russian, for example) are not as satisfying in this respect. In these systems, lexical forms typically contain diacritic markers and special symbols; they are not real words in the language. For example, in Finnish the lexical counterpart of otin 'I took' might be rendered as otTallln, where T, al, and I1 are an arbitrary encoding of morphological alternations that determine the allomorphs of the stem and the past tense morpheme. The canonical citation form ottaa 'to take' is composed from annotations on the leaf nodes of the letter trees that are linked to match the input. It is not in any direct way related to the lexical form produced by the transducers.

(2) Morphological categories are not directly encoded as part of the lexical form. Instead of morphemes like Plural or Past, we typically see suffix strings like +s and +ed, which do not by themselves indicate what morpheme they express. Different realizations of the same morphological category are often represented as different even on the lexical side. These characteristics lead to some undesirable consequences:
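The composition idea in the title can be illustrated without finite-state machinery: two-level rules are transducers, and a cascade of rules can be composed into a single mapping from lexical to surface form. In this hedged sketch each "rule" is just a string function and composition is function composition; the toy English spelling rules below are illustrative, not from the paper.

```python
# Toy sketch of rule composition: real systems compose finite-state
# transducers; here rules are string functions over forms with "+"
# marking a morpheme boundary.
def e_deletion(s):
    return s.replace("e+i", "+i")     # bake+ing -> bak+ing

def y_replacement(s):
    return s.replace("y+s", "ie+s")   # try+s -> trie+s

def strip_boundary(s):
    return s.replace("+", "")         # remove morpheme boundaries

def compose(*rules):
    """Compose a cascade of rules into a single mapping."""
    def composed(s):
        for rule in rules:
            s = rule(s)
        return s
    return composed

surface = compose(e_deletion, y_replacement, strip_boundary)
print(surface("bake+ing"))  # -> "baking"
print(surface("try+s"))     # -> "tries"
```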


Conference of the European Chapter of the Association for Computational Linguistics | 1989

Translation by structural correspondences

Ronald M. Kaplan; Klaus Netter; Jürgen Wedekind; Annie Zaenen

We sketch and illustrate an approach to machine translation that exploits the potential of simultaneous correspondences between separate levels of linguistic representation, as formalized in the LFG notion of codescriptions. The approach is illustrated with examples from English, German and French where the source and the target language sentence show noteworthy differences in linguistic analysis.


Archive | 1991

A Method for Disjunctive Constraint Satisfaction

John T. Maxwell; Ronald M. Kaplan

A distinctive property of many current grammatical formalisms is their use of feature equality constraints to express a wide variety of grammatical dependencies. Lexical-Functional Grammar (Kaplan & Bresnan, 1982), Head-Driven Phrase-Structure Grammar (Pollard & Sag, 1987), PATR (Karttunen, 1986a), FUG (Kay, 1979, 1985), and the various forms of categorial unification grammar (Karttunen, 1986b; Uszkoreit, 1986; Zeevat, Klein, & Calder, 1987) all require an analysis of a sentence to satisfy a collection of feature constraints in addition to a set of conditions on the arrangement of words and phrases. Conjunctions of equality constraints can be quickly solved by standard unification algorithms, so they in themselves do not present a computational problem. However, the equality constraints derived for typical sentences are not merely conjoined together in a form that unification algorithms can deal with directly. Rather, they are embedded as primitive elements in complex disjunctive formulas. For some formalisms, these disjunctions arise from explicit disjunction operators that the constraint language provides for (e.g., LFG) while for others disjunctive constraints are derived from the application of alternative phrase structure rules (e.g., PATR). In either case, disjunctive specifications help to simplify the statement of grammatical possibilities. Alternatives expressed locally within individual rules and lexical entries can appeal to more general disjunctive processing mechanisms to resolve their global interactions.
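The contrast the abstract draws can be made concrete with a small sketch: conjoined equality constraints are solvable by standard unification, while disjunctions embed them in larger formulas. Using flat dicts as feature structures, the expand-all-disjuncts loop below is the naive DNF baseline that Maxwell and Kaplan's contexted method is designed to avoid; the CASE example is invented for illustration.

```python
# Naive disjunctive constraint solving: multiply out all alternatives
# and unify each combination, keeping the consistent ones.
from itertools import product

def unify(f, g):
    """Unify two flat feature structures; return None on a clash."""
    out = dict(f)
    for k, v in g.items():
        if k in out and out[k] != v:
            return None
        out[k] = v
    return out

def solve(disjunctions):
    """disjunctions: list of alternative-lists of feature structures."""
    solutions = []
    for choice in product(*disjunctions):
        result = {}
        for fs in choice:
            result = unify(result, fs)
            if result is None:
                break
        if result is not None:
            solutions.append(result)
    return solutions

constraints = [
    [{"CASE": "NOM"}, {"CASE": "ACC"}],   # case-ambiguous noun form
    [{"CASE": "ACC", "NUM": "SG"}],       # verb requires an accusative object
]
print(solve(constraints))  # only the ACC alternative survives unification
```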


Language | 2000

Feature Indeterminacy and Feature Resolution.

Mary Dalrymple; Ronald M. Kaplan

Syntactic features like CASE, PERSON, and GENDER are often assumed to have simple atomic values that are checked for consistency by the standard predicate of equality. The CASE feature has values such as NOM or ACC, and values like MASC and FEM are assumed for the feature GENDER. But such a view does not square with some of the complex behavior these features exhibit. It allows no obvious account of FEATURE INDETERMINACY (how a particular form can satisfy conflicting requirements on a feature like CASE), nor does it give an obvious account of FEATURE RESOLUTION (how PERSON and GENDER features of a coordinate noun phrase are determined on the basis of the conjuncts). We present a theory of feature representation and feature checking that solves these two problems, providing a straightforward characterization of feature indeterminacy and feature resolution while sticking to structures and standard interpretations that have independent motivation. Our theory of features is formulated within the LFG framework, but we believe that similar solutions can be developed within other syntactic approaches.
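A hedged sketch of the abstract's two problems, under the simplifying assumption that an indeterminate feature carries a set of values rather than an atom: a syncretic form satisfies a CASE requirement if the required value is in its set, and person resolution for coordination can be modeled as a union plus a ranking. The values and the resolution rule below are illustrative simplifications, not the paper's formal system.

```python
# Feature indeterminacy: a case-syncretic form satisfies "conflicting"
# requirements because checking is membership, not atomic equality.
def satisfies(form_case, required):
    return required in form_case

syncretic = {"NOM", "ACC"}          # one form, ambiguous between NOM and ACC
print(satisfies(syncretic, "NOM"))  # True
print(satisfies(syncretic, "ACC"))  # True: same form meets both demands

# Feature resolution: PERSON of a coordinate NP computed from the
# conjuncts, with first person outranking second outranking third.
def resolve_person(conjuncts):
    union = set().union(*conjuncts)
    for p in ("1", "2", "3"):
        if p in union:
            return p

print(resolve_person([{"2"}, {"1"}]))  # "you and I" resolves to first person
```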


International Conference on Computational Linguistics | 1988

An algorithm for functional uncertainty

Ronald M. Kaplan; John T. Maxwell

The formal device of functional uncertainty has been introduced into linguistic theory as a means of characterizing long-distance dependencies alternative to conventional phrase-structure based approaches. In this paper we briefly outline the uncertainty concept, and then present an algorithm for determining the satisfiability of acyclic grammatical descriptions containing uncertainty expressions and for synthesizing the grammatically relevant solutions to those descriptions.
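The uncertainty concept can be illustrated concretely: a long-distance equation like (f COMP* OBJ) = g uses a regular expression over attribute paths, saying the filler is the OBJ at the end of any chain of COMPs. This hedged sketch checks such a path against a fully built nested-dict f-structure; the algorithm in the paper works symbolically on grammatical descriptions, not on finished structures.

```python
# Enumerate every value reachable via the path COMP* OBJ from an
# f-structure represented as a nested dict.
def comp_star_obj(f):
    while isinstance(f, dict):
        if "OBJ" in f:
            yield f["OBJ"]
        f = f.get("COMP")   # descend one COMP; stops when absent

fstruct = {"PRED": "think",
           "COMP": {"PRED": "claim",
                    "COMP": {"PRED": "see",
                             "OBJ": {"PRED": "Mary"}}}}
print(list(comp_star_obj(fstruct)))  # [{'PRED': 'Mary'}]
```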


Meeting of the Association for Computational Linguistics | 1998

A Probabilistic Corpus-Driven Model for Lexical-Functional Analysis

Rens Bod; Ronald M. Kaplan

We develop a Data-Oriented Parsing (DOP) model based on the syntactic representations of Lexical-Functional Grammar (LFG). We start by summarizing the original DOP model for tree representations and then show how it can be extended with corresponding functional structures. The resulting LFG-DOP model triggers a new, corpus-based notion of grammaticality, and its probability models exhibit interesting behavior with respect to specificity and the interpretation of ill-formed strings.
