
Publication


Featured research published by Bill MacCartney.


Language and Technology Conference | 2006

Learning to recognize features of valid textual entailments

Bill MacCartney; Trond Grenager; Marie-Catherine de Marneffe; Daniel M. Cer; Christopher D. Manning

This paper advocates a new architecture for textual inference in which finding a good alignment is separated from evaluating entailment. Current approaches to semantic inference in question answering and textual entailment have approximated the entailment problem as that of computing the best alignment of the hypothesis to the text, using a locally decomposable matching score. We argue that there are significant weaknesses in this approach, including flawed assumptions of monotonicity and locality. Instead we propose a pipelined approach where alignment is followed by a classification step, in which we extract features representing high-level characteristics of the entailment problem, and pass the resulting feature vector to a statistical classifier trained on development data. We report results on data from the 2005 Pascal RTE Challenge which surpass previously reported results for alignment-based systems.
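The two-stage architecture described above — align first, then classify — can be sketched in miniature. The feature names, the linear scorer, and all values below are illustrative assumptions, not the paper's actual feature set or trained classifier.

```python
# Hedged sketch of a pipelined entailment system: an alignment is produced
# first, then high-level features of the aligned pair are extracted and
# passed to a classifier. Feature names here are hypothetical.

def extract_features(alignment):
    # High-level characteristics of the entailment problem (illustrative)
    return {
        "align_score": alignment["score"],
        "neg_mismatch": alignment["hyp_negated"] != alignment["text_negated"],
        "unaligned_hyp_words": alignment["unaligned"],
    }

def classify_entailment(features, weights, bias=0.0):
    # A simple linear classifier standing in for the statistical
    # classifier trained on development data.
    score = bias + sum(weights.get(k, 0.0) * float(v) for k, v in features.items())
    return score > 0.0

alignment = {"score": 0.8, "hyp_negated": False, "text_negated": False, "unaligned": 0}
weights = {"align_score": 2.0, "neg_mismatch": -3.0, "unaligned_hyp_words": -1.0}
print(classify_entailment(extract_features(alignment), weights))  # True
```

The point of the separation is that the classifier can veto a high-scoring alignment on global grounds (e.g. a negation mismatch) that no locally decomposable matching score can see.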


International Conference on Computational Linguistics | 2008

Modeling Semantic Containment and Exclusion in Natural Language Inference

Bill MacCartney; Christopher D. Manning

We propose an approach to natural language inference based on a model of natural logic, which identifies valid inferences by their lexical and syntactic features, without full semantic interpretation. We greatly extend past work in natural logic, which has focused solely on semantic containment and monotonicity, to incorporate both semantic exclusion and implicativity. Our system decomposes an inference problem into a sequence of atomic edits linking premise to hypothesis; predicts a lexical entailment relation for each edit using a statistical classifier; propagates these relations upward through a syntax tree according to semantic properties of intermediate nodes; and composes the resulting entailment relations across the edit sequence. We evaluate our system on the FraCaS test suite, and achieve a 27% reduction in error from previous work. We also show that hybridizing an existing RTE system with our natural logic system yields significant gains on the RTE3 test suite.
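The final step — composing entailment relations across the edit sequence — amounts to folding a join operation over per-edit relations. The fragment below shows only a toy slice of that idea with invented relation names and a deliberately incomplete join table; the full system's relation inventory and join table are richer.

```python
# Toy fragment of joining entailment relations across consecutive atomic
# edits. Only a few join-table entries are sketched; joins not covered are
# approximated by independence, the weakest relation.

EQ, FWD, REV, NEG, INDEP = "equivalence", "forward", "reverse", "negation", "independence"

JOIN = {
    (EQ, FWD): FWD, (FWD, EQ): FWD,
    (FWD, FWD): FWD,           # entailment chains compose
    (REV, REV): REV,
    (NEG, NEG): EQ,            # double negation restores equivalence
}

def join_all(relations):
    # Fold the per-edit relations into one top-level relation, starting
    # from equivalence (the identity of the join).
    result = EQ
    for r in relations:
        result = JOIN.get((result, r), r if result == EQ else INDEP)
    return result

print(join_all([FWD, FWD]))  # forward
print(join_all([NEG, NEG]))  # equivalence
```

Falling back to independence when a join is not determined is a conservative choice: it never licenses an inference the table cannot justify.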


Meeting of the Association for Computational Linguistics | 2007

Natural Logic for Textual Inference

Bill MacCartney; Christopher D. Manning

This paper presents the first use of a computational model of natural logic---a system of logical inference which operates over natural language---for textual inference. Most current approaches to the PASCAL RTE textual inference task achieve robustness by sacrificing semantic precision; while broadly effective, they are easily confounded by ubiquitous inferences involving monotonicity. At the other extreme, systems which rely on first-order logic and theorem proving are precise, but excessively brittle. This work aims at a middle way. Our system finds a low-cost edit sequence which transforms the premise into the hypothesis; learns to classify entailment relations across atomic edits; and composes atomic entailments into a top-level entailment judgment. We provide the first reported results for any system on the FraCaS test suite. We also evaluate on RTE3 data, and show that hybridizing an existing RTE system with our natural logic system yields significant performance gains.
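The first stage — finding an edit sequence that transforms the premise into the hypothesis — can be illustrated with an off-the-shelf word-level diff standing in for the system's learned low-cost alignment. This is a sketch, not the paper's method.

```python
import difflib

# Sketch of extracting atomic edits (substitutions, insertions, deletions)
# that transform a premise into a hypothesis, using word-level diff as a
# stand-in for the system's learned low-cost edit search.

def atomic_edits(premise, hypothesis):
    p, h = premise.split(), hypothesis.split()
    edits = []
    for tag, i1, i2, j1, j2 in difflib.SequenceMatcher(a=p, b=h).get_opcodes():
        if tag != "equal":
            edits.append((tag, " ".join(p[i1:i2]), " ".join(h[j1:j2])))
    return edits

print(atomic_edits("every dog barks", "every animal barks"))
# [('replace', 'dog', 'animal')]
```

Each such edit is then classified for its entailment relation, which is where the monotonicity reasoning (e.g. that "every" flips entailment direction in its first argument) enters.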


Proceedings of the Eighth International Conference on Computational Semantics | 2009

An extended model of natural logic

Bill MacCartney; Christopher D. Manning

We propose a model of natural language inference which identifies valid inferences by their lexical and syntactic features, without full semantic interpretation. We extend past work in natural logic, which has focused on semantic containment and monotonicity, by incorporating both semantic exclusion and implicativity. Our model decomposes an inference problem into a sequence of atomic edits linking premise to hypothesis; predicts a lexical semantic relation for each edit; propagates these relations upward through a semantic composition tree according to properties of intermediate nodes; and joins the resulting semantic relations across the edit sequence. A computational implementation of the model achieves 70% accuracy and 89% precision on the FraCaS test suite. Moreover, including this model as a component in an existing system yields significant performance gains on the Recognizing Textual Entailment challenge.
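The projection step — propagating a lexical relation upward through the semantic composition tree — can be sketched by tracking only monotonicity, the simplest of the projective properties the model uses. Everything below is a simplified illustration, not the implemented projection machinery.

```python
# Minimal sketch of projecting a lexical relation upward through
# monotonicity contexts. Only upward/downward monotonicity is modeled;
# the full model also handles exclusion and implicativity.

FWD, REV = "forward", "reverse"   # forward: the edited term is generalized

def project(relation, contexts):
    # Each downward-monotone context (e.g. under negation, or in the
    # restrictor of "every") flips forward and reverse entailment.
    for ctx in contexts:
        if ctx == "downward":
            relation = REV if relation == FWD else FWD if relation == REV else relation
    return relation

# "dog" -> "animal" is forward entailment at the lexical level; under one
# downward-monotone context it projects to reverse entailment.
print(project(FWD, ["downward"]))  # reverse
print(project(FWD, []))            # forward
```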


Empirical Methods in Natural Language Processing | 2008

A Phrase-Based Alignment Model for Natural Language Inference

Bill MacCartney; Michel Galley; Christopher D. Manning

The alignment problem---establishing links between corresponding phrases in two related sentences---is as important in natural language inference (NLI) as it is in machine translation (MT). But the tools and techniques of MT alignment do not readily transfer to NLI, where one cannot assume semantic equivalence, and for which large volumes of bitext are lacking. We present a new NLI aligner, the MANLI system, designed to address these challenges. It uses a phrase-based alignment representation, exploits external lexical resources, and capitalizes on a new set of supervised training data. We compare the performance of MANLI to existing NLI and MT aligners on an NLI alignment task over the well-known Recognizing Textual Entailment data. We show that MANLI significantly outperforms existing aligners, achieving gains of 6.2% in F1 over a representative NLI aligner and 10.5% over GIZA++.
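The F1 comparison above treats an alignment as a set of links between the two sentences, scored against gold links. The sketch below shows that evaluation metric on invented toy data; the link representation is an assumption, simplified from MANLI's phrase-based one.

```python
# Sketch of scoring a predicted alignment against a gold alignment,
# treating each alignment as a set of (premise_span, hypothesis_span)
# links. The example links are invented toy data.

def alignment_f1(predicted, gold):
    tp = len(predicted & gold)
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(gold) if gold else 0.0
    if precision + recall == 0.0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

gold = {(("a", "man"), ("a", "person")), (("sleeps",), ("rests",))}
pred = {(("a", "man"), ("a", "person"))}
print(round(alignment_f1(pred, gold), 3))  # 0.667
```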


Meeting of the Association for Computational Linguistics | 2007

Learning Alignments and Leveraging Natural Logic

Nathanael Chambers; Daniel M. Cer; Trond Grenager; David Leo Wright Hall; Chloé Kiddon; Bill MacCartney; Marie-Catherine de Marneffe; Daniel Ramage; Eric Yeh; Christopher D. Manning

We describe an approach to textual inference that improves alignments at both the typed dependency level and at a deeper semantic level. We present a machine learning approach to alignment scoring, a stochastic search procedure, and a new tool that finds deeper semantic alignments, allowing rapid development of semantic features over the aligned graphs. Further, we describe a complementary semantic component based on natural logic, which shows an added gain of 3.13% accuracy on the RTE3 test set.
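The stochastic search procedure mentioned above can be illustrated with a toy local search over alignment links. The scoring function here merely rewards aligning identical words; the learned alignment scorer, and everything else below, is a stand-in assumption.

```python
import random

# Toy stochastic local search over word alignments: repeatedly toggle a
# random candidate link and keep the change only if a (stand-in) scoring
# function improves. The real system's learned scorer is not reproduced.

def score(alignment):
    # Stand-in scorer: reward only links between identical words.
    return sum(1.0 for t, h in alignment if t == h)

def stochastic_search(text, hyp, steps=200, seed=0):
    rng = random.Random(seed)
    best, best_score = [], score([])
    for _ in range(steps):
        candidate = list(best)
        link = (rng.choice(text), rng.choice(hyp))
        if link in candidate:
            candidate.remove(link)   # toggle the link off
        else:
            candidate.append(link)   # toggle the link on
        if score(candidate) > best_score:
            best, best_score = candidate, score(candidate)
    return set(best)

print(stochastic_search(["a", "man", "sleeps"], ["a", "person", "rests"]))
```

With a richer scorer (lexical similarity, syntactic context) the same toggle-and-accept loop explores alignments that a greedy aligner would miss.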


TextMean '04: Proceedings of the 2nd Workshop on Text Meaning and Interpretation | 2004

Solving logic puzzles: from robust processing to precise semantics

Iddo Lev; Bill MacCartney; Christopher D. Manning; Roger Levy

This paper presents initial work on a system that bridges from robust, broad-coverage natural language processing to precise semantics and automated reasoning, focusing on solving logic puzzles drawn from sources such as the Law School Admission Test (LSAT) and the analytic section of the Graduate Record Exam (GRE). We highlight key challenges, and discuss the representations and performance of the prototype system.


Archive | 2014

Natural Logic and Natural Language Inference

Bill MacCartney; Christopher D. Manning

We propose a model of natural language inference which identifies valid inferences by their lexical and syntactic features, without full semantic interpretation. We extend past work in natural logic, which has focused on semantic containment and monotonicity, by incorporating both semantic exclusion and implicativity. Our model decomposes an inference problem into a sequence of atomic edits linking premise to hypothesis; predicts a lexical entailment relation for each edit; propagates these relations upward through a semantic composition tree according to properties of intermediate nodes; and joins the resulting entailment relations across the edit sequence. A computational implementation of the model achieves 70% accuracy and 89% precision on the FraCaS test suite. Moreover, including this model as a component in an existing system yields significant performance gains on the Recognizing Textual Entailment challenge.


Language Resources and Evaluation | 2006

Generating Typed Dependency Parses from Phrase Structure Parses

Marie-Catherine de Marneffe; Bill MacCartney; Christopher D. Manning


Archive | 2009

Natural language inference

Christopher D. Manning; Bill MacCartney
