Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Yoav Artzi is active.

Publication


Featured research published by Yoav Artzi.


Meeting of the Association for Computational Linguistics | 2014

Learning to Automatically Solve Algebra Word Problems

Nate Kushman; Yoav Artzi; Luke Zettlemoyer; Regina Barzilay

We present an approach for automatically learning to solve algebra word problems. Our algorithm reasons across sentence boundaries to construct and solve a system of linear equations, while simultaneously recovering an alignment of the variables and numbers in these equations to the problem text. The learning algorithm uses varied supervision, including either full equations or just the final answers. We evaluate performance on a newly gathered corpus of algebra word problems, demonstrating that the system can correctly answer almost 70% of the questions in the dataset. This is, to our knowledge, the first learning result for this task.
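The core output the system recovers is a small linear system aligned to the problem text. As a purely illustrative sketch (not the authors' code), here is the kind of two-variable system such a parser might construct and solve, using Cramer's rule:

```python
# Toy illustration: the kind of linear system a parser might recover
# from a word problem such as:
#   "The sum of two numbers is 12. One number is 2 more than the other."
# Aligning variables to the text yields: x + y = 12, x - y = 2.

def solve_2x2(a1, b1, c1, a2, b2, c2):
    """Solve a1*x + b1*y = c1 and a2*x + b2*y = c2 by Cramer's rule."""
    det = a1 * b2 - a2 * b1
    if det == 0:
        raise ValueError("system has no unique solution")
    x = (c1 * b2 - c2 * b1) / det
    y = (a1 * c2 - a2 * c1) / det
    return x, y

x, y = solve_2x2(1, 1, 12, 1, -1, 2)
print(x, y)  # 7.0 5.0
```

The learning problem in the paper is recovering the coefficients and the variable-to-text alignment; once recovered, solving the system itself is routine, as above.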


Empirical Methods in Natural Language Processing | 2015

Broad-coverage CCG Semantic Parsing with AMR

Yoav Artzi; Kenton Lee; Luke Zettlemoyer

We propose a grammar induction technique for AMR semantic parsing. While previous grammar induction techniques were designed to re-learn a new parser for each target application, the recently annotated AMR Bank provides a unique opportunity to induce a single model for understanding broad-coverage newswire text and support a wide range of applications. We present a new model that combines CCG parsing to recover compositional aspects of meaning and a factor graph to model non-compositional phenomena, such as anaphoric dependencies. Our approach achieves 66.2 Smatch F1 score on the AMR Bank, significantly outperforming the previous state of the art.


Meeting of the Association for Computational Linguistics | 2014

Context-dependent Semantic Parsing for Time Expressions

Kenton Lee; Yoav Artzi; Jesse Dodge; Luke Zettlemoyer

We present an approach for learning context-dependent semantic parsers to identify and interpret time expressions. We use a Combinatory Categorial Grammar to construct compositional meaning representations, while considering contextual cues, such as the document creation time and the tense of the governing verb, to compute the final time values. Experiments on benchmark datasets show that our approach outperforms previous state-of-the-art systems, with error reductions of 13% to 21% in end-to-end performance.
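The contextual cue at the heart of this task is the document creation time. As a hedged sketch of the idea (an illustrative mini-resolver, not the paper's system, which handles far more phenomena), a few relative expressions can be grounded against that anchor date:

```python
from datetime import date, timedelta

# Hypothetical mini-resolver: grounds a few relative time expressions
# against the document creation time, the kind of contextual cue the
# paper's parser exploits.

def resolve(expression, doc_creation_time):
    if expression == "today":
        return doc_creation_time
    if expression == "yesterday":
        return doc_creation_time - timedelta(days=1)
    if expression == "next friday":
        # days until the following Friday (weekday 4), at least 1 day ahead
        days_ahead = (4 - doc_creation_time.weekday()) % 7 or 7
        return doc_creation_time + timedelta(days=days_ahead)
    raise ValueError("unhandled expression")

dct = date(2014, 6, 23)  # a Monday
print(resolve("next friday", dct))  # 2014-06-27
```

The same surface string resolves to a different date under a different creation time, which is exactly why context-dependence matters here.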


Empirical Methods in Natural Language Processing | 2014

Learning Compact Lexicons for CCG Semantic Parsing

Yoav Artzi; Dipanjan Das; Slav Petrov

We present methods to control the lexicon size when learning a Combinatory Categorial Grammar semantic parser. Existing methods incrementally expand the lexicon by greedily adding entries, considering a single training datapoint at a time. We propose using corpus-level statistics for lexicon learning decisions. We introduce voting to globally consider adding entries to the lexicon, and pruning to remove entries no longer required to explain the training data. Our methods result in state-of-the-art performance on the task of executing sequences of natural language instructions, achieving up to 25% error reduction, with lexicons that are up to 70% smaller and are qualitatively less noisy.
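The voting idea can be sketched in a few lines. This is an illustrative simplification under assumed data structures (the paper's actual voting and pruning criteria differ): each training example casts one vote for every lexicon entry used in its parses, and only entries with enough corpus-level support are kept.

```python
from collections import Counter

# Hedged sketch of corpus-level voting for lexicon entries.
# entries_per_example: for each training example, the lexicon entries
# used in its parses. Entries below the vote threshold are pruned.

def vote_and_prune(entries_per_example, min_votes):
    votes = Counter()
    for used_entries in entries_per_example:
        votes.update(set(used_entries))  # at most one vote per example
    return {entry for entry, n in votes.items() if n >= min_votes}

lexicon = vote_and_prune(
    [["walk:S/NP", "left:NP"], ["walk:S/NP"], ["turn:S/NP", "walk:S/NP"]],
    min_votes=2,
)
print(lexicon)  # {'walk:S/NP'}
```

Pruning by corpus-level agreement, rather than greedily adding entries one datapoint at a time, is what keeps the lexicon compact and less noisy.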


Empirical Methods in Natural Language Processing | 2015

Event Detection and Factuality Assessment with Non-Expert Supervision

Kenton Lee; Yoav Artzi; Yejin Choi; Luke Zettlemoyer

Events are communicated in natural language with varying degrees of certainty. For example, if you are “hoping for a raise,” it may be somewhat less likely than if you are “expecting” one. To study these distinctions, we present scalable, high-quality annotation schemes for event detection and fine-grained factuality assessment. We find that non-experts, with very little training, can reliably provide judgments about what events are mentioned and the extent to which the author thinks they actually happened. We also show how such data enables the development of regression models for fine-grained scalar factuality predictions that outperform strong baselines.


Meeting of the Association for Computational Linguistics | 2017

A Corpus of Natural Language for Visual Reasoning

Alane Suhr; Mike Lewis; James Yeh; Yoav Artzi

We present a new visual reasoning language dataset, containing 92,244 pairs of examples of natural statements grounded in synthetic images with 3,962 unique sentences. We describe a method of crowdsourcing linguistically-diverse data, and present an analysis of our data. The data demonstrates a broad set of linguistic phenomena, requiring visual and set-theoretic reasoning. We experiment with various models, and show the data presents a strong challenge for future research.


Empirical Methods in Natural Language Processing | 2016

Neural Shift-Reduce CCG Semantic Parsing

Dipendra Kumar Misra; Yoav Artzi

We present a shift-reduce CCG semantic parser. Our parser uses a neural network architecture that balances model capacity and computational cost. We train by transferring a model from a computationally expensive log-linear CKY parser. Our learner addresses two challenges: selecting the best parse for learning when the CKY parser generates multiple correct trees, and learning from partial derivations when the CKY parser fails to parse. We evaluate on AMR parsing. Our parser performs comparably to the CKY parser, while doing significantly fewer operations. We also present results for greedy semantic parsing with a relatively small drop in performance.
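A shift-reduce parser's efficiency comes from its simple control loop. The skeleton below is an illustrative sketch with a toy grammar, not the paper's parser, which scores shift and reduce actions with a neural network over CCG categories:

```python
# Minimal shift-reduce skeleton: at each step, either REDUCE the top
# two stack items into one, or SHIFT the next token from the buffer.

def shift_reduce(tokens, can_combine, combine):
    stack, buffer = [], list(tokens)
    while buffer or len(stack) > 1:
        if len(stack) >= 2 and can_combine(stack[-2], stack[-1]):
            right = stack.pop()
            left = stack.pop()
            stack.append(combine(left, right))   # REDUCE
        elif buffer:
            stack.append(buffer.pop(0))          # SHIFT
        else:
            break  # no action applies; the parse is partial
    return stack

# Toy grammar: combine when the left item is "a" or an already-built tree.
result = shift_reduce(
    ["a", "b", "a", "c"],
    can_combine=lambda l, r: l == "a" or isinstance(l, tuple),
    combine=lambda l, r: (l, r),
)
print(result)  # [((('a', 'b'), 'a'), 'c')]
```

Each token triggers a constant number of actions, which is why a shift-reduce parser performs far fewer operations than exhaustive CKY chart parsing.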


Human Factors in Computing Systems | 2017

Modeling Sub-Document Attention Using Viewport Time

Max Grusky; Jeiran Jahani; Josh Schwartz; Dan Valente; Yoav Artzi; Mor Naaman

Website measures of engagement captured from millions of users, such as in-page scrolling and viewport position, can provide a deeper understanding of attention than is possible with simpler measures, such as dwell time. Using data from 1.2M news reading sessions, we examine and evaluate three increasingly sophisticated models of sub-document attention computed from viewport time, the time a page component is visible on the user display. Our modeling incorporates prior eye-tracking knowledge about onscreen reading, and we validate it by showing how, when used to estimate user reading rate, it aligns with known empirical measures. We then show how our models reveal an interaction between article topic and attention to page elements. Our approach supports refined large-scale measurement of user engagement at a level previously available only from lab-based eye-tracking studies.
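The basic quantity, viewport time, can be accumulated from scroll snapshots. The sketch below is an assumed minimal formulation (event names and data shapes are hypothetical, not from the paper): each page component is credited with every interval during which it was visible on screen.

```python
# Illustrative sketch: accumulate viewport time per page component from
# a stream of (timestamp_seconds, set_of_visible_component_ids)
# snapshots, e.g. emitted on scroll events.

def viewport_times(snapshots):
    totals = {}
    for (t0, visible), (t1, _) in zip(snapshots, snapshots[1:]):
        for component in visible:
            totals[component] = totals.get(component, 0.0) + (t1 - t0)
    return totals

events = [
    (0.0, {"headline", "para1"}),
    (5.0, {"para1", "para2"}),
    (12.0, {"para2", "footer"}),
    (15.0, set()),  # user left the page
]
print(viewport_times(events))
# headline: 5s, para1: 12s, para2: 10s, footer: 3s
```

Note how dwell time alone would report a single 15-second session, while viewport time attributes attention to individual components.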


Transactions of the Association for Computational Linguistics | 2013

Weakly Supervised Learning of Semantic Parsers for Mapping Instructions to Actions

Yoav Artzi; Luke Zettlemoyer


Empirical Methods in Natural Language Processing | 2013

Scaling Semantic Parsers with On-the-Fly Ontology Matching

Tom Kwiatkowski; Eunsol Choi; Yoav Artzi; Luke Zettlemoyer

Collaboration


Dive into Yoav Artzi's collaborations.

Top Co-Authors

Kenton Lee, University of Washington
Mike Lewis, University of Washington
Tao Lei, Massachusetts Institute of Technology
Yu Zhang, Massachusetts Institute of Technology