Rachel Rudinger
Johns Hopkins University
Publications
Featured research published by Rachel Rudinger.
Empirical Methods in Natural Language Processing | 2015
Rachel Rudinger; Pushpendre Rastogi; Francis Ferraro; Benjamin Van Durme
The narrative cloze is an evaluation metric commonly used for work on automatic script induction. While prior work in this area has focused on count-based methods from distributional semantics, such as pointwise mutual information, we argue that the narrative cloze can be productively reframed as a language modeling task. By training a discriminative language model for this task, we attain improvements of up to 27 percent over prior methods on standard narrative cloze metrics.
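To make the reframing concrete, here is a minimal sketch contrasting a count-based PMI scorer with a simple smoothed language-model scorer on a toy narrative cloze instance. The event chains, vocabulary, and the bigram scorer are illustrative assumptions; the paper's discriminative language model is not reproduced here.

```python
import math
from collections import Counter
from itertools import combinations

# Toy protagonist-sharing event chains (illustrative only, not the paper's data).
chains = [
    ["order.subj", "eat.subj", "pay.subj", "leave.subj"],
    ["order.subj", "complain.subj", "leave.subj"],
    ["order.subj", "eat.subj", "tip.subj", "leave.subj"],
]

event_counts = Counter(e for chain in chains for e in chain)
pair_counts, bigram_counts = Counter(), Counter()
for chain in chains:
    for a, b in combinations(chain, 2):          # unordered co-occurrence
        pair_counts[(a, b)] += 1
        pair_counts[(b, a)] += 1
    for prev, nxt in zip(chain, chain[1:]):      # ordered adjacency
        bigram_counts[(prev, nxt)] += 1

total_events = sum(event_counts.values())
total_pairs = sum(pair_counts.values())

def pmi_score(context, candidate):
    """Count-based baseline: summed pointwise mutual information with the context."""
    score = 0.0
    for c in context:
        p_ab = pair_counts[(c, candidate)] / total_pairs
        if p_ab == 0:
            return float("-inf")
        score += math.log(p_ab / ((event_counts[c] / total_events)
                                  * (event_counts[candidate] / total_events)))
    return score

def lm_score(context, candidate, alpha=1.0):
    """Language-modeling reframing: smoothed log-probability of the candidate
    following each context event (a stand-in for the paper's discriminative LM)."""
    vocab = len(event_counts)
    return sum(math.log((bigram_counts[(c, candidate)] + alpha)
                        / (event_counts[c] + alpha * vocab))
               for c in context)

# Narrative cloze: hold out "pay.subj" and rank every known event.
context = ["order.subj", "eat.subj", "leave.subj"]
for score in (pmi_score, lm_score):
    ranking = sorted(event_counts, key=lambda e: score(context, e), reverse=True)
    print(score.__name__, ranking[:3])
```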
Empirical Methods in Natural Language Processing | 2016
Aaron Steven White; Drew Reisinger; Keisuke Sakaguchi; Tim Vieira; Sheng Zhang; Rachel Rudinger; Kyle Rawlins; Benjamin Van Durme
We present a framework for augmenting data sets from the Universal Dependencies project with Universal Decompositional Semantics. Where the Universal Dependencies project aims to provide a syntactic annotation standard that can be used consistently across many languages as well as a collection of corpora that use that standard, our extension has similar aims for semantic annotation. We describe results from annotating the English Universal Dependencies treebank, dealing with word senses, semantic roles, and event properties.
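As a rough illustration of what such an augmentation looks like as data, the sketch below layers a dictionary of semantic attributes over a toy UD-style parse. The attribute names and values (a word sense, proto-role properties, a factuality score) are hypothetical stand-ins, not the project's actual annotation schema.

```python
from dataclasses import dataclass, field

@dataclass
class UDToken:
    id: int
    form: str
    upos: str
    head: int
    deprel: str
    semantics: dict = field(default_factory=dict)   # added semantic layer

# "The chef cooked dinner" (hand-built toy UD parse).
tokens = [
    UDToken(1, "The", "DET", 2, "det"),
    UDToken(2, "chef", "NOUN", 3, "nsubj"),
    UDToken(3, "cooked", "VERB", 0, "root"),
    UDToken(4, "dinner", "NOUN", 3, "obj"),
]

# Hypothetical decompositional annotations keyed by token id; the syntax is untouched.
annotations = {
    2: {"sense": "chef.n.01", "proto_role": {"volition": True, "sentient": True}},
    3: {"factuality": 2.5},                        # e.g. a scalar happened/didn't-happen rating
    4: {"proto_role": {"change_of_state": True}},
}
for tok in tokens:
    tok.semantics.update(annotations.get(tok.id, {}))

for tok in tokens:
    print(tok.id, tok.form, tok.deprel, tok.semantics)
```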
Workshop on EVENTS: Definition, Detection, Coreference, and Representation | 2014
Rachel Rudinger; Benjamin Van Durme
The Stanford Dependencies are a deep syntactic representation that is widely used for semantic tasks such as Recognizing Textual Entailment. But do they capture all of the semantic information a meaning representation ought to convey? This paper explores this question by investigating the feasibility of mapping Stanford dependency parses to Hobbsian Logical Form, a practical, event-theoretic semantic representation, using only a set of deterministic rules. Although we find that such a mapping is possible in a large number of cases, we also find cases for which such a mapping seems to require information beyond what the Stanford Dependencies encode. These cases shed light on the kinds of semantic information that are and are not present in the Stanford Dependencies.
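A minimal sketch of the kind of deterministic mapping the paper investigates: a few hand-written rules turn (head, relation, dependent) triples from a toy Stanford-style parse into a simplified event-theoretic formula. The rules, relation labels, and output format here are illustrative and far cruder than Hobbsian Logical Form.

```python
from dataclasses import dataclass

@dataclass
class Token:
    index: int
    form: str
    head: int          # index of the head token, 0 for the root
    deprel: str        # dependency relation label

# "A dog chased the cat" with Stanford-style relations (hand-built toy parse).
sentence = [
    Token(1, "dog", 2, "nsubj"),
    Token(2, "chased", 0, "root"),
    Token(3, "cat", 2, "dobj"),
]

def to_logical_form(tokens):
    """Deterministic rules: the root verb introduces an event variable;
    nsubj/dobj dependents become Agent/Patient arguments of that event."""
    clauses = []
    event_var = "e1"
    for tok in tokens:
        if tok.deprel == "root":
            clauses.append(f"{tok.form}'({event_var})")
        elif tok.deprel == "nsubj":
            clauses.append(f"Agent({event_var}, x{tok.index}) & {tok.form}(x{tok.index})")
        elif tok.deprel == "dobj":
            clauses.append(f"Patient({event_var}, x{tok.index}) & {tok.form}(x{tok.index})")
    return " & ".join(clauses)

print(to_logical_form(sentence))
# Agent(e1, x1) & dog(x1) & chased'(e1) & Patient(e1, x3) & cat(x3)
```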
Meeting of the Association for Computational Linguistics | 2017
Rachel Rudinger; Chandler May; Benjamin Van Durme
We analyze the Stanford Natural Language Inference (SNLI) corpus in an investigation of bias and stereotyping in NLP data. The SNLI human-elicitation protocol makes it prone to amplifying bias and stereotypical associations, which we demonstrate statistically (using pointwise mutual information) and with qualitative examples.
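For the statistical part, a small hedged sketch of a PMI computation of this kind: pair each identity word appearing in a premise with the content words of its elicited hypotheses, then rank hypothesis words by PMI with that identity word. The word pairs below are invented placeholders, not examples drawn from SNLI.

```python
import math
from collections import Counter

# Invented (premise identity word, elicited hypothesis word) pairs; placeholders only.
pairs = [
    ("woman", "cooking"), ("woman", "shopping"), ("woman", "cooking"),
    ("man", "working"), ("man", "driving"), ("man", "cooking"),
]

premise_counts = Counter(a for a, _ in pairs)
hypothesis_counts = Counter(b for _, b in pairs)
pair_counts = Counter(pairs)
n = len(pairs)

def pmi(a, b):
    """High PMI flags hypothesis words elicited alongside an identity term
    more often than their overall frequencies would predict."""
    p_ab = pair_counts[(a, b)] / n
    if p_ab == 0:
        return float("-inf")
    return math.log(p_ab / ((premise_counts[a] / n) * (hypothesis_counts[b] / n)))

for identity in ("woman", "man"):
    ranked = sorted(hypothesis_counts, key=lambda w: pmi(identity, w), reverse=True)
    print(identity, "->", ranked[:2])
```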
Joint Conference on Lexical and Computational Semantics | 2015
Rachel Rudinger; Vera Demberg; Ashutosh Modi; Benjamin Van Durme; Manfred Pinkal
The automatic induction of scripts (Schank and Abelson, 1977) has been the focus of many recent works. In this paper, we employ a variety of these methods to learn Schank and Abelson’s canonical restaurant script, using a novel dataset of restaurant narratives we have compiled from a website called “Dinners from Hell.” Our models learn narrative chains, script-like structures that we evaluate with the “narrative cloze” task (Chambers and Jurafsky, 2008).
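The evaluation itself is easy to sketch: hold out each event of a test chain, rank the full event vocabulary with whichever model is being compared, and average the rank of the true event (lower is better). The chains and the trivial frequency scorer below are illustrative placeholders for the paper's data and models.

```python
from collections import Counter

# Toy test chains (illustrative only).
test_chains = [
    ["order.subj", "eat.subj", "pay.subj", "leave.subj"],
    ["order.subj", "complain.subj", "leave.subj"],
]
vocabulary = sorted({e for chain in test_chains for e in chain})

def narrative_cloze_avg_rank(chains, vocabulary, score):
    """Hold out each event in turn, rank the vocabulary by the scorer,
    and average the rank of the held-out event."""
    ranks = []
    for chain in chains:
        for i, held_out in enumerate(chain):
            context = chain[:i] + chain[i + 1:]
            ranked = sorted(vocabulary, key=lambda e: score(context, e), reverse=True)
            ranks.append(ranked.index(held_out) + 1)
    return sum(ranks) / len(ranks)

# A frequency-based scorer stands in for any of the models compared in the paper.
event_counts = Counter(e for chain in test_chains for e in chain)
print(narrative_cloze_avg_rank(test_chains, vocabulary,
                               lambda context, e: event_counts[e]))
```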
Transactions of the Association for Computational Linguistics | 2015
Drew Reisinger; Rachel Rudinger; Francis Ferraro; Craig Harman; Kyle Rawlins; Benjamin Van Durme
North American Chapter of the Association for Computational Linguistics | 2018
Rachel Rudinger; Jason Naradowsky; Brian Leonard; Benjamin Van Durme
North American Chapter of the Association for Computational Linguistics | 2018
Adam Poliak; Jason Naradowsky; Aparajita Haldar; Rachel Rudinger; Benjamin Van Durme
Transactions of the Association for Computational Linguistics | 2017
Sheng Zhang; Rachel Rudinger; Kevin Duh; Benjamin Van Durme
North American Chapter of the Association for Computational Linguistics | 2018
Rachel Rudinger; Aaron Steven White; Benjamin Van Durme