Publications


Featured research published by Jason Naradowsky.


Empirical Methods in Natural Language Processing | 2009

Polylingual Topic Models

David M. Mimno; Hanna M. Wallach; Jason Naradowsky; David A. Smith; Andrew McCallum

Topic models are a useful tool for analyzing large text collections, but have previously been applied in only monolingual, or at most bilingual, contexts. Meanwhile, massive collections of interlinked documents in dozens of languages, such as Wikipedia, are now widely available, calling for tools that can characterize content in many languages. We introduce a polylingual topic model that discovers topics aligned across multiple languages. We explore the model's characteristics using two large corpora, each with over ten different languages, and demonstrate its usefulness in supporting machine translation and tracking topic trends across languages.
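
To make the model concrete, here is a minimal Python sketch of the generative story the abstract implies: each tuple of linked documents shares a single topic distribution, while each language keeps its own topic-word distributions over a shared topic space. The topic count, vocabulary sizes, and Dirichlet hyperparameters below are illustrative assumptions, and inference (e.g., Gibbs sampling) is omitted.

import numpy as np

rng = np.random.default_rng(0)

K = 4                              # number of shared topics (assumed)
vocab = {"en": 1000, "de": 800}    # per-language vocabulary sizes (assumed)
alpha, beta = 0.1, 0.01            # Dirichlet hyperparameters (assumed)

# One topic-word distribution per language over a single shared topic
# space: topic k is "the same topic" in every language.
phi = {l: rng.dirichlet(beta * np.ones(V), size=K) for l, V in vocab.items()}

def generate_tuple(doc_lens):
    # A linked document tuple (e.g., Wikipedia articles on one subject
    # in several languages) shares one topic distribution theta.
    theta = rng.dirichlet(alpha * np.ones(K))
    docs = {}
    for l, n in doc_lens.items():
        topics = rng.choice(K, size=n, p=theta)              # per-token topics
        docs[l] = [rng.choice(vocab[l], p=phi[l][k]) for k in topics]
    return theta, docs

theta, docs = generate_tuple({"en": 50, "de": 40})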


Meeting of the Association for Computational Linguistics | 2016

Noise reduction and targeted exploration in imitation learning for Abstract Meaning Representation parsing

James Goodman; Andreas Vlachos; Jason Naradowsky

Semantic parsers map natural language statements into meaning representations, and must abstract over syntactic phenomena, resolve anaphora, and identify word senses to eliminate ambiguous interpretations. Abstract Meaning Representation (AMR) is a recent example of one such semantic formalism which, similar to a dependency parse, utilizes a graph to represent relationships between concepts (Banarescu et al., 2013). As with dependency parsing, transition-based methods are a common approach to this problem. However, when trained in the traditional manner these systems are susceptible to the accumulation of errors when they reach undesirable states during greedy decoding. Imitation learning algorithms have been shown to help these systems recover from such errors. To use these methods effectively for AMR parsing we find it highly beneficial to introduce two novel extensions: noise reduction and targeted exploration. The former mitigates the noise in the feature representation, a result of the complexity of the task. The latter targets the exploration steps of imitation learning towards areas which are likely to provide the most information in the context of a large action space. We achieve state-of-the-art results, and improve upon standard transition-based parsing by 4.7 F1 points.
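
For readers unfamiliar with the imitation-learning setup the abstract builds on, the sketch below shows a generic DAGGER-style training loop in Python: roll in with the learned policy, label every visited state with the expert's action, and retrain on the aggregated data. The environment callbacks and the logistic-regression policy are hypothetical stand-ins, not the authors' parser.

import numpy as np
from sklearn.linear_model import LogisticRegression

def dagger(initial_states, expert_action, features, step, is_final, n_iters=5):
    # features(s) is assumed to return a fixed-length numpy vector.
    X, y = [], []
    # Iteration 0: pure expert roll-outs (exact imitation).
    for s0 in initial_states:
        s = s0
        while not is_final(s):
            X.append(features(s)); y.append(expert_action(s))
            s = step(s, expert_action(s))
    policy = LogisticRegression(max_iter=1000).fit(np.array(X), y)
    for _ in range(n_iters):
        for s0 in initial_states:
            s = s0
            while not is_final(s):
                # Roll in with the current policy so the training states
                # match those the parser will actually reach ...
                a = policy.predict(features(s).reshape(1, -1))[0]
                # ... but label each visited state with the expert action.
                X.append(features(s)); y.append(expert_action(s))
                s = step(s, a)
        policy = LogisticRegression(max_iter=1000).fit(np.array(X), y)
    return policy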


North American Chapter of the Association for Computational Linguistics | 2015

WOLFE: An NLP-friendly Declarative Machine Learning Stack

Sameer Singh; Tim Rocktäschel; Luke Hewitt; Jason Naradowsky; Sebastian Riedel

Developing machine learning algorithms for natural language processing (NLP) applications is inherently an iterative process, involving a continuous refinement of the choice of model, engineering of features, selection of inference algorithms, search for the right hyperparameters, and error analysis. Existing probabilistic programming languages (PPLs) only provide partial solutions; most of them do not support commonly used models such as matrix factorization or neural networks, and do not facilitate interactive and iterative programming that is crucial for rapid development of these models. In this demo we introduce WOLFE, a stack designed to facilitate the development of NLP applications: (1) the WOLFE language allows the user to concisely define complex models, enabling easy modification and extension, (2) the WOLFE interpreter transforms declarative machine learning code into automatically differentiable terms or, where applicable, into factor graphs that allow for complex models to be applied to real-world applications, and (3) the WOLFE IDE provides a number of different visual and interactive elements, allowing intuitive exploration and editing of the data representations, the underlying graphical models, and the execution of the inference algorithms.
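
WOLFE's actual language and interpreter are not reproduced here; as a toy analogy only, the following Python illustrates the general idea of writing a model declaratively as plain math and letting a toolkit turn it into a differentiable objective (here via crude numerical differentiation, standing in for WOLFE's transformation to automatically differentiable terms).

import numpy as np

def loss(w, X, y):
    # "Declarative" model definition: logistic loss written as plain math.
    p = 1.0 / (1.0 + np.exp(-X @ w))
    p = np.clip(p, 1e-9, 1 - 1e-9)
    return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

def numeric_grad(f, w, *args, eps=1e-6):
    # Stand-in for automatic differentiation: central differences.
    g = np.zeros_like(w)
    for i in range(len(w)):
        e = np.zeros_like(w); e[i] = eps
        g[i] = (f(w + e, *args) - f(w - e, *args)) / (2 * eps)
    return g

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 3))
y = (X @ np.array([1.0, -2.0, 0.5]) > 0).astype(float)
w = np.zeros(3)
for _ in range(200):                  # plain gradient descent on the objective
    w -= 0.5 * numeric_grad(loss, w, X, y)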


Meeting of the Association for Computational Linguistics | 2015

Matrix and Tensor Factorization Methods for Natural Language Processing

Guillaume Bouchard; Jason Naradowsky; Sebastian Riedel; Tim Rocktäschel; Andreas Vlachos

Tensor and matrix factorization methods have attracted a lot of attention recently thanks to their successful applications to information extraction, knowledge base population, lexical semantics and dependency parsing. In the first part, we will cover the basics of matrix and tensor factorization theory and optimization, and then proceed to more advanced topics involving convex surrogates and alternative losses. In the second part we will discuss recent NLP applications of these methods and show the connections with other popular methods such as transductive learning, topic models and neural networks. The aim of this tutorial is to present in detail applied factorization methods, as well as to introduce more recently proposed methods that are likely to be useful to NLP applications.
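
As a concrete instance of the basics the first part covers, here is a minimal low-rank matrix factorization in Python, minimizing squared reconstruction error with SGD. The matrix, rank, and learning rate are illustrative; real NLP applications (e.g., knowledge base population) would typically use sparse relation matrices and different losses.

import numpy as np

rng = np.random.default_rng(0)
M = rng.random((20, 30))        # toy observed matrix, e.g. co-occurrence scores
k, lr = 5, 0.05                 # illustrative rank and learning rate

U = 0.1 * rng.standard_normal((20, k))
V = 0.1 * rng.standard_normal((30, k))

for _ in range(200):
    for i in range(M.shape[0]):
        for j in range(M.shape[1]):
            err = M[i, j] - U[i] @ V[j]
            ui = U[i].copy()
            U[i] += lr * err * V[j]     # gradient step on the squared error
            V[j] += lr * err * ui

print("reconstruction MSE:", np.mean((M - U @ V.T) ** 2))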


North American Chapter of the Association for Computational Linguistics | 2016

UCL+Sheffield at SemEval-2016 Task 8: Imitation learning for AMR parsing with an alpha-bound.

James Goodman; Andreas Vlachos; Jason Naradowsky

We develop a novel transition-based parsing algorithm for the Abstract Meaning Representation parsing task using exact imitation learning, in which the parser learns a statistical model by imitating the actions of an expert on the training data. We then use the imitation learning algorithm DAGGER to improve the performance, and apply an α-bound as a simple noise reduction technique. Our performance on the test set was 60% F-score, and the performance gain on the development set due to DAGGER was up to 1.1 points of F-score. The α-bound improved performance by up to 1.8 points.
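
The exact definition of the α-bound is given in the cited papers; on one plausible reading of the abstract, it filters the aggregated imitation-learning training examples by their estimated cost, discarding those whose cost looks too extreme to be reliable. A hypothetical Python sketch:

def alpha_bound(examples, alpha):
    # Hypothetical reading of the alpha-bound as a noise filter: keep only
    # aggregated (features, action, cost) examples whose estimated cost is
    # at most alpha; see the cited papers for the exact definition.
    return [(x, a, c) for (x, a, c) in examples if c <= alpha]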


International Conference on Machine Learning | 2017

Programming With a Differentiable Forth Interpreter

Matko Bošnjak; Tim Rocktäschel; Jason Naradowsky; Sebastian Riedel


Meeting of the Association for Computational Linguistics | 2011

A Discriminative Model for Joint Morphological Disambiguation and Dependency Parsing

John Lee; Jason Naradowsky; David A. Smith


Meeting of the Association for Computational Linguistics | 2011

Unsupervised Bilingual Morpheme Segmentation and Alignment with Context-rich Hidden Semi-Markov Models

Jason Naradowsky; Kristina Toutanova


International Joint Conference on Artificial Intelligence | 2009

Improving morphology induction by learning spelling rules

Jason Naradowsky; Sharon Goldwater


Empirical Methods in Natural Language Processing | 2012

Improving NLP through Marginalization of Hidden Syntactic Structure

Jason Naradowsky; Sebastian Riedel; David A. Smith

Collaboration


Dive into Jason Naradowsky's collaborations. Top co-authors:

David A. Smith (University of Massachusetts Amherst)
Ryan Cotterell (Johns Hopkins University)
Andrew McCallum (University of Massachusetts Amherst)
Hanna M. Wallach (University of Massachusetts Amherst)