Publication


Featured research published by David Weiss.


Meeting of the Association for Computational Linguistics | 2016

Globally Normalized Transition-Based Neural Networks

Daniel Andor; Chris Alberti; David Weiss; Aliaksei Severyn; Alessandro Presta; Kuzman Ganchev; Slav Petrov; Michael Collins

We introduce a globally normalized transition-based neural network model that achieves state-of-the-art part-of-speech tagging, dependency parsing and sentence compression results. Our model is a simple feed-forward neural network that operates on a task-specific transition system, yet achieves comparable or better accuracies than recurrent models. We discuss the importance of global as opposed to local normalization: a key insight is that the label bias problem implies that globally normalized models can be strictly more expressive than locally normalized models.
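
As a rough sketch of the contrast the abstract draws (the notation here is a paraphrase: ρ is the network's unnormalized score for a decision, 𝒜(d_{1:i-1}) the set of decisions allowed at step i, and 𝒟_n the set of complete decision sequences), a locally normalized model applies a softmax at every transition, while a globally normalized model applies a single CRF-style softmax over whole sequences:

```latex
% Locally normalized: a softmax over the allowed decisions at each step i
\[
p_{\mathrm{local}}(d_{1:n} \mid x) =
  \prod_{i=1}^{n}
  \frac{\exp \rho(d_{1:i-1}, d_i; x)}
       {\sum_{d' \in \mathcal{A}(d_{1:i-1})} \exp \rho(d_{1:i-1}, d'; x)}
\]

% Globally normalized: one softmax (a CRF) over complete decision sequences
\[
p_{\mathrm{global}}(d_{1:n} \mid x) =
  \frac{\exp \sum_{i=1}^{n} \rho(d_{1:i-1}, d_i; x)}
       {\sum_{d'_{1:n} \in \mathcal{D}_n} \exp \sum_{i=1}^{n} \rho(d'_{1:i-1}, d'_i; x)}
\]
```

Because the partition function in the second expression ranges over whole sequences, probability mass is not forced to sum to one at every prefix, which is the sense in which global normalization sidesteps the label bias problem and can be strictly more expressive than the local factorization.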


International Joint Conference on Natural Language Processing | 2015

Structured Training for Neural Network Transition-Based Parsing

David Weiss; Chris Alberti; Michael Collins; Slav Petrov

We present structured perceptron training for neural network transition-based dependency parsing. We learn the neural network representation using a gold corpus augmented by a large number of automatically parsed sentences. Given this fixed network representation, we learn a final layer using the structured perceptron with beam-search decoding. On the Penn Treebank, our parser reaches 94.26% unlabeled and 92.41% labeled attachment accuracy, which to our knowledge is the best accuracy on Stanford Dependencies to date. We also provide in-depth ablative analysis to determine which aspects of our model provide the largest gains in accuracy.
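
A minimal sketch of the training recipe the abstract outlines, under stated assumptions: `featurize(state)` stands in for the frozen network's hidden representation of a parser state, `weights[t]` is the trainable final-layer weight vector for transition t, and the transition-system helpers (`legal_transitions`, `apply_transition`, `gold_transition`, `is_final`, `initial_state`) are hypothetical, not the paper's code.

```python
def train_final_layer(sentences, weights, featurize, legal_transitions,
                      apply_transition, gold_transition, is_final,
                      initial_state, beam_size=8):
    """Structured perceptron with beam-search decoding over a frozen representation."""
    for sentence in sentences:
        # Gold derivation: follow the oracle, accumulating per-transition features.
        gold_feats = {}
        state = initial_state(sentence)
        while not is_final(state):
            t = gold_transition(state)
            gold_feats[t] = gold_feats.get(t, 0) + featurize(state)
            state = apply_transition(state, t)

        # Beam-search decoding with the current final-layer weights.
        beam = [(0.0, initial_state(sentence), {})]
        while not is_final(beam[0][1]):
            expanded = []
            for score, st, feats in beam:
                h = featurize(st)  # frozen hidden representation of this state
                for t in legal_transitions(st):
                    new_feats = dict(feats)
                    new_feats[t] = new_feats.get(t, 0) + h
                    expanded.append((score + float(weights[t] @ h),
                                     apply_transition(st, t), new_feats))
            beam = sorted(expanded, key=lambda x: x[0], reverse=True)[:beam_size]

        # Perceptron update: promote the gold features, demote the prediction's.
        _, _, pred_feats = beam[0]
        for t, f in gold_feats.items():
            weights[t] = weights[t] + f
        for t, f in pred_feats.items():
            weights[t] = weights[t] - f
    return weights
```

Practical beam-search perceptrons usually add refinements such as early or max-violation updates and weight averaging; the sketch only shows the basic promote-gold, demote-prediction step on top of a fixed representation.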


Empirical Methods in Natural Language Processing | 2015

Improved Transition-Based Parsing and Tagging with Neural Networks

Chris Alberti; David Weiss; Greg Coppola; Slav Petrov

We extend and improve upon recent work in structured training for neural network transition-based dependency parsing. We do this by experimenting with novel features, additional transition systems and by testing on a wider array of languages. In particular, we introduce set-valued features to encode the predicted morphological properties and part-of-speech confusion sets of the words being parsed. We also investigate the use of joint parsing and part-of-speech tagging in the neural paradigm. Finally, we conduct a multi-lingual evaluation that demonstrates the robustness of the overall structured neural approach, as well as the benefits of the extensions proposed in this work. Our research further demonstrates the breadth of the applicability of neural network methods to dependency parsing, as well as the ease with which new features can be added to neural parsing models.
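
One simple way to realize a set-valued feature of the kind the abstract mentions is to sum (or average) the embeddings of all members of the set, giving a fixed-size vector regardless of how many tags are in it. The sketch below is illustrative only; the embedding table and tag names are invented, not taken from the paper.

```python
import numpy as np

def set_valued_feature(tag_set, embeddings):
    """Encode a set-valued feature (e.g., a POS confusion set or a bag of
    predicted morphological attributes) as one fixed-size vector by summing
    the members' embeddings (hypothetical {tag: vector} lookup)."""
    dim = len(next(iter(embeddings.values())))
    vec = np.zeros(dim)
    for tag in tag_set:
        vec += embeddings[tag]
    return vec

# Example: a word whose tagger is unsure between NOUN and PROPN.
emb = {"NOUN": np.array([0.1, 0.3]), "PROPN": np.array([0.2, -0.1])}
print(set_valued_feature({"NOUN", "PROPN"}, emb))  # -> roughly [0.3, 0.2]
```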


Meeting of the Association for Computational Linguistics | 2016

Stack-propagation: Improved Representation Learning for Syntax

Yuan Zhang; David Weiss

Traditional syntax models typically leverage part-of-speech (POS) information by constructing features from hand-tuned templates. We demonstrate that a better approach is to utilize POS tags as a regularizer of learned representations. We propose a simple method for learning a stacked pipeline of models which we call “stack-propagation”. We apply this to dependency parsing and tagging, where we use the hidden layer of the tagger network as a representation of the input tokens for the parser. At test time, our parser does not require predicted POS tags. On 19 languages from the Universal Dependencies, our method is 1.3% (absolute) more accurate than a state-of-the-art graph-based approach and 2.7% more accurate than the most comparable greedy model.
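
A minimal sketch of the stacking idea, written in PyTorch purely for illustration (module names and layer sizes are invented, and the real parser consumes representations of several tokens from its configuration rather than one token): the parser reads the tagger's hidden layer rather than its predicted tag, and the tagging loss regularizes that shared layer because both losses backpropagate through it.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class StackProp(nn.Module):
    """Tagger and parser sharing the tagger's hidden layer (illustrative sizes)."""

    def __init__(self, vocab=10000, emb=64, hid=128, n_tags=17, n_actions=80):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb)
        self.tagger_hidden = nn.Linear(emb, hid)     # shared representation
        self.tagger_out = nn.Linear(hid, n_tags)     # POS head, used only in training
        self.parser_out = nn.Linear(hid, n_actions)  # parser decision head

    def forward(self, word_ids):
        h = torch.relu(self.tagger_hidden(self.embed(word_ids)))
        return self.tagger_out(h), self.parser_out(h)

model = StackProp()
words = torch.randint(0, 10000, (32,))       # a batch of token ids
gold_tags = torch.randint(0, 17, (32,))
gold_actions = torch.randint(0, 80, (32,))

tag_logits, parse_logits = model(words)
# The POS loss regularizes the shared hidden layer; at test time only the
# parser head is used, so no predicted POS tags are ever required.
loss = F.cross_entropy(parse_logits, gold_actions) \
     + F.cross_entropy(tag_logits, gold_tags)
loss.backward()
```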


arXiv: Computation and Language | 2017

SyntaxNet Models for the CoNLL 2017 Shared Task

Chris Alberti; Daniel Andor; Ivan Bogatyy; Michael Collins; Daniel Gillick; Lingpeng Kong; Terry Koo; Ji Ma; Mark Omernick; Slav Petrov; Chayut Thanapirom; Zora Tung; David Weiss


arXiv: Computation and Language | 2017

DRAGNN: A Transition-Based Framework for Dynamically Connected Neural Networks

Lingpeng Kong; Chris Alberti; Daniel Andor; Ivan Bogatyy; David Weiss


Empirical Methods in Natural Language Processing | 2017

Natural Language Processing with Small Feed-Forward Networks

Jan A. Botha; Emily Pitler; Ji Ma; Anton Bakalov; Alex Salcianu; David Weiss; Ryan T. McDonald; Slav Petrov


Empirical Methods in Natural Language Processing | 2018

A Fast, Compact, Accurate Model for Language Identification of Codemixed Text

Yuan Zhang; Jason Riesa; Daniel Gillick; Anton Bakalov; Jason Baldridge; David Weiss


Empirical Methods in Natural Language Processing | 2018

Linguistically-Informed Self-Attention for Semantic Role Labeling

Emma Strubell; Patrick Verga; Daniel Andor; David Weiss; Andrew McCallum


arXiv: Computation and Language | 2018

Adversarial Neural Networks for Cross-lingual Sequence Tagging

Heike Adel; Anton Bryl; David Weiss; Aliaksei Severyn

Collaboration


Dive into David Weiss's collaborations.

Top Co-Authors

Kuzman Ganchev (University of Pennsylvania)
Lingpeng Kong (Carnegie Mellon University)
Yuan Zhang (Massachusetts Institute of Technology)