Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where David Belanger is active.

Publication


Featured research published by David Belanger.


Conference on Recommender Systems | 2016

Ask the GRU: Multi-task Learning for Deep Text Recommendations

Trapit Bansal; David Belanger; Andrew McCallum

In a variety of application domains the content to be recommended to users is associated with text. This includes research papers, movies with associated plot summaries, news articles, blog posts, etc. Recommendation approaches based on latent factor models can be extended naturally to leverage text by employing an explicit mapping from text to factors. This enables recommendations for new, unseen content, and may generalize better, since the factors for all items are produced by a compactly-parametrized model. Previous work has used topic models or averages of word embeddings for this mapping. In this paper we present a method leveraging deep recurrent neural networks to encode the text sequence into a latent vector, specifically gated recurrent units (GRUs) trained end-to-end on the collaborative filtering task. For the task of scientific paper recommendation, this yields models with significantly higher accuracy. In cold-start scenarios, we beat the previous state-of-the-art, all of which ignore word order. Performance is further improved by multi-task learning, where the text encoder network is trained for a combination of content recommendation and item metadata prediction. This regularizes the collaborative filtering model, ameliorating the problem of sparsity of the observed rating matrix.
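The core idea lends itself to a compact illustration. Below is a minimal PyTorch sketch, under assumed names and dimensions, of encoding an item's text with a GRU into latent item factors and scoring a user-item pair against learned user factors; it illustrates the setup described above rather than the authors' exact architecture.

import torch
import torch.nn as nn

class GRUItemEncoder(nn.Module):
    def __init__(self, vocab_size, num_users, embed_dim=128, factor_dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.gru = nn.GRU(embed_dim, factor_dim, batch_first=True)
        self.user_factors = nn.Embedding(num_users, factor_dim)

    def forward(self, user_ids, item_token_ids):
        # item_token_ids: (batch, seq_len) word ids of each item's text
        _, h = self.gru(self.embed(item_token_ids))  # h: (1, batch, factor_dim)
        item_vec = h.squeeze(0)                      # text-derived item factors
        user_vec = self.user_factors(user_ids)       # standard latent user factors
        return (user_vec * item_vec).sum(-1)         # user-item affinity scores

model = GRUItemEncoder(vocab_size=5000, num_users=1000)
scores = model(torch.tensor([3, 7]), torch.randint(1, 5000, (2, 40)))
# Train the scores against observed interactions (e.g., a logistic or ranking
# loss); a new, unseen item gets its factors directly from its text.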


Meeting of the Association for Computational Linguistics | 2014

Learning Soft Linear Constraints with Application to Citation Field Extraction

Sam Anzaroot; Alexandre Passos; David Belanger; Andrew McCallum

Accurately segmenting a citation string into fields for authors, titles, etc. is a challenging task because the output typically obeys various global constraints. Previous work has shown that modeling soft constraints, where the model is encouraged, but not required, to obey the constraints, can substantially improve segmentation performance. On the other hand, dual decomposition is a popular technique for imposing hard constraints, giving efficient prediction from existing algorithms for unconstrained inference. We extend the technique to perform prediction subject to soft constraints. Moreover, with a technique for performing inference given soft constraints, it is easy to automatically generate large families of constraints and learn their costs with a simple convex optimization problem during training. This allows us to obtain substantial gains in accuracy on a new, challenging citation extraction dataset.
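The soft-constraint machinery amounts to a small change in the usual dual decomposition loop: instead of letting a constraint's dual variable grow without bound, it is projected onto [0, c_i], where c_i is that constraint's violation cost. Here is a minimal numpy sketch, with a hypothetical map_oracle standing in for unconstrained Viterbi decoding of the citation model; the interface and step sizes are illustrative assumptions.

import numpy as np

def constrained_decode(map_oracle, A, b, costs, steps=200, lr=0.05):
    """Approximate MAP prediction subject to soft linear constraints A @ y <= b.

    map_oracle(extra_scores) -> y: hypothetical interface that returns the best
        output indicator vector of the unconstrained model after adding
        extra_scores to its local scores (a stand-in for Viterbi decoding).
    costs: per-constraint violation penalties c_i; for soft constraints the
        dual variables are projected onto [0, c_i] rather than [0, inf).
    """
    lam = np.zeros(len(b))
    y = map_oracle(np.zeros(A.shape[1]))
    for _ in range(steps):
        y = map_oracle(-A.T @ lam)                       # decode with penalized scores
        violation = A @ y - b                            # positive where constraints break
        lam = np.clip(lam + lr * violation, 0.0, costs)  # projected subgradient step
    return y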


North American Chapter of the Association for Computational Linguistics | 2016

Incorporating Selectional Preferences in Multi-hop Relation Extraction

Rajarshi Das; Arvind Neelakantan; David Belanger; Andrew McCallum

Relation extraction is one of the core challenges in automated knowledge base construction. One approach to relation extraction is to perform multi-hop reasoning over the paths connecting an entity pair to infer new relations. While these methods have been successfully applied to knowledge base completion, they do not utilize entity or entity type information to make predictions. In this work, we incorporate selectional preferences, i.e., the constraints relations impose on the allowed entity types of their candidate arguments, into multi-hop relation extraction by including entity type information. We achieve a 17.67% (relative) improvement in MAP score on a relation extraction task compared to a method that does not use entity type information.
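As a rough illustration of how entity type information can enter a multi-hop model, here is a small PyTorch sketch in which each step of a path contributes a relation embedding concatenated with the type embedding of the entity it reaches, and an RNN over the steps scores candidate target relations. The dimensions and the scoring head are illustrative assumptions, not the paper's exact architecture.

import torch
import torch.nn as nn

class TypedPathScorer(nn.Module):
    def __init__(self, n_relations, n_types, dim=64):
        super().__init__()
        self.rel_embed = nn.Embedding(n_relations, dim)
        self.type_embed = nn.Embedding(n_types, dim)
        self.rnn = nn.RNN(2 * dim, dim, batch_first=True)
        self.score = nn.Linear(dim, n_relations)  # logits over target relations

    def forward(self, rel_ids, type_ids):
        # rel_ids:  (batch, path_len) relations along each entity-pair path
        # type_ids: (batch, path_len) types of the entities visited on the path
        steps = torch.cat([self.rel_embed(rel_ids),
                           self.type_embed(type_ids)], dim=-1)
        _, h = self.rnn(steps)                    # summarize the whole path
        return self.score(h.squeeze(0))

scorer = TypedPathScorer(n_relations=50, n_types=20)
logits = scorer(torch.randint(0, 50, (4, 3)), torch.randint(0, 20, (4, 3)))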


International Conference on Frontiers in Handwriting Recognition | 2014

Progress in the Raytheon BBN Arabic Offline Handwriting Recognition System

Huaigu Cao; Prem Natarajan; Xujun Peng; Krishna Subramanian; David Belanger; Nan Li

This paper presents the most recent progress and state-of-the-art results obtained from BBN's Arabic offline handwriting recognition research. Our system is based on a left-to-right hidden Markov model and integrates discriminative learning methods, including MPE training and n-best rescoring using the scores of glyph classifiers (SVM, DNN) and an RNN language model (RNNLM). Arabic-specific features for n-best rescoring are also investigated in this paper. Multi-stage MAP/MLLR adaptation and writer verification are applied to adapt the recognizer in all training situations. Consensus networks are explored extensively for system combination and for improving challenging preprocessing steps.
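The n-best rescoring step mentioned above is conceptually simple: each hypothesis from the HMM decoder is re-scored by a weighted combination of its original score and the scores of auxiliary models (glyph classifiers, RNNLM), and the list is re-ranked. A minimal sketch follows; the field names and weights are illustrative, not the system's tuned values.

def rescore_nbest(hypotheses, weights=(1.0, 0.5, 0.8)):
    """Re-rank decoder hypotheses with a weighted combination of model scores.

    hypotheses: list of dicts with 'text' plus log-scores 'hmm', 'glyph',
    and 'rnnlm'; in practice the weights would be tuned on held-out data.
    """
    w_hmm, w_glyph, w_lm = weights
    best = max(hypotheses,
               key=lambda h: w_hmm * h["hmm"] + w_glyph * h["glyph"] + w_lm * h["rnnlm"])
    return best["text"]

top = rescore_nbest([
    {"text": "hypothesis A", "hmm": -12.3, "glyph": -4.1, "rnnlm": -6.0},
    {"text": "hypothesis B", "hmm": -12.1, "glyph": -5.7, "rnnlm": -7.2},
])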


International Conference on Machine Learning | 2016

Structured Prediction Energy Networks

David Belanger; Andrew McCallum


North American Chapter of the Association for Computational Linguistics | 2016

Multilingual Relation Extraction using Compositional Universal Schema

Patrick Verga; David Belanger; Emma Strubell; Benjamin Roth; Andrew McCallum


Conference of the European Chapter of the Association for Computational Linguistics | 2017

Chains of Reasoning over Entities, Relations, and Text using Recurrent Neural Networks

Rajarshi Das; Arvind Neelakantan; David Belanger; Andrew McCallum


Empirical Methods in Natural Language Processing | 2017

Fast and Accurate Entity Recognition with Iterated Dilated Convolutions

Emma Strubell; Patrick Verga; David Belanger; Andrew McCallum


International Conference on Machine Learning | 2015

A Linear Dynamical System Model for Text

David Belanger; Sham M. Kakade


Archive | 2013

Marginal Inference in MRFs using Frank-Wolfe

David Belanger; Daniel Sheldon; Andrew McCallum

Collaboration


Dive into David Belanger's collaborations.

Top Co-Authors

Andrew McCallum
University of Massachusetts Amherst

Alexandre Passos
University of Massachusetts Amherst

Emma Strubell
University of Massachusetts Amherst

Patrick Verga
University of Massachusetts Amherst

Arvind Neelakantan
University of Massachusetts Amherst

Daniel Sheldon
University of Massachusetts Amherst

Luke Vilnis
University of Massachusetts Amherst