Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Matthew R. Gormley is active.

Publication


Featured research published by Matthew R. Gormley.


Empirical Methods in Natural Language Processing | 2015

Improved Relation Extraction with Feature-Rich Compositional Embedding Models

Matthew R. Gormley; Mo Yu; Mark Dredze

Compositional embedding models build a representation (or embedding) for a linguistic structure based on its component word embeddings. We propose a Feature-rich Compositional Embedding Model (FCM) for relation extraction that is expressive, generalizes to new domains, and is easy to implement. The key idea is to combine (unlexicalized) hand-crafted features with learned word embeddings. The model is able to directly tackle the difficulties met by traditional compositional embedding models, such as handling arbitrary types of sentence annotations and utilizing global information for composition. We test the proposed model on two relation extraction tasks, and demonstrate that our model outperforms both previous compositional models and traditional feature-rich models on the ACE 2005 relation extraction task and the SemEval 2010 relation classification task. The combination of our model and a log-linear classifier with hand-crafted features gives state-of-the-art results.
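The core scoring idea described in this abstract, combining hand-crafted features with word embeddings, can be sketched roughly as follows. This is a minimal illustration with invented toy sizes and random parameters (none of the names or dimensions come from the paper): each word contributes the outer product of its binary feature vector and its word embedding, the outer products are summed, and each relation label scores the sum with its own parameter matrix.

```python
import numpy as np

rng = np.random.default_rng(0)
num_words, num_feats, emb_dim, num_labels = 3, 4, 5, 3

embeddings = rng.normal(size=(num_words, emb_dim))          # learned word embeddings e_i
features = rng.integers(0, 2, size=(num_words, num_feats))  # binary hand-crafted features f_i
T = rng.normal(size=(num_labels, num_feats, emb_dim))       # per-label parameters

# Sum of outer products f_i (x) e_i over the words in the structure.
outer_sum = sum(np.outer(f, e) for f, e in zip(features, embeddings))

# Inner product with each label's parameter matrix, then a softmax over labels.
scores = np.einsum('lfe,fe->l', T, outer_sum)
probs = np.exp(scores - scores.max())
probs /= probs.sum()
```

The outer product lets every hand-crafted feature conjoin with every embedding dimension without enumerating those conjunctions as explicit sparse features.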


North American Chapter of the Association for Computational Linguistics | 2015

Combining Word Embeddings and Feature Embeddings for Fine-grained Relation Extraction

Mo Yu; Matthew R. Gormley; Mark Dredze

Compositional embedding models build a representation for a linguistic structure based on its component word embeddings. While recent work has combined these word embeddings with hand-crafted features for improved performance, it was restricted to a small number of features due to model complexity, thus limiting its applicability. We propose a new model that conjoins features and word embeddings while maintaining a small number of parameters by learning feature embeddings jointly with the parameters of a compositional model. The result is a method that can scale to more features and more labels, while avoiding overfitting. We demonstrate that our model attains state-of-the-art results on ACE and ERE fine-grained relation extraction.
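The parameter saving described here can be sketched as follows, with invented sizes and random parameters: instead of a separate parameter block per sparse feature, each feature gets a small learned embedding, so the label parameters scale with the embedding dimensions rather than the raw feature count.

```python
import numpy as np

rng = np.random.default_rng(1)
num_feats, feat_dim, emb_dim, num_labels = 1000, 10, 50, 5

feat_emb = rng.normal(size=(num_feats, feat_dim))     # learned feature embeddings
U = rng.normal(size=(num_labels, feat_dim, emb_dim))  # per-label parameters

word_emb = rng.normal(size=emb_dim)                   # embedding of the target word
active = [3, 42, 907]                                 # indices of the active sparse features

f = feat_emb[active].sum(axis=0)                      # compose the active features
scores = np.einsum('lfe,f,e->l', U, f, word_emb)      # score every label

# A direct per-feature conjunction would need num_labels * num_feats * emb_dim
# = 250,000 label parameters; here U holds num_labels * feat_dim * emb_dim = 2,500.
```

Because the feature embeddings are shared across labels, adding labels or features grows the model far more slowly than with fully lexicalized conjunction features.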


Meeting of the Association for Computational Linguistics | 2014

Low-Resource Semantic Role Labeling

Matthew R. Gormley; Margaret Mitchell; Benjamin Van Durme; Mark Dredze

We explore the extent to which high-resource manual annotations such as treebanks are necessary for the task of semantic role labeling (SRL). We examine how performance changes without syntactic supervision, comparing both joint and pipelined methods to induce latent syntax. This work highlights a new application of unsupervised grammar induction and demonstrates several approaches to SRL in the absence of supervised syntax. Our best models obtain competitive results in the high-resource setting and state-of-the-art results in the low-resource setting, reaching 72.48% F1 averaged across languages. We release our code for this work along with a larger toolkit for specifying arbitrary graphical structure.


North American Chapter of the Association for Computational Linguistics | 2016

Embedding Lexical Features via Low-Rank Tensors

Mo Yu; Mark Dredze; Raman Arora; Matthew R. Gormley

Modern NLP models rely heavily on engineered features, which often combine word and contextual information into complex lexical features. Such combination results in large numbers of features, which can lead to over-fitting. We present a new model that represents complex lexical features---comprised of parts for words, contextual information and labels---in a tensor that captures conjunction information among these parts. We apply low-rank tensor approximations to the corresponding parameter tensors to reduce the parameter space and improve prediction speed. Furthermore, we investigate two methods for handling features that include n-grams of mixed lengths. Our model achieves state-of-the-art results on tasks in relation extraction, PP-attachment, and preposition disambiguation.


North American Chapter of the Association for Computational Linguistics | 2015

A Concrete Chinese NLP Pipeline

Nanyun Peng; Francis Ferraro; Mo Yu; Nicholas Andrews; Jay DeYoung; Max Thomas; Matthew R. Gormley; Travis Wolfe; Craig Harman; Benjamin Van Durme; Mark Dredze

Natural language processing research increasingly relies on the output of a variety of syntactic and semantic analytics. Yet integrating output from multiple analytics into a single framework can be time consuming and slow research progress. We present a CONCRETE Chinese NLP Pipeline: an NLP stack built using a series of open source systems integrated based on the CONCRETE data schema. Our pipeline includes data ingest, word segmentation, part-of-speech tagging, parsing, named entity recognition, relation extraction and cross-document coreference resolution. Additionally, we integrate a tool for visualizing these annotations as well as allowing for the manual annotation of new data. We release our pipeline to the research community to facilitate work on Chinese language tasks that require rich linguistic annotations.


Proceedings of the Joint Workshop on Automatic Knowledge Base Construction and Web-scale Knowledge Extraction (AKBC-WEKEX) | 2012

Annotated Gigaword

Courtney Napoles; Matthew R. Gormley; Benjamin Van Durme


North American Chapter of the Association for Computational Linguistics | 2013

Topic Models and Metadata for Visualizing Text Corpora

Justin Snyder; Rebecca Knowles; Mark Dredze; Matthew R. Gormley; Travis Wolfe
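The low-rank tensor scoring described in the "Embedding Lexical Features via Low-Rank Tensors" abstract can be sketched as a CP-style factorization; all sizes, names, and random parameters below are invented for illustration. A full label x feature x embedding parameter tensor is replaced by rank-many factor vectors, so scoring never materializes the tensor.

```python
import numpy as np

rng = np.random.default_rng(2)
num_labels, num_feats, emb_dim, rank = 5, 200, 50, 8

A = rng.normal(size=(rank, num_labels))  # label factors
B = rng.normal(size=(rank, num_feats))   # lexical-feature factors
C = rng.normal(size=(rank, emb_dim))     # embedding factors

f = rng.integers(0, 2, size=num_feats).astype(float)  # sparse lexical feature vector
e = rng.normal(size=emb_dim)                          # word embedding

# s_y = sum_r A[r, y] * (B[r] . f) * (C[r] . e)
scores = (A * ((B @ f) * (C @ e))[:, None]).sum(axis=0)

# The factors hold rank * (num_labels + num_feats + emb_dim) = 2,040 parameters,
# versus num_labels * num_feats * emb_dim = 50,000 for the full tensor.
```

Each rank-one term projects the feature vector and the embedding onto a shared component, which both shrinks the parameter space and speeds up prediction, as the abstract claims.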


North American Chapter of the Association for Computational Linguistics | 2012

Shared Components Topic Models

Matthew R. Gormley; Mark Dredze; Benjamin Van Durme; Jason Eisner


North American Chapter of the Association for Computational Linguistics | 2012

Entity Clustering Across Languages

Spence Green; Nicholas Andrews; Matthew R. Gormley; Mark Dredze; Christopher D. Manning


North American Chapter of the Association for Computational Linguistics | 2010

Non-Expert Correction of Automatically Generated Relation Annotations

Matthew R. Gormley; Adam Gerber; Mary P. Harper; Mark Dredze

Collaboration


Dive into Matthew R. Gormley's collaborations.

Top Co-Authors

Mark Dredze, Johns Hopkins University

Jason Eisner, Johns Hopkins University

Mo Yu, Harbin Institute of Technology

Adam Gerber, Johns Hopkins University

Craig Harman, Johns Hopkins University