Publication


Featured research published by Travis Wolfe.


International Joint Conference on Natural Language Processing | 2015

FrameNet+: Fast Paraphrastic Tripling of FrameNet

Ellie Pavlick; Travis Wolfe; Pushpendre Rastogi; Chris Callison-Burch; Mark Dredze; Benjamin Van Durme

We increase the lexical coverage of FrameNet through automatic paraphrasing. We use crowdsourcing to manually filter out bad paraphrases in order to ensure a high-precision resource. Our expanded FrameNet contains an additional 22K lexical units, a 3-fold increase over the current FrameNet, and achieves 40% better coverage when evaluated in a practical setting on New York Times data.
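
At a high level, the expansion described above is a propose-then-filter pipeline: paraphrase every lexical unit, then keep only candidates that human judges accept. The following is a minimal sketch of that shape only, not the paper's code; the frames, paraphrase scores (standing in for a resource such as a paraphrase database), and crowd judgments are all made-up placeholders.

    # Illustrative sketch: expand a frame's lexical units with paraphrases,
    # then keep only the crowd-approved candidates. All data below is hypothetical.
    frames = {
        "Commerce_buy": {"buy", "purchase"},
    }

    # Hypothetical paraphrase table: word -> [(paraphrase, confidence), ...]
    paraphrases = {
        "buy": [("acquire", 0.9), ("snap up", 0.7), ("sell", 0.2)],
        "purchase": [("procure", 0.8)],
    }

    # Hypothetical crowd judgments: (frame, candidate) pairs judged to be good.
    crowd_approved = {
        ("Commerce_buy", "acquire"),
        ("Commerce_buy", "procure"),
    }

    def expand_frame(frame, lexical_units, min_confidence=0.5):
        """Propose new lexical units from paraphrases, keep crowd-approved ones."""
        candidates = set()
        for lu in lexical_units:
            for alt, conf in paraphrases.get(lu, []):
                if conf >= min_confidence and alt not in lexical_units:
                    candidates.add(alt)
        return {c for c in candidates if (frame, c) in crowd_approved}

    for frame, lus in frames.items():
        print(frame, "gains:", sorted(expand_frame(frame, lus)))
        # -> Commerce_buy gains: ['acquire', 'procure']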


North American Chapter of the Association for Computational Linguistics | 2015

Predicate Argument Alignment using a Global Coherence Model

Travis Wolfe; Mark Dredze; Benjamin Van Durme

We present a joint model for predicate argument alignment. We leverage multiple sources of semantic information, including temporal ordering constraints between events. These are combined in a max-margin framework to find a globally consistent view of entities and events across multiple documents, which leads to improvements over a very strong local baseline.
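
As a rough illustration of how a global coherence term can override purely local alignment scores, here is a toy sketch. It is not the paper's max-margin model or feature set: the local scores, temporal orderings, and the penalty weight below are invented, and the brute-force search only makes sense for tiny examples.

    # Toy sketch: combine local event-alignment scores with a penalty for
    # alignments that flip a known temporal ordering across two documents.
    from itertools import permutations

    # Hypothetical local similarity scores for aligning events across documents.
    local_score = {
        ("e1", "f2"): 0.9, ("e1", "f1"): 0.4,
        ("e2", "f1"): 0.8, ("e2", "f2"): 0.3,
    }

    # Known temporal order within each document: (earlier event, later event).
    order_A = [("e1", "e2")]
    order_B = [("f1", "f2")]

    def coherence_penalty(alignment, penalty=2.0):
        """Penalize alignments that reverse a temporal ordering across documents."""
        total = 0.0
        mapped = dict(alignment)
        for a_early, a_late in order_A:
            if a_early in mapped and a_late in mapped:
                b_early, b_late = mapped[a_early], mapped[a_late]
                if (b_late, b_early) in order_B:  # cross-document order is flipped
                    total += penalty
        return total

    def best_alignment():
        """Brute-force search over one-to-one alignments (fine for toy inputs)."""
        events_a = sorted({a for a, _ in local_score})
        events_b = sorted({b for _, b in local_score})
        best, best_val = [], float("-inf")
        for perm in permutations(events_b, len(events_a)):
            align = list(zip(events_a, perm))
            val = sum(local_score.get(p, 0.0) for p in align) - coherence_penalty(align)
            if val > best_val:
                best, best_val = align, val
        return best, best_val

    print(best_alignment())
    # With these toy numbers, the locally best pairing (e1-f2, e2-f1) flips the
    # temporal order and is penalized, so the coherent pairing (e1-f1, e2-f2) wins.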


Meeting of the Association for Computational Linguistics | 2017

Pocket Knowledge Base Population

Travis Wolfe; Mark Dredze; Benjamin Van Durme

Existing Knowledge Base Population methods extract relations from a closed relational schema with limited coverage, leading to sparse KBs. We propose Pocket Knowledge Base Population (PKBP), the task of dynamically constructing a KB of entities related to a query and finding the best characterization of relationships between entities. We describe novel Open Information Extraction methods which leverage the PKB to find informative trigger words. We evaluate using existing KBP shared-task data as well as new annotations collected for this work. Our methods produce high-quality KBs from just text, with many more entities and relationships than existing KBP systems.
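
A toy sketch of the pocket-KB idea under heavily simplified assumptions (whitespace tokenization, a fixed entity list, and a naive notion of trigger words), not the authors' system: the KB is keyed by (query entity, related entity), and candidate trigger words are collected from the text between the two mentions.

    # Toy sketch: build a small per-query KB and gather candidate trigger words.
    from collections import defaultdict

    STOPWORDS = {"the", "a", "of", "in", "and", "was", "by", "to"}

    def extract_relations(query, sentences, known_entities):
        """Build a small KB keyed by (query, other_entity) with candidate triggers."""
        pocket_kb = defaultdict(list)
        for sent in sentences:
            tokens = sent.replace(",", "").split()
            if query not in tokens:
                continue
            for other in known_entities:
                if other == query or other not in tokens:
                    continue
                # Treat non-entity, non-stopword tokens between the two mentions
                # as candidate trigger words for the relation.
                i, j = sorted((tokens.index(query), tokens.index(other)))
                triggers = [t for t in tokens[i + 1:j]
                            if t.lower() not in STOPWORDS and t not in known_entities]
                if triggers:
                    pocket_kb[(query, other)].append(triggers)
        return pocket_kb

    sentences = [
        "Smith founded Acme in 2001",
        "Smith later sued Jones over the Acme patents",
    ]
    kb = extract_relations("Smith", sentences, known_entities={"Smith", "Acme", "Jones"})
    for pair, trigger_lists in kb.items():
        print(pair, "->", trigger_lists)
    # ('Smith', 'Acme') -> [['founded'], ['later', 'sued', 'over']]
    # ('Smith', 'Jones') -> [['later', 'sued']]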


Empirical Methods in Natural Language Processing | 2016

A Study of Imitation Learning Methods for Semantic Role Labeling.

Travis Wolfe; Mark Dredze; Benjamin Van Durme

Global features have proven effective in a wide range of structured prediction problems but come with high inference costs. Imitation learning is a common method for training models when exact inference isn’t feasible. We study imitation learning for Semantic Role Labeling (SRL) and analyze the effectiveness of the Violation Fixing Perceptron (VFP) (Huang et al., 2012) and Locally Optimal Learning to Search (LOLS) (Chang et al., 2015) frameworks with respect to SRL global features. We describe problems in applying each framework to SRL and evaluate the effectiveness of some solutions. We also show that action ordering, including easy first inference, has a large impact on the quality of greedy global models.
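
For readers unfamiliar with easy-first inference, one of the action orderings the abstract refers to, the following toy sketch shows the ordering idea in isolation. The scorer, spans, and labels are invented placeholders, not the paper's SRL model: the decoder always commits to the most confident remaining decision, so harder decisions can condition on what has already been fixed.

    # Toy sketch of easy-first greedy decoding with a hypothetical scorer.
    def score(item, decided):
        """Placeholder confidence scorer; a real model would use learned
        features over the partial structure in `decided`."""
        margin, best_label = item["margin"], item["label"]
        # Toy interaction: once some span is labeled ARG0, a competing ARG0
        # candidate becomes less likely and falls back to ARG1.
        if best_label == "ARG0" and any(lbl == "ARG0" for lbl in decided.values()):
            return margin - 0.5, "ARG1"
        return margin, best_label

    def easy_first_decode(items):
        """Repeatedly pick the highest-confidence undecided item and fix its label."""
        decided = {}
        remaining = dict(items)
        while remaining:
            scored = {k: score(v, decided) for k, v in remaining.items()}
            pick = max(scored, key=lambda k: scored[k][0])
            decided[pick] = scored[pick][1]
            del remaining[pick]
        return decided

    # Toy argument spans with a per-span confidence margin and locally best label.
    spans = {
        "span_A": {"margin": 0.9, "label": "ARG0"},
        "span_B": {"margin": 0.4, "label": "ARG0"},  # weaker, conflicting candidate
        "span_C": {"margin": 0.7, "label": "ARG1"},
    }
    print(easy_first_decode(spans))
    # -> {'span_A': 'ARG0', 'span_C': 'ARG1', 'span_B': 'ARG1'}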


North American Chapter of the Association for Computational Linguistics | 2015

A Concrete Chinese NLP Pipeline

Nanyun Peng; Francis Ferraro; Mo Yu; Nicholas Andrews; Jay DeYoung; Max Thomas; Matthew R. Gormley; Travis Wolfe; Craig Harman; Benjamin Van Durme; Mark Dredze

Natural language processing research increasingly relies on the output of a variety of syntactic and semantic analytics. Yet integrating output from multiple analytics into a single framework can be time consuming and slow research progress. We present a CONCRETE Chinese NLP Pipeline: an NLP stack built using a series of open source systems integrated based on the CONCRETE data schema. Our pipeline includes data ingest, word segmentation, part of speech tagging, parsing, named entity recognition, relation extraction and cross document coreference resolution. Additionally, we integrate a tool for visualizing these annotations as well as allowing for the manual annotation of new data. We release our pipeline to the research community to facilitate work on Chinese language tasks that require rich linguistic annotations.
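
The pipeline's key design point is that every analytic reads and writes a single shared document record, so downstream tools see all earlier annotation layers. Below is a generic sketch of that pattern with trivial placeholder stages; it is not the CONCRETE schema or the released tools, and the whitespace "segmenter" merely stands in for a real Chinese word segmenter.

    # Generic sketch of a stacked-analytics pipeline over a shared document record.
    def segment(doc):
        doc["tokens"] = doc["text"].split()            # stand-in for word segmentation
        return doc

    def pos_tag(doc):
        doc["pos"] = ["NOUN" for _ in doc["tokens"]]   # stand-in for a POS tagger
        return doc

    def ner(doc):
        doc["entities"] = [t for t in doc["tokens"] if t.istitle()]  # placeholder NER
        return doc

    PIPELINE = [segment, pos_tag, ner]

    def run_pipeline(text):
        doc = {"text": text}
        for stage in PIPELINE:
            doc = stage(doc)   # each stage sees the annotations added before it
        return doc

    print(run_pipeline("Alice met Bob in Beijing"))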


North American Chapter of the Association for Computational Linguistics | 2013

Topic Models and Metadata for Visualizing Text Corpora

Justin Snyder; Rebecca Knowles; Mark Dredze; Matthew R. Gormley; Travis Wolfe


Meeting of the Association for Computational Linguistics | 2013

PARMA: A Predicate Argument Aligner

Travis Wolfe; Benjamin Van Durme; Mark Dredze; Nicholas Andrews; Charley Beller; Chris Callison-Burch; Jay DeYoung; Justin Snyder; Jonathan Weese; Tan Xu; Xuchen Yao


arXiv: Artificial Intelligence | 2015

Interactive Knowledge Base Population.

Travis Wolfe; Mark Dredze; James Mayfield; Paul McNamee; Craig Harman; Tim Finin; Benjamin Van Durme


Archive | 2011

News Personalization using Support Vector Machines

Anatole Gershman; Travis Wolfe; Eugene Fink; Jaime G. Carbonell


International ACM SIGIR Conference on Research and Development in Information Retrieval | 2018

Summarizing Entities using Distantly Supervised Information Extractors.

Travis Wolfe; Annabelle Carrell; Mark Dredze; Benjamin Van Durme

Collaboration


Dive into Travis Wolfe's collaborations.

Top Co-Authors

Mark Dredze
Johns Hopkins University

Craig Harman
Johns Hopkins University

James Mayfield
Johns Hopkins University Applied Physics Laboratory

Tim Finin
University of Maryland

Justin Snyder
Johns Hopkins University