Justin Mott
University of Pennsylvania
Publications
Featured research published by Justin Mott.
Workshop on Events: Definition, Detection, Coreference, and Representation | 2015
Zhiyi Song; Ann Bies; Stephanie M. Strassel; Tom Riese; Justin Mott; Joe Ellis; Jonathan Wright; Seth Kulick; Neville Ryant; Xiaoyi Ma
We describe the evolution of the Entities, Relations and Events (ERE) annotation task, created to support research and technology development within the DARPA DEFT program. We begin by describing the specification for Light ERE annotation, including the motivation for the task within the context of DEFT. We discuss the transition from Light ERE to a more complex Rich ERE specification, enabling more comprehensive treatment of phenomena of interest to DEFT.
North American Chapter of the Association for Computational Linguistics | 2016
Ann Bies; Zhiyi Song; Jeremy Getman; Joe Ellis; Justin Mott; Stephanie M. Strassel; Martha Palmer; Teruko Mitamura; Marjorie Freedman; Heng Ji; Tim O'Gorman
This paper discusses and compares event representations across a variety of event annotation types: Rich Entities, Relations, and Events (Rich ERE), Light Entities, Relations, and Events (Light ERE), Event Nugget (EN), Event Argument Extraction (EAE), Richer Event Descriptions (RED), and Event-Event Relations (EER). Comparisons of the event representations are presented, along with a comparison of data annotated according to each. An event annotation experiment is also discussed, in which the same sample data were annotated under all of these representations so that the resulting annotations could be compared as directly as possible. We walk through a brief example to illustrate the various annotation approaches and to show the intersections among the annotated data sets.
Workshop on Events: Definition, Detection, Coreference, and Representation | 2014
Seth Kulick; Ann Bies; Justin Mott
This paper describes a system for inter-annotator agreement analysis of ERE annotation, focusing on entity mentions and how higher-order annotations such as Events depend on those entity mentions. The goal of this approach is to provide both (1) quantitative scores for the various levels of annotation and (2) information about the types of annotation inconsistencies that may exist. While primarily designed for inter-annotator agreement, it can also be considered a system for evaluation of ERE annotation.
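Mention-level agreement of the kind the abstract describes can be sketched very simply. The following is a minimal illustration, not the authors' system: it assumes each annotator's entity mentions are represented as (start, end, type) tuples and scores one annotator against the other with precision, recall, and F1 over exact matches.

```python
# Hypothetical sketch of mention-level inter-annotator agreement.
# Each annotation is a set of (start_offset, end_offset, entity_type)
# tuples; agreement is precision/recall/F1 over exact matches.

def mention_f1(ann_a, ann_b):
    """Score annotator A against annotator B on exact mention matches."""
    a, b = set(ann_a), set(ann_b)
    matched = len(a & b)
    precision = matched / len(a) if a else 0.0
    recall = matched / len(b) if b else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Two annotators agree on two of three mentions; they disagree on
# the type of the span (10, 14), which exact matching counts as a miss.
ann_a = {(0, 5, "PER"), (10, 14, "ORG"), (20, 25, "GPE")}
ann_b = {(0, 5, "PER"), (10, 14, "GPE"), (20, 25, "GPE")}
p, r, f = mention_f1(ann_a, ann_b)
```

A real ERE agreement system would additionally relax span matching, break scores out by annotation level, and report which categories of inconsistency (span, type, linking) account for the disagreements.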
Meeting of the Association for Computational Linguistics | 2014
Seth Kulick; Ann Bies; Justin Mott; Anthony S. Kroch; Beatrice Santorini; Mark Liberman
This paper introduces a new technique for phrase-structure parser analysis, categorizing possible treebank structures by integrating regular expressions into derivation trees. We analyze the performance of the Berkeley parser on OntoNotes WSJ and the English Web Treebank. This provides some insight into the evalb scores and the problem of domain adaptation with the web data. We also analyze a “test-on-train” dataset, showing wide variance in how the parser generalizes from different structures in the training material.
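The core idea of categorizing tree structures with regular expressions can be illustrated with a toy sketch. This is not the paper's actual system; it assumes local trees are available as flat bracketed strings and matches each against a small set of hypothetical named patterns.

```python
import re

# Hypothetical sketch: classify bracketed local trees (parent plus
# immediate children) by matching named regular expressions against
# their flat string form.

PATTERNS = {
    # an NP that contains a PP somewhere among its children
    "np-with-pp": re.compile(r"^\(NP .*\(PP "),
    # a flat NP whose children are all preterminals, e.g. (DT the)
    "flat-np": re.compile(r"^\(NP (\(\w+ \S+\) ?)+\)$"),
}

def categorize(local_tree):
    """Return the names of all patterns the local tree matches."""
    return [name for name, pat in PATTERNS.items()
            if pat.search(local_tree)]

cats = categorize("(NP (DT the) (NN cat))")
```

Counting how often each category is parsed correctly, rather than relying on a single aggregate bracketing score, is what lets this style of analysis localize where a parser generalizes poorly.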
Text Analysis Conference | 2013
Joe Ellis; Jeremy Getman; Justin Mott; Xuansong Li; Kira Griffitt; Stephanie M. Strassel; Jonathan Wright
Meeting of the Association for Computational Linguistics | 2011
Seth Kulick; Ann Bies; Justin Mott
North American Chapter of the Association for Computational Linguistics | 2013
Seth Kulick; Ann Bies; Justin Mott; Mohamed Maamouri; Beatrice Santorini; Anthony S. Kroch
Language Resources and Evaluation | 2012
Seth Kulick; Ann Bies; Justin Mott
Language Resources and Evaluation | 2014
Ann Bies; Justin Mott; Seth Kulick; Jennifer Garland; Colin Warner
Language Resources and Evaluation | 2016
Justin Mott; Ann Bies; Zhiyi Song; Stephanie M. Strassel