Joe Ellis
University of Pennsylvania
Publications
Featured research published by Joe Ellis.
Workshop on EVENTS: Definition, Detection, Coreference, and Representation | 2015
Zhiyi Song; Ann Bies; Stephanie M. Strassel; Tom Riese; Justin Mott; Joe Ellis; Jonathan Wright; Seth Kulick; Neville Ryant; Xiaoyi Ma
We describe the evolution of the Entities, Relations and Events (ERE) annotation task, created to support research and technology development within the DARPA DEFT program. We begin by describing the specification for Light ERE annotation, including the motivation for the task within the context of DEFT. We discuss the transition from Light ERE to a more complex Rich ERE specification, enabling more comprehensive treatment of phenomena of interest to DEFT.
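The ERE specifications discussed above are annotation guidelines rather than a file format, but the shape of a Rich ERE event mention can be sketched as a small data structure. The field names below are illustrative assumptions for exposition, not the schema of any LDC release:

```python
from dataclasses import dataclass, field

@dataclass
class EntityMention:
    # A mention string plus character offsets into the source document.
    text: str
    start: int
    end: int
    entity_type: str  # e.g. PER, ORG, GPE

@dataclass
class EventMention:
    # A trigger span typed with an event type/subtype, plus role-labeled arguments.
    trigger: str
    event_type: str      # e.g. "Conflict"
    event_subtype: str   # e.g. "Attack"
    arguments: dict[str, EntityMention] = field(default_factory=dict)

# Toy example: "Rebels attacked the convoy in Aleppo."
convoy = EntityMention("the convoy", 16, 26, "VEH")
mention = EventMention("attacked", "Conflict", "Attack", {"Target": convoy})
print(mention.event_type, mention.event_subtype, mention.arguments["Target"].text)
```

The move from Light to Rich ERE described in the abstract largely adds detail at this level: more event types, double tagging of ambiguous triggers, and richer argument and coreference treatment.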
Workshop on EVENTS: Definition, Detection, Coreference, and Representation | 2014
Jacqueline Aguilar; Charley Beller; Paul McNamee; Benjamin Van Durme; Stephanie M. Strassel; Zhiyi Song; Joe Ellis
The resurgence of effort within computational semantics has led to increased interest in various types of relation extraction and semantic parsing. While various manually annotated resources exist for enabling this work, these materials have been developed with different standards and goals in mind. In an effort to develop better general understanding across these resources, we provide a summary overview of the standards underlying ACE, ERE, TAC-KBP Slot-filling, and FrameNet.

1 Overview

ACE and ERE are comprehensive annotation standards that aim to consistently annotate Entities, Events, and Relations within a variety of documents. The ACE (Automatic Content Extraction) standard was developed by NIST in 1999 and has evolved over time to support different evaluation cycles, the last evaluation having occurred in 2008. The ERE (Entities, Relations, Events) standard was created under the DARPA DEFT program as a lighter-weight version of ACE, with the goal of making annotation easier and more consistent across annotators. ERE attempts to achieve this goal by consolidating some of the annotation type distinctions that were found to be the most problematic in ACE, as well as by removing some of the more complex annotation features. This paper provides an overview of the relationship between these two standards and compares them to the more restricted standard of the TAC-KBP slot-filling task and the more expansive standard of FrameNet. Sections 3 and 4 examine Relations and Events in the ACE/ERE standards, Section 5 looks at TAC-KBP slot-filling, and Section 6 compares FrameNet to the other standards.
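The contrast the overview draws between the standards can be made concrete: ACE/ERE annotate typed relation mentions grounded in document text, while TAC-KBP slot filling flattens such information into (entity, slot, filler) triples. A minimal sketch of that reduction, where the relation structure and the slot-name mapping are both illustrative assumptions:

```python
# A hypothetical ERE-style relation mention (field names are illustrative).
relation = {
    "type": "OrgAffiliation",
    "subtype": "EmploymentMembership",
    "arg1": {"text": "John Smith", "entity_type": "PER"},
    "arg2": {"text": "Acme Corp", "entity_type": "ORG"},
}

# TAC-KBP slot filling reduces a relation to an (entity, slot, filler) triple.
# This mapping table is an assumed example, not the official KBP inventory.
SLOT_MAP = {("OrgAffiliation", "EmploymentMembership"): "per:employee_of"}

def to_slot_fill(rel):
    slot = SLOT_MAP[(rel["type"], rel["subtype"])]
    return (rel["arg1"]["text"], slot, rel["arg2"]["text"])

print(to_slot_fill(relation))  # ('John Smith', 'per:employee_of', 'Acme Corp')
```

The lossiness of this flattening (offsets, mention types, and justification spans disappear) is one reason direct comparison across the standards requires the kind of careful alignment the paper undertakes.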
International Conference on Web Engineering | 2015
Mariona Taulé; M. Antònia Martí; Ann Bies; Montserrat Nofre; Aina Garí; Zhiyi Song; Stephanie M. Strassel; Joe Ellis
This paper presents the Latin American Spanish Discussion Forum Treebank (LAS-DisFo). This corpus consists of 50,291 words and 2,846 sentences that are part-of-speech tagged, lemmatized and syntactically annotated with constituents and functions. We describe how it was built and the methodology followed for its annotation, the annotation scheme and criteria applied for dealing with the most problematic phenomena commonly encountered in this kind of informal unedited web text. This is the first available Latin American Spanish corpus of non-standard language that has been morphologically and syntactically annotated. It is a valuable linguistic resource that can be used for the training and evaluation of parsers and PoS taggers.
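Constituency annotation of the kind described is conventionally distributed as bracketed, Penn-style trees. A minimal parser for that notation gives a feel for the representation; this is a generic sketch, not the LAS-DisFo tooling, and the Spanish example sentence and labels are invented:

```python
import re

def parse_tree(s):
    """Parse a Penn-style bracketed string into nested [label, children...] lists."""
    tokens = re.findall(r"\(|\)|[^\s()]+", s)
    pos = 0

    def node():
        nonlocal pos
        pos += 1                       # consume "("
        out = [tokens[pos]]; pos += 1  # constituent label, e.g. NP
        while tokens[pos] != ")":
            if tokens[pos] == "(":
                out.append(node())     # nested constituent
            else:
                out.append(tokens[pos]); pos += 1  # terminal word
        pos += 1                       # consume ")"
        return out

    return node()

# Toy bracketing for "la casa es azul" (labels are illustrative).
tree = parse_tree("(S (NP (D la) (N casa)) (VP (V es) (ADJ azul)))")
print(tree[0], tree[1][0])  # prints: S NP
```

Syntactic function labels of the kind the corpus adds would typically be encoded as suffixes on these node labels (e.g. `NP-SUBJ`), which this sketch would handle unchanged.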
North American Chapter of the Association for Computational Linguistics | 2016
Ann Bies; Zhiyi Song; Jeremy Getman; Joe Ellis; Justin Mott; Stephanie M. Strassel; Martha Palmer; Teruko Mitamura; Marjorie Freedman; Heng Ji; Tim O'Gorman
This paper will discuss and compare event representations across a variety of types of event annotation: Rich Entities, Relations, and Events (Rich ERE), Light Entities, Relations, and Events (Light ERE), Event Nugget (EN), Event Argument Extraction (EAE), Richer Event Descriptions (RED), and Event-Event Relations (EER). Comparisons of event representations are presented, along with a comparison of data annotated according to each event representation. An event annotation experiment is also discussed, including annotation for all of these representations on the same set of sample data, with the purpose of being able to compare actual annotation across all of these approaches as directly as possible. We walk through a brief example to illustrate the various annotation approaches, and to show the intersections among the various annotated data sets.
North American Chapter of the Association for Computational Linguistics | 2016
Zhiyi Song; Ann Bies; Stephanie M. Strassel; Joe Ellis; Teruko Mitamura; Hoa Trang Dang; Yukari Yamakawa; Susan E Holm
In this paper, we describe the event nugget annotation created in support of the pilot Event Nugget Detection evaluation in 2014 and in support of the Event Nugget Detection and Coreference open evaluation in 2015, which was one of the Knowledge Base Population tracks within the NIST Text Analysis Conference. We present the data volume annotated for both training and evaluation data for the 2015 evaluation as well as changes to annotation in 2015 as compared to that of 2014. We also analyze the annotation for the 2015 evaluation as an example to show the annotation challenges and consistency, and identify the event types and subtypes that are most difficult for human annotators. Finally, we discuss annotation issues that we need to take into consideration in the future.
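Event nugget detection of the kind evaluated here is scored by matching system trigger spans against gold annotations. A toy exact-match scorer over assumed (start, end, type) nugget tuples illustrates the idea; the official evaluations used more permissive span matching and additionally scored coreference:

```python
def nugget_f1(gold, system):
    """Exact-match precision/recall/F1 over (start, end, event_type) nugget tuples."""
    gold, system = set(gold), set(system)
    tp = len(gold & system)  # nuggets with identical span and type
    p = tp / len(system) if system else 0.0
    r = tp / len(gold) if gold else 0.0
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1

# Invented example: one correct nugget, one miss, one false alarm.
gold = {(10, 18, "Conflict.Attack"), (40, 45, "Movement.Transport")}
system = {(10, 18, "Conflict.Attack"), (60, 66, "Life.Die")}
print(nugget_f1(gold, system))  # (0.5, 0.5, 0.5)
```

Scoring at this granularity is what makes the per-type consistency analysis in the paper possible: each event type and subtype can be evaluated as its own subset of nuggets.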
Text Analysis Conference (TAC) | 2010
Heng Ji; Ralph Grishman; Hoa Trang Dang; Kira Griffitt; Joe Ellis
Text Analysis Conference (TAC) | 2015
Joe Ellis; Jeremy Getman; Dana Fore; Neil Kuster; Zhiyi Song; Ann Bies; Stephanie M. Strassel
Text Analysis Conference (TAC) | 2013
Joe Ellis; Jeremy Getman; Justin Mott; Xuansong Li; Kira Griffitt; Stephanie M. Strassel; Jonathan Wright
Text Analysis Conference (TAC) | 2014
Joe Ellis; Jeremy Getman; Stephanie M. Strassel
Text Analysis Conference (TAC) | 2012
Joe Ellis; Xuansong Li; Kira Griffitt; Stephanie M. Strassel; Jonathan Wright