
Publication


Featured research published by Junyi Jessy Li.


Annual Meeting of the Special Interest Group on Discourse and Dialogue | 2014

Reducing Sparsity Improves the Recognition of Implicit Discourse Relations

Junyi Jessy Li; Ani Nenkova

The earliest work on automatic detection of implicit discourse relations relied on lexical features. More recently, researchers have demonstrated that syntactic features are superior to lexical features for the task. In this paper we re-examine the two classes of state-of-the-art representations: syntactic production rules and word pair features. In particular, we focus on the need to reduce sparsity in instance representation, demonstrating that different representation choices even for the same class of features may exacerbate sparsity issues and reduce performance. We present results that clearly reveal that lexicalization of the syntactic features is necessary for good performance. We introduce a novel, less sparse, syntactic representation which leads to improvement in discourse relation recognition. Finally, we demonstrate that classifiers trained on different representations, especially lexical ones, behave rather differently and thus could likely be combined in future systems.
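For readers unfamiliar with the representation the abstract contrasts, syntactic production-rule features can be sketched as follows. This is an illustrative toy, not the paper's implementation: the bracketed-parse format, helper names, and the `lexicalized` flag (which mirrors the lexicalization distinction the abstract draws) are all assumptions.

```python
from collections import Counter

def parse_tree(s):
    """Parse a bracketed constituency string such as
    '(S (NP (DT the) (NN cat)) (VP (VBD sat)))' into (label, children) tuples."""
    tokens = s.replace("(", " ( ").replace(")", " ) ").split()

    def helper(i):
        assert tokens[i] == "("
        label = tokens[i + 1]
        i += 2
        children = []
        while tokens[i] != ")":
            if tokens[i] == "(":
                child, i = helper(i)
                children.append(child)
            else:
                children.append(tokens[i])  # terminal word
                i += 1
        return (label, children), i + 1

    tree, _ = helper(0)
    return tree

def production_rules(tree, lexicalized=False):
    """Collect LHS -> RHS production-rule counts from a parse tree.
    With lexicalized=False, terminal words are collapsed to a placeholder,
    which yields a sparser-to-denser trade-off like the one discussed above."""
    rules = Counter()
    label, children = tree
    rhs = []
    for c in children:
        if isinstance(c, tuple):
            rhs.append(c[0])
            rules += production_rules(c, lexicalized)
        else:
            rhs.append(c if lexicalized else "<w>")
    rules[(label, tuple(rhs))] += 1
    return rules
```

Counting rules over both arguments of a relation, with and without lexicalization, gives feature vectors of very different sparsity, which is the trade-off the abstract examines.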


Meeting of the Association for Computational Linguistics | 2014

Assessing the Discourse Factors that Influence the Quality of Machine Translation

Junyi Jessy Li; Marine Carpuat; Ani Nenkova

We present a study of aspects of discourse structure — specifically discourse devices used to organize information in a sentence — that significantly impact the quality of machine translation. Our analysis is based on manual evaluations of translations of news from Chinese and Arabic to English. We find that there is a particularly strong mismatch in the notion of what constitutes a sentence in Chinese and English, which occurs often and is associated with significant degradation in translation quality. Also related to lower translation quality is the need to employ multiple explicit discourse connectives (because, but, etc.), as well as the presence of ambiguous discourse connectives in the English translation. Furthermore, the mismatches between discourse expressions across languages significantly impact translation quality.


Annual Meeting of the Special Interest Group on Discourse and Dialogue | 2014

Addressing Class Imbalance for Improved Recognition of Implicit Discourse Relations

Junyi Jessy Li; Ani Nenkova

In this paper we address the problem of skewed class distribution in implicit discourse relation recognition. We examine the performance of classifiers for both binary classification predicting if a particular relation holds or not and for multi-class prediction. We review prior work to point out that the problem has been addressed differently for the binary and multi-class problems. We demonstrate that adopting a unified approach can significantly improve the performance of multi-class prediction. We also propose an approach that makes better use of the full annotations in the training set when downsampling is used. We report significant absolute improvements in performance in multi-class prediction, as well as significant improvement of binary classifiers for detecting the presence of implicit Temporal, Comparison and Contingency relations.
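As background for the downsampling the abstract mentions, the generic form of the technique (balancing every class to the size of the rarest one) can be sketched below. This is a standard illustration, not the authors' exact procedure, which additionally makes better use of the full annotations.

```python
import random
from collections import defaultdict

def downsample(examples, seed=0):
    """Downsample each class to the size of the rarest class.
    `examples` is a list of (features, label) pairs; purely illustrative."""
    by_label = defaultdict(list)
    for x, y in examples:
        by_label[y].append((x, y))
    n = min(len(items) for items in by_label.values())
    rng = random.Random(seed)
    balanced = []
    for items in by_label.values():
        balanced.extend(rng.sample(items, n))  # keep n random examples per class
    rng.shuffle(balanced)
    return balanced
```

In a skewed corpus this discards most majority-class instances, which is exactly the information loss the paper's approach tries to mitigate.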


Meeting of the Association for Computational Linguistics | 2017

Aggregating and Predicting Sequence Labels from Crowd Annotations

An Thanh Nguyen; Byron C. Wallace; Junyi Jessy Li; Ani Nenkova; Matthew Lease

Despite sequences being core to NLP, scant work has considered how to handle noisy sequence labels from multiple annotators for the same text. Given such annotations, we consider two complementary tasks: (1) aggregating sequential crowd labels to infer a best single set of consensus annotations; and (2) using crowd annotations as training data for a model that can predict sequences in unannotated text. For aggregation, we propose a novel Hidden Markov Model variant. To predict sequences in unannotated text, we propose a neural approach using Long Short-Term Memory. We evaluate a suite of methods across two different applications and text genres: Named-Entity Recognition in news articles and Information Extraction from biomedical abstracts. Results show improvement over strong baselines. Our source code and data are available online.
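The paper's aggregation model is an HMM variant; as context for what such a model improves over, the standard token-level majority-vote baseline for this setting (a common baseline, not the authors' method) looks like:

```python
from collections import Counter

def majority_vote(annotations):
    """Aggregate per-token labels from multiple annotators by majority vote.
    `annotations` is a list of equal-length label sequences, one per annotator,
    over the same tokens; ties go to the alphabetically first label."""
    n_tokens = len(annotations[0])
    assert all(len(seq) == n_tokens for seq in annotations)
    consensus = []
    for i in range(n_tokens):
        votes = Counter(seq[i] for seq in annotations)
        # iterate labels in sorted order so max() breaks ties deterministically
        consensus.append(max(sorted(votes), key=votes.get))
    return consensus
```

Because it treats each token independently, this baseline ignores label-sequence structure (e.g. that `I-PER` should follow `B-PER`), which is the kind of dependency an HMM-based aggregator can model.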


Empirical Methods in Natural Language Processing | 2015

Detecting Content-Heavy Sentences: A Cross-Language Case Study

Junyi Jessy Li; Ani Nenkova

The information conveyed by some sentences would be more easily understood by a reader if it were expressed in multiple sentences. We call such sentences content-heavy: these are possibly grammatical but difficult to comprehend, cumbersome sentences. In this paper we introduce the task of detecting content-heavy sentences in a cross-lingual context. Specifically, we develop methods to identify sentences in Chinese for which English speakers would prefer translations consisting of more than one sentence. We base our analysis and definitions on evidence from multiple human translations and reader preferences on flow and understandability. We show that machine translation quality when translating content-heavy sentences is markedly worse than overall quality and that sentences of this type are fairly common in Chinese news. We demonstrate that sentence length and punctuation usage in Chinese are not sufficient clues for accurately detecting heavy sentences and present a richer classification model that accurately identifies these sentences.
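To make the negative result concrete, the simple cues the abstract says are insufficient on their own (sentence length and clause-level punctuation in Chinese) can be extracted as below. The feature names and the choice of punctuation marks are illustrative assumptions, not the paper's feature set.

```python
def baseline_features(sentence):
    """Length/punctuation cues for a Chinese sentence (no whitespace tokens).
    These are the weak baseline signals; a content-heavy classifier
    would need richer features on top of them."""
    clause_punct = "，；：、"  # full-width comma, semicolon, colon, enumeration comma
    return {
        "n_chars": len(sentence),
        "n_clause_punct": sum(sentence.count(p) for p in clause_punct),
    }
```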


North American Chapter of the Association for Computational Linguistics | 2016

The Instantiation Discourse Relation: A Corpus Analysis of Its Properties and Improved Detection

Junyi Jessy Li; Ani Nenkova

INSTANTIATION is a fairly common discourse relation and past work has suggested that it plays special roles in local coherence, in sentiment expression and in content selection in summarization. In this paper we provide the first systematic corpus analysis of the relation and show that relation-specific features can considerably improve the detection of the relation. We show that sentences involved in INSTANTIATION are set apart from other sentences by the use of gradable (subjective) adjectives, the occurrence of rare words and by different patterns in part-of-speech usage. Words across arguments of INSTANTIATION are connected through hypernym and meronym relations significantly more often than in other sentences, and they stand out in context by being significantly less similar to each other than other adjacent sentence pairs. These factors provide substantial predictive power that improves the identification of implicit INSTANTIATION relations by more than 5% F-measure.


National Conference on Artificial Intelligence | 2015

Fast and accurate prediction of sentence specificity

Junyi Jessy Li; Ani Nenkova


International Conference on Computational Linguistics | 2014

Cross-lingual Discourse Relation Analysis: A Corpus Study and a Semi-supervised Classification System

Junyi Jessy Li; Marine Carpuat; Ani Nenkova


Meeting of the Association for Computational Linguistics | 2018

A Corpus with Multi-Level Annotations of Patients, Interventions and Outcomes to Support Language Processing for Medical Literature

Benjamin Nye; Junyi Jessy Li; Roma Patel; Yinfei Yang; Iain James Marshall; Ani Nenkova; Byron C. Wallace


National Conference on Artificial Intelligence | 2016

Estimating text intelligibility via information packaging analysis

Junyi Jessy Li

Collaboration

Top Co-Authors

Ani Nenkova
University of Pennsylvania

Marine Carpuat
Hong Kong University of Science and Technology

An Thanh Nguyen
University of Texas at Austin

Matthew Lease
University of Texas at Austin