Publication


Featured research published by Nanyun Peng.


Empirical Methods in Natural Language Processing | 2015

Named Entity Recognition for Chinese Social Media with Jointly Trained Embeddings

Nanyun Peng; Mark Dredze

We consider the task of named entity recognition for Chinese social media. The long line of work in Chinese NER has focused on formal domains, and NER for social media has been largely restricted to English. We present a new corpus of Weibo messages annotated for both name and nominal mentions. Additionally, we evaluate three types of neural embeddings for representing Chinese text. Finally, we propose a joint training objective for the embeddings that makes use of both (NER) labeled and unlabeled raw text. Our methods yield a 9% improvement over a state-of-the-art baseline.
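
A minimal sketch, assuming PyTorch and illustrative module names, of how such a joint objective can be wired: the shared character embeddings receive gradients both from a tagging loss on NER-labeled sentences and from a skip-gram-style prediction loss on unlabeled raw text. The mixing weight lam is an assumed hyperparameter, not a value from the paper.

import torch
import torch.nn as nn

class JointEmbeddingTagger(nn.Module):
    def __init__(self, vocab_size, emb_dim, hidden_dim, num_tags):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)          # shared embeddings
        self.encoder = nn.LSTM(emb_dim, hidden_dim,
                               batch_first=True, bidirectional=True)
        self.tag_head = nn.Linear(2 * hidden_dim, num_tags)   # NER tagging head
        self.lm_head = nn.Linear(emb_dim, vocab_size)         # predicts neighbors

    def ner_loss(self, chars, tags):
        # Supervised loss on NER-labeled sentences.
        h, _ = self.encoder(self.emb(chars))
        logits = self.tag_head(h)
        return nn.functional.cross_entropy(
            logits.reshape(-1, logits.size(-1)), tags.reshape(-1))

    def lm_loss(self, centers, contexts):
        # Skip-gram-style loss on unlabeled raw text: a center character's
        # embedding should predict a neighboring character.
        return nn.functional.cross_entropy(
            self.lm_head(self.emb(centers)), contexts)

def joint_loss(model, labeled_batch, unlabeled_batch, lam=0.5):
    chars, tags = labeled_batch          # tensors of character and tag ids
    centers, contexts = unlabeled_batch  # (center, neighbor) character id pairs
    return model.ner_loss(chars, tags) + lam * model.lm_loss(centers, contexts)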


Meeting of the Association for Computational Linguistics | 2016

Improving Named Entity Recognition for Chinese Social Media with Word Segmentation Representation Learning

Nanyun Peng; Mark Dredze

Named entity recognition, and other information extraction tasks, frequently use linguistic features such as part-of-speech tags or chunking. For languages where word boundaries are not readily identified in text, word segmentation is a key first step to generating features for an NER system. While using word boundary tags as features is helpful, the signals that aid in identifying these boundaries may provide richer information for an NER system. New state-of-the-art word segmentation systems use neural models to learn representations for predicting word boundaries. We show that these same representations, jointly trained with an NER system, yield significant improvements in NER for Chinese social media. In our experiments, jointly training NER and word segmentation with an LSTM-CRF model yields nearly 5% absolute improvement over previously published results.
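
One way to picture the joint model, sketched below with assumed names and with the CRF layers omitted for brevity (the paper uses an LSTM-CRF): a single shared BiLSTM feeds both a word-segmentation head and an NER head, so the boundary-predicting signal shapes the representation the NER head consumes.

import torch.nn as nn

class SharedSegNer(nn.Module):
    def __init__(self, vocab_size, emb_dim, hidden_dim,
                 num_seg_tags, num_ner_tags):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.shared = nn.LSTM(emb_dim, hidden_dim,
                              batch_first=True, bidirectional=True)
        self.seg_head = nn.Linear(2 * hidden_dim, num_seg_tags)  # e.g. B/M/E/S
        self.ner_head = nn.Linear(2 * hidden_dim, num_ner_tags)

    def forward(self, chars):
        h, _ = self.shared(self.emb(chars))   # representation shared by tasks
        return self.seg_head(h), self.ner_head(h)

Training would alternate batches from the segmentation and NER corpora, updating the shared encoder from both losses.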


Meeting of the Association for Computational Linguistics | 2014

Stochastic Contextual Edit Distance and Probabilistic FSTs

Ryan Cotterell; Nanyun Peng; Jason Eisner

String similarity is most often measured by weighted or unweighted edit distance d(x, y). Ristad and Yianilos (1998) defined stochastic edit distance—a probability distribution p(y | x) whose parameters can be trained from data. We generalize this so that the probability of choosing each edit operation can depend on contextual features. We show how to construct and train a probabilistic finite-state transducer that computes our stochastic contextual edit distance. To illustrate the improvement from conditioning on context, we model typos found in social media text.
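
As a toy illustration (not the paper's PFST construction, and ignoring proper normalization and stop probabilities), p(y | x) can be computed by a dynamic program that sums over all edit sequences, with each operation's probability allowed to depend on context; here the previous source character stands in for richer contextual features, and the constants in op_prob are placeholders for learned weights.

from functools import lru_cache

def op_prob(op, prev_src):
    # Placeholder parameters; in the paper these come from learned weights
    # over contextual features and would be properly normalized per context.
    return {"copy": 0.8, "sub": 0.1, "ins": 0.05, "del": 0.05}[op]

def stochastic_edit_prob(x, y):
    @lru_cache(maxsize=None)
    def dp(i, j):
        # Probability of generating the rest of y given the rest of x.
        if i == len(x) and j == len(y):
            return 1.0                     # implicit stop probability of 1
        prev = x[i - 1] if i > 0 else "^"  # context the operations condition on
        total = 0.0
        if i < len(x) and j < len(y):      # copy or substitute x[i] -> y[j]
            op = "copy" if x[i] == y[j] else "sub"
            total += op_prob(op, prev) * dp(i + 1, j + 1)
        if j < len(y):                     # insert y[j]
            total += op_prob("ins", prev) * dp(i, j + 1)
        if i < len(x):                     # delete x[i]
            total += op_prob("del", prev) * dp(i + 1, j)
        return total
    return dp(0, 0)

print(stochastic_edit_prob("teh", "the"))  # nonzero mass on the typo's correction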


Meeting of the Association for Computational Linguistics | 2014

Learning Polylingual Topic Models from Code-Switched Social Media Documents

Nanyun Peng; Yiming Wang; Mark Dredze

Code-switched documents are common in social media, providing evidence for polylingual topic models to infer aligned topics across languages. We present Code-Switched LDA (csLDA), which infers language-specific topic distributions based on code-switched documents to facilitate multilingual corpus analysis. We experiment on two code-switching corpora (English-Spanish Twitter data and English-Chinese Weibo data) and show that csLDA improves perplexity over LDA, and learns semantically coherent aligned topics as judged by human annotators.
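
The generative story behind csLDA can be sketched as follows (variable names assumed): each document draws a single topic distribution shared across its languages, while each topic keeps a separate word distribution per language; sharing the document-level distribution is what aligns topics across the code-switched text.

import numpy as np

rng = np.random.default_rng(0)
K, L, V = 5, 2, 100            # topics, languages, per-language vocabulary size
alpha, beta = 0.1, 0.01        # Dirichlet hyperparameters

phi = rng.dirichlet([beta] * V, size=(L, K))   # per-language topic-word dists

def generate_document(n_tokens, lang_of_token):
    theta = rng.dirichlet([alpha] * K)   # one topic distribution, all languages
    doc = []
    for t in range(n_tokens):
        z = rng.choice(K, p=theta)       # topic drawn from the shared theta
        l = lang_of_token[t]             # language label is observed
        w = rng.choice(V, p=phi[l, z])   # word from that language's topic dist
        doc.append((l, w))
    return doc

# e.g. a code-switched message: five tokens in language 0, then five in language 1
print(generate_document(10, [0] * 5 + [1] * 5))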


Meeting of the Association for Computational Linguistics | 2017

Multi-task Domain Adaptation for Sequence Tagging

Nanyun Peng; Mark Dredze

Many domain adaptation approaches rely on learning cross-domain shared representations to transfer the knowledge learned in one domain to other domains. Traditional domain adaptation considers adapting for only one task. In this paper, we explore multi-task representation learning under the domain adaptation scenario. We propose a neural network framework that supports domain adaptation for multiple tasks simultaneously, and learns shared representations that generalize better for domain adaptation. We apply the proposed framework to domain adaptation for sequence tagging problems, considering two tasks: Chinese word segmentation and named entity recognition. Experiments show that multi-task domain adaptation works better than disjoint domain adaptation for each task, and achieves state-of-the-art results for both tasks in the social media domain.
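
A minimal sketch of such a framework, with assumed architecture details: one encoder is shared by every (task, domain) pair and each task gets its own tagging head; a learned domain embedding, used here purely illustratively, is one simple way to let the shared representation adapt per domain.

import torch
import torch.nn as nn

class MultiTaskDomainTagger(nn.Module):
    def __init__(self, vocab_size, emb_dim, hidden_dim, n_domains, tag_sizes):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.domain_emb = nn.Embedding(n_domains, emb_dim)  # e.g. news vs. weibo
        self.shared = nn.LSTM(2 * emb_dim, hidden_dim,
                              batch_first=True, bidirectional=True)
        # One output head per task, e.g. {"cws": n_seg_tags, "ner": n_ner_tags}.
        self.heads = nn.ModuleDict(
            {task: nn.Linear(2 * hidden_dim, n) for task, n in tag_sizes.items()})

    def forward(self, chars, domain_id, task):
        d = self.domain_emb(domain_id)                    # (batch, emb_dim)
        d = d.unsqueeze(1).expand(-1, chars.size(1), -1)  # broadcast over time
        h, _ = self.shared(torch.cat([self.emb(chars), d], dim=-1))
        return self.heads[task](h)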


Empirical Methods in Natural Language Processing | 2015

Dual Decomposition Inference for Graphical Models over Strings

Nanyun Peng; Ryan Cotterell; Jason Eisner

We investigate dual decomposition for joint MAP inference of many strings. Given an arbitrary graphical model, we decompose it into small acyclic sub-models, whose MAP configurations can be found by finite-state composition and dynamic programming. We force the solutions of these subproblems to agree on overlapping variables, by tuning Lagrange multipliers for an adaptively expanding set of variable-length n-gram count features. This is the first inference method for arbitrary graphical models over strings that does not require approximations such as random sampling, message simplification, or a bound on string length. Provided that the inference method terminates, it gives a certificate of global optimality (though MAP inference in our setting is undecidable in general). On our global phonological inference problems, it always terminates, and achieves more accurate results than max-product and sum-product loopy belief propagation.
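
Schematically (and leaving out the paper's finite-state machinery), the agreement step looks like the subgradient loop below: two sub-models must agree on the n-gram counts of a shared string, and multipliers on those counts are adjusted until the sub-solutions match. The solve_submodel callables are hypothetical stand-ins for exact MAP inference in each acyclic sub-model.

from collections import Counter

def ngram_counts(s, n=2):
    return Counter(s[i:i + n] for i in range(len(s) - n + 1))

def dual_decompose(solve_submodel_a, solve_submodel_b, steps=100, lr=0.5):
    lam = Counter()   # one multiplier per n-gram feature, grown adaptively
    for t in range(steps):
        ya = solve_submodel_a(lam)   # exact MAP with scores shifted by +lam
        yb = solve_submodel_b(lam)   # exact MAP with scores shifted by -lam
        ca, cb = ngram_counts(ya), ngram_counts(yb)
        if ca == cb:
            return ya, True          # agreement: certificate of global optimality
        for g in set(ca) | set(cb):  # subgradient step on disagreeing counts
            lam[g] -= lr / (1 + t) * (ca[g] - cb[g])
    return ya, False                 # no certificate within the step budget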


International Joint Conference on Natural Language Processing | 2015

An Empirical Study of Chinese Name Matching and Applications

Nanyun Peng; Mo Yu; Mark Dredze

Methods for name matching, an important component to support downstream tasks such as entity linking and entity clustering, have focused on alphabetic languages, primarily English. In contrast, logogram languages such as Chinese remain untested. We evaluate methods for name matching in Chinese, including both string matching and learning approaches. Our approach, based on new representations for Chinese, improves both name matching and a downstream entity clustering task.
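
As a flavor of the string-matching side (a toy baseline, not the paper's learned model), character n-gram overlap gives a usable similarity signal for Chinese even though token overlap is unavailable without word boundaries:

def char_ngrams(name, n=2):
    return {name[i:i + n] for i in range(max(1, len(name) - n + 1))}

def jaccard_similarity(a, b, n=2):
    ga, gb = char_ngrams(a, n), char_ngrams(b, n)
    return len(ga & gb) / len(ga | gb)

# High overlap despite one variant character in the name.
print(jaccard_similarity("张三丰", "张三封"))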


International Joint Conference on Artificial Intelligence | 2018

Learning to Converse with Noisy Data: Generation with Calibration

Mingyue Shang; Zhenxin Fu; Nanyun Peng; Yansong Feng; Dongyan Zhao; Rui Yan

The abundance of conversational data on the Internet has fueled generation-based open-domain conversation systems. In training generation models, existing methods generally treat all training data as equivalent. However, data crawled from websites can be quite noisy, and blindly training on it can harm the final generation model. In this paper, we propose a generation-with-calibration framework that allows high-quality data to have more influence on the generation model and reduces the effect of noisy data. Specifically, for each instance in the training set, we employ a calibration network to produce a quality score, which is then used to weight the update of the generation model's parameters. Experiments show that the calibrated model outperforms baseline methods on both automatic evaluation metrics and human annotations.
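
The weighted update can be sketched in a few lines (module and method names below are assumptions, not the paper's code): the calibration network scores each context-reply pair, and that score scales the pair's contribution to the generation loss.

import torch

def calibrated_loss(gen_model, calib_net, contexts, replies):
    # Hypothetical method returning one negative log-likelihood per example.
    nll = gen_model.nll_per_example(contexts, replies)  # shape: (batch,)
    with torch.no_grad():                               # scores act as fixed weights
        quality = calib_net(contexts, replies)          # in [0, 1]; high = clean pair
    return (quality * nll).mean()                       # noisy pairs barely move parameters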


North American Chapter of the Association for Computational Linguistics | 2015

A Concrete Chinese NLP Pipeline

Nanyun Peng; Francis Ferraro; Mo Yu; Nicholas Andrews; Jay DeYoung; Max Thomas; Matthew R. Gormley; Travis Wolfe; Craig Harman; Benjamin Van Durme; Mark Dredze

Natural language processing research increasingly relies on the output of a variety of syntactic and semantic analytics. Yet integrating output from multiple analytics into a single framework can be time consuming and slow research progress. We present a CONCRETE Chinese NLP Pipeline: an NLP stack built using a series of open source systems integrated based on the CONCRETE data schema. Our pipeline includes data ingest, word segmentation, part of speech tagging, parsing, named entity recognition, relation extraction and cross document coreference resolution. Additionally, we integrate a tool for visualizing these annotations as well as allowing for the manual annotation of new data. We release our pipeline to the research community to facilitate work on Chinese language tasks that require rich linguistic annotations.
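
The integration pattern, in caricature (this is not the Concrete API; all names below are placeholders): every analytic reads and enriches one shared document record, so downstream stages consume upstream annotations through a single schema.

from dataclasses import dataclass, field

@dataclass
class Document:
    text: str
    annotations: dict = field(default_factory=dict)

def segment(doc):
    doc.annotations["tokens"] = list(doc.text)   # placeholder segmenter
    return doc

def tag_pos(doc):
    # Placeholder tagger reading the upstream segmentation annotation.
    doc.annotations["pos"] = ["X"] * len(doc.annotations["tokens"])
    return doc

PIPELINE = [segment, tag_pos]   # ...then parsing, NER, relations, coref

def run(doc):
    for stage in PIPELINE:
        doc = stage(doc)
    return doc

print(run(Document("你好世界")).annotations)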


Transactions of the Association for Computational Linguistics | 2015

Modeling Word Forms Using Latent Underlying Morphs and Phonology

Ryan Cotterell; Nanyun Peng; Jason Eisner

Collaboration


Dive into Nanyun Peng's collaborations.

Top Co-Authors

Mark Dredze, Johns Hopkins University
Jason Eisner, Johns Hopkins University
Ryan Cotterell, Johns Hopkins University
Shou-De Lin, National Taiwan University
Mo Yu, Harbin Institute of Technology
Chun-Chao Yen, National Taiwan University
Wan-Yu Lin, National Taiwan University