Publications


Featured research published by Xiaocheng Feng.


Meeting of the Association for Computational Linguistics | 2016

Liberal Event Extraction and Event Schema Induction

Lifu Huang; Taylor Cassidy; Xiaocheng Feng; Heng Ji; Clare R. Voss; Jiawei Han; Avirup Sil

We propose a new "Liberal" Event Extraction paradigm that extracts events and discovers event schemas from any input corpus simultaneously. We incorporate symbolic semantics (e.g., Abstract Meaning Representation) and distributional semantics to detect and represent event structures, and adopt a joint typing framework to extract event types and argument roles while discovering an event schema. Experiments on general and specific domains demonstrate that this framework can construct high-quality schemas with many event and argument role types, covering a high proportion of the event types and argument roles in manually defined schemas. Extraction performance using the discovered schemas is comparable to that of supervised models trained on a large amount of data labeled with predefined event types, and the extraction quality for new event types is also promising.
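As a rough, purely illustrative sketch of this joint representation idea (not the authors' implementation; the embeddings, relation vocabulary, and clustering setup below are hypothetical), each trigger candidate can be represented by concatenating a distributional word vector with symbolic AMR-relation features, and event types can then be induced by clustering:

```python
# Illustrative only: concatenate a distributional embedding with a
# bag-of-AMR-relations vector for each trigger candidate, then cluster
# the joint representations to induce event types. The relation vocabulary
# and candidate data are hypothetical placeholders.
import numpy as np
from sklearn.cluster import KMeans

def trigger_representation(word_vec, amr_relations, relation_vocab):
    """Distributional word vector + one-hot vector of attached AMR relations."""
    symbolic = np.zeros(len(relation_vocab))
    for rel in amr_relations:
        if rel in relation_vocab:
            symbolic[relation_vocab.index(rel)] = 1.0
    return np.concatenate([word_vec, symbolic])

relation_vocab = [":ARG0", ":ARG1", ":location", ":time"]
rng = np.random.default_rng(0)
candidates = [
    trigger_representation(rng.normal(size=50), rels, relation_vocab)
    for rels in ([":ARG0", ":ARG1"], [":ARG1", ":location"],
                 [":ARG0", ":time"], [":ARG1"])
]

# Cluster the joint symbolic + distributional representations; each cluster
# corresponds to an induced event type.
X = np.stack(candidates)
event_types = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(event_types)  # cluster id (induced event type) per candidate
```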


Science in China Series F: Information Sciences | 2018

A language-independent neural network for event detection

Xiaocheng Feng; Bing Qin; Ting Liu

Event detection remains a challenge because of the difficulty of encoding word semantics in varied contexts. Previous approaches have depended heavily on language-specific knowledge and pre-existing natural language processing tools, but not all languages have such resources and tools available, unlike English. A more promising approach is to learn effective features from data automatically, without relying on language-specific resources. In this study, we develop a language-independent neural network that captures both sequence and chunk information from specific contexts and uses them to train an event detector for multiple languages without any manually encoded features. Experiments show that our approach achieves robust, efficient, and accurate results across languages. On the ACE 2005 English event detection task, it achieved a 73.4% F-score, an average absolute improvement of 3.0% over the state of the art, and our results for Chinese and Spanish are also competitive.
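A minimal PyTorch sketch of one plausible reading of this hybrid design: a bidirectional LSTM captures sequence information, a 1-D convolution captures local chunk information, and their concatenation feeds a per-token trigger classifier. All layer sizes and the class count are illustrative assumptions, not the paper's configuration:

```python
# Illustrative PyTorch sketch: Bi-LSTM for sequence features + Conv1d for
# chunk features, concatenated for per-token event trigger classification.
# Sizes and the 34-way output (33 ACE subtypes + "none") are assumptions.
import torch
import torch.nn as nn

class HybridEventDetector(nn.Module):
    def __init__(self, vocab_size=10000, emb_dim=100, hidden=100,
                 n_filters=100, n_types=34):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True,
                            bidirectional=True)
        # A width-3 convolution over embeddings models local chunk information.
        self.conv = nn.Conv1d(emb_dim, n_filters, kernel_size=3, padding=1)
        self.out = nn.Linear(2 * hidden + n_filters, n_types)

    def forward(self, token_ids):                    # (batch, seq)
        e = self.emb(token_ids)                      # (batch, seq, emb)
        seq_feats, _ = self.lstm(e)                  # (batch, seq, 2*hidden)
        chunk = torch.relu(self.conv(e.transpose(1, 2)))  # (batch, filters, seq)
        chunk_feats = chunk.transpose(1, 2)          # (batch, seq, filters)
        return self.out(torch.cat([seq_feats, chunk_feats], dim=-1))

scores = HybridEventDetector()(torch.randint(0, 10000, (2, 12)))
print(scores.shape)  # torch.Size([2, 12, 34]): a type score per token
```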


International Joint Conference on Artificial Intelligence | 2017

Effective Deep Memory Networks for Distant Supervised Relation Extraction

Xiaocheng Feng; Jiang Guo; Bing Qin; Ting Liu; Yongjie Liu

Distant supervised relation extraction (RE) is an effective way of finding novel relational facts in text without labeled training data; it is typically formalized as a multi-instance multi-label problem. In this paper, we introduce a novel neural approach for distant supervised RE with a special focus on attention mechanisms. Unlike feature-based logistic regression models and compositional neural models such as CNNs, our approach includes two major attention-based memory components, which explicitly capture the importance of each context word for modeling the representation of the entity pair, as well as the intrinsic dependencies between relations. These importance degrees and dependency relationships are calculated with multiple computational layers, each of which is a neural attention model over an external memory. Experiments on real-world datasets show that our approach performs significantly and consistently better than various baselines.
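The following is a hedged sketch of what a multi-hop word-level attention memory might look like, with the entity-pair representation querying the context-word memory once per layer; dimensions, the hop count, and the residual update rule are assumptions for illustration, not the paper's exact model:

```python
# Illustrative sketch of a multi-hop attention memory: the entity-pair
# representation repeatedly queries the context-word memory, and the query
# is refined after every hop. Dimensions and hop count are assumptions.
import torch
import torch.nn as nn

class WordAttentionMemory(nn.Module):
    def __init__(self, dim=50, hops=3):
        super().__init__()
        self.hops = hops
        self.attn = nn.Linear(2 * dim, 1)  # scores each (word, query) pair
        self.update = nn.Linear(dim, dim)  # refines the query between hops

    def forward(self, memory, query):
        # memory: (batch, seq, dim) context-word vectors (the external memory)
        # query:  (batch, dim) representation of the entity pair
        for _ in range(self.hops):
            q = query.unsqueeze(1).expand_as(memory)        # (batch, seq, dim)
            scores = self.attn(torch.cat([memory, q], -1))  # (batch, seq, 1)
            weights = torch.softmax(scores, dim=1)          # word importance
            context = (weights * memory).sum(dim=1)         # (batch, dim)
            query = torch.tanh(self.update(context)) + query  # residual refine
        return query  # final relation representation

rep = WordAttentionMemory()(torch.randn(4, 20, 50), torch.randn(4, 50))
print(rep.shape)  # torch.Size([4, 50])
```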


International Joint Conference on Artificial Intelligence | 2018

Topic-to-Essay Generation with Neural Networks

Xiaocheng Feng; Ming Liu; Jiahao Liu; Bing Qin; Yibo Sun; Ting Liu

We focus on essay generation, the challenging task of generating a paragraph-level text covering multiple topics. Progress towards understanding different topics and expressing diversity in this task requires more powerful generators and richer training and evaluation resources. To address this, we develop a multi-topic-aware long short-term memory (MTA-LSTM) network. The model maintains a novel multi-topic coverage vector, which learns the weight of each topic and is updated sequentially during decoding; this vector is then fed to an attention model to guide the generator. Moreover, we automatically construct two paragraph-level Chinese essay corpora, containing 305,000 essay paragraphs and 55,000 question-and-answer pairs. Empirical results show that our approach obtains a much better BLEU-2 score than various baselines, and human judgment shows that MTA-LSTM generates essays that are not only coherent but also closely related to the input topics.
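A minimal sketch of the coverage idea, assuming a per-topic attention budget that starts at 1 and is spent as the decoder attends to each topic; this simplification is illustrative, not the authors' MTA-LSTM code:

```python
# Illustrative sketch: each topic starts with an attention budget of 1
# (the coverage vector); attention over topic embeddings is scaled by the
# remaining budget, and the budget is spent as decoding proceeds.
import torch
import torch.nn as nn

class TopicCoverageAttention(nn.Module):
    def __init__(self, topic_dim=100, state_dim=100):
        super().__init__()
        self.score = nn.Bilinear(state_dim, topic_dim, 1)

    def forward(self, decoder_state, topic_embs, coverage):
        # decoder_state: (batch, state_dim); topic_embs: (batch, k, topic_dim)
        # coverage: (batch, k) remaining weight per topic
        k = topic_embs.size(1)
        s = decoder_state.unsqueeze(1).expand(-1, k, -1).contiguous()
        scores = self.score(s, topic_embs).squeeze(-1)    # (batch, k)
        attn = torch.softmax(scores, dim=-1) * coverage   # scale by budget
        attn = attn / attn.sum(dim=-1, keepdim=True).clamp_min(1e-8)
        context = torch.bmm(attn.unsqueeze(1), topic_embs).squeeze(1)
        coverage = (coverage - attn).clamp_min(0.0)       # spend the budget
        return context, coverage  # context guides the next generated word

layer = TopicCoverageAttention()
topics = torch.randn(2, 5, 100)             # five input topics
cov = torch.ones(2, 5)                      # full budget for each topic
for _ in range(3):                          # three decoding steps
    ctx, cov = layer(torch.randn(2, 100), topics, cov)
print(cov)  # budgets shrink for the topics that received attention
```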


International Joint Conference on Artificial Intelligence | 2018

Improving Low Resource Named Entity Recognition using Cross-lingual Knowledge Transfer

Xiaocheng Feng; Xiachong Feng; Bing Qin; Zhangyin Feng; Ting Liu

Neural networks have been widely used for named entity recognition (NER) in high-resource languages (e.g., English) and have achieved state-of-the-art results. For low-resource languages such as Dutch and Spanish, however, NER models tend to perform worse because of limited resources and a lack of annotated data. To narrow this gap, we investigate cross-lingual knowledge for enriching the semantic representations of low-resource languages. We first develop neural networks that improve low-resource word representations via knowledge transfer from a high-resource language using bilingual lexicons. A lexicon extension strategy is then designed to address the out-of-lexicon problem by automatically learning semantic projections. Finally, we treat word-level entity type distribution features as external language-independent knowledge and incorporate them into our neural architecture. Experiments on two low-resource languages (Dutch and Spanish) demonstrate the effectiveness of these additional semantic representations (an average improvement of 4.8%). Moreover, on the Chinese OntoNotes 4.0 dataset, our approach achieves an F-score of 83.07%, a 2.91% absolute gain over state-of-the-art systems.
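As a hedged sketch of the bilingual-lexicon projection step, one can fit a linear map from the low-resource embedding space into the high-resource space by least squares over lexicon pairs and then project out-of-lexicon words; the data below is synthetic, and the paper's actual networks and lexicon extension strategy are more involved:

```python
# Illustrative sketch with synthetic data: fit a linear projection from the
# low-resource embedding space to the high-resource space over bilingual
# lexicon pairs, then project an out-of-lexicon word into the shared space.
import numpy as np

rng = np.random.default_rng(0)
d = 50
# Hypothetical lexicon: 200 aligned (low-resource, high-resource) pairs.
X_src = rng.normal(size=(200, d))                    # low-resource embeddings
W_true = rng.normal(size=(d, d))                     # unknown "true" mapping
X_tgt = X_src @ W_true + 0.01 * rng.normal(size=(200, d))

# Least-squares fit of W minimizing ||X_src @ W - X_tgt||_F.
W, *_ = np.linalg.lstsq(X_src, X_tgt, rcond=None)

# Lexicon extension: project a word outside the lexicon into the
# high-resource space using the learned mapping.
oov_vec = rng.normal(size=(1, d))
projected = oov_vec @ W
print(projected.shape)  # (1, 50)
```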


International Conference on Computational Linguistics | 2016

Effective LSTMs for Target-Dependent Sentiment Classification

Duyu Tang; Bing Qin; Xiaocheng Feng; Ting Liu


arXiv: Computation and Language | 2015

Target-Dependent Sentiment Classification with Long Short Term Memory

Duyu Tang; Bing Qin; Xiaocheng Feng; Ting Liu


Meeting of the Association for Computational Linguistics | 2016

A Language-Independent Neural Network for Event Detection

Xiaocheng Feng; Lifu Huang; Duyu Tang; Heng Ji; Bing Qin; Ting Liu


International Conference on Computational Linguistics | 2016

Bitext Name Tagging for Cross-lingual Entity Annotation Projection

Dongxu Zhang; Boliang Zhang; Xiaoman Pan; Xiaocheng Feng; Heng Ji; Weiran Xu


International Conference on Computational Linguistics | 2016

English-Chinese Knowledge Base Translation with Neural Network

Xiaocheng Feng; Duyu Tang; Bing Qin; Ting Liu

Collaboration


Dive into Xiaocheng Feng's collaborations.

Top Co-Authors

Bing Qin (Harbin Institute of Technology)
Ting Liu (Harbin Institute of Technology)
Duyu Tang (Harbin Institute of Technology)
Yibo Sun (Harbin Institute of Technology)
Heng Ji (Rensselaer Polytechnic Institute)
Lifu Huang (Rensselaer Polytechnic Institute)
Daya Guo (Sun Yat-sen University)
Jiahao Liu (Harbin Institute of Technology)
Jiang Guo (Harbin Institute of Technology)