Yankai Lin
Tsinghua University
Publication
Featured researches published by Yankai Lin.
empirical methods in natural language processing | 2015
Yankai Lin; Zhiyuan Liu; Huanbo Luan; Maosong Sun; Siwei Rao; Song Liu
Representation learning of knowledge bases aims to embed both entities and relations into a low-dimensional space. Most existing methods only consider direct relations in representation learning. We argue that multiple-step relation paths also contain rich inference patterns between entities, and propose a path-based representation learning model. This model considers relation paths as translations between entities for representation learning, and addresses two key challenges: (1) Since not all relation paths are reliable, we design a path-constraint resource allocation algorithm to measure the reliability of relation paths. (2) We represent relation paths via semantic composition of relation embeddings. Experimental results on real-world datasets show that, as compared with baselines, our model achieves significant and consistent improvements on knowledge base completion and relation extraction from text. The source code of this paper can be obtained from https://github.com/mrlyk423/relation_extraction.
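The core idea of treating relation paths as translations can be sketched as follows. This is a minimal illustration, not the paper's implementation: the toy embeddings, dimensions, and the additive composition of relation vectors are assumptions for demonstration.

```python
import numpy as np

# Hypothetical toy embeddings; names and dimensions are illustrative only.
dim = 4
rng = np.random.default_rng(0)
entity = {e: rng.normal(size=dim) for e in ["h", "t"]}
relation = {r: rng.normal(size=dim) for r in ["born_in", "city_of"]}

def transe_score(h, r, t):
    """Translation-based energy ||h + r - t||: lower means a better fit."""
    return np.linalg.norm(h + r - t, ord=1)

def path_score(h, path, t):
    """Compose a multi-step relation path by adding its relation vectors
    (one possible composition), then score the composed path as a
    single translation from h to t."""
    composed = sum((relation[r] for r in path), np.zeros(dim))
    return transe_score(h, composed, t)

direct = transe_score(entity["h"], relation["born_in"], entity["t"])
via_path = path_score(entity["h"], ["born_in", "city_of"], entity["t"])
```

A reliability weight for each path (as produced by a resource-allocation measure) would then scale each path's contribution to the training objective.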
meeting of the association for computational linguistics | 2016
Yankai Lin; Shiqi Shen; Zhiyuan Liu; Huanbo Luan; Maosong Sun
Distant supervised relation extraction has been widely used to find novel relational facts from text. However, distant supervision inevitably suffers from the wrong labelling problem, and these noisy data substantially hurt the performance of relation extraction. To alleviate this issue, we propose a sentence-level attention-based model for relation extraction. In this model, we employ convolutional neural networks to embed the semantics of sentences. Afterwards, we build sentence-level attention over multiple instances, which is expected to dynamically reduce the weights of noisy instances. Experimental results on real-world datasets show that our model can make full use of all informative sentences and effectively reduce the influence of wrongly labelled instances. Our model achieves significant and consistent improvements on relation extraction as compared with baselines. The source code of this paper can be obtained from https://github.com/thunlp/NRE.
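The sentence-level attention step can be sketched in a few lines. This is a simplified illustration under assumed inputs: the sentence embeddings stand in for CNN outputs, and the relation query vector is hypothetical.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def selective_attention(sentence_reps, relation_query):
    """Weight each sentence in a bag by its compatibility with the
    relation query, so noisy sentences receive small weights, then
    return the attention-weighted bag representation."""
    scores = sentence_reps @ relation_query
    alpha = softmax(scores)
    return alpha @ sentence_reps, alpha

# Toy bag: three sentence embeddings (stand-ins for CNN outputs).
bag = np.array([[1.0, 0.0, 0.0, 0.0],
                [0.9, 0.1, 0.0, 0.0],
                [0.0, 0.0, 1.0, 0.0]])   # a "noisy" sentence
query = np.array([1.0, 0.0, 0.0, 0.0])   # relation representation
rep, alpha = selective_attention(bag, query)
```

The third sentence, being far from the relation query, gets a smaller attention weight, which is exactly the de-noising effect the abstract describes.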
empirical methods in natural language processing | 2016
Huimin Chen; Maosong Sun; Cunchao Tu; Yankai Lin; Zhiyuan Liu
Document-level sentiment classification aims to predict a user’s overall sentiment in a document about a product. However, most existing methods only focus on local text information and ignore global user preferences and product characteristics. Even though some works take such information into account, they usually suffer from high model complexity and only consider word-level preference rather than multiple semantic levels. To address this issue, we propose a hierarchical neural network to incorporate global user and product information into sentiment classification. Our model first builds a hierarchical LSTM model to generate sentence and document representations. Afterwards, user and product information is incorporated via attention over different semantic levels, owing to attention's ability to capture crucial semantic components. The experimental results show that our model achieves significant and consistent improvements compared to all state-of-the-art methods. The source code of this paper can be obtained from https://github.com/thunlp/NSC.
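The hierarchical attention step can be sketched as one pooling function applied at both semantic levels. This is a minimal sketch under assumptions: the hidden states are random stand-ins for LSTM outputs, and conditioning the attention on the sum of user and product embeddings is an illustrative choice, not necessarily the paper's exact formulation.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attentive_pool(states, context):
    """Pool a sequence of hidden states into one vector, with attention
    scores conditioned on a context vector (here, user + product)."""
    alpha = softmax(states @ context)
    return alpha @ states

# Toy setup: stand-ins for LSTM hidden states; names/dims illustrative.
rng = np.random.default_rng(1)
user, product = rng.normal(size=4), rng.normal(size=4)
context = user + product

word_states = rng.normal(size=(5, 4))      # per-word hidden states
sentence_rep = attentive_pool(word_states, context)   # word level

sent_states = rng.normal(size=(3, 4))      # per-sentence hidden states
doc_rep = attentive_pool(sent_states, context)        # sentence level
```

The same user/product context steers attention at both the word level and the sentence level, which is what "attention over different semantic levels" refers to.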
meeting of the association for computational linguistics | 2017
Yankai Lin; Zhiyuan Liu; Maosong Sun
Relation extraction has been widely used for finding unknown relational facts from plain text. Most existing methods focus on exploiting mono-lingual data for relation extraction, ignoring the massive information available in texts in other languages. To address this issue, we introduce a multi-lingual neural relation extraction framework, which employs mono-lingual attention to utilize the information within mono-lingual texts and further proposes cross-lingual attention to model the consistency and complementarity of information among cross-lingual texts. Experimental results on real-world datasets show that our model can take advantage of multi-lingual texts and consistently achieves significant improvements on relation extraction as compared with baselines.
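One way to read the mono-lingual vs. cross-lingual attention split is sketched below. This is an assumed, simplified construction: the bag embeddings, the relation query, and the choice to use each language's aggregate as the query for the other language's bag are all illustrative, not the paper's exact architecture.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attend(reps, query):
    """Attention-weighted aggregation of a bag of sentence embeddings."""
    alpha = softmax(reps @ query)
    return alpha @ reps

# Toy bags of sentence embeddings in two languages for one entity pair.
rng = np.random.default_rng(2)
en_bag = rng.normal(size=(3, 4))
zh_bag = rng.normal(size=(2, 4))
relation_q = rng.normal(size=4)

# Mono-lingual attention: aggregate each language's bag on its own.
en_rep = attend(en_bag, relation_q)
zh_rep = attend(zh_bag, relation_q)

# Cross-lingual attention (sketch): attend over one language's sentences
# using the other language's aggregate as the query, so consistent
# cross-lingual evidence is emphasized.
en_cross = attend(en_bag, zh_rep)
zh_cross = attend(zh_bag, en_rep)

final = np.concatenate([en_rep, zh_rep, en_cross, zh_cross])
```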
Journal of Computer Science and Technology | 2017
Ayana; Shiqi Shen; Yankai Lin; Cunchao Tu; Yu Zhao; Zhiyuan Liu; Maosong Sun
Recently, neural models have been proposed for headline generation by learning to map documents to headlines with recurrent neural networks. In this work, we give a detailed introduction to, and comparison of, existing work and recent improvements in neural headline generation, with particular attention to how encoders, decoders, and neural model training strategies alter the overall performance of the headline generation system. Furthermore, we perform quantitative analysis of most existing neural headline generation systems and summarize several key factors that impact their performance. Meanwhile, we carry out detailed error analysis on typical neural headline generation systems in order to gain more insight. We hope our results and conclusions will benefit future research.
national conference on artificial intelligence | 2015
Yankai Lin; Zhiyuan Liu; Maosong Sun; Yang Liu; Xuan Zhu
international joint conference on artificial intelligence | 2016
Yankai Lin; Zhiyuan Liu; Maosong Sun
empirical methods in natural language processing | 2017
Wenyuan Zeng; Yankai Lin; Zhiyuan Liu; Maosong Sun
Theory and Applications of Categories | 2011
Yan Wang; Yankai Lin; Zhiyuan Liu; Maosong Sun
meeting of the association for computational linguistics | 2018
Yankai Lin; Haozhe Ji; Zhiyuan Liu; Maosong Sun