Ruobing Xie
Tsinghua University
Publication
Featured research published by Ruobing Xie.
Meeting of the Association for Computational Linguistics | 2017
Yilin Niu; Ruobing Xie; Zhiyuan Liu; Maosong Sun
Sememes are minimum semantic units of word meanings, and the meaning of each word sense is typically composed of several sememes. Since sememes are not explicit for each word, people manually annotate word sememes and form linguistic common-sense knowledge bases. In this paper, we show that word sememe information can improve word representation learning (WRL), which maps words into a low-dimensional semantic space and serves as a fundamental step for many NLP tasks. The key idea is to utilize word sememes to accurately capture the exact meanings of a word within specific contexts. More specifically, we follow the framework of Skip-gram and present three sememe-encoded models to learn representations of sememes, senses and words, where we apply an attention scheme to detect word senses in various contexts. We conduct experiments on two tasks, word similarity and word analogy, and our models significantly outperform baselines. The results indicate that WRL can benefit from sememes via the attention scheme, and also confirm that our models are capable of correctly modeling sememe information.
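The core idea in the abstract, composing sense vectors from sememe embeddings and letting a context vector attend over the senses, can be sketched as follows. This is a minimal illustrative toy with made-up sememes and random embeddings, not the paper's actual model or data.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 8

# Hypothetical sememe inventory; in the paper sememes come from HowNet.
sememe_emb = {s: rng.normal(size=dim) for s in ["human", "drink", "container"]}
# A toy polysemous word with two senses, each a set of sememes.
senses = [["human"], ["drink", "container"]]

def sense_vectors(senses, sememe_emb):
    # Each sense embedding is the average of its sememe embeddings.
    return np.stack([np.mean([sememe_emb[s] for s in sense], axis=0)
                     for sense in senses])

def attentive_word_vector(context_vec, senses, sememe_emb):
    S = sense_vectors(senses, sememe_emb)   # (num_senses, dim)
    scores = S @ context_vec                # dot-product attention scores
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                # softmax over senses
    return weights @ S                      # context-aware word vector

context = rng.normal(size=dim)
vec = attentive_word_vector(context, senses, sememe_emb)
```

In the full model these representations would be trained jointly under a Skip-gram objective; the sketch only shows the attention-over-senses step.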
International Joint Conference on Artificial Intelligence | 2017
Ruobing Xie; Zhiyuan Liu; Huanbo Luan; Maosong Sun
Entity images could provide significant visual information for knowledge representation learning. Most conventional methods learn knowledge representations merely from structured triples, ignoring rich visual information extracted from entity images. In this paper, we propose a novel Image-embodied Knowledge Representation Learning model (IKRL), where knowledge representations are learned with both triple facts and images. More specifically, we first construct representations for all images of an entity with a neural image encoder. These image representations are then integrated into an aggregated image-based representation via an attention-based method. We evaluate our IKRL models on knowledge graph completion and triple classification. Experimental results demonstrate that our models outperform all baselines on both tasks, which indicates the significance of visual information for knowledge representations and the capability of our models in learning knowledge representations with images.
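The attention-based aggregation step described above, combining several image embeddings of one entity into a single image-based representation, can be sketched as a few lines of numpy. The shapes and random inputs are illustrative assumptions, not the paper's encoder or data.

```python
import numpy as np

rng = np.random.default_rng(1)
dim = 8

# Outputs of a (here hypothetical) neural image encoder for 5 images
# of one entity, plus the entity's structure-based embedding.
image_embs = rng.normal(size=(5, dim))
struct_emb = rng.normal(size=dim)

# Attention: images that agree with the structure-based embedding
# contribute more to the aggregated image-based representation.
scores = image_embs @ struct_emb
weights = np.exp(scores - scores.max())
weights /= weights.sum()                 # softmax over images
img_based_emb = weights @ image_embs     # aggregated representation
```

The aggregated vector can then be scored against triples alongside the structure-based embedding, which is how the abstract describes learning from both sources.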
International Joint Conference on Artificial Intelligence | 2017
Ruobing Xie; Xingchi Yuan; Zhiyuan Liu; Maosong Sun
Sememes are defined as the minimum semantic units of human languages. People have manually annotated lexical sememes for words and formed linguistic knowledge bases. However, manual construction is time-consuming and labor-intensive, with significant annotation inconsistency and noise. In this paper, we explore, for the first time, automatically predicting lexical sememes based on the semantic meanings of words encoded by word embeddings. Moreover, we apply matrix factorization to learn semantic relations between sememes and words. In experiments, we use HowNet, a real-world sememe knowledge base, for training and evaluation, and the results reveal the effectiveness of our method for lexical sememe prediction. Our method will be of great use for annotation verification of existing noisy sememe knowledge bases and annotation suggestion for new words and phrases.
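The matrix-factorization idea in the abstract can be illustrated on a toy word-sememe matrix: factor the observed annotation matrix into word and sememe factors, then read predicted sememe scores off the reconstruction. This is a minimal sketch with random synthetic data and plain gradient steps, not the paper's actual formulation.

```python
import numpy as np

rng = np.random.default_rng(2)
n_words, n_sememes, k = 20, 10, 4

# Synthetic binary word-sememe annotation matrix (1 = word has sememe).
M = (rng.random((n_words, n_sememes)) < 0.2).astype(float)

# Low-rank factors: word embeddings W and sememe embeddings S.
W = rng.normal(scale=0.1, size=(n_words, k))
S = rng.normal(scale=0.1, size=(n_sememes, k))

init_err = np.mean((W @ S.T - M) ** 2)
lr = 0.05
for _ in range(500):            # gradient descent on 0.5*||W S^T - M||^2
    E = W @ S.T - M             # reconstruction error
    W, S = W - lr * (E @ S), S - lr * (E.T @ W)

pred = W @ S.T                  # predicted word-sememe scores
final_err = np.mean((pred - M) ** 2)
```

For a new word, one would place its embedding in the word-factor space and rank sememes by the resulting scores; high-scoring sememes become annotation suggestions.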
International Joint Conference on Artificial Intelligence | 2017
Hao Zhu; Ruobing Xie; Zhiyuan Liu; Maosong Sun
Entity alignment aims to link entities with their counterparts across multiple knowledge graphs (KGs). Most existing methods rely on external information about entities, such as Wikipedia links, and require costly manual feature construction to complete alignment. In this paper, we present a novel approach for entity alignment via joint knowledge embeddings. Our method jointly encodes both entities and relations of various KGs into a unified low-dimensional semantic space according to a small seed set of aligned entities. During this process, we can align entities according to their semantic distance in this joint space. More specifically, we present an iterative and parameter-sharing method to improve alignment performance. Experimental results on real-world datasets show that, compared to baselines, our method achieves significant improvements on entity alignment, and can further improve knowledge graph completion performance on various KGs with the help of joint knowledge embeddings.
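The alignment step described above, matching entities by semantic distance once both KGs live in one joint space, can be sketched as a nearest-neighbour lookup. The entity names and embeddings below are synthetic assumptions; the second KG's embeddings are perturbed copies of the first, mimicking the effect of seed-anchored joint training.

```python
import numpy as np

rng = np.random.default_rng(3)
dim = 8

# Entities of KG1 embedded in the joint space.
kg1 = {f"e1_{i}": rng.normal(size=dim) for i in range(5)}
# Counterparts in KG2: near-copies, as if trained into the same space
# via the seed alignments (parameter-sharing variant).
kg2 = {f"e2_{i}": kg1[f"e1_{i}"] + 0.01 * rng.normal(size=dim)
       for i in range(5)}

def align(entity, kg1, kg2):
    # Align by Euclidean distance in the joint semantic space.
    v = kg1[entity]
    return min(kg2, key=lambda name: np.linalg.norm(kg2[name] - v))

match = align("e1_3", kg1, kg2)
```

In the iterative variant the abstract mentions, high-confidence matches found this way would be added back as new seeds for the next round of joint embedding.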
National Conference on Artificial Intelligence | 2016
Ruobing Xie; Zhiyuan Liu; Jia Jia; Huanbo Luan; Maosong Sun
International Joint Conference on Artificial Intelligence | 2016
Ruobing Xie; Zhiyuan Liu; Maosong Sun
arXiv: Computation and Language | 2016
Ruobing Xie; Zhiyuan Liu; Rui Yan; Maosong Sun
Neural Information Processing Systems | 2016
Stefan Heinrich; Cornelius Weber; Stefan Wermter; Ruobing Xie; Yankai Lin; Zhiyuan Liu
arXiv: Computation and Language | 2016
Jiawei Wu; Ruobing Xie; Zhiyuan Liu; Maosong Sun
National Conference on Artificial Intelligence | 2018
Ruobing Xie; Zhiyuan Liu; Fen Lin; Leyu Lin