
Publication


Featured research published by Ruobing Xie.


Meeting of the Association for Computational Linguistics | 2017

Improved Word Representation Learning with Sememes

Yilin Niu; Ruobing Xie; Zhiyuan Liu; Maosong Sun

Sememes are minimum semantic units of word meanings, and the meaning of each word sense is typically composed of several sememes. Since sememes are not explicit for each word, people manually annotate word sememes and form linguistic common-sense knowledge bases. In this paper, we show that word sememe information can improve word representation learning (WRL), which maps words into a low-dimensional semantic space and serves as a fundamental step for many NLP tasks. The key idea is to utilize word sememes to accurately capture the exact meanings of a word within specific contexts. More specifically, we follow the framework of Skip-gram and present three sememe-encoded models to learn representations of sememes, senses and words, where we apply an attention scheme to detect word senses in various contexts. We conduct experiments on two tasks, word similarity and word analogy, and our models significantly outperform baselines. The results indicate that WRL can benefit from sememes via the attention scheme, and also confirm that our models are capable of correctly modeling sememe information.
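To make the attention scheme concrete, below is a minimal Python sketch (an illustration, not the authors' released code) of how a context vector can weight a word's senses, where each sense is represented by its sememe embeddings; the toy sense inventory, names, and dimensions are all assumptions.

```python
# Minimal sketch of context-driven attention over a word's senses,
# each sense represented by the average of its sememe embeddings.
# All names and dimensions here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
dim = 50

# Hypothetical toy inventory: sememe embeddings and a word with two senses.
sememe_emb = {s: rng.normal(size=dim) for s in ["fruit", "company", "device"]}
senses_of_apple = {"apple_fruit": ["fruit"], "apple_inc": ["company", "device"]}

def sense_vector(sememes):
    """A sense is represented as the average of its sememe embeddings."""
    return np.mean([sememe_emb[s] for s in sememes], axis=0)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def contextual_word_vector(context_vec, senses):
    """Attention over senses: weight each sense by its similarity to the context."""
    names = list(senses)
    S = np.stack([sense_vector(senses[n]) for n in names])  # (num_senses, dim)
    att = softmax(S @ context_vec)                          # attention weights
    return att @ S, dict(zip(names, att))                   # soft word embedding

# Usage: a context vector (e.g., the average of surrounding word embeddings).
context = sememe_emb["company"] + 0.1 * rng.normal(size=dim)
word_vec, weights = contextual_word_vector(context, senses_of_apple)
print({k: round(float(v), 3) for k, v in weights.items()})
```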


International Joint Conference on Artificial Intelligence | 2017

Image-embodied Knowledge Representation Learning

Ruobing Xie; Zhiyuan Liu; Huanbo Luan; Maosong Sun

Entity images could provide significant visual information for knowledge representation learning. Most conventional methods learn knowledge representations merely from structured triples, ignoring rich visual information extracted from entity images. In this paper, we propose a novel Image-embodied Knowledge Representation Learning model (IKRL), where knowledge representations are learned with both triple facts and images. More specifically, we first construct representations for all images of an entity with a neural image encoder. These image representations are then integrated into an aggregated image-based representation via an attention-based method. We evaluate our IKRL models on knowledge graph completion and triple classification. Experimental results demonstrate that our models outperform all baselines on both tasks, which indicates the significance of visual information for knowledge representations and the capability of our models in learning knowledge representations with images.
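The sketch below illustrates the two components described above, attention-based aggregation of an entity's image features and a translation-style triple score; it is an illustration under assumed dimensions and a made-up projection matrix, not the released IKRL implementation.

```python
# Minimal sketch (not the IKRL code): (1) aggregate several image-derived
# vectors for an entity with attention, (2) score a triple in a
# translation-based way combining structural and image-based embeddings.
import numpy as np

rng = np.random.default_rng(1)
img_dim, ent_dim = 128, 50
proj = rng.normal(scale=0.1, size=(img_dim, ent_dim))  # image -> entity space

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def aggregate_images(image_feats, struct_emb):
    """Attention over an entity's image features, guided by its structural embedding."""
    projected = image_feats @ proj             # (num_images, ent_dim)
    att = softmax(projected @ struct_emb)      # higher weight for compatible images
    return att @ projected                     # aggregated image-based embedding

def triple_score(h_s, h_i, r, t_s, t_i):
    """Translation-based energy ||h + r - t||; lower means more plausible."""
    return sum(np.linalg.norm(h + r - t)
               for h in (h_s, h_i) for t in (t_s, t_i))

# Usage with toy embeddings: a head entity with three images.
head_struct = rng.normal(size=ent_dim)
head_img = aggregate_images(rng.normal(size=(3, img_dim)), head_struct)
tail_struct, tail_img = rng.normal(size=ent_dim), rng.normal(size=ent_dim)
rel = rng.normal(size=ent_dim)
print(triple_score(head_struct, head_img, rel, tail_struct, tail_img))
```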


International Joint Conference on Artificial Intelligence | 2017

Lexical Sememe Prediction via Word Embeddings and Matrix Factorization

Ruobing Xie; Xingchi Yuan; Zhiyuan Liu; Maosong Sun

Sememes are defined as the minimum semantic units of human languages. People have manually annotated lexical sememes for words and formed linguistic knowledge bases. However, manual construction is time-consuming and labor-intensive, and introduces significant annotation inconsistency and noise. In this paper, we explore, for the first time, automatically predicting lexical sememes based on the semantic meanings of words encoded by word embeddings. Moreover, we apply matrix factorization to learn semantic relations between sememes and words. In experiments, we use the real-world sememe knowledge base HowNet for training and evaluation, and the results demonstrate the effectiveness of our method for lexical sememe prediction. Our method will be of great use for verifying annotations in existing noisy sememe knowledge bases and for suggesting annotations for new words and phrases.
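As a rough illustration of the matrix-factorization idea (not the paper's exact model), the sketch below holds pretrained word embeddings fixed, solves for sememe vectors that reconstruct a toy word-sememe annotation matrix, and ranks sememes for an unseen word by dot product; the synthetic data, dimensions, and regularization value are assumptions.

```python
# Minimal sketch: predict sememes by factorizing a word-sememe annotation
# matrix against fixed word embeddings, then score sememes for new words.
import numpy as np

rng = np.random.default_rng(2)
num_words, num_sememes, dim = 200, 30, 50

W = rng.normal(size=(num_words, dim))                            # pretrained word embeddings (fixed)
M = (rng.random((num_words, num_sememes)) < 0.1).astype(float)   # toy word-sememe labels

# Ridge-regularized least squares: find S such that M is approximated by W @ S.T
lam = 1.0
S = np.linalg.solve(W.T @ W + lam * np.eye(dim), W.T @ M).T      # (num_sememes, dim)

def predict_sememes(word_vec, top_k=5):
    """Rank sememes for a (possibly new) word by embedding similarity."""
    scores = S @ word_vec
    return np.argsort(-scores)[:top_k]

# Usage: recommend sememes for a new word given only its embedding.
new_word_vec = rng.normal(size=dim)
print(predict_sememes(new_word_vec))
```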


International Joint Conference on Artificial Intelligence | 2017

Iterative Entity Alignment via Joint Knowledge Embeddings

Hao Zhu; Ruobing Xie; Zhiyuan Liu; Maosong Sun

Entity alignment aims to link entities and their counterparts across multiple knowledge graphs (KGs). Most existing methods rely on external information about entities, such as Wikipedia links, and require costly manual feature construction to complete alignment. In this paper, we present a novel approach to entity alignment via joint knowledge embeddings. Our method jointly encodes both entities and relations of various KGs into a unified low-dimensional semantic space according to a small seed set of aligned entities. During this process, we can align entities according to their semantic distance in this joint space. More specifically, we present an iterative and parameter-sharing method to improve alignment performance. Experimental results on real-world datasets show that, compared to baselines, our method achieves significant improvements on entity alignment and can further improve knowledge graph completion performance on various KGs with the help of joint knowledge embeddings.
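The toy sketch below (not the authors' implementation) shows the iterative idea in spirit: seed-aligned entities share parameters in a joint space, confident nearest-neighbour pairs are promoted to the seed set, and the process repeats; the synthetic KGs, distance threshold, and number of rounds are assumptions.

```python
# Minimal sketch of iterative entity alignment in a joint embedding space:
# seed pairs share parameters, new confident pairs are added each round.
import numpy as np

rng = np.random.default_rng(3)
dim, n = 50, 100
E1 = rng.normal(size=(n, dim))                 # entity embeddings of KG1
E2 = E1 + 0.05 * rng.normal(size=(n, dim))     # KG2: same entities, perturbed
seeds = {i: i for i in range(10)}              # small seed set of aligned pairs

def align_round(E1, E2, seeds, threshold=0.5):
    # Parameter sharing for known pairs: pull both sides to a common vector.
    for i, j in seeds.items():
        shared = (E1[i] + E2[j]) / 2
        E1[i], E2[j] = shared, shared
    # Propose new alignments by nearest neighbour under a distance threshold.
    dists = np.linalg.norm(E1[:, None, :] - E2[None, :, :], axis=-1)  # (n, n)
    new = {}
    for i in range(n):
        if i in seeds:
            continue
        j = int(np.argmin(dists[i]))
        if dists[i, j] < threshold and j not in seeds.values() and j not in new.values():
            new[i] = j
    return new

for _ in range(3):                              # a few iterative rounds
    seeds.update(align_round(E1, E2, seeds))
    print(f"aligned pairs so far: {len(seeds)}")
```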


National Conference on Artificial Intelligence | 2016

Representation Learning of Knowledge Graphs with Entity Descriptions

Ruobing Xie; Zhiyuan Liu; Jia Jia; Huanbo Luan; Maosong Sun


International Joint Conference on Artificial Intelligence | 2016

Representation Learning of Knowledge Graphs with Hierarchical Types

Ruobing Xie; Zhiyuan Liu; Maosong Sun


arXiv: Computation and Language | 2016

Neural Emoji Recommendation in Dialogue Systems

Ruobing Xie; Zhiyuan Liu; Rui Yan; Maosong Sun


Neural Information Processing Systems | 2016

Crossmodal Language Grounding, Learning, and Teaching

Stefan Heinrich; Cornelius Weber; Stefan Wermter; Ruobing Xie; Yankai Lin; Zhiyuan Liu


arXiv: Computation and Language | 2016

Knowledge Representation via Joint Learning of Sequential Text and Knowledge Graphs

Jiawei Wu; Ruobing Xie; Zhiyuan Liu; Maosong Sun


National Conference on Artificial Intelligence | 2018

Does William Shakespeare REALLY Write Hamlet? Knowledge Representation Learning with Confidence

Ruobing Xie; Zhiyuan Liu; Fen Lin; Leyu Lin

Collaboration


Dive into Ruobing Xie's collaborations.

Top Co-Authors

Stefan Heinrich

Hamburg University of Technology
