Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Lili Mou is active.

Publication


Featured research published by Lili Mou.


Empirical Methods in Natural Language Processing (EMNLP) | 2015

Classifying Relations via Long Short Term Memory Networks along Shortest Dependency Paths

Yan Xu; Lili Mou; Ge Li; Yunchuan Chen; Hao Peng; Zhi Jin

Relation classification is an important research arena in the field of natural language processing (NLP). In this paper, we present SDP-LSTM, a novel neural network to classify the relation of two entities in a sentence. Our neural architecture leverages the shortest dependency path (SDP) between two entities; multichannel recurrent neural networks, with long short term memory (LSTM) units, pick up heterogeneous information along the SDP. Our proposed model has several distinct features: (1) The shortest dependency paths retain the most relevant information for relation classification, while eliminating irrelevant words in the sentence. (2) The multichannel LSTM networks allow effective information integration from heterogeneous sources over the dependency paths. (3) A customized dropout strategy regularizes the neural network to alleviate overfitting. We test our model on the SemEval 2010 relation classification task, and achieve an F1-score of 83.7%, higher than competing methods in the literature.

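To make the multichannel idea concrete, here is a minimal PyTorch sketch (not the authors' code; the layer sizes, channel set, and pooling choice are illustrative assumptions): each channel along the shortest dependency path gets its own embedding table and LSTM, the per-channel outputs are pooled over the path, and the concatenation feeds a relation classifier.

    import torch
    import torch.nn as nn

    class SDPLSTMSketch(nn.Module):
        # One embedding table and LSTM per channel (e.g. words, POS tags,
        # dependency relations, hypernyms); the channel set is assumed.
        def __init__(self, vocab_sizes, emb_dim=50, hidden=100, n_relations=19):
            super().__init__()
            self.embs = nn.ModuleList(nn.Embedding(v, emb_dim) for v in vocab_sizes)
            self.lstms = nn.ModuleList(
                nn.LSTM(emb_dim, hidden, batch_first=True) for _ in vocab_sizes)
            self.out = nn.Linear(hidden * len(vocab_sizes), n_relations)

        def forward(self, channels):
            # channels: one (batch, path_len) id tensor per channel
            pooled = []
            for ids, emb, lstm in zip(channels, self.embs, self.lstms):
                h, _ = lstm(emb(ids))               # (batch, path_len, hidden)
                pooled.append(h.max(dim=1).values)  # pool over the path
            return self.out(torch.cat(pooled, dim=-1))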

Annual Meeting of the Association for Computational Linguistics (ACL) | 2016

Natural Language Inference by Tree-Based Convolution and Heuristic Matching

Lili Mou; Rui Men; Ge Li; Yan Xu; Lu Zhang; Rui Yan; Zhi Jin

In this paper, we propose the TBCNN-pair model to recognize entailment and contradiction between two sentences. In our model, a tree-based convolutional neural network (TBCNN) captures sentence-level semantics; then heuristic matching layers like concatenation, element-wise product/difference combine the information in individual sentences. Experimental results show that our model outperforms existing sentence encoding-based approaches by a large margin.

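The heuristic matching layer itself is simple enough to show directly. A sketch (any sentence encoder could stand in for the TBCNN; the combination follows the abstract):

    import torch

    def heuristic_matching(h1, h2):
        # Combine two sentence vectors by concatenation, element-wise
        # product, and element-wise difference, as described above.
        return torch.cat([h1, h2, h1 * h2, h1 - h2], dim=-1)

    # Usage: feed the result to a softmax classifier over
    # {entailment, contradiction, neutral}.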

Empirical Methods in Natural Language Processing (EMNLP) | 2016

How Transferable are Neural Networks in NLP Applications?

Lili Mou; Zhao Meng; Rui Yan; Ge Li; Yan Xu; Lu Zhang; Zhi Jin

Transfer learning aims to make use of valuable knowledge in a source domain to help model performance in a target domain. It is particularly important for neural networks, which are prone to overfitting. In some fields like image processing, many studies have shown the effectiveness of neural network-based transfer learning. For neural NLP, however, existing studies have only casually applied transfer learning, and conclusions are inconsistent. In this paper, we conduct systematic case studies and provide an illuminating picture of the transferability of neural networks in NLP.

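One common transfer scheme the study covers is parameter initialization: train on the source task, copy the shared layers into the target model, and fine-tune. A hedged sketch (the toy model, sizes, and the "out." naming convention are assumptions, not the paper's setup):

    import torch
    import torch.nn as nn

    class Classifier(nn.Module):
        def __init__(self, n_classes, vocab=10000, dim=100):
            super().__init__()
            self.emb = nn.Embedding(vocab, dim)
            self.lstm = nn.LSTM(dim, dim, batch_first=True)
            self.out = nn.Linear(dim, n_classes)   # task-specific layer

        def forward(self, ids):
            h, _ = self.lstm(self.emb(ids))
            return self.out(h[:, -1])

    source = Classifier(n_classes=3)
    # ... train `source` on the source task ...
    target = Classifier(n_classes=5)
    state = {k: v for k, v in source.state_dict().items()
             if not k.startswith("out.")}           # drop task-specific layer
    target.load_state_dict(state, strict=False)     # initialize, then fine-tune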

Empirical Methods in Natural Language Processing (EMNLP) | 2015

A Comparative Study on Regularization Strategies for Embedding-based Neural Networks

Hao Peng; Lili Mou; Ge Li; Yunchuan Chen; Yangyang Lu; Zhi Jin

This paper compares different regularization strategies to address a common phenomenon, severe overfitting, in embedding-based neural networks for NLP. We chose two widely studied neural models and tasks as our testbed and tried several frequently applied or newly proposed regularization strategies, including penalizing weights (embeddings excluded), penalizing embeddings, re-embedding words, and dropout. We also emphasized incremental hyperparameter tuning and combining different regularizations. The results provide a picture of how to tune hyperparameters for neural NLP models.

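As an illustration of one strategy pair from the comparison, here is a sketch of an L2 weight penalty that can include or exclude the embedding table (the "emb" naming convention and the coefficient are assumptions):

    import torch.nn as nn

    def l2_penalty(model: nn.Module, exclude_embeddings: bool = True):
        # Sum of squared parameters; optionally leave embeddings unpenalized.
        total = 0.0
        for name, p in model.named_parameters():
            if exclude_embeddings and "emb" in name:
                continue
            total = total + p.pow(2).sum()
        return total

    # Usage: loss = task_loss + 1e-4 * l2_penalty(model)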

Annual Meeting of the Association for Computational Linguistics (ACL) | 2017

How to Make Context More Useful? An Empirical Study on Context-Aware Neural Conversational Models

Zhiliang Tian; Rui Yan; Lili Mou; Yiping Song; Yansong Feng; Dongyan Zhao

Generative conversational systems are attracting increasing attention in natural language processing (NLP). Recently, researchers have noticed the importance of context information in dialog processing, and built various models to utilize context. However, there is no systematic comparison to analyze how to use context effectively. In this paper, we conduct an empirical study to compare various models and investigate the effect of context information in dialog systems. We also propose a variant that explicitly weights context vectors by context-query relevance, outperforming the other baselines.

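A sketch of the relevance-weighting idea from the last sentence (the exact scoring in the paper may differ; cosine similarity plus softmax is an assumption here): each context-utterance vector is weighted by its similarity to the current query before the vectors are combined.

    import torch
    import torch.nn.functional as F

    def relevance_weighted_context(context_vecs, query_vec):
        # context_vecs: (n_utterances, dim); query_vec: (dim,)
        sims = F.cosine_similarity(context_vecs, query_vec.unsqueeze(0), dim=-1)
        weights = torch.softmax(sims, dim=0)        # context-query relevance
        return (weights.unsqueeze(-1) * context_vecs).sum(dim=0)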

Conference on Information and Knowledge Management (CIKM) | 2016

Distilling Word Embeddings: An Encoding Approach

Lili Mou; Ran Jia; Yan Xu; Ge Li; Lu Zhang; Zhi Jin

Distilling knowledge from a well-trained cumbersome network to a small one has recently become a new research topic, as lightweight neural networks with high performance are particularly in need in various resource-restricted systems. This paper addresses the problem of distilling word embeddings for NLP tasks. We propose an encoding approach to distill task-specific knowledge from a set of high-dimensional embeddings, so that we can reduce model complexity by a large margin as well as retain high accuracy, achieving a good compromise between efficiency and performance. Experiments reveal the phenomenon that distilling knowledge from cumbersome embeddings is better than directly training neural networks with small embeddings.

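A minimal sketch of the encoding idea (the tanh non-linearity and the dimensions are assumptions): a trainable projection maps frozen high-dimensional embeddings into a small space while the whole model trains on the task, so the small vectors absorb task-specific knowledge.

    import torch
    import torch.nn as nn

    class DistilledEmbedding(nn.Module):
        def __init__(self, pretrained, small_dim=50):
            super().__init__()
            # pretrained: (vocab, big_dim) float tensor of cumbersome embeddings
            self.big = nn.Embedding.from_pretrained(pretrained, freeze=True)
            self.encode = nn.Linear(pretrained.size(1), small_dim)

        def forward(self, ids):
            return torch.tanh(self.encode(self.big(ids)))

    # After task training, a compact table can be baked out once:
    # small_table = torch.tanh(layer.encode(layer.big.weight))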

Conference of the International Speech Communication Association (INTERSPEECH) | 2016

Dialogue Session Segmentation by Embedding-Enhanced TextTiling

Yiping Song; Lili Mou; Rui Yan; Li Yi; Zinan Zhu; Xiaohua Hu; Ming Zhang

Funding: National Natural Science Foundation of China (NSFC) grants 61272343 and 61472006; Doctoral Program of Higher Education of China grant 20130001110032; National Basic Research Program of China (973 Program) grant 2014CB340405.


International Journal of Software Engineering and Knowledge Engineering | 2014

Learning Non-Taxonomic Relations on Demand for Ontology Extension

Yan Xu; Ge Li; Lili Mou; Yangyang Lu

Learning non-taxonomic relations has become an important research topic in ontology extension. Most existing learning approaches are based on expert-crafted corpora; they are normally domain-specific, and corpus acquisition is laborious and costly. Moreover, static corpora cannot meet personalized needs for discovering semantic relations across various taxonomies. In this paper, we propose a novel approach for learning non-taxonomic relations on demand. For any supplied taxonomy, it focuses on the relevant segment of the taxonomy and dynamically collects information about the taxonomic concepts, using Wikipedia as a learning source. From the newly generated corpus, non-taxonomic relations are acquired in three steps: (a) semantic relatedness detection; (b) relation extraction between concepts; and (c) relation generalization within a hierarchy. The approach is evaluated on three predefined taxonomies, and the experimental results show that it is effective in capturing non-taxonomic relations as needed and has good potential for on-demand ontology extension.

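To illustrate step (a) of the pipeline, a toy relatedness filter (the scoring function and threshold are placeholders, not the paper's Wikipedia-based method): only concept pairs that clear a relatedness threshold move on to relation extraction.

    from itertools import combinations

    def related_pairs(concepts, relatedness, threshold=0.3):
        # relatedness: callable (a, b) -> score in [0, 1]
        return [(a, b) for a, b in combinations(concepts, 2)
                if relatedness(a, b) >= threshold]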

Annual Meeting of the Association for Computational Linguistics (ACL) | 2016

Compressing Neural Language Models by Sparse Word Representations

Yunchuan Chen; Lili Mou; Yan Xu; Ge Li; Zhi Jin



European Conference on Information Retrieval (ECIR) | 2018

Affective Neural Response Generation

Nabiha Asghar; Pascal Poupart; Jesse Hoey; Xin Jiang; Lili Mou


Collaboration


Dive into Lili Mou's collaboration.

Top Co-Authors

Yan Xu

Chinese Academy of Sciences
