Publication


Featured research published by Pengfei Liu.


Empirical Methods in Natural Language Processing | 2015

Long Short-Term Memory Neural Networks for Chinese Word Segmentation

Xinchi Chen; Xipeng Qiu; Chenxi Zhu; Pengfei Liu; Xuanjing Huang

Currently, most state-of-the-art methods for Chinese word segmentation are based on supervised learning, whose features are mostly extracted from a local context. These methods cannot utilize long-distance information, which is also crucial for word segmentation. In this paper, we propose a novel neural network model for Chinese word segmentation, which adopts a long short-term memory (LSTM) neural network to keep previous important information in memory cells and avoid the limit of the window size of the local context. Experiments on the PKU, MSRA and CTB6 benchmark datasets show that our model outperforms previous neural network models and state-of-the-art methods.
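
A minimal sketch of the kind of model the abstract describes, written as a PyTorch character tagger that predicts BMES segmentation tags; the hyperparameters and the plain softmax output layer are illustrative assumptions, not the configuration reported in the paper.

```python
# Minimal LSTM sequence tagger for Chinese word segmentation (BMES tags).
# Hyperparameters and the linear tagging head are illustrative assumptions.
import torch
import torch.nn as nn

class LSTMSegmenter(nn.Module):
    def __init__(self, vocab_size, emb_dim=100, hidden_dim=150, num_tags=4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, num_tags)  # B / M / E / S

    def forward(self, char_ids):            # char_ids: (batch, seq_len)
        h, _ = self.lstm(self.embed(char_ids))
        return self.out(h)                  # (batch, seq_len, num_tags)

model = LSTMSegmenter(vocab_size=5000)
logits = model(torch.randint(0, 5000, (2, 10)))
print(logits.shape)  # torch.Size([2, 10, 4])
```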


Empirical Methods in Natural Language Processing | 2015

Multi-Timescale Long Short-Term Memory Neural Network for Modelling Sentences and Documents

Pengfei Liu; Xipeng Qiu; Xinchi Chen; Shiyu Wu; Xuanjing Huang

Neural network based methods have made great progress on a variety of natural language processing tasks. However, it is still a challenging task to model long texts, such as sentences and documents. In this paper, we propose a multi-timescale long short-term memory (MT-LSTM) neural network to model long texts. MT-LSTM partitions the hidden states of the standard LSTM into several groups, and each group is activated at different time periods. Thus, MT-LSTM can model very long documents as well as short sentences. Experiments on four benchmark datasets show that our model outperforms the other neural models on text classification tasks.
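
The core idea, hidden states split into groups that are refreshed at different timescales, could look roughly like the following PyTorch sketch; the grouping scheme, periods, and update rule are illustrative assumptions rather than the paper's exact formulation.

```python
# Simplified multi-timescale LSTM: the hidden/cell states are split into
# groups, and group g is only refreshed every periods[g] steps; otherwise it
# carries over its previous value.
import torch
import torch.nn as nn

class MultiTimescaleLSTM(nn.Module):
    def __init__(self, input_dim, hidden_dim, periods=(1, 2, 4)):
        super().__init__()
        assert hidden_dim % len(periods) == 0
        self.cell = nn.LSTMCell(input_dim, hidden_dim)
        self.periods = periods
        self.group_size = hidden_dim // len(periods)

    def forward(self, x):                     # x: (batch, seq_len, input_dim)
        batch, seq_len, _ = x.shape
        h = x.new_zeros(batch, self.cell.hidden_size)
        c = x.new_zeros(batch, self.cell.hidden_size)
        outputs = []
        for t in range(seq_len):
            h_new, c_new = self.cell(x[:, t], (h, c))
            # Mask that activates only the groups whose period divides t.
            mask = torch.cat([
                torch.full((self.group_size,), float(t % p == 0))
                for p in self.periods
            ]).to(x)                           # (hidden_dim,)
            h = mask * h_new + (1 - mask) * h
            c = mask * c_new + (1 - mask) * c
            outputs.append(h)
        return torch.stack(outputs, dim=1)     # (batch, seq_len, hidden_dim)

out = MultiTimescaleLSTM(8, 12)(torch.randn(2, 6, 8))
print(out.shape)  # torch.Size([2, 6, 12])
```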


Meeting of the Association for Computational Linguistics | 2017

Adversarial Multi-task Learning for Text Classification

Pengfei Liu; Xipeng Qiu; Xuanjing Huang

Neural network models have shown promise for multi-task learning, which focuses on learning shared layers to extract common, task-invariant features. However, in most existing approaches, the extracted shared features are prone to contamination by task-specific features or noise brought by other tasks. In this paper, we propose an adversarial multi-task learning framework that prevents the shared and private latent feature spaces from interfering with each other. We conduct extensive experiments on 16 different text classification tasks, which demonstrate the benefits of our approach. We also show that the shared knowledge learned by our proposed model can be regarded as off-the-shelf knowledge and easily transferred to new tasks. The datasets of all 16 tasks are publicly available at this http URL.
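
A hedged sketch of the shared-private setup with an adversarial task discriminator; the gradient-reversal trick and all module shapes are assumptions chosen for brevity, not necessarily the paper's exact training procedure.

```python
# Shared encoder + per-task private encoders; a task discriminator on the
# shared features is trained adversarially via gradient reversal, pushing the
# shared space toward task-invariant features.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        return x.view_as(x)
    @staticmethod
    def backward(ctx, grad):
        return -grad            # flip gradients flowing into the shared encoder

class AdvSharedPrivate(nn.Module):
    def __init__(self, emb_dim, hidden, num_tasks, num_classes):
        super().__init__()
        self.shared = nn.LSTM(emb_dim, hidden, batch_first=True)
        self.private = nn.ModuleList(
            nn.LSTM(emb_dim, hidden, batch_first=True) for _ in range(num_tasks))
        self.clf = nn.ModuleList(
            nn.Linear(2 * hidden, num_classes) for _ in range(num_tasks))
        self.discriminator = nn.Linear(hidden, num_tasks)

    def forward(self, emb, task_id):
        s = self.shared(emb)[0][:, -1]            # shared sentence feature
        p = self.private[task_id](emb)[0][:, -1]  # task-specific feature
        logits = self.clf[task_id](torch.cat([s, p], dim=-1))
        task_logits = self.discriminator(GradReverse.apply(s))
        return logits, task_logits                # cross-entropy on both heads

model = AdvSharedPrivate(50, 64, num_tasks=16, num_classes=2)
y, t = model(torch.randn(3, 7, 50), task_id=0)
print(y.shape, t.shape)  # torch.Size([3, 2]) torch.Size([3, 16])
```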


Meeting of the Association for Computational Linguistics | 2016

Deep Fusion LSTMs for Text Semantic Matching

Pengfei Liu; Xipeng Qiu; Jifan Chen; Xuanjing Huang

Recently, there has been rising interest in modelling the interactions of text pairs with deep neural networks. In this paper, we propose deep fusion LSTMs (DF-LSTMs) to model the strong interaction of a text pair in a recursive matching way. Specifically, DF-LSTMs consist of two interdependent LSTMs, each of which models a sequence under the influence of the other. We also use an external memory to increase the capacity of the LSTMs, thereby possibly capturing more complicated matching patterns. Experiments on two very large datasets demonstrate the efficacy of our proposed architecture. Furthermore, we present an elaborate qualitative analysis of our models, giving an intuitive understanding of how our model works.
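
A rough PyTorch sketch of two interdependent LSTMs, where each side reads its own word together with the other side's previous hidden state; the external memory mentioned in the abstract is omitted, and the dimensions are illustrative.

```python
# Two interdependent LSTMs: at each step, each cell consumes its own input
# concatenated with the other cell's previous hidden state.
import torch
import torch.nn as nn

class InterdependentLSTMs(nn.Module):
    def __init__(self, emb_dim, hidden):
        super().__init__()
        self.cell_a = nn.LSTMCell(emb_dim + hidden, hidden)
        self.cell_b = nn.LSTMCell(emb_dim + hidden, hidden)
        self.hidden = hidden

    def forward(self, xa, xb):             # (batch, len, emb_dim), equal lengths
        batch = xa.size(0)
        ha = xa.new_zeros(batch, self.hidden)
        ca = torch.zeros_like(ha)
        hb = torch.zeros_like(ha)
        cb = torch.zeros_like(ha)
        for t in range(xa.size(1)):
            # Right-hand sides use the *previous* hidden state of the other side.
            ha_next, ca = self.cell_a(torch.cat([xa[:, t], hb], dim=-1), (ha, ca))
            hb, cb = self.cell_b(torch.cat([xb[:, t], ha], dim=-1), (hb, cb))
            ha = ha_next
        return ha, hb                       # final representations of each text

ha, hb = InterdependentLSTMs(32, 64)(torch.randn(2, 5, 32), torch.randn(2, 5, 32))
print(ha.shape, hb.shape)  # torch.Size([2, 64]) torch.Size([2, 64])
```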


Meeting of the Association for Computational Linguistics | 2016

Implicit Discourse Relation Detection via a Deep Architecture with Gated Relevance Network

Jifan Chen; Qi Zhang; Pengfei Liu; Xipeng Qiu; Xuanjing Huang

Word pairs, which are among the most easily accessible features between two text segments, have proven very useful for detecting the discourse relations held between text segments. However, because of the data sparsity problem, the performance achieved by using word pair features is limited. In this paper, to overcome the data sparsity problem, we propose using word embeddings to replace the original words. Moreover, we adopt a gated relevance network to capture the semantic interaction between word pairs, and then aggregate those semantic interactions with a pooling layer to select the most informative interactions. Experimental results on the Penn Discourse Treebank show that the proposed method, without using manually designed features, achieves better performance on recognizing discourse-level relations across all relation types.
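
One plausible reading of a gated relevance score between two word embeddings, mixing a bilinear-tensor term and a single-layer term through a learned gate before projecting to a scalar; the slice count, dimensions, and final projection are assumptions rather than the paper's exact parameterisation.

```python
# Gated relevance score for a word pair: a gate blends a bilinear-tensor
# interaction with a single-layer interaction, then a linear layer scores it.
import torch
import torch.nn as nn

class GatedRelevance(nn.Module):
    def __init__(self, dim, k=4):
        super().__init__()
        self.tensor = nn.Parameter(torch.randn(k, dim, dim) * 0.01)  # k bilinear slices
        self.linear = nn.Linear(2 * dim, k)
        self.gate = nn.Linear(2 * dim, k)
        self.score = nn.Linear(k, 1)

    def forward(self, x, y):                      # x, y: (batch, dim)
        pair = torch.cat([x, y], dim=-1)
        bilinear = torch.einsum('bi,kij,bj->bk', x, self.tensor, y)
        single = torch.tanh(self.linear(pair))
        g = torch.sigmoid(self.gate(pair))
        return self.score(g * bilinear + (1 - g) * single).squeeze(-1)

grn = GatedRelevance(dim=50)
print(grn(torch.randn(3, 50), torch.randn(3, 50)).shape)  # torch.Size([3])
```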


Empirical Methods in Natural Language Processing | 2016

Modelling Interaction of Sentence Pair with Coupled-LSTMs.

Pengfei Liu; Xipeng Qiu; Yaqian Zhou; Jifan Chen; Xuanjing Huang

Recently, there has been rising interest in modelling the interactions of two sentences with deep neural networks. However, most existing methods encode the two sequences with separate encoders, in which a sentence is encoded with little or no information from the other sentence. In this paper, we propose a deep architecture to model the strong interaction of a sentence pair with two coupled LSTMs. Specifically, we introduce two coupled ways to model the interdependences of the two LSTMs, coupling the local contextualized interactions of the two sentences. We then aggregate these interactions and use dynamic pooling to select the most informative features. Experiments on two very large datasets demonstrate the efficacy of our proposed architecture and its superiority to state-of-the-art methods.
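
Since the coupling itself resembles the interdependent LSTMs sketched above, the snippet below only illustrates the aggregation step: a position-by-position interaction grid reduced to a fixed-size feature with dynamic (adaptive) max pooling. The plain dot-product grid is a stand-in for the coupled-LSTM states, and the output size is an assumption.

```python
# Dynamic pooling over a pairwise interaction grid: sentence pairs of any
# lengths are mapped to a fixed-size feature vector.
import torch
import torch.nn.functional as F

def dynamic_pool(states_a, states_b, out_size=(4, 4)):
    # states_*: (batch, len_*, hidden) per-position encoder outputs
    grid = torch.einsum('bih,bjh->bij', states_a, states_b)      # (batch, la, lb)
    pooled = F.adaptive_max_pool2d(grid.unsqueeze(1), out_size)  # (batch, 1, 4, 4)
    return pooled.flatten(1)                                     # (batch, 16)

feat = dynamic_pool(torch.randn(2, 9, 32), torch.randn(2, 13, 32))
print(feat.shape)  # torch.Size([2, 16])
```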


Empirical Methods in Natural Language Processing | 2016

Deep Multi-Task Learning with Shared Memory for Text Classification

Pengfei Liu; Xipeng Qiu; Xuanjing Huang

Neural network based models have achieved impressive results on various specific tasks. However, in previous work, most models are learned separately with single-task supervised objectives, which often suffer from insufficient training data. In this paper, we propose two deep architectures that can be trained jointly on multiple related tasks. More specifically, we augment the neural model with an external memory, which is shared by several tasks. Experiments on two groups of text classification tasks show that our proposed architectures can improve the performance of a task with the help of other related tasks.
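
A minimal sketch of several task-specific encoders reading from one shared external memory via attention; the read-only memory, slot count, and classifier heads are simplifying assumptions rather than the paper's exact design.

```python
# Multi-task text classification with a shared external memory: each task's
# encoder attends over the same memory matrix and concatenates the read vector
# with its own sentence representation.
import torch
import torch.nn as nn

class SharedMemoryMTL(nn.Module):
    def __init__(self, emb_dim, hidden, num_tasks, num_classes, mem_slots=32):
        super().__init__()
        self.memory = nn.Parameter(torch.randn(mem_slots, hidden) * 0.01)  # shared
        self.encoders = nn.ModuleList(
            nn.LSTM(emb_dim, hidden, batch_first=True) for _ in range(num_tasks))
        self.heads = nn.ModuleList(
            nn.Linear(2 * hidden, num_classes) for _ in range(num_tasks))

    def forward(self, emb, task_id):
        h = self.encoders[task_id](emb)[0][:, -1]          # (batch, hidden)
        attn = torch.softmax(h @ self.memory.t(), dim=-1)  # read weights over slots
        read = attn @ self.memory                          # (batch, hidden)
        return self.heads[task_id](torch.cat([h, read], dim=-1))

model = SharedMemoryMTL(50, 64, num_tasks=2, num_classes=2)
print(model(torch.randn(3, 7, 50), task_id=1).shape)  # torch.Size([3, 2])
```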


International Joint Conference on Artificial Intelligence | 2017

Dynamic Compositional Neural Networks over Tree Structure

Pengfei Liu; Xipeng Qiu; Xuanjing Huang

Tree-structured neural networks have proven to be effective in learning semantic representations by exploiting syntactic information. In spite of their success, most existing models suffer from an underfitting problem: they recursively use the same shared compositional function throughout the whole compositional process, and they lack the expressive power to capture the richness of compositionality. In this paper, we address this issue by introducing dynamic compositional neural networks over tree structure (DC-TreeNN), in which the compositional function is dynamically generated by a meta network. The role of the meta network is to capture the meta-knowledge across the different compositional rules and formulate them. Experimental results on two typical tasks show the effectiveness of the proposed models.
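
The meta-network idea can be sketched as a small hyper-network that generates the composition weights from the two child vectors, so each tree node composes with its own dynamically produced function; the direct weight generation and tiny dimensions here are illustrative assumptions.

```python
# Dynamic composition at a tree node: a meta network maps the children to the
# flattened weights of the composition layer, which is then applied to them.
import torch
import torch.nn as nn

class DynamicComposition(nn.Module):
    def __init__(self, dim=16):
        super().__init__()
        self.dim = dim
        self.meta = nn.Linear(2 * dim, dim * 2 * dim)  # children -> composition weights

    def forward(self, left, right):                    # each: (batch, dim)
        children = torch.cat([left, right], dim=-1)    # (batch, 2*dim)
        W = self.meta(children).view(-1, self.dim, 2 * self.dim)  # per-example weights
        parent = torch.tanh(torch.bmm(W, children.unsqueeze(-1))).squeeze(-1)
        return parent                                  # (batch, dim)

node = DynamicComposition(dim=16)
print(node(torch.randn(4, 16), torch.randn(4, 16)).shape)  # torch.Size([4, 16])
```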


International Joint Conference on Artificial Intelligence | 2017

Adaptive Semantic Compositionality for Sentence Modelling

Pengfei Liu; Xipeng Qiu; Xuanjing Huang

Representing a sentence with a fixed vector has shown its effectiveness in various NLP tasks. Most existing methods are based on neural networks, which recursively apply different composition functions to a sequence of word vectors, thereby obtaining a sentence vector. A hypothesis behind these approaches is that the meaning of any phrase can be composed of the meanings of its constituents. However, many phrases, such as idioms, are apparently non-compositional. To address this problem, we introduce a parameterized compositional switch, which outputs a scalar to adaptively determine whether the meaning of a phrase should be composed of its two constituents. We evaluate our model on five sentiment classification datasets and demonstrate its efficacy with qualitative and quantitative experimental analysis.
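
A sketch of the compositional switch: a scalar gate decides how much of a phrase's meaning comes from composing its constituents versus a holistically learned phrase vector (useful for idioms); the phrase-embedding lookup and gate parameterisation are assumptions, not the paper's exact formulation.

```python
# Adaptive compositionality: blend a compositional representation of the two
# constituents with a non-compositional (holistic) phrase embedding via a
# learned scalar switch.
import torch
import torch.nn as nn

class AdaptiveComposition(nn.Module):
    def __init__(self, dim, num_phrases):
        super().__init__()
        self.compose = nn.Linear(2 * dim, dim)             # compositional path
        self.phrase_emb = nn.Embedding(num_phrases, dim)   # non-compositional path
        self.switch = nn.Linear(2 * dim, 1)

    def forward(self, left, right, phrase_id):
        pair = torch.cat([left, right], dim=-1)
        g = torch.sigmoid(self.switch(pair))                # (batch, 1) scalar switch
        composed = torch.tanh(self.compose(pair))
        holistic = self.phrase_emb(phrase_id)
        return g * composed + (1 - g) * holistic

m = AdaptiveComposition(dim=32, num_phrases=1000)
out = m(torch.randn(2, 32), torch.randn(2, 32), torch.tensor([5, 42]))
print(out.shape)  # torch.Size([2, 32])
```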


International Conference on Artificial Intelligence | 2015

Learning context-sensitive word embeddings with neural tensor skip-gram model

Pengfei Liu; Xipeng Qiu; Xuanjing Huang
