Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Xipeng Qiu is active.

Publication


Featured research published by Xipeng Qiu.


Empirical Methods in Natural Language Processing | 2015

Long Short-Term Memory Neural Networks for Chinese Word Segmentation

Xinchi Chen; Xipeng Qiu; Chenxi Zhu; Pengfei Liu; Xuanjing Huang

Currently, most state-of-the-art methods for Chinese word segmentation are based on supervised learning, with features mostly extracted from a local context. These methods cannot utilize long-distance information, which is also crucial for word segmentation. In this paper, we propose a novel neural network model for Chinese word segmentation, which adopts the long short-term memory (LSTM) neural network to keep previous important information in a memory cell and avoids the window-size limit of the local context. Experiments on the PKU, MSRA, and CTB6 benchmark datasets show that our model outperforms previous neural network models and state-of-the-art methods.
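To make the segmentation idea concrete, here is a minimal character-level LSTM tagger in PyTorch, assuming the common BMES tagging scheme; the layer sizes are illustrative and not taken from the paper.

import torch
import torch.nn as nn

class LSTMSegmenter(nn.Module):
    # Character-level LSTM tagger for Chinese word segmentation.
    # Each character receives one of four tags: B(egin), M(iddle), E(nd), S(ingle).
    def __init__(self, vocab_size, embed_dim=100, hidden_dim=150, num_tags=4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, num_tags)

    def forward(self, char_ids):
        # char_ids: (batch, seq_len) -> tag scores: (batch, seq_len, num_tags)
        h, _ = self.lstm(self.embed(char_ids))
        return self.out(h)

Because the LSTM state carries information across the whole sentence, the tagger is not confined to a fixed local window, which is the point the abstract makes.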


International Conference on Computer Vision | 2005

Face recognition by stepwise nonparametric margin maximum criterion

Xipeng Qiu; Lide Wu

Linear discriminant analysis (LDA) is a popular feature extraction technique in face recognition. However, it often suffers from the small sample size problem when dealing with high-dimensional data. Moreover, while LDA is guaranteed to find the best directions when each class has a Gaussian density with a common covariance matrix, it can fail if the class densities are more general. In this paper, a new nonparametric linear feature extraction method, the stepwise nonparametric margin maximum criterion (SNMMC), is proposed to find the most discriminant directions; it does not assume that the class densities belong to any particular parametric family, nor does it depend on the nonsingularity of the within-class scatter matrix. On three datasets from the AT&T and FERET face databases, our experimental results demonstrate that SNMMC outperforms other methods and is robust to variations in pose, illumination, and expression.
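The nonparametric margin such a criterion maximizes can be illustrated with a short NumPy sketch: for each sample, the distance to its nearest neighbor from another class minus the distance to its nearest neighbor from its own class. This shows only the underlying quantity, not the paper's stepwise optimization procedure.

import numpy as np

def nonparametric_margins(X, y):
    # X: (n, d) samples, y: (n,) integer labels; assumes every class
    # has at least two samples. A larger margin means the sample is
    # safer under nearest-neighbor classification.
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(D, np.inf)  # a point is not its own neighbor
    margins = np.empty(len(X))
    for i in range(len(X)):
        same = (y == y[i])
        same[i] = False  # exclude the point itself from its class
        margins[i] = D[i, ~same].min() - D[i, same].min()
    return margins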


International Joint Conference on Natural Language Processing | 2015

Gated Recursive Neural Network for Chinese Word Segmentation

Xinchi Chen; Xipeng Qiu; Chenxi Zhu; Xuanjing Huang

Recently, neural network models for natural language processing tasks have received increasing attention for their ability to alleviate the burden of manual feature engineering. However, previous neural models cannot extract the complicated feature compositions that traditional methods with discrete features can. In this paper, we propose a gated recursive neural network (GRNN) for Chinese word segmentation, which contains reset and update gates to incorporate the complicated combinations of context characters. Since the GRNN is relatively deep, we also use a supervised layer-wise training method to avoid the problem of gradient diffusion. Experiments on benchmark datasets show that our model outperforms previous neural network models as well as state-of-the-art methods.
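A rough PyTorch sketch of such a gated composition cell: two child vectors are merged into a parent through reset gates on the children and a softmax update gate that mixes the new candidate with both children. The gate layout and dimensions are illustrative assumptions, not the paper's exact formulation.

import torch
import torch.nn as nn

class GatedBinaryCell(nn.Module):
    # Combines two child vectors into one parent vector with
    # reset/update gating, GRU-style, at a binary tree node.
    def __init__(self, dim):
        super().__init__()
        self.dim = dim
        self.reset = nn.Linear(2 * dim, 2 * dim)
        self.update = nn.Linear(2 * dim, 3 * dim)
        self.candidate = nn.Linear(2 * dim, dim)

    def forward(self, left, right):  # each: (batch, dim)
        children = torch.cat([left, right], dim=-1)
        r = torch.sigmoid(self.reset(children))            # per-child reset gates
        h_hat = torch.tanh(self.candidate(r * children))   # gated candidate
        z = torch.softmax(self.update(children).view(-1, 3, self.dim), dim=1)
        stacked = torch.stack([h_hat, left, right], dim=1)
        return (z * stacked).sum(dim=1)                    # convex mix of the three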


Empirical Methods in Natural Language Processing | 2015

Multi-Timescale Long Short-Term Memory Neural Network for Modelling Sentences and Documents

Pengfei Liu; Xipeng Qiu; Xinchi Chen; Shiyu Wu; Xuanjing Huang

Neural-network-based methods have made great progress on a variety of natural language processing tasks. However, it is still challenging to model long texts, such as sentences and documents. In this paper, we propose a multi-timescale long short-term memory (MT-LSTM) neural network to model long texts. MT-LSTM partitions the hidden states of the standard LSTM into several groups, and each group is activated at a different time period. Thus, MT-LSTM can model very long documents as well as short sentences. Experiments on four benchmark datasets show that our model outperforms other neural models on text classification tasks.
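The grouping idea can be sketched as follows, assuming for illustration that group g updates every 2**g steps, and using a plain RNN cell instead of an LSTM to keep the sketch short; the paper's actual activation schedule and cell may differ.

import torch
import torch.nn as nn

class MultiTimescaleRNN(nn.Module):
    # Hidden units are split into G groups; group g is refreshed only
    # every 2**g steps, so higher groups change on slower timescales.
    def __init__(self, input_dim, hidden_dim, groups=3):
        super().__init__()
        assert hidden_dim % groups == 0
        self.cell = nn.RNNCell(input_dim, hidden_dim)
        self.groups, self.gsize = groups, hidden_dim // groups
        self.hidden_dim = hidden_dim

    def forward(self, x):  # x: (seq_len, batch, input_dim)
        h = x.new_zeros(x.size(1), self.hidden_dim)
        for t, x_t in enumerate(x):
            h_new = self.cell(x_t, h)
            mask = torch.zeros_like(h)
            for g in range(self.groups):
                if t % (2 ** g) == 0:
                    mask[:, g * self.gsize:(g + 1) * self.gsize] = 1.0
            h = mask * h_new + (1 - mask) * h  # unselected groups copy through
        return h

Slow groups retain a coarse summary over long spans while fast groups track local context, which is how such a model can handle both short sentences and long documents.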


International Joint Conference on Natural Language Processing | 2015

A Re-ranking Model for Dependency Parser with Recursive Convolutional Neural Network

Chenxi Zhu; Xipeng Qiu; Xinchi Chen; Xuanjing Huang

In this work, we address the problem of modeling all the nodes (words or phrases) in a dependency tree with dense representations. We propose a recursive convolutional neural network (RCNN) architecture to capture syntactic and compositional-semantic representations of phrases and words in a dependency tree. Unlike the original recursive neural network, we introduce convolution and pooling layers, which can model a variety of compositions via the feature maps and choose the most informative compositions via the pooling layers. Based on RCNN, we use a discriminative model to re-rank a k-best list of candidate dependency parsing trees. The experiments show that RCNN is very effective in improving the state-of-the-art dependency parsing on both English and Chinese datasets.
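A minimal sketch of the convolution-and-pooling composition over a head word and its dependents, in PyTorch; the distance and ordering features the paper uses are omitted here.

import torch
import torch.nn as nn

class TreeConvPool(nn.Module):
    # Convolve each (head, child) pair, then max-pool over the children
    # to obtain a fixed-size representation of the subtree.
    def __init__(self, dim):
        super().__init__()
        self.conv = nn.Linear(2 * dim, dim)

    def forward(self, head, children):
        # head: (dim,); children: (num_children, dim)
        pairs = torch.cat([head.expand_as(children), children], dim=-1)
        feature_maps = torch.tanh(self.conv(pairs))  # (num_children, dim)
        return feature_maps.max(dim=0).values        # keep the most informative composition

A reranker would apply such cells bottom-up over each candidate tree in the k-best list and score the resulting root representations.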


Meeting of the Association for Computational Linguistics | 2017

Adversarial Multi-task Learning for Text Classification

Pengfei Liu; Xipeng Qiu; Xuanjing Huang

Neural network models have shown promise for multi-task learning, which focuses on learning shared layers to extract common, task-invariant features. However, in most existing approaches, the extracted shared features are prone to be contaminated by task-specific features or by noise brought by other tasks. In this paper, we propose an adversarial multi-task learning framework that prevents the shared and private latent feature spaces from interfering with each other. We conduct extensive experiments on 16 different text classification tasks, which demonstrate the benefits of our approach. Besides, we show that the shared knowledge learned by our proposed model can be regarded as off-the-shelf knowledge and easily transferred to new tasks. The datasets of all 16 tasks are publicly available.
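The adversarial ingredient can be sketched with a gradient reversal layer: the shared encoder feeds a task discriminator whose gradients are flipped, pushing the shared space to be task-invariant. The encoder choice and layer sizes here are illustrative assumptions, and the paper's additional regularizers are omitted.

import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    # Identity on the forward pass; negates gradients on the backward
    # pass, so the encoder learns to fool the task discriminator.
    @staticmethod
    def forward(ctx, x):
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad):
        return -grad

shared_encoder = nn.LSTM(100, 64, batch_first=True)  # shared across all tasks
task_discriminator = nn.Linear(64, 16)               # predicts which of 16 tasks

def adversarial_logits(text_embeds):  # text_embeds: (batch, seq_len, 100)
    _, (h, _) = shared_encoder(text_embeds)
    return task_discriminator(GradReverse.apply(h[-1]))

Training the discriminator on these logits with cross-entropy trains it to identify the task while, through the reversed gradients, training the shared encoder to hide task identity.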


Meeting of the Association for Computational Linguistics | 2016

Deep Fusion LSTMs for Text Semantic Matching

Pengfei Liu; Xipeng Qiu; Jifan Chen; Xuanjing Huang

Recently, there has been rising interest in modeling the interactions of a text pair with deep neural networks. In this paper, we propose deep fusion LSTMs (DF-LSTMs) to model the strong interaction of a text pair in a recursive matching way. Specifically, DF-LSTMs consist of two interdependent LSTMs, each of which models a sequence under the influence of the other. We also use external memory to increase the capacity of the LSTMs, thereby possibly capturing more complicated matching patterns. Experiments on two very large datasets demonstrate the efficacy of our proposed architecture. Furthermore, we present an elaborate qualitative analysis of our models, giving an intuitive understanding of how our model works.
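A much-simplified coupling sketch, assuming equal-length inputs and omitting the external memory and the recursive matching details: two LSTM cells read the pair in lock-step, each conditioned on the other's latest hidden state.

import torch
import torch.nn as nn

class CoupledLSTMs(nn.Module):
    # Two interdependent LSTM cells: at each step, each cell's input is
    # its own token concatenated with the other sequence's hidden state.
    def __init__(self, dim):
        super().__init__()
        self.dim = dim
        self.cell_a = nn.LSTMCell(2 * dim, dim)
        self.cell_b = nn.LSTMCell(2 * dim, dim)

    def forward(self, xa, xb):  # each: (seq_len, batch, dim), equal lengths
        batch = xa.size(1)
        ha = ca = xa.new_zeros(batch, self.dim)
        hb = cb = xa.new_zeros(batch, self.dim)
        for t in range(xa.size(0)):
            ha, ca = self.cell_a(torch.cat([xa[t], hb], dim=-1), (ha, ca))
            hb, cb = self.cell_b(torch.cat([xb[t], ha], dim=-1), (hb, cb))
        return ha, hb  # final states summarize each text under the other's influence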


International Journal of Pattern Recognition and Artificial Intelligence | 2006

Nearest Neighbor Discriminant Analysis

Xipeng Qiu; Lide Wu

Linear discriminant analysis (LDA) is a popular feature extraction technique in statistical pattern recognition. However, it often suffers from the small sample size problem when dealing with high-dimensional data. Moreover, while LDA is guaranteed to find the best directions when each class has a Gaussian density with a common covariance matrix, it can fail if the class densities are more general. In this paper, a novel nonparametric linear feature extraction method, nearest neighbor discriminant analysis (NNDA), is proposed from the viewpoint of nearest neighbor classification. NNDA finds the important discriminant directions without assuming that the class densities belong to any particular parametric family, and it does not depend on the nonsingularity of the within-class scatter matrix either. We then give an approximate approach to optimizing NNDA and an extension to k-NN. We apply NNDA to simulated and real-world data; the results demonstrate that NNDA outperforms the existing variant LDA methods.


Meeting of the Association for Computational Linguistics | 2017

Adversarial Multi-Criteria Learning for Chinese Word Segmentation

Xinchi Chen; Zhan Shi; Xipeng Qiu; Xuanjing Huang



Empirical Methods in Natural Language Processing | 2015

Sentence Modeling with Gated Recursive Neural Network

Xinchi Chen; Xipeng Qiu; Chenxi Zhu; Shiyu Wu; Xuanjing Huang


Collaboration


Dive into Xipeng Qiu's collaborations.
