Network


Latest external collaborations at the country level.

Hotspot


Research topics where Chengjie Sun is active.

Publication


Featured research published by Chengjie Sun.


International Joint Conference on Natural Language Processing | 2015

Predicting Polarities of Tweets by Composing Word Embeddings with Long Short-Term Memory

Xin Wang; Yuanchao Liu; Chengjie Sun; Baoxun Wang; Xiaolong Wang

In this paper, we introduce a Long Short-Term Memory (LSTM) recurrent network for Twitter sentiment prediction. With the help of gates and constant error carousels in the memory block structure, the model can handle interactions between words through a flexible compositional function. Experiments on a public noisy-labelled dataset show that our model outperforms several feature-engineering approaches, with results comparable to the current best data-driven technique. In an evaluation on a generated negation-phrase test set, the proposed architecture doubles the performance of a non-neural model based on bag-of-words features. Furthermore, words with special functions (such as negation and transition) are distinguished, and the dissimilarities between words with opposite sentiment are magnified. An interesting case study on negation expression processing shows the architecture's promising potential for handling complex sentiment phrases.
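
As a rough illustration of the setup described above, here is a minimal PyTorch sketch of an LSTM that composes word embeddings into a tweet representation for polarity classification. The layer sizes, pooling choice (last hidden state), and vocabulary handling are illustrative assumptions, not the authors' configuration.

```python
import torch
import torch.nn as nn

class LSTMPolarityClassifier(nn.Module):
    """Compose word embeddings with an LSTM and classify tweet polarity."""
    def __init__(self, vocab_size, embed_dim=100, hidden_dim=128, num_classes=2):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.classifier = nn.Linear(hidden_dim, num_classes)

    def forward(self, token_ids):
        # token_ids: (batch, seq_len) integer word indices
        embedded = self.embedding(token_ids)            # (batch, seq_len, embed_dim)
        _, (last_hidden, _) = self.lstm(embedded)       # last_hidden: (1, batch, hidden_dim)
        return self.classifier(last_hidden.squeeze(0))  # (batch, num_classes) polarity logits

# Illustrative usage with a toy batch of padded token-id sequences.
model = LSTMPolarityClassifier(vocab_size=10000)
logits = model(torch.randint(1, 10000, (4, 20)))
print(logits.shape)  # torch.Size([4, 2])
```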


ACM Transactions on Asian Language Information Processing | 2011

Deep Learning Approaches to Semantic Relevance Modeling for Chinese Question-Answer Pairs

Baoxun Wang; Bingquan Liu; Xiaolong Wang; Chengjie Sun; Deyuan Zhang

Human-generated question-answer pairs in Web social communities are of great value for research on automatic question-answering techniques. Due to the large amount of noise in such corpora, detecting the answers remains a problem even when the questions are precisely located. Quantifying the semantic relevance between questions and their candidate answers is essential to answer detection in social media corpora. Since both the questions and their answers usually contain only a small number of sentences, relevance modeling methods have to overcome the problem of word-feature sparsity. In this article, the deep learning principle is introduced to address the semantic relevance modeling task. We propose two deep belief networks with different architectures to model the semantic relevance of question-answer pairs. Based on an investigation of the textual similarity between the community-driven question-answering (cQA) dataset and the forum dataset, a learning strategy is adopted to improve our models' performance on the social community corpora without hand-annotation. The experimental results show that our method outperforms traditional approaches on both the cQA and the forum corpora.
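
The deep belief networks mentioned above are stacks of restricted Boltzmann machines (RBMs) pretrained layer by layer. Below is a compact NumPy sketch of a binary RBM trained with one-step contrastive divergence (CD-1) on toy bag-of-words vectors; the dimensions, learning rate, and input encoding are assumptions for illustration, not the paper's architecture.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RBM:
    """Binary-binary restricted Boltzmann machine trained with CD-1,
    the building block stacked layer by layer in a deep belief network."""
    def __init__(self, n_visible, n_hidden, lr=0.05, seed=0):
        rng = np.random.default_rng(seed)
        self.W = 0.01 * rng.standard_normal((n_visible, n_hidden))
        self.b_v = np.zeros(n_visible)
        self.b_h = np.zeros(n_hidden)
        self.lr = lr
        self.rng = rng

    def hidden_probs(self, v):
        return sigmoid(v @ self.W + self.b_h)

    def visible_probs(self, h):
        return sigmoid(h @ self.W.T + self.b_v)

    def cd1_step(self, v0):
        # Positive phase: hidden activations driven by the data.
        h0 = self.hidden_probs(v0)
        h0_sample = (self.rng.random(h0.shape) < h0).astype(float)
        # Negative phase: one step of Gibbs sampling (reconstruction).
        v1 = self.visible_probs(h0_sample)
        h1 = self.hidden_probs(v1)
        # Gradient approximation from the difference of correlations.
        self.W += self.lr * (v0.T @ h0 - v1.T @ h1) / v0.shape[0]
        self.b_v += self.lr * (v0 - v1).mean(axis=0)
        self.b_h += self.lr * (h0 - h1).mean(axis=0)

# Toy binary bag-of-words vectors standing in for question-answer pairs.
data = (np.random.default_rng(1).random((32, 200)) < 0.05).astype(float)
rbm = RBM(n_visible=200, n_hidden=64)
for _ in range(10):
    rbm.cd1_step(data)
```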


International Conference on Intelligent Computing | 2007

Using maximum entropy model to extract protein-protein interaction information from biomedical literature

Chengjie Sun; Lei Lin; Xiaolong Wang; Yi Guan

Protein-protein interaction (PPI) information plays a vital role in biological research. This work proposes a two-step machine learning method to extract PPI information from biomedical literature; both steps use a Maximum Entropy (ME) model. The first step estimates whether a sentence in an article contains PPI information. The second step judges whether each protein pair in a sentence interacts. The two steps are combined by adding the outputs of the first step to the model of the second step as features. Experiments show the method achieves a total accuracy of 81.9% on the BC-PPI corpus and that the outputs of the first step effectively boost the performance of PPI information extraction.
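
A hedged scikit-learn sketch of the two-step combination described above, using logistic regression as a stand-in maximum-entropy classifier: step one scores whether a sentence carries PPI information, and that score is appended as a feature to the protein-pair classifier in step two. The feature matrices are random placeholders; the paper's actual features are not reproduced.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Placeholder feature matrices; in practice these would be lexical/contextual
# features extracted from sentences and from candidate protein pairs.
X_sentences = rng.random((500, 20))          # step-1 features per sentence
y_sentences = rng.integers(0, 2, 500)        # does the sentence contain PPI info?
X_pairs = rng.random((500, 30))              # step-2 features per protein pair
y_pairs = rng.integers(0, 2, 500)            # does this pair interact?

# Step 1: maximum-entropy (logistic regression) sentence-level classifier.
step1 = LogisticRegression(max_iter=1000).fit(X_sentences, y_sentences)

# Combine the steps: add step-1 output probabilities as an extra feature
# for the pair-level classifier, as the abstract describes.
step1_scores = step1.predict_proba(X_sentences)[:, 1].reshape(-1, 1)
X_pairs_combined = np.hstack([X_pairs, step1_scores])

# Step 2: pair-level maximum-entropy classifier with the injected feature.
step2 = LogisticRegression(max_iter=1000).fit(X_pairs_combined, y_pairs)
print(step2.predict(X_pairs_combined[:5]))
```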


Entropy | 2015

Deep Belief Network-Based Approaches for Link Prediction in Signed Social Networks

Feng Liu; Bingquan Liu; Chengjie Sun; Ming Liu; Xiaolong Wang

In some online social network services (SNSs), members are allowed to label their relationships with others, and such relationships can be represented as links with signed values (positive or negative). Networks containing such relations are called signed social networks (SSNs), and some real-world complex systems can also be modeled with SSNs. Given the observed structure of an SSN, link prediction aims to estimate the values of the unobserved links. Most previous approaches to link prediction are based on member similarity and supervised learning, while the hidden principles that drive the behaviors of social members have rarely been investigated. In this paper, deep belief network (DBN)-based approaches for link prediction are proposed, including an unsupervised link prediction model, a feature representation method, and a DBN-based link prediction method. Experiments are conducted on datasets from three SNSs in different domains, and the results show that our methods can predict the values of links with high performance and generalize well across these datasets.
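
As a sketch of what a feature representation for a candidate link might look like, the snippet below builds a small vector of signed-degree and common-neighbor counts for a node pair in a toy signed network; these generic structural features and the toy data are assumptions, not the paper's exact feature set.

```python
from collections import defaultdict

# Toy signed edge list: (source, target, sign), sign in {+1, -1}.
edges = [("a", "b", 1), ("b", "c", -1), ("a", "c", 1), ("c", "d", -1), ("d", "a", 1)]

out_pos, out_neg, in_pos, in_neg = (defaultdict(int) for _ in range(4))
neighbors = defaultdict(set)
for u, v, s in edges:
    (out_pos if s > 0 else out_neg)[u] += 1
    (in_pos if s > 0 else in_neg)[v] += 1
    neighbors[u].add(v)
    neighbors[v].add(u)

def link_features(u, v):
    """Structural feature vector for a candidate link (u, v): signed degrees of
    both endpoints plus the number of common neighbors. These are generic
    signed-network features, not the paper's exact representation."""
    common = len(neighbors[u] & neighbors[v])
    return [out_pos[u], out_neg[u], in_pos[v], in_neg[v], common]

# Such vectors would be fed to an unsupervised model (e.g. a stacked-RBM DBN,
# as sketched earlier) or a supervised classifier to predict the link's sign.
print(link_features("a", "d"))
```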


Electronic Commerce Research and Applications | 2016

Predicting ad click-through rates via feature-based fully coupled interaction tensor factorization

Lili Shan; Lei Lin; Chengjie Sun; Xiaolong Wang

Highlights: We address CTR prediction with a novel tensor factorization model named FCTF. We incorporate all types of information into the FCTF model to relieve data sparsity. We describe the relationship between FCTF and the PITF, Tucker, and CD decompositions. We evaluate our model and algorithm on real-world bidding log data.

In the real-time bidding (RTB) display advertising ecosystem, when receiving a bid request, the demand-side platform (DSP) needs to predict the click-through rate (CTR) for ads and calculate the bid price according to the estimated CTR. In addition to challenges similar to those encountered in sponsored search advertising, such as data sparsity and cold-start problems, more complicated feature interactions involving multiple aspects, such as the user, publisher, and advertiser, make CTR estimation in RTB more difficult. We treat CTR estimation in RTB as a tensor completion problem and propose a fully coupled interactions tensor factorization (FCTF) model based on Tucker decomposition (TD) to model the three pairwise interactions between the user, publisher, and advertiser and ultimately complete the tensor completion task. FCTF is a special case of the Tucker decomposition model, yet it is linear in runtime for both learning and prediction. Different from pairwise interaction tensor factorization (PITF), which is another special case of TD, FCTF is independent of the Bayesian personalized ranking optimization algorithm and is applicable to generic third-order tensor decomposition with popular simple optimizations, such as the least squares method or mean squared error. In addition, we incorporate all explicit information obtained from different aspects into the FCTF model to alleviate the impact of cold start and sparse data on the final performance. We compare the performance and runtime complexity of our method with Tucker decomposition, canonical decomposition, and other popular methods for CTR prediction on real-world advertising datasets. Our experimental results demonstrate that the improved model not only achieves better prediction quality than the others, owing to its fully coupled interactions between the three entities (user, publisher, and advertiser), but also accomplishes training and prediction in linear runtime.
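
To make the "three pairwise interactions" idea concrete, here is an illustrative NumPy sketch that scores a (user, publisher, advertiser) triple as the sum of the three pairwise inner products of latent factors and fits them by stochastic gradient descent on squared error. It is a simplified stand-in for FCTF, not the paper's model or optimization procedure.

```python
import numpy as np

class PairwiseCoupledTF:
    """Score a (user, publisher, advertiser) triple as the sum of the three
    pairwise inner products of latent factors, trained by SGD on squared error."""
    def __init__(self, n_users, n_pubs, n_ads, k=8, lr=0.05, reg=0.01, seed=0):
        rng = np.random.default_rng(seed)
        self.U = 0.1 * rng.standard_normal((n_users, k))
        self.P = 0.1 * rng.standard_normal((n_pubs, k))
        self.A = 0.1 * rng.standard_normal((n_ads, k))
        self.lr, self.reg = lr, reg

    def predict(self, u, p, a):
        # user-publisher + publisher-advertiser + user-advertiser interactions
        return self.U[u] @ self.P[p] + self.P[p] @ self.A[a] + self.U[u] @ self.A[a]

    def sgd_step(self, u, p, a, clicked):
        err = self.predict(u, p, a) - clicked
        gu = err * (self.P[p] + self.A[a]) + self.reg * self.U[u]
        gp = err * (self.U[u] + self.A[a]) + self.reg * self.P[p]
        ga = err * (self.P[p] + self.U[u]) + self.reg * self.A[a]
        self.U[u] -= self.lr * gu
        self.P[p] -= self.lr * gp
        self.A[a] -= self.lr * ga

# Toy impression log: (user, publisher, advertiser, clicked).
log = [(0, 1, 2, 1), (0, 2, 1, 0), (1, 0, 2, 0), (2, 1, 0, 1)]
model = PairwiseCoupledTF(n_users=3, n_pubs=3, n_ads=3)
for _ in range(100):
    for u, p, a, y in log:
        model.sgd_step(u, p, a, y)
print(round(model.predict(0, 1, 2), 3))
```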


International Symposium on Neural Networks | 2017

Incorporating loose-structured knowledge into conversation modeling via recall-gate LSTM

Zhen Xu; Bingquan Liu; Baoxun Wang; Chengjie Sun; Xiaolong Wang

It is critical for automatic chat-bots to gain the ability to comprehend conversations, which is essential for providing context-aware responses and conducting smooth dialogues with human beings. As the basis of this task, conversation modeling benefits notably from background knowledge, since such knowledge provides semantic hints that help further clarify the relationships between sentences within a conversation. In this paper, a deep neural network is proposed to incorporate background knowledge for conversation modeling. Through a recall mechanism with a specially designed recall gate, background knowledge serving as global memory can cooperate with the local cell memory of Long Short-Term Memory (LSTM), enriching LSTM's ability to capture the implicit semantic clues in conversations. In addition, this paper introduces loose-structured domain knowledge as the background knowledge, which can be built with a small amount of manual work and easily adopted by the recall gate. Our model is evaluated on the context-oriented response selection task, and experimental results on two datasets show that our approach is promising for modeling conversations and building key components of automatic chat systems.
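
A minimal PyTorch sketch of one way a recall gate could let a global knowledge-memory vector flow into an LSTM cell state alongside the usual gates; the gate wiring, projections, and dimensions below are plausible assumptions, not the paper's published equations.

```python
import torch
import torch.nn as nn

class RecallGateLSTMCell(nn.Module):
    """LSTM cell with an extra recall gate that lets a global knowledge memory
    vector flow into the cell state alongside the usual input/forget/output gates."""
    def __init__(self, input_dim, hidden_dim, memory_dim):
        super().__init__()
        gate_in = input_dim + hidden_dim
        self.i_gate = nn.Linear(gate_in, hidden_dim)   # input gate
        self.f_gate = nn.Linear(gate_in, hidden_dim)   # forget gate
        self.o_gate = nn.Linear(gate_in, hidden_dim)   # output gate
        self.g_cand = nn.Linear(gate_in, hidden_dim)   # candidate cell update
        self.r_gate = nn.Linear(gate_in + memory_dim, hidden_dim)  # recall gate
        self.m_proj = nn.Linear(memory_dim, hidden_dim)            # memory content

    def forward(self, x, h, c, global_memory):
        z = torch.cat([x, h], dim=-1)
        i = torch.sigmoid(self.i_gate(z))
        f = torch.sigmoid(self.f_gate(z))
        o = torch.sigmoid(self.o_gate(z))
        g = torch.tanh(self.g_cand(z))
        # The recall gate decides how much background knowledge enters the cell.
        r = torch.sigmoid(self.r_gate(torch.cat([z, global_memory], dim=-1)))
        c_new = f * c + i * g + r * torch.tanh(self.m_proj(global_memory))
        h_new = o * torch.tanh(c_new)
        return h_new, c_new

# Toy step: batch of 4, word vectors of size 50, hidden 64, knowledge memory 32.
cell = RecallGateLSTMCell(50, 64, 32)
x = torch.randn(4, 50)
h = c = torch.zeros(4, 64)
memory = torch.randn(4, 32)
h, c = cell(x, h, c, memory)
print(h.shape)  # torch.Size([4, 64])
```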


Fuzzy Systems and Knowledge Discovery | 2006

Biomedical named entities recognition using conditional random fields model

Chengjie Sun; Yi Guan; Xiaolong Wang; Lei Lin

Biomedical named entity recognition is a critical task for automatically mining knowledge from biomedical literature. In this paper, we introduce the Conditional Random Fields model to recognize biomedical named entities in biomedical literature. Rich features, including literal, contextual, and semantic features, are incorporated into the Conditional Random Fields model. Shallow syntactic features are introduced into the Conditional Random Fields model for the first time, allowing boundary detection and semantic labeling to be performed at the same time, which effectively improves the model's performance. Experiments show that our method achieves an F-measure of 71.2% on the JNLPBA test data, which is better than most state-of-the-art systems.
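
A small sketch of the kind of literal, contextual, and shallow-syntactic token features described above, fed to a linear-chain CRF. The sklearn-crfsuite toolkit and the particular feature set here are assumptions for illustration; the paper's feature templates are richer.

```python
import sklearn_crfsuite  # one common CRF toolkit; any CRF implementation would do

def token_features(sent, i):
    """Literal, contextual and shallow-syntactic features for token i.
    `sent` is a list of (word, pos, chunk) triples; this is an illustrative subset."""
    word, pos, chunk = sent[i]
    feats = {
        "word.lower": word.lower(),
        "word.isupper": word.isupper(),
        "word.isdigit": word.isdigit(),
        "suffix3": word[-3:],
        "pos": pos,
        "chunk": chunk,
    }
    if i > 0:                       # left context
        feats["-1:word.lower"] = sent[i - 1][0].lower()
        feats["-1:pos"] = sent[i - 1][1]
    if i < len(sent) - 1:           # right context
        feats["+1:word.lower"] = sent[i + 1][0].lower()
        feats["+1:pos"] = sent[i + 1][1]
    return feats

# One toy sentence with word/POS/chunk columns and BIO entity labels.
sent = [("IL-2", "NN", "B-NP"), ("gene", "NN", "I-NP"), ("expression", "NN", "I-NP")]
labels = ["B-DNA", "I-DNA", "O"]

X = [[token_features(sent, i) for i in range(len(sent))]]
y = [labels]
crf = sklearn_crfsuite.CRF(algorithm="lbfgs", max_iterations=50)
crf.fit(X, y)
print(crf.predict(X))
```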


Expert Systems With Applications | 2011

A language model approach for tag recommendation

Ke Sun; Xiaolong Wang; Chengjie Sun; Lei Lin

Tags are user-generated keywords for entities. Recently, tags have become a popular way to allow users to contribute metadata to large corpora on the web. However, tagging-style websites lack mechanisms for guaranteeing the quality of tags for other uses, such as collaboration/community, clustering, and search. As a remedy, automatic tag recommendation, which recommends a set of candidate tags for the user to choose from while tagging a given document, has recently drawn much attention. In this paper, we introduce statistical language model theory into the tag recommendation problem, yielding a language model for tag recommendation (LMTR), by converting tag recommendation into a ranking problem and modeling the correlation between tag and document within the language model framework. Furthermore, we leverage two different methods, based on keyword extraction and keyword expansion, to collect candidate tags before ranking with LMTR to improve its performance. Experiments on large-scale tagging datasets of both scientific and web documents indicate that our proposals can make tag recommendations efficiently and effectively.
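
A toy pure-Python sketch of the language-model ranking idea: each tag gets a unigram language model estimated from documents carrying that tag, and candidate tags are ranked by the smoothed log-likelihood of the new document under each model. The Jelinek-Mercer smoothing, the lambda value, and the toy data are assumptions, not the LMTR specifics.

```python
import math
from collections import Counter

# Toy training data: tag -> concatenated text of documents carrying that tag.
tag_docs = {
    "machine-learning": "neural network model training data learning features",
    "databases": "sql query index table transaction storage",
    "web": "html page browser link tag user content",
}

collection = Counter(" ".join(tag_docs.values()).split())
collection_total = sum(collection.values())
tag_models = {t: Counter(text.split()) for t, text in tag_docs.items()}

def score(tag, doc_tokens, lam=0.8):
    """Log likelihood of the document under the tag's unigram language model,
    Jelinek-Mercer smoothed with the collection model."""
    model = tag_models[tag]
    total = sum(model.values())
    s = 0.0
    for w in doc_tokens:
        p_tag = model[w] / total if total else 0.0
        p_coll = collection[w] / collection_total
        s += math.log(lam * p_tag + (1 - lam) * p_coll + 1e-12)
    return s

doc = "training a neural network model on user data".split()
ranking = sorted(tag_docs, key=lambda t: score(t, doc), reverse=True)
print(ranking)  # candidate tags ranked for recommendation
```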


Empirical Methods in Natural Language Processing | 2017

Neural Response Generation via GAN with an Approximate Embedding Layer

Zhen Xu; Bingquan Liu; Baoxun Wang; Chengjie Sun; Xiaolong Wang; Zhuoran Wang; Chao Qi

This paper presents a Generative Adversarial Network (GAN) to model single-turn short-text conversations, which trains a sequence-to-sequence (Seq2Seq) network for response generation simultaneously with a discriminative classifier that measures the differences between human-produced responses and machine-generated ones. In addition, the proposed method introduces an approximate embedding layer to solve the non-differentiable problem caused by the sampling-based output decoding procedure in the Seq2Seq generative model. The GAN setup provides an effective way to avoid noninformative responses (a.k.a “safe responses”), which are frequently observed in traditional neural response generators. The experimental results show that the proposed approach significantly outperforms existing neural response generation models in diversity metrics, with slight increases in relevance scores as well, when evaluated on both a Mandarin corpus and an English corpus.
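
A short PyTorch sketch of the approximate-embedding idea: instead of sampling a discrete word (which blocks gradients), the decoder's noisy softmax distribution is used to take a probability-weighted average of the word embeddings, so the generator stays differentiable for GAN training. The noise scale and temperature are illustrative assumptions.

```python
import torch
import torch.nn as nn

def approximate_embedding(logits, embedding, temperature=1.0):
    """Differentiable stand-in for sampling a word and looking up its embedding:
    perturb the decoder logits with Gaussian noise, softmax, and return the
    probability-weighted average of all word embeddings."""
    noisy = logits + torch.randn_like(logits)            # random perturbation
    probs = torch.softmax(noisy / temperature, dim=-1)   # (batch, vocab)
    return probs @ embedding.weight                      # (batch, embed_dim)

# Toy decoder step: vocabulary of 1000 words, 64-dimensional embeddings.
embedding = nn.Embedding(1000, 64)
logits = torch.randn(8, 1000, requires_grad=True)
fake_word_emb = approximate_embedding(logits, embedding)
fake_word_emb.sum().backward()    # gradients flow back through the "sampling"
print(logits.grad is not None)    # True: the generator can be trained via the GAN loss
```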


Entropy | 2017

LSTM-CRF for Drug-Named Entity Recognition

Donghuo Zeng; Chengjie Sun; Lei Lin; Bingquan Liu

Drug-Named Entity Recognition (DNER) for biomedical literature is a fundamental facilitator of information extraction. For this reason, the DDIExtraction2011 (DDI2011) and DDIExtraction2013 (DDI2013) challenges introduced a task aimed at recognizing drug names. State-of-the-art DNER approaches rely heavily on hand-engineered features and domain-specific knowledge, which are difficult to collect and define. We therefore offer an approach that automatically explores word- and character-level features: a recurrent neural network using bidirectional long short-term memory (LSTM) with Conditional Random Fields decoding (LSTM-CRF). Two kinds of word representations are used in this work: word embeddings, trained from a large amount of text, and character-based representations, which can capture orthographic features of words. Experimental results on the DDI2011 and DDI2013 datasets show the effectiveness of the proposed LSTM-CRF method; it outperforms the best system in the DDI2013 challenge.
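
A condensed PyTorch sketch of a BiLSTM-CRF tagger whose word representation concatenates a word embedding with a character-level BiLSTM encoding, in the spirit of the model described above. The third-party pytorch-crf package and all dimensions are tooling and sizing assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
from torchcrf import CRF  # third-party "pytorch-crf" package; a tooling assumption

class CharWordBiLSTMCRF(nn.Module):
    """BiLSTM-CRF tagger combining word embeddings with a character-level
    BiLSTM encoding that captures orthographic features."""
    def __init__(self, vocab_size, char_size, num_tags,
                 word_dim=100, char_dim=25, char_hidden=25, hidden=100):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, word_dim, padding_idx=0)
        self.char_emb = nn.Embedding(char_size, char_dim, padding_idx=0)
        self.char_lstm = nn.LSTM(char_dim, char_hidden, bidirectional=True, batch_first=True)
        self.word_lstm = nn.LSTM(word_dim + 2 * char_hidden, hidden,
                                 bidirectional=True, batch_first=True)
        self.emissions = nn.Linear(2 * hidden, num_tags)
        self.crf = CRF(num_tags, batch_first=True)

    def _encode(self, words, chars):
        # chars: (batch, seq_len, char_len); encode each word's characters.
        b, s, c = chars.shape
        _, (h, _) = self.char_lstm(self.char_emb(chars.view(b * s, c)))
        char_repr = torch.cat([h[0], h[1]], dim=-1).view(b, s, -1)
        word_repr = torch.cat([self.word_emb(words), char_repr], dim=-1)
        out, _ = self.word_lstm(word_repr)
        return self.emissions(out)          # (batch, seq_len, num_tags)

    def loss(self, words, chars, tags):
        return -self.crf(self._encode(words, chars), tags)  # negative log-likelihood

    def decode(self, words, chars):
        return self.crf.decode(self._encode(words, chars))  # best tag sequences

# Toy batch: 2 sentences of 6 words, each word up to 8 characters.
model = CharWordBiLSTMCRF(vocab_size=5000, char_size=80, num_tags=5)
words = torch.randint(1, 5000, (2, 6))
chars = torch.randint(1, 80, (2, 6, 8))
tags = torch.randint(0, 5, (2, 6))
print(model.loss(words, chars, tags).item())
print(model.decode(words, chars))
```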

Collaboration


An overview of Chengjie Sun's collaborations.

Top Co-Authors

Xiaolong Wang (Harbin Institute of Technology)
Bingquan Liu (Harbin Institute of Technology)
Lei Lin (Harbin Institute of Technology)
Ming Liu (Harbin Institute of Technology)
Yuanchao Liu (Harbin Institute of Technology)
Baoxun Wang (Harbin Institute of Technology)
Lili Shan (Harbin Institute of Technology)
Deyuan Zhang (Harbin Institute of Technology)
Feng Liu (Harbin Institute of Technology)
Yang Liu (Harbin Institute of Technology)