Bing Qin
Harbin Institute of Technology
Publication
Featured research published by Bing Qin.
Empirical Methods in Natural Language Processing | 2015
Duyu Tang; Bing Qin; Ting Liu
Document-level sentiment classification remains a challenge: encoding the intrinsic relations between sentences in the semantic meaning of a document. To address this, we introduce a neural network model that learns vector-based document representations in a unified, bottom-up fashion. The model first learns sentence representations with a convolutional neural network or a long short-term memory network. Afterwards, the semantics of sentences and their relations are adaptively encoded in the document representation with a gated recurrent neural network. We conduct document-level sentiment classification on four large-scale review datasets from IMDB and the Yelp Dataset Challenge. Experimental results show that: (1) our neural model achieves superior performance over several state-of-the-art algorithms; (2) the gated recurrent neural network dramatically outperforms a standard recurrent neural network in document modeling for sentiment classification.
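The bottom-up composition step can be sketched with a single GRU cell run over precomputed sentence vectors. This is a minimal numpy illustration with random weights and made-up dimensions, not the paper's trained model; in practice the sentence vectors would come from the CNN/LSTM encoder and all weights would be learned:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8  # hidden/sentence-vector size (illustrative)

# Hypothetical sentence vectors, standing in for the CNN/LSTM encoder output.
sentences = rng.standard_normal((5, d))

# GRU parameters (randomly initialized here for illustration only).
Wz, Uz = rng.standard_normal((d, d)) * 0.1, rng.standard_normal((d, d)) * 0.1
Wr, Ur = rng.standard_normal((d, d)) * 0.1, rng.standard_normal((d, d)) * 0.1
Wh, Uh = rng.standard_normal((d, d)) * 0.1, rng.standard_normal((d, d)) * 0.1

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(h, x):
    z = sigmoid(Wz @ x + Uz @ h)          # update gate
    r = sigmoid(Wr @ x + Ur @ h)          # reset gate
    h_tilde = np.tanh(Wh @ x + Uh @ (r * h))
    return (1 - z) * h + z * h_tilde      # gated combination of old and new state

h = np.zeros(d)
for s in sentences:                       # compose sentences in document order
    h = gru_step(h, s)

doc_vector = h  # document representation, fed to a softmax classifier downstream
print(doc_vector.shape)  # (8,)
```

The gating is what lets the model adaptively decide how much each sentence contributes to the document representation, which is the property the abstract credits for outperforming a standard RNN.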
Meeting of the Association for Computational Linguistics | 2014
Ruiji Fu; Jiang Guo; Bing Qin; Wanxiang Che; Haifeng Wang; Ting Liu
Semantic hierarchy construction aims to build structures of concepts linked by hypernym–hyponym ("is-a") relations. A major challenge for this task is the automatic discovery of such relations. This paper proposes a novel and effective method for constructing semantic hierarchies based on word embeddings, which can be used to measure the semantic relationship between words. We identify whether a candidate word pair has a hypernym–hyponym relation by using word-embedding-based semantic projections between words and their hypernyms. Our result, an F-score of 73.74%, outperforms the state-of-the-art methods on a manually labeled test dataset. Moreover, combining our method with a previous manually built hierarchy extension method further improves the F-score to 80.29%.
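The projection idea can be illustrated on toy vectors: fit a matrix M by least squares so that it maps hyponym embeddings near their hypernym embeddings, then threshold the projection residual for a candidate pair. The dimensions, synthetic data, and threshold below are all invented for illustration; the paper's actual setup (multiple piecewise projections, real embeddings) is richer than this sketch:

```python
import numpy as np

rng = np.random.default_rng(1)
d = 6
# Toy "embeddings": pretend hypernym vectors are a noisy linear map of hyponyms.
M_true = rng.standard_normal((d, d))
hyponyms = rng.standard_normal((50, d))
hypernyms = hyponyms @ M_true.T + 0.01 * rng.standard_normal((50, d))

# Fit projection M minimizing ||M x - y||^2 over training pairs (least squares).
W, *_ = np.linalg.lstsq(hyponyms, hypernyms, rcond=None)
M = W.T

def is_hypernym(x, y, threshold=0.5):
    # Accept candidate pair (x, y) when the projection of x lands near y.
    return np.linalg.norm(M @ x - y) < threshold

x = rng.standard_normal(d)
print(is_hypernym(x, M_true @ x))              # consistent pair: True
print(is_hypernym(x, rng.standard_normal(d)))  # random vector: likely rejected
```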
International Joint Conference on Natural Language Processing | 2015
Duyu Tang; Bing Qin; Ting Liu
Neural network methods have achieved promising results for sentiment classification of text. However, these models use only the semantics of texts, ignoring the users who express the sentiment and the products being evaluated, both of which strongly influence how the sentiment of a text is interpreted. In this paper, we address this issue by incorporating user- and product-level information into a neural network approach for document-level sentiment classification. Users and products are modeled using vector space models, whose representations capture important global clues such as individual preferences of users or overall qualities of products. Such global evidence in turn facilitates the embedding learning procedure at the document level, yielding better text representations. By combining evidence at the user, product, and document levels in a unified neural framework, the proposed model achieves state-of-the-art performance on the IMDB and Yelp datasets.
International Conference on Computational Linguistics | 2014
Duyu Tang; Furu Wei; Bing Qin; Ting Liu; Ming Zhou
In this paper, we develop a deep learning system for message-level Twitter sentiment classification. Among the 45 submitted systems, including the SemEval 2013 participants, our system (Coooolll) is ranked 2nd on the Twitter2014 test set of SemEval 2014 Task 9. Coooolll is built in a supervised learning framework by concatenating sentiment-specific word embedding (SSWE) features with state-of-the-art hand-crafted features. We develop a neural network with a hybrid loss function to learn SSWE, which encodes the sentiment information of tweets in the continuous representation of words. To obtain large-scale training corpora, we train SSWE on 10M tweets collected by positive and negative emoticons, without any manual annotation. Our system can be easily re-implemented with the publicly available sentiment-specific word embeddings.
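The distant-supervision step (collecting labeled tweets by emoticons) can be sketched as follows. The emoticon lists and the rule for discarding tweets with conflicting or absent signals are assumptions for illustration, since the abstract does not specify them:

```python
import re

# Hypothetical emoticon sets; the paper's exact lists are not given here.
POSITIVE = {":)", ":-)", ":D"}
NEGATIVE = {":(", ":-("}

def distant_label(tweet):
    """Assign a sentiment label from emoticons, then strip them from the text.
    Returns (label, cleaned_text) or None when no unambiguous signal exists."""
    has_pos = any(e in tweet for e in POSITIVE)
    has_neg = any(e in tweet for e in NEGATIVE)
    if has_pos == has_neg:        # no emoticon, or conflicting signals: discard
        return None
    label = 1 if has_pos else 0
    text = tweet
    # Remove longer emoticons first so ":-)" is not left as ":-" after ":)".
    for e in sorted(POSITIVE | NEGATIVE, key=len, reverse=True):
        text = text.replace(e, "")
    return label, re.sub(r"\s+", " ", text).strip()

print(distant_label("great match today :)"))   # (1, 'great match today')
print(distant_label("so tired :("))            # (0, 'so tired')
print(distant_label("meh"))                    # None
```

Scaled to millions of tweets, this yields a large training corpus for the embedding learner without any manual annotation, at the cost of label noise.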
Conference on Computational Natural Language Learning | 2009
Wanxiang Che; Zhenghua Li; Yongqiang Li; Yuhang Guo; Bing Qin; Ting Liu
Our CoNLL 2009 Shared Task system includes three cascaded components: syntactic parsing, predicate classification, and semantic role labeling. A pseudo-projective, high-order, graph-based model is used in our syntactic dependency parser. A support vector machine (SVM) model is used to classify predicate senses. Semantic role labeling is achieved using maximum entropy (MaxEnt) based semantic role classification and integer linear programming (ILP) based post-inference. Finally, we won first place in the joint task, in both the closed and open challenges.
Empirical Methods in Natural Language Processing | 2016
Duyu Tang; Bing Qin; Ting Liu
We introduce a deep memory network for aspect-level sentiment classification. Unlike feature-based SVMs and sequential neural models such as LSTMs, this approach explicitly captures the importance of each context word when inferring the sentiment polarity of an aspect. Such importance degrees and the text representation are calculated with multiple computational layers, each of which is a neural attention model over an external memory. Experiments on laptop and restaurant datasets demonstrate that our approach performs comparably to a state-of-the-art feature-based SVM system, and substantially better than LSTM and attention-based LSTM architectures. On both datasets we show that multiple computational layers improve performance. Moreover, our approach is also fast: the deep memory network with 9 layers is 15 times faster than an LSTM with a CPU implementation.
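The stacked attention-over-memory idea can be sketched in a few lines: each "computational layer" scores every context word against the current aspect-side representation, takes a softmax-weighted sum of the memory, and feeds the result into the next layer. Dimensions, hop count, weights, and the residual-style update below are illustrative assumptions, not the paper's exact parameterization:

```python
import numpy as np

rng = np.random.default_rng(2)
d, n_words, n_hops = 4, 6, 3   # sizes and hop count are illustrative

memory = rng.standard_normal((n_words, d))  # context word vectors (external memory)
aspect = rng.standard_normal(d)             # aspect representation
W_att = rng.standard_normal((d, 2 * d)) * 0.1
v_att = rng.standard_normal(d) * 0.1

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_layer(memory, query):
    # Score each context word against the current query, then softmax-average.
    scores = np.array([v_att @ np.tanh(W_att @ np.concatenate([m, query]))
                       for m in memory])
    alpha = softmax(scores)                 # importance of each context word
    return alpha @ memory                   # weighted sum over memory slots

rep = aspect
for _ in range(n_hops):                     # stack computational layers (hops)
    rep = attention_layer(memory, rep) + rep

print(rep.shape)  # (4,)
```

Because each hop is only a handful of matrix-vector products over a fixed memory, the stack stays cheap relative to unrolling an LSTM over the sequence, which is consistent with the speed claim above.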
IEEE Transactions on Knowledge and Data Engineering | 2016
Duyu Tang; Furu Wei; Bing Qin; Nan Yang; Ting Liu; Ming Zhou
We propose learning sentiment-specific word embeddings, dubbed sentiment embeddings, in this paper. Existing word embedding learning algorithms typically use only the contexts of words and ignore the sentiment of texts. This is problematic for sentiment analysis because words with similar contexts but opposite sentiment polarity, such as good and bad, are mapped to neighboring word vectors. We address this issue by encoding sentiment information of texts (e.g., sentences and words) together with contexts of words in sentiment embeddings. By combining context- and sentiment-level evidence, the nearest neighbors in the sentiment embedding space are semantically similar, and the space favors words with the same sentiment polarity. In order to learn sentiment embeddings effectively, we develop a number of neural networks with tailored loss functions, and automatically collect massive texts carrying sentiment signals, such as emoticons, as the training data. Sentiment embeddings can be naturally used as word features for a variety of sentiment analysis tasks without feature engineering. We apply sentiment embeddings to word-level sentiment analysis, sentence-level sentiment classification, and building sentiment lexicons. Experimental results show that sentiment embeddings consistently outperform context-based embeddings on several benchmark datasets for these tasks. This work provides insights on the design of neural networks for learning task-specific word embeddings in other natural language processing tasks.
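The combination of context and sentiment evidence can be illustrated as a hybrid hinge loss that interpolates a C&W-style context ranking term with a sentiment term. All scores and the interpolation weight below are made-up numbers for one hypothetical training example; in the real model these scores come from the neural network and the loss is minimized over the whole corpus:

```python
def hinge(x):
    # Standard hinge: max(0, 1 - x).
    return max(0.0, 1.0 - x)

# Hypothetical model scores for one training occurrence of a word:
context_score_true = 0.8   # score for the observed context word
context_score_neg = 0.3    # score for a corrupted (negative-sampled) word
sent_score = 0.4           # predicted sentiment score of the surrounding text
gold_polarity = 1.0        # +1 positive, -1 negative (from distant labels)

# Context term: rank the true context word above the corrupted one.
loss_context = hinge(context_score_true - context_score_neg)
# Sentiment term: push the predicted sentiment toward the distant label.
loss_sentiment = hinge(gold_polarity * sent_score)

alpha = 0.5                # hypothetical interpolation weight
loss = alpha * loss_context + (1 - alpha) * loss_sentiment
print(round(loss, 2))      # 0.55
```

Setting alpha to 1 recovers a purely context-based embedding objective; lowering it is what separates good from bad in the learned space.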
Conference on Computational Natural Language Learning | 2008
Wanxiang Che; Zhenghua Li; Yuxuan Hu; Yongqiang Li; Bing Qin; Ting Liu; Sheng Li
We describe our CoNLL 2008 Shared Task system in this paper. The system includes two cascaded components: a syntactic and a semantic dependency parser. A first-order projective MSTParser is used as our syntactic dependency parser. To overcome the MSTParser's shortcoming of being unable to model more global information, we add a relabeling stage after parsing to distinguish some confusable labels, such as ADV, TMP, and LOC. Besides adding predicate identification and classification stages, our semantic dependency parsing simplifies the traditional four-stage semantic role labeling pipeline into two stages: maximum entropy based argument classification and ILP-based post-inference. Finally, we achieve an overall labeled macro F1 of 82.66, ranking second in the closed challenge.
Expert Systems With Applications | 2012
Ruifang He; Bing Qin; Ting Liu
Highlights: (1) A novel content selection framework for update summarization is proposed. (2) Evolutionary manifold-ranking makes the summary novel and relevant to the topic. (3) Integration with spectral clustering gives the summary high coverage. (4) A redundancy removal strategy with exponential decay is used in sentence selection. (5) Results on the TAC 2008 update summarization task show our approach is competitive.
Update summarization is a new challenge in automatic text summarization. Unlike traditional static summarization, it deals with the dynamically evolving document collections of a single topic changing over time, aiming to incrementally deliver salient and novel information to a user who has already read the previous documents. Content selection and linguistic quality control in a temporal context are the two new challenges brought by update summarization. In this paper, we propose a novel content selection framework based on evolutionary manifold-ranking and normalized spectral clustering. The proposed evolutionary manifold-ranking aims to capture the temporal characteristics and relay propagation of information in the dynamic data stream and user need. This approach keeps the summary content important, novel, and relevant to the topic. Incorporating normalized spectral clustering makes the summary cover each sub-topic well. Ordering sub-topics and selecting sentences depend on the rank scores from evolutionary manifold-ranking and the proposed redundancy removal strategy with exponential decay. Evaluation results on the update summarization task of the Text Analysis Conference (TAC) 2008 demonstrate that our proposed approach is competitive: among the 71 submitted systems, we rank first on three PYRAMID metrics, 13th in ROUGE-2, 15th in ROUGE-SU4, and 21st in BE.
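The evolutionary variant builds on the classic manifold-ranking update f ← αSf + (1−α)y over a sentence similarity graph, where S is the symmetrically normalized similarity matrix and y injects the topic/query prior. The sketch below shows the plain (non-evolutionary) iteration on random similarities; the graph, the query vector, and α are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 12
# Toy sentence similarity matrix: symmetric, nonnegative, zero diagonal.
A = np.abs(rng.standard_normal((n, n)))
W = (A + A.T) / 2
np.fill_diagonal(W, 0.0)

# Symmetric normalization S = D^{-1/2} W D^{-1/2}, as in manifold ranking.
Dinv_sqrt = np.diag(1.0 / np.sqrt(W.sum(axis=1)))
S = Dinv_sqrt @ W @ Dinv_sqrt

y = np.zeros(n)
y[0] = 1.0           # hypothetical topic/query sentence
alpha = 0.85         # propagation vs. prior trade-off

f = y.copy()
for _ in range(100):                  # iterate f <- alpha*S*f + (1-alpha)*y
    f = alpha * (S @ f) + (1 - alpha) * y

ranked = np.argsort(-f)               # sentences ordered by rank score
print(int(ranked[0]))
```

Sentence selection then walks down this ranking, with the exponential-decay redundancy penalty demoting sentences too similar to those already chosen; the evolutionary version additionally updates the graph and prior as new documents arrive.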
Wiley Interdisciplinary Reviews-Data Mining and Knowledge Discovery | 2015
Duyu Tang; Bing Qin; Ting Liu
Sentiment analysis (also known as opinion mining) is an active research area in natural language processing. It aims at identifying, extracting, and organizing sentiments from user-generated texts in social networks, blogs, or product reviews. Over the past 15 years, many studies in the literature have exploited machine learning approaches to solve sentiment analysis tasks from different perspectives. Since the performance of a machine learner heavily depends on the choice of data representation, many studies have been devoted to building powerful feature extractors with domain expertise and careful engineering. Recently, deep learning approaches have emerged as powerful computational models that discover intricate semantic representations of texts automatically from data without feature engineering. These approaches have improved the state-of-the-art in many sentiment analysis tasks, including sentiment classification of sentences/documents, sentiment extraction, and sentiment lexicon learning. In this paper, we provide an overview of successful deep learning approaches for sentiment analysis tasks, lay out the remaining challenges, and offer some suggestions for addressing them. WIREs Data Mining Knowl Discov 2015, 5:292–303. doi: 10.1002/widm.1171