Likun Qiu
Peking University
Publication
Featured research published by Likun Qiu.
conference on information and knowledge management | 2009
Likun Qiu; Weishi Zhang; Changjian Hu; Kai Zhao
This paper presents the SELC Model (SElf-supervised, Lexicon-based and Corpus-based model) for sentiment classification. The SELC Model consists of two phases. The first phase is a lexicon-based iterative process: some reviews are initially classified based on a sentiment dictionary, and more reviews are then classified through an iterative process with a negative/positive ratio control. In the second phase, a supervised classifier is trained using reviews classified in the first phase as training data, and is then applied to the remaining reviews to revise the results produced in the first phase. Experiments show the effectiveness of the proposed model. SELC achieves a total 6.63% F1-score improvement over the best result in previous studies on the same data (from 82.72% to 89.35%); the first phase alone achieves a 5.90% improvement (from 82.72% to 88.62%). Moreover, the standard deviation of F1-scores is reduced, which suggests that the SELC Model could be more suitable for domain-independent sentiment classification.
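The two-phase pipeline described above can be sketched in miniature. This is an illustrative toy, not the paper's implementation: the lexicon here is a tiny hypothetical word list standing in for a full sentiment dictionary, the ratio control is omitted, and the phase-2 "classifier" is a simple per-class word-frequency scorer standing in for the supervised learner used in the paper.

```python
from collections import Counter

# Hypothetical tiny sentiment lexicon (the paper uses a full dictionary).
POS_WORDS = {"good", "great", "excellent", "love"}
NEG_WORDS = {"bad", "poor", "terrible", "hate"}

def lexicon_score(review):
    """Phase 1 scoring: (#positive words) - (#negative words)."""
    tokens = review.lower().split()
    return sum(t in POS_WORDS for t in tokens) - sum(t in NEG_WORDS for t in tokens)

def phase1_classify(reviews):
    """Label reviews whose lexicon score is nonzero; the rest stay unlabeled."""
    labels = {}
    for r in reviews:
        s = lexicon_score(r)
        if s != 0:
            labels[r] = "pos" if s > 0 else "neg"
    return labels

def phase2_train(labels):
    """Phase 2: learn per-class word counts from phase-1 pseudo-labels
    (a toy stand-in for the supervised classifier in the paper)."""
    counts = {"pos": Counter(), "neg": Counter()}
    for review, label in labels.items():
        counts[label].update(review.lower().split())
    return counts

def phase2_classify(review, counts):
    """Classify a review by which class's training words it overlaps most."""
    tokens = review.lower().split()
    pos_score = sum(counts["pos"][t] for t in tokens)
    neg_score = sum(counts["neg"][t] for t in tokens)
    return "pos" if pos_score >= neg_score else "neg"
```

Note how phase 2 can now label reviews that contain no lexicon words at all, which is the point of bootstrapping a corpus-based classifier from lexicon-based labels.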
international conference on computational linguistics | 2011
Likun Qiu; Yunfang Wu; Yanqiu Shao
Supersense tagging classifies unknown words into semantic categories defined by lexicographers and inserts them into a thesaurus. Previous studies on supersense tagging show that context-based methods perform well for English unknown words while structure-based methods perform well for Chinese unknown words. The challenge is how to successfully combine contextual and structural information for supersense tagging of Chinese unknown words. We propose a simple yet effective approach to address this challenge: contextual information is used to measure contextual similarity between words, while structural information is used to filter candidate synonyms and adjust the contextual similarity score. Experimental results show that the proposed approach outperforms the state-of-the-art context-based and structure-based methods.
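The combination strategy can be illustrated with a minimal sketch: cosine similarity over context vectors supplies the contextual score, and a structural predicate filters which thesaurus entries are eligible candidates. All names here are hypothetical, and the `shares_head` test (comparing final characters, a common head position in Chinese compounds) is only a crude stand-in for the structural information used in the paper.

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two sparse count vectors (Counters)."""
    num = sum(a[k] * b[k] for k in a)
    den = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def tag_unknown(word, contexts, thesaurus, shares_head):
    """Assign the supersense of the most contextually similar known word,
    after structurally filtering the candidate set."""
    best_sense, best_score = None, -1.0
    for known, sense in thesaurus.items():
        if not shares_head(word, known):  # structural filter
            continue
        score = cosine(contexts[word], contexts[known])
        if score > best_score:
            best_sense, best_score = sense, score
    return best_sense
```

The structural filter prunes implausible candidates before any contextual comparison, so a noisy context vector cannot pull an unknown word toward a structurally incompatible category.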
international conference on computational linguistics | 2008
Likun Qiu; Changjian Hu; Kai Zhao
This paper proposes a method for automatic POS (part-of-speech) guessing of Chinese unknown words. It comprises two models. The first model uses a machine-learning method to predict the POS of unknown words based on their internal component features. The credibility of the first model's results is then measured, and for low-credibility words, the second model revises those results based on the global context information of the words. The experiments show that the first model achieves 93.40% precision for all words and 86.60% for disyllabic words, a significant improvement over the best results reported in previous studies (89% precision for all words and 74% for disyllabic words). The second model further improves the results by 0.80% precision for all words and 1.30% for disyllabic words.
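The two-model cascade can be sketched as follows. This is a toy illustration under loud assumptions: `CHAR_POS` is a hypothetical two-entry lexicon of POS tendencies for final characters (standing in for the learned component-feature model), credibility is taken as the probability margin between the top two candidates, and the context-based revision is reduced to a vote count.

```python
# Hypothetical POS tendencies of final characters (stand-in for the
# machine-learned component-feature model in the paper).
CHAR_POS = {"器": {"n": 0.9, "v": 0.1}, "化": {"v": 0.7, "n": 0.3}}

def guess_pos(word):
    """Model 1: predict POS from the word's final component.
    Credibility = probability margin between the top two candidates."""
    dist = CHAR_POS.get(word[-1], {"n": 0.5, "v": 0.5})
    ranked = sorted(dist.items(), key=lambda kv: -kv[1])
    pos = ranked[0][0]
    credibility = ranked[0][1] - ranked[1][1]
    return pos, credibility

def revise(pos, credibility, context_votes, threshold=0.5):
    """Model 2: for low-credibility predictions, revise by majority vote
    over POS tags suggested by the word's global contexts (toy stand-in)."""
    if credibility >= threshold or not context_votes:
        return pos
    return max(context_votes, key=context_votes.get)
```

High-credibility predictions bypass model 2 entirely, which matches the cascade structure the abstract describes: context information is consulted only where the component features are ambiguous.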
web intelligence | 2011
Yanqiu Shao; Likun Qiu; Chunxia Liang
Deep semantic parsing is key to understanding sentence meaning. This paper integrates Chinese semantic relation systems proposed by different scholars and presents a more comprehensive system for semantic dependency parsing. The new semantic relation system includes definitions for the cases where a verb acts as a modifier and where a verbal noun acts as the head of a noun phrase. Based on this relation system, a large-scale Chinese semantic dependency treebank is constructed through a combination of automatic and manual means. This semantic dependency treebank will serve as a basis for studying deep semantic parsing.
international conference on asian language processing | 2011
Likun Qiu; Lei Wu; Kai Zhao; Changjian Hu; Lingpeng Kong
To solve the data sparseness problem in dependency parsing, most previous studies used features constructed from large-scale auto-parsed data. Unlike previous work, we propose a new approach to improve dependency parsing with context-free dependency triples (CDTs) extracted using self-disambiguating patterns (SDPs). The use of SDPs avoids dependence on a baseline parser and makes it possible to explore the influence of different types of substructures one by one. Additionally, taking the available CDTs as seeds, a label propagation process is used to tag a large number of unlabeled word pairs as CDTs. Experiments show that when CDT features are integrated into a maximum spanning tree (MST) dependency parser, the new parser improves significantly over the baseline MST parser. Comparative results also show that CDTs with dependency relation labels perform much better than CDTs without them.
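The seed-expansion step can be illustrated with a minimal label propagation sketch. This is a generic toy under stated assumptions: nodes are word pairs, `graph` is a hypothetical weighted similarity graph between pairs, seed labels (from SDP-extracted CDTs) stay fixed, and each unlabeled node adopts the highest-weighted label among its labeled neighbours.

```python
from collections import Counter

def propagate(seeds, graph, iters=10):
    """Propagate dependency-relation labels from seed word pairs over a
    similarity graph. graph: node -> list of (neighbour, weight).
    Seed labels are clamped; others adopt the best-voted neighbour label."""
    labels = dict(seeds)
    for _ in range(iters):
        updates = {}
        for node, neighbours in graph.items():
            if node in seeds:
                continue  # seeds are never overwritten
            votes = Counter()
            for nb, weight in neighbours:
                if nb in labels:
                    votes[labels[nb]] += weight
            if votes:
                updates[node] = votes.most_common(1)[0][0]
        if all(labels.get(n) == lab for n, lab in updates.items()):
            break  # converged: no label changed this round
        labels.update(updates)
    return labels
```

After propagation, the newly labeled pairs can join the seed CDTs as features for the MST parser, which is the role the expanded set plays in the experiments described above.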
international conference on computational linguistics | 2014
Likun Qiu; Yue Zhang; Peng Jin; Houfeng Wang
Int. J. of Asian Lang. Proc. | 2012
Likun Qiu; Lei Wu; Kai Zhao; Changjian Hu
web intelligence/iat workshops | 2011
Yanqiu Shao; Likun Qiu; Chunxia Liang
web intelligence/iat workshops | 2011
Likun Qiu; Yunfang Wu; Jing Shi; Yanqiu Shao; Zhiyi Long
international conference on asian language processing | 2011
Lingpeng Kong; Likun Qiu