Hongling Wang
Soochow University (China)
Publications
Featured research published by Hongling Wang.
ACM Transactions on Asian Language Information Processing | 2012
Hongling Wang; Guodong Zhou
This article presents a unified framework for extracting standard and update summaries from a set of documents. In particular, a topic modeling approach is employed for salience determination, and a dynamic modeling approach is proposed for redundancy control. In the topic modeling approach, we represent various kinds of text units, such as words, sentences, documents, document sets, and summaries, in a single vector space model via their probability distributions over the inherent topics of the given documents or a related corpus. We can therefore compute the similarity between any two text units from their topic probability distributions. In the dynamic modeling approach, standard summarization considers not only the similarity between a sentence and the given documents, but also the similarity between the summary and the given documents and between the sentence and the summary; update summarization additionally considers the similarity between the sentence and the history documents or summary. Evaluation on the English TAC 2008 and 2009 data shows encouraging results, especially for the dynamic modeling approach in removing redundancy from the given documents. Finally, we extend the framework to Chinese multi-document summarization, and experiments show its effectiveness.
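The redundancy-control idea described above can be sketched as a greedy selection loop over topic distributions: each candidate sentence is rewarded for similarity to the documents' topic distribution and penalized for similarity to the summary built so far. This is an illustrative reconstruction, not the paper's actual algorithm; the function names, the fixed trade-off weight `alpha`, and the averaging update of the summary distribution are all assumptions.

```python
import math

def cosine(p, q):
    """Cosine similarity between two topic probability distributions."""
    dot = sum(a * b for a, b in zip(p, q))
    norm_p = math.sqrt(sum(a * a for a in p))
    norm_q = math.sqrt(sum(b * b for b in q))
    return dot / (norm_p * norm_q) if norm_p and norm_q else 0.0

def select_summary(sent_topics, doc_topic, k=2, alpha=0.7):
    """Greedy sentence selection: reward salience (similarity to the
    documents' topic distribution), penalize redundancy (similarity to
    the summary-so-far distribution). Returns selected sentence indices."""
    chosen = []
    summary = [0.0] * len(doc_topic)  # topic distribution of summary so far
    candidates = list(range(len(sent_topics)))
    for _ in range(k):
        best, best_score = None, float("-inf")
        for i in candidates:
            score = alpha * cosine(sent_topics[i], doc_topic) \
                    - (1 - alpha) * cosine(sent_topics[i], summary)
            if score > best_score:
                best, best_score = i, score
        chosen.append(best)
        candidates.remove(best)
        # update the summary's topic distribution (simple running average)
        n = len(chosen)
        summary = [(s * (n - 1) + t) / n
                   for s, t in zip(summary, sent_topics[best])]
    return chosen
```

With three sentences whose topic distributions are [0.9, 0.1], [0.8, 0.2], and [0.1, 0.9] against a document distribution of [0.6, 0.4], the loop first picks the most salient sentence and then prefers the topically distinct third sentence over the near-duplicate first one, which is the redundancy-control behavior the abstract describes.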
Journal of Computer Science and Technology | 2013
Hongling Wang; Guodong Zhou
This paper explores a tree kernel based method for semantic role labeling (SRL) of Chinese nominal predicates via a convolution tree kernel. In particular, a new parse tree representation, called the dependency-driven constituent parse tree (D-CPT), is proposed to combine the advantages of both constituent and dependency parse trees. This is achieved by directly representing various kinds of dependency relations in a CPT-style structure, which employs dependency relation types instead of phrase labels in the CPT (Constituent Parse Tree). In this way, D-CPT not only keeps the dependency relationship information of the dependency parse tree (DPT) structure but also retains the basic hierarchical structure of the CPT style. Moreover, several schemes are designed to extract various kinds of necessary information from D-CPT, such as the shortest path between the nominal predicate and the argument candidate, the support verb of the nominal predicate, and the head argument modified by the argument candidate. This largely reduces the noisy information inherent in D-CPT. Finally, a convolution tree kernel is employed to compute the similarity between two parse trees. In addition, we implement a feature-based method based on D-CPT. Evaluation on the Chinese NomBank corpus shows that our tree kernel based method on D-CPT performs significantly better than other tree kernel based methods and achieves performance comparable to the state-of-the-art feature-based ones. This indicates the effectiveness of the novel D-CPT structure in representing various kinds of dependency relations in a CPT-style structure, and of our tree kernel based method in exploring that structure. It also illustrates that kernel-based methods are competitive with, and complementary to, feature-based methods for SRL.
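The convolution tree kernel mentioned above is, in its standard form (the Collins-Duffy subset tree kernel), a recursive count of tree fragments shared by two parse trees. The sketch below illustrates that general kernel over trees encoded as nested (label, child, ...) tuples; it is not the paper's D-CPT implementation, and the decay factor LAMBDA and the tuple encoding are assumptions for illustration.

```python
LAMBDA = 0.5  # decay factor down-weighting larger shared fragments

def nodes(t):
    """Yield every subtree (node) of a tree given as (label, child, ...)."""
    yield t
    for child in t[1:]:
        if isinstance(child, tuple):
            yield from nodes(child)

def production(t):
    """A node's production: its label plus the labels of its children."""
    return (t[0],) + tuple(c[0] if isinstance(c, tuple) else c for c in t[1:])

def delta(n1, n2):
    """Weighted count of common subset-tree fragments rooted at n1 and n2."""
    if production(n1) != production(n2):
        return 0.0
    kids1 = [c for c in n1[1:] if isinstance(c, tuple)]
    kids2 = [c for c in n2[1:] if isinstance(c, tuple)]
    if not kids1:          # pre-terminal node: exactly one matching fragment
        return LAMBDA
    prod = LAMBDA
    for c1, c2 in zip(kids1, kids2):
        prod *= 1.0 + delta(c1, c2)
    return prod

def tree_kernel(t1, t2):
    """Convolution tree kernel: sum delta over all pairs of nodes."""
    return sum(delta(a, b) for a in nodes(t1) for b in nodes(t2))
```

In a D-CPT-style setting the node labels would be dependency relation types (for example "SBJ", "OBJ") rather than phrase labels, so two trees score higher when they share dependency-labeled substructures around the predicate.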
Conference on Computational Natural Language Learning | 2008
Hongling Wang; Honglin Wang; Guodong Zhou; Qiaoming Zhu
This paper proposes a dependency tree-based SRL system with proper pruning and extensive feature engineering. Official evaluation on the CoNLL 2008 shared task shows that our system achieves 76.19 in labeled macro F1 for the overall task, 84.56 in labeled attachment score for syntactic dependencies, and 67.12 in labeled F1 for semantic dependencies on the combined test set, using the standalone MaltParser. In addition, this paper presents our unofficial system, which 1) applies a new, effective pruning algorithm; 2) includes additional features; and 3) adopts a better dependency parser, MSTParser. Unofficial evaluation on the shared task shows that this system achieves 82.53 in labeled macro F1, 86.39 in labeled attachment score, and 78.64 in labeled F1 on the combined test set, using MSTParser. This suggests that proper pruning and extensive feature engineering contribute much to dependency tree-based SRL.
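As an illustration of what pruning on a dependency tree can look like, the sketch below collects the dependents of the predicate and of each of its ancestors up to the root, a dependency-style analogue of classic constituent-based argument pruning. This is an assumed simplification for illustration, not the paper's actual pruning algorithm; the function name and head-array encoding are hypothetical.

```python
def prune_candidates(heads, predicate):
    """Collect argument candidates for dependency-based SRL by walking
    from the predicate up to the root, keeping the dependents of the
    predicate and of each of its ancestors (the predicate itself is
    excluded). `heads[i]` is the head index of token i, -1 for the root."""
    children = {}
    for tok, head in enumerate(heads):
        children.setdefault(head, []).append(tok)
    candidates = []
    node = predicate
    while node != -1:
        candidates.extend(c for c in children.get(node, []) if c != predicate)
        node = heads[node]
    return candidates
```

For a 5-token sentence with head array [2, 2, -1, 2, 3] and the predicate at index 3, the walk keeps token 3's own dependent (token 4) plus the root's other dependents (tokens 0 and 1), discarding everything outside the predicate's ancestor chain. Pruning of this kind shrinks the candidate set before feature extraction and classification.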
International Conference on Computational Linguistics | 2010
Junhui Li; Guodong Zhou; Hongling Wang; Qiaoming Zhu
International Conference on Computational Linguistics | 2010
Bing Liu; Longhua Qian; Hongling Wang; Guodong Zhou
Empirical Methods in Natural Language Processing | 2010
Qiaoming Zhu; Junhui Li; Hongling Wang; Guodong Zhou
Archive | 2009
Hongling Wang; Qiaoming Zhu; Peide Qian; Fang Kong; Peifeng Li; Guodong Zhou; Longhua Qian
International Conference on Computational Linguistics | 2012
Longhua Qian; Hongling Wang; Guodong Zhou; Qiaoming Zhu
Archive | 2009
Guodong Zhou; Peide Qian; Qiaoming Zhu; Peifeng Li; Junhui Li; Fang Kong; Hongling Wang; Longhua Qian
Archive | 2009
Qiaoming Zhu; Guodong Zhou; Peifeng Li; Junhui Li; Longhua Qian; Fang Kong; Hongling Wang; Peide Qian