Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Jun-Ping Ng is active.

Publication


Featured research published by Jun-Ping Ng.


Neurocomputing | 2015

Search engine reinforced semi-supervised classification and graph-based summarization of microblogs

Yan Chen; Xiaoming Zhang; Zhoujun Li; Jun-Ping Ng

There is an abundance of information on microblog services due to their popularity. However, the potential of this trove of information is limited by the lack of effective means for users to browse and interpret the numerous messages found on these services. We tackle this problem with a two-step process: first, we slice up the search results of current retrieval systems along multiple possible genres; then, a summary is generated from the microblog messages attributed to each genre. We believe this helps users to better understand the possible interpretations of the retrieved results and aids them in finding the information they need. Our approach makes use of automatically acquired information from external search engines in each of these two steps. We first integrate this information into a semi-supervised probabilistic graphical model, and show that this helps us achieve significantly better classification performance without the need for much training data. Next, we incorporate the extra information into graph-based summarization, and demonstrate that superior summaries (up to 30% improvement in ROUGE-1) are obtained.
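The graph-based summarization step in this abstract can be illustrated with a minimal TextRank-style sketch: sentences become graph nodes, cosine similarity between their word bags becomes edge weight, and a PageRank-style power iteration ranks them. This is only a stand-in for the paper's method; the search-engine signals the paper integrates are omitted here, and all function names are illustrative.

```python
from collections import Counter
from math import sqrt

def cosine(a, b):
    # Cosine similarity between two bag-of-words Counters
    # (missing keys in a Counter default to zero).
    dot = sum(a[w] * b[w] for w in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def textrank_summary(sentences, top_k=1, damping=0.85, iters=50):
    # Rank sentences by PageRank-style power iteration over a graph
    # whose edge weights are cosine similarities between word bags.
    bags = [Counter(s.lower().split()) for s in sentences]
    n = len(sentences)
    sim = [[cosine(bags[i], bags[j]) if i != j else 0.0
            for j in range(n)] for i in range(n)]
    out_weight = [sum(row) for row in sim]
    score = [1.0 / n] * n
    for _ in range(iters):
        score = [(1 - damping) / n
                 + damping * sum(sim[j][i] / out_weight[j] * score[j]
                                 for j in range(n) if out_weight[j])
                 for i in range(n)]
    ranked = sorted(range(n), key=lambda i: score[i], reverse=True)
    return [sentences[i] for i in ranked[:top_k]]
```

Run on a handful of microblog-like messages, the most central sentence (the one most similar to the rest of its genre) is selected as the summary.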


Meeting of the Association for Computational Linguistics | 2014

Exploiting Timelines to Enhance Multi-document Summarization

Jun-Ping Ng; Yan Chen; Min-Yen Kan; Zhoujun Li

We study the use of temporal information in the form of timelines to enhance multi-document summarization. We employ a fully automated temporal processing system to generate a timeline for each input document. We derive three features from these timelines, and show that their use in supervised summarization leads to a significant 4.1% improvement in ROUGE performance over a state-of-the-art baseline. In addition, we propose TIMEMMR, a modification to Maximal Marginal Relevance that promotes temporal diversity by way of computing time span similarity, and show its utility in summarizing certain document sets. We also propose a filtering metric to discard noisy timelines generated by our automatic processes, to purify the timeline input for summarization. By selectively using timelines guided by filtering, overall summarization performance is increased by a significant 5.9%.
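The TIMEMMR idea described above can be sketched as a greedy MMR loop whose redundancy penalty is computed over time spans rather than over lexical content. This is a simplified illustration, not the paper's exact formulation: the `span_overlap` Jaccard measure and the relevance scores are assumptions standing in for the paper's time span similarity and supervised relevance model.

```python
def span_overlap(a, b):
    # Jaccard-style overlap between two (start, end) time spans;
    # a stand-in for the paper's time span similarity.
    start = max(a[0], b[0])
    end = min(a[1], b[1])
    inter = max(0, end - start)
    union = (a[1] - a[0]) + (b[1] - b[0]) - inter
    return inter / union if union else 0.0

def time_mmr(candidates, relevance, spans, lam=0.5, k=2):
    # Greedy MMR selection where redundancy is temporal overlap with
    # already-selected sentences, promoting temporal diversity.
    selected = []
    remaining = list(range(len(candidates)))
    while remaining and len(selected) < k:
        def score(i):
            redundancy = max((span_overlap(spans[i], spans[j])
                              for j in selected), default=0.0)
            return lam * relevance[i] - (1 - lam) * redundancy
        best = max(remaining, key=score)
        remaining.remove(best)
        selected.append(best)
    return [candidates[i] for i in selected]
```

With two highly relevant sentences covering the same time span and a less relevant one covering a different period, the temporal penalty pushes the selection toward the temporally diverse sentence.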


Empirical Methods in Natural Language Processing | 2015

Better Summarization Evaluation with Word Embeddings for ROUGE

Jun-Ping Ng; Viktoria Abrecht

ROUGE is a widely adopted, automatic evaluation measure for text summarization. While it has been shown to correlate well with human judgements, it is biased towards surface lexical similarities. This makes it unsuitable for the evaluation of abstractive summarization, or summaries with substantial paraphrasing. We study the effectiveness of word embeddings in overcoming this disadvantage of ROUGE. Specifically, instead of measuring lexical overlaps, word embeddings are used to compute the semantic similarity of the words used in summaries. Our experimental results show that our proposal is able to achieve better correlations with human judgements when measured with the Spearman and Kendall rank coefficients.
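A toy sketch of the idea for unigrams: each reference word is credited with its best embedding-based similarity to any candidate word, instead of requiring an exact lexical match as standard ROUGE-1 recall does. The tiny hand-made embedding table and the exact credit-assignment scheme below are illustrative assumptions; the paper works with real pre-trained embeddings and its own scoring details.

```python
from math import sqrt

def cosine(u, v):
    # Cosine similarity between two dense vectors.
    dot = sum(x * y for x, y in zip(u, v))
    nu = sqrt(sum(x * x for x in u))
    nv = sqrt(sum(x * x for x in v))
    return dot / (nu * nv) if nu and nv else 0.0

def soft_rouge_1(candidate, reference, embeddings):
    # Recall-oriented unigram score: each reference word earns its best
    # similarity to any candidate word (1.0 on an exact lexical match,
    # embedding cosine otherwise), averaged over the reference.
    cand = candidate.lower().split()
    ref = reference.lower().split()
    total = 0.0
    for r in ref:
        best = 0.0
        for c in cand:
            if r == c:
                best = 1.0
                break
            if r in embeddings and c in embeddings:
                best = max(best, cosine(embeddings[r], embeddings[c]))
        total += best
    return total / len(ref) if ref else 0.0
```

A paraphrase such as "automobile" for "car" then scores close to a full match, whereas exact-match ROUGE-1 would give it no credit at all.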


International Conference on Information Technology and Applications | 2005

Dynamic Markov Compression Using a Crossbar-Like Tree Initial Structure for Chinese Texts

Ghim Hwee Ong; Jun-Ping Ng

This paper proposes a crossbar-like tree structure for use with dynamic Markov compression (DMC) in the compression of Chinese text files. DMC had previously been found to be more effective than common compression techniques such as compress and pack, giving a compression gain of between 13.1% and 32.0%. The proposed initial structure improves on DMC's compression results, and outperforms the initial structures commonly adopted, such as the single-state, linear, tree, or braid structures, by a gain ranging from 1.5% to 9.6%.


International Conference on Computational Linguistics | 2012

Exploiting Category-Specific Information for Multi-Document Summarization

Jun-Ping Ng; Praveen Bysani; Ziheng Lin; Min-Yen Kan; Chew Lim Tan


arXiv: Information Retrieval | 2015

QANUS: An Open-source Question-Answering Platform.

Jun-Ping Ng; Min-Yen Kan


North American Chapter of the Association for Computational Linguistics | 2010

Extracting Formulaic and Free Text Clinical Research Articles Metadata using Conditional Random Fields

Sein Lin; Jun-Ping Ng; Shreyasee S. Pradhan; Jatin Shah; Ricardo Pietrobon; Min-Yen Kan


Empirical Methods in Natural Language Processing | 2013

Exploiting Discourse Analysis for Article-Wide Temporal Classification

Jun-Ping Ng; Min-Yen Kan; Ziheng Lin; Wei Feng; Bin Chen; Jian Su; Chew Lim Tan


Empirical Methods in Natural Language Processing | 2013

Mining Scientific Terms and their Definitions: A Study of the ACL Anthology

Yiping Jin; Min-Yen Kan; Jun-Ping Ng; Xiangnan He


Text Analysis Conference | 2011

SWING: Exploiting Category-Specific Information for Guided Summarization.

Jun-Ping Ng; Praveen Bysani; Ziheng Lin; Min-Yen Kan; Chew Lim Tan

Collaboration


Dive into Jun-Ping Ng's collaborations.

Top Co-Authors

Min-Yen Kan
National University of Singapore

Chew Lim Tan
National University of Singapore

Ziheng Lin
National University of Singapore

Praveen Bysani
National University of Singapore

Ghim Hwee Ong
National University of Singapore

Sein Lin
National University of Singapore