Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Pei-Cheng Cheng is active.

Publication


Featured research published by Pei-Cheng Cheng.


Expert Systems with Applications | 2008

A two-level relevance feedback mechanism for image retrieval

Pei-Cheng Cheng; Been-Chian Chien; Hao Ren Ke; Wei-Pang Yang

Content-based image retrieval (CBIR) is a group of techniques that analyze the visual features (such as color, shape, and texture) of an example image or image subregion to find similar images in an image database. Relevance feedback is often used in a CBIR system to help users express their preferences and improve query results. Traditional relevance feedback relies on positive and negative examples to reformulate the query. Furthermore, if the system employs several visual features for a query, the weight of each feature is either adjusted manually by the user or predetermined and fixed by the system. In this paper we propose a new relevance feedback model suitable for medical image retrieval. The proposed method enables the user to rank the results in relevance order. From this ranking, the system automatically determines the relative importance of the features and adjusts their weights accordingly. The experimental results show that the new relevance feedback mechanism outperforms previous relevance feedback models.
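The abstract does not spell out the weighting formula, so the following is only a minimal sketch of rank-based feature weighting, assuming Spearman rank correlation between the user's relevance ranking and each feature's own distance ranking as the agreement measure; the function name and the normalization step are illustrative, not the paper's method.

```python
# Minimal sketch (not the paper's exact formula): derive feature weights from
# how well each feature's distance ranking agrees with the user's relevance
# ranking, using Spearman rank correlation as the agreement measure.
from scipy.stats import spearmanr

def update_feature_weights(user_ranking, feature_distances):
    """user_ranking: image ids ordered from most to least relevant (user feedback).
    feature_distances: {feature_name: {image_id: distance to the query}}."""
    weights = {}
    user_pos = {img: i for i, img in enumerate(user_ranking)}
    for name, dists in feature_distances.items():
        # Rank the same images by this feature alone (smaller distance = better).
        feature_ranking = sorted(user_ranking, key=lambda img: dists[img])
        feat_pos = {img: i for i, img in enumerate(feature_ranking)}
        rho, _ = spearmanr([user_pos[img] for img in user_ranking],
                           [feat_pos[img] for img in user_ranking])
        weights[name] = max(rho, 0.0)  # features disagreeing with the user get weight 0
    total = sum(weights.values()) or 1.0
    return {name: w / total for name, w in weights.items()}  # weights sum to 1
```

A subsequent retrieval pass would then score candidate images with a weighted sum of per-feature distances using these weights.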


Cross-Language Evaluation Forum | 2005

Combining textual and visual features for cross-language medical image retrieval

Pei-Cheng Cheng; Been-Chian Chien; Hao Ren Ke; Wei-Pang Yang

In this paper we describe the technologies and experimental results for the medical retrieval and automatic annotation tasks. We combine textual and content-based approaches to retrieve relevant medical images. The content-based approach, which uses four image features, and the text-based approach, which uses word expansion, are developed to accomplish these tasks. Experimental results show that combining the content-based and text-based approaches is better than using either approach alone. In the automatic annotation task we use Support Vector Machines (SVM) to learn image feature characteristics and assist image classification. Based on the SVM model, we analyze which image features are most promising in medical image retrieval. The results show that the spatial relationship between pixels is an important feature, because medical images always contain similar anatomic regions. Therefore, image features that emphasize spatial relationships perform better than others.
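As an illustration of the annotation step, here is a minimal sketch of training an SVM on pre-computed visual feature vectors with scikit-learn; the kernel, the scaling step, and the feature set are assumptions, not the configuration reported in the paper.

```python
# Minimal sketch: SVM-based automatic annotation of medical images, assuming
# pre-computed visual feature vectors (e.g., color and spatial-relationship
# descriptors) and one anatomic category label per image.
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def train_annotator(feature_vectors, labels):
    """feature_vectors: array of shape (n_images, n_features); labels: one category per image."""
    model = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    model.fit(feature_vectors, labels)
    return model

# Usage (hypothetical data):
# predicted = train_annotator(X_train, y_train).predict(X_test)
```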


Cross-Language Evaluation Forum | 2004

Comparison and combination of textual and visual features for interactive cross-language image retrieval

Pei-Cheng Cheng; Jen Yuan Yeh; Hao Ren Ke; Been-Chian Chien; Wei-Pang Yang

This paper concentrates on the user-centered search task at ImageCLEF 2004. In this work, we combine textual and visual features for cross-language image retrieval and propose two interactive retrieval systems, T_ICLEF and VCT_ICLEF. The first incorporates a relevance feedback mechanism based on textual information, while the second combines textual and image information to help users find a target image. The experimental results show that VCT_ICLEF performed better in almost all cases. Overall, it helped users find the topic image in fewer iterations, saving up to 2 iterations. Our user survey also reported that combining textual and visual information helps users indicate to the system what they really have in mind.
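The abstract does not give the fusion rule used by VCT_ICLEF, so the sketch below shows only one simple way such a system might merge textual and visual retrieval scores: min-max normalization followed by a weighted sum. The weight alpha and the normalization are assumptions.

```python
# Sketch of late fusion of textual and visual retrieval scores; the weighting
# and normalization are illustrative assumptions, not the system's actual method.
def fuse_scores(text_scores, visual_scores, alpha=0.5):
    """text_scores, visual_scores: {image_id: similarity score}. Returns ranked image ids."""
    def minmax(scores):
        lo, hi = min(scores.values()), max(scores.values())
        span = (hi - lo) or 1.0
        return {k: (v - lo) / span for k, v in scores.items()}
    t, v = minmax(text_scores), minmax(visual_scores)
    ids = set(t) | set(v)
    fused = {i: alpha * t.get(i, 0.0) + (1 - alpha) * v.get(i, 0.0) for i in ids}
    return sorted(ids, key=lambda i: fused[i], reverse=True)
```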


Cross-Language Evaluation Forum | 2004

SMIRE: similar medical image retrieval engine

Pei-Cheng Cheng; Been-Chian Chien; Hao Ren Ke; Wei-Pang Yang

This paper aims at finding images that are similar to a medical example query image. We propose several image features based on wavelet coefficients, including a color histogram, gray-spatial histogram, coherence moment, and gray correlogram, to facilitate the retrieval of similar medical images. The initial retrieval results are obtained via visual feature analysis. An automatic feedback mechanism that clusters visually and textually similar images among these initial results is also proposed to help refine the query. In the ImageCLEF 2004 evaluation, the experimental results show that our system achieves excellent mean average precision.
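To make the feature extraction concrete, here is a minimal sketch of one of the simpler features named above, a gray histogram computed over wavelet approximation coefficients; the wavelet family, single-level decomposition, and bin count are illustrative assumptions rather than the paper's settings.

```python
# Minimal sketch: gray histogram over the low-frequency wavelet sub-band.
# The "haar" wavelet, single-level decomposition, and 32 bins are assumptions.
import numpy as np
import pywt

def wavelet_gray_histogram(image, bins=32):
    """image: 2-D grayscale array. Returns an L1-normalized histogram of the
    approximation (low-frequency) coefficients."""
    approx, _details = pywt.dwt2(image.astype(float), "haar")
    hist, _edges = np.histogram(approx, bins=bins)
    return hist / max(hist.sum(), 1)

def histogram_distance(h1, h2):
    # L1 dissimilarity used to rank database images against the query feature.
    return float(np.abs(h1 - h2).sum())
```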


International Conference on Software and Computer Applications | 2017

Reference scope identification for citances by classification with text similarity measures

Jen-Yuan Yeh; Tien-Yu Hsu; Cheng-Jung Tsai; Pei-Cheng Cheng

This paper targets the first step towards generating citation summaries: identifying the reference scope (i.e., cited text spans) for citances. We present a novel classification-based method that converts the task into a binary classification problem of distinguishing cited from non-cited pairs of citances and reference sentences. The method models pairs of citances and reference sentences as feature vectors, exploring citation-dependent and citation-independent features based on the semantic similarity between texts and the significance of texts. These vector representations are used to train a binary classifier. For a citance, once the set of reference sentences classified as cited sentences is collected, a heuristic-based filtering strategy is applied to refine the output. The method is evaluated on the CL-SciSumm 2016 datasets and performs well, with competitive results.
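As a concrete illustration of the pair-classification setup, the sketch below builds a single TF-IDF cosine-similarity feature per (citance, reference sentence) pair and trains a linear SVM on cited/non-cited labels; this one feature is only a stand-in for the richer citation-dependent and citation-independent features described above.

```python
# Sketch of reference-scope identification as binary classification over
# (citance, reference sentence) pairs; the single TF-IDF cosine feature is a
# stand-in for the paper's full feature set.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
from sklearn.svm import LinearSVC

def pair_features(citances, ref_sentences, vectorizer):
    """Returns one row per (citance, reference sentence) pair, citance-major,
    each row holding a single cosine-similarity feature."""
    tfidf = vectorizer.transform(list(citances) + list(ref_sentences))
    c_vecs, r_vecs = tfidf[:len(citances)], tfidf[len(citances):]
    return cosine_similarity(c_vecs, r_vecs).reshape(-1, 1)

# Usage (hypothetical data): fit the vectorizer on all sentences, then train.
# vectorizer = TfidfVectorizer().fit(citances + ref_sentences)
# clf = LinearSVC().fit(pair_features(citances, ref_sentences, vectorizer), pair_labels)
# cited_flags = clf.predict(pair_features(new_citances, ref_sentences, vectorizer))
```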


Security and Communication Networks | 2014

Two novel biometric features in keystroke dynamics authentication systems for touch screen devices

Cheng-Jung Tasia; Ting-Yi Chang; Pei-Cheng Cheng; Jyun-Hao Lin


Cross-Language Evaluation Forum | 2004

KIDS's evaluation in medical image retrieval task at ImageCLEF 2004

Pei-Cheng Cheng; Been-Chian Chien; Hao Ren Ke; Wei-Pang Yang


Cross-Language Evaluation Forum | 2004

NCTU-ISU's Evaluation for the User-Centered Search Task at ImageCLEF 2004

Pei-Cheng Cheng; Jen Yuan Yeh; Hao Ren Ke; Been-Chian Chien; Wei-Pang Yang; Ta-Hsu Hsiang


Journal of Convergence Information Technology | 2011

A Novel and Simple Statistical Fusion Method for User Authentication through Keystroke Features

Pei-Cheng Cheng; Ting-Yi Chang; Cheng-Jung Tsai; Jian-Wei Li; Chih-Sheng Wu


CLEF (Working Notes) | 2005

NCTU_DBLAB@ImageCLEFmed 2005: Medical Image Retrieval Task.

Pei-Cheng Cheng; Been-Chian Chien; Hao Ren Ke; Wei-Pang Yang

Collaboration


Dive into Pei-Cheng Cheng's collaborations.

Top Co-Authors

Wei-Pang Yang
National Dong Hwa University

Been-Chian Chien
National University of Tainan

Hao Ren Ke
National Chiao Tung University

Jen Yuan Yeh
National Chiao Tung University

Ting-Yi Chang
National Changhua University of Education

Cheng-Jung Tasia
National Changhua University of Education

Cheng-Jung Tsai
National Changhua University of Education

Chia-Hung Wei
Chien Hsin University of Science and Technology

Jyun-Hao Lin
National Changhua University of Education