Publications

Featured research published by Jenq-Haur Wang.


International ACM SIGIR Conference on Research and Development in Information Retrieval | 2004

Translating unknown queries with web corpora for cross-language information retrieval

Pu-Jen Cheng; Jei Wen Teng; Ruei Cheng Chen; Jenq-Haur Wang; Wen Hsiang Lu; Lee-Feng Chien

It is crucial for cross-language information retrieval (CLIR) systems to deal with the translation of unknown queries, since real queries are often short. The purpose of this paper is to investigate the feasibility of exploiting the Web as the corpus source to translate unknown queries for CLIR. We propose an online translation approach that determines effective translations for unknown query terms by mining bilingual search-result pages obtained from Web search engines. This approach can alleviate the problem of the lack of large bilingual corpora, translate many unknown query terms, provide flexible query specifications, and extract semantically close translations to benefit CLIR tasks, especially cross-language Web search.
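
To make the search-result-mining idea concrete, here is a minimal sketch: submit the unknown source-language term to a Web search engine, keep the mixed-language snippets that mention it, and rank target-language phrases by how often they co-occur with it. The snippet source, candidate extraction, and scoring below are illustrative assumptions, not the paper's actual translation-extraction and ranking method.

```python
# Illustrative sketch only: rank target-language translation candidates for an
# unknown query term by co-occurrence in bilingual search-result snippets.
import re
from collections import Counter

def candidate_translations(query, snippets, top_k=5):
    """Return the top_k English phrases that co-occur with `query` in the snippets."""
    counts = Counter()
    for snippet in snippets:
        if query not in snippet:          # keep only snippets mentioning the query term
            continue
        # crude candidate extraction: capitalized English word sequences
        for phrase in re.findall(r"[A-Z][A-Za-z]+(?: [A-Z][A-Za-z]+)*", snippet):
            counts[phrase] += 1
    return counts.most_common(top_k)

# Hypothetical snippets returned by a Web search for an unknown Chinese query term.
snippets = [
    "資訊檢索 (Information Retrieval) 課程介紹 ...",
    "Information Retrieval 與 資訊檢索 系統之評估 ...",
]
print(candidate_translations("資訊檢索", snippets))   # [('Information Retrieval', 2)]
```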


ACM/IEEE Joint Conference on Digital Libraries | 2005

Resolving the unencoded character problem for Chinese digital libraries

Derming Juang; Jenq-Haur Wang; Chen-Yu Lai; Ching-Chun Hsieh; Lee-Feng Chien; Jan-Ming Ho

Constructing a Chinese digital library, especially for historical article archiving, is often hindered by the small character sets supported by current computer systems. This paper aims to resolve the unencoded character problem with a practical, composite approach for Chinese digital libraries. The proposed approach consists of a glyph expression model, a glyph structure database, and supporting tools. With this approach, the following problems can be resolved. First, the extensibility of Chinese characters is preserved. Second, unencoded characters become as easy to generate, input, display, and search as encoded ones. Third, the approach is compatible with the encoding schemes that most computers use. It has been adopted by organizations and projects in various application domains, including archeology, linguistics, ancient texts, calligraphy and paintings, and stone and bronze rubbings. For example, in Academia Sinica, a very large full-text database of ancient texts called Scripta Sinica has been created using this approach. The Union Catalog of the National Digital Archives Project (NDAP) used it to handle the unencoded characters encountered when merging the metadata of 12 thematic domains from various organizations. Also, the Bronze Inscriptions Research Team (BIRT) of Academia Sinica added 3,459 bronze inscriptions, which is very helpful for education and research in historical linguistics.
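
As a purely illustrative sketch of what a structure-based glyph record might look like (the paper's actual glyph expression model, database schema, and tools are not reproduced here), an unencoded character can be described as a composition of encoded components, which makes it storable and searchable by component:

```python
# Illustrative sketch only: a hypothetical glyph record keyed by a local ID,
# describing an unencoded character via an Ideographic Description Sequence
# and its components. This is not the paper's glyph structure database schema.
glyph_db = {
    "gly-000001": {
        "ids": "⿰金昜",             # left component 金, right component 昜
        "components": ["金", "昜"],
        "note": "placeholder record for an unencoded character",
    },
}

def find_by_component(component):
    """Return the IDs of all glyph records containing the given component."""
    return [gid for gid, rec in glyph_db.items() if component in rec["components"]]

print(find_by_component("金"))      # -> ['gly-000001']
```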


Sensors | 2012

A Vision-Based Driver Nighttime Assistance and Surveillance System Based on Intelligent Image Sensing Techniques and a Heterogamous Dual-Core Embedded System Architecture

Yen-Lin Chen; Hsin-Han Chiang; Chuan-Yen Chiang; Chuan-Ming Liu; Shyan-Ming Yuan; Jenq-Haur Wang

This study proposes a vision-based intelligent nighttime driver assistance and surveillance system (VIDASS system) implemented by a set of embedded software components and modules, and integrates these modules into a component-based system framework on an embedded heterogeneous dual-core platform. To this end, the study develops and implements computer vision and sensing techniques for nighttime vehicle detection, collision warning determination, and traffic event recording. The proposed system processes the road-scene frames in front of the host car, captured by CCD sensors mounted on the host vehicle. These vision-based sensing and processing technologies are integrated and implemented on an ARM-DSP heterogeneous dual-core embedded platform. Peripheral devices, including image grabbing devices, communication modules, and other in-vehicle control devices, are also integrated to form an in-vehicle embedded vision-based nighttime driver assistance and surveillance system.
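
Nighttime vehicle detection of this kind typically begins by segmenting bright objects (headlights and taillights) from the dark road scene. The sketch below shows only that first step, using plain thresholding and connected-component analysis; the threshold, size filter, and everything downstream (lamp pairing, collision warning) are assumptions, not the paper's actual pipeline.

```python
# Illustrative sketch only: find bright blobs (candidate head/tail lights) in a
# nighttime road-scene frame via thresholding and connected components.
import numpy as np
import cv2

def bright_objects(gray_frame, thresh=200, min_area=30):
    """Return bounding boxes (x, y, w, h) of bright blobs in a grayscale frame."""
    _, binary = cv2.threshold(gray_frame, thresh, 255, cv2.THRESH_BINARY)
    n, labels, stats, _ = cv2.connectedComponentsWithStats(binary)
    boxes = []
    for i in range(1, n):               # label 0 is the background
        x, y, w, h, area = stats[i]
        if area >= min_area:
            boxes.append((x, y, w, h))
    return boxes

frame = (np.random.rand(240, 320) * 255).astype(np.uint8)   # stand-in for a CCD frame
print(len(bright_objects(frame)))
```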


Entropy | 2017

A Quantized Kernel Learning Algorithm Using a Minimum Kernel Risk-Sensitive Loss Criterion and Bilateral Gradient Technique

Xiong Luo; Jing Deng; Weiping Wang; Jenq-Haur Wang; Wenbing Zhao

Recently, inspired by correntropy, the kernel risk-sensitive loss (KRSL) has emerged as a novel nonlinear similarity measure defined in kernel space that achieves better computing performance. Applying the KRSL to adaptive filtering yields the corresponding minimum kernel risk-sensitive loss (MKRSL) algorithm. However, MKRSL, like other traditional kernel adaptive filter (KAF) methods, generates a growing radial basis function (RBF) network. In response to that limitation, this article uses an online vector quantization (VQ) technique to propose a novel KAF algorithm, named quantized MKRSL (QMKRSL), that curbs the growth of the RBF network structure. Compared with other quantized methods, e.g., quantized kernel least mean square (QKLMS) and quantized kernel maximum correntropy (QKMC), the efficient performance surface makes QMKRSL converge faster and filter more accurately while maintaining robustness to outliers. Moreover, considering that QMKRSL with the traditional gradient descent method may fail to make full use of the hidden information between the input and output spaces, we also propose an intensified QMKRSL using a bilateral gradient technique, named QMKRSL_BG, to further improve filtering accuracy. Short-term chaotic time-series prediction experiments demonstrate the satisfactory performance of our algorithms.
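
For context, the kernel risk-sensitive loss minimized by the MKRSL criterion is commonly written as follows in the KRSL literature (quoted here for reference, not taken from this paper), with Gaussian kernel \(\kappa_{\sigma}\) and risk-sensitive parameter \(\lambda > 0\):

\[
\mathcal{L}_{\lambda}(X,Y) \;=\; \frac{1}{\lambda}\,\mathbb{E}\!\left[\exp\bigl(\lambda\,(1-\kappa_{\sigma}(X-Y))\bigr)\right],
\qquad
\kappa_{\sigma}(e) \;=\; \exp\!\left(-\frac{e^{2}}{2\sigma^{2}}\right)
\]

In quantized KAF methods of this kind, a new input is admitted as a new RBF center only if its distance to the closest existing center exceeds a quantization threshold; otherwise the closest center's coefficient is updated in place, which is how the online VQ step curbs network growth.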


Sensors | 2012

An Intelligent Knowledge-Based and Customizable Home Care System Framework with Ubiquitous Patient Monitoring and Alerting Techniques

Yen-Lin Chen; Hsin-Han Chiang; Chao-Wei Yu; Chuan-Yen Chiang; Chuan-Ming Liu; Jenq-Haur Wang

This study develops and integrates an efficient knowledge-based system and a component-based framework to design an intelligent and flexible home health care system. The proposed knowledge-based system integrates an efficient rule-based reasoning model with flexible knowledge rules for rapidly determining the necessary physiological and medication treatment procedures based on information from software modules, video camera sensors, communication devices, and physiological sensors. The knowledge-based system offers high flexibility for improving and extending the system to meet new patient and caregiver monitoring demands by updating the knowledge rules in the inference mechanism. All of the proposed functional components are reusable, configurable, and extensible for system developers. The experimental results demonstrate that the proposed intelligent home care system meets the extensibility, customizability, and configurability demands of ubiquitous healthcare systems under various rehabilitation and nursing conditions for different patients and caregivers.
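
A rule-based monitoring loop of this kind can be pictured as a set of condition-action rules evaluated against incoming sensor readings, where extending the system amounts to adding rules. The rule format, thresholds, and alert actions below are assumptions for illustration, not the paper's knowledge base or inference engine.

```python
# Illustrative sketch only: a tiny rule-based monitor mapping physiological
# readings to alert actions; rules can be appended without changing the loop.
rules = [
    {"if": lambda r: r["heart_rate"] > 120, "then": "notify caregiver: tachycardia"},
    {"if": lambda r: r["spo2"] < 90,        "then": "raise alarm: low blood oxygen"},
    {"if": lambda r: r["motion"] == "fall", "then": "raise alarm: possible fall detected"},
]

def evaluate(reading):
    """Return the actions of all rules whose conditions hold for this reading."""
    return [rule["then"] for rule in rules if rule["if"](reading)]

reading = {"heart_rate": 130, "spo2": 95, "motion": "walking"}
print(evaluate(reading))   # -> ['notify caregiver: tachycardia']
```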


Web Intelligence | 2007

Finding Event-Relevant Content from the Web Using a Near-Duplicate Detection Approach

Hung-Chi Chang; Jenq-Haur Wang; Chih-Yi Chiu

In online resources such as news sites and weblogs, authors often extract passages from, embed, and comment on existing articles related to a popular event. It is therefore useful to check whether two or more articles share common parts for further analysis, such as co-citation analysis and search result improvement. If articles do have parts in common, we say the content of such articles is event-relevant. Conventional text classification methods classify a complete document into categories, but they cannot represent the semantics precisely or extract meaningful event-relevant content. To resolve these problems, we propose a near-duplicate detection approach for finding event-relevant content in Web documents. The efficiency of the approach and the proposed duplicate set generation algorithms make it suitable for identifying event-relevant content. The experimental results demonstrate the potential of the proposed approach for use on weblogs.


Computer Software and Applications Conference | 2002

Enhanced intranet management in a DHCP-enabled environment

Jenq-Haur Wang; Tzao-Lin Lee

The DHCP (Dynamic Host Configuration Protocol) is widely deployed for resource allocation and intranet management. However, the DHCP mechanism is not mandatory: the DHCP server can neither force DHCP clients to release their leases nor enforce cooperation from externally configured hosts that are DHCP-unaware. Although new DHCP options such as the DHCP reconfigure extension have been proposed, the basic problems inherent in the DHCP mechanism cannot be solved without first strengthening its operations. In this paper, a DHCP-based infrastructure for intranet management is proposed that combines the resource allocation functions of the DHCP server with the packet filtering features of MAC (Medium Access Control) bridges such as Ethernet switches and wireless access points. DHCP clients and DHCP-unaware hosts that do not abide by the DHCP mechanism or the management policy are denied network access by the MAC bridges. Resource allocation and access control can thus be integrated, and local configuration conflicts can be reduced to a minimum.
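
The core idea of tying resource allocation to access control can be illustrated as follows: hosts holding a valid DHCP lease, plus explicitly registered DHCP-unaware hosts, form a MAC allow list that the bridges enforce, and everything else is dropped. The lease-table format and enforcement hook below are assumptions for illustration, not the paper's actual implementation.

```python
# Illustrative sketch only: derive a MAC allow list from the current DHCP lease
# table plus registered static hosts, as a bridge-level filter would consume it.
import time

leases = {   # MAC -> lease expiry time (epoch seconds), as issued by the DHCP server
    "00:11:22:33:44:55": time.time() + 3600,
    "66:77:88:99:aa:bb": time.time() - 60,      # expired lease
}
static_hosts = {"de:ad:be:ef:00:01"}             # registered DHCP-unaware hosts

def allowed_macs(now=None):
    """MACs permitted to forward traffic: valid leases plus registered static hosts."""
    now = time.time() if now is None else now
    valid = {mac for mac, expiry in leases.items() if expiry > now}
    return valid | static_hosts

def bridge_forwards(src_mac):
    """Decision a MAC bridge (switch or access point) applies to a frame's source MAC."""
    return src_mac in allowed_macs()

print(bridge_forwards("00:11:22:33:44:55"))   # True
print(bridge_forwards("66:77:88:99:aa:bb"))   # False: lease expired
```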


Asia Information Retrieval Symposium | 2009

Exploiting Sentence-Level Features for Near-Duplicate Document Detection

Jenq-Haur Wang; Hung-Chi Chang

Digital documents are easy to copy, so effectively detecting possible near-duplicate copies is critical in Web search. Conventional copy detection approaches such as document fingerprinting and bag-of-words similarity target different levels of granularity in document features, from word n-grams to whole documents. In this paper, we focus on the mutually inclusive type of near-duplicates, where only partial overlap among documents makes them similar. We propose using a simple and compact sentence-level feature, the sequence of sentence lengths, for near-duplicate copy detection. Various configurations of sentence-level and word-level algorithms are evaluated. The experimental results show that the sentence-level algorithms achieve higher efficiency with comparable precision and recall.
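
The feature itself is easy to illustrate: each document is reduced to its sequence of sentence lengths, and documents are compared on those sequences. The sketch below uses a longest-common-subsequence ratio as the comparison, which is one plausible choice assumed here for illustration, not necessarily one of the configurations evaluated in the paper.

```python
# Illustrative sketch only: compare documents via their sequences of sentence
# lengths, scored by a longest common subsequence (LCS) ratio.
import re

def sentence_lengths(text):
    """Sequence of sentence lengths (in words) for a document."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    return [len(s.split()) for s in sentences]

def lcs_length(a, b):
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            dp[i + 1][j + 1] = dp[i][j] + 1 if x == y else max(dp[i][j + 1], dp[i + 1][j])
    return dp[-1][-1]

def near_duplicate_score(doc_a, doc_b):
    a, b = sentence_lengths(doc_a), sentence_lengths(doc_b)
    return lcs_length(a, b) / max(len(a), len(b), 1)

original = "Digital documents are easy to copy. How to detect near-duplicate copies is critical in Web search."
partial  = "Digital documents are easy to copy. This sentence was added by another author."
print(near_duplicate_score(original, partial))   # 0.5: one of the two sentences matches
```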


International Journal on Digital Libraries | 2004

Toward Web mining of cross-language query translations in digital libraries

Jenq-Haur Wang; Wen Hsiang Lu; Lee-Feng Chien

This paper proposes an effective query-translation approach that enables a cross-language information retrieval (CLIR) service to be more easily supported in digital library systems that contain only monolingual content. A query-translation engine called LiveTrans processes the translation requests of cross-lingual queries from connected digital library systems. To automatically extract translations not covered by standard dictionaries, the engine is built on a novel integration of dictionary resources and Web mining approaches, including the anchor-text and search-result methods. The engine exploits a broad range of multilingual Web resources as live bilingual corpora to alleviate translation difficulties, and it is particularly effective for extracting translation equivalents of query terms containing proper names or new terminology. The obtained results show the feasibility of and great potential for creating English-Chinese CLIR services in existing digital libraries, as well as new applications in cross-language Web search, although difficulties remain that need further investigation.
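
The anchor-text method, one of the two Web mining approaches mentioned above, can be sketched roughly as follows: pages in different languages often link to the same URL, so anchor texts pointing at common targets are treated as translation candidates for each other and ranked by link evidence. The link records and scoring below are assumptions for illustration, not LiveTrans's actual anchor-text model.

```python
# Illustrative sketch only: treat anchor texts in different languages that point
# to the same URL as translation candidates, scored by the number of harvested
# anchor records that share a target URL with the query.
from collections import defaultdict

# hypothetical (anchor_text, language, target_url) triples harvested from crawled pages
anchors = [
    ("National Palace Museum", "en", "http://www.npm.gov.tw/"),
    ("故宮博物院",              "zh", "http://www.npm.gov.tw/"),
    ("National Palace Museum", "en", "http://example.org/museums"),
    ("故宮博物院",              "zh", "http://example.org/museums"),
]

def translation_candidates(query, query_lang="zh", target_lang="en"):
    """Rank target-language anchor texts that co-link with `query` to common URLs."""
    urls_with_query = {u for t, l, u in anchors if t == query and l == query_lang}
    scores = defaultdict(int)
    for text, lang, url in anchors:
        if lang == target_lang and url in urls_with_query:
            scores[text] += 1
    return sorted(scores.items(), key=lambda kv: -kv[1])

print(translation_candidates("故宮博物院"))   # [('National Palace Museum', 2)]
```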


International Conference on Asian Digital Libraries | 2007

Organizing news archives by near-duplicate copy detection in digital libraries

Hung-Chi Chang; Jenq-Haur Wang

Digital libraries contain huge numbers of documents, and effectively organizing them so that people can easily browse or reference them is a challenging task. Existing classification methods and chronological or geographical ordering provide only partial views of news articles, and the relationships among articles might not be easily grasped. In this paper, we propose a near-duplicate copy detection approach to organizing news archives in digital libraries. Conventional copy detection methods use word-level features, which can be time-consuming to compute and are not robust to term substitutions. Instead, we propose a sentence-level, statistics-based approach to detecting near-duplicate documents that is language-independent, simple, and effective. It is orthogonal to, and can be used to complement, word-based approaches, and it is insensitive to the actual page layout of articles. The experimental results show the high efficiency and good accuracy of the proposed approach in detecting near-duplicates in news archives.

Collaboration

Top co-authors of Jenq-Haur Wang and their affiliations:

Yen-Lin Chen, National Taipei University of Technology
Chuan-Ming Liu, National Taipei University of Technology
Wen Hsiang Lu, National Cheng Kung University
Tzao-Lin Lee, National Taiwan University
Wen-Yew Liang, National Taipei University of Technology
Weiping Wang, University of Science and Technology Beijing
Xiong Luo, University of Science and Technology Beijing
Che Wun Chiou, Chien Hsin University of Science and Technology