Bo-Wen Wang
National Cheng Kung University
Publication
Featured research published by Bo-Wen Wang.
Information Sciences | 2010
Ja-Hwung Su; Bo-Wen Wang; Chin-Yuan Hsiao; Vincent S. Tseng
In recent years, the explosive growth of information has left users confused when making decisions among various kinds of products such as music, movies, and books. As a result, helping a user identify what he or she prefers is a challenging issue. To this end, so-called recommender systems have been proposed to discover users' implicit interests from usage logs. However, existing recommender systems suffer from the problems of cold-start, first-rater, sparsity, and scalability. To alleviate these problems, we propose a novel recommender, namely FRSA (Fusion of Rough-Set and Average-category-rating), which integrates multiple contents and collaborative information to predict users' preferences based on the fusion of Rough-Set and Average-category-rating. Through the integrated mining of multiple contents and collaborative information, the proposed recommendation method can successfully reduce the gap between users' preferences and the automated recommendations. Empirical evaluations reveal that the proposed method, FRSA, associates recommended items with users' interests more accurately than other well-known existing methods.
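The abstract does not spell out how the average-category-rating component is computed. One plausible reading is that a user's rating for an unseen item is predicted from that user's mean rating over items in the same category. A minimal sketch of that idea, assuming a flat item-to-category mapping (all names and data below are illustrative, not taken from the paper):

```python
def average_category_rating(ratings, item_category, user, item):
    """Predict `user`'s rating for `item` as the user's mean rating over
    previously rated items in the same category.
    `ratings` maps (user, item) -> rating; `item_category` maps item -> category.
    Returns None when the user has rated nothing in that category."""
    target_cat = item_category[item]
    scores = [r for (u, i), r in ratings.items()
              if u == user and i != item and item_category[i] == target_cat]
    return sum(scores) / len(scores) if scores else None

# Toy example: alice rated two movies (5 and 3), so an unseen movie
# is predicted at their mean.
ratings = {("alice", "m1"): 5, ("alice", "m2"): 3, ("alice", "b1"): 2}
item_category = {"m1": "movie", "m2": "movie", "m3": "movie", "b1": "book"}
print(average_category_rating(ratings, item_category, "alice", "m3"))  # 4.0
```

FRSA fuses such category-level averages with rough-set-based collaborative information; the fusion step itself is not reconstructed here.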
acm symposium on applied computing | 2007
Vincent S. Tseng; Ja-Hwung Su; Bo-Wen Wang; Yu-Ming Lin
In this paper, we propose a novel web image annotation method, namely FMD (Fused annotation by Mixed model graph and Decision tree), which combines visual features and textual information to conceptualize web images. The FMD approach consists of three main processes: 1) constructing the visual-based model, namely ModelMMG; 2) constructing the textual-based model, namely ModelDT; and 3) fusing ModelMMG and ModelDT into ModelFMD for annotating images. The purpose of the visual-based annotation model is to characterize an image not only by its global content but also by the local content of its composing objects. The textual-based annotation model handles the problems of user-specified keyword dependency and the complex computation caused by the high dimensionality of text features. The experimental results reveal that, through the integration of these two different types of features, the proposed FMD method is very effective for web image annotation in terms of accuracy.
International Journal of Fuzzy Systems | 2010
Ja-Hwung Su; Bo-Wen Wang; Tien-Yu Hsu; Chien-Li Chou; Vincent S. Tseng
Traditional image retrieval aims at bridging visual images and human concepts through visual or textual descriptions. However, reducing the gap between images and users' intentions remains a challenging issue. To this end, a considerable number of studies in the field of multimedia mining have been conducted over the past few decades on how to effectively meet users' requirements for image retrieval, yet some problems remain unsettled. For content-based image retrieval, it is not easy to identify a user's interest using visual descriptions only. For textual-based image retrieval, modern search engines incur the problems of high manual tagging cost and low automated tagging precision. Moreover, even precise image retrieval can lead to unsatisfactory results because human concepts are rough. To capture users' concepts well, we propose a novel approach, namely Intelligent SeMantic Image explorER (iSMIER), which considers the requirements of usability, intelligence, and effectiveness simultaneously. Based on the proposed web image annotation, concept matching, and fuzzy ranking methods, users can obtain the desired images from an image collection easily and effectively. Empirical evaluations show that our annotation models deliver high accuracy for semantic image retrieval.
sensor networks ubiquitous and trustworthy computing | 2008
Vincent S. Tseng; Ja-Hwung Su; Bo-Wen Wang; Chin-Yuan Hsiao; Jay Huang; Hsin-Ho Yeh
Making a decision among a set of items drawn from compound and complex information has become a difficult task for common users. Collaborative filtering has been the mainstay of automatically personalized search in contemporary recommender systems. Nevertheless, reducing the gap between user perception and multimedia contents is still a challenging issue. To bridge users' interests and multimedia items, in this paper we present an intelligent multimedia recommender system that integrates annotation and association mining techniques. In the proposed system, low-level multimedia contents are conceptualized by automated annotation to support rule-based collaborative filtering recommendation. From the discovered relations between user contents and conceptualized multimedia contents, the proposed recommender system can provide a suitable recommendation list that helps users make a decision among a massive number of items.
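The abstract names association mining as one of the integrated techniques but does not detail the algorithm. As a hedged illustration of the general idea only (not the paper's actual method), the sketch below mines pairwise rules a → b with support and confidence thresholds from invented viewing logs:

```python
from itertools import combinations

def rules(transactions, min_support=0.5, min_conf=0.6):
    """Mine pairwise association rules a -> b from item transactions.
    Returns {(a, b): confidence} for rules meeting both thresholds."""
    n = len(transactions)
    counts = {}
    for t in transactions:
        for item in t:
            counts[frozenset([item])] = counts.get(frozenset([item]), 0) + 1
        for pair in combinations(sorted(t), 2):
            counts[frozenset(pair)] = counts.get(frozenset(pair), 0) + 1
    out = {}
    for itemset, c in counts.items():
        if len(itemset) != 2 or c / n < min_support:
            continue  # keep only frequent pairs
        a, b = sorted(itemset)
        for x, y in ((a, b), (b, a)):  # both rule directions
            conf = c / counts[frozenset([x])]
            if conf >= min_conf:
                out[(x, y)] = conf
    return out

# Toy multimedia logs: concepts annotated on items each user viewed.
views = [{"beach", "sunset"}, {"beach", "sunset"}, {"beach", "city"}]
print(rules(views))
```

A rule such as sunset → beach (confidence 1.0 on the toy data) could then feed a rule-based recommendation list as described in the abstract.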
web intelligence | 2009
Ja-Hwung Su; Bo-Wen Wang; Hsin-Ho Yeh; Vincent S. Tseng
The goal of traditional visual or textual-based image retrieval is to satisfy users' queries by effectively associating images with semantic concepts. As a result, perceptual structures of images have attracted researchers' attention in recent studies. However, few past studies have achieved semantic image retrieval by using image annotation techniques. To capture users' ontological intentions, we propose a new approach, namely Intelligent Web Image FetchER (iWIFER), which simultaneously considers the ontological requirements of usability, intelligence, and effectiveness. Based on the proposed visual and textual-based annotation models, image querying becomes easy and effective. Empirical evaluations show that our annotation models deliver accurate results for semantic web image retrieval.
international conference on multimedia and expo | 2008
Vincent S. Tseng; Ja-Hwung Su; Hao-Hua Ku; Bo-Wen Wang
Traditional image retrieval based on visual matching is not effective in multimedia applications. Consequently, modeling high-level human sense for image retrieval has been a challenging issue over the past few years. In fact, the concepts hidden in images play key roles in semantic image retrieval. In this paper, we propose a novel method named intelligent concept-oriented search (ICOS) that captures the high-level concepts in images by utilizing data mining and query decomposition techniques. The contributions of the proposed method are: 1) effective annotation for conceptual objects, 2) association mining for conceptual objects, 3) visual ranking for conceptual objects, and 4) an intelligent search method for enhancing high-level concept image retrieval. Experimental evaluations show that ICOS is very effective and efficient in capturing the implicit high-level concepts for image retrieval.
international conference on technologies and applications of artificial intelligence | 2011
Bo-Wen Wang; Ja-Hwung Su; Chien-Li Chou; Vincent S. Tseng
Video retrieval has been a hot topic due to the prevalence of video capturing devices and media-sharing services such as YouTube. Until now, few past studies have focused on querying videos by images, because the semantic gap between images and videos is not easy to narrow. To this end, in this paper we propose a novel semantic video retrieval system that integrates web image annotation and a concept matching function to bridge images, concepts, and videos. For web image annotation, we exploit the textual and visual information in web images to achieve effective image annotation. For concept matching, we identify concept relations by calculating the similarity between two concepts via WordNet. On the basis of web image annotation and the concept matching function, the proposed system achieves the goals of usability and intelligence in semantic video retrieval. The experimental results reveal that the proposed system successfully captures users' intentions between image concepts and video concepts for semantic video retrieval.
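The concept matching function computes similarity between two concepts via WordNet. One standard WordNet-style measure is Wu-Palmer similarity, 2·depth(LCS) / (depth(a) + depth(b)), where LCS is the lowest common subsumer; whether the paper uses exactly this measure is not stated. The sketch below illustrates it on a tiny hand-built taxonomy rather than WordNet itself:

```python
# Hypothetical toy taxonomy: child -> parent; "entity" is the root.
parents = {"dog": "canine", "wolf": "canine", "canine": "animal",
           "cat": "feline", "feline": "animal", "animal": "entity"}

def path_to_root(node):
    """Return [node, parent, ..., root]."""
    path = [node]
    while path[-1] in parents:
        path.append(parents[path[-1]])
    return path

def depth(node):
    return len(path_to_root(node))  # the root has depth 1

def wup_similarity(a, b):
    """Wu-Palmer similarity: 2*depth(LCS) / (depth(a) + depth(b))."""
    ancestors_a = set(path_to_root(a))
    # First node on b's path to the root that also subsumes a.
    lcs = next(n for n in path_to_root(b) if n in ancestors_a)
    return 2 * depth(lcs) / (depth(a) + depth(b))

print(round(wup_similarity("dog", "wolf"), 2))  # 0.75 (share "canine")
print(round(wup_similarity("dog", "cat"), 2))   # 0.5  (share only "animal")
```

Against real WordNet, the same measure is available off the shelf (e.g. NLTK's `synset.wup_similarity`), which avoids hand-building the taxonomy.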
international conference on innovative computing, information and control | 2008
Vincent S. Tseng; Ja-Hwung Su; Bo-Wen Wang; Chin-Yuan Hsiao
The explosive growth of information leaves people confused when choosing among a huge number of products, such as movies and books. To help people clarify what they want easily, in this study we present an intelligent recommendation approach named RSCF (recommendation by rough-set and collaborative filtering) that integrates collaborative information and content features to predict user preferences on the basis of rough-set theory. The contribution of this paper is that the proposed approach can completely solve the traditional problems occurring in recent studies, such as the cold-start, first-rater, sparsity, and scalability problems. The empirical evaluation results reveal that the proposed approach reduces the gap between users' interests and recommended items more effectively than other existing approaches in terms of recommendation accuracy.
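Rough-set theory, on which RSCF builds, partitions objects into indiscernibility classes (objects identical on the chosen attributes) and approximates a target set from below and above. A minimal, self-contained sketch of the textbook construction (the user data is invented for illustration; this is not RSCF itself):

```python
from collections import defaultdict

def approximations(objects, attrs, target):
    """Lower/upper approximation of `target` under the indiscernibility
    relation induced by `attrs`. `objects` maps id -> attribute dict."""
    blocks = defaultdict(set)
    for oid, desc in objects.items():
        blocks[tuple(desc[a] for a in attrs)].add(oid)
    lower, upper = set(), set()
    for block in blocks.values():
        if block <= target:   # entirely inside target: certainly members
            lower |= block
        if block & target:    # overlaps target: possibly members
            upper |= block
    return lower, upper

# Toy user profiles and the set of users who liked some item.
users = {
    "u1": {"genre": "rock", "age": "young"},
    "u2": {"genre": "rock", "age": "young"},
    "u3": {"genre": "jazz", "age": "old"},
}
liked = {"u1", "u3"}
lower, upper = approximations(users, ["genre", "age"], liked)
print(lower, upper)  # {'u3'} and {'u1', 'u2', 'u3'}
```

Since u1 and u2 are indiscernible but only u1 liked the item, their block falls in the upper approximation only; such boundary regions are where a rough-set recommender must combine content features with collaborative evidence.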
International Journal of Innovative Computing Information and Control | 2012
Bo-Wen Wang; Vincent S. Tseng
web intelligence/iat workshops | 2009
Ja-Hwung Su; Bo-Wen Wang; Hsin-Ho Yeh; Vincent S. Tseng