Ouen Pinngern
King Mongkut's Institute of Technology Ladkrabang
Publications
Featured research published by Ouen Pinngern.
Expert Systems With Applications | 2012
Sombut Foithong; Ouen Pinngern; Boonwat Attachoo
In this paper, we introduce a novel feature selection method based on a hybrid (filter-wrapper) model. We first develop a filter that uses the mutual information criterion to select a candidate feature set without requiring a user-defined parameter. Subsequently, to reduce computational cost and avoid the local maxima encountered by wrapper search, the wrapper stage searches in the space of a superreduct selected from the candidate feature set. Finally, the wrapper determines the feature set that best suits the learning algorithm. The efficiency and effectiveness of our technique are demonstrated through extensive comparison with other representative methods. Our approach shows excellent performance, not only in classification accuracy but also in the number of features selected.
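The filter stage of such a hybrid method can be sketched as follows. This is a minimal illustration of mutual-information feature ranking for discrete data, not the authors' implementation; the function names and the idea of returning a ranked candidate list are illustrative assumptions:

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """Estimate I(X;Y) in bits for two equal-length discrete sequences."""
    n = len(xs)
    px, py = Counter(xs), Counter(ys)
    pxy = Counter(zip(xs, ys))
    mi = 0.0
    for (x, y), c in pxy.items():
        p_xy = c / n
        mi += p_xy * math.log2(p_xy / ((px[x] / n) * (py[y] / n)))
    return mi

def filter_stage(feature_columns, labels):
    """Rank features by MI with the class label; return feature indices
    in decreasing order of relevance (the candidate set that a wrapper
    stage would then search over)."""
    scores = [(mutual_information(col, labels), i)
              for i, col in enumerate(feature_columns)]
    return [i for _, i in sorted(scores, reverse=True)]
```

A perfectly class-aligned feature gets 1 bit of MI on balanced binary labels, while an independent feature scores near zero, so the ranking places relevant features first.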
genetic and evolutionary computation conference | 2007
Kreangsak Tamee; Larry Bull; Ouen Pinngern
This paper presents a novel approach to clustering using an accuracy-based Learning Classifier System. Our approach achieves this by exploiting the generalization mechanisms inherent to such systems. The purpose of the work is to develop an approach to learning rules which accurately describe clusters without prior assumptions as to their number within a given dataset. Favourable comparisons to the commonly used k-means algorithm are demonstrated on a number of synthetic datasets.
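For context, a minimal version of the k-means baseline the paper compares against might look like this. It is a sketch only; the iteration count and random seeding are illustrative choices, not details from the paper:

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Minimal k-means: alternate between assigning each point to its
    nearest centroid and recomputing centroids as cluster means."""
    rng = random.Random(seed)
    centroids = [list(p) for p in rng.sample(points, k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda c: sum((a - b) ** 2
                                      for a, b in zip(p, centroids[c])))
            clusters[j].append(p)
        for j, cl in enumerate(clusters):
            if cl:  # keep the old centroid if a cluster empties out
                centroids[j] = [sum(d) / len(cl) for d in zip(*cl)]
    return centroids, clusters
```

Note that k must be supplied up front, which is exactly the prior assumption the LCS-based approach avoids.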
IEEE Transactions on Neural Networks | 2009
Pornthep Rojanavasu; Hai Huong Dam; Hussein A. Abbass; Chris Lokan; Ouen Pinngern
Learning classifier systems (LCSs) are rule-based inductive learning systems that have been widely used for supervised and reinforcement learning over the last few years. This paper employs UCS, a supervised learning classifier system introduced in 2003 for classification tasks in data mining. We present an adaptive framework that places UCS on top of a self-organizing map (SOM) neural network. The overall classification problem is decomposed adaptively and in real time by the SOM into subproblems, each of which is handled by a separate UCS. The framework is also tested with UCS replaced by a feedforward artificial neural network (ANN). Experiments on several synthetic and real data sets, including a very large real data set, show that classification accuracy in the proposed distributed environment is as good as or better than in the nondistributed environment, and execution is faster. In general, each UCS attached to a cell in the SOM has a much smaller population size than a single UCS working on the overall problem; since each data instance is exposed to a smaller population than in the single-population approach, the throughput of the overall system increases. The experiments show that the proposed framework can decompose a problem adaptively into subproblems, maintaining or improving accuracy while increasing speed.
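The SOM-based decomposition can be sketched roughly as follows. This is an illustrative toy with a 1-D map and hypothetical parameters (`n_cells`, `epochs`, `lr`); the per-cell UCS is abstracted to simply collecting each cell's sub-dataset, on which a separate classifier would then be trained:

```python
import math
import random

def bmu(protos, x):
    """Index of the best-matching unit (closest prototype) for instance x."""
    return min(range(len(protos)),
               key=lambda j: sum((a - b) ** 2 for a, b in zip(protos[j], x)))

def train_som(data, n_cells=4, epochs=50, lr=0.5):
    """Train a tiny 1-D SOM: each cell holds a prototype vector pulled
    toward the data, with a neighborhood that decays with map distance."""
    random.seed(0)
    protos = [list(random.choice(data)) for _ in range(n_cells)]
    for epoch in range(epochs):
        rate = lr * (1 - epoch / epochs)  # linearly decaying learning rate
        for x in data:
            b = bmu(protos, x)
            for j, p in enumerate(protos):
                influence = rate * math.exp(-abs(j - b))
                for d in range(len(p)):
                    p[d] += influence * (x[d] - p[d])
    return protos

def decompose(protos, data, labels):
    """Route each instance to its BMU cell; each cell's sub-dataset would
    feed its own local classifier (UCS in the paper)."""
    cells = {j: [] for j in range(len(protos))}
    for x, y in zip(data, labels):
        cells[bmu(protos, x)].append((x, y))
    return cells
```

Every instance lands in exactly one cell, so the sub-datasets partition the training data and each local learner sees a smaller population, which is the source of the throughput gain described above.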
intelligent systems design and applications | 2006
Kreangsak Tamee; Larry Bull; Ouen Pinngern
This paper presents a novel approach to clustering using a simple accuracy-based learning classifier system. Our approach achieves this by exploiting the evolutionary computing and reinforcement learning techniques inherent to such systems. The purpose of the work is to develop an approach to learning rules which accurately describe clusters without prior assumptions as to their number within a given dataset. Favourable comparisons to the commonly used k-means algorithm are demonstrated on a number of datasets.
international conference on control, automation, robotics and vision | 2004
Chaiwat Tiraweerakhajohn; Ouen Pinngern
In this paper we present an approach that tries to alleviate the main drawbacks of item-based collaborative filtering (CF): sparsity and the first-rater problem. Item content is combined with item-based CF to find similar items, and a combined similarity measure is used to generate predictions. The first step uses association rule mining to discover new similarity relationships among attributes. The second step exploits this similarity when computing similar items. Finally, the new similarity and rating similarity measures are combined to find neighbor items in the item-based CF algorithm and to generate rating predictions. The experiments show that this novel approach achieves better prediction accuracy than the traditional item-based CF algorithm.
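The combination step might be sketched like this. The linear blend and the `alpha` weight are illustrative assumptions, not the paper's exact formula:

```python
import math

def cosine(u, v):
    """Cosine similarity between two rating vectors."""
    num = sum(a * b for a, b in zip(u, v))
    den = (math.sqrt(sum(a * a for a in u)) *
           math.sqrt(sum(b * b for b in v)))
    return num / den if den else 0.0

def combined_similarity(rating_sim, content_sim, alpha=0.5):
    """Blend rating-based and content-based similarity; `alpha` is a
    hypothetical mixing weight chosen for illustration."""
    return alpha * rating_sim + (1 - alpha) * content_sim

def predict(user_ratings, sims, item):
    """Weighted-average prediction for `item` from the user's rated
    neighbor items, using the combined similarities in `sims`."""
    num = sum(sims[(item, j)] * r for j, r in user_ratings.items())
    den = sum(abs(sims[(item, j)]) for j in user_ratings)
    return num / den if den else 0.0
```

Because the content-based term is nonzero even for items with few co-ratings, the blended similarity can still find neighbors where the pure rating similarity is undefined, which is how the sparsity and first-rater issues are mitigated.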
pacific rim international conference on artificial intelligence | 2008
Thach Huy Nguyen; Sombut Foitong; Ouen Pinngern
The class imbalance problem has been recognized as a crucial problem in machine learning and data mining. Learning systems tend to be biased towards the majority class and thus perform poorly in classifying minority class instances. This paper analyzes the imbalance problem in the accuracy-based learning classifier system XCS. XCS has shown excellent performance on some data mining tasks but, like other classifiers, performs poorly on imbalanced data. We analyze XCS's behavior at various imbalance levels and propose appropriate parameter tuning to improve the performance of the system. In particular, XCS is adapted to eliminate over-general classifiers and protect accurate classifiers of the minority class. Experimental results on Boolean function problems show that, with the proposed adaptations, XCS is robust to class imbalance.
pacific rim international conference on artificial intelligence | 2008
Kreangsak Tamee; Pornthep Rojanavasu; Sonchai Udomthanapong; Ouen Pinngern
Learning Classifier Systems (LCS) have previously been shown to have applications in intrusion detection. This paper extends work in the area by applying a Self-Organizing Map (SOM) to create a new input string via a 2-bit encoding based on the degree of deviation from normal behaviour. The performance of the system is investigated on an FTP-only dataset. It is shown that the proposed system performs significantly better than the conventional XCS, a modified XCS, and twelve machine learning algorithms.
knowledge discovery and data mining | 2009
Sombut Foitong; Pornthep Rojanavasu; Boonwat Attachoo; Ouen Pinngern
Mutual Information (MI) is a good measure of the relevance between input and output features and has been used for ranking features in several feature selection methods. These methods cannot estimate optimal feature subsets by themselves, but depend on user-defined criteria. In this paper, we propose estimating optimal feature subsets by using rough sets to refine the candidate feature subset obtained from the MI feature selector. The experiments show that the method handles nonlinear problems and situations in which two or more combined features are jointly dominant, while maintaining or improving classification accuracy.
IAENG Transactions on Engineering Technologies Volume I: Special Edition of the International MultiConference of Engineers and Computer Scientists 2008 | 2009
Thach Nguyen Huy; Sombut Foitong; Sornchai Udomthanapong; Ouen Pinngern
This paper analyzes the effects of the distance between classes and the training dataset size on the XCS classifier system with imbalanced datasets. Our purpose is to answer whether the loss of performance incurred by a classifier faced with class imbalance stems from the class imbalance per se or can be explained in some other way. Experiments on 250 artificial imbalanced datasets show that XCS can perform well in some imbalanced domains if the training dataset is large enough and the distance between classes is appropriate. Thus, it does not seem fair to attribute the performance loss of XCS directly to imbalanced data. Through this research, we also learn what kinds of datasets are suitable for training XCS, and that dealing with class imbalance alone will not always improve classifier performance.
ieee conference on cybernetics and intelligent systems | 2006
Kietikul Jearanaitanakij; Ouen Pinngern
We present an analysis of the minimum number of hidden units required by an artificial neural network to recognize English capital letters. The letter font used as a case study is the system font. To obtain the minimum number of hidden units, the number of input features must first be minimized. First, we apply our heuristic for pruning unnecessary features from the data set. The small number of remaining features gives the artificial neural network a correspondingly small number of input units, since each feature has a one-to-one mapping onto an input unit. Next, hidden units are pruned from the network using the hidden-unit pruning heuristic. Both pruning heuristics are based on the notion of information gain, and they efficiently prune unnecessary features and hidden units from the network. The experimental results show the minimum number of hidden units required to train the artificial neural network to recognize English capital letters in the system font. In addition, the classification accuracy of the resulting network is high. As a result, the final artificial neural network is compact and reliable.
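The information-gain criterion underlying such pruning heuristics can be illustrated as follows. The `threshold` and function names are hypothetical, and this sketch covers only the feature-pruning side, not the hidden-unit pruning:

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy H(S) in bits of a sequence of class labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n)
                for c in Counter(labels).values())

def information_gain(feature, labels):
    """IG(S, A) = H(S) - sum_v |S_v|/|S| * H(S_v), where S_v is the
    subset of instances on which the feature takes value v."""
    n = len(labels)
    groups = {}
    for f, y in zip(feature, labels):
        groups.setdefault(f, []).append(y)
    remainder = sum(len(g) / n * entropy(g) for g in groups.values())
    return entropy(labels) - remainder

def prune_features(feature_columns, labels, threshold=0.01):
    """Keep only feature indices whose gain exceeds a (hypothetical)
    threshold; low-gain features are pruned from the input layer."""
    return [i for i, col in enumerate(feature_columns)
            if information_gain(col, labels) > threshold]
```

A feature that fully determines the class has gain equal to the label entropy, while an uninformative feature has gain near zero, so thresholding the gain removes inputs that contribute little to recognition.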