Kietikul Jearanaitanakij
King Mongkut's Institute of Technology Ladkrabang
Publications
Featured research published by Kietikul Jearanaitanakij.
IEEE Conference on Cybernetics and Intelligent Systems | 2006
Kietikul Jearanaitanakij; Ouen Pinngern
We present an analysis of the minimum number of hidden units that an artificial neural network requires to recognize English capital letters. The letter font that we use as a case study is the system font. In order to obtain the minimum number of hidden units, the number of input features has to be minimized first. We therefore begin by applying our heuristic for pruning unnecessary features from the data set; because each feature maps one-to-one onto an input unit, the small set of remaining features yields an equally small number of input units. Next, hidden units are pruned from the network by the hidden-unit pruning heuristic. Both pruning heuristics are based on the notion of information gain, and they efficiently remove unnecessary features and hidden units from the network. The experimental results show the minimum number of hidden units required to train the artificial neural network to recognize English capital letters in the system font. In addition, the classification accuracy of the resulting network is high. As a result, the final artificial neural network is remarkably compact and reliable.
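The abstract does not spell out either heuristic in detail, but a minimal sketch of information-gain-based feature ranking, which could drive the feature-pruning step it describes, might look as follows; the helper names (`entropy`, `information_gain`, `rank_features`) are illustrative, not the authors' code, and discrete feature values are assumed.

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a sequence of class labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(feature_values, labels):
    """Reduction in label entropy after splitting on one feature."""
    n = len(labels)
    remainder = 0.0
    for v in set(feature_values):
        subset = [l for f, l in zip(feature_values, labels) if f == v]
        remainder += (len(subset) / n) * entropy(subset)
    return entropy(labels) - remainder

def rank_features(rows, labels):
    """Indices of features sorted from most to least informative."""
    gains = [information_gain([r[i] for r in rows], labels)
             for i in range(len(rows[0]))]
    return sorted(range(len(rows[0])), key=lambda i: gains[i], reverse=True)
```

Features at the bottom of this ranking (those whose gain falls below some threshold) would be the natural candidates for removal before the input layer is sized.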
Journal of Circuits, Systems, and Computers | 2008
Kietikul Jearanaitanakij; Ouen Pinngern
Having more hidden units than necessary can produce a neural network with poor generalization. This paper proposes a new algorithm for pruning unnecessary hidden units from single-hidden-layer feedforward neural networks, resulting in a Spartan network. Our approach is simple and easy to implement, yet produces very good results. The idea is to train the network until it begins to lose its generalization. The algorithm then measures the sensitivity of each hidden unit and automatically prunes the most irrelevant one. We define this sensitivity as the absolute difference between the desired output and the output of the pruned network. Unlike other pruning methods, our algorithm is distinct in calculating the sensitivity from the validation set, instead of the training set, without increasing the asymptotic time complexity of the back-propagation algorithm. In addition, for classification problems, we point out that the sensitivities of some well-known pruning algorithms may still underestimate the irrelevance of a hidden unit even when the validation set is used to measure the sensitivity. We resolve this problem by taking the number of misclassified patterns as the main criterion. The Spartan simplicity algorithm is applied to three artificial problems and seven standard benchmarks. On most problems, the algorithm produces a compact network with high generalization ability in comparison with other pruning algorithms.
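As a rough illustration of the misclassification-based sensitivity described above, the sketch below masks one hidden unit of a trained single-hidden-layer network at a time and counts validation errors; the masked `forward` pass, the tanh activation, and the weight names `W1, b1, W2, b2` are hypothetical stand-ins, not the paper's implementation.

```python
import numpy as np

def forward(W1, b1, W2, b2, X, mask):
    """Forward pass with a 0/1 mask that switches hidden units on or off."""
    h = np.tanh(X @ W1 + b1) * mask   # masked hidden activations
    return h @ W2 + b2                # linear output layer

def misclassified(W1, b1, W2, b2, X_val, y_val, mask):
    """Number of validation patterns the masked network gets wrong."""
    preds = forward(W1, b1, W2, b2, X_val, mask).argmax(axis=1)
    return int((preds != y_val).sum())

def least_relevant_unit(W1, b1, W2, b2, X_val, y_val):
    """Index of the hidden unit whose removal hurts validation least."""
    n_hidden = W1.shape[1]
    costs = []
    for j in range(n_hidden):
        mask = np.ones(n_hidden)
        mask[j] = 0.0                 # simulate pruning unit j
        costs.append(misclassified(W1, b1, W2, b2, X_val, y_val, mask))
    return int(np.argmin(costs))
```

Counting misclassified validation patterns, rather than summing raw output differences, reflects the point the abstract raises: a unit can shift the outputs noticeably without flipping any classification.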
International Computer Science and Engineering Conference | 2013
Nattakon Chompupatipong; Kietikul Jearanaitanakij
The nearest neighbor algorithm is a well-known pattern recognition method for classifying objects based on the closest examples in the feature space. However, its major drawback is the sequential search operation, which calculates the distance between the query object and every training instance. In this paper, we propose a novel method to accelerate the search operation in the nearest neighbor algorithm. Our method consists of two main steps: creating the reference table and searching for the nearest neighbor. The reference table of training instances is created once in an initial phase and consulted repeatedly by the search step. This reference table can drastically reduce the search time of the nearest neighbor algorithm in any feature space. Experimental results on five real-world datasets from the UCI repository show a remarkable improvement in search time while accuracy is preserved.
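The abstract does not describe how the reference table is constructed, so the sketch below substitutes a simple stand-in lookup structure: training instances are sorted once by Euclidean norm, and each query prunes candidates using the bound |‖q‖ − ‖t‖| ≤ ‖q − t‖. The `NormTableNN` class is illustrative only and is not the authors' method.

```python
import bisect
import numpy as np

class NormTableNN:
    """Exact 1-NN search with a precomputed, norm-sorted lookup table."""

    def __init__(self, X_train, y_train):
        norms = np.linalg.norm(X_train, axis=1)
        order = np.argsort(norms)
        self.X, self.y = X_train[order], y_train[order]
        self.norms = norms[order]              # built once, reused per query

    def query(self, q):
        qn = np.linalg.norm(q)
        lo = bisect.bisect_left(self.norms, qn) - 1
        hi = lo + 1
        best_d, best_j = np.inf, -1
        # Expand outward from the insertion point; stop a direction once
        # the norm gap alone already exceeds the best distance so far.
        while lo >= 0 or hi < len(self.norms):
            if lo >= 0:
                if qn - self.norms[lo] <= best_d:
                    d = np.linalg.norm(q - self.X[lo])
                    if d < best_d:
                        best_d, best_j = d, lo
                    lo -= 1
                else:
                    lo = -1                    # all lower norms pruned
            if hi < len(self.norms):
                if self.norms[hi] - qn <= best_d:
                    d = np.linalg.norm(q - self.X[hi])
                    if d < best_d:
                        best_d, best_j = d, hi
                    hi += 1
                else:
                    hi = len(self.norms)       # all higher norms pruned
        return self.y[best_j], best_d
```

The table costs one O(n log n) sort up front; each query then touches only the instances whose norms fall inside a shrinking search window, rather than the entire training set.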
International Symposium on Communications and Information Technologies | 2006
Kietikul Jearanaitanakij; Ouen Pinngern
This paper proposes a method to determine the irrelevance of hidden units in an artificial neural network. Unlike other approaches, we calculate the sensitivity of a hidden unit from the validation set instead of the training set. The advantage of using the validation set to calculate the sensitivity is that we never overestimate the relevance of a hidden unit; in other words, we always remove the unit that has the least effect on the validation set error. As a result, the pruned neural network has the highest generalization compared with the other choices of removal. Our sensitivity is based on the activation difference of the output units: the gap between the activation of the output units when a particular hidden unit is present and when it is removed. We have applied our technique to two standard benchmark problems. The experimental results show that the proposed technique can correctly identify the least relevant hidden unit.
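A minimal sketch of this activation-difference sensitivity, assuming a single-hidden-layer network with tanh hidden units; the weight names are illustrative. For hidden unit j, the sensitivity accumulates, over the validation set, the absolute gap between the output activations with unit j present and with its contribution zeroed out.

```python
import numpy as np

def activation_difference(W1, b1, W2, b2, X_val, j):
    """Total output-activation gap caused by removing hidden unit j."""
    h = np.tanh(X_val @ W1 + b1)       # hidden activations, unit j present
    full = h @ W2 + b2                 # outputs with unit j present
    h_pruned = h.copy()
    h_pruned[:, j] = 0.0               # zero out unit j's contribution
    pruned = h_pruned @ W2 + b2        # outputs with unit j removed
    return float(np.abs(full - pruned).sum())
```

The unit with the smallest accumulated difference on the validation set is the one whose removal changes the outputs least, and hence the natural candidate for pruning.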
International Conference on Electrical Engineering/Electronics, Computer, Telecommunications and Information Technology | 2015
Thanet Satukitchai; Kietikul Jearanaitanakij
International Conference on Electrical Engineering/Electronics, Computer, Telecommunications and Information Technology | 2017
Sornnarong Srimakham; Kietikul Jearanaitanakij
International Computer Science and Engineering Conference | 2017
Nicha Kaewrod; Kietikul Jearanaitanakij
International Computer Science and Engineering Conference | 2017
Suratchanan Kraidech; Kietikul Jearanaitanakij
International Computer Science and Engineering Conference | 2017
Rathachai Chawuthai; Kietikul Jearanaitanakij
Computer and Information Technology | 2016
Thanet Satukitchai; Kietikul Jearanaitanakij