Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Chidchanok Lursinsap is active.

Publication


Featured research published by Chidchanok Lursinsap.


Knowledge Discovery and Data Mining | 2009

Safe-Level-SMOTE: Safe-Level-Synthetic Minority Over-Sampling TEchnique for Handling the Class Imbalanced Problem

Chumphol Bunkhumpornpat; Krung Sinapiromsaran; Chidchanok Lursinsap

The class imbalance problem occurs in various disciplines when one of the target classes has a tiny number of instances compared to the other classes. A typical classifier normally ignores or fails to detect a minority class because of its small number of instances. SMOTE is one of the over-sampling techniques that remedy this situation: it generates minority instances within the overlapping regions. However, SMOTE synthesizes minority instances at random positions along a line joining a minority instance and one of its selected nearest neighbours, ignoring nearby majority instances. Our technique, called Safe-Level-SMOTE, carefully samples minority instances along the same line with a different weight, called the safe level, which is computed from the nearest-neighbour minority instances. By synthesizing more minority instances in regions with a larger safe level, we achieve better accuracy than SMOTE and Borderline-SMOTE.
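A minimal sketch of the core idea in Python (assuming numpy; function and variable names are illustrative, and the corner case where both safe levels are zero is omitted):

```python
import numpy as np

def safe_level(x, minority, majority, k=5):
    # Safe level of x: how many of its k nearest neighbours are minority
    # instances. Assumes x itself is a member of the combined set, so the
    # nearest match (distance 0) is skipped.
    pts = np.vstack([minority, majority])
    labels = np.array([1] * len(minority) + [0] * len(majority))
    dist = np.linalg.norm(pts - x, axis=1)
    nearest = np.argsort(dist)[1:k + 1]
    return int(labels[nearest].sum())

def synthesize(p, n, sl_p, sl_n, rng):
    # Place a synthetic instance on the segment from p to its neighbour n,
    # biased toward the endpoint with the larger (safer) safe level.
    if sl_n == 0 and sl_p > 0:
        gap = 0.0                                  # n is unsafe: duplicate p
    elif sl_p >= sl_n:
        gap = rng.uniform(0.0, sl_n / sl_p)        # stay close to p
    else:
        gap = rng.uniform(1.0 - sl_p / sl_n, 1.0)  # stay close to n
    return p + gap * (n - p)
```

The weighting is what distinguishes this from plain SMOTE, which would draw the gap uniformly from [0, 1] regardless of nearby majority instances.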


Applied Intelligence | 2012

DBSMOTE: Density-Based Synthetic Minority Over-sampling TEchnique

Chumphol Bunkhumpornpat; Krung Sinapiromsaran; Chidchanok Lursinsap

A dataset exhibits the class imbalance problem when a target class has a very small number of instances relative to other classes. A trivial classifier typically fails to detect a minority class due to its extremely low incidence rate. In this paper, a new over-sampling technique called DBSMOTE is proposed. Our technique relies on a density-based notion of clusters and is designed to over-sample an arbitrarily shaped cluster discovered by DBSCAN. DBSMOTE generates synthetic instances along a shortest path from each positive instance to a pseudo-centroid of a minority-class cluster. Consequently, these synthetic instances are dense near this centroid and are sparse far from this centroid. Our experimental results show that DBSMOTE improves precision, F-value, and AUC more effectively than SMOTE, Borderline-SMOTE, and Safe-Level-SMOTE for imbalanced datasets.
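The centroid-directed generation step could be sketched as follows. This simplified version assumes the minority cluster has already been discovered (the paper uses DBSCAN) and replaces the graph shortest path with a straight segment, so the names and details here are illustrative:

```python
import numpy as np

def dbsmote_cluster(cluster, n_new, rng):
    # Pseudo-centroid: the cluster member nearest the cluster mean.
    mean = cluster.mean(axis=0)
    centroid = cluster[np.argmin(np.linalg.norm(cluster - mean, axis=1))]
    synthetic = []
    for _ in range(n_new):
        p = cluster[rng.integers(len(cluster))]
        gap = rng.uniform()                 # position along the path p -> centroid
        synthetic.append(p + gap * (centroid - p))
    # Every path ends at the centroid, so samples concentrate near it.
    return np.array(synthetic)
```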


Design Automation Conference | 1989

DTR: A Defect-Tolerant Routing Algorithm

Anucha Pitaksanonkul; Suchai Thanawastien; Chidchanok Lursinsap; J. A. Gandhi

A new channel routing algorithm called DTR (Defect-Tolerant Routing) is investigated. This algorithm minimizes the total area while simultaneously maximizing performance by reducing the critical area, which can potentially be a source of logical faults caused by the bridging effects of spot defects. Experimental results show that DTR produces less critical area than Yoshimura and Kuh's algorithm [1].


Pattern Recognition Letters | 2005

A divide-and-conquer approach to the pairwise opposite class-nearest neighbor (POC-NN) algorithm

Thanapant Raicharoen; Chidchanok Lursinsap

This paper presents a new method, based on a divide-and-conquer approach, for selecting and replacing a set of prototypes from the training set for the nearest neighbor rule. The method aims at reducing computational time and memory space, as well as sensitivity to the order and noise of the training data. A reduced prototype set contains Pairwise Opposite Class-Nearest Neighbor (POC-NN) prototypes, which lie close to the decision boundary and are used instead of the training patterns. POC-NN prototypes are obtained by recursively separating and analyzing the training data into two regions until each region is correctly grouped and classified. Separability is determined by the POC-NN prototypes, which are essential for defining the locations of all separating hyperplanes. Our method is fast and order-independent, and the user can reduce the number of prototypes and the overfitting of the model. The experimental results demonstrate the effectiveness of this technique, comparing its accuracy, prototype rate, and training time with those of classical nearest neighbor techniques.
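One selection step might look like the following sketch (numpy assumed; the full algorithm recursively splits the data with a hyperplane through this pair, which is not shown, so treat the names as illustrative):

```python
import numpy as np

def poc_nn_pair(class_a, class_b):
    # The closest pair of opposite-class patterns: both lie near the
    # decision boundary and are kept as prototypes.
    dists = np.linalg.norm(class_a[:, None, :] - class_b[None, :, :], axis=2)
    i, j = np.unravel_index(np.argmin(dists), dists.shape)
    return class_a[i], class_b[j]
```

Because prototypes are drawn only from near the boundary, interior training patterns can be discarded, which is where the memory and time savings come from.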


Image and Vision Computing | 2007

Face detection and facial feature localization without considering the appearance of image context

Suphakant Phimoltares; Chidchanok Lursinsap; Kosin Chamnongthai

Face and facial feature detection plays an important role in various applications such as human-computer interaction, video surveillance, face tracking, and face recognition, and efficient detection algorithms are required for these tasks. This paper presents algorithms that handle all types of face images under several image conditions. There are two main stages. In the first stage, faces are detected in the original image by using Canny edge detection and our proposed average face templates. In the second, a proposed neural visual model (NVM) is used to recognize all possible facial feature positions. Input parameters are obtained from the positions of facial features and from face characteristics that are relatively insensitive to intensity changes. Finally, to improve the results, image dilation is applied to remove irrelevant regions. Additionally, the algorithms can be extended to the rotational invariance problem by using the Radon transform to extract the main angle of the face. On more than 1000 images, the algorithms were successfully tested with various types of faces affected by intensity, occlusion, structural components, facial expression, illumination, noise, and orientation.


Pattern Recognition Letters | 2013

Handling imbalanced data sets with synthetic boundary data generation using bootstrap re-sampling and AdaBoost techniques

Putthiporn Thanathamathee; Chidchanok Lursinsap

The problem of imbalanced data between classes prevails in various applications such as bioinformatics. Prediction accuracy on imbalanced data is usually biased towards the majority class; however, in several applications, accuracy on the minority class matters as much as on the majority class. Many techniques have been proposed to increase minority-class accuracy. These techniques are based on re-sampling, either over-sampling or under-sampling, during the training process, but they did not consider how the data are scattered in the space. In this paper, we propose a new technique based on the fact that the location of the separating function between any two sub-clusters of different classes is defined only by the boundary data of each sub-cluster. In addition, accuracy is measured only on the testing set. Our technique adapts the concept of bootstrapping to estimate a new region for each sub-cluster and to synthesize new boundary data; the new region copes with unseen testing data. All newly synthesized data are then classified using the AdaBoost algorithm. Our results outperformed the other techniques under several performance measures.
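The boundary-estimation idea could be sketched as below (illustrative names; the sub-cluster discovery and the AdaBoost classification stage are omitted):

```python
import numpy as np

def bootstrap_boundary(minority, majority, n_boot, rng):
    # From each bootstrap replicate of the minority class, keep the instance
    # closest to the majority class: an estimate of a boundary point. The
    # replicates vary, so the kept points trace out the boundary region.
    boundary = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(minority), size=len(minority))
        sample = minority[idx]
        d = np.linalg.norm(sample[:, None, :] - majority[None, :, :],
                           axis=2).min(axis=1)
        boundary.append(sample[np.argmin(d)])
    return np.array(boundary)
```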


IEEE Transactions on Neural Networks | 2010

A Very Fast Neural Learning for Classification Using Only New Incoming Datum

Saichon Jaiyen; Chidchanok Lursinsap; Suphakant Phimoltares

This paper proposes a very fast one-pass-throw-away learning algorithm based on a hyperellipsoidal function that can be translated and rotated to cover the data set during the learning process; the translation and rotation depend upon the distribution of the data. In addition, we present a versatile elliptic basis function (VEBF) neural network with one hidden layer. The hidden layer is adaptively divided into sub-hidden layers according to the number of classes in the training data set, and each sub-hidden layer can grow by adding a new node to learn new samples during training. The learning time is O(n), where n is the number of data. The network can learn any new incoming datum independently, without involving the previously learned data, so there is no need to store all the data in order to mix them with new incoming data during learning.
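A simplified, axis-aligned sketch of the one-pass idea (the paper's VEBF also rotates the hyperellipsoid using the data's covariance; the class and names here are illustrative):

```python
import numpy as np

class EllipsoidUnit:
    # One hidden node: an axis-aligned ellipsoid whose running mean and
    # variance are updated from each new datum alone (Welford's method),
    # so no previously learned data need be stored.
    def __init__(self, dim):
        self.n = 0
        self.mean = np.zeros(dim)
        self.m2 = np.zeros(dim)   # running sum of squared deviations

    def learn(self, x):
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    def covers(self, x, width=3.0):
        # Is x inside the ellipsoid of `width` standard deviations per axis?
        var = self.m2 / max(self.n - 1, 1) + 1e-9
        return bool(np.all((x - self.mean) ** 2 <= width ** 2 * var))
```

Each update is O(1) per datum, which is how the overall O(n) learning time arises.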


IEEE Transactions on Neural Networks | 1994

Weight shifting techniques for self-recovery neural networks

Chularat Khunasaraphan; Kanonkluk Vanapipat; Chidchanok Lursinsap

In this paper, a self-recovery technique for feedforward neural networks called weight shifting, together with its analytical models, is proposed. The technique recovers a network when faulty links and/or neurons occur during operation. If some input links of a specific neuron are detected as faulty, their weights are shifted to healthy links of the same neuron. If a faulty neuron is encountered, it can be treated as a special case of faulty links by considering all the output links of that neuron to be faulty. The aim of this technique is to recover the network in a short time without any retraining or hardware repair. We also propose a hardware architecture for implementing this technique.
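A toy illustration of the shifting idea for a single neuron (the paper derives the shifts analytically; this input-dependent version is only a sketch, and it assumes the chosen healthy link has a nonzero input):

```python
import numpy as np

def shift_weights(w, x, faulty):
    # Shift each faulty input link's contribution w[i] * x[i] onto a healthy
    # link j (here simply the first one), preserving the neuron's weighted
    # sum for input x. Assumes x[j] != 0 for the chosen healthy link.
    w = np.asarray(w, dtype=float).copy()
    healthy = [i for i in range(len(w)) if i not in faulty]
    for i in faulty:
        j = healthy[0]
        w[j] += w[i] * x[i] / x[j]
        w[i] = 0.0                  # the faulty link now contributes nothing
    return w
```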


International Conference on Computational Science and Its Applications | 2010

Estimating Software Effort with Minimum Features Using Neural Functional Approximation

Pichai Jodpimai; Peraphon Sophatsathit; Chidchanok Lursinsap

The aim of this study is to improve software effort estimation by incorporating straightforward mathematical principles and an artificial neural network technique. Our process consists of three major steps. The first step is data preparation for each considered database. The second step reduces the number of given features by keeping only the relevant ones. The final step transforms the problem of estimating software effort into problems of classification and functional approximation using a feedforward neural network. Experimental data are taken from well-known public domains. The results, obtained using only the few selected features, are systematically compared with related prior work and demonstrate that the proposed model yields satisfactory estimation accuracy under the MMRE and PRED measures.
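The MMRE and PRED measures mentioned above are standard in effort estimation and can be computed as:

```python
import numpy as np

def mmre(actual, predicted):
    # Mean Magnitude of Relative Error: average of |actual - predicted| / actual.
    return float(np.mean(np.abs(actual - predicted) / actual))

def pred(actual, predicted, level=0.25):
    # PRED(l): fraction of estimates whose relative error is within l.
    mre = np.abs(actual - predicted) / actual
    return float(np.mean(mre <= level))
```

Lower MMRE and higher PRED indicate better estimation accuracy.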


Design Automation Conference | 1991

Automated micro-roll-back self-recovery synthesis

Vijay Raghavendra; Chidchanok Lursinsap

The problem of automated synthesis of a self-recovery chip using micro-roll-back and checkpoint insertion techniques is discussed, and an efficient solution is proposed. An efficient design of micro-roll-back and checkpoint insertion can be achieved by considering them during the scheduling and allocation steps. The rollback and recovery scheme is designed to satisfy constraints on the available number of registers and the maximum allowable recovery time. The proposed checkpointing (rollback-point) algorithm allows the system to recover from most transient faults.

Collaboration


Dive into Chidchanok Lursinsap's collaborations.

Top Co-Authors

Khamron Sunat

Mahanakorn University of Technology

Anucha Pitaksanonkul

University of Louisiana at Lafayette
