Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Theekapun Charoenpong is active.

Publication


Featured research published by Theekapun Charoenpong.


International Conference on Knowledge and Smart Technology | 2012

Pupil extraction system for Nystagmus diagnosis by using K-mean clustering and Mahalanobis distance technique

Theekapun Charoenpong; Srisupang Thewsuwan; Theerasak Chanwimalueang; Visan Mahasithiwat

Vertigo is a type of dizziness commonly associated with nystagmus, and doctors diagnose it by observing the motion of the eye. A nystagmus diagnosis system therefore needs an efficient and precise pupil extraction stage. This paper proposes a pupil extraction method using K-means clustering and the Mahalanobis distance. An image sequence is captured by an infrared camera mounted on a binocular. Because the pupil is dark, K-means clustering is used to segment the black pixels; the extracted region is the pupil, but it contains noise, which is eliminated by the Mahalanobis distance technique before the pupil is finally extracted. In the experiments, 1869 frames from 9 image sequences were used to test the performance of the proposed method: accuracy is 73.68% and precision is 3.18 pixels of error.
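
The pipeline above maps naturally onto a two-stage segmentation. Below is a minimal Python sketch of that idea, assuming a single grayscale infrared frame as input; the cluster count and chi-square cutoff are illustrative choices, not the paper's parameters.

```python
import numpy as np
from sklearn.cluster import KMeans

def extract_pupil_candidates(frame: np.ndarray, n_clusters: int = 3,
                             chi2_cutoff: float = 9.21):
    """Segment the darkest cluster, then prune outliers by Mahalanobis distance."""
    h, w = frame.shape
    km = KMeans(n_clusters=n_clusters, n_init=10).fit(
        frame.reshape(-1, 1).astype(float))
    darkest = int(np.argmin(km.cluster_centers_.ravel()))
    ys, xs = np.nonzero(km.labels_.reshape(h, w) == darkest)
    pts = np.column_stack([xs, ys]).astype(float)
    # Squared Mahalanobis distance of each dark pixel from the blob centroid;
    # far-away pixels (eyelashes, shadows) are treated as noise and dropped.
    diff = pts - pts.mean(axis=0)
    cov_inv = np.linalg.pinv(np.cov(pts, rowvar=False))
    d2 = np.einsum('ij,jk,ik->i', diff, cov_inv, diff)
    kept = pts[d2 < chi2_cutoff]       # 9.21 ~ 99% quantile of chi2 with 2 dof
    return kept.mean(axis=0), kept     # estimated pupil center, inlier pixels
```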


International Congress on Image and Signal Processing | 2010

Lane detection using smoothing spline

Chaiwat Nuthong; Theekapun Charoenpong

In the vehicle safety research area, the Lane Departure Warning System (LDWS) is of major interest. Such a system, however, needs lane detection and lane tracking as its basis. This paper proposes an algorithm to detect the lane as an initialization for lane tracking. The algorithm takes an image captured from a video stream as input, segments it into a number of small tiles, and applies Principal Component Analysis (PCA) to each tile to find its centroid and principal axis. After recovering the original image from all of the tiles, the probable lines are found using k-means clustering. Two of these lines are selected as the lanes of the road, and the corresponding centroid points are chosen as control points of the smoothing splines that represent the lane. The obtained splines are intended to be used further as an initialization of the lane tracking algorithm. The method has been tested and works well; however, its limitations are also stated and need to be addressed in the future.
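
The per-tile PCA step and the final spline fit can be sketched compactly. The following fragment is a rough illustration under my own assumptions (a binary edge image as input, a 32-pixel tile size, and SciPy's generic smoothing spline standing in for the paper's spline formulation):

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

def tile_pca(edges: np.ndarray, tile: int = 32):
    """Return (centroid, principal axis) for every tile with enough edge pixels."""
    feats = []
    h, w = edges.shape
    for y0 in range(0, h, tile):
        for x0 in range(0, w, tile):
            ys, xs = np.nonzero(edges[y0:y0 + tile, x0:x0 + tile])
            if len(xs) < 5:
                continue
            pts = np.column_stack([xs + x0, ys + y0]).astype(float)
            # Principal axis = eigenvector of the covariance matrix with the
            # largest eigenvalue (eigh returns eigenvalues in ascending order).
            vals, vecs = np.linalg.eigh(np.cov(pts, rowvar=False))
            feats.append((pts.mean(axis=0), vecs[:, -1]))
    return feats

def lane_spline(centroids: np.ndarray, smooth: float = 10.0):
    """Fit x = f(y) through the centroids picked as one lane's control points
    (assumes distinct y values, as UnivariateSpline requires)."""
    order = np.argsort(centroids[:, 1])
    return UnivariateSpline(centroids[order, 1], centroids[order, 0], s=smooth)
```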


2015 International Conference on Intelligent Informatics and Biomedical Sciences (ICIIBMS) | 2015

A new method to estimate rotation angle of a 3D eye model from single camera

Theekapun Charoenpong; Thananan Jantima; Chamaporn Chianrabupra; Visan Mahasitthiwat

A challenge in current research on eye motion estimation is to estimate the motion of a three-dimensional (3D) eye model from a two-dimensional (2D) eye image. This paper proposes a new method to estimate the rotation angle of a 3D eye model from a 2D eye image by an angular orientation identification technique. Real eye movement is captured by a camera mounted on one side of a binocular and focused on the eye. The method consists of three steps: 1) pupil extraction from the 2D image, 2) ellipse fitting, and 3) angular orientation identification. First, the pupil is extracted so that its shape can be measured. Second, the ellipse fitting technique is used to reconstruct the complete shape of the pupil, and the lengths of the major and minor axes are computed from the ellipse. Finally, the angular orientation of the 3D eye is estimated from the major and minor axis lengths, with the rotation angle defined by a lookup table relating ellipse axis lengths to rotation angles; the 3D eye model is then oriented according to the real 2D eye movement. To evaluate the estimation model, the computational results are compared with manual measurements, varying the rotation angle from -50 to +50 degrees in the yaw and roll axes in steps of three degrees. The precision of the mathematical model in simulating the 3D eye model orientation in the roll and yaw directions is 99.53% and 99.29%, respectively. The advantage of this method over others is that eye motion in 3D space can be estimated from only one camera.
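
The geometric core is that a circular pupil tilted by an angle theta projects to an ellipse whose minor-to-major axis ratio is cos(theta), which is what a lookup table over axis lengths encodes. A minimal OpenCV sketch of that relation, with the sign disambiguation and the yaw/roll split left out, might look like this (the contour format and the clamping are my assumptions):

```python
import math
import numpy as np
import cv2

def rotation_magnitude_deg(pupil_contour: np.ndarray) -> float:
    """Estimate |rotation angle| of the eye from a fitted pupil ellipse.

    pupil_contour: (N, 1, 2) int array of boundary points, N >= 5,
    as produced by cv2.findContours.
    """
    (_, _), (ax1, ax2), _ = cv2.fitEllipse(pupil_contour)  # full axis lengths
    ratio = min(ax1, ax2) / max(ax1, ax2)                  # minor / major
    # A circle tilted by theta foreshortens one axis by cos(theta).
    return math.degrees(math.acos(float(np.clip(ratio, 0.0, 1.0))))
```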


International Conference on Knowledge and Smart Technology | 2017

Hand posture estimation from 2D image sequence by hand landmark identification

Pargorn Puttapirat; Theekapun Charoenpong

This paper investigates a framework to estimate hand posture from a 2D image sequence using hand landmark identification. The acquired 2D data are combined with a known human hand model and its constraints. Important landmarks of the hand in the image are extracted and identified to specify their locations, then matched with the corresponding landmarks in the 3D model to estimate the hand posture and generate a 3D hand model. The results on a real hand image sequence show that the 3D model moves in accordance with the real hand. The framework works well on four fingers: the index, middle, ring, and little fingers. The advantage of this method is that it works on hands without markers.
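
One standard constraint such 2D-to-3D matching can exploit is that the bone lengths of the hand model are known, so a 2D landmark can be lifted to 3D by solving for the depth that preserves the bone length. The sketch below illustrates that idea under an orthographic-projection assumption of my own; it is not the paper's exact formulation.

```python
import numpy as np

def lift_landmark(parent3d: np.ndarray, child2d: np.ndarray,
                  bone_len: float) -> np.ndarray:
    """Recover a child joint's 3D position from its 2D image location and the
    known model bone length to its (already lifted) parent joint."""
    dxy = np.asarray(child2d, dtype=float) - parent3d[:2]
    planar = float(np.linalg.norm(dxy))
    # The depth offset follows from bone_len^2 = planar^2 + dz^2; clamp to 0
    # when image noise makes the observed 2D bone longer than the model bone.
    dz = np.sqrt(max(bone_len ** 2 - planar ** 2, 0.0))
    return np.array([*(parent3d[:2] + dxy), parent3d[2] + dz])
```

Walking such a lift from the wrist outward along each finger, and resolving the toward/away depth ambiguity with the hand model's joint-angle constraints, would yield a full posture estimate.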


International Conference on Knowledge and Smart Technology | 2017

Human fall detection by using the body vector

Siriporn Pattamaset; Theekapun Charoenpong; Patsama Charoenpong; Chamaporn Chianrabutra

A problem in current research arises when a human falls in the direction of the camera, so we propose a new method for human fall detection using a body vector technique. The method consists of three steps, with two image sequences used as the input of the system. First, markers affixed to sixteen joints are extracted using the Mahalanobis distance. Second, a stereo vision technique is used to reconstruct the human joints in three-dimensional space. Finally, the joint coordinates are used to compute the principal component vector. A human fall is detected from the angle between the body vector and the vertical axis together with the velocity of the body center. To test the performance of the proposed method, subjects walk toward the cameras; falls in the camera direction by twenty subjects are used, and accuracy is 100%. The method performs effectively for detecting human falls in the camera direction.
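
The decision rule can be written down directly: the body vector is the first principal component of the 3D joint cloud, and a fall is flagged when both the tilt from vertical and the body-center speed are large. A minimal sketch, with the threshold values and a z-up coordinate frame as my own assumptions:

```python
import numpy as np

def detect_fall(joints: np.ndarray, prev_center: np.ndarray, dt: float,
                tilt_th_deg: float = 60.0, speed_th: float = 1.0):
    """joints: (16, 3) array of reconstructed joint coordinates in meters."""
    center = joints.mean(axis=0)
    # First right-singular vector of the centered joints = principal
    # component, i.e. the body axis.
    _, _, vt = np.linalg.svd(joints - center)
    body_axis = vt[0]
    # Angle between the body axis and vertical (z-up frame assumed).
    tilt = np.degrees(np.arccos(abs(float(body_axis[2]))))
    speed = float(np.linalg.norm(center - prev_center)) / dt
    return (tilt > tilt_th_deg and speed > speed_th), center
```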


International Conference on Knowledge and Smart Technology | 2016

The method to read nutrient quantity in guideline daily amounts label by image processing

Arisa Poonsri; Supiya Charoensiriwath; Theekapun Charoenpong

Guideline Daily Amounts (GDA) labels provide nutrition information that helps consumers understand a product in the context of their overall diet. In this paper, we propose a method to read the nutrition information of a GDA food label by image processing. The method consists of three steps: label extraction, number segmentation, and number recognition. First, the GDA label is captured by a camera, and Otsu's threshold combined with a constant color-level threshold is used to define the area of the label. Second, the four nutrient numbers in the GDA label are segmented with an area-divider algorithm. Third, each number is recognized by a neural network, and finally the quantity of each nutrient on the label is read. To evaluate the performance of the proposed method, forty images are tested; each GDA label contains four nutrients, and the digits zero to nine are classified. Of 407 digits in total, 302 are classified correctly, for an accuracy of 74.20%. The experimental results are satisfactory.
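
The label-extraction step hinges on Otsu's automatic threshold. A minimal OpenCV sketch of that stage follows; folding the extra constraint in as a fixed gray-level floor is my reading of the paper's "constant threshold of color level", and the floor value is illustrative:

```python
import cv2
import numpy as np

def extract_label_mask(bgr: np.ndarray, min_gray: int = 40) -> np.ndarray:
    """Binary mask of the label region: Otsu threshold plus a fixed floor."""
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    # Otsu picks the global threshold automatically from the gray histogram.
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    mask[gray < min_gray] = 0          # constant color-level constraint
    return mask
```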


International Conference on Knowledge and Smart Technology | 2016

Development of facial expression recognition by significant sub-region

Sopa Potikanya; Tanissorn Lertpithaksoonthorn; Aslam Meechai; Rattiya Mungauamklang; Puckpoom Keaokao; Chamaiporn Sukjamsri; Chamaporn Chianrabutra; Theekapun Charoenpong

A problem with previous research on facial expression recognition using the face plane is that redundant elements are also used for recognition. In this paper, we propose a facial expression recognition method that maximizes accuracy by using significant sub-regions of the face plane. The method consists of five steps: image acquisition, face plane computation, significant sub-region identification, displacement vector computation, and classification. To recognize facial expressions, the face plane is divided into 196 (14×14) sub-regions; the cross points passing through the face plane in each sub-region are counted and used to compute the significance level of that sub-region. Using Principal Component Analysis (PCA), the significant sub-regions of the face plane are determined. Displacement vectors are then used for facial expression recognition, with a support vector machine applied for classification. To test the performance of the proposed method, experiments were run on four expressions (happiness, anger, surprise, and sadness) using the BU3DFE database; the maximum recognition rate is 71.20% when 133 significant sub-regions are used. The results show that the redundant elements are eliminated and the accuracy is improved.
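
The selection-plus-classification stage can be sketched with scikit-learn. In the fragment below, each training sample is assumed to already be a 196-dimensional vector of per-sub-region counts, and ranking sub-regions by their loading on the leading principal component stands in for the paper's significance score:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC

def train_expression_classifier(counts: np.ndarray, labels: np.ndarray,
                                n_regions: int = 133):
    """counts: (n_samples, 196) cross-point counts over the 14x14 sub-regions."""
    pca = PCA().fit(counts)
    # Keep the sub-regions that load most strongly on the first component.
    keep = np.argsort(-np.abs(pca.components_[0]))[:n_regions]
    clf = SVC(kernel="rbf").fit(counts[:, keep], labels)
    return clf, keep
```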


International Conference on Machine Vision | 2015

A new method to detect nystagmus for vertigo diagnosis system by eye movement velocity

Theekapun Charoenpong; Preeyanan Pattrapisetwong; Visan Mahasitthiwat

Vertigo is a common disease associated with nystagmus and is difficult to diagnose by observation alone. In this paper, we propose a method to detect nystagmus for a vertigo diagnosis system using eye movement velocity. The method consists of three main steps: pupil extraction, computation of the eye movement velocity, and nystagmus detection. An infrared camera records the eye movement as an image sequence. In the first step, the pupil is initially extracted with an adaptive threshold, the blackest blob, and an ellipse fitting technique. In the second step, the pupil position is measured from its center and the velocity of eye movement is computed. In the third step, involuntary eye movement is detected by comparing the eye movement velocity in each frame against a criterion. To evaluate the performance of the proposed method, eye movement is recorded from six subjects; the accuracy rate of involuntary eye movement detection is 87.21%. The results show that the performance of the method is satisfactory, and this is the first method able to detect nystagmus from video-oculography.
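
The velocity criterion itself reduces to a few lines once pupil centers are available: frame-to-frame displacement divided by the frame interval, flagged against a threshold. A minimal sketch, with the threshold value being illustrative rather than the paper's criterion:

```python
import numpy as np

def flag_involuntary_frames(centers: np.ndarray, fps: float,
                            vel_th: float = 40.0) -> np.ndarray:
    """centers: (n_frames, 2) pupil centers in pixels.
    Returns indices of frames whose eye-movement speed exceeds vel_th (px/s)."""
    speed = np.linalg.norm(np.diff(centers, axis=0), axis=1) * fps
    return np.nonzero(speed > vel_th)[0] + 1  # +1: speed[i] belongs to frame i+1
```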


International Conference on Knowledge and Smart Technology | 2015

Local foreground extraction technique for rat walking behavior classification

Wimol Chanchanachitkul; Puncharas Nanthiyanuragsa; Ratirat Sangpayap; Watchareewan Thongsaard; Theekapun Charoenpong

An important experiment for testing the effect of a new drug in medical science and brain science is the study of rat behavior. Open-field tests such as the hole-board model are popular experimental models for analyzing rat behavior: the number of occurrences of each behavior within a period of time, for example walking, rearing, and head dipping, is counted and recorded, and the data are compared before and after the drug is given to the rat. At present these behaviors are observed and counted by experimenters, which easily introduces human error. In this paper, we propose a method for improving the accuracy of rat walking behavior classification by a local foreground extraction technique. A webcam mounted 1.5 meters above the model records the rat's behavior. The proposed method consists of four steps. First, a background is constructed for background subtraction using the K-means clustering technique. Second, the foreground, i.e. the rat, is extracted by background subtraction. Third, local extraction is applied to complete the rat body data from the second step. Finally, the rat body length, measured by an ellipse fitting technique, is used for walking behavior classification. To evaluate the performance of the proposed method, classification accuracy is measured: the accuracy rate is 83.8%, higher than that of the existing method.
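
The first two steps can be sketched in a few lines. In the fragment below the temporal median over the frame stack stands in for the paper's K-means background model (the median approximates the dominant per-pixel cluster center when the rat occupies any given pixel only briefly), and the subtraction threshold is illustrative:

```python
import numpy as np

def build_background(frames: np.ndarray) -> np.ndarray:
    """frames: (n, h, w) grayscale stack -> per-pixel background estimate."""
    return np.median(frames.astype(float), axis=0)

def extract_foreground(frame: np.ndarray, background: np.ndarray,
                       th: float = 30.0) -> np.ndarray:
    """Boolean mask of pixels that differ strongly from the background (the rat)."""
    return np.abs(frame.astype(float) - background) > th
```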


International Conference on Networking, Sensing and Control | 2013

Accurate pupil extraction algorithm by using integrated method

Theekapun Charoenpong; Preeyanan Pattrapisetwong; Theerasak Chanwimalueang; Visan Mahasithiwat

As vertigo is diagnosed by observing involuntary eye movement, the position of the pupil is an important parameter for a nystagmus analysis system, and accurate, precise pupil extraction is necessary. In this paper, we improve the accuracy of a pupil extraction algorithm with an integrated method consisting of three processes: primary pupil extraction, noise elimination, and shape estimation. An image sequence, captured by an infrared camera mounted on a binocular, is used as the input of the system. In the first step, the primary pupil in a frame is extracted: an adaptive threshold is applied for preliminary pupil extraction, and the black blob is taken as the primary pupil. Noise occurs in this result, so the Mahalanobis distance technique is used to eliminate it. In some cases the pupil is occluded by an eyelash or eyelid, in which case the complete shape of the pupil is estimated by an ellipse. The performance of the proposed method is evaluated on 1869 test frames: accuracy and precision are 94.06% and 1.92 pixels of error, respectively. The advantage of our method over existing research is that the threshold adapts to the individual illumination conditions of each frame, and the accuracy is improved over our previous work [18, 19, 20] by using the black blob in the noise elimination process.
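
The adaptive-threshold and ellipse-fitting stages map directly onto OpenCV primitives. A minimal sketch, with the block size and offset as illustrative parameters and the Mahalanobis noise-elimination stage omitted (see the K-means sketch earlier for that step):

```python
import cv2
import numpy as np

def extract_pupil_ellipse(gray: np.ndarray):
    """Adaptive threshold -> largest dark blob -> fitted ellipse, or None."""
    # The threshold is computed per neighborhood, so it adapts to each frame's
    # (and each region's) illumination rather than using one global value.
    binary = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                   cv2.THRESH_BINARY_INV, 51, 10)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    blob = max(contours, key=cv2.contourArea)   # black blob = primary pupil
    if len(blob) < 5:                           # fitEllipse needs >= 5 points
        return None
    # Ellipse fitting completes the pupil shape under eyelid/eyelash occlusion.
    return cv2.fitEllipse(blob)
```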

Collaboration


Dive into Theekapun Charoenpong's collaborations.

Top Co-Authors


Chaiwat Nuthong

King Mongkut's Institute of Technology Ladkrabang


Arisa Poonsri

Srinakharinwirot University


Pornthep Sarakon

Sirindhorn International Institute of Technology


Visan Mahasithiwat

Srinakharinwirot University
