Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Thanarat H. Chalidabhongse is active.

Publication


Featured research published by Thanarat H. Chalidabhongse.


Real-time Imaging | 2005

Real-time foreground-background segmentation using codebook model

Kyungnam Kim; Thanarat H. Chalidabhongse; David Harwood; Larry S. Davis

We present a real-time algorithm for foreground-background segmentation. Sample background values at each pixel are quantized into codebooks, which represent a compressed form of the background model for a long image sequence. This allows us to capture structural background variation due to periodic-like motion over a long period of time with limited memory. The codebook representation is efficient in memory and speed compared with other background modeling techniques. Our method can handle scenes containing moving backgrounds or illumination variations, and it achieves robust detection for different types of videos. We compared our method with other multimode modeling techniques. In addition to the basic algorithm, two features that improve the algorithm are presented: layered modeling/detection and adaptive codebook updating. For performance evaluation, we applied perturbation detection rate analysis to four background subtraction algorithms and two videos of different types of scenes.
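A rough, single-pixel sketch of the codebook idea in Python follows. The threshold values, the simplified brightness test, and the omission of the paper's layered modeling, MNRL-based pruning, and adaptive updating are assumptions made for brevity, not the authors' exact formulation.

```python
import numpy as np

EPS = 10.0               # color distortion threshold (assumed value)
ALPHA, BETA = 0.6, 1.2   # brightness bound factors (assumed values)

class Codeword:
    def __init__(self, rgb):
        self.v = np.asarray(rgb, dtype=float)                  # mean color vector
        self.i_min = self.i_max = float(np.linalg.norm(rgb))   # observed brightness range
        self.freq = 1

    def matches(self, x):
        x = np.asarray(x, dtype=float)
        # color distortion: distance from x to the line through the origin and v
        dot = np.dot(x, self.v)
        proj2 = dot * dot / max(np.dot(self.v, self.v), 1e-9)
        delta = np.sqrt(max(np.dot(x, x) - proj2, 0.0))
        # brightness must fall inside a band around the stored range
        i = np.linalg.norm(x)
        lo, hi = ALPHA * self.i_max, min(BETA * self.i_max, self.i_min / ALPHA)
        return delta <= EPS and lo <= i <= hi

    def update(self, x):
        x = np.asarray(x, dtype=float)
        self.v = (self.freq * self.v + x) / (self.freq + 1)
        i = np.linalg.norm(x)
        self.i_min, self.i_max = min(self.i_min, i), max(self.i_max, i)
        self.freq += 1

def train_pixel(samples):
    """Build a codebook for one pixel from a sequence of RGB background samples."""
    book = []
    for x in samples:
        for c in book:
            if c.matches(x):
                c.update(x)
                break
        else:
            book.append(Codeword(x))
    return book

def is_foreground(book, x):
    """A pixel is foreground if no codeword in its codebook matches the new value."""
    return not any(c.matches(x) for c in book)
```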


international conference on image processing | 2004

Background modeling and subtraction by codebook construction

Kyungnam Kim; Thanarat H. Chalidabhongse; David Harwood; Larry S. Davis

We present a new fast algorithm for background modeling and subtraction. Sample background values at each pixel are quantized into codebooks, which represent a compressed form of the background model for a long image sequence. This allows us to capture structural background variation due to periodic-like motion over a long period of time with limited memory. Our method can handle scenes containing moving backgrounds or illumination variations (shadows and highlights), and it achieves robust detection for compressed videos. We compared our method with other multimode modeling techniques.
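For comparison context, one of the standard multimode baselines of this kind (a mixture-of-Gaussians background subtractor) is available off the shelf in OpenCV. A minimal sketch, with the input file name assumed:

```python
import cv2

# Mixture-of-Gaussians background subtractor, a typical multimode baseline
bg = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16,
                                        detectShadows=True)

cap = cv2.VideoCapture("traffic.avi")   # hypothetical input video
while True:
    ok, frame = cap.read()
    if not ok:
        break
    fgmask = bg.apply(frame)            # 0 = background, 255 = foreground
    fgmask = cv2.medianBlur(fgmask, 5)  # suppress speckle noise
    cv2.imshow("foreground", fgmask)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()
```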


international conference on information technology coding and computing | 2005

Improved face and hand tracking for sign language recognition

N. Soontranon; Supavadee Aramvith; Thanarat H. Chalidabhongse

In this paper, we develop face and hand tracking for a sign language recognition system. The system is divided into two stages: the initial stage and the tracking stage. In the initial stage, we use skin features to localize the signer's face and hands. An ellipse model in CbCr space is constructed and used to detect skin color. After the skin regions have been segmented, face and hand blobs are identified using size and facial features, under the assumption that the face moves less than the hands in this signing scenario. In the tracking stage, motion estimation is applied only to the hand blobs, using first and second derivatives to predict the hand positions. We observed errors in the tracked positions between consecutive frames in which the velocity changes abruptly. To improve tracking performance, our proposed algorithm compensates for these errors by using an adaptive search area to re-compute the hand blobs. Simulation results indicate that the proposed algorithm tracks the face and hands with greater precision at a negligible increase in computational complexity.
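A minimal sketch of the two key steps, elliptical skin detection in the CbCr plane and second-derivative position prediction. The ellipse parameters and the input file name are illustrative assumptions, not the paper's fitted values.

```python
import cv2
import numpy as np

# Assumed ellipse parameters for the skin cluster in the CbCr plane
CENTER = np.array([110.0, 152.0])   # (Cb, Cr) center
AXES = np.array([22.0, 16.0])       # semi-axes
THETA = np.deg2rad(-30.0)           # rotation of the ellipse

def skin_mask(bgr):
    """Mark pixels whose (Cb, Cr) values fall inside the elliptical skin model."""
    ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb).astype(np.float32)
    cr, cb = ycrcb[..., 1], ycrcb[..., 2]
    # rotate (Cb, Cr) into the ellipse's principal axes
    dx, dy = cb - CENTER[0], cr - CENTER[1]
    u = dx * np.cos(THETA) + dy * np.sin(THETA)
    v = -dx * np.sin(THETA) + dy * np.cos(THETA)
    inside = (u / AXES[0]) ** 2 + (v / AXES[1]) ** 2 <= 1.0
    return (inside * 255).astype(np.uint8)

def predict_position(p0, p1, p2):
    """Constant-acceleration prediction from the last three hand positions,
    using first and second finite differences as the abstract describes."""
    velocity = p2 - p1
    acceleration = (p2 - p1) - (p1 - p0)
    return p2 + velocity + 0.5 * acceleration

frame = cv2.imread("signer.png")   # hypothetical input frame
mask = skin_mask(frame)
# keep only sizeable blobs as face/hand candidates
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
```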


international symposium on communications and information technologies | 2006

A Car Plate Detector using Edge Information

Preemon Rattanathammawat; Thanarat H. Chalidabhongse

This paper describes a new edge-based car plate detection technique to localize and extract car license plates in complex scenes. The system works for both moving and stationary vehicles in compressed low-resolution video streams obtained from real working environments. To cope with the varied illumination conditions typical of outdoor environments, the method works on the edge information of the image. We employ a vertical Sobel edge detector with automatic variable thresholding. A moving window is then scanned across the edge image, and at each location the distribution of edge pixels is examined to decide whether the window is likely to contain a license plate. In the last step, we employ temporal analysis to suppress false detections. In experiments, we tested our method on several compressed video streams containing 317 moving vehicles under illumination conditions varying from morning to night time. The license plate localization achieves a 94% precision rate.
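A minimal sketch of the edge-density window scan, assuming Otsu thresholding stands in for the paper's automatic variable thresholding; the window size, stride, and density threshold are illustrative assumptions.

```python
import cv2
import numpy as np

def plate_candidates(gray, win=(120, 40), stride=10, density=0.25):
    """Return windows whose vertical-edge density suggests a plate region."""
    # vertical edges respond strongly to the character strokes on a plate
    sobel_x = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)
    mag = cv2.convertScaleAbs(sobel_x)
    # automatic threshold (Otsu) as a stand-in for variable thresholding
    _, edges = cv2.threshold(mag, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    h, w = edges.shape
    boxes = []
    for y in range(0, h - win[1], stride):
        for x in range(0, w - win[0], stride):
            patch = edges[y:y + win[1], x:x + win[0]]
            if np.count_nonzero(patch) / patch.size >= density:
                boxes.append((x, y, win[0], win[1]))
    return boxes
```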


international symposium on communications and information technologies | 2004

Face and hands localization and tracking for sign language recognition

N. Soontranon; Supavadee Aramvith; Thanarat H. Chalidabhongse

We develop face and hand detection and tracking for a sign language recognition system. We first perform a preliminary evaluation of several color spaces to find the most suitable one using a nonparametric model approach. Then, we propose an elliptical model in the CbCr color space to lower the complexity of the detection algorithm and to better model skin color. After the skin regions in the input video have been segmented, the facial features of interest and the hands are detected using luminance differences and skeleton features, respectively. In the tracking stage, each blob defines a search region, and block matching between the previous and current frames finds the minimum mean square error (MMSE) match for that blob. Experimental results show that our proposed system is able to detect and track the face and hands in sign language video sequences.
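The block-matching step can be sketched as a plain exhaustive MMSE search; the block size and search range below are assumed values.

```python
import numpy as np

def match_block(prev, curr, top_left, size=16, search=8):
    """Find the block in `curr` that minimises the mean square error against
    the block at `top_left` in `prev`, within a +/- `search` pixel window."""
    y0, x0 = top_left
    ref = prev[y0:y0 + size, x0:x0 + size].astype(np.float32)
    best, best_err = (y0, x0), np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = y0 + dy, x0 + dx
            if y < 0 or x < 0 or y + size > curr.shape[0] or x + size > curr.shape[1]:
                continue
            cand = curr[y:y + size, x:x + size].astype(np.float32)
            err = np.mean((cand - ref) ** 2)   # mean square error
            if err < best_err:
                best, best_err = (y, x), err
    return best, best_err
```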


international symposium on communications and information technologies | 2011

Human action recognition using direction histograms of optical flow

Kanokphan Lertniphonphan; Supavadee Aramvith; Thanarat H. Chalidabhongse

Recognizing human actions is a challenging research area due to the complexity and variation of human appearances and postures, as well as the variation of camera settings and angles. In this paper, we introduce a motion descriptor based on the direction of optical flow for human action recognition. The silhouette region is divided into small regions, and in each region the normalized direction histogram of optical flow is computed. The motion vector is the concatenation of the histograms of all regions. The vectors are smoothed in the time domain by a moving average to reduce motion variation and noise. For training, the motion vectors of the training set are clustered by K-means to represent each action; clustering groups similar postures together, and each group is represented by its cluster center. Input frames are compared to these centers by distance, and a K-nearest-neighbor rule classifies the action. The experimental results show that K-means clustering groups similar poses together, and that the motion feature can classify actions in low-resolution images with a small number of reference vectors.
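A minimal sketch of the descriptor itself, assuming Farneback dense optical flow; the grid layout and bin count are illustrative assumptions rather than the paper's settings.

```python
import cv2
import numpy as np

def direction_histogram_descriptor(prev_gray, gray, mask, grid=(4, 4), bins=8):
    """Concatenated per-region direction histograms of optical flow inside a
    silhouette mask."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    ang[mask == 0] = np.nan            # ignore pixels outside the silhouette
    h, w = gray.shape
    descriptor = []
    for gy in range(grid[0]):
        for gx in range(grid[1]):
            cell = ang[gy * h // grid[0]:(gy + 1) * h // grid[0],
                       gx * w // grid[1]:(gx + 1) * w // grid[1]]
            cell = cell[~np.isnan(cell)]
            hist, _ = np.histogram(cell, bins=bins, range=(0, 2 * np.pi))
            total = hist.sum()
            descriptor.append(hist / total if total else hist.astype(float))
    return np.concatenate(descriptor)  # one motion vector per frame
```

Descriptors collected from training videos could then be clustered (for example with K-means) and test frames assigned by distance to the nearest cluster centers, along the lines the abstract describes.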


international symposium on circuits and systems | 2005

Non-linear learning factor control for statistical adaptive background subtraction algorithm

T. Thongkamwitoon; Supavadee Aramvith; Thanarat H. Chalidabhongse

The background subtraction algorithm has proven to be a very effective technique for automated video surveillance applications. In the statistical approach, the background model is usually estimated using a Gaussian model and is adaptively updated to deal with changes in a dynamic scene. However, most algorithms update the background parameters linearly, so classification errors occur during the background convergence process. In this paper, we present a novel learning factor control for an adaptive background subtraction algorithm. The method adaptively adjusts the rate of adaptation of the background model according to events in the video sequence. Experimental results show that the algorithm improves classification accuracy compared with other known methods.
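A minimal sketch of one update step of a running Gaussian background model with a deviation-dependent learning factor; the particular non-linear schedule below is an assumption for illustration, not the paper's exact control rule.

```python
import numpy as np

def update_background(mean, var, frame, base_alpha=0.01):
    """One update step of a per-pixel Gaussian background model whose learning
    factor grows for pixels that deviate from the model."""
    frame = frame.astype(np.float32)
    diff = np.abs(frame - mean)
    # foreground if the pixel deviates by more than 2.5 sigma
    fg = diff > 2.5 * np.sqrt(var)
    # non-linear learning factor: adapt faster where the model looks stale,
    # but very slowly inside detected foreground (assumed schedule)
    alpha = base_alpha * (1.0 + np.square(diff / (np.sqrt(var) + 1e-6)).clip(0, 10))
    alpha = np.where(fg, base_alpha * 0.1, alpha)
    mean = (1 - alpha) * mean + alpha * frame
    var = (1 - alpha) * var + alpha * np.square(frame - mean)
    return mean, var, fg
```

In use, `mean` would typically be initialised from the first frame and `var` with a constant, then updated frame by frame.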


international symposium on intelligent signal processing and communication systems | 2009

Video processing and analysis for surveillance applications

Supavadee Aramvith; Suree Pumrin; Thanarat H. Chalidabhongse; Supakorn Siddhichai

Research in networked surveillance systems is growing continuously and substantially, partly because of security incidents such as terrorist acts in Thailand and many other countries around the world. This creates a need for intelligent surveillance and monitoring systems that combine real-time image capture, transmission, processing, and understanding of surveillance information; such information is vital to public safety and, indeed, national security. Video Information Processing and Analysis for Surveillance System is a pilot project, developed by Thai researchers, to realize such an intelligent surveillance system. The developed system consists of six parts: super-resolution image reconstruction, suspect detection, suspect tracking, suspect appearance extraction, suspect activity analysis, and level-of-awareness decision. The output of the proposed system comprises suspect features, suspect activity patterns and directions, and the level of suspicion of an incident, which users can further analyze for practical surveillance systems.


international joint conference on computer science and software engineering | 2015

Vehicle tracking in low hue contrast based on CAMShift and background subtraction

Nitipat Sirikuntamat; Shin'ichi Satoh; Thanarat H. Chalidabhongse

This paper proposes a CAMShift-based method to track vehicles on a highway. Continuously Adaptive Mean Shift (CAMShift) is a well-known object tracking algorithm; however, ordinary CAMShift works well only when the tracked object can be identified by hue, i.e., when the difference between the object color and the background is large, which is not the case in vehicle tracking. The objective of our proposed method is to track vehicles on a highway when the hue contrast is low. We incorporate adaptive background subtraction into CAMShift to help localize the object when tracking is lost. The experimental results show a significant improvement in tracking accuracy.
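A minimal sketch of combining CAMShift with a background-subtraction mask; the video file name, initial box, and lost-track threshold are assumptions, and a mixture-of-Gaussians subtractor stands in for the paper's adaptive background model.

```python
import cv2
import numpy as np

cap = cv2.VideoCapture("highway.avi")        # hypothetical input video
bg = cv2.createBackgroundSubtractorMOG2()    # adaptive background subtraction
track_window = (300, 200, 80, 60)            # initial vehicle box (assumed)
term = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)

ok, frame = cap.read()
roi = frame[200:260, 300:380]                # region matching the initial box
hsv_roi = cv2.cvtColor(roi, cv2.COLOR_BGR2HSV)
roi_hist = cv2.calcHist([hsv_roi], [0], None, [180], [0, 180])
cv2.normalize(roi_hist, roi_hist, 0, 255, cv2.NORM_MINMAX)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    fgmask = bg.apply(frame)
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    backproj = cv2.calcBackProject([hsv], [0], roi_hist, [0, 180], 1)
    # with low hue contrast, restrict the back-projection to moving pixels
    backproj = cv2.bitwise_and(backproj, backproj, mask=fgmask)
    if backproj.sum() < 1000:                # lost track (assumed threshold)
        # fall back to the largest foreground blob for re-localisation
        cnts, _ = cv2.findContours(fgmask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
        if cnts:
            track_window = cv2.boundingRect(max(cnts, key=cv2.contourArea))
    ret, track_window = cv2.CamShift(backproj, track_window, term)
    x, y, w, h = track_window
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
```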


international conference on multimedia and expo | 2005

Face Tracking Using Two Cooperative Static and Moving Cameras

Pichai Amnuaykanjanasin; Supavadee Aramvith; Thanarat H. Chalidabhongse

In this paper, we present a new stereo approach for tracking a human face using only two cameras. A pan-tilt camera tracks the person, focused on the face, while a static camera cooperates with the pan-tilt camera as a stereo pair to estimate the 3D position of the face. We propose updating the relative position between the cameras to reflect camera movement. Experimental results show that our proposed system is able to track one person in the camera view and can estimate the 3D path of the person of interest.
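A minimal sketch of the 3D-position step as plain two-view triangulation; the projection matrices and image points are assumed to be known, and the pan-tilt camera's matrix would need to be recomputed whenever the camera moves, as the abstract's relative-pose update describes.

```python
import cv2
import numpy as np

def face_3d_position(P_static, P_pantilt, pt_static, pt_pantilt):
    """Triangulate the 3D face position from the static and pan-tilt views.
    P_static and P_pantilt are 3x4 projection matrices; pt_* are (x, y)
    face coordinates in the respective images."""
    pts1 = np.asarray(pt_static, dtype=np.float64).reshape(2, 1)
    pts2 = np.asarray(pt_pantilt, dtype=np.float64).reshape(2, 1)
    X_h = cv2.triangulatePoints(P_static, P_pantilt, pts1, pts2)
    return (X_h[:3] / X_h[3]).ravel()          # homogeneous -> Euclidean
```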

Collaboration


Dive into Thanarat H. Chalidabhongse's collaborations.

Top Co-Authors

Chayaporn Kaensar

King Mongkut's Institute of Technology Ladkrabang
