Nikom Suvonvorn
Prince of Songkla University
Publication
Featured research published by Nikom Suvonvorn.
knowledge, information, and creativity support systems | 2010
Kittasil Silanon; Nikom Suvonvorn
In this paper, we propose a system that provides two-dimensional user input, like a general mouse device, for controlling applications. The method is based on hand movement analysis using image processing techniques. A Haar-like feature detector with a cascade of boosted classifiers is applied for hand detection; the hand is then tracked by skin color using CamShift. Extracted hand features are computed and used to recognize commands via a finite state machine. We evaluated the performance of the system under real-time constraints in a real environment with a demonstration application.
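The command-recognition stage can be sketched as a small finite state machine mapping a stream of hand observations to mouse commands. The states, observations ("open", "fist"), and transitions below are illustrative assumptions, not the paper's actual design:

```python
# A minimal sketch of an FSM-based command recognizer. The feature
# labels and transitions are invented for illustration only.

MOUSE_FSM = {
    # (state, observation) -> (next state, emitted command)
    ("idle", "open"): ("tracking", "move_cursor"),
    ("tracking", "open"): ("tracking", "move_cursor"),
    ("tracking", "fist"): ("pressed", "button_down"),
    ("pressed", "fist"): ("pressed", "drag"),
    ("pressed", "open"): ("tracking", "button_up"),
}

def run_fsm(observations, state="idle"):
    """Feed per-frame hand observations through the FSM, collecting commands."""
    commands = []
    for obs in observations:
        state, cmd = MOUSE_FSM.get((state, obs), (state, None))
        if cmd:
            commands.append(cmd)
    return commands

print(run_fsm(["open", "open", "fist", "fist", "open"]))
# -> ['move_cursor', 'move_cursor', 'button_down', 'drag', 'button_up']
```

Unknown (state, observation) pairs leave the state unchanged, which makes the recognizer robust to spurious detections between commands.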
digital information and communication technology and its applications | 2014
Kittasil Silanon; Nikom Suvonvorn
In this paper, we introduce a finger-spelling recognition system. The objective is to help deaf or non-vocal persons improve their finger-spelling skills. Most methods proposed in this field are based on hand posture estimation techniques. We propose an alternative, flexible method based on fuzzy finger shape and hand appearance analysis. Using the depth image, the hand is extracted and tracked with an active-contour-like method. Its features, such as finger shape and hand appearance, are encoded as a chain code, which is input to the American finger-spelling recognizer using a vote method. The performance of the system is tested in a real-time environment, achieving a recognition rate of around 70%.
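The chain-code representation mentioned above can be sketched as a standard 8-direction Freeman encoding of a contour; the paper's exact encoding and the downstream vote method may differ:

```python
# Sketch: encode consecutive unit steps along a contour as 8-direction
# Freeman chain-code symbols (standard convention; illustrative only).

DIRS = {(1, 0): 0, (1, -1): 1, (0, -1): 2, (-1, -1): 3,
        (-1, 0): 4, (-1, 1): 5, (0, 1): 6, (1, 1): 7}

def chain_code(points):
    """Map each pair of consecutive contour points to a direction symbol."""
    return [DIRS[(x1 - x0, y1 - y0)]
            for (x0, y0), (x1, y1) in zip(points, points[1:])]

contour = [(0, 0), (1, 0), (2, 0), (2, 1), (1, 1)]
print(chain_code(contour))  # -> [0, 0, 6, 4]
```

A recognizer can then match the symbol sequence against stored templates, e.g. by voting over per-letter chain-code models.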
DaEng | 2014
Nattapon Noorit; Nikom Suvonvorn
High-level human activity recognition is an important method for automatic event detection and recognition applications such as surveillance systems and patient monitoring systems. In this paper, we propose a human activity recognition method based on an FSM model. The basic actions with their properties for each person in the area of interest are extracted and computed. The action stream with related features (movement, referenced location) is recognized using a predefined FSM recognizer modeled on rational activities. Our experimental results show good recognition accuracy (86.96% on average).
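An FSM recognizer over an action stream with location features can be sketched as follows; the activity, states, and transitions here are invented examples, not the paper's actual models:

```python
# Sketch: a rational-activity FSM consuming (action, location) events.
# The "patient goes to rest" activity below is a hypothetical example.

PATIENT_REST_FSM = {
    ("start", ("walk", "room")): "approaching",
    ("approaching", ("walk", "bed")): "at_bed",
    ("at_bed", ("sit", "bed")): "resting",
    ("resting", ("lie", "bed")): "resting",
}

ACCEPTING = {"resting"}

def recognize(stream):
    """Return True if the event stream drives the FSM into an accepting state."""
    state = "start"
    for event in stream:
        state = PATIENT_REST_FSM.get((state, event), state)
    return state in ACCEPTING

stream = [("walk", "room"), ("walk", "bed"), ("sit", "bed")]
print(recognize(stream))  # -> True
```

One such FSM would be defined per activity of interest, and the action stream is fed to all of them in parallel.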
international conference on digital image processing | 2010
Nattapon Noorit; Nikom Suvonvorn; Montri Karnchanadecha
The identification of basic human actions plays an important role in recognizing human activities in complex scenes. In this paper we propose an approach for automatic human action recognition. A parametric model of the human is extracted from image sequences using motion/texture-based human detection and tracking. Action features from this model are carefully defined in the action interaction representation and used for the recognition process. The performance of the proposed method is tested experimentally using datasets captured in indoor environments.
multimedia signal processing | 2008
Nikom Suvonvorn
On-line video processing for surveillance systems is a very challenging problem. The computational complexity of video analysis algorithms and the massive amount of data to be analyzed must be handled under real-time constraints. Moreover, the system needs to satisfy different criteria of the application domain, such as scalability, re-configurability, and quality of service. In this paper we propose a flexible and efficient video analysis framework for surveillance systems based on a component-based architecture. Video acquisition, re-configurable video analysis, and video storage are some of the basic components. Component execution and inter-component synchronization are designed to support multi-core and multi-processor architectures with a multi-threaded implementation on the .NET Framework. Experimental results on real-time motion tracking are presented and discussed.
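The component pipeline with inter-component synchronization can be sketched as threads connected by queues. The paper targets .NET; this Python sketch only mirrors the structure, with strings standing in for frames and analysis results:

```python
import queue
import threading

# Sketch: acquisition -> analysis -> storage components, each a thread,
# synchronized by bounded queues with a None end-of-stream sentinel.

def acquisition(out_q, n_frames=5):
    for i in range(n_frames):
        out_q.put(f"frame{i}")         # stand-in for a captured frame
    out_q.put(None)                    # signal end of stream

def analysis(in_q, out_q):
    while (frame := in_q.get()) is not None:
        out_q.put(f"motion({frame})")  # stand-in for motion tracking
    out_q.put(None)

def storage(in_q, results):
    while (item := in_q.get()) is not None:
        results.append(item)           # stand-in for writing to disk

q1, q2, results = queue.Queue(), queue.Queue(), []
threads = [threading.Thread(target=acquisition, args=(q1,)),
           threading.Thread(target=analysis, args=(q1, q2)),
           threading.Thread(target=storage, args=(q2, results))]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(results[0], len(results))  # -> motion(frame0) 5
```

Because each stage owns its own thread, the stages run concurrently on multi-core hardware, and the queues provide the inter-component synchronization.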
DaEng | 2014
Kittasil Silanon; Nikom Suvonvorn
This paper presents a real-time estimation method for the 3D trajectory of fingertips. Our approach is based on depth vision with the Kinect depth sensor. The hand is extracted using a hand detector and the depth image from the sensor. Fingertips are located by analyzing the curvature of the hand contour. The fingertip detector is implemented using an active contour concept that combines energies of continuity, curvature, direction, depth, and distance. The fingertip trajectory is filtered to reduce tracking error. The method is evaluated on finger movement sequences. In addition, its capabilities are demonstrated in a real-time Human–Computer Interaction (HCI) application.
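The curvature term of such a detector can be sketched by measuring the angle at each contour point between its neighbours k steps away; sharp angles mark fingertip candidates. This shows only the curvature energy, not the paper's full combination with depth and continuity terms, and the toy contour and threshold are invented:

```python
import math

# Sketch: fingertip candidates as high-curvature contour points,
# approximated by the angle between vectors to neighbours k steps away.

def curvature_angle(contour, i, k=2):
    """Angle (degrees) at contour[i] between its k-th neighbours."""
    x, y = contour[i]
    xa, ya = contour[(i - k) % len(contour)]
    xb, yb = contour[(i + k) % len(contour)]
    va, vb = (xa - x, ya - y), (xb - x, yb - y)
    dot = va[0] * vb[0] + va[1] * vb[1]
    cos = dot / (math.hypot(*va) * math.hypot(*vb))
    return math.degrees(math.acos(max(-1.0, min(1.0, cos))))

def fingertip_candidates(contour, max_angle=60.0, k=2):
    """Indices whose curvature angle is sharper than max_angle."""
    return [i for i in range(len(contour))
            if curvature_angle(contour, i, k) < max_angle]

# Toy closed contour with one sharp spike at index 2.
spike = [(0, 0), (1, 0), (2, 4), (3, 0), (4, 0), (3, -1), (2, -2), (1, -1)]
print(round(curvature_angle(spike, 2)))   # -> 53  (sharp tip)
print(fingertip_candidates(spike))        # -> [2]
```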
international joint conference on computer science and software engineering | 2015
Teerasak Kroputaponchai; Nikom Suvonvorn
Digital video is fundamental to surveillance systems. However, it can easily be modified by attackers, which may drastically decrease trust in the surveillance system. In this paper, we introduce a video authentication scheme that detects such modifications, thereby increasing confidence and allowing authenticated videos to be admitted as evidence in a court of law. We propose a signature-based video authentication method using the histogram of oriented gradients of selected DCT coefficients from the frequency domain of video frames. The efficiency of our technique depends on an optimal threshold: a high threshold is needed to reject all tampered videos, and further improvement is required to tolerate compression.
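The verification step can be sketched as follows: a per-frame signature is an orientation histogram of gradients, and a frame is accepted when the distance between stored and recomputed signatures falls below a threshold. The gradient pairs below stand in for the paper's DCT-domain gradients, and the L1 distance and threshold value are illustrative assumptions:

```python
import math

# Sketch: HOG-style signature plus threshold-based authenticity check.

def hog_signature(grads, n_bins=8):
    """Magnitude-weighted histogram of gradient orientations, normalized."""
    hist = [0.0] * n_bins
    for gx, gy in grads:
        angle = math.atan2(gy, gx) % (2 * math.pi)
        hist[int(angle / (2 * math.pi) * n_bins) % n_bins] += math.hypot(gx, gy)
    total = sum(hist) or 1.0
    return [h / total for h in hist]

def authentic(sig_stored, sig_frame, threshold=0.1):
    """Accept the frame when the L1 distance between signatures is small."""
    return sum(abs(a - b) for a, b in zip(sig_stored, sig_frame)) < threshold

grads = [(1, 0), (0, 1), (1, 1)]
sig = hog_signature(grads)
print(authentic(sig, sig))                       # -> True  (unmodified)
print(authentic(sig, hog_signature([(0, -1)])))  # -> False (tampered)
```

The threshold trades off tamper rejection against tolerance to benign changes such as recompression, which matches the trade-off noted in the abstract.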
Computational Intelligence and Neuroscience | 2018
Pongsagorn Chalearnnetkul; Nikom Suvonvorn
Vision-based action recognition encounters different challenges in practice, including recognition of the subject from any viewpoint, processing of data in real time, and offering privacy in a real-world setting. Even recognizing profile-based human actions, a subset of vision-based action recognition, is a considerable challenge in computer vision which forms the basis for an understanding of complex actions, activities, and behaviors, especially in healthcare applications and video surveillance systems. Accordingly, we introduce a novel method to construct a layer feature model for a profile-based solution that allows the fusion of features for multiview depth images. This model enables recognition from several viewpoints with low complexity at a real-time running speed of 63 fps for four profile-based actions: standing/walking, sitting, stooping, and lying. The experiment using the Northwestern-UCLA 3D dataset resulted in an average precision of 86.40%. With the i3DPost dataset, the experiment achieved an average precision of 93.00%. With the PSU multiview profile-based action dataset, a new dataset for multiple viewpoints which provides profile-based action RGBD images built by our group, we achieved an average precision of 99.31%.
2016 International Conference on Robotics and Machine Vision | 2017
Chonthisa Wateosot; Nikom Suvonvorn
Fighting detection is an important security problem, aimed at preventing criminal or undesirable events in public places. Many computer vision studies have addressed the detection of specific events in crowded scenes. In this paper we focus on fighting detection using a social-interaction-based Interaction Energy Force (IEF). The method uses low-level features without object extraction and tracking. The interaction force is modeled using the magnitude and direction of optical flow. A fighting factor is derived from this model to detect fighting events by thresholding. An energy map of the interaction force is also presented to identify the corresponding events. The evaluation is performed on the NUSHGA and BEHAVE datasets. The results show high accuracy under various conditions.
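The thresholding idea can be sketched with a toy interaction score over optical-flow vectors: pairs of strong, opposing flows contribute high energy, while coherent motion contributes little. The energy formula and threshold below are illustrative, not the paper's exact IEF formulation:

```python
import math

# Sketch: aggregate pairwise "interaction energy" from flow vectors,
# weighting combined magnitudes by how opposed the directions are.

def interaction_energy(flows):
    energy = 0.0
    for i in range(len(flows)):
        for j in range(i + 1, len(flows)):
            (u1, v1), (u2, v2) = flows[i], flows[j]
            m1, m2 = math.hypot(u1, v1), math.hypot(u2, v2)
            if m1 == 0 or m2 == 0:
                continue
            cos = (u1 * u2 + v1 * v2) / (m1 * m2)
            opposition = (1 - cos) / 2   # 1 when head-on, 0 when parallel
            energy += opposition * m1 * m2
    return energy

def is_fighting(flows, threshold=5.0):
    return interaction_energy(flows) > threshold

calm = [(1, 0), (1, 0.1), (0.9, 0)]          # coherent crowd motion
clash = [(3, 0), (-3, 0), (2, 1), (-2, -1)]  # fast opposing motion
print(is_fighting(calm), is_fighting(clash))  # -> False True
```

Computed per region, the same score yields the energy map mentioned in the abstract.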
international joint conference on computer science and software engineering | 2015
Nattapon Noorit; Nikom Suvonvorn
Human activity recognition plays an important role in automatic anomaly detection and recognition applications such as surveillance systems and patient monitoring systems. In this paper, we propose a human activity recognition method based on a graph similarity measurement (GSM) technique. The basic actions with their movements for each person in the area of interest are extracted and computed. Action sequences with movement features from a labelled dataset are used as base data to establish a statistical activity graph model, which is used to calculate similarity between graphs. The system achieves good results (sensitivity and specificity are about 80% for the first test activity and about 90% for the second).
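Graph-based matching of an observed action sequence against stored activity models can be sketched with transition-edge sets and a Jaccard similarity; this is a simple stand-in for the paper's statistical graph similarity measure, and the activity names are invented:

```python
# Sketch: each activity model is the set of (action -> action) transition
# edges from labelled sequences; recognition picks the most similar model.

def to_edges(seq):
    """Directed transition edges of an action sequence."""
    return set(zip(seq, seq[1:]))

def similarity(edges_a, edges_b):
    """Jaccard index over edge sets."""
    union = len(edges_a | edges_b) or 1
    return len(edges_a & edges_b) / union

models = {
    "patient_rest": to_edges(["walk", "sit", "lie"]),
    "wandering": to_edges(["walk", "stand", "walk", "stand"]),
}

observed = to_edges(["walk", "sit", "lie", "lie"])
best = max(models, key=lambda m: similarity(models[m], observed))
print(best)  # -> patient_rest
```

A statistical variant would weight edges by transition frequency in the labelled data rather than treating them as a plain set.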