Thi-Thanh-Hai Tran
Hanoi University of Science and Technology
Publication
Featured research published by Thi-Thanh-Hai Tran.
international conference on communications | 2014
Thi-Thanh-Hai Tran; Thi-Lan Le; Jeremy Morel
In this paper, we present a novel fall detection system based on the Kinect sensor. The originality of this system is two-fold. Firstly, based on the observation that using all joints to represent human posture is neither pertinent nor robust, because in several postures the Kinect cannot track all joints correctly, we define and compute three features (distance, angle, velocity) on only a few important joints. Secondly, in order to distinguish falls from other activities such as lying down, we propose to use the Support Vector Machine (SVM) technique. To analyze the robustness of the proposed features and joints for fall detection, we performed intensive experiments on 108 videos of 9 activities (4 fall types, 2 fall-like activities, and 3 daily activities). The experimental results show that the proposed system is capable of detecting falls accurately and robustly.
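Per-frame features of the kind named above (distance, angle, velocity on a few joints) might be computed as in the following sketch. This is only an illustration, not the paper's implementation: the choice of head/torso/hip joints, the floor-at-y=0 convention, and the function name are our assumptions.

```python
import math

def fall_features(head, torso, hip, prev_head, dt):
    """Illustrative distance/angle/velocity features from 3-D joint
    positions (x, y, z) in metres; dt is the inter-frame time in seconds."""
    # Distance: height of the head above an assumed floor plane y = 0.
    dist = head[1]
    # Angle: tilt of the torso-hip axis away from the vertical, in degrees.
    ax = (torso[0] - hip[0], torso[1] - hip[1], torso[2] - hip[2])
    norm = math.sqrt(sum(c * c for c in ax)) or 1.0
    angle = math.degrees(math.acos(max(-1.0, min(1.0, abs(ax[1]) / norm))))
    # Velocity: head displacement between consecutive frames.
    vel = math.dist(head, prev_head) / dt
    return dist, angle, vel
```

Such a feature vector would then be fed to an SVM classifier trained on fall and non-fall examples.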
ieee international conference on progress in informatics and computing | 2010
Thi-Thanh-Hai Tran; Thi-Thanh-Mai Nguyen
Hand posture classification is a key problem for many human-computer interaction applications. However, it is not a simple one. In this paper, we propose to decompose the hand posture classification problem into two steps. In the first step, we detect skin regions using a very fast color segmentation algorithm based on a thresholding technique. This segmentation is robust to lighting conditions thanks to a color normalization step using a neural network. In the second step, each skin region is classified into one of the hand posture classes using the cascaded AdaBoost technique. The contributions of this paper are: (i) by applying a color normalization step, the posture classification rate is significantly improved under varying lighting conditions; (ii) the cascaded AdaBoost technique has previously been studied for face detection (a two-class problem); here it is studied and evaluated in more detail for the multi-class problem of hand posture classification.
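A thresholding-based skin test of the general kind described can be sketched per pixel. The chromaticity thresholds below are illustrative values, not the paper's, and brightness normalization here stands in for the neural-network color normalization the paper uses.

```python
def is_skin(r, g, b):
    """Crude per-pixel skin test on brightness-normalized chromaticity.
    Thresholds are illustrative only; real systems tune them to data."""
    s = r + g + b
    if s == 0:
        return False
    rn, gn = r / s, g / s  # normalize away overall brightness
    return 0.36 <= rn <= 0.50 and 0.26 <= gn <= 0.36 and r > g > b
```

Connected regions of skin pixels would then be passed to the cascaded classifier.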
international conference on communications | 2014
Do-Dat Tran; Thi-Lan Le; Thi-Thanh-Hai Tran
The health smart home has become an attractive domain for both research and industry. One of the main issues in the smart home is abnormal event detection, which allows raising alarms and providing assistance as soon as possible. In this paper, we analyze and exploit multimedia information (image and audio) to solve the problem of abnormal event detection. The abnormal events of interest in our study include falling, lying motionless on the floor, staying in the restroom or being outdoors for a long time, abnormal speech (e.g., screaming, shouting), and abnormal non-speech sounds (e.g., breaking and falling sounds). The experimental results, evaluated in terms of false alarm rate and sensitivity, show that audio and image information complement each other, and that using both allows detecting a large number of the abnormal events of interest, which can be exploited in a monitoring system for healthcare applications.
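One simple way the two channels could be combined is to merge timestamped alerts and suppress duplicates reported by both modalities. This is only a sketch; the event representation and the deduplication window are our assumptions, not the paper's fusion scheme.

```python
def fuse_alerts(image_events, audio_events, window=2.0):
    """Merge (timestamp, label) alerts from image and audio channels;
    same-label alerts within `window` seconds are reported once."""
    merged = sorted(image_events + audio_events, key=lambda e: e[0])
    out = []
    for t, label in merged:
        if out and label == out[-1][1] and t - out[-1][0] <= window:
            continue  # same event seen by the other modality
        out.append((t, label))
    return out
```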
intelligent human computer interaction | 2014
Van-Toi Nguyen; Thi-Lan Le; Thi-Thanh-Hai Tran; Rémy Mullot; Vincent Courboulay
Abstract In this paper, we investigate the role of a new descriptor named Kernel Descriptor (KDES), recently introduced in [1], for hand posture recognition. As hand postures have their own color characteristics, we examine the kernel descriptor in different color spaces, such as HSV, RGB, and Lab, to find the most suitable color space for the kernel representation of hand postures. We perform extensive experiments on two datasets. The obtained results are promising (97.3% on the NUS-2 dataset and 85.0% on our dataset). Based on this analysis, the kernel descriptor is highly recommended for hand posture recognition.
international conference on communications | 2014
Quoc-Hung Nguyen; Hai Vu; Thi-Thanh-Hai Tran; Quang-Hoan Nguyen
This paper describes a vision-based system that operates autonomously for both map building and localization tasks. The proposed system assists mapping services in small or mid-scale environments, such as inside a building or a school campus, where conventional positioning data such as GPS or Wi-Fi signals are often not available. Toward this end, the proposed approach uses only visual data. We design an image acquisition system for data collection. On one hand, a robust visual odometry method is used to create routes in the environment. On the other hand, the proposed approach uses the FAB-MAP (Fast Appearance Based Mapping) algorithm, which is arguably the most successful algorithm for recognizing places in large scenarios. Building the route and learning the visited places are offline processes used to represent a map of the environment. Through an image-to-map matching procedure, the images captured at the current view are continuously positioned on the constructed map; this is an online process. The proposed system is evaluated in a corridor environment of a large building. The experimental results show that the constructed route coincides with the ground truth and that image-to-map matching achieves high confidence. The proposed approach is feasible for supporting visually impaired people navigating indoor environments.
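The online image-to-map matching step can be pictured as a nearest-place lookup over appearance descriptors. The sketch below uses cosine similarity over bag-of-words histograms as a toy stand-in; FAB-MAP itself is a probabilistic model, so this is illustrative only.

```python
import math

def match_place(query_hist, map_hists):
    """Return (index, score) of the stored place whose bag-of-words
    histogram is most similar to the query image's histogram."""
    def cos(a, b):
        num = sum(x * y for x, y in zip(a, b))
        den = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
        return num / den if den else 0.0
    scores = [cos(query_hist, h) for h in map_hists]
    best = max(range(len(scores)), key=scores.__getitem__)
    return best, scores[best]
```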
knowledge and systems engineering | 2015
Van-Hung Le; Hai Vu; Thuy Thi Nguyen; Thi-Lan Le; Thi-Thanh-Hai Tran; Michiel Vlaminck; Wilfried Philips; Peter Veelaert
Finding an object in a 3D scene is an important problem in robotics, especially in assistive systems for visually impaired people. In most systems, the first and most important step is detecting an object in a complex environment. In this paper, we propose a method for finding an object using geometrical constraints on depth images from a Kinect. The main advantage of the approach is that it is invariant to lighting conditions and to the color and texture of the objects. Our approach does not require a training phase, reducing the time spent preparing data and learning models. The objects of interest have a simple geometrical structure, such as coffee mugs, bowls, and boxes, and rest on a table. Overall, our approach is faster and more accurate than methods that use 2D features on depth images to train an object model.
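A geometric constraint of the kind described, once the table plane is known, amounts to keeping 3-D points slightly above the plane. The heights and margins below are illustrative assumptions, not the paper's parameters.

```python
def points_above_plane(points, plane_y, margin=0.01, max_h=0.30):
    """Keep (x, y, z) points lying between `margin` and `max_h` metres
    above a horizontal table plane at height `plane_y`, isolating
    tabletop objects from the plane itself and from the background."""
    return [p for p in points
            if plane_y + margin < p[1] <= plane_y + max_h]
```

The surviving points would then be clustered into candidate objects such as mugs or boxes.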
The National Foundation for Science and Technology Development (NAFOSTED) Conference on Information and Computer Science | 2015
Thi-Lan Le; Thi-Thanh-Hai Tran
Recently, abnormal event detection has attracted great research attention because of its wide range of applications. In this paper, we propose a hybrid method combining tracking output and motion templates. This method consists of two steps: (1) object detection, localization, and tracking, and (2) abnormal event detection. Our contributions in this paper are three-fold. Firstly, we propose a method that applies the HOG-SVM detector only on extended regions detected by background subtraction. This method takes advantage of both the background subtraction method (fast computation) and the HOG-SVM detector (reliable detection). Secondly, we perform multiple-object tracking based on the HOG descriptor. The HOG descriptor, computed in the detection phase, is reused in the observation-track association phase; it is more robust than the usual grayscale (or color) histogram-based descriptors. Finally, we propose a hybrid method for abnormal event detection that allows removing several false detection cases.
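The first contribution, running the detector only on extended motion regions, can be sketched as below. The expansion ratio, image size, and the `detector` callback are our placeholders; the real system would call an actual HOG-SVM detector on each cropped region.

```python
def detect_on_motion_regions(boxes, detector, expand=0.2, img_w=640, img_h=480):
    """Run `detector` (a hypothetical HOG-SVM callback taking a box
    (x, y, w, h) and returning detections) only on background-subtraction
    boxes, each expanded by `expand` per side and clipped to the image."""
    hits = []
    for x, y, w, h in boxes:
        dx, dy = int(w * expand), int(h * expand)
        x0, y0 = max(0, x - dx), max(0, y - dy)
        x1, y1 = min(img_w, x + w + dx), min(img_h, y + h + dy)
        hits.extend(detector((x0, y0, x1 - x0, y1 - y0)))
    return hits
```

Restricting the sliding-window search this way is what gives the speed of background subtraction with the reliability of the learned detector.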
knowledge and systems engineering | 2015
Van-Toi Nguyen; Thi-Thanh-Hai Tran; Thi-Lan Le; Rémy Mullot; Vincent Courboulay
Visual interpretation of hand gestures for human-system interaction in general, and human-robot interaction in particular, is becoming a hot topic in the computer vision and robotics fields. Hand gestures provide a very intuitive and efficient means of communication and enhance its flexibility. Although a number of works have been proposed for hand gesture recognition, their use in real human-robot interaction is still limited. Based on our previous work on hand detection and hand gesture recognition, we have built a fully automatic hand gesture recognition system and applied it in a human-robot interaction application: a service robot in a library. In this paper, we describe this application in detail, from user requirement analysis to system deployment and experiments with end users.
symposium on information and communication technology | 2013
Van-Toi Nguyen; Thuy Thi Nguyen; Rémy Mullot; Thi-Thanh-Hai Tran; Hung Le
Hand posture recognition has important applications in sign language, human-machine interfaces, etc. In most such systems, the first and most important step is hand detection. This paper presents a hand detection method based on internal features in an active boosting-based learning framework. The use of efficient Haar-like, local binary pattern, and local orientation histogram features allows fast computation of informative hand features, handling a great variety of hand appearances without background interference. Interactive boosting-based online learning allows efficient training and improvement of the detector. Experimental results show that the proposed method outperforms conventional methods on video data with complex backgrounds while using a smaller number of training samples. The proposed method is reliable for hand detection in a hand posture recognition system.
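A boosted detector of this kind is typically evaluated as a cascade: cheap stages reject most windows early, and only promising windows reach the later stages. The sketch below is a toy version; the stage weights and thresholds are invented for illustration, not learned as in the paper.

```python
def cascade_classify(features, stages):
    """Evaluate a detection cascade. Each stage is (weights, threshold);
    a window is rejected as soon as one stage's weighted feature score
    falls below its threshold, otherwise it is accepted as a hand."""
    for weights, thresh in stages:
        score = sum(w * f for w, f in zip(weights, features))
        if score < thresh:
            return False  # early rejection: no later stage is evaluated
    return True
```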
international conference on computing management and telecommunications | 2013
Thi-Lan Le; Van-Ngoc Nguyen; Thi-Thanh-Hai Tran; Van-Toi Nguyen; Thi-Thuy Nguyen