
Publication


Featured research published by Thi-Lan Le.


Autonomic and Trusted Computing | 2013

Leaf based plant identification system for Android using SURF features in combination with Bag of Words model and supervised learning

Quang-Khue Nguyen; Thi-Lan Le; Ngoc-Hai Pham

Although many methods have been proposed for automatic plant identification, very few plant identification applications exist on the market. To the best of our knowledge, Leafsnap [1] is the first automatic plant identification application. However, this application is dedicated to iOS users and works only with tree species of the Northeastern United States. Today, the huge number of Android users makes Android an attractive market for plant identification. The contribution of this paper is two-fold. Firstly, we propose a leaf based plant identification method using SURF features in combination with a Bag of Words model and supervised learning. This method obtains better results than existing methods on the same database. Secondly, we develop a leaf based plant identification system for Android.
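The Bag of Words step this pipeline rests on (local SURF descriptors quantized against a learned visual-word codebook to form a histogram for the classifier) can be sketched as follows. This is an illustrative reconstruction, not the authors' code; the toy 2-D descriptors and two-word codebook are invented for the example.

```python
import numpy as np

def bow_histogram(descriptors, codebook):
    """Assign each local descriptor to its nearest visual word and
    return an L1-normalized Bag-of-Words histogram."""
    # pairwise distances, shape (n_descriptors, n_words)
    d = np.linalg.norm(descriptors[:, None, :] - codebook[None, :, :], axis=2)
    words = d.argmin(axis=1)                          # nearest word per descriptor
    hist = np.bincount(words, minlength=len(codebook)).astype(float)
    return hist / hist.sum()                          # normalize for comparability

# toy data: a 2-word codebook in a 2-D descriptor space
codebook = np.array([[0.0, 0.0], [10.0, 10.0]])
descs = np.array([[0.1, 0.2], [9.8, 10.1], [10.2, 9.9]])
```

In the full system each image's histogram would be computed this way and fed to a supervised classifier trained on labeled leaf images.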


Conference on Multimedia Modeling | 2008

A query language combining object features and semantic events for surveillance video retrieval

Thi-Lan Le; Monique Thonnat; Alain Boucher; Francois Bremond

In this paper, we propose a novel query language for video indexing and retrieval that (1) enables queries both at the image level and at the semantic level, (2) enables users to define their own scenarios based on semantic events, and (3) retrieves videos with both exact matching and similarity matching. For a query language, four main issues must be addressed: data modeling, query formulation, query parsing and query matching. In this paper we focus on and contribute to data modeling, query formulation and query matching. We currently use color histograms and SIFT features at the image level and 10 types of events at the semantic level. We have tested the proposed query language for the retrieval of surveillance videos of a metro station. In our experiments the database contains more than 200 indexed physical objects and 48 semantic events. The results using different types of queries are promising.
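A minimal sketch of the two matching modes the query language combines: exact matching on a semantic event label and similarity matching on an image-level feature (here a color histogram compared by histogram intersection). The dictionary-based query format, the `match_query` helper, and the 0.8 threshold are assumptions for illustration, not the paper's actual query syntax.

```python
import numpy as np

def match_query(query, indexed_objects, sim_threshold=0.8):
    """Return (object id, similarity) pairs that pass both the exact
    semantic-event filter and the image-level similarity test."""
    results = []
    for obj in indexed_objects:
        if query.get("event") and obj["event"] != query["event"]:
            continue                                  # exact matching, semantic level
        # histogram intersection similarity, image level
        sim = float(np.minimum(obj["hist"], query["hist"]).sum())
        if sim >= sim_threshold:
            results.append((obj["id"], sim))
    return sorted(results, key=lambda r: -r[1])

# toy index: two tracked objects with a color histogram and an event label
objects = [
    {"id": 1, "event": "inside_zone", "hist": np.array([0.5, 0.5])},
    {"id": 2, "event": "fighting", "hist": np.array([0.4, 0.6])},
]
query = {"event": "inside_zone", "hist": np.array([0.4, 0.6])}
```

A real system would replace the toy histograms with the paper's color histograms or SIFT features and the event strings with its recognized semantic events.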


Conference on Multimedia Modeling | 2007

Subtrajectory-based video indexing and retrieval

Thi-Lan Le; Alain Boucher; Monique Thonnat

This paper proposes an approach for retrieving videos based on object trajectories and subtrajectories. First, trajectories are segmented into subtrajectories according to the characteristics of the movement. Efficient trajectory segmentation relies on a symbolic representation and uses selected control points along the trajectory. The control points, selected at locations of high curvature, capture the trajectory's various geometric and syntactic features. This symbolic representation, unlike the initial numeric representation, is invariant to scaling, translation and rotation. Then, in order to compare trajectories based on their subtrajectories, several matching strategies are possible, according to the user's retrieval goal. Moreover, trajectories can be represented at the numeric, symbolic or semantic level, with the possibility to move easily from one representation to another. This approach for indexing and retrieval has been tested on a database containing 2500 trajectories, with promising results.
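The segmentation step can be illustrated with a turning-angle criterion for "high curvature": a point becomes a control point, and the trajectory is cut there, when the direction change exceeds a threshold. The 45° threshold and the simple angle test are illustrative choices, not the paper's exact symbolic procedure.

```python
import numpy as np

def segment_trajectory(points, angle_thresh=np.pi / 4):
    """Split a 2-D trajectory at high-curvature control points: a point
    is a break point when the turning angle between the incoming and
    outgoing direction exceeds angle_thresh."""
    breaks = []
    for i in range(1, len(points) - 1):
        v1 = points[i] - points[i - 1]
        v2 = points[i + 1] - points[i]
        cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
        if np.arccos(np.clip(cos, -1.0, 1.0)) > angle_thresh:
            breaks.append(i)
    # cut the trajectory into subtrajectories at the break points
    idx = [0] + breaks + [len(points) - 1]
    return [points[idx[k]:idx[k + 1] + 1] for k in range(len(idx) - 1)]
```

For an L-shaped path the single 90° corner is detected and the trajectory splits into two straight subtrajectories, which would then be encoded symbolically for scale-, translation- and rotation-invariant matching.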


Content-Based Multimedia Indexing | 2008

A framework for surveillance video indexing and retrieval

Thi-Lan Le; Alain Boucher; Monique Thonnat; Francois Bremond

We propose a framework for surveillance video indexing and retrieval. In this paper, we focus on the following features: (1) combining recognized video contents (output from a video analysis module) with visual words (computed over all the raw video frames) to enrich the video indexing in a complementary way; using this scheme, users can make queries about objects of interest even when the video analysis output is not available; (2) supporting interactive feature generation (currently color histogram and trajectory), which lets users make queries at different levels according to the a priori available information and the expected retrieval results; (3) developing a relevance feedback module adapted to the proposed indexing scheme and the specific properties of surveillance videos. Results emphasizing these three aspects demonstrate a good integration of video analysis with interactive indexing and retrieval for video surveillance.
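The abstract does not spell out the relevance feedback formula; as one plausible sketch, the classic Rocchio update over feature vectors looks like this. The alpha/beta/gamma weights are conventional defaults, not the authors' values.

```python
import numpy as np

def rocchio_update(query, relevant, non_relevant,
                   alpha=1.0, beta=0.75, gamma=0.15):
    """Rocchio-style relevance feedback: move the query feature vector
    toward the centroid of user-marked relevant results and away from
    the centroid of non-relevant ones."""
    q = alpha * query
    if len(relevant):
        q = q + beta * np.mean(relevant, axis=0)
    if len(non_relevant):
        q = q - gamma * np.mean(non_relevant, axis=0)
    return q
```

In a surveillance setting the vectors would be the indexing features above (color histograms, trajectory features), updated after each round of user feedback.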


Journal of Sensors | 2016

Real-Time Obstacle Detection System in Indoor Environment for the Visually Impaired Using Microsoft Kinect Sensor

Huy-Hieu Pham; Thi-Lan Le; Nicolas Vuillerme

Any mobility aid for visually impaired people should be able to accurately detect and warn about nearby obstacles. In this paper, we present a method for a support system to detect obstacles in indoor environments based on the Kinect sensor and 3D image processing. Color and depth data of the scene in front of the user are collected using the Kinect through OpenNI, the standard framework for 3D sensing, and processed with the PCL library to extract accurate 3D information about the obstacles. The experiments have been performed on a dataset covering multiple indoor scenarios and different lighting conditions. Results show that our system is able to accurately detect four types of obstacle: walls, doors, stairs, and a residual class that covers loose obstacles on the floor. Specifically, walls and loose obstacles on the floor are detected in practically all cases, whereas doors are detected in 90.69% of 43 positive image samples. For step detection, we correctly detected upstairs in 97.33% of 75 positive images, while the downstairs detection rate is lower at 89.47% of 38 positive images. Our method further allows the computation of the distance between the user and the obstacles.
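The core primitive behind separating walls and floors from loose obstacles in Kinect depth data is plane extraction; PCL's plane segmentation is built on RANSAC, which can be sketched as below. The distance threshold and iteration count are illustrative, not the paper's parameters.

```python
import numpy as np

def ransac_plane(points, n_iter=200, dist_thresh=0.02, rng=None):
    """Find the dominant plane (e.g. floor or wall) in a 3-D point
    cloud with RANSAC; returns a boolean inlier mask."""
    rng = rng or np.random.default_rng(0)
    best_inliers = np.zeros(len(points), dtype=bool)
    for _ in range(n_iter):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(normal)
        if norm < 1e-9:
            continue                      # degenerate (collinear) sample
        normal /= norm
        dist = np.abs((points - p0) @ normal)   # point-to-plane distances
        inliers = dist < dist_thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return best_inliers
```

After removing the dominant planes, the remaining points cluster into the residual obstacle class, and the nearest cluster distance gives the user warning range.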


International Conference on Communications | 2014

An analysis on human fall detection using skeleton from Microsoft Kinect

Thi-Thanh-Hai Tran; Thi-Lan Le; Jeremy Morel

In this paper, we present a novel fall detection system based on the Kinect sensor. The originality of this system is two-fold. Firstly, based on the observation that using all joints to represent human posture is neither pertinent nor robust, because in several human postures the Kinect is not able to track all joints correctly, we define and compute three features (distance, angle, velocity) on only a few important joints. Secondly, in order to distinguish falls from other activities such as lying down, we propose to use a Support Vector Machine. To analyze the robustness of the proposed features and joints for fall detection, we have performed extensive experiments on 108 videos of 9 activities (4 falls, 2 fall-like actions and 3 daily activities). The experimental results show that the proposed system is capable of detecting falls accurately and robustly.
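The three per-frame features can be sketched as below. The choice of head and hip as the "important joints", the up axis, and the exact formulas are illustrative assumptions; the resulting feature vectors would then be fed to an SVM classifier as in the paper.

```python
import numpy as np

def posture_features(head, hip, prev_head, dt, floor_height=0.0):
    """Three posture features from two reliable skeleton joints:
    distance of the head to the floor, angle of the head-hip (torso)
    axis to the vertical, and head velocity between frames."""
    distance = head[1] - floor_height               # y is the up axis here
    torso = head - hip
    cos = torso[1] / np.linalg.norm(torso)
    angle = np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))
    velocity = np.linalg.norm(head - prev_head) / dt
    return distance, angle, velocity
```

During a fall the distance drops sharply, the torso angle swings toward 90°, and the velocity spikes, which is what makes the triple separable from lying down slowly.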


Conference on Image and Video Retrieval | 2009

Appearance based retrieval for tracked objects in surveillance videos

Thi-Lan Le; Monique Thonnat; Alain Boucher; Francois Bremond

This paper focuses on indexing and retrieval at the object level for video surveillance. Object retrieval is difficult due to imprecise object detection and tracking. In the indexing phase, a new representative blob detection method chooses the most relevant blobs that represent an object's various visual aspects. In the retrieval phase, a new robust object matching method successfully retrieves objects even when they are not perfectly tracked. We validate our approach on videos from a subway monitoring project. The representative blob detection method improves on the state of the art. The retrieval results show that the object matching method is robust even when working with imprecise object tracking algorithms.
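The abstract does not detail the representative blob detection method; one simple stand-in with the same intent, covering an object's distinct visual aspects with a few blobs, is greedy farthest-point sampling over blob appearance features. This is an illustrative substitute, not the paper's algorithm.

```python
import numpy as np

def representative_blobs(features, k=3):
    """Greedily pick k blobs whose appearance features are maximally
    spread out, so the chosen blobs cover the object's distinct
    visual aspects."""
    chosen = [0]                                      # seed with the first blob
    for _ in range(k - 1):
        # each blob's distance to its nearest already-chosen blob
        d = np.min(np.linalg.norm(features[:, None] - features[chosen][None],
                                  axis=2), axis=1)
        chosen.append(int(d.argmax()))                # farthest from all chosen
    return chosen
```

At retrieval time, matching a query against these few representatives rather than every tracked blob is what keeps the matching robust to tracking errors.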


Proceedings of the 2nd International Workshop on Environmental Multimedia Retrieval | 2015

Complex Background Leaf-based Plant Identification Method Based on Interactive Segmentation and Kernel Descriptor

Thi-Lan Le; Nam-Duong Duong; Van-Toi Nguyen; Hai Vu; Van-Nam Hoang; Thi Thanh-Nhan Nguyen

This paper presents a plant identification method from images of a simple leaf against a complex background. In order to extract the leaf from the image, we first develop an interactive image segmentation method for mobile devices with tactile screens. This allows the leaf region to be separated from the complex background in a few manipulations. Then, we extract the kernel descriptor from the leaf region to build the leaf representation. Since the leaf images may be taken at different scales and rotations, we propose two improvements in kernel descriptor extraction that make the kernel descriptor robust to scale and rotation. Experiments carried out on a subset of ImageClef 2013 show an important increase in performance compared to the original kernel descriptor with automatic image segmentation.


International Conference on Image Processing | 2014

Kernel descriptor based plant leaf identification

Thi-Lan Le; Duc-Tuan Tran; Ngoc-Hai Pham

Plant identification is an interesting and challenging research topic due to the variety of plant species. Among the different parts of the plant, the leaf is widely used for plant identification because it is usually the most abundant type of data available in botanical reference collections and the easiest to obtain in field studies. A number of works have addressed plant leaf identification; however, their performance is still far from user expectations. In this paper, we propose a new plant leaf identification method based on the kernel descriptor (KDES), recently proposed by Bo et al. and proved to be robust for different object recognition problems. Experimental results obtained on two plant leaf datasets show that this approach outperforms the state of the art.


IEEE International Conference on Automatic Face and Gesture Recognition | 2015

A new hand representation based on kernels for hand posture recognition

Van-Toi Nguyen; Thi-Lan Le; Thanh-Hai Tran; Rémy Mullot; Vincent Courboulay

Hand posture recognition is an extremely active research topic in Computer Vision and Robotics, with many applications ranging from automatic sign language recognition to human-system interaction. Recently, a new descriptor for object representation based on the kernel method (KDES) has been proposed. While this descriptor has been shown to be efficient for hand posture representation, across-the-board use of KDES for hand posture recognition has some drawbacks. This paper proposes three improvements to KDES to make it more robust to scale change, rotation, and differences in the object structure. First, the gradient vector inside the gradient kernel is normalized, making gradient KDES invariant to rotation. Second, patches with adaptive size are created, to make hand representation more robust to changes in scale. Finally, for patch-level feature pooling, a new pyramid structure is proposed, which is more suitable for hand structure. These innovations are tested on three datasets; the results show an increase in recognition rate over the original method, from 84.4% to 91.2%.
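The first improvement (rotation invariance via gradient normalization) can be illustrated as follows: if every gradient orientation in a patch is expressed relative to the patch's dominant orientation, rotating the hand in the image plane leaves the descriptor unchanged. Using the magnitude-weighted circular mean as the dominant orientation is an assumption for this sketch, not necessarily the paper's exact normalization.

```python
import numpy as np

def rotation_normalized_orientations(angles, magnitudes):
    """Re-express patch gradient orientations relative to the dominant
    (magnitude-weighted) orientation, so an in-plane rotation of the
    patch leaves the result unchanged."""
    # dominant orientation = magnitude-weighted circular mean
    dominant = np.arctan2((magnitudes * np.sin(angles)).sum(),
                          (magnitudes * np.cos(angles)).sum())
    return (angles - dominant) % (2 * np.pi)
```

Rotating the whole patch shifts every gradient angle and the dominant orientation by the same amount, so the normalized angles, and hence the kernel descriptor built from them, stay the same.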

Collaboration


Dive into Thi-Lan Le's collaborations.

Top Co-Authors

Hai Vu (Hanoi University of Science and Technology)
Thanh-Hai Tran (Hanoi University of Science and Technology)
Thi-Thanh-Hai Tran (Hanoi University of Science and Technology)
Van-Toi Nguyen (Hanoi University of Science and Technology)
Trung-Kien Dao (Hanoi University of Science and Technology)
Rémy Mullot (University of La Rochelle)
Ngoc-Hai Pham (Hanoi University of Science and Technology)
Thi Thanh Hai Tran (Hanoi University of Science and Technology)