Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Hai Vu is active.

Publication


Featured research published by Hai Vu.


Medical Image Computing and Computer-Assisted Intervention | 2007

Contraction detection in small bowel from an image sequence of wireless capsule endoscopy

Hai Vu; Tomio Echigo; Ryusuke Sagawa; Keiko Yagi; Masatsugu Shiba; Kazuhide Higuchi; Tetsuo Arakawa; Yasushi Yagi

This paper describes a method for automatic detection of contractions in the small bowel by analyzing Wireless Capsule Endoscopy images. Based on the characteristics of contraction images, a coherent procedure that includes analyses of temporal and spatial features is proposed. For temporal features, the image sequence is examined to detect candidate contractions through the changing number of edges, and the similarity between the frames of each possible contraction is evaluated to eliminate low-probability cases. For spatial features, descriptions of the directions at the edge pixels are used to determine contractions using a classification method. The experimental results show the effectiveness of our method, which detects 83% of cases. Thus, this is a feasible method for developing tools to assist in diagnostic procedures in the small bowel.
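
The temporal stage outlined here, flagging frames where the number of edge pixels changes sharply, can be hedged into a minimal sketch as follows; the Canny thresholds, window length, and ratio threshold are illustrative assumptions, not values from the paper.

```python
# Minimal sketch of the temporal stage: count edge pixels per frame and flag
# frames whose edge count rises sharply against a local baseline as candidate
# contractions. All thresholds here are illustrative, not from the paper.
import cv2
import numpy as np

def candidate_contractions(frames, window=5, ratio_thresh=1.5):
    """frames: iterable of 8-bit grayscale images; returns candidate frame indices."""
    edge_counts = np.array(
        [np.count_nonzero(cv2.Canny(f, 50, 150)) for f in frames], dtype=float)
    candidates = []
    for i in range(window, len(edge_counts)):
        baseline = edge_counts[i - window:i].mean() + 1e-6  # recent average edge count
        if edge_counts[i] / baseline > ratio_thresh:        # sharp increase in edges
            candidates.append(i)
    return candidates
```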


Computers in Biology and Medicine | 2009

Detection of contractions in adaptive transit time of the small bowel from wireless capsule endoscopy videos

Hai Vu; Tomio Echigo; Ryusuke Sagawa; Keiko Yagi; Masatsugu Shiba; Kazuhide Higuchi; Tetsuo Arakawa; Yasushi Yagi

Recognizing intestinal contractions from wireless capsule endoscopy (WCE) image sequences provides a non-invasive method of measurement and suggests a solution to the problems of traditional techniques for assessing intestinal motility. Based on the characteristics of contractile patterns and information on their frequencies, the contractions can be investigated using essential image features extracted from WCE videos. In this study, we propose a coherent three-stage procedure using temporal and spatial features. Possible contractions are recognized by detecting changes in the edge structure of the intestinal folds in Stage 1 and by evaluating similarity features in consecutive frames in Stage 2. To take account of the properties of contraction frequency, possible contractions are considered to be located within windows of consecutive frames. The size of these contraction windows is adjusted according to the passage of the WCE. These procedures aim to exclude as many non-contractions as possible. True contractions are then determined through spatial analysis of directional information in Stage 3. In our experiments, the proposed method detects 81% of true contractions with a 37% false alarm rate. The overall performance of this method is better than that of previous methods, in terms of both quality and quantity indices. The results suggest feasible data for further clinical applications.
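
As a rough illustration of the adaptive-window idea, one could let a simple inter-frame-difference proxy for the capsule's passage speed stretch or shrink the window of consecutive frames; the mapping and all constants below are assumptions for illustration only.

```python
import numpy as np

def adaptive_window_size(prev_frame, frame, base_size=9, min_size=5, max_size=15):
    """Scale the contraction-window length with a crude inter-frame-difference proxy
    for capsule speed: large differences (fast passage) shrink the window, small
    differences stretch it. All constants are illustrative assumptions."""
    motion = np.mean(np.abs(frame.astype(float) - prev_frame.astype(float))) / 255.0
    size = int(round(base_size * (1.5 - motion)))   # assumed linear mapping
    return int(np.clip(size, min_size, max_size))
```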


Inflammopharmacology | 2007

A diagnosis support system for capsule endoscopy.

Yasushi Yagi; Hai Vu; Tomio Echigo; Ryusuke Sagawa; Keiko Yagi; Masatsugu Shiba; Kazuhide Higuchi; Tetsuo Arakawa

The diagnostic time required for a full, 8-hour video capsule endoscopy is usually between 45 and 120 min. The aim of this work is to evaluate the diagnostic time required when applying a method that adaptively controls the image display rate. The advantage of the method is that the sequence can be played at high speed during stable, smooth stretches to save time and slowed down where there are sudden rough changes, in order to assess suspicious findings in detail. In this paper, the method is examined under real conditions: 10 sequences were independently evaluated by 4 medical doctors. The evaluation criteria include: 1) the time required for reading a sequence, 2) the percentage of abnormal regions accurately found, and 3) the manipulations performed by the evaluating physicians. The results indicate that the proposed method reduces diagnostic time to around 10 ± 1.5% of the sequence length and is of valuable assistance to medical doctors.
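
The rate-control idea, playing stable stretches fast and slowing down at sudden rough changes, could be sketched roughly as below, with the inter-frame difference driving the per-frame display delay; the frame-rate bounds and the linear mapping are assumed, not taken from the paper.

```python
import numpy as np

def display_delay_ms(prev_frame, frame, min_fps=5.0, max_fps=40.0, gain=10.0):
    """Map the normalized inter-frame difference to a per-frame display delay:
    smooth stretches play near max_fps, abrupt changes drop toward min_fps.
    The linear mapping, gain and fps bounds are assumptions, not the paper's values."""
    change = np.mean(np.abs(frame.astype(float) - prev_frame.astype(float))) / 255.0
    fps = max_fps - (max_fps - min_fps) * min(1.0, gain * change)
    return 1000.0 / fps
```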


Proceedings of the 2nd International Workshop on Environmental Multimedia Retrieval | 2015

Complex Background Leaf-based Plant Identification Method Based on Interactive Segmentation and Kernel Descriptor

Thi-Lan Le; Nam-Duong Duong; Van-Toi Nguyen; Hai Vu; Van-Nam Hoang; Thi Thanh-Nhan Nguyen

This paper presents a plant identification method for images of a simple leaf against a complex background. In order to extract the leaf from the image, we first develop an interactive image segmentation method for mobile devices with touch screens. This allows the leaf region to be separated from the complex background in a few manipulations. Then, we extract a kernel descriptor from the leaf region to build the leaf representation. Since leaf images may be taken at different scales and rotations, we propose two improvements in kernel descriptor extraction that make the descriptor robust to scale and rotation. Experiments carried out on a subset of ImageClef 2013 show a significant increase in performance compared to the original kernel descriptor and to automatic image segmentation.
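
One plausible way to realize the scale and rotation robustness mentioned above is to bring the segmented leaf into a canonical orientation and size before extracting the descriptor; the sketch below assumes OpenCV and is not the authors' exact improvement.

```python
import cv2
import numpy as np

def normalize_leaf(image, mask, out_size=256):
    """Rotate the segmented leaf so its principal axis is upright and rescale it to a
    fixed size before descriptor extraction. One plausible realization of the
    scale/rotation robustness discussed above, not the authors' exact method."""
    ys, xs = np.nonzero(mask)
    pts = np.column_stack([xs, ys]).astype(np.float32)
    (cx, cy), _, angle = cv2.minAreaRect(pts)                # oriented box of the leaf
    M = cv2.getRotationMatrix2D((float(cx), float(cy)), angle, 1.0)
    h, w = mask.shape[:2]
    rot_img = cv2.warpAffine(image, M, (w, h))
    rot_mask = cv2.warpAffine(mask.astype(np.uint8) * 255, M, (w, h))
    ys, xs = np.nonzero(rot_mask)
    crop = rot_img[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
    return cv2.resize(crop, (out_size, out_size))            # canonical scale
```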


International Conference on Pattern Recognition | 2010

Color Analysis for Segmenting Digestive Organs in VCE

Hai Vu; Yasushi Yagi; Tomio Echigo; Masatsugu Shiba; Kazuhide Higuchi; Tetsuo Arakawa; Keiko Yagi

This paper presents an efficient method for automatically segmenting the digestive organs in a Video Capsule Endoscopy (VCE) sequence. The method is based on unique characteristics of the color tones of the digestive organs. We first introduce a color model of the gastrointestinal (GI) tract containing the color components of GI wall and non-wall regions. Based on the wall regions extracted from the images, the distribution along the time dimension of each color component is exploited to learn the dominant colors that are candidates for discriminating digestive organs. The strongest candidates are then combined to construct a representative signal for detecting the boundary between two adjacent regions. The experimental results are comparable with previous works, but the computational cost is lower.
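
A loose sketch of the boundary-detection step, building a 1-D signal from a dominant color component of the wall regions over time and locating the largest level shift, might look like the following; the smoothing width and step statistic are assumptions rather than the paper's design.

```python
import numpy as np

def organ_boundary(color_signal, smooth=51):
    """color_signal: per-frame mean of one dominant color component of the GI wall.
    After moving-average smoothing, return the frame index where the difference
    between the mean levels before and after it is largest. The smoothing width
    and the step statistic are illustrative assumptions."""
    s = np.convolve(np.asarray(color_signal, dtype=float),
                    np.ones(smooth) / smooth, mode="same")
    best_i, best_gap = 0, -np.inf
    for i in range(smooth, len(s) - smooth):
        gap = abs(s[i:].mean() - s[:i].mean())
        if gap > best_gap:
            best_i, best_gap = i, gap
    return best_i
```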


European Conference on Computer Vision | 2014

A Visual SLAM System on Mobile Robot Supporting Localization Services to Visually Impaired People

Quoc-Hung Nguyen; Hai Vu; Thanh-Hai Tran; David Van Hamme; Peter Veelaert; Wilfried Philips; Quang-Hoang Nguyen

This paper describes a Visual SLAM system developed on a mobile robot in order to provide localization services to visually impaired people. The proposed system aims to provide services in small or mid-scale environments, such as inside a building or a school campus, where conventional positioning data such as GPS or WiFi signals are often not available. Toward this end, we adapt and improve existing vision-based techniques in order to handle issues in indoor environments. We first design an image acquisition system to collect visual data. On one hand, a robust visual odometry method is adjusted to precisely create the routes in the environment. On the other hand, we utilize the Fast Appearance-Based Mapping (FAB-MAP) algorithm, which is arguably the most successful approach for matching places in large scenarios. In order to better estimate the robot's location, we utilize a Kalman filter that combines the matching result of the current observation with the estimate of the robot's state based on its kinematic model. The experimental results confirm that the proposed system is feasible for navigating visually impaired people in indoor environments.
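
The fusion step, predicting the robot's position from its kinematic model and correcting it with the place recognized by FAB-MAP, follows the standard Kalman predict/update cycle; the scalar position-along-route state below is a simplified stand-in for the state actually used in the paper.

```python
class RoutePositionKF:
    """Simplified 1-D Kalman filter over position along a mapped route: predict from
    the odometry/kinematic model, correct with the position of the FAB-MAP-matched
    place. The scalar state and noise values are assumptions, not the paper's model."""
    def __init__(self, x0=0.0, p0=1.0, q=0.05, r=0.5):
        self.x, self.p = x0, p0          # position estimate and its variance
        self.q, self.r = q, r            # process / measurement noise (assumed)

    def predict(self, odom_dx):
        self.x += odom_dx                # kinematic prediction from odometry
        self.p += self.q

    def update(self, matched_position):
        k = self.p / (self.p + self.r)               # Kalman gain
        self.x += k * (matched_position - self.x)    # correct with the place match
        self.p *= (1.0 - k)
        return self.x
```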


Robotics and Biomimetics | 2009

Evaluating the control of the adaptive display rate for video capsule endoscopy diagnosis

Hai Vu; Ryusuke Sagawa; Yasushi Yagi; Tomio Echigo; Masatsugu Shiba; Kazuhide Higuchi; Tetsuo Arakawa; Keiko Yagi

The excessively long reviewing times for diagnosis of capsule endoscopy present a clinical problem. A new method, “adaptive speed”, which automatically controls the display rate of video capsule endoscopy images, was proposed to address this problem. In this paper, we investigate the effectiveness of this method versus a standard view using the existing system. The main activities of examining doctors during a series of evaluations using both systems are recorded. For comparison, the logged actions are analyzed against three criteria: 1. diagnostic time, 2. ability to capture abnormal regions, and 3. operability for the examining doctors. We conclude that adaptive speed reduces examination time by ten minutes compared with the existing system, while the number of abnormalities found is similar. In addition, examining doctors need less effort because of the system's efficient operability.


Image and Vision Computing | 2017

Fully-automated person re-identification in multi-camera surveillance system with a robust kernel descriptor and effective shadow removal method

Thi Thanh Thuy Pham; Thi-Lan Le; Hai Vu; Trung-Kien Dao; Van Toi Nguyen

In this paper, a fully-automated person Re-ID (Re-identification) system is proposed for real scenarios of human tracking in a non-overlapping camera network. The system includes two phases: human detection and Re-ID. Human ROIs (Regions of Interest) are extracted in the detection phase, and feature extraction is then performed on these ROIs in order to build a human descriptor for Re-ID. Unlike other approaches, which deal with manually-cropped human ROIs for person Re-ID, in this system the person identity is determined from human ROIs extracted automatically by a combined human detection method. Two main contributions are proposed, one for each phase, in order to enhance the performance of the person Re-ID system. First, an effective shadow removal method based on score fusion of density matching is proposed to obtain better human detection results. Second, a robust KDES (Kernel DEScriptor) is extracted from each human ROI for person classification. Additionally, a new person Re-ID dataset is built in real surveillance scenarios from multiple cameras. Experiments on benchmark datasets and our own dataset show that the person Re-ID results using the proposed solutions outperform some of the state-of-the-art methods. Highlights: a density-based score fusion scheme for shadow removal; an efficient KDES descriptor for person Re-ID in a multi-camera network; a fully-automated person Re-ID system for multi-camera surveillance.
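
For the Re-ID phase itself, matching typically reduces to comparing the probe's descriptor against a gallery of known identities; the bare-bones nearest-neighbor sketch below stands in for the paper's classification step, with the Euclidean metric and gallery layout being assumptions.

```python
import numpy as np

def reidentify(probe_desc, gallery):
    """gallery: dict mapping person_id -> descriptor (e.g., a KDES-style feature vector).
    Returns the identity whose descriptor is closest to the probe in Euclidean distance.
    A deliberately simple stand-in for the paper's person classification step."""
    probe = np.asarray(probe_desc, dtype=float)
    best_id, best_d = None, np.inf
    for pid, desc in gallery.items():
        d = np.linalg.norm(probe - np.asarray(desc, dtype=float))
        if d < best_d:
            best_id, best_d = pid, d
    return best_id
```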


The National Foundation for Science and Technology Development (NAFOSTED) Conference on Information and Computer Science | 2014

An Efficient Combination of RGB and Depth for Background Subtraction

Van-Toi Nguyen; Hai Vu; Thanh-Hai Tran

This paper describes a new method for background subtraction using RGB and depth data from a Microsoft Kinect sensor. In the first step of the proposed method, noise is removed from the depth data using a proposed noise model. This denoising procedure helps improve the performance of background subtraction and also avoids major limitations of RGB, mostly when illumination changes. Background subtraction is then performed by combining RGB and depth features instead of using RGB or depth data individually. The fundamental idea of our combination strategy is that when the depth measurement is reliable, the background subtraction result from depth takes priority; otherwise, RGB is used as the alternative. The proposed method is evaluated on a public benchmark dataset that suffers from common problems of background subtraction such as shadows, reflections, and camouflage. The experimental results show better performance compared with the state of the art. Furthermore, the proposed method succeeds at a challenging task such as extracting a human fall-down event from an RGB-D image sequence. Therefore, the foreground segmentation is feasible for further tasks such as tracking and recognition.
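
The combination rule stated above, trusting the depth-based decision where depth is reliable and falling back to the RGB-based decision elsewhere, amounts to a per-pixel selection; the sketch below assumes the two masks and the reliability map come from separate detectors and illustrates only the rule, not the full pipeline.

```python
import numpy as np

def combine_foreground(fg_rgb, fg_depth, depth_reliable):
    """fg_rgb, fg_depth: boolean foreground masks from the RGB and depth detectors.
    depth_reliable: boolean mask of pixels whose depth measurement is trusted.
    Where depth is reliable its decision takes priority; elsewhere RGB is used."""
    return np.where(depth_reliable, fg_depth, fg_rgb)
```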


Multimedia Tools and Applications | 2017

Developing a way-finding system on mobile robot assisting visually impaired people in an indoor environment

Quoc-Hung Nguyen; Hai Vu; Thanh-Hai Tran; Quang-Hoan Nguyen

A way-finding system in an indoor environment consists of several components: localization, representation, path planning, and interaction. For each component, numerous relevant techniques have been proposed. However, deploying feasible techniques, particularly in real scenarios, remains challenging. In this paper, we describe a functional way-finding system deployed on a mobile robot to assist visually impaired (VI) people. The proposed system deploys state-of-the-art techniques that are adapted to the practical issues at hand. First, we adapt an outdoor visual odometry technique to indoor use by placing manual markers or stickers on ground planes. The main purpose is to build reliable travel routes in the environment. Second, we propose a procedure to define and optimize the landmark/representative scenes of the environment. This technique handles the repetitive and ambiguous structures of the environment. In order to interact with VI people, we deploy a convenient interface on a smartphone. Our evaluations cover three different indoor scenarios and thirteen subjects. The experimental results show that VI people, particularly VI pupils, can find the right way to requested targets.

Collaboration


Dive into Hai Vu's collaborations.

Top Co-Authors

Thanh-Hai Tran

Hanoi University of Science and Technology

Thi-Lan Le

Hanoi University of Science and Technology

Tomio Echigo

Osaka Electro-Communication University

Keiko Yagi

Kobe Pharmaceutical University

Ryusuke Sagawa

National Institute of Advanced Industrial Science and Technology

Quoc-Hung Nguyen

Hanoi University of Science and Technology
