Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Van-Dung Hoang is active.

Publication


Featured research published by Van-Dung Hoang.


Conference of the Industrial Electronics Society | 2012

Fast human detection based on parallelogram Haar-like features

Van-Dung Hoang; Andrey Vavilin; Kang-Hyun Jo

Inspired by recent image descriptors for object detection, this paper proposes a feature description method based on a set of modified Haar-like features with parallelogram shapes. The proposed descriptors are used to build a rapid human detection system with a cascade structure of boosted classifiers. In particular, human detection in omnidirectional images, as well as in panoramic images unwrapped from omnidirectional images, is described. The experimental results show that the proposed method achieves a high detection rate with a lower false positive rate and a higher recall rate than standard Haar-like features, while being faster than the HOG feature. It remains effective across different resolutions and poses under a variety of conditions such as flare illumination and cluttered backgrounds.
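
To make the feature computation concrete, the sketch below shows a standard two-rectangle Haar-like feature evaluated on an integral image; this illustrates only the generic mechanism that the parallelogram features extend, not the paper's parallelogram shapes themselves (which would additionally require sheared region sums), and all sizes and coordinates are placeholders.

```python
import numpy as np

def integral_image(gray):
    """Summed-area table with a zero row/column padded at the top-left."""
    ii = np.zeros((gray.shape[0] + 1, gray.shape[1] + 1), dtype=np.int64)
    ii[1:, 1:] = np.cumsum(np.cumsum(gray, axis=0), axis=1)
    return ii

def rect_sum(ii, x, y, w, h):
    """Sum of pixels in the rectangle with top-left (x, y), width w, height h."""
    return ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x]

def two_rect_haar(ii, x, y, w, h):
    """Two-rectangle Haar-like feature: left half minus right half."""
    half = w // 2
    return rect_sum(ii, x, y, half, h) - rect_sum(ii, x + half, y, half, h)

# Example: evaluate one feature on a random 64x128 detection window.
window = np.random.randint(0, 256, (128, 64)).astype(np.uint8)
ii = integral_image(window)
print(two_rect_haar(ii, x=8, y=16, w=32, h=24))
```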


The Scientific World Journal | 2014

Moving object localization using optical flow for pedestrian detection from a moving vehicle.

Joko Hariyono; Van-Dung Hoang; Kang-Hyun Jo

This paper presents a pedestrian detection method for a moving vehicle using optical flow and histograms of oriented gradients (HOG). A moving object is extracted from the relative motion by segmenting the region with consistent optical flow after compensating for the ego-motion of the camera. To obtain the optical flow, two consecutive images are divided into grid cells of 14 × 14 pixels; each cell in the current frame is then tracked to find its corresponding cell in the next frame. Using at least three corresponding cells, an affine transformation is estimated from the corresponding cells in the consecutive images, so that conforming optical flows are extracted. Regions of moving objects are detected as transformed regions that differ from the previously registered background, and a morphological process is applied to obtain candidate human regions. To recognize the object, HOG features are extracted from each candidate region and fed to a linear support vector machine (SVM), which classifies the input as pedestrian or non-pedestrian. The proposed method was tested on a moving vehicle and also confirmed through experiments on a pedestrian dataset, showing a significant improvement over the original HOG on the ETHZ pedestrian dataset.
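
A minimal sketch of the motion-segmentation front end and the HOG + linear SVM back end described above, assuming OpenCV and scikit-learn; the 14 × 14 grid, the affine ego-motion compensation, and the linear SVM follow the abstract, while tracking the cell centers with pyramidal Lucas-Kanade, the residual threshold, and the 64 × 128 HOG window are illustrative assumptions rather than the paper's exact choices.

```python
import cv2
import numpy as np
from sklearn.svm import LinearSVC

CELL = 14  # grid cell size in pixels, as in the paper

def moving_object_mask(prev_gray, curr_gray, residual_thresh=2.0):
    """Segment pixels whose flow disagrees with the estimated camera ego-motion."""
    h, w = prev_gray.shape
    ys, xs = np.mgrid[CELL // 2:h:CELL, CELL // 2:w:CELL]
    pts = np.stack([xs.ravel(), ys.ravel()], axis=1).astype(np.float32).reshape(-1, 1, 2)

    # Track each cell center from the previous frame into the current frame.
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, pts, None)
    good = status.ravel() == 1
    src, dst = pts[good].reshape(-1, 2), nxt[good].reshape(-1, 2)
    if len(src) < 3:                       # affine estimation needs >= 3 correspondences
        return np.zeros_like(prev_gray, dtype=np.uint8)

    # Global affine model of the camera ego-motion (RANSAC inside OpenCV).
    A, _ = cv2.estimateAffine2D(src, dst)
    if A is None:
        return np.zeros_like(prev_gray, dtype=np.uint8)
    predicted = src @ A[:, :2].T + A[:, 2]
    residual = np.linalg.norm(dst - predicted, axis=1)

    # Mark cells whose measured flow deviates from the ego-motion prediction.
    mask = np.zeros_like(prev_gray, dtype=np.uint8)
    for (x, y), r in zip(src.astype(int), residual):
        if r > residual_thresh:
            mask[max(y - CELL // 2, 0):y + CELL // 2,
                 max(x - CELL // 2, 0):x + CELL // 2] = 255
    return cv2.morphologyEx(mask, cv2.MORPH_CLOSE, np.ones((5, 5), np.uint8))

def classify_candidate(patch_bgr, svm: LinearSVC):
    """HOG on a 64x128 candidate window, then linear SVM -> pedestrian / non-pedestrian."""
    hog = cv2.HOGDescriptor()              # default 64x128 window
    window = cv2.resize(patch_bgr, (64, 128))
    feat = hog.compute(window).reshape(1, -1)
    return svm.predict(feat)[0]            # e.g. 1 = pedestrian, 0 = non-pedestrian
```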


Asian Conference on Intelligent Information and Database Systems | 2014

Simple and Efficient Method for Calibration of a Camera and 2D Laser Rangefinder

Van-Dung Hoang; Danilo Cáceres Hernández; Kang-Hyun Jo

In the last few years, the integration of cameras and laser rangefinders has been applied in many areas of robotics, such as autonomous navigation vehicles and intelligent transportation systems. A system based on multiple devices usually requires the relative pose between the devices, so calibrating a camera with a laser rangefinder is an important task. This paper presents a calibration method for determining the relative position and orientation of a camera with respect to a laser rangefinder. The method makes use of depth discontinuities of the calibration pattern, which emphasize the laser beams and allow the positions where the laser scans strike the pattern to be estimated automatically. The laser range scans are also used to estimate the corresponding 3D points in the camera coordinate frame. Finally, the relative parameters between the camera and the laser device are recovered from these corresponding 3D points.
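
The final step, recovering the camera-laser transform from corresponding 3D points, can be illustrated with the standard SVD-based rigid alignment (Kabsch algorithm); this is a generic sketch of that step under a least-squares formulation, not the paper's full calibration pipeline, and the synthetic point sets are placeholders.

```python
import numpy as np

def rigid_transform_3d(laser_pts, camera_pts):
    """Least-squares R, t with camera_pts ~ R @ laser_pts + t (Kabsch algorithm)."""
    assert laser_pts.shape == camera_pts.shape and laser_pts.shape[1] == 3
    mu_l, mu_c = laser_pts.mean(axis=0), camera_pts.mean(axis=0)
    H = (laser_pts - mu_l).T @ (camera_pts - mu_c)          # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    t = mu_c - R @ mu_l
    return R, t

# Synthetic check: recover a known transform from noiseless correspondences.
rng = np.random.default_rng(0)
P_laser = rng.uniform(-1, 1, size=(20, 3))
R_true, _ = np.linalg.qr(rng.normal(size=(3, 3)))
if np.linalg.det(R_true) < 0:            # make it a proper rotation
    R_true[:, 0] *= -1
t_true = np.array([0.1, -0.3, 0.8])
P_cam = P_laser @ R_true.T + t_true
R_est, t_est = rigid_transform_3d(P_laser, P_cam)
print(np.allclose(R_est, R_true), np.allclose(t_est, t_true))
```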


Intelligent Robots and Systems | 2013

3D motion estimation based on pitch and azimuth from respective camera and laser rangefinder sensing

Van-Dung Hoang; Danilo Cáceres Hernández; My Ha Le; Kang-Hyun Jo

This paper proposes a new method to estimate the 3D motion of a vehicle based on a car-like structured motion model, using an omnidirectional camera and a laser rangefinder. In recent years, most vision-based motion estimation methods have assumed planar motion in order to reduce the number of required parameters and the computational cost. However, in real outdoor terrain the motion does not satisfy this assumption. In contrast, the proposed method uses a single corresponding image point together with the motion orientation to estimate the vehicle motion in 3D. To reduce the number of required parameters and speed up computation, the vehicle is assumed to move under the car-like structured motion model. The system consists of a camera and a laser rangefinder mounted on the vehicle. The laser rangefinder is used to estimate the motion orientation and the absolute translation of the vehicle. A one-point correspondence from the omnidirectional image is combined with the motion orientation and absolute translation to estimate the yaw and pitch rotation components and the three translation components Tx, Ty, and Tz. Real experiments on sloping terrain demonstrate the accuracy of the vehicle localization: the errors at the final position are 1.1% for the proposed method and 5.1% for one-point RANSAC.
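
As a small illustration of how the estimated angles and the absolute translation combine into a 3D motion, the sketch below composes a rotation from yaw and pitch and distributes the translation magnitude into Tx, Ty, and Tz; the axis order, frame convention, and forward direction are assumptions for illustration, not the paper's exact parametrization.

```python
import numpy as np

def motion_from_yaw_pitch(yaw, pitch, translation_norm):
    """Compose R = Rz(yaw) @ Ry(pitch) and a translation of known magnitude
    along the rotated forward (x) axis. Frame conventions are assumed."""
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    R = Rz @ Ry
    t = translation_norm * (R @ np.array([1.0, 0.0, 0.0]))   # Tx, Ty, Tz
    return R, t

R, (Tx, Ty, Tz) = motion_from_yaw_pitch(np.deg2rad(5.0), np.deg2rad(-2.0), 0.35)
print(R, Tx, Ty, Tz)
```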


International Conference on Intelligent Computing | 2013

Combining edge and one-point RANSAC algorithm to estimate visual odometry

Van-Dung Hoang; Danilo Cáceres Hernández; Kang-Hyun Jo

In recent years, classical structure-from-motion-based SLAM has achieved significant results, and omnidirectional-camera-based motion estimation has attracted researchers because of its larger field of view. This paper proposes a method to estimate the 2D motion of a vehicle and build a map using an EKF based on edge matching and one-point RANSAC. The azimuth rotation estimated by edge matching is used as pseudo prior information for the EKF state prediction. To reduce the number of parameters required for motion estimation and reconstruction, the vehicle is assumed to move under the nonholonomic constraints of a car-like structured motion model. The experiments were carried out with an electric vehicle carrying an omnidirectional camera mounted on the roof. To evaluate the motion estimation, the vehicle positions were compared with GPS information and superimposed onto aerial images collected through the Google Maps API. The experimental results show that an EKF without the prior rotation information yields an error about 1.9 times larger than that of the proposed method.
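
A minimal sketch of the EKF structure described above, with a car-like prediction step and the edge-matching azimuth fused as a heading measurement; the state layout, noise values, and constant-velocity prediction are illustrative assumptions.

```python
import numpy as np

class PoseEKF:
    """Minimal EKF over a 2D pose [x, y, theta] with a car-like motion model.
    The azimuth from edge matching is used as the (pseudo-prior) measurement."""

    def __init__(self, sigma_v=0.1, sigma_w=0.02, sigma_azimuth=0.01):
        self.x = np.zeros(3)                       # x, y, theta
        self.P = np.eye(3) * 1e-3
        self.Q = np.diag([sigma_v**2, sigma_v**2, sigma_w**2])
        self.R = np.array([[sigma_azimuth**2]])

    def predict(self, v, dt):
        """No-slip (nonholonomic, car-like) prediction with forward speed v."""
        theta = self.x[2]
        self.x += np.array([v * dt * np.cos(theta), v * dt * np.sin(theta), 0.0])
        F = np.array([[1, 0, -v * dt * np.sin(theta)],
                      [0, 1,  v * dt * np.cos(theta)],
                      [0, 0,  1]])
        self.P = F @ self.P @ F.T + self.Q

    def update_azimuth(self, measured_theta):
        """Fuse the rotation estimated by edge matching."""
        H = np.array([[0.0, 0.0, 1.0]])
        y = np.array([measured_theta - self.x[2]])           # innovation
        S = H @ self.P @ H.T + self.R
        K = self.P @ H.T @ np.linalg.inv(S)
        self.x += (K @ y).ravel()
        self.P = (np.eye(3) - K @ H) @ self.P

ekf = PoseEKF()
ekf.predict(v=1.2, dt=0.1)
ekf.update_azimuth(measured_theta=0.03)
print(ekf.x)
```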


International Conference on Computer Vision | 2012

Vehicle localization using omnidirectional camera with GPS supporting in wide urban area

My Ha Le; Van-Dung Hoang; Andrey Vavilin; Kang-Hyun Jo

This paper proposes a method for long-range vehicle localization that fuses an omnidirectional camera and the Global Positioning System (GPS) in wide urban environments. The main contributions are twofold. First, the positions estimated by the visual sensor overcome motion blur effects, and the motion constraints between successive frames are obtained accurately under various scene structures and conditions. Second, the cumulative errors of the visual odometry system are resolved by fusing the local (visual odometry) and global (GPS) positioning information. Visual odometry yields correct local positions over short distances but accumulates errors over time; conversely, GPS yields correct global positions but its local positions may drift. Moreover, satellite signals are affected by multipath and forward diffraction, so position errors increase when the vehicle moves through dense building areas, and positions may jump or be missed in tunnels. To exploit the advantages of the two sensors, the position information is evaluated before fusion within an Extended Kalman Filter (EKF) framework. The multi-sensor system can also compensate when one of the two sensors fails. Simulation results demonstrate the accuracy of the estimated vehicle positions over long-range movements.
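
A minimal position-only sketch of the fusion idea, assuming visual odometry supplies the predicted displacement and GPS supplies an absolute position that is evaluated with a chi-square gate before being fused; the gate threshold and noise covariances are placeholders, not values from the paper.

```python
import numpy as np

def fuse_vo_gps(x, P, vo_delta, gps_xy, Q_vo, R_gps, gate=9.21):
    """One predict/update cycle: visual odometry supplies the displacement,
    GPS supplies an absolute position, gated by a chi-square test (2 dof, 99%)."""
    # Predict: dead-reckon with the VO displacement.
    x = x + vo_delta
    P = P + Q_vo

    # Evaluate the GPS fix before fusing it (reject multipath outliers).
    innovation = gps_xy - x
    S = P + R_gps
    if innovation @ np.linalg.inv(S) @ innovation > gate:
        return x, P                      # GPS rejected, keep the VO-only estimate

    # Update with the accepted GPS position (identity measurement model assumed).
    K = P @ np.linalg.inv(S)
    x = x + K @ innovation
    P = (np.eye(2) - K) @ P
    return x, P

x, P = np.zeros(2), np.eye(2) * 0.01
Q_vo, R_gps = np.eye(2) * 0.05, np.eye(2) * 4.0
x, P = fuse_vo_gps(x, P, vo_delta=np.array([1.0, 0.2]),
                   gps_xy=np.array([1.3, 0.1]), Q_vo=Q_vo, R_gps=R_gps)
print(x)
```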


Asian Conference on Intelligent Information and Database Systems | 2014

Human Detection from Mobile Omnidirectional Camera Using Ego-Motion Compensated

Joko Hariyono; Van-Dung Hoang; Kang-Hyun Jo

This paper presents a human detection method using optical flow in images obtained from an omnidirectional camera mounted on a mobile robot. Detecting human regions from a mobile omnidirectional camera involves several steps. First, moving objects are detected using frame differencing, and ego-motion compensation is applied to handle the noise caused by the moving camera. In this step the image is divided into grid windows and an affine transform is computed for each window; the human shape is detected as a moving object that differs from the background compensated with each local affine transformation. Second, to decide whether a region contains a human, a vertical projection histogram with a specific threshold is applied. The experimental results show that the proposed method achieves results comparable to similar methods, with a detection rate of 87.4% and a false positive rate below 10%.
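
A rough sketch of the per-window ego-motion compensation, frame differencing, and vertical projection described above; the window size, the feature-based affine estimation inside each window, and the thresholds are assumptions for illustration.

```python
import cv2
import numpy as np

def human_columns(prev_gray, curr_gray, win=64, diff_thresh=25, col_thresh=15):
    """Ego-motion compensation per grid window, frame differencing, then a
    vertical-projection histogram to localize candidate human columns."""
    h, w = prev_gray.shape
    compensated = np.zeros_like(prev_gray)

    for y in range(0, h - win + 1, win):
        for x in range(0, w - win + 1, win):
            p_win = prev_gray[y:y + win, x:x + win]
            c_win = curr_gray[y:y + win, x:x + win]
            # Local affine transform of this window (feature-based, assumed here).
            p_pts = cv2.goodFeaturesToTrack(p_win, maxCorners=50,
                                            qualityLevel=0.01, minDistance=5)
            if p_pts is None or len(p_pts) < 3:
                compensated[y:y + win, x:x + win] = p_win
                continue
            c_pts, st, _ = cv2.calcOpticalFlowPyrLK(p_win, c_win, p_pts, None)
            ok = st.ravel() == 1
            A = None
            if ok.sum() >= 3:
                A, _ = cv2.estimateAffine2D(p_pts[ok].reshape(-1, 2),
                                            c_pts[ok].reshape(-1, 2))
            if A is None:
                compensated[y:y + win, x:x + win] = p_win
            else:
                compensated[y:y + win, x:x + win] = cv2.warpAffine(p_win, A, (win, win))

    # Moving pixels = difference between the compensated background and the frame.
    moving = (cv2.absdiff(curr_gray, compensated) > diff_thresh).astype(np.uint8)

    # Vertical projection: columns with enough moving pixels are human candidates.
    projection = moving.sum(axis=0)
    return np.where(projection > col_thresh)[0]
```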


International Conference on Human System Interactions | 2013

Planar motion estimation using omnidirectional camera and laser rangefinder

Van-Dung Hoang; My-Ha Le; Kang-Hyun Jo

Vision-based motion estimation has been investigated intensively in the last few years. Although some progress has been made under the planar-motion assumption, no existing method satisfies high accuracy and real-time performance while providing absolute translation. This paper proposes a method to estimate vehicle motion by fusing an omnidirectional camera and a laser rangefinder to overcome these drawbacks. The vehicle motion consists of rotation and translation components. The rotation is estimated from the camera using a simple but efficient edge matching method, and the absolute translation is obtained from the laser rangefinder using the ICP method. The experiments were carried out with an electric vehicle carrying a camera mounted on the roof and a laser device mounted on the bumper. To evaluate the motion estimation, the vehicle positions were compared with GPS information and superimposed onto aerial images collected through the Google Maps API. The experimental results show that the error is 1.1 times smaller and the computation is 10.4 times faster than the one-point RANSAC method, and the error is 4.1 times smaller than a method based on appearance color features.
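
The ICP-based absolute translation step can be sketched as a standard point-to-point ICP on 2D laser scans, using nearest-neighbour correspondences and a closed-form SVD alignment per iteration; this is a generic textbook formulation, not the paper's exact variant, and the toy scan is synthetic.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_2d(source, target, iterations=20):
    """Point-to-point ICP aligning `source` laser scan onto `target`.
    Returns the 2x2 rotation and 2-vector translation (absolute scale)."""
    R, t = np.eye(2), np.zeros(2)
    src = source.copy()
    tree = cKDTree(target)
    for _ in range(iterations):
        # Nearest-neighbour correspondences.
        _, idx = tree.query(src)
        matched = target[idx]
        # Closed-form rigid alignment (SVD) of the current correspondences.
        mu_s, mu_t = src.mean(axis=0), matched.mean(axis=0)
        H = (src - mu_s).T @ (matched - mu_t)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R_step = Vt.T @ D @ U.T
        t_step = mu_t - R_step @ mu_s
        src = src @ R_step.T + t_step
        R, t = R_step @ R, R_step @ t + t_step      # accumulate the transform
    return R, t

# Toy example: a slightly translated copy of a scan; ICP should roughly recover the shift.
scan = np.random.default_rng(1).uniform(0, 10, size=(500, 2))
R, t = icp_2d(scan, scan + np.array([0.1, -0.05]))
print(np.round(t, 3))
```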


Vietnam Journal of Computer Science | 2015

Path planning for autonomous vehicle based on heuristic searching using online images

Van-Dung Hoang; Kang-Hyun Jo

In the automatic navigation of mobile systems, a path network is required to enable autonomous robot/vehicle motion. Path planning is a critical part of constructing this path network and is therefore a necessary task for any autonomous vehicle system. This paper proposes a method that constructs the shortest path for vehicle auto-navigation in outdoor environments. The method uses two layers of GIS information from online map images, which support estimating not only the shape of the road network but also the road directions; this is an advantage over methods that use only aerial/satellite images. The estimation proceeds in several stages. First, a raw road network is detected using the road map and the satellite image. Second, the road network is refined and represented as a directed graph. Third, the road network is converted into global coordinates, which are much more convenient for the online auto-navigation task than other coordinate systems. Finally, the shortest path is estimated by a heuristic search based on a hybrid algorithm that combines Dijkstra's algorithm with greedy best-first search. The experimental results demonstrate the robustness and effectiveness of the proposed method for path network estimation over large outdoor scenes.
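
The hybrid of Dijkstra's algorithm and greedy best-first search amounts to expanding nodes by accumulated path cost plus a straight-line-distance heuristic; the sketch below shows that search on a tiny directed road graph, where the graph, coordinates, and edge weights are made up for illustration.

```python
import heapq
import math

def hybrid_search(graph, coords, start, goal):
    """Heuristic search on a directed road graph: nodes are expanded by
    accumulated cost (Dijkstra) plus straight-line distance to the goal
    (greedy best-first), i.e. an A*-style priority."""
    def h(n):
        (x1, y1), (x2, y2) = coords[n], coords[goal]
        return math.hypot(x2 - x1, y2 - y1)

    frontier = [(h(start), 0.0, start, [start])]
    best_cost = {start: 0.0}
    while frontier:
        _, cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, cost
        for nxt, w in graph.get(node, []):
            new_cost = cost + w
            if new_cost < best_cost.get(nxt, math.inf):
                best_cost[nxt] = new_cost
                heapq.heappush(frontier, (new_cost + h(nxt), new_cost, nxt, [*path, nxt]))
    return None, math.inf

# Directed toy road graph: node -> [(neighbour, edge length)], with 2D coordinates.
graph = {"A": [("B", 2.0), ("C", 5.0)], "B": [("C", 2.0), ("D", 6.0)], "C": [("D", 2.5)]}
coords = {"A": (0, 0), "B": (2, 0), "C": (4, 0), "D": (6, 0)}
print(hybrid_search(graph, coords, "A", "D"))   # -> (['A', 'B', 'C', 'D'], 6.5)
```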


International Conference on Computational Collective Intelligence | 2014

Motion Segmentation Using Optical Flow for Pedestrian Detection from Moving Vehicle

Joko Hariyono; Van-Dung Hoang; Kang-Hyun Jo

This paper proposes a pedestrian detection method using optical flow analysis and histograms of oriented gradients (HOG). Because sliding-window approaches are time consuming, motion segmentation based on optical flow analysis is proposed to localize the regions of moving objects. A moving object is extracted from the relative motion by segmenting the region with consistent optical flow after compensating for the ego-motion of the camera. Two consecutive images are divided into grid cells of 14 × 14 pixels, and each cell in the current frame is tracked to find its corresponding cell in the next frame. Using at least three corresponding cells, an affine transformation is estimated from the corresponding cells in the consecutive images, so that conforming optical flows are extracted. Regions of moving objects are detected as transformed regions that differ from the previously registered background, and a morphological process is applied to obtain candidate human regions. HOG features are extracted from each candidate region and fed to a linear support vector machine (SVM), which classifies the input as pedestrian or non-pedestrian. The proposed method was tested on a moving vehicle and shows a significant improvement compared with the original HOG.
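
The morphological post-processing that turns the motion mask into candidate human regions can be sketched as follows; the kernel size and minimum area are illustrative assumptions, and the resulting boxes would then be passed to the HOG + SVM classifier.

```python
import cv2
import numpy as np

def candidate_regions(motion_mask, min_area=200):
    """Clean a binary motion mask and return bounding boxes of candidate
    human regions for subsequent HOG + SVM classification."""
    kernel = np.ones((5, 5), np.uint8)
    cleaned = cv2.morphologyEx(motion_mask, cv2.MORPH_OPEN, kernel)   # remove speckle
    cleaned = cv2.morphologyEx(cleaned, cv2.MORPH_CLOSE, kernel)      # fill small holes
    contours, _ = cv2.findContours(cleaned, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    boxes = [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) >= min_area]
    return boxes   # list of (x, y, w, h)

mask = np.zeros((240, 320), np.uint8)
mask[60:180, 100:150] = 255                 # synthetic moving-object blob
print(candidate_regions(mask))              # -> roughly [(100, 60, 50, 120)]
```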

Collaboration


Dive into Van-Dung Hoang's collaborations.

Top Co-Authors

Van-Huy Pham

Ton Duc Thang University
