Min-Liang Wang
National Chung Cheng University
Publications
Featured research published by Min-Liang Wang.
The International Journal of Robotics Research | 2011
Min-Liang Wang; Huei-Yung Lin
We describe a new semantic descriptor for robots to recognize visual places. The descriptor integrates image features and color information via the hull census transform (HCT) and image histogram indexing. Our approach extracts the semantic description from the convex hull points through a statistical calculation. The color histograms are then formed by four indices and added to the descriptor. The semantic codebook consists of several places, each with many image descriptors. Finally, a one-versus-one (OVO) multi-class support vector machine (SVM) is used to model the places. The proposed technique uses a high-level cue integration scheme, based on information learned over the color and feature spaces, to optimally combine the weighted cues. It is suitable for visual place recognition, particularly for images captured by an omnidirectional camera. The experimental results show that the codebook, with fewer vectors, is as robust as the most popular codebooks under varying environments. The performance is evaluated and compared with several state-of-the-art descriptors.
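The one-versus-one multi-class scheme mentioned above can be sketched as follows. This is a minimal illustration of OVO voting, not the paper's trained SVM: the pairwise decision rule below is a toy stand-in, whereas a real system would train one binary SVM per pair of places.

```python
from itertools import combinations

def ovo_predict(x, classes, pairwise_decide):
    """Return the class with the most pairwise votes for sample x.

    pairwise_decide(x, a, b) -> a or b, the winner of the binary
    classifier trained on classes a vs. b.
    """
    votes = {c: 0 for c in classes}
    for a, b in combinations(classes, 2):
        votes[pairwise_decide(x, a, b)] += 1
    return max(votes, key=votes.get)

# Toy decision rule: pick the class whose index is closer to x.
decide = lambda x, a, b: a if abs(x - a) <= abs(x - b) else b
print(ovo_predict(2.2, [0, 1, 2, 3], decide))  # → 2
```

With k places, OVO trains k(k-1)/2 binary classifiers and lets them vote, which is why it scales well when each binary problem is small.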
Intelligent Robots and Systems | 2010
Min-Liang Wang; Huei-Yung Lin
This paper presents a novel encoding method for scene change detection and an appearance-based topological localization framework. A relation computation over convex hull points is used to compare the similarity between scenes. It relies on the relative ordering of feature strengths rather than directly on the feature vectors. We first construct multiple convex hulls over the detected features and then compile statistics to code the hull points through a vector magnitude comparison. Finally, the hull points are represented by binary codes. The codes are suitable for scene change detection and visual place recognition by statistical analysis. The experimental results show the coding method is robust under varying environments.
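The idea of coding by relative ordering rather than raw feature values can be sketched with a census-style comparison. This is an illustrative simplification, not the paper's exact scheme: each hull point gets one bit per other point, set when its feature strength is larger.

```python
def census_code(strengths, i):
    """Binary code for hull point i: bit k is 1 iff strengths[i] > strengths[k]."""
    return [1 if strengths[i] > s else 0
            for k, s in enumerate(strengths) if k != i]

strengths = [0.8, 0.3, 0.5, 0.9]
print(census_code(strengths, 0))  # → [1, 1, 0]
```

Because only the ordering matters, scaling all strengths by a constant (e.g. a global illumination change) leaves every code unchanged, which is the robustness property the paper exploits.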
Sensors | 2014
Huei-Yung Lin; Min-Liang Wang
In this paper, we present a framework for the hybrid omnidirectional and perspective robot vision system. Based on the hybrid imaging geometry, a generalized stereo approach is developed via the construction of virtual cameras. It is then used to rectify the hybrid image pair using the perspective projection model. The proposed method not only simplifies the computation of epipolar geometry for the hybrid imaging system, but also facilitates the stereo matching between the heterogeneous image formation. Experimental results for both the synthetic data and real scene images have demonstrated the feasibility of our approach.
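The virtual-camera idea at the core of the rectification can be sketched in a few lines. This is a hedged illustration, not the paper's full hybrid geometry: a viewing ray recovered from the omnidirectional image is reprojected through a virtual pinhole camera so both images of the hybrid pair can be matched under one perspective model. The focal length and ray below are made-up values.

```python
def reproject(ray, f=500.0):
    """Project a 3-D viewing ray through a virtual pinhole camera: u = f*X/Z, v = f*Y/Z."""
    x, y, z = ray
    return (f * x / z, f * y / z)

ray = (0.2, -0.1, 1.0)          # unit-depth ray from the omnidirectional image
print(reproject(ray))           # → (100.0, -50.0)
```

Once both sensors' rays are expressed in this common perspective frame, standard epipolar rectification and stereo matching apply directly, which is the simplification the paper claims.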
International Conference on Computer Vision | 2009
Huei-Yung Lin; Min-Liang Wang
In this paper we present a general framework for the hybrid omnidirectional and perspective imaging system. Based on the hybrid imaging geometry, a generalized stereo approach is developed via the construction of virtual cameras. It is then used to rectify the hybrid image pair using the perspective projection model. The proposed method not only simplifies the computation of epipolar geometry for the hybrid imaging system, but also facilitates the stereo matching between the heterogeneous image formation. Experimental results for both the synthetic and real scene images have demonstrated the feasibility of our approach.
International Symposium on Consumer Electronics | 2013
Min-Liang Wang; Jing-Jen Wu; Pei-Yuan Lee; Ming-Hsien Hu; Atul Kumar; Li-Xun Chen; Kai-Che Liu; Jacques Marescaux; Stéphane Nicolau; Anant Suraj Vemuri; Luc Soler
This paper proposes a technique for skin curve registration based on landmark detection and a calibrated camera-projector system for medical applications. The technique registers a precomputed 3D model to the 2D images of the video frames. The algorithm first computes the 3D locations of landmarks using the calibrated camera-projector system and matches them to the corresponding landmarks in the 3D model obtained from a CT scan. We then generate an image from the registered 3D spinal model and project it onto the patient's skin, so that surgeons can follow the surgical procedure with their own eyes. The experiments demonstrate the proposed registration method on both animal (porcine) and real patient data, evaluated by several surgeons for spinal surgery.
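The 3D landmark computation from a calibrated camera-projector pair amounts to triangulation: the landmark lies where the camera ray and the projector ray (nearly) meet. A minimal sketch, with made-up rays and the standard closest-point midpoint formula:

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def closest_midpoint(p1, d1, p2, d2):
    """Midpoint of the shortest segment between rays p1 + t*d1 and p2 + s*d2."""
    w0 = [a - b for a, b in zip(p1, p2)]
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w0), dot(d2, w0)
    denom = a * c - b * b          # zero only for parallel rays
    t = (b * e - c * d) / denom
    s = (a * e - b * d) / denom
    q1 = [p + t * u for p, u in zip(p1, d1)]
    q2 = [p + s * u for p, u in zip(p2, d2)]
    return [(x + y) / 2 for x, y in zip(q1, q2)]

# Toy camera at the origin and projector at (2,0,0), rays crossing at (1,0,1).
print(closest_midpoint((0, 0, 0), (1, 0, 1), (2, 0, 0), (-1, 0, 1)))
# → [1.0, 0.0, 1.0]
```

In the real system the ray origins and directions come from the camera and projector calibration; the midpoint form also tolerates the small ray skew that calibration noise introduces.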
Systems, Man and Cybernetics | 2012
Min-Liang Wang; Hurng-Sheng Wu; Chien-Hsing He; Wen-Tsai Huang; Huei-Yung Lin
This paper presents geometric techniques for self-localization improvement, especially for robots equipped with a single catadioptric camera. We take vertical line and intersection point matching into account and propose a novel descriptor named the "Double-Gaussian vector". The descriptor uses two Gaussian matrices to blur the processed image region and build the corresponding feature vectors, solving the vertical line matching between two consecutive video frames. For ground plane estimation, the lines perpendicular to the optical axis are extracted by two approximate curve equations, which then crop the ground plane area of the omnidirectional image. Sparse bundle adjustment (SBA) is adopted to iteratively compute the 3D matching points between two robot locations and optimize the robot pose estimation. The convergent 3D points are used to compute the robot poses and record the navigation trajectory. The experimental results show that the proposed methods significantly improve robot localization and navigation compared to the previous literature.
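The "two Gaussians" idea can be sketched in 1-D: blur the intensity profile around a candidate vertical line with two Gaussian kernels of different width and stack both responses into one feature vector. This is an illustrative simplification; the kernel radius and sigmas below are assumptions, not the paper's values.

```python
import math

def gaussian_kernel(sigma, radius):
    """Normalized 1-D Gaussian kernel of half-width `radius`."""
    k = [math.exp(-(i * i) / (2 * sigma * sigma)) for i in range(-radius, radius + 1)]
    s = sum(k)
    return [v / s for v in k]

def blur(profile, kernel):
    """1-D convolution with clamped (replicated) borders."""
    r = len(kernel) // 2
    out = []
    for i in range(len(profile)):
        acc = 0.0
        for j, w in enumerate(kernel):
            idx = min(max(i + j - r, 0), len(profile) - 1)
            acc += w * profile[idx]
        out.append(acc)
    return out

def double_gaussian_vector(profile, s1=1.0, s2=3.0, radius=3):
    """Concatenate a fine and a coarse Gaussian response into one descriptor."""
    return blur(profile, gaussian_kernel(s1, radius)) + blur(profile, gaussian_kernel(s2, radius))

vec = double_gaussian_vector([0, 0, 1, 0, 0, 1, 0])
print(len(vec))  # → 14
```

Comparing such vectors (e.g. by Euclidean distance) between two consecutive frames gives a matching score for each candidate vertical line; the coarse kernel tolerates small displacements while the fine one preserves discrimination.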
Systems, Man and Cybernetics | 2009
Chia-Hao Hsieh; Min-Liang Wang; Li-Wei Kao; Huei-Yung Lin
In this paper, we propose a self-localization and path-planning method for mobile robot navigation. An omnidirectional camera and infrared sensors are used to extract landmark information from the environment. Due to its large field of view, the omnidirectional camera allows the mobile robot to capture rich information about the environment. The landmark features are detected and extracted from the omnidirectional video camera, so the robot is able to navigate the environment automatically, learn the localization information, and avoid obstacles using the infrared sensors. The robot system can then use the localization information to plan a shortest path visiting particular locations prespecified by the user.
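The shortest-path step can be illustrated with Dijkstra's algorithm over a graph of locations; the paper does not specify its planner, so treat this as one standard choice, and the graph below as a made-up example.

```python
import heapq

def dijkstra(graph, start, goal):
    """Shortest path and its cost in a weighted graph {node: [(neighbor, weight), ...]}."""
    dist = {start: 0}
    prev = {}
    pq = [(0, start)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == goal:
            break
        if d > dist.get(u, float("inf")):
            continue                      # stale queue entry
        for v, w in graph[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    path, node = [goal], goal
    while node != start:                  # walk predecessors back to start
        node = prev[node]
        path.append(node)
    return path[::-1], dist[goal]

graph = {
    "A": [("B", 2), ("C", 5)],
    "B": [("A", 2), ("C", 1), ("D", 4)],
    "C": [("A", 5), ("B", 1), ("D", 1)],
    "D": [("B", 4), ("C", 1)],
}
print(dijkstra(graph, "A", "D"))  # → (['A', 'B', 'C', 'D'], 4)
```

Visiting several user-specified locations in sequence then reduces to chaining such queries between consecutive waypoints.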
International Conference on Advanced Robotics | 2013
Min-Liang Wang; Jing-Ren Wu; Li-Wei Kao; Huei-Yung Lin
This paper proposes a vision system that enables a wheeled robot to self-localize and navigate, together with a strategy simulator for robot soccer competition. The vision system is based on an omnidirectional camera and provides real-time image capture with goal and ball recognition software. It is a framework for visual self-localization of a mobile robot using a parametric model obtained from panoramic images of the competition environment. The strategy simulator simulates robot behaviors, such as the keeper, the attacker, and ball following, before they are applied to the real robot. Our experiments show that the vision system is fast and efficient for robot soccer competition, and that the simulator is useful for developing strategies.
International Symposium on Consumer Electronics | 2013
Min-Liang Wang; Anant Vemuri; Yolin Ho; Shi-Feng Yang; Huei-Yung Lin
This paper addresses the problem of hybrid instrument tracking using optical and electromagnetic devices for therapeutic applications. Concretely, the hybrid tracking method is a video tracking system based on a uniquely designed marker, and we present a scenario where the electromagnetic system is combined with the video-based system to obtain a cost-effective solution. An objective function is developed to merge the tensors representing the 3-D locations of the optical and electromagnetic signals, and the estimated instrument trajectory is updated simultaneously using a Kalman filter. The proposed approach has been implemented, and results are shown on both synthetic and animal data.
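The fusion step can be sketched with a 1-D Kalman update: two noisy position measurements (optical and electromagnetic) are folded into one estimate, weighted by their variances. This is a minimal illustration, not the paper's full tensor formulation, and the noise values below are made up.

```python
def kalman_update(x, P, z, R):
    """Fuse state estimate (x, P) with measurement z of variance R."""
    K = P / (P + R)                        # Kalman gain: trust z more when R is small
    return x + K * (z - x), (1 - K) * P

x, P = 0.0, 1.0                            # prior position and variance
x, P = kalman_update(x, P, 1.0, 0.5)       # optical measurement
x, P = kalman_update(x, P, 0.8, 0.2)       # electromagnetic measurement
print(round(x, 3), round(P, 3))            # → 0.75 0.125
```

Note the posterior variance (0.125) is lower than either sensor's alone, which is the point of combining the two modalities: the fused trajectory is more certain than what either device provides by itself.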
Systems, Man and Cybernetics | 2016
Yan-Ru Chen; Huei-Yung Lin; Min-Liang Wang
In orthopedic surgery, three-dimensional models are usually reconstructed from CT or MRI images. In urgent operations, the most common way to reconstruct a 3-D model is from C-arm fluoroscopic images. However, this is harmful to the human body because of the ionizing radiation. In this work, we present a model deformation approach for 3-D reconstruction from medical imaging. Our approach uses the visual hull algorithm to reconstruct a rough 3-D model from C-arm images and then deforms it according to a reference model. During the deformation stage, some adjustments are made to the models, and the coherent point drift algorithm is applied to match the 3-D points of the two models. Experimental results demonstrate the feasibility of our technique for 3-D model reconstruction from C-arm imaging.
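The visual hull step can be sketched as silhouette carving: a voxel survives only if it projects inside the silhouette in every view. This toy example uses two orthographic "cameras" that simply drop one coordinate; real C-arm views would use calibrated projection matrices.

```python
def carve(voxels, views):
    """Keep voxels that project inside every silhouette.

    views: list of (project, silhouette) pairs, where project maps a voxel
    to a pixel and silhouette is the set of foreground pixels in that view.
    """
    return [v for v in voxels
            if all(project(v) in sil for project, sil in views)]

# Two orthogonal toy views over a 2x2x2 voxel grid.
top = (lambda v: (v[0], v[1]), {(0, 0), (1, 0)})    # looking down the z-axis
side = (lambda v: (v[0], v[2]), {(0, 0), (1, 1)})   # looking down the y-axis
voxels = [(x, y, z) for x in range(2) for y in range(2) for z in range(2)]
print(carve(voxels, [top, side]))  # → [(0, 0, 0), (1, 0, 1)]
```

The carved volume is only a rough over-approximation of the bone, which is why the paper then deforms a reference model onto it rather than using the hull directly.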