Chao-Ho Chen
National Kaohsiung University of Applied Sciences
Publications
Featured research published by Chao-Ho Chen.
Journal of Visual Communication and Image Representation | 2015
Wu-Chih Hu; Chao-Ho Chen; Tsong-Yi Chen; Deng-Yuan Huang; Zong-Che Wu
Highlights: The proposed method has good performance for a moving camera without additional sensors; it works well for tracking overlapping objects with scale changes; and it outperforms state-of-the-art methods. This paper presents an effective method for the detection and tracking of multiple moving objects from a video sequence captured by a moving camera without additional sensors. Moving object detection is relatively difficult for video captured by a moving camera, since camera motion and object motion are mixed. In the proposed method, the feature points in the frames are found and then classified as belonging to foreground or background features. Next, moving object regions are obtained using an integration scheme based on foreground feature points and foreground regions, which are obtained using an image difference scheme. Then, a compensation scheme based on the motion history of the continuous motion contours obtained from three consecutive frames is applied to enlarge the regions of moving objects. Moving objects are detected using a refinement scheme and a minimum bounding box. Finally, moving object tracking is achieved using a Kalman filter based on the center of gravity of the moving object region in the minimum bounding box. Experimental results show that the proposed method has good performance.
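The feature-point classification and Kalman-based tracking steps can be sketched roughly as follows (a minimal OpenCV illustration, not the authors' implementation; the homography-based foreground/background split and the constant-velocity Kalman model are assumptions):

```python
import cv2
import numpy as np

def classify_feature_points(prev_gray, curr_gray, reproj_thresh=3.0):
    """Track corners between frames and split them into background points
    (consistent with the dominant camera motion) and foreground points."""
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=500,
                                  qualityLevel=0.01, minDistance=7)
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, pts, None)
    ok = status.ravel() == 1
    p0, p1 = pts[ok], nxt[ok]
    # Dominant (camera) motion as a homography; RANSAC inliers ~ background,
    # outliers ~ candidate moving-object features.
    H, inliers = cv2.findHomography(p0, p1, cv2.RANSAC, reproj_thresh)
    inliers = inliers.ravel().astype(bool)
    return p1[~inliers], p1[inliers], H  # foreground pts, background pts, camera motion

def make_centroid_kalman():
    """Constant-velocity Kalman filter on the bounding-box centroid (x, y)."""
    kf = cv2.KalmanFilter(4, 2)
    kf.transitionMatrix = np.array([[1, 0, 1, 0],
                                    [0, 1, 0, 1],
                                    [0, 0, 1, 0],
                                    [0, 0, 0, 1]], np.float32)
    kf.measurementMatrix = np.eye(2, 4, dtype=np.float32)
    kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-2
    kf.measurementNoiseCov = np.eye(2, dtype=np.float32) * 1e-1
    return kf
```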
Journal of Visual Communication and Image Representation | 2014
Yeu-Horng Shiau; Pei Yin Chen; Hsiao-Bai Yang; Chao-Ho Chen; S.-S. Wang
In this paper, we propose an efficient method to remove haze from a single image based on the atmospheric scattering model and the dark channel prior. Our approach applies a weighted technique that automatically finds the possible atmospheric lights and mixes these candidates to refine the atmospheric light. Then, a novel prior, termed the difference prior, is employed to estimate the transmission, which mitigates the halo artifacts around sharp edges. The method has a low computational cost and is suitable for real-time applications. The experimental results show that our approach obtains results comparable to those of previous methods.
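For context, a baseline dark-channel dehazing pipeline can be sketched as below; the paper's weighted atmospheric-light mixing and difference prior are not reproduced here, so the atmospheric-light and transmission estimates are the standard ones rather than the authors' refinements.

```python
import cv2
import numpy as np

def dark_channel(img, patch=15):
    """Per-pixel minimum over color channels, then a local minimum filter."""
    min_rgb = img.min(axis=2)
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (patch, patch))
    return cv2.erode(min_rgb, kernel)

def dehaze(img_bgr, omega=0.95, t0=0.1, patch=15):
    """Baseline dark-channel dehazing: I = J*t + A*(1-t)  =>  J = (I - A)/t + A."""
    img = img_bgr.astype(np.float32) / 255.0
    dark = dark_channel(img, patch)
    # Atmospheric light A: mean color of the brightest 0.1% dark-channel pixels.
    n = max(1, int(dark.size * 0.001))
    idx = np.unravel_index(np.argsort(dark, axis=None)[-n:], dark.shape)
    A = img[idx].mean(axis=0)
    # Transmission estimate from the dark channel of the normalized image.
    t = 1.0 - omega * dark_channel(img / A, patch)
    t = np.clip(t, t0, 1.0)[..., None]
    J = (img - A) / t + A
    return np.clip(J * 255, 0, 255).astype(np.uint8)
```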
Journal of Visual Communication and Image Representation | 2012
Deng-Yuan Huang; Chao-Ho Chen; Wu-Chih Hu; Sing-Syong Su
An efficient method for detecting moving vehicles based on the filtering of swinging trees and raindrops is proposed. To extract moving objects from the background, an adaptive background subtraction scheme with a shadow elimination model is used. Swinging trees are removed from foreground objects to reduce the computational complexity of subsequent tracking. Raindrops are removed from foreground objects when necessary. Performance evaluations are carried out using seven real-world traffic image sequences. Experimental results show average recognition rates of 96.83% and 97.20% for swinging trees and raindrops, respectively, indicating the feasibility of the proposed method.
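A minimal sketch of the foreground-extraction idea is shown below, using OpenCV's MOG2 model as a stand-in for the paper's adaptive background subtraction and its shadow flag for the shadow-elimination model; the swinging-tree and raindrop filtering is approximated here by a simple area filter.

```python
import cv2
import numpy as np

# MOG2 is only a stand-in for the paper's adaptive background model; its
# built-in shadow detection plays the role of the shadow-elimination step.
bg_model = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16,
                                              detectShadows=True)

def foreground_mask(frame, min_area=400):
    """Foreground mask with shadow pixels (label 127) dropped and small blobs
    (e.g. raindrops or swaying leaves) removed by an area filter."""
    mask = bg_model.apply(frame)
    _, mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)  # keep only 255
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))
    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
    for i in range(1, n):
        if stats[i, cv2.CC_STAT_AREA] < min_area:
            mask[labels == i] = 0
    return mask
```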
Journal of Visual Communication and Image Representation | 2014
Deng-Yuan Huang; Chao-Ho Chen; Tsong-Yi Chen; Wu-Chih Hu; Bo-Cin Chen
Highlights: Camera tampering and abnormalities are examined for a video surveillance system; the analyses of brightness, edge details, and histogram information are computationally efficient; the system runs at 20-30 frames/s, meeting the requirement of real-time operation; an average of 4.4% missed events indicates the feasibility of the proposed method. Camera tampering may indicate that a criminal act is occurring. Common examples of camera tampering are turning the camera lens to point in a different direction (i.e., camera motion) and covering the lens with opaque objects or paint (i.e., camera occlusion). Moreover, various abnormalities such as screen shaking, fogging, defocus, color cast, and screen flickering can strongly deteriorate the performance of a video surveillance system. This study proposes an automated method for rapidly detecting camera tampering and various abnormalities in a video surveillance system. The proposed method is based on analyses of brightness, edge details, histogram distribution, and high-frequency information, making it computationally efficient. The proposed system runs at a frame rate of 20-30 frames/s, meeting the requirement of real-time operation. Experimental results show the superiority of the proposed method, with an average of 4.4% missed events compared to existing works.
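The brightness, edge, and histogram cues can be illustrated with a short sketch like the following; the thresholds and decision rules are arbitrary placeholders, not the values used in the paper.

```python
import cv2
import numpy as np

def tampering_score(frame_gray, ref_gray, bins=64):
    """Compare a frame against a reference background using the cues named in
    the paper: brightness, edge density, and histogram similarity."""
    brightness_drop = float(ref_gray.mean() - frame_gray.mean())
    edge_ratio = (cv2.Canny(frame_gray, 50, 150).mean()
                  / max(cv2.Canny(ref_gray, 50, 150).mean(), 1e-6))
    h1 = cv2.calcHist([frame_gray], [0], None, [bins], [0, 256])
    h2 = cv2.calcHist([ref_gray], [0], None, [bins], [0, 256])
    hist_sim = cv2.compareHist(cv2.normalize(h1, h1), cv2.normalize(h2, h2),
                               cv2.HISTCMP_CORREL)
    # Illustrative decision rules only, not the paper's thresholds.
    occluded = edge_ratio < 0.3 and hist_sim < 0.5        # e.g. lens covered
    moved = hist_sim < 0.3 and abs(brightness_drop) > 30  # e.g. camera turned
    return {"occlusion": occluded, "motion": moved,
            "edge_ratio": edge_ratio, "hist_similarity": hist_sim}
```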
Journal of Visual Communication and Image Representation | 2012
Wu-Chih Hu; Chao-Ho Chen; Deng-Yuan Huang; Yan-Ting Ye
A difference-based scheme using object structures and color analysis is proposed for video object segmentation in rainy situations. Since shadows and color reflections on the wet ground pose problems for conventional video object segmentation, the proposed method combines background construction-based and foreground extraction-based video object segmentation. Foreground and background pixels in the video sequence are first separated using histogram-based change detection, from which the background is constructed; initial moving object masks, based on a frame difference mask and a background subtraction mask, are then used to obtain coarse object regions. Shadow regions and color-reflection regions on the wet ground are removed from the initial moving object masks via a diamond window mask and color analysis of the moving object. Finally, the boundary of the moving object is refined using connected component labeling and morphological operations. Experimental results show that the proposed method performs well for video object segmentation in rainy situations.
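A rough sketch of combining the frame-difference and background-subtraction masks into an initial object mask might look like this (the diamond-window shadow removal and color analysis are omitted, and the AND combination of the two masks is an assumption):

```python
import cv2
import numpy as np

def initial_object_mask(prev, curr, background, thresh=25, min_area=500):
    """Coarse moving-object mask from the AND of a frame-difference mask and a
    background-subtraction mask, cleaned up with morphology and labeling."""
    fd = cv2.absdiff(cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY),
                     cv2.cvtColor(curr, cv2.COLOR_BGR2GRAY))
    bs = cv2.absdiff(cv2.cvtColor(background, cv2.COLOR_BGR2GRAY),
                     cv2.cvtColor(curr, cv2.COLOR_BGR2GRAY))
    _, fd = cv2.threshold(fd, thresh, 255, cv2.THRESH_BINARY)
    _, bs = cv2.threshold(bs, thresh, 255, cv2.THRESH_BINARY)
    mask = cv2.bitwise_and(fd, bs)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, np.ones((5, 5), np.uint8))
    # Keep only sufficiently large connected components as coarse object regions.
    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
    keep = [i for i in range(1, n) if stats[i, cv2.CC_STAT_AREA] > min_area]
    return np.isin(labels, keep).astype(np.uint8) * 255
```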
Intelligent Information Hiding and Multimedia Signal Processing | 2013
Chao-Ho Chen; Tsong-Yi Chen; Min-Tsung Wu; Tsann-Tay Tang; Wu-Chih Hu
This paper is dedicated to a license plate recognition (LPR) system for moving vehicles using a car video camera. The proposed LPR method mainly consists of preprocessing, plate location, and character segmentation and recognition. First, possible license-plate regions are enhanced in the captured images through the proposed edge detection method and gradient-based binarization. Then, the correct plate regions are selected by analyzing the horizontal projection and the corner distribution. Vertical Sobel processing is performed on the segmented license-plate region, and the proposed weighted-binarization method is then employed to segment each character of the license plate, followed by skew correction. Finally, a probabilistic neural network (PNN) technique is applied to recognize each segmented character. Experimental results show that the accuracy rates of license-plate location and license-plate recognition reach 91.7% and 88.5%, respectively.
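A simplified sketch of the plate-location stage (vertical-edge emphasis, binarization, and candidate filtering) is given below; the edge detector, the Otsu binarization, and all parameters are illustrative stand-ins rather than the proposed edge detection and weighted-binarization methods.

```python
import cv2
import numpy as np

def locate_plate_candidates(img_bgr, min_aspect=2.0, max_aspect=6.0):
    """Rough license-plate localization: vertical-edge emphasis, binarization,
    morphological closing, and aspect-ratio filtering of connected regions."""
    gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Sobel(gray, cv2.CV_8U, 1, 0, ksize=3)  # strong vertical strokes
    _, binary = cv2.threshold(edges, 0, 255,
                              cv2.THRESH_BINARY | cv2.THRESH_OTSU)
    closed = cv2.morphologyEx(binary, cv2.MORPH_CLOSE,
                              np.ones((3, 17), np.uint8))  # merge character strokes
    contours, _ = cv2.findContours(closed, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    plates = []
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        if h > 0 and min_aspect <= w / h <= max_aspect and w * h > 1000:
            plates.append((x, y, w, h))
    return plates
```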
ECC (1) | 2014
Wu-Chih Hu; Chao-Ho Chen; Chih-Min Chen; Tsong-Yi Chen
This paper presents an effective method to detect moving objects in videos captured by a moving camera. Moving object detection is relatively difficult for videos captured by a moving camera, since not only do the objects move, but the frames also shift. In the proposed scheme, the feature points in the frames are first found and then classified into foreground and background points. Next, the foreground regions and the image difference are obtained and then merged to obtain moving object contours. Finally, the moving object is detected based on the motion history of the continuous motion contours and refinement schemes. Experimental results show that the proposed method performs well in terms of moving object detection.
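The motion-history idea over consecutive frames can be sketched as follows (an illustration only; the paper's contour merging and refinement schemes are reduced here to thresholded differences over three frames):

```python
import cv2

def three_frame_motion_mask(f0, f1, f2, thresh=25, min_area=200):
    """Accumulate motion evidence from three consecutive grayscale frames:
    a pixel is kept if it changed in both the (f0, f1) and (f1, f2) differences."""
    d01 = cv2.threshold(cv2.absdiff(f0, f1), thresh, 255, cv2.THRESH_BINARY)[1]
    d12 = cv2.threshold(cv2.absdiff(f1, f2), thresh, 255, cv2.THRESH_BINARY)[1]
    motion = cv2.bitwise_and(d01, d12)
    # Contours of the accumulated motion serve as candidate object contours.
    contours, _ = cv2.findContours(motion, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    boxes = [cv2.boundingRect(c) for c in contours
             if cv2.contourArea(c) > min_area]
    return motion, boxes
```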
Journal of Visual Communication and Image Representation | 2017
Deng-Yuan Huang; Chao-Ho Chen; Tsong-Yi Chen; Wu-Chih Hu; Kai-Wei Feng
This paper presents a driver assistance system for vehicle detection and inter-vehicle distance estimation using a single-lens video camera on urban/suburban roads. Vehicle detection on urban/suburban roads is more challenging due to their high scene complexity. In this work, the still area of the frame inside the host vehicle is first removed using temporal differencing, followed by detection of the vanishing point. Road regions are then segmented using the vanishing point and the road's edge lines. Shadow regions at the bottoms of vehicles, verified using the HOG feature and an SVM classifier, are used to detect vehicle positions. The distances between the host vehicle and the vehicles in front of it are estimated from the locations of the detected vehicles and the vanishing point. Experimental results show that detection performance varies with the scene on urban/suburban roads, with a detection rate of up to 94.08%, indicating the feasibility of the proposed method.
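The HOG + SVM verification of shadow-based candidate regions might be sketched as below; the window size, HOG parameters, and classifier choice are assumptions, and the classifier is presumed to be trained on labeled vehicle/non-vehicle patches elsewhere.

```python
import cv2
from sklearn.svm import LinearSVC

# 64x64 HOG window; these parameters are illustrative, not the paper's values.
hog = cv2.HOGDescriptor((64, 64), (16, 16), (8, 8), (8, 8), 9)
svm = LinearSVC()
# svm.fit(train_features, train_labels)  # training on labeled patches assumed

def verify_vehicle(frame_bgr, box):
    """Verify a shadow-based candidate region (x, y, w, h) with HOG + SVM."""
    x, y, w, h = box
    patch = cv2.resize(frame_bgr[y:y + h, x:x + w], (64, 64))
    feat = hog.compute(cv2.cvtColor(patch, cv2.COLOR_BGR2GRAY)).reshape(1, -1)
    return svm.predict(feat)[0] == 1  # 1 = vehicle, 0 = non-vehicle (assumed labels)
```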
International Conference on Robot Vision and Signal Processing | 2015
Chao-Ho Chen; Tsong-Yi Chen; Deng-Yuan Huang; Kai-Wei Feng
This paper is dedicated to front vehicle detection and distance estimation using a single-lens car video camera on urban and suburban roads. The proposed method mainly consists of road area detection followed by front vehicle detection and distance calculation. First, the Hough transform is used to detect lines, from which the appropriate straight lines are selected and their intersection points are used to obtain the vanishing point. Then, strong right and left edges are extracted and connected to the vanishing point to segment the road area, and within this area the bottom shadow of the front vehicle is used to locate the vehicle's position. Finally, the distance between the host vehicle and the front vehicle is calculated based on the position of the vehicle and the vanishing point. Experimental results show that the proposed technique detects front vehicles on urban and suburban roads reasonably well, with a detection rate of at least 78%.
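A sketch of the vanishing-point estimation from Hough lines, together with a flat-road distance estimate, is given below; the line-selection heuristics and the distance formula d = f*H / (y_vehicle - y_vanish) are standard assumptions rather than the authors' exact formulation.

```python
import cv2
import numpy as np

def vanishing_point_from_lanes(gray):
    """Estimate the road vanishing point as the intersection of a dominant
    left-leaning and right-leaning edge found with the Hough transform."""
    edges = cv2.Canny(gray, 80, 200)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=80,
                            minLineLength=60, maxLineGap=10)
    if lines is None:
        return None
    left = right = None
    for x1, y1, x2, y2 in lines[:, 0]:
        slope = (y2 - y1) / (x2 - x1 + 1e-6)
        if slope < -0.3 and left is None:
            left = (x1, y1, x2, y2)
        elif slope > 0.3 and right is None:
            right = (x1, y1, x2, y2)
    if left is None or right is None:
        return None
    # Intersection of the two lines in homogeneous coordinates.
    l1 = np.cross([left[0], left[1], 1], [left[2], left[3], 1])
    l2 = np.cross([right[0], right[1], 1], [right[2], right[3], 1])
    vp = np.cross(l1, l2)
    return vp[0] / vp[2], vp[1] / vp[2]

def distance_from_row(y_vehicle, y_vanish, cam_height_m, focal_px):
    """Flat-road pinhole model: d = f * H / (y_vehicle - y_vanish)."""
    return focal_px * cam_height_m / max(y_vehicle - y_vanish, 1e-6)
```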
International Conference on Robot Vision and Signal Processing | 2015
Chao-Ho Chen; Tsong-Yi Chen; Wu-Chih Hu; Min-Yang Peng
This paper is dedicated to video stabilization for fast-moving photography based on feature point classification, especially for cameras moving at various speeds. The proposed method mainly consists of feature point detection and classification, calculation of the global motion vector and rotation angle of each frame, and frame compensation. Feature points are first detected and then classified into foreground (i.e., moving object) and background types based on multiple view geometry and the DBSCAN algorithm. Then, the global feature points and their optical flows are used to calculate the global motion vector and global rotation angle of the frame. Finally, both the global motion vector and the global rotation angle are refined through motion smoothing with a Kalman filter to provide better frame compensation and generate stable frames. Experimental results show that the proposed method stabilizes frames captured by a moving camera reasonably well.
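The global-motion estimation and compensation steps might be sketched as follows (a simplified illustration; the DBSCAN parameters and the partial-affine motion model are assumptions, and the Kalman smoothing of the motion trajectory is omitted):

```python
import cv2
import numpy as np
from sklearn.cluster import DBSCAN

def global_motion(prev_gray, curr_gray):
    """Estimate the global translation and rotation between two frames from
    background feature points; DBSCAN separates the dominant (background)
    motion cluster from moving-object outliers."""
    pts = cv2.goodFeaturesToTrack(prev_gray, 400, 0.01, 7)
    nxt, st, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, pts, None)
    ok = st.ravel() == 1
    p0, p1 = pts[ok].reshape(-1, 2), nxt[ok].reshape(-1, 2)
    flows = p1 - p0
    labels = DBSCAN(eps=2.0, min_samples=5).fit_predict(flows)
    if (labels >= 0).any():
        bg = labels == np.bincount(labels[labels >= 0]).argmax()  # largest cluster
    else:
        bg = np.ones(len(flows), dtype=bool)  # fall back to all points
    M, _ = cv2.estimateAffinePartial2D(p0[bg], p1[bg])
    dx, dy = M[0, 2], M[1, 2]
    angle = np.arctan2(M[1, 0], M[0, 0])
    return dx, dy, angle

def compensate(frame, dx, dy, angle):
    """Warp the frame by (approximately) the inverse of the global motion;
    in the paper this motion would first be smoothed with a Kalman filter."""
    h, w = frame.shape[:2]
    M = cv2.getRotationMatrix2D((w / 2, h / 2), -np.degrees(angle), 1.0)
    M[0, 2] -= dx
    M[1, 2] -= dy
    return cv2.warpAffine(frame, M, (w, h))
```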