
Publication


Featured research published by Fan-Chieh Cheng.


IEEE Transactions on Broadcasting | 2011

Illumination-Sensitive Background Modeling Approach for Accurate Moving Object Detection

Fan-Chieh Cheng; Shih-Chia Huang; Shanq-Jang Ruan

Background subtraction generates a background model from the video sequence in order to detect foreground objects for many computer vision applications, including traffic security, human-machine interaction, object recognition, and so on. In general, many background subtraction approaches cannot update the current status of the background image in scenes with sudden illumination change. This is especially true for motion detection when a light is suddenly switched on or off. This paper proposes an illumination-sensitive background modeling approach to analyze illumination change and detect moving objects. To handle sudden illumination change, an illumination evaluation is used to select between two background candidates: a light background image and a dark background image. Based on the background model and the illumination evaluation, the binary mask of moving objects is generated by the proposed thresholding function. Experimental results demonstrate the effectiveness of the proposed approach in providing a promising detection outcome at low computational cost.
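The candidate-selection and thresholding steps described above can be sketched as follows. This is a minimal illustration, not the paper's exact algorithm: the mean-luminance selection rule, the threshold `tau`, and the function names are assumptions.

```python
# Sketch: keep a light and a dark background candidate, pick one per
# frame by global mean luminance, then threshold the absolute
# difference to obtain a binary foreground mask.

def mean(img):
    return sum(sum(row) for row in img) / (len(img) * len(img[0]))

def detect_foreground(frame, bg_light, bg_dark, tau=30):
    # Illumination evaluation (assumed rule): choose the candidate
    # whose global mean luminance is closer to the current frame's.
    m = mean(frame)
    bg = bg_light if abs(m - mean(bg_light)) <= abs(m - mean(bg_dark)) else bg_dark
    # Binary mask: a pixel is foreground if it differs from the chosen
    # background by more than tau.
    return [[1 if abs(p - b) > tau else 0 for p, b in zip(fr, br)]
            for fr, br in zip(frame, bg)]
```

With a bright frame, the light candidate is selected and only genuinely changed pixels survive the threshold.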


Engineering Applications of Artificial Intelligence | 2013

Fast and efficient median filter for removing 1-99% levels of salt-and-pepper noise in images

Mu-Hsien Hsieh; Fan-Chieh Cheng; Mon-Chau Shie; Shanq-Jang Ruan

This paper proposes a new median filter that uses prior information to capture natural pixels for restoration. In addition to being very efficient in logic execution, the proposed filter restores images corrupted by 1-99% salt-and-pepper impulse noise to satisfactory quality. Without any iteration for noise detection, it intuitively and simply recognizes impulse noise while keeping other pixels intact as non-noise. Depending on the noise ratio of an image, two different sets of masked pixels are employed separately as candidates for median finding. Furthermore, placing no limit on the size of the mask window ensures that a proper median can always be found. The simple logic of the proposed algorithm yields significant gains in the fidelity of the restored image. Moreover, the very fast execution speed of the proposed filter makes it well suited to real-time processing. Relevant experimental results on subjective visualization and objective digital measures are reported to validate the robustness of the proposed filter.
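A hedged sketch of the detection-then-median idea: treat only extreme values (0 and 255) as impulse noise and grow the mask window until at least one clean candidate appears. The paper's actual candidate-selection rules differ; `is_noise`, the window-growth policy, and the fallback for fully corrupted neighborhoods are assumptions here.

```python
from statistics import median

def is_noise(v):
    return v == 0 or v == 255  # salt-and-pepper pixels sit at the extremes

def restore(img):
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(h):
        for x in range(w):
            if not is_noise(img[y][x]):
                continue  # keep non-noise pixels intact
            r = 1
            while True:
                # gather clean pixels in the (2r+1) x (2r+1) window
                cand = [img[j][i]
                        for j in range(max(0, y - r), min(h, y + r + 1))
                        for i in range(max(0, x - r), min(w, x + r + 1))
                        if not is_noise(img[j][i])]
                if cand:
                    out[y][x] = median(cand)
                    break
                if r > max(h, w):
                    break  # fully corrupted image: leave pixel unchanged
                r += 1  # no window-size limit: enlarge until a candidate exists
    return out
```

Only pixels flagged as noise are replaced; everything else passes through untouched, which is what keeps the filter fast.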


Systems, Man, and Cybernetics | 2011

Scene Analysis for Object Detection in Advanced Surveillance Systems Using Laplacian Distribution Model

Fan-Chieh Cheng; Shih-Chia Huang; Shanq-Jang Ruan

In this paper, we propose a novel background subtraction approach in order to accurately detect moving objects. Our method involves three important proposed modules: a block alarm module, a background modeling module, and an object extraction module. The block alarm module efficiently checks each block for the presence of either a moving object or background information. This is accomplished by using temporal differencing pixels of the Laplacian distribution model, and it allows the subsequent background modeling module to process only those blocks that were found to contain background pixels. Next, the background modeling module is employed to generate a high-quality adaptive background model using a unique two-stage training procedure and a novel mechanism for recognizing changes in illumination. As the final step of our process, the object extraction module computes the binary object detection mask by applying a suitable threshold value obtained through our proposed threshold training procedure. The performance of our proposed method was analyzed by quantitative and qualitative evaluation. The overall results show that our proposed method attains a substantially higher degree of efficacy, outperforming other state-of-the-art methods by Similarity and F1 accuracy rates of up to 35.50% and 26.09%, respectively.
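The block alarm idea can be illustrated with a toy Laplacian tail test: temporal differences of static background pixels are modeled as Laplacian noise, so a block whose differences are improbable under that model is flagged as containing motion. The scale `b`, the probability cutoff, and the flagging ratio below are assumptions, not the paper's trained parameters.

```python
import math

def laplacian_pdf(d, b=5.0):
    # zero-mean Laplacian density with scale b
    return math.exp(-abs(d) / b) / (2 * b)

def block_alarm(prev_block, curr_block, b=5.0, p_min=1e-3, ratio=0.5):
    # flag the block if most temporal differences fall in the
    # low-probability tail of the Laplacian model
    diffs = [c - p for pr, cr in zip(prev_block, curr_block)
                   for p, c in zip(pr, cr)]
    unlikely = sum(1 for d in diffs if laplacian_pdf(d, b) < p_min)
    return unlikely / len(diffs) > ratio
```

Blocks that pass the test untouched are the only ones the background modeling stage would need to process.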


Systems, Man, and Cybernetics | 2011

Efficient contrast enhancement using adaptive gamma correction and cumulative intensity distribution

Yi-Sheng Chiu; Fan-Chieh Cheng; Shih-Chia Huang

This paper proposes an efficient histogram modification method for contrast enhancement, which plays a significant role in digital image processing, computer vision, and pattern recognition. We present an automatic transformation technique that improves the brightness of dimmed images based on gamma correction and the probability distribution of luminance pixels. Experimental results show that the proposed method produces enhanced images of comparable or higher quality than previous state-of-the-art methods.
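One common formulation consistent with this abstract derives a per-level gamma from the cumulative distribution of luminance; the exact weighting used in the paper may differ, so treat this lookup-table sketch, and in particular the `1 - cdf` rule, as an assumption.

```python
def adaptive_gamma_lut(hist, l_max=255):
    # cumulative distribution of the luminance histogram
    total = sum(hist)
    cdf, acc = [], 0
    for h in hist:
        acc += h
        cdf.append(acc / total)
    # assumed rule: gamma(l) = 1 - cdf(l), so frequent dark levels get a
    # small gamma and are lifted toward brighter values
    return [round(l_max * (l / l_max) ** (1 - cdf[l])) for l in range(l_max + 1)]
```

Applying the lookup table to each pixel performs the enhancement; dark mid-tones are brightened while black and white endpoints are preserved.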


IEEE Transactions on Intelligent Transportation Systems | 2012

Accurate Motion Detection Using a Self-Adaptive Background Matching Framework

Fan-Chieh Cheng; Shanq-Jang Ruan

Automatic video surveillance is of critical importance to security in commercial, law enforcement, military, and many other environments due to terrorist activity and other social problems. Motion detection generally plays an important role in separating moving objects from the background in video surveillance systems. This paper proposes a novel motion detection method with a background model module and an object mask generation module. We propose a self-adaptive background matching method that selects the background pixel at each frame for background model generation. After the adaptive background model is generated, the binary motion mask is computed by the proposed object mask generation module, which consists of absolute difference estimation and the Cauchy distribution model. We analyze the detection quality of the proposed method by qualitative visual inspection. Quantitative accuracy is also measured using four metrics, namely Recall, Precision, Similarity, and F1. Experimental results demonstrate the effectiveness of the proposed method in providing a promising detection outcome at a low computational cost.
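The Cauchy-model step might be sketched as follows: absolute frame/background differences of true background pixels are treated as Cauchy-distributed noise, and pixels whose difference is too improbable under that model become foreground. The scale `gamma` and the probability cutoff are illustrative assumptions.

```python
import math

def cauchy_pdf(d, gamma=4.0):
    # zero-centered Cauchy density with scale gamma
    return gamma / (math.pi * (d * d + gamma * gamma))

def motion_mask(frame, background, gamma=4.0, p_min=1e-3):
    # pixel is marked moving (1) when its absolute difference lies in
    # the low-probability tail of the Cauchy model
    return [[1 if cauchy_pdf(abs(p - b), gamma) < p_min else 0
             for p, b in zip(fr, br)]
            for fr, br in zip(frame, background)]
```

The heavy tail of the Cauchy distribution makes this tolerant of moderate background jitter while still catching large changes.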


International Conference on Multimedia and Expo | 2010

Advanced background subtraction approach using Laplacian distribution model

Fan-Chieh Cheng; Shih-Chia Huang; Shanq-Jang Ruan

In this paper, we propose a novel background subtraction approach in order to accurately detect moving objects. Our method involves three important proposed modules: a block alarm module, a background modeling module, and an object extraction module. The block alarm module efficiently checks each block for the presence of either a moving object or background information. This is accomplished by using temporal differencing pixels of the Laplacian distribution model, and it allows the subsequent background modeling module to process only those blocks found to contain background pixels. In the background modeling module, a unique two-stage background training procedure, Rough Training followed by Precise Training, is performed to generate a high-quality adaptive background model. As the final step of our process, the object extraction module computes the binary object detection mask by applying a suitable threshold value obtained through our proposed threshold training procedure, achieving accurate and complete detection of moving objects. The overall results of these analyses demonstrate that our proposed method attains a substantially higher degree of efficacy, outperforming other state-of-the-art methods by Similarity and F1 accuracy rates of up to 57.17% and 48.48%, respectively.


ACM Symposium on Applied Computing | 2010

Advanced motion detection for intelligent video surveillance systems

Fan-Chieh Cheng; Shih-Chia Huang; Shanq-Jang Ruan

In this paper, we propose a novel background subtraction method that makes use of spectral, spatial, and temporal features extracted from the video sequence to determine the best background candidates for background modeling. As the final step of our process, the binary moving object detection mask is computed using prompt background subtraction with our proposed background model. The overall results demonstrate that our proposed method substantially outperforms existing methods by an F1 accuracy rate increase of up to 79%.


International Symposium on Intelligent Signal Processing and Communication Systems | 2011

An error-correction scheme with Reed-Solomon codec for CAN bus transmission

I-An Chen; Chang-Hsin Cheng; Hong-Yuan Jheng; Chung-Kai Liu; Fan-Chieh Cheng; Shanq-Jang Ruan; Chang Hong Lin

This paper presents an error-correction scheme to enhance the performance of a typical CAN bus. The proposed scheme uses a Reed-Solomon (R-S) codec to calculate the parity for transmission on a typical CAN bus. Compared with prior work based on a Hybrid Automatic Repeat Request (HARQ) scheme for CAN bus transmission, the proposed scheme does not modify the standard CAN protocol but instead inserts an R-S codec unit as an error-correction layer between the Electronic Control Unit (ECU) and the CAN bus. In other words, the focus of our proposed scheme is not on modifying the fixed CRC codec of the standard structure, but on increasing performance with an additional enhancement module. Experimental results show that the proposed scheme reduces the transmission time of a standard CAN bus by almost half on a typical design, while incurring only very minor cost when errors are uncorrectable or absent.
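For reference, a systematic Reed-Solomon encoder over GF(2^8) looks like the sketch below. This is the textbook construction, not the paper's codec unit or its CAN framing; the parity symbols appended here simply play the role of the R-S parity the scheme transmits alongside CAN frames.

```python
# GF(2^8) tables for primitive polynomial 0x11d, the field commonly
# used for byte-oriented R-S codes
GF_EXP = [0] * 512
GF_LOG = [0] * 256
_x = 1
for _i in range(255):
    GF_EXP[_i] = _x
    GF_LOG[_x] = _i
    _x <<= 1
    if _x & 0x100:
        _x ^= 0x11d
for _i in range(255, 512):
    GF_EXP[_i] = GF_EXP[_i - 255]

def gf_mul(a, b):
    if a == 0 or b == 0:
        return 0
    return GF_EXP[GF_LOG[a] + GF_LOG[b]]

def gf_poly_mul(p, q):
    r = [0] * (len(p) + len(q) - 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            r[i + j] ^= gf_mul(pi, qj)
    return r

def rs_generator_poly(nsym):
    # generator polynomial: product of (x - alpha^i) for i = 0..nsym-1
    g = [1]
    for i in range(nsym):
        g = gf_poly_mul(g, [1, GF_EXP[i]])
    return g

def rs_encode(msg, nsym):
    # systematic encoding: append the remainder of msg * x^nsym divided
    # by the generator polynomial as parity symbols
    gen = rs_generator_poly(nsym)
    rem = list(msg) + [0] * nsym
    for i in range(len(msg)):
        coef = rem[i]
        if coef:
            for j in range(1, len(gen)):
                rem[i + j] ^= gf_mul(gen[j], coef)
    return list(msg) + rem[len(msg):]
```

With `nsym` parity symbols the code can correct up to `nsym // 2` symbol errors, which is what lets the receiver repair a corrupted transfer without a retransmission.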


IEEE/OSA Journal of Display Technology | 2016

A Power-Saving Histogram Adjustment Algorithm for OLED-Oriented Contrast Enhancement

Li-Ming Jan; Fan-Chieh Cheng; Chia-Hua Chang; Shanq-Jang Ruan; Chung-An Shen

Display resolution and image quality are actively improving in modern multimedia devices. Although such improvement produces high visual perception for the observer, power consumption becomes an unavoidable problem as it rises progressively. In order to achieve a good balance between visual perception and power consumption, we propose a histogram-based power-saving algorithm to improve image contrast on OLED display panels. The proposed algorithm modifies the empty bins of the image histogram as a pre-process for power reduction. Furthermore, the visual effect is compensated using the power-saving histogram equalization algorithm. Experimental results show that the proposed algorithm not only reduces display power below that of the compared algorithms, but also generates highly perceptual contrast in the images.
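The empty-bin pre-process can be sketched as below, under the simplifying assumption that OLED power grows with pixel intensity. The paper's full algorithm additionally compensates the visual effect, which this toy remap does not.

```python
def collapse_empty_bins(img, levels=256):
    hist = [0] * levels
    for row in img:
        for v in row:
            hist[v] += 1
    # map the i-th non-empty gray level to i, removing the gaps left by
    # empty bins and compacting gray levels toward black
    remap, nxt = {}, 0
    for v in range(levels):
        if hist[v]:
            remap[v] = nxt
            nxt += 1
    return [[remap[v] for v in row] for row in img]

def power(img):
    # toy pixel-level OLED model: emitted power proportional to intensity
    return sum(sum(row) for row in img)
```

Because only empty bins are removed, every pair of distinguishable gray levels stays distinguishable while the emissive power drops.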


International Conference on Image Processing | 2013

Histogram shrinking for power-saving contrast enhancement

Yan-Tsung Peng; Fan-Chieh Cheng; Li-Ming Jan; Shanq-Jang Ruan

In this paper, a power-saving method for emissive displays based on histogram shrinking is proposed. Based on a modern pixel-level power model of an OLED module, the power consumption factor can be incorporated into the objective function. Nevertheless, contrast enhancement intrinsically contradicts power saving. To resolve this conflict, we formulate a new objective function subject to a constant-entropy constraint. By minimizing the distance between neighboring non-empty bins of the image histogram, power reduction and entropy preservation are achieved simultaneously. To further enhance perceptual quality, the proposed method is also integrated with other related algorithms. Experimental results show that the proposed method is capable of reducing display power while also improving contrast enhancement performance.
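A small check of the constant-entropy constraint, using a hypothetical histogram and a simple gap-capping rule standing in for the paper's optimization: Shannon entropy depends only on the bin counts, so moving non-empty bins closer together, without merging them, reduces dynamic range (and hence OLED power) while keeping entropy constant.

```python
import math

def entropy(hist):
    # Shannon entropy of the normalized histogram (bits)
    total = sum(hist)
    return -sum((h / total) * math.log2(h / total) for h in hist if h)

def shrink_gaps(hist, max_gap=2):
    # cap the distance between consecutive non-empty bins at max_gap,
    # anchoring the first non-empty bin at level 0 (darkest)
    out = [0] * len(hist)
    pos, prev = 0, None
    for v, h in enumerate(hist):
        if h:
            if prev is not None:
                pos += min(v - prev, max_gap)
            out[pos] = h
            prev = v
    return out
```

Since `shrink_gaps` only relocates counts, the entropy before and after is identical while the occupied intensity range shrinks.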

Collaboration


Dive into Fan-Chieh Cheng's collaborations.

Top Co-Authors

Shanq-Jang Ruan | National Taiwan University of Science and Technology
Shih-Chia Huang | National Taipei University of Technology
Chang Hong Lin | National Taiwan University of Science and Technology
Yan-Tsung Peng | National Taiwan University of Science and Technology
Hong-Yuan Jheng | National Taiwan University of Science and Technology
Li-Ming Jan | National Taiwan University of Science and Technology
Yu-Wen Tsai | National Taiwan University of Science and Technology
Chang-Hsin Cheng | Industrial Technology Research Institute