Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Shih-Chia Huang is active.

Publication


Featured research published by Shih-Chia Huang.


IEEE Transactions on Image Processing | 2013

Efficient Contrast Enhancement Using Adaptive Gamma Correction With Weighting Distribution

Shih-Chia Huang; Fan-Chieh Cheng; Yi-Sheng Chiu

This paper proposes an efficient method to modify histograms and enhance contrast in digital images. Enhancement plays a significant role in digital image processing, computer vision, and pattern recognition. We present an automatic transformation technique that improves the brightness of dimmed images via gamma correction and the probability distribution of luminance pixels. To enhance video, the proposed image-enhancement method uses temporal information regarding the differences between consecutive frames to reduce computational complexity. Experimental results demonstrate that the proposed method produces enhanced images of comparable or higher quality than those produced using previous state-of-the-art methods.
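The core idea, adapting the gamma exponent per intensity level from a weighted cumulative distribution of luminance, can be sketched as follows. This is a minimal illustration of that idea rather than the paper's exact algorithm; the weighting exponent alpha and the use of a single-channel 8-bit image are assumptions.

```python
import numpy as np

def adaptive_gamma_enhance(gray, alpha=0.5):
    """Sketch of CDF-weighted adaptive gamma correction on an 8-bit grayscale image.

    The per-intensity gamma follows a weighted cumulative distribution of
    luminance, so dark images receive stronger brightening.  Illustrative only;
    the weighting exponent `alpha` is an assumed parameter.
    """
    hist = np.bincount(gray.ravel(), minlength=256).astype(np.float64)
    pdf = hist / hist.sum()

    # Weight the probability density to temper extreme adjustments.
    pdf_w = pdf.max() * (pdf / pdf.max()) ** alpha
    cdf_w = np.cumsum(pdf_w) / pdf_w.sum()

    # Intensity transform: l_out = l_max * (l / l_max) ** (1 - cdf_w(l))
    levels = np.arange(256) / 255.0
    lut = np.round(255.0 * levels ** (1.0 - cdf_w)).astype(np.uint8)
    return lut[gray]
```

For video, the lookup table above would only need to be recomputed when the difference between consecutive frames is large, which is one way to read the temporal shortcut mentioned in the abstract.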


IEEE Transactions on Circuits and Systems for Video Technology | 2014

Visibility Restoration of Single Hazy Images Captured in Real-World Weather Conditions

Shih-Chia Huang; Bo-Hao Chen; Wei-Jheng Wang

The visibility of outdoor images captured in inclement weather is often degraded due to the presence of haze, fog, sandstorms, and so on. Poor visibility caused by atmospheric phenomena in turn causes failure in computer vision applications, such as outdoor object recognition systems, obstacle detection systems, video surveillance systems, and intelligent transportation systems. In order to solve this problem, visibility restoration (VR) techniques have been developed and play an important role in many computer vision applications that operate in various weather conditions. However, removing haze from a single image with a complex structure and color distortion is a difficult task for VR techniques. This paper proposes a novel VR method that uses a combination of three major modules: 1) a depth estimation (DE) module; 2) a color analysis (CA) module; and 3) a VR module. The proposed DE module takes advantage of the median filter technique and adopts our adaptive gamma correction technique. By doing so, halo effects can be avoided in images with complex structures, and effective transmission map estimation can be achieved. The proposed CA module is based on the gray world assumption and analyzes the color characteristics of the input hazy image. Subsequently, the VR module uses the adjusted transmission map and the color-correlated information to repair the color distortion in variable scenes captured during inclement weather conditions. The experimental results demonstrate that our proposed method provides superior haze removal in comparison with the previous state-of-the-art method through qualitative and quantitative evaluations of different scenes captured during various weather conditions.
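Visibility restoration of this kind ultimately inverts the standard atmospheric scattering model I(x) = J(x)t(x) + A(1 - t(x)). The sketch below shows only that inversion step, given a transmission map and an airlight estimate; the placeholder estimates stand in for the paper's depth-estimation and color-analysis modules, and the clamping threshold t_min is a common heuristic rather than a value from the paper.

```python
import numpy as np

def restore_scene(hazy, transmission, atmosphere, t_min=0.1):
    """Invert the atmospheric scattering model I = J*t + A*(1 - t).

    `hazy` is an HxWx3 float image in [0, 1], `transmission` an HxW map,
    `atmosphere` a length-3 airlight estimate.  The clamp `t_min` avoids
    division by near-zero transmission (assumed heuristic).
    """
    t = np.clip(transmission, t_min, 1.0)[..., None]
    restored = (hazy - atmosphere) / t + atmosphere
    return np.clip(restored, 0.0, 1.0)

# Hypothetical usage with placeholder estimates of transmission and airlight:
# dehazed = restore_scene(hazy_img, t_map, np.array([0.90, 0.90, 0.92]))
```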


Engineering Applications of Artificial Intelligence | 2013

Image contrast enhancement for preserving mean brightness without losing image features

Shih-Chia Huang; Chien-Hui Yeh

Histogram equalization is a well-known and effective technique for improving the contrast of images. However, the traditional histogram equalization (HE) method usually results in extreme contrast enhancement, which causes an unnatural look and visual artifacts in the processed image. In this paper, we propose a novel histogram equalization method that is composed of an automatic histogram separation module and an intensity transformation module. First, the proposed histogram separation module combines the proposed prompt multiple thresholding procedure with an optimum peak signal-to-noise ratio (PSNR) calculation to separate the histogram at a fine level of detail. As the final step of the proposed process, the intensity transformation module enhances the image with complete brightness preservation for each generated sub-histogram. Experimental results show that the proposed method not only retains the shape features of the original histogram but also enhances the contrast effectively.
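The brightness-preservation idea, equalizing each sub-histogram only within its own intensity range, can be illustrated with a simplified two-segment split at the mean intensity. This single mean-based split is an assumption for illustration; the paper selects multiple thresholds with a PSNR criterion.

```python
import numpy as np

def split_histogram_equalize(gray):
    """Equalize the sub-histograms below and above the mean intensity separately.

    Mapping each segment back onto its own range limits the brightness shift of
    plain histogram equalization.  Simplified two-segment sketch, not the
    multi-threshold method described above.
    """
    mean = int(round(gray.mean()))
    lut = np.arange(256, dtype=np.float64)

    for lo, hi in ((0, mean), (mean + 1, 255)):
        if hi <= lo:
            continue
        hist = np.bincount(gray.ravel(), minlength=256)[lo:hi + 1].astype(np.float64)
        if hist.sum() == 0:
            continue
        cdf = np.cumsum(hist) / hist.sum()
        lut[lo:hi + 1] = lo + cdf * (hi - lo)   # map each segment onto its own range

    return lut.astype(np.uint8)[gray]
```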


IEEE Transactions on Broadcasting | 2011

Illumination-Sensitive Background Modeling Approach for Accurate Moving Object Detection

Fan-Chieh Cheng; Shih-Chia Huang; Shanq-Jang Ruan

Background subtraction involves generating a background model from the video sequence in order to detect foreground objects for many computer vision applications, including traffic security, human-machine interaction, object recognition, and so on. In general, many background subtraction approaches cannot update the current status of the background image in scenes with sudden illumination change. This is especially true in regard to motion detection when a light is suddenly switched on or off. This paper proposes an illumination-sensitive background modeling approach to analyze the illumination change and detect moving objects. For sudden illumination change, an illumination evaluation is used to determine two background candidates: a light background image and a dark background image. Based on the background model and the illumination evaluation, the binary mask of moving objects can be generated by the proposed thresholding function. Experimental results demonstrate the effectiveness of the proposed approach in providing a promising detection outcome at low computational cost.
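A minimal sketch of the candidate-selection idea follows: keep a light and a dark background candidate and pick whichever matches the current frame's global brightness before differencing. The two threshold values and the mean-brightness comparison are illustrative assumptions, not the paper's evaluation function.

```python
import numpy as np

def detect_foreground(frame, bg_light, bg_dark, diff_thresh=30, illum_thresh=25):
    """Pick the background candidate closest in brightness to the frame, then
    threshold the absolute difference to obtain a binary foreground mask.

    All inputs are 8-bit grayscale arrays of the same shape; the thresholds are
    illustrative defaults rather than values from the paper.
    """
    frame = frame.astype(np.int16)
    mean_f = frame.mean()

    # Choose the candidate whose mean brightness is closer to the current frame.
    use_light = abs(mean_f - bg_light.mean()) < abs(mean_f - bg_dark.mean())
    background = (bg_light if use_light else bg_dark).astype(np.int16)

    # A large jump in mean brightness can signal a sudden light switch.
    sudden_change = abs(mean_f - background.mean()) > illum_thresh

    mask = np.abs(frame - background) > diff_thresh
    return mask.astype(np.uint8) * 255, sudden_change
```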


IEEE Transactions on Neural Networks and Learning Systems | 2013

Highly Accurate Moving Object Detection in Variable Bit Rate Video-Based Traffic Monitoring Systems

Shih-Chia Huang; Bo-Hao Chen

Automated motion detection, which segments moving objects from video streams, is the key technology of intelligent transportation systems for traffic management. Traffic surveillance systems use video communication over real-world networks with limited bandwidth, which frequently suffers from either network congestion or unstable bandwidth. Evidence supporting these problems abounds in publications about wireless video communication. Thus, to effectively perform the arduous task of motion detection over a network with unstable bandwidth, a process by which the bit rate is allocated to match the available network bandwidth is needed. This process is accomplished by the rate control scheme. This paper presents a new motion detection approach based on a cerebellar model articulation controller (CMAC) artificial neural network to completely and accurately detect moving objects in both high and low bit-rate video streams. The proposed approach consists of a probabilistic background generation (PBG) module and a moving object detection (MOD) module. To ensure that the properties of variable bit-rate video streams are accommodated, the proposed PBG module effectively produces a probabilistic background model through an unsupervised learning process over variable bit-rate video streams. Next, the MOD module, which is based on the CMAC network, completely and accurately detects moving objects in both low and high bit-rate video streams by implementing two procedures: 1) a block selection procedure and 2) an object detection procedure. The detection results show that our proposed approach is capable of performing with higher efficacy when compared with the results produced by other state-of-the-art approaches on variable bit-rate video streams over real-world limited-bandwidth networks. Both qualitative and quantitative evaluations support this claim; for instance, the proposed approach achieves Similarity and F1 accuracy rates that are 76.40% and 84.37% higher than those of existing approaches, respectively.
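The CMAC-based classifier itself is specialized, but the general pattern behind probabilistic background generation, maintaining a per-pixel statistical background and flagging pixels that deviate from it, can be sketched with a simple running Gaussian model. This is a generic stand-in for illustration, not the paper's PBG/MOD pipeline; the learning rate and deviation multiplier are assumptions.

```python
import numpy as np

class RunningGaussianBackground:
    """Per-pixel running Gaussian background model (a simple stand-in for the
    probabilistic background generation stage described above; the CMAC-based
    classifier of the paper is not reproduced here)."""

    def __init__(self, first_frame, lr=0.02, k=2.5):
        self.mean = first_frame.astype(np.float64)
        self.var = np.full(first_frame.shape, 15.0 ** 2)
        self.lr, self.k = lr, k     # learning rate and deviation multiplier (assumed)

    def apply(self, frame):
        frame = frame.astype(np.float64)
        dist = frame - self.mean
        foreground = dist ** 2 > (self.k ** 2) * self.var

        # Update the model only where the scene is judged to be background.
        bg = ~foreground
        self.mean[bg] += self.lr * dist[bg]
        self.var[bg] += self.lr * (dist[bg] ** 2 - self.var[bg])
        return foreground.astype(np.uint8) * 255
```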


IEEE Transactions on Systems, Man, and Cybernetics | 2011

Scene Analysis for Object Detection in Advanced Surveillance Systems Using Laplacian Distribution Model

Fan-Chieh Cheng; Shih-Chia Huang; Shanq-Jang Ruan

In this paper, we propose a novel background subtraction approach in order to accurately detect moving objects. Our method involves three important proposed modules: a block alarm module, a background modeling module, and an object extraction module. The block alarm module efficiently checks each block for the presence of either a moving object or background information. This is accomplished by using temporal differencing pixels of the Laplacian distribution model and allows the subsequent background modeling module to process only those blocks that were found to contain background pixels. Next, the background modeling module is employed in order to generate a high-quality adaptive background model using a unique two-stage training procedure and a novel mechanism for recognizing changes in illumination. As the final step of our process, the proposed object extraction module computes the binary object detection mask by applying a suitable threshold value. This is accomplished by using our proposed threshold training procedure. The performance of our proposed method was analyzed by quantitative and qualitative evaluation. The overall results show that our proposed method attains a substantially higher degree of efficacy, outperforming other state-of-the-art methods by Similarity and F1 accuracy rates of up to 35.50% and 26.09%, respectively.
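The block alarm step rests on modeling temporal pixel differences with a Laplacian distribution. The sketch below fits the Laplacian scale from the frame difference (its maximum-likelihood estimator is the mean absolute difference) and flags blocks whose differences are improbable under that model; the block size and tail-probability cut-off are illustrative assumptions.

```python
import numpy as np

def block_alarm(prev_frame, frame, block=16, p_cut=0.05):
    """Flag blocks whose temporal difference is unlikely under a Laplacian model.

    The Laplacian scale is estimated as the mean absolute frame difference.
    Block size and the tail-probability cut-off are illustrative choices,
    not parameters from the paper.
    """
    diff = np.abs(frame.astype(np.float64) - prev_frame.astype(np.float64))
    b = max(diff.mean(), 1e-6)                      # Laplacian scale estimate
    tail_prob = np.exp(-diff / b)                   # P(|D| >= d) for Laplace(0, b)

    h, w = diff.shape
    alarms = np.zeros((h // block, w // block), dtype=bool)
    for i in range(alarms.shape[0]):
        for j in range(alarms.shape[1]):
            patch = tail_prob[i * block:(i + 1) * block, j * block:(j + 1) * block]
            alarms[i, j] = patch.mean() < p_cut     # mostly improbable pixels: moving block
    return alarms
```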


IEEE Transactions on Industrial Electronics | 2014

Automatic Moving Object Extraction Through a Real-World Variable-Bandwidth Network for Traffic Monitoring Systems

Shih-Chia Huang; Bo-Hao Chen

Automated motion detection has become an increasingly important subject in traffic surveillance systems. Video communication in traffic surveillance systems may experience network congestion or unstable bandwidth over real-world networks with limited bandwidth, which is harmful to motion detection in video streams of variable bit rate. In this paper, we propose a unique Fisher's linear discriminant-based radial basis function network motion detection approach for accurate and complete detection of moving objects in video streams of both high and low bit rates. The proposed approach is accomplished through a combination of two stages: adaptive pattern generation (APG) and moving object extraction (MOE). For the APG stage, the variable-bit-rate video stream properties are accommodated by the proposed approach, which subsequently distinguishes the moving objects within the regions belonging to the moving object class by using two devised procedures during the MOE stage. Qualitative and quantitative detection accuracy evaluations show that the proposed approach exhibits superior efficacy when compared to previous methods. For example, accuracy rates produced by the F1 and Similarity metrics for the proposed approach were, respectively, up to 92.23% and 88.24% higher than those produced for other previous methods.
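The APG/MOE pipeline is not reproduced here; as a small illustration of the radial basis function classification step it builds on, the following sketches an RBF network forward pass that scores a feature vector against learned prototypes. The prototype centers, widths, and output weights are placeholders that would normally be learned, not values from the paper.

```python
import numpy as np

def rbf_score(x, centers, widths, weights):
    """Forward pass of a radial basis function network: Gaussian activations
    around prototype centers, combined by a linear output layer.

    `x` is a feature vector, `centers` an (n, d) array of prototypes,
    `widths` and `weights` length-n arrays.  All parameters are placeholders
    for illustration only.
    """
    dists = np.linalg.norm(centers - x, axis=1)
    activations = np.exp(-(dists ** 2) / (2.0 * widths ** 2))
    return float(activations @ weights)

# Hypothetical usage: classify a block as moving if its score exceeds 0.5.
# moving = rbf_score(block_features, centers, widths, weights) > 0.5
```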


Engineering Applications of Artificial Intelligence | 2012

Motion detection with pyramid structure of background model for intelligent surveillance systems

Shih-Chia Huang; Fan-Chieh Cheng

This paper proposes a pyramidal background matching structure for motion detection. The proposed method utilizes spectral, spatial, and temporal features to generate a pyramidal structure of the background model. After performing background subtraction based on the proposed background model, the moving targets can be accurately detected in each frame of the video sequence. In order to achieve high accuracy in motion detection, the proposed method also includes a noise filter based on a Bezier curve to smooth noise pixels, after which the binary motion mask can be computed by the proposed threshold function. Experimental results demonstrate that the proposed method substantially outperforms existing methods in perceptual evaluation.
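The Bezier-based smoothing step can be illustrated generically: a Bezier curve through a sequence of control values yields a smooth approximation of a noisy 1-D signal, such as a pixel's intensity history. The sketch below evaluates such a curve with De Casteljau's algorithm; it is a generic illustration of the idea, not the paper's exact noise filter.

```python
import numpy as np

def bezier_curve(control_values, n_samples=100):
    """Evaluate a Bezier curve through scalar control values using
    De Casteljau's repeated linear interpolation.

    Sampling the curve gives a smoothed version of the noisy sequence used as
    control values; a generic sketch, not the paper's filter.
    """
    pts = np.asarray(control_values, dtype=np.float64)
    out = np.empty(n_samples)
    for k, t in enumerate(np.linspace(0.0, 1.0, n_samples)):
        p = pts.copy()
        while len(p) > 1:                       # repeated linear interpolation
            p = (1.0 - t) * p[:-1] + t * p[1:]
        out[k] = p[0]
    return out
```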


IEEE International Conference on Systems, Man, and Cybernetics | 2011

Efficient contrast enhancement using adaptive gamma correction and cumulative intensity distribution

Yi-Sheng Chiu; Fan-Chieh Cheng; Shih-Chia Huang

This paper proposes an efficient histogram modification method for contrast enhancement, which plays a significant role in digital image processing, computer vision, and pattern recognition. We present an automatic transformation technique to improve the brightness of dimmed images based on gamma correction and the probability distribution of luminance pixels. Experimental results show that the proposed method produces enhanced images of comparable or higher quality than previous state-of-the-art methods.


IEEE Transactions on Broadcasting | 2008

Optimization of Hybridized Error Concealment for H.264

Shih-Chia Huang; Sy-Yen Kuo

Transmission of highly compressed video bitstreams can result in packet erasures when channel status is unfavorable, the consequence being not only the corruption of a single frame but also propagation to its successors. In order to prevent error-catalyzed artifacts from producing visible corruption of affected video frames, the use of error concealment (EC) at the video decoder becomes essential. This paper proposes an efficient and integrated novel EC method for the latest video compression standard, H.264/AVC, using not only spatially and temporally correlated information but also the tandem utilization of two new coding tools: directional spatial prediction for intra coding and variable block size motion compensation of H.264/AVC. Experiments performed using the proposed hybridization method, which combines the above spatial and temporal estimation elements, confirmed the effectiveness of the overall scheme. The experimental results show that the proposed method offers excellent gains of up to 10.62 dB compared to that of the Joint Model (JM) decoder for a wide range of benchmark sequences without any considerable increase in time demand.
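One common building block behind temporal error concealment is boundary matching: candidate motion vectors gathered from neighboring macroblocks are tried, and the one whose motion-compensated block best matches the decoded pixels surrounding the lost block is kept. The sketch below illustrates only that selection step; it is not the paper's hybrid spatial/temporal scheme, and the candidate set, block size, and use of only the top and bottom borders are assumptions.

```python
import numpy as np

def conceal_block(ref, cur, x, y, candidates, bs=16):
    """Pick the candidate motion vector whose reference block best matches the
    pixels bordering the lost block, then copy that block in.

    `ref` and `cur` are grayscale frames, (x, y) the top-left corner of the
    lost block, `candidates` a list of (dx, dy) motion vectors from neighboring
    macroblocks (assumed to keep the block inside the frame).
    """
    top = cur[y - 1, x:x + bs].astype(np.float64)
    bottom = cur[y + bs, x:x + bs].astype(np.float64)

    best_cost, best_mv = None, (0, 0)
    for dx, dy in candidates:
        cand = ref[y + dy:y + dy + bs, x + dx:x + dx + bs].astype(np.float64)
        cost = np.abs(cand[0] - top).sum() + np.abs(cand[-1] - bottom).sum()
        if best_cost is None or cost < best_cost:
            best_cost, best_mv = cost, (dx, dy)

    dx, dy = best_mv
    cur[y:y + bs, x:x + bs] = ref[y + dy:y + dy + bs, x + dx:x + dx + bs]
    return best_mv
```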

Collaboration


Dive into Shih-Chia Huang's collaborations.

Top Co-Authors

Bo-Hao Chen
National Taipei University of Technology

Sy-Yen Kuo
National Taiwan University

Fan-Chieh Cheng
National Taiwan University of Science and Technology

Tan-Hsu Tan
National Taipei University of Technology

Munkhjargal Gochoo
National Taipei University of Technology

Patrick C. K. Hung
University of Ontario Institute of Technology

Po-Hsiung Lin
National Taipei University of Technology

Shanq-Jang Ruan
National Taiwan University of Science and Technology

Sheng-Kai Chou
National Taipei University of Technology

Shing-Hong Liu
Chaoyang University of Technology