Network


Latest external collaboration at the country level.

Hotspot


Dive into the research topics where Chiung Yao Fang is active.

Publication


Featured research published by Chiung Yao Fang.


Vehicular Technology Conference | 2003

Road-sign detection and tracking

Chiung Yao Fang; Sei Wang Chen; Chiou-Shann Fuh

In a visual driver-assistance system, road-sign detection and tracking is one of the major tasks. This study describes an approach to detecting and tracking road signs appearing in complex traffic scenes. In the detection phase, two neural networks are developed to extract color and shape features of traffic signs from the input scene images. Traffic signs are then located in the images based on the extracted features. This process is primarily conceptualized in terms of fuzzy-set discipline. In the tracking phase, traffic signs located in the previous phase are tracked through image sequences using a Kalman filter. The experimental results demonstrate that the proposed method performs well in both detecting and tracking road signs present in complex scenes and in various weather and illumination conditions.
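The tracking phase described above relies on a Kalman filter over the sign's image-plane position. A minimal sketch of such a tracker, assuming a constant-velocity motion model and illustrative noise parameters (the paper does not specify its exact state model; the class name and defaults here are hypothetical):

```python
import numpy as np

# Minimal constant-velocity Kalman filter for tracking a sign's
# image-plane position (x, y). Illustrative only: the state model,
# noise levels, and class name are assumptions, not the paper's.
class KalmanTracker2D:
    def __init__(self, x0, y0, dt=1.0, q=1e-2, r=1.0):
        self.x = np.array([x0, y0, 0.0, 0.0])            # state: [x, y, vx, vy]
        self.P = np.eye(4)                               # state covariance
        self.F = np.eye(4)
        self.F[0, 2] = self.F[1, 3] = dt                 # constant-velocity model
        self.H = np.zeros((2, 4))
        self.H[0, 0] = self.H[1, 1] = 1.0                # we observe position only
        self.Q = q * np.eye(4)                           # process noise
        self.R = r * np.eye(2)                           # measurement noise

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]

    def update(self, zx, zy):
        z = np.array([zx, zy])
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)         # Kalman gain
        self.x = self.x + K @ (z - self.H @ self.x)
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x[:2]
```

Each frame, the detector's sign centroid would be passed to `update` after a `predict` step, letting the filter smooth detections and bridge short detection gaps.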


Computer Vision and Image Understanding | 2004

An automatic road sign recognition system based on a computational model of human recognition processing

Chiung Yao Fang; Chiou-Shann Fuh; P. S. Yen; Shen Cherng; Sei Wang Chen

This paper presents an automatic road sign detection and recognition system that is based on a computational model of human visual recognition processing. Road signs are typically placed either by the roadside or above roads. They provide important information for guiding, warning, or regulating the behavior of drivers in order to make driving safer and easier. The proposed recognition system is motivated by human recognition processing. The system consists of three major components: sensory, perceptual, and conceptual analyzers. The sensory analyzer extracts the spatial and temporal information of interest from video sequences. The extracted information then serves as the input stimuli to a spatiotemporal attentional (STA) neural network in the perceptual analyzer. If stimulation continues, focuses of attention will be established in the neural network. Potential features of road signs are then extracted from the image areas corresponding to the focuses of attention. The extracted features are next fed into the conceptual analyzer. The conceptual analyzer is composed of two modules: a category module and an object module. The former uses a configurable adaptive resonance theory (CART) neural network to determine the category of the input stimuli, whereas the latter uses a configurable heteroassociative memory (CHAM) neural network to recognize an object in the determined category of objects. The proposed computational model has been used to develop a system for automatically detecting and recognizing road signs from sequences of traffic images. The experimental results revealed both the feasibility of the proposed computational model and the robustness of the developed road sign detection system.


IEEE Transactions on Neural Networks | 2003

Automatic change detection of driving environments in a vision-based driver assistance system

Chiung Yao Fang; Sei Wang Chen; Chiou-Shann Fuh

Detecting critical changes of environments while driving is an important task in driver assistance systems. In this paper, a computational model motivated by human cognitive processing and selective attention is proposed for this purpose. The computational model consists of three major components, referred to as the sensory, perceptual, and conceptual analyzers. The sensory analyzer extracts temporal and spatial information from video sequences. The extracted information serves as the input stimuli to a spatiotemporal attention (STA) neural network embedded in the perceptual analyzer. If consistent stimuli repeatedly innervate the neural network, a focus of attention will be established in the network. The attention pattern associated with the focus, together with the location and direction of motion of the pattern, forms what we call a categorical feature. Based on this feature, the class of the attention pattern and, in turn, the change in driving environment corresponding to the class are determined using a configurable adaptive resonance theory (CART) neural network, which is placed in the conceptual analyzer. Various changes in driving environment, both in daytime and at night, have been tested. The experimental results demonstrated the feasibility of both the proposed computational model and the change detection system.


Computer Vision and Pattern Recognition | 2003

A road sign recognition system based on dynamic visual model

Chiung Yao Fang; Chiou-Shann Fuh; Sei Wang Chen; P. S. Yen

We propose a computational model motivated by human cognitive processes for detecting changes of driving environments. The model, called dynamic visual model, consists of three major components: sensory, perceptual, and conceptual components. The proposed model is used as the underlying framework in which a system for detecting and recognizing road signs is developed.


IEEE Transactions on Intelligent Transportation Systems | 2009

Critical Motion Detection of Nearby Moving Vehicles in a Vision-Based Driver-Assistance System

Shen Cherng; Chiung Yao Fang; Chia Pei Chen; Sei Wang Chen

Driving always involves risk. Various means have been proposed to reduce the risk. Critical motion detection of nearby moving vehicles is one of the important means of preventing accidents. In this paper, a computational model, which is referred to as the dynamic visual model (DVM), is proposed to detect critical motions of nearby vehicles while driving on a highway. The DVM is motivated by the human visual system and consists of three analyzers: 1) sensory, 2) perceptual, and 3) conceptual. In addition, a memory, which is called the episodic memory, is incorporated, through which a number of features of the system, including hierarchical processing, configurability, adaptive response, and selective attention, are realized. A series of experimental results with both single and multiple critical motions is presented, demonstrating the feasibility of the proposed system.


Computer Vision and Image Understanding | 1998

Extended attributed string matching for shape recognition

Sei Wang Chen; S. T. Tung; Chiung Yao Fang; Shen Cheng; Anil K. Jain

In this paper, we extend the attributed string matching (ASM) technique, which originally dealt with single objects, to handle scenes containing multiple objects. The emerging issues have uncovered several weaknesses inherent in ASM. We overcome these weaknesses in this study. Major tasks include the introduction of an invariant two-way relaxation process with a fuzzy split-and-merge mechanism, a new set of cost functions for edit operators, and the legality costs of edit operations. Three algorithms have been developed, respectively implementing the original ASM, its modification (MASM) characterized by the proposed new cost functions, and the extended ASM (EASM) further incorporating the legality costs of edit operations. These algorithms are then applied to a number of real images. By comparing their performances, we observe that both the new cost functions and the legality costs of edit operations have greatly enlarged the range of the computed similarity values. An augmentation in the separability of similarity values signifies an increment in the discernibility among objects. Experimental results support the applicability of the extended ASM.
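At its core, attributed string matching computes an edit distance between two sequences of boundary attributes. A minimal baseline sketch of that dynamic program, assuming a caller-supplied substitution-cost function (the paper's new cost functions and legality costs are not reproduced here, and the function name is illustrative):

```python
def attributed_string_distance(a, b, sub_cost, ins_cost=1.0, del_cost=1.0):
    """Edit distance between two attribute sequences via dynamic programming.

    a, b: sequences of attribute values (e.g. boundary-segment lengths or
    angles); sub_cost(x, y): cost of substituting attribute x by y.
    Baseline ASM only; EASM's legality costs are out of scope here.
    """
    m, n = len(a), len(b)
    d = [[0.0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        d[i][0] = d[i - 1][0] + del_cost                 # delete all of a[:i]
    for j in range(1, n + 1):
        d[0][j] = d[0][j - 1] + ins_cost                 # insert all of b[:j]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            d[i][j] = min(
                d[i - 1][j] + del_cost,                  # delete a[i-1]
                d[i][j - 1] + ins_cost,                  # insert b[j-1]
                d[i - 1][j - 1] + sub_cost(a[i - 1], b[j - 1]),  # substitute
            )
    return d[m][n]
```

A smaller distance means the two shapes' boundary descriptions are more similar; the paper's contribution is in widening the spread of these similarity values across different objects.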


Journal of Systems and Software | 2013

An improved DCT-based perturbation scheme for high capacity data hiding in H.264/AVC intra frames

Tseng Jung Lin; Kuo-Liang Chung; Po-Chun Chang; Yong Huai Huang; Hong-Yuan Mark Liao; Chiung Yao Fang

Recently, Ma et al. proposed an efficient error propagation-free discrete cosine transform-based (DCT-based) data hiding algorithm that embeds data in H.264/AVC intra frames. In their algorithm, only 46% of the 4x4 luma blocks can be used to embed hidden bits. In this paper, we propose an improved error propagation-free DCT-based perturbation scheme that fully exploits the remaining 54% of luma blocks and thereby doubles the data hiding capacity of Ma et al.'s algorithm. Further, in order to preserve the visual quality and increase the embedding capacity of the embedded video sequences, a new set of sifted 4x4 luma blocks is considered in the proposed DCT-based perturbation scheme. The results of experiments on twenty-six test video sequences confirm the embedding capacity superiority of the proposed improved algorithm while maintaining a similar human visual effect in terms of the SSIM (structural similarity) index.
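The general idea behind DCT-coefficient perturbation schemes like this can be sketched generically: nudge a quantized coefficient by at most one so its parity encodes a hidden bit. This toy sketch is not the paper's H.264/AVC scheme (it models none of the error-propagation-free constraints or block sifting); the function names are illustrative:

```python
def embed_bits(coeffs, bits):
    """Embed bits into quantized DCT coefficients by parity perturbation.

    coeffs: nonzero quantized AC coefficients of a block. Each carrier
    coefficient is moved by at most 1 so that its magnitude's parity
    equals the hidden bit. Generic illustration only; the paper's
    error-propagation-free H.264 constraints are not modeled.
    """
    out = list(coeffs)
    for k, bit in enumerate(bits):
        c = out[k]
        if (abs(c) & 1) != bit:                  # parity mismatch: perturb by one
            out[k] = c + 1 if c > 0 else c - 1   # push away from zero
    return out

def extract_bits(coeffs, n):
    """Recover n hidden bits from the coefficients' magnitude parities."""
    return [abs(c) & 1 for c in coeffs[:n]]
```

Because each carrier moves by at most one quantization step, the distortion stays small, which is why such schemes can preserve SSIM while adding capacity.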


Scandinavian Conference on Image Analysis | 2007

FPGA implementation of kNN classifier based on wavelet transform and partial distance search

Yao-Jung Yeh; Hui-Ya Li; Wen Jyi Hwang; Chiung Yao Fang

A novel algorithm for field programmable gate array (FPGA) realization of a kNN classifier is presented in this paper. The algorithm identifies the k closest vectors in the design set of a kNN classifier for each input vector by performing the partial distance search (PDS) in the wavelet domain. It employs subspace search, bitplane reduction and multiple-coefficient accumulation techniques for the effective reduction of the area complexity and computation latency. The proposed implementation has been embedded in a softcore CPU for physical performance measurement. Experimental results show that the implementation provides a cost-effective solution to the FPGA realization of kNN classification systems where both high throughput and low area cost are desired.
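The partial distance search at the heart of this design can be sketched in software: accumulate the squared distance one coordinate at a time and abandon a candidate as soon as the partial sum exceeds the current k-th best. A minimal sketch of that early-termination idea (the wavelet-domain subspace and bitplane techniques are omitted, and the function name is illustrative):

```python
import heapq

def knn_pds(design_set, x, k):
    """kNN search with partial distance search (PDS) early termination.

    Keeps a max-heap of the k best (negated) squared distances found so
    far; a candidate is dropped mid-accumulation once its partial sum
    reaches the current k-th best distance.
    """
    heap = []                                        # max-heap of (-dist, index)
    for idx, v in enumerate(design_set):
        bound = -heap[0][0] if len(heap) == k else float("inf")
        dist = 0.0
        for a, b in zip(v, x):
            dist += (a - b) ** 2
            if dist >= bound:                        # partial distance too large
                dist = None                          # -> abandon this candidate
                break
        if dist is not None:
            if len(heap) < k:
                heapq.heappush(heap, (-dist, idx))
            else:
                heapq.heappushpop(heap, (-dist, idx))
    return sorted((-d, i) for d, i in heap)          # [(dist, index)] ascending
```

PDS never changes the answer, only the work done per candidate, which is what makes it attractive for a low-area, low-latency hardware realization.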


IEEE Transactions on Signal Processing | 1997

Neural-fuzzy classification for segmentation of remotely sensed images

Sei Wang Chen; Chi-Farn Chen; Meng-Seng Chen; Shen Cheng; Chiung Yao Fang; Kuo-En Chang

An unsupervised classification technique conceptualized in terms of neural and fuzzy disciplines for the segmentation of remotely sensed images is presented. The process consists of three major steps: 1) pattern transformation; 2) neural classification; 3) fuzzy grouping. In the first step, the multispectral patterns of image pixels are transformed into what we call coarse patterns. In the second step, a delicate classification of pixels is attained by applying an ART neural classifier to the transformed pixel patterns. Since the resultant clusters of pixels are usually too fine to be of practical significance, in the third step, a fuzzy clustering algorithm is invoked to integrate pixel clusters. A function for measuring clustering validity is defined with which the optimal number of classes can be automatically determined by the clustering algorithm. The proposed technique is applied to both synthetic and real images. High classification rates have been achieved for synthetic images. The results on real images are also satisfactory, since their spectral variances are even smaller than those of the synthetic images examined.
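The neural classification step can be illustrated with a much-simplified ART-style leader clustering of binary patterns: each pattern joins the first prototype it matches above a vigilance threshold, otherwise it seeds a new cluster. This is a stand-in sketch, not the paper's ART classifier, and the fuzzy grouping of step 3 is not reproduced (function name and vigilance value are hypothetical):

```python
import numpy as np

def art_like_clustering(patterns, vigilance=0.7):
    """Simplified ART-style leader clustering of binary pixel patterns.

    A pattern joins the first prototype whose match score (shared bits
    over the pattern's set bits) meets the vigilance threshold; the
    winning prototype is updated by fast-learning intersection.
    Otherwise the pattern seeds a new cluster.
    """
    prototypes, labels = [], []
    for p in patterns:
        p = np.asarray(p, dtype=bool)
        for k, proto in enumerate(prototypes):
            match = np.logical_and(p, proto).sum() / max(p.sum(), 1)
            if match >= vigilance:
                prototypes[k] = np.logical_and(p, proto)  # fast-learn update
                labels.append(k)
                break
        else:
            prototypes.append(p)                          # resonance failed:
            labels.append(len(prototypes) - 1)            # commit a new cluster
    return labels
```

The vigilance parameter controls how fine the clustering is; the paper's point is that a high-vigilance ART pass over-segments, which the subsequent fuzzy grouping step then repairs.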


Vehicular Technology Conference | 2010

Real-Time Vision-Based Driver Drowsiness/Fatigue Detection System

K. P. Yao; W. H. Lin; Chiung Yao Fang; Jung Ming Wang; Shyang-Lih Chang; Sei Wang Chen

In this paper, a vision system for monitoring driver vigilance is presented. The level of vigilance is determined by integrating a number of facial parameters. In order to estimate these parameters, the facial features of eyes, mouth and head are first located in the input video sequence. The located facial features are then tracked over the subsequent images. Facial parameters are estimated during facial feature tracking. The estimated parametric values are collected and analyzed every fixed time interval to provide a real-time vigilance level of the driver. A series of experiments on real sequences demonstrates the feasibility of the proposed system.
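The "collect and analyze every fixed time interval" step can be sketched with a PERCLOS-style sliding-window measure over one of the facial parameters, eye openness. This is a simplified illustration, not the paper's multi-parameter integration; the class name, window size, and thresholds are all hypothetical:

```python
from collections import deque

class VigilanceMonitor:
    """Sliding-window drowsiness estimate from per-frame eye openness.

    A simplified PERCLOS-style measure: the fraction of recent frames in
    which the eyes are judged closed. The paper fuses several facial
    parameters (eyes, mouth, head); the thresholds here are invented.
    """
    def __init__(self, window=90, closed_thresh=0.2, drowsy_frac=0.4):
        self.frames = deque(maxlen=window)       # recent closed/open flags
        self.closed_thresh = closed_thresh       # openness below this = closed
        self.drowsy_frac = drowsy_frac           # closure fraction => drowsy

    def update(self, eye_openness):
        self.frames.append(eye_openness < self.closed_thresh)
        closure = sum(self.frames) / len(self.frames)
        return "drowsy" if closure >= self.drowsy_frac else "alert"
```

Each video frame contributes one openness estimate, and the window makes the vigilance level a smoothed, real-time quantity rather than a per-frame decision.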

Collaboration


Dive into Chiung Yao Fang's collaborations.

Top Co-Authors

Sei Wang Chen, National Taiwan Normal University
Chiou-Shann Fuh, National Taiwan University
Jung Ming Wang, National Taiwan University
An Chun Luo, National Taiwan Normal University
Chiao Shan Lo, National Taiwan Normal University
Chung Wen Ma, National Taiwan Normal University
Kuo-En Chang, National Taiwan Normal University
P. S. Yen, National Taiwan Normal University
Shen Cheng, National Taiwan Normal University