Publication


Featured research published by Sei Wang Chen.


Vehicular Technology Conference | 2003

Road-sign detection and tracking

Chiung Yao Fang; Sei Wang Chen; Chiou-Shann Fuh

In a visual driver-assistance system, road-sign detection and tracking is one of the major tasks. This study describes an approach to detecting and tracking road signs appearing in complex traffic scenes. In the detection phase, two neural networks are developed to extract color and shape features of traffic signs from the input scene images. Traffic signs are then located in the images based on the extracted features. This process is primarily conceptualized in terms of fuzzy-set discipline. In the tracking phase, traffic signs located in the previous phase are tracked through image sequences using a Kalman filter. The experimental results demonstrate that the proposed method performs well in both detecting and tracking road signs present in complex scenes and in various weather and illumination conditions.
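The tracking phase described above can be sketched as a constant-velocity Kalman filter on the sign's image-plane position. This is a minimal illustration, not the paper's implementation; the state layout and all noise parameters here are assumed values.

```python
import numpy as np

# Constant-velocity Kalman filter tracking a sign's image position.
# State = [x, y, vx, vy]; we observe (x, y) only. All covariances are
# illustrative choices, not values from the paper.
class KalmanTracker:
    def __init__(self, x, y):
        self.state = np.array([x, y, 0.0, 0.0])
        self.P = np.eye(4) * 10.0                 # state covariance
        self.F = np.eye(4)
        self.F[0, 2] = self.F[1, 3] = 1.0         # position += velocity
        self.H = np.eye(2, 4)                     # measurement picks (x, y)
        self.Q = np.eye(4) * 0.01                 # process noise
        self.R = np.eye(2) * 1.0                  # measurement noise

    def predict(self):
        self.state = self.F @ self.state
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.state[:2]

    def update(self, zx, zy):
        z = np.array([zx, zy])
        y = z - self.H @ self.state               # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)  # Kalman gain
        self.state = self.state + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.state[:2]

# Track a sign drifting right by roughly 2 px per frame.
trk = KalmanTracker(100.0, 50.0)
for t in range(1, 6):
    trk.predict()
    est = trk.update(100.0 + 2.0 * t, 50.0)
```

After a few frames, the filter's estimate settles close to the measured trajectory while smoothing measurement noise.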


International Conference on Networking, Sensing and Control | 2004

Shadow detection and removal for traffic images

Jung-Ming Wang; Yun-Chung Chung; C. L. Chang; Sei Wang Chen

Shadow detection and removal is an important task when dealing with outdoor images. Shadows cast by objects together with the objects form distorted figures. Furthermore, separate objects can be connected through shadows. Both can confuse object recognition systems. In this paper, an effective method is presented for detecting and removing shadows from foreground figures. We assume that foreground figures have been extracted from the input image by some background subtraction method. A figure may contain an object with or without shadow or multiple objects connected by shadows. To begin, we decide whether there are shadows in a given figure. A method based on illumination assessment is developed for this purpose. Once shadows are confirmed to exist in the given figure, their locations and orientations are estimated. A number of points are then sampled from the shadow candidates, from which shadow attributes are computed. We do not remove shadows simply based on the computed attributes. The reason is twofold. First, the distribution of intensity within a shadow is not uniform. Second, shadows can be divided into cast and self shadows; only cast shadows are to be removed. To deal with the first issue, we recover object shapes progressively instead of directly removing shadows. The second issue is resolved based on the observation that self shadows possess denser distributions of texture than cast shadows in our application. A number of experiments have been performed. The results revealed the applicability of the proposed technique.
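The cast-versus-self distinction above rests on texture density. A minimal sketch of that idea, with an assumed gradient-based density and assumed thresholds (the paper's attribute computation is more elaborate):

```python
import numpy as np

# Texture density of a gray-scale patch, approximated as the fraction
# of pixels whose local gradient magnitude exceeds a threshold.
# Both thresholds below are illustrative stand-ins.
def texture_density(patch, grad_thresh=10.0):
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)
    return float((mag > grad_thresh).mean())

def classify_shadow(patch, density_thresh=0.2):
    """Label a shadow patch 'self' (textured) or 'cast' (smooth)."""
    return "self" if texture_density(patch) > density_thresh else "cast"

# A smooth, cast-like patch vs. a noisy, self-like patch.
rng = np.random.default_rng(0)
smooth = np.full((32, 32), 40.0)
textured = 40.0 + rng.normal(0.0, 30.0, (32, 32))
```

Under these assumptions, `classify_shadow(smooth)` labels the flat region a cast shadow, while the textured region is kept as a self shadow.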


Computer Vision and Image Understanding | 2004

An automatic road sign recognition system based on a computational model of human recognition processing

Chiung Yao Fang; Chiou-Shann Fuh; P. S. Yen; Shen Cherng; Sei Wang Chen

This paper presents an automatic road sign detection and recognition system that is based on a computational model of human visual recognition processing. Road signs are typically placed either by the roadside or above roads. They provide important information for guiding, warning, or regulating the behavior of drivers in order to make driving safer and easier. The proposed recognition system is motivated by human recognition processing. The system consists of three major components: sensory, perceptual, and conceptual analyzers. The sensory analyzer extracts the spatial and temporal information of interest from video sequences. The extracted information then serves as the input stimuli to a spatiotemporal attentional (STA) neural network in the perceptual analyzer. If stimulation continues, focuses of attention will be established in the neural network. Potential features of road signs are then extracted from the image areas corresponding to the focuses of attention. The extracted features are next fed into the conceptual analyzer. The conceptual analyzer is composed of two modules: a category module and an object module. The former uses a configurable adaptive resonance theory (CART) neural network to determine the category of the input stimuli, whereas the latter uses a configurable heteroassociative memory (CHAM) neural network to recognize an object in the determined category of objects. The proposed computational model has been used to develop a system for automatically detecting and recognizing road signs from sequences of traffic images. The experimental results revealed both the feasibility of the proposed computational model and the robustness of the developed road sign detection system.


Vehicular Technology Conference | 2004

Video stabilization for a camcorder mounted on a moving vehicle

Yu Ming Liang; Hsiao Rong Tyan; Shyang Lih Chang; Hong-Yuan Mark Liao; Sei Wang Chen

Vision systems play an important role in many intelligent transportation systems (ITS) applications, such as traffic monitoring, traffic law enforcement, driver assistance, and automatic vehicle guidance. These systems, whether installed in outdoor environments or in vehicles, often suffer from image instability. In this paper, a video stabilization technique for a camcorder mounted on a moving vehicle is presented. The proposed approach takes full advantage of the a priori information of traffic images, significantly reducing the computational and time complexities. There are four major steps involved in the proposed approach: global feature extraction, camcorder motion estimation, motion taxonomy, and image compensation. We begin with extracting the global features of lane lines and the road vanishing point from the input image. The extracted features are then combined with those detected in previous images to compute the camcorder motion corresponding to the current input image. The computed motion consists of both expected and unexpected components. They are discriminated and the expected motion component is further smoothed. The resulting motion is next integrated with a predicted motion, which is extrapolated from the previous desired camcorder motions, leading to the desired camcorder motion associated with the input image under consideration. The current input image is finally stabilized based on the computed desired camcorder motion using an image transformation technique. A series of experiments with both real and synthetic data have been conducted. The experimental results have revealed the effectiveness of the proposed technique.


IEEE Transactions on Neural Networks | 2003

Automatic change detection of driving environments in a vision-based driver assistance system

Chiung Yao Fang; Sei Wang Chen; Chiou-Shann Fuh

Detecting critical changes of environments while driving is an important task in driver assistance systems. In this paper, a computational model motivated by human cognitive processing and selective attention is proposed for this purpose. The computational model consists of three major components, referred to as the sensory, perceptual, and conceptual analyzers. The sensory analyzer extracts temporal and spatial information from video sequences. The extracted information serves as the input stimuli to a spatiotemporal attention (STA) neural network embedded in the perceptual analyzer. If consistent stimuli repeatedly innervate the neural network, a focus of attention will be established in the network. The attention pattern associated with the focus, together with the location and direction of motion of the pattern, form what we call a categorical feature. Based on this feature, the class of the attention pattern and, in turn, the change in driving environment corresponding to the class are determined using a configurable adaptive resonance theory (CART) neural network, which is placed in the conceptual analyzer. Various changes in driving environment, both in daytime and at night, have been tested. The experimental results demonstrated the feasibility of both the proposed computational model and the change detection system.


IEEE Conference on Cybernetics and Intelligent Systems | 2004

A non-parametric blur measure based on edge analysis for image processing applications

Yun Chung Chung; Jung Ming Wang; Robert R. Bailey; Sei Wang Chen; Shyang Lih Chang

A nonparametric image blur measure is presented. The measure is based on edge analysis and is suitable for various image processing applications. The proposed measure for any edge point is obtained by combining the standard deviation of the edge gradient magnitude profile and the value of the edge gradient magnitude using a weighted average. The standard deviation describes the width of the edge, and its edge gradient magnitude is also included to make the blur measure more reliable. Moreover, the value of the weight is related to image contrast and can be calculated directly from the image. Experiments on natural scenes indicate that the proposed technique can effectively describe the blurriness of images in image processing applications.
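The measure above combines an edge-width term with an edge-strength term through a weighted average. A minimal sketch of that combination follows; the positional-spread estimate of width, the inverse-strength term, the scale constant, and the weight `w` are all illustrative assumptions (in the paper the weight is tied to image contrast).

```python
import numpy as np

# Sketch of a per-edge-point blur score: a weighted average of the
# spatial spread (width) of the gradient-magnitude profile across the
# edge and a term that decreases with the gradient magnitude at the
# edge point. Sharper edges -> narrow profile, strong peak -> low score.
def blur_measure(profile, w=0.7):
    p = np.asarray(profile, dtype=float)
    pos = np.arange(len(p))
    mean = (pos * p).sum() / p.sum()
    # Standard deviation of position, weighted by gradient magnitude.
    width = np.sqrt(((pos - mean) ** 2 * p).sum() / p.sum())
    strength = p.max()                      # gradient magnitude at edge point
    return w * width + (1 - w) * (100.0 / (strength + 1e-6))

# A sharp edge concentrates its gradient profile; a blurred edge
# spreads it out and weakens the peak.
sharp = [0, 0, 100, 0, 0]
blurred = [10, 30, 40, 30, 10]
```

Under these assumptions, the sharp profile scores markedly lower than the blurred one, matching the intended ordering of the measure.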


Computer Vision and Pattern Recognition | 2003

A road sign recognition system based on dynamic visual model

Chiung Yao Fang; Chiou-Shann Fuh; Sei Wang Chen; P. S. Yen

We propose a computational model motivated by human cognitive processes for detecting changes of driving environments. The model, called dynamic visual model, consists of three major components: sensory, perceptual, and conceptual components. The proposed model is used as the underlying framework in which a system for detecting and recognizing road signs is developed.


IEEE Transactions on Intelligent Transportation Systems | 2009

Critical Motion Detection of Nearby Moving Vehicles in a Vision-Based Driver-Assistance System

Shen Cherng; Chiung Yao Fang; Chia Pei Chen; Sei Wang Chen

Driving always involves risk. Various means have been proposed to reduce the risk. Critical motion detection of nearby moving vehicles is one of the important means of preventing accidents. In this paper, a computational model, which is referred to as the dynamic visual model (DVM), is proposed to detect critical motions of nearby vehicles while driving on a highway. The DVM is motivated by the human visual system and consists of three analyzers: 1) sensory analyzers, 2) perceptual analyzers, and 3) conceptual analyzers. In addition, a memory, which is called the episodic memory, is incorporated, through which a number of features of the system, including hierarchical processing, configurability, adaptive response, and selective attention, are realized. Experimental results with both single and multiple critical motions demonstrate the feasibility of the proposed system.


師大學報:數理與科技類 (Journal of National Taiwan Normal University: Mathematics, Science & Technology) | 2002

A Vision-Based Traffic Light Detection System at Intersections

Yun-Chung Chung; Jung-Ming Wang; Sei Wang Chen

The traffic light detection system is a key component of vision-based traffic law enforcement systems, which handle tasks such as detecting red-light runners, vehicles turning against the traffic light, and vehicles stopping in no-stopping zones. Given the variety of open outdoor environments and device setups, traffic light detection must be robust to weather and illumination conditions and tolerant of various perspective angles. An automatic traffic light detection system for intersections is presented in this paper. It detects traffic lights in traffic videos without any signals from the traffic light controllers, which makes it easy to integrate with other ITS (Intelligent Transportation System) components. Background images are first generated by the system, and illumination parameters are estimated at the same time. The HSI color model is employed, and fuzzy methods together with morphological techniques are utilized to acquire candidate traffic light areas. Using relative spatial and temporal information, the scales, positions, and timing sequences of the traffic lights are obtained. Some results from a preliminary trial are reported, and related research is in progress.
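The color stage described above can be sketched as a hue/intensity conversion followed by a fuzzy membership for "red light" pixels. The hue range and the triangular membership below are assumed values, not the paper's calibrated fuzzy sets, and the morphological cleanup step is omitted.

```python
import numpy as np

# Convert RGB pixels to hue (degrees) and intensity, following the
# piecewise hue formula shared by the HSI/HSV family of color models.
def rgb_to_hue_intensity(rgb):
    r, g, b = [rgb[..., i].astype(float) / 255.0 for i in range(3)]
    mx = np.maximum(np.maximum(r, g), b)
    mn = np.minimum(np.minimum(r, g), b)
    c = mx - mn
    safe_c = np.where(c == 0, 1, c)
    hue = np.zeros_like(mx)
    mask = c > 1e-9
    rm = mask & (mx == r)
    gm = mask & (mx == g) & ~rm
    bm = mask & ~rm & ~gm
    hue[rm] = (60 * ((g - b) / safe_c) % 360)[rm]
    hue[gm] = (60 * ((b - r) / safe_c) + 120)[gm]
    hue[bm] = (60 * ((r - g) / safe_c) + 240)[bm]
    intensity = (r + g + b) / 3.0
    return hue, intensity

def red_light_membership(hue, intensity):
    """Fuzzy membership: bright pixels with hue near 0/360 degrees."""
    d = np.minimum(hue, 360 - hue)            # angular distance from red
    mu_hue = np.clip(1 - d / 30.0, 0, 1)      # triangular set, +/- 30 deg
    return mu_hue * np.clip(intensity * 2, 0, 1)

img = np.zeros((2, 2, 3), dtype=np.uint8)
img[0, 0] = (255, 0, 0)                       # bright red lamp pixel
img[1, 1] = (0, 0, 255)                       # blue background pixel
hue, inten = rgb_to_hue_intensity(img)
mu = red_light_membership(hue, inten)
```

Only the red lamp pixel receives a high membership; thresholding `mu` would yield the candidate light areas that the spatial and temporal analysis then refines.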


Computer Vision and Image Understanding | 1998

Extended attributed string matching for shape recognition

Sei Wang Chen; S. T. Tung; Chiung Yao Fang; Shen Cheng; Anil K. Jain

In this paper, we extend the attributed string matching (ASM) technique, which originally dealt with single objects, to handle scenes containing multiple objects. The emerging issues have uncovered several weaknesses inherent in ASM. We overcome these weaknesses in this study. Major tasks include the introduction of an invariant two-way relaxation process with fuzzy split-and-merge mechanism, a new set of cost functions for edit operators, and the legality costs of edit operations. Three algorithms have been developed, respectively, implementing the original ASM, its modification (MASM) characterized by the proposed new cost functions, and extended ASM (EASM) further incorporating the legality costs of edit operations. These algorithms are then applied to a number of real images. By comparing their performances, we observe that both the new cost functions and the legality costs of edit operations have greatly enlarged the range of the computed similarity values. An augmentation in the separability of similarity values signifies an increment in the discernibility among objects. Experimental results support the applicability of the extended ASM.
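The attributed-string idea can be illustrated with a Levenshtein-style dynamic program whose substitution cost depends on attribute differences between segments. The (length, angle) attributes, the cost functions, and the insert/delete cost below are illustrative assumptions; the paper's new cost functions and legality costs are more elaborate.

```python
# Minimal attributed-string-matching sketch: a shape boundary is a
# string of line segments with (length, angle-in-degrees) attributes,
# and an edit-distance DP charges substitutions by attribute difference.
def sub_cost(a, b):
    dlen = abs(a[0] - b[0]) / max(a[0], b[0])   # relative length difference
    dang = abs(a[1] - b[1]) / 180.0             # normalized angle difference
    return 0.5 * dlen + 0.5 * dang

def asm_distance(s, t, indel=1.0):
    n, m = len(s), len(t)
    d = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        d[i][0] = i * indel
    for j in range(1, m + 1):
        d[0][j] = j * indel
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d[i][j] = min(d[i - 1][j] + indel,                       # delete
                          d[i][j - 1] + indel,                       # insert
                          d[i - 1][j - 1] + sub_cost(s[i - 1], t[j - 1]))
    return d[n][m]

# A square vs. a slightly elongated rectangle: small attributed distance.
square = [(10, 0), (10, 90), (10, 180), (10, 270)]
rect = [(14, 0), (10, 90), (14, 180), (10, 270)]
```

Here `asm_distance(square, rect)` stays small because only segment lengths differ, while matching against an empty string costs the full sequence of deletions; widening the gap between such scores is what the paper's new cost functions and legality costs aim at.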

Collaboration


Dive into Sei Wang Chen's collaborations.

Top Co-Authors

Chiung Yao Fang (National Taiwan Normal University)
Chiou-Shann Fuh (National Taiwan University)
Jung Ming Wang (National Taiwan University)
Kuo-En Chang (National Taiwan Normal University)
An Chun Luo (National Taiwan Normal University)