
Publication


Featured research published by Duan-Yu Chen.


International Symposium on Circuits and Systems | 2006

Real-time event detection and its application to surveillance systems

Hong-Yuan Mark Liao; Duan-Yu Chen; Chih-Wen Su; Hsiao-Rong Tyan

In recent years, real-time detection of events by surveillance systems has attracted a great deal of attention. In this paper, we propose a new video-based surveillance system that can perform real-time event detection. In the background modeling phase, we adopt a mixture-of-Gaussians approach to determine the background. Meanwhile, we use color blob-based tracking to track foreground objects. Due to the self-occlusion problem, the tracking module is designed as a multi-blob tracking process that yields multiple similar trajectories, and we devise an algorithm to merge these trajectories into a single representative one. After applying the Douglas-Peucker algorithm to approximate a trajectory, we can compare two arbitrary trajectories. This mechanism enables real-time event detection when a set of trajectories of interest is pre-stored in the video surveillance system.
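
Below is a minimal sketch, not the authors' implementation, of the two standard building blocks the abstract names: mixture-of-Gaussians background modeling and Douglas-Peucker trajectory approximation, both available in OpenCV. The blob-tracking, trajectory-merging, and event-matching logic is omitted, and all parameter values are illustrative assumptions.

```python
# Sketch: MoG background modeling plus Douglas-Peucker trajectory
# simplification with OpenCV (parameters are illustrative).
import cv2
import numpy as np

# Mixture-of-Gaussians background model (OpenCV's MOG2 variant).
bg_model = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16)

def foreground_centroids(frame, min_area=100):
    """Return centroids of foreground blobs detected in one frame."""
    mask = bg_model.apply(frame)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    centroids = []
    for c in contours:
        if cv2.contourArea(c) > min_area:   # skip small noise blobs
            m = cv2.moments(c)
            centroids.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
    return centroids

def simplify_trajectory(points, epsilon=5.0):
    """Douglas-Peucker approximation of a tracked trajectory."""
    pts = np.array(points, dtype=np.float32).reshape(-1, 1, 2)
    return cv2.approxPolyDP(pts, epsilon, closed=False).reshape(-1, 2)
```

The simplified polylines could then be matched against the pre-stored event trajectories.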


IEEE Transactions on Multimedia | 2007

Motion Flow-Based Video Retrieval

Chih-Wen Su; Hong-Yuan Mark Liao; Hsiao-Rong Tyan; Chia-Wen Lin; Duan-Yu Chen; Kuo-Chin Fan

In this paper, we propose the use of motion vectors embedded in MPEG bitstreams to generate so-called "motion flows", which are applied to perform video retrieval. By using the motion vectors directly, we do not need to consider the shape of a moving object and its corresponding trajectory. Instead, we simply "link" the local motion vectors across consecutive video frames to form motion flows, which are then recorded and stored in a video database. In the video retrieval phase, we propose a new matching strategy to execute the video retrieval task. Motions that do not belong to the mainstream motion flows are filtered out by our proposed algorithm. The retrieval process can be triggered by query-by-sketch or query-by-example. The experimental results show that our method is highly effective for video retrieval.
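
Parsing motion vectors out of an MPEG bitstream needs codec-level access, so the sketch below simply assumes they are already available per frame (the mv_frames structure is a hypothetical input) and illustrates only the "linking" step that chains local vectors across consecutive frames into motion flows.

```python
# Sketch under a stated assumption: mv_frames[t] maps a macroblock's
# (col, row) grid position to its motion vector (dx, dy) in pixels.
def link_motion_flows(mv_frames, block=16):
    """Chain per-macroblock motion vectors across frames into flows."""
    flows = []        # each flow is a list of (col, row) grid positions
    active = {}       # current flow endpoint -> index into flows
    for mvs in mv_frames:
        next_active = {}
        for (x, y), (dx, dy) in mvs.items():
            if (x, y) in active:            # extend an existing flow
                idx = active[(x, y)]
            else:                           # start a new flow here
                idx = len(flows)
                flows.append([(x, y)])
            # follow the vector into the block it points toward
            nx = x + int(round(dx / block))
            ny = y + int(round(dy / block))
            flows[idx].append((nx, ny))
            next_active[(nx, ny)] = idx
        active = next_active
    return flows
```

Short flows and flows far from the mainstream would then be filtered out before matching, as the abstract describes.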


IEEE Transactions on Circuits and Systems for Video Technology | 2010

Non-Orthogonal View Iris Recognition System

Chia-Te Chou; Sheng-Wen Shih; Wen-Shiung Chen; Victor W. Cheng; Duan-Yu Chen

This paper proposes a non-orthogonal view iris recognition system comprising a new iris imaging module, an iris segmentation module, an iris feature extraction module and a classification module. A dual-charge-coupled device camera was developed to capture four-spectral (red, green, blue, and near-infrared) iris images which contain useful information for simplifying the iris segmentation task. An intelligent random sample consensus iris segmentation method is proposed to robustly detect iris boundaries in a four-spectral iris image. In order to match iris images acquired at different off-axis angles, we propose a circle rectification method to reduce the off-axis iris distortion. The rectification parameters are estimated using the detected elliptical pupillary boundary. Furthermore, we propose a novel iris descriptor which characterizes an iris pattern with multiscale step/ridge edge-type maps. The edge-type maps are extracted with the derivative of Gaussian and the Laplacian of Gaussian filters. The iris pattern classification is accomplished by edge-type matching which can be understood intuitively with the concept of classifier ensembles. Experimental results show that the equal error rate of our approach is only 0.04% when recognizing iris images acquired at different off-axis angles within ±30°.
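
The multiscale step/ridge edge-type maps can be approximated with standard SciPy filters; a minimal sketch follows, assuming the iris texture is already segmented and unwrapped into a rectangular image (the scales below are illustrative, not the paper's).

```python
# Sketch: multiscale edge-type maps from derivative-of-Gaussian
# (step edges) and Laplacian-of-Gaussian (ridge edges) responses.
import numpy as np
from scipy.ndimage import gaussian_filter, gaussian_laplace

def edge_type_maps(iris, sigmas=(1.0, 2.0, 4.0)):
    """Return per-scale polarity maps of DoG and LoG responses."""
    img = iris.astype(float)
    maps = []
    for s in sigmas:
        # first derivative of Gaussian along the horizontal axis
        step = gaussian_filter(img, sigma=s, order=(0, 1))
        ridge = gaussian_laplace(img, sigma=s)
        maps.append((np.sign(step), np.sign(ridge)))
    return maps
```

Two irises can then be compared by the agreement of their binary edge-type maps across scales, in the spirit of the edge-type matching described above.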


International Conference on Multimedia and Expo | 2007

Human Action Recognition Using 2-D Spatio-Temporal Templates

Duan-Yu Chen; Sheng-Wen Shih; Hong-Yuan Mark Liao

A framework for human action modeling and recognition in continuous action sequences is proposed. A star figure enclosed by a bounding convex polygon is used to effectively represent the extremities of the silhouette of a human body. Human actions are thus recorded as sequences of the star figure's parameters, which are then used for action modeling. To model human actions in a compact manner while characterizing their spatio-temporal patterns, the star figure parameters are represented by a 2-D feature map, which is regarded as a spatio-temporal template. Experiments to evaluate the performance of the proposed framework show that it can recognize human actions in an efficient and effective manner.
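
A minimal sketch of the star-figure idea: sample the centroid-to-contour distance of each silhouette at fixed angles, then stack the per-frame signatures into a 2-D map (angle by time). The angular resolution and normalization here are illustrative assumptions.

```python
# Sketch: per-frame star-figure signature, stacked into a 2-D
# spatio-temporal template (rows = angles, columns = frames).
import cv2
import numpy as np

def star_signature(silhouette, n_angles=36):
    """Centroid-to-contour distance sampled in n_angles directions."""
    contours, _ = cv2.findContours(silhouette, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_NONE)
    pts = max(contours, key=cv2.contourArea).reshape(-1, 2).astype(float)
    cx, cy = pts.mean(axis=0)
    ang = np.arctan2(pts[:, 1] - cy, pts[:, 0] - cx)
    dist = np.hypot(pts[:, 0] - cx, pts[:, 1] - cy)
    bins = ((ang + np.pi) / (2 * np.pi) * n_angles).astype(int) % n_angles
    sig = np.zeros(n_angles)
    np.maximum.at(sig, bins, dist)      # farthest point per direction
    return sig / (sig.max() + 1e-9)     # scale normalization

def spatiotemporal_template(silhouettes):
    return np.stack([star_signature(s) for s in silhouettes], axis=1)
```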


IEEE Transactions on Circuits and Systems for Video Technology | 2014

Visual Depth Guided Color Image Rain Streaks Removal Using Sparse Coding

Duan-Yu Chen; Chien-Cheng Chen; Li-Wei Kang

Rain removal from a single color image is a challenging problem, as no temporal information among successive images can be obtained. In this paper, we propose a single-color-image-based rain removal framework by properly formulating rain removal as an image decomposition problem based on sparse representation. In our framework, an input color image is first decomposed into a low-frequency part and a high-frequency part by using the guided image filter, so that the rain streaks fall in the high-frequency part along with nonrain textures/edges; the high-frequency part is then decomposed into a rain component and a nonrain component by performing dictionary learning and sparse coding. To separate rain streaks from the high-frequency part, a hybrid feature set, including histogram of oriented gradients, depth of field, and Eigen color, is employed to further decompose the high-frequency part. With the hybrid feature set applied, most rain streaks can be removed while the nonrain component is simultaneously enhanced. To the best of our knowledge, compared with the state-of-the-art approaches, the proposed method is among the first to focus on the problem of single-color-image rain removal, and it achieves promising results: the rain component is removed more completely, and the visual quality of the restored images is improved.
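
A rough sketch of the decomposition pipeline using off-the-shelf pieces: a guided filter for the low/high-frequency split (cv2.ximgproc.guidedFilter, which requires opencv-contrib) and scikit-learn's MiniBatchDictionaryLearning for the dictionary-learning and sparse-coding stage. The hybrid HOG / depth-of-field / Eigen-color features that separate rain atoms from nonrain atoms are omitted, and all parameters and the input filename are assumptions.

```python
# Sketch: guided-filter frequency split, then dictionary learning and
# sparse coding on high-frequency patches (rain-atom classification via
# the paper's hybrid feature set is omitted).
import cv2
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.feature_extraction.image import extract_patches_2d

img = cv2.imread("rainy.png").astype(np.float32) / 255.0  # hypothetical input
low = cv2.ximgproc.guidedFilter(guide=img, src=img, radius=8, eps=1e-2)
high = img - low                       # rain streaks live in this part

gray_high = cv2.cvtColor(high, cv2.COLOR_BGR2GRAY)
patches = extract_patches_2d(gray_high, (8, 8), max_patches=20000)
X = patches.reshape(len(patches), -1)

dico = MiniBatchDictionaryLearning(n_components=256, alpha=1.0,
                                   transform_algorithm="omp",
                                   transform_n_nonzero_coefs=5)
codes = dico.fit(X).transform(X)       # sparse codes over learned atoms
# Next (not shown): label atoms as rain / nonrain using the hybrid
# features, zero the rain atoms, and reconstruct the nonrain layer.
```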


IEEE Transactions on Multimedia | 2008

Spatiotemporal Motion Analysis for the Detection and Classification of Moving Targets

Duan-Yu Chen; Kevin J. Cannons; Hsiao-Rong Tyan; Sheng-Wen Shih; Hong-Yuan Mark Liao

This paper presents a video surveillance system for the environment of a stationary camera that can extract moving targets from a video stream in real time and classify them into predefined categories according to their spatiotemporal properties. Targets are detected by computing the pixel-wise difference between consecutive frames, and then classified with a temporally boosted classifier and "spatiotemporal-oriented energy" analysis. We demonstrate that the proposed classifier can successfully recognize five types of objects: a person, a bicycle, a motorcycle, a vehicle, and a person with an umbrella. In addition, we process targets that do not match any of the AdaBoost-based classifier's categories by using a secondary classification module that categorizes such targets as crowds of individuals or non-crowds. We show that the above classification task can be performed effectively by analyzing a target's spatiotemporal-oriented energies, which provide a rich description of the target's spatial and dynamic features. Our experimental results demonstrate that the proposed system is extremely effective in recognizing all predefined object classes.
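
The detection stage (pixel-wise frame differencing) and the boosted classification stage can be sketched with OpenCV and scikit-learn. The spatiotemporal-oriented-energy features themselves require 3-D oriented filtering and are left as a precomputed input here; the feature and label arrays are hypothetical placeholders.

```python
# Sketch: frame-difference target detection plus an AdaBoost classifier
# over per-target feature vectors (the paper's oriented-energy features
# are assumed to be precomputed).
import cv2
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

def detect_targets(prev_gray, cur_gray, thresh=25, min_area=100):
    """Bounding boxes of moving targets via pixel-wise differencing."""
    diff = cv2.absdiff(cur_gray, prev_gray)
    _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
    mask = cv2.dilate(mask, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours
            if cv2.contourArea(c) > min_area]

# The five predefined classes from the paper.
CLASSES = ["person", "bicycle", "motorcycle", "vehicle",
           "person with umbrella"]

clf = AdaBoostClassifier(n_estimators=100)
# X_train: per-target feature vectors; y_train: indices into CLASSES.
# Both are assumed precomputed:  clf.fit(X_train, y_train)
```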


International Symposium on Multimedia | 2006

Continuous Human Action Segmentation and Recognition Using a Spatio-Temporal Probabilistic Framework

Duan-Yu Chen; Hong-Yuan Mark Liao; Sheng-Wen Shih

In this paper, a framework for automatic human action segmentation and recognition in continuous action sequences is proposed. A star-like figure is proposed to effectively represent the extremities of the silhouette of a human body. Human actions are thus recorded as sequences of the star-like figure parameters, which are used for action modeling. To model human actions in a compact manner while characterizing their spatio-temporal distributions, the star-like figure parameters are represented by Gaussian mixture models (GMM). In addition, to address the intrinsic nature of temporal variations in a continuous action sequence, we transform the time sequence of star-like figure parameters into the frequency domain by the discrete cosine transform (DCT) and use only the first few coefficients to represent different temporal patterns with significant discriminating power. The results show that the proposed framework can recognize continuous human actions in an efficient way.
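
A minimal sketch of the GMM step with scikit-learn, assuming the per-frame star-like figure parameter vectors are already extracted; the number of mixture components is an illustrative choice.

```python
# Sketch: one Gaussian mixture model per action class over its
# star-like figure parameter vectors; classify by log-likelihood.
from sklearn.mixture import GaussianMixture

def fit_action_gmm(param_vectors, n_components=4):
    """param_vectors: array of shape (n_frames, n_params)."""
    return GaussianMixture(n_components=n_components).fit(param_vectors)

def classify_sequence(seq, gmms):
    """Pick the action whose GMM gives the highest average log-likelihood."""
    return max(gmms, key=lambda name: gmms[name].score(seq))
```

Usage would look like gmms = {"walk": fit_action_gmm(walk_params), ...} followed by classify_sequence(new_seq, gmms).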


International Conference on Multimedia and Expo | 2008

Dynamic visual saliency modeling based on spatiotemporal analysis

Duan-Yu Chen; Hsiao-Rong Tyan; Dun-Yu Hsiao; Sheng-Wen Shih; Hong-Yuan Mark Liao

Determining the appropriate extent of visually salient regions in video sequences is a challenging task. In this work, we propose a novel approach for modeling dynamic visual attention based on spatiotemporal analysis. Our model first detects salient points in three-dimensional video volumes, and then uses them as seeds to search the extent of salient regions in a motion attention map. To determine the extent of attended regions, the maximum entropy in the spatial domain is used to analyze the dynamics obtained from spatiotemporal analysis. The experimental results show that the proposed dynamic visual attention model can effectively detect visual saliency through successive video volumes.
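
A rough sketch of the pipeline's shape, purely for illustration: a motion attention map from temporal gradients, seed points from its local maxima, and a simple global threshold standing in for the maximum-entropy extent analysis.

```python
# Sketch (illustrative stand-in, not the paper's model): motion
# attention map, seed points, and thresholded salient extent.
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter, label

def salient_regions(volume, n_seeds=5):
    """volume: (T, H, W) grayscale video clip as a NumPy array."""
    # motion attention map: mean absolute temporal gradient
    motion = np.abs(np.diff(volume.astype(float), axis=0)).mean(axis=0)
    attention = gaussian_filter(motion, sigma=3)
    # seed points: strongest local maxima of the attention map
    peaks = attention == maximum_filter(attention, size=15)
    ys, xs = np.nonzero(peaks)
    order = np.argsort(attention[ys, xs])[::-1][:n_seeds]
    seeds = list(zip(ys[order], xs[order]))
    # attended extent: components above a global threshold (a crude
    # stand-in for the maximum-entropy analysis in the paper)
    regions, n_regions = label(attention > np.percentile(attention, 95))
    return seeds, regions, n_regions
```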


International Journal of Semantic Computing | 2007

Atomic Human Action Segmentation and Recognition Using a Spatio-Temporal Probabilistic Framework

Duan-Yu Chen; Hong-Yuan Mark Liao; Sheng-Wen Shih

In this paper, a framework for automatic human action segmentation and recognition in continuous action sequences is proposed. A star figure enclosed by a bounding convex polygon is used to effectively represent the extremities of the silhouette of a human body. Human actions are thus recorded as sequences of the star figure's parameters, which are used for action modeling. To model human actions in a compact manner while characterizing their spatio-temporal distributions, the star figure parameters are represented by Gaussian mixture models (GMM). In addition, to address the intrinsic nature of temporal variations in a continuous action sequence, we transform the time sequence of star-like figure parameters into the frequency domain by the discrete cosine transform (DCT) and use only the first few coefficients to represent different temporal patterns with significant discriminating power. The results show that the proposed framework can recognize continuous human actions in an efficient way.
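
The temporal-modeling step reduces each parameter's time series to its leading DCT coefficients; a minimal sketch with SciPy, where the number of retained coefficients is an assumption.

```python
# Sketch: compress each star-figure parameter's time series with a
# DCT, keeping the first k coefficients as the temporal descriptor.
from scipy.fft import dct

def temporal_descriptor(param_seq, k=8):
    """param_seq: array (n_frames, n_params) -> (k * n_params,) vector."""
    coeffs = dct(param_seq, axis=0, norm="ortho")  # DCT along time
    return coeffs[:k].ravel()                      # low-frequency terms
```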


Journal of Visual Communication and Image Representation | 2005

Robust video sequence retrieval using a novel object-based T2D-histogram descriptor

Duan-Yu Chen; Suh-Yin Lee; Hong-Yuan Mark Liao

Due to the tremendous growth in the number of digital videos, the development of video retrieval algorithms that can perform efficient and effective retrieval tasks is indispensable. In this paper, we propose a high-level motion activity descriptor, the object-based transformed 2D-histogram (T2D-histogram), which exploits both spatial and temporal features to characterize video sequences in a semantics-based manner. The discrete cosine transform (DCT) is applied to convert the object-based 2D-histogram sequences from the time domain to the frequency domain. Using this transform, the original high-dimensional time-domain features used to represent successive frames are significantly reduced to a set of low-dimensional features in the frequency domain. The energy-concentration property of the DCT allows us to use only a few DCT coefficients to effectively capture the variations of moving objects. With this efficient video representation scheme, video retrieval can be performed in an accurate and efficient way.
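
A minimal sketch of the descriptor's shape: one 2-D histogram of object positions per frame, then a DCT along the time axis with only a few coefficients retained, exploiting the energy-compaction property described above. Bin counts and the number of coefficients are illustrative assumptions.

```python
# Sketch: object-based 2-D histogram sequence, compressed along time
# by a DCT into a few low-frequency coefficients per bin.
import numpy as np
from scipy.fft import dct

def t2d_histogram(object_tracks, n_bins=8, k=4):
    """object_tracks: per-frame lists of (x, y) positions in [0, 1]."""
    hists = []
    for positions in object_tracks:
        xy = np.asarray(positions, dtype=float).reshape(-1, 2)
        h, _, _ = np.histogram2d(xy[:, 0], xy[:, 1], bins=n_bins,
                                 range=[[0, 1], [0, 1]])
        hists.append(h)
    seq = np.stack(hists)                    # shape (T, n_bins, n_bins)
    coeffs = dct(seq, axis=0, norm="ortho")  # DCT along the time axis
    return coeffs[:k].ravel()                # compact descriptor
```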

Collaboration


Duan-Yu Chen's top co-authors:

Sheng-Wen Shih (National Chi Nan University)
Hsiao-Rong Tyan (Chung Yuan Christian University)
Chia-Te Chou (National Chi Nan University)
Chia-Wen Lin (National Tsing Hua University)
Suh-Yin Lee (National Chiao Tung University)
Wen-Shiung Chen (National Chi Nan University)