
Publication


Featured research published by Hsiao-Rong Tyan.


Computer Vision and Image Understanding | 1999

Wavelet-Based Off-Line Handwritten Signature Verification

Peter Shaohua Deng; Hong-Yuan Mark Liao; Chin Wen Ho; Hsiao-Rong Tyan

In this paper, a wavelet-based off-line handwritten signature verification system is proposed. The proposed system can automatically identify useful and common features which consistently exist within different signatures of the same person and, based on these features, verify whether a signature is a forgery or not. The system starts with a closed-contour tracing algorithm. The curvature data of the traced closed contours are decomposed into multiresolutional signals using wavelet transforms. Then the zero-crossings corresponding to the curvature data are extracted as features for matching. Moreover, a statistical measurement is devised to decide systematically which closed contours and their associated frequency data of a writer are most stable and discriminating. Based on these data, the optimal threshold value which controls the accuracy of the feature extraction process is calculated. The proposed approach can be applied to both on-line and off-line signature verification systems. Experimental results show that the average success rates for English signatures and Chinese signatures are 92.57% and 93.68%, respectively.
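
The feature-extraction idea lends itself to a compact illustration. Below is a minimal sketch, assuming NumPy and PyWavelets, of how a closed contour's curvature signal might be wavelet-decomposed and its per-scale zero-crossings collected as features; the synthetic contour, wavelet choice, and decomposition depth are illustrative assumptions, not the authors' configuration.

```python
# A minimal sketch (not the authors' implementation) of the feature-extraction
# idea: decompose a closed contour's curvature signal with a wavelet transform
# and record zero-crossing positions at each scale. Assumes PyWavelets (pywt)
# and NumPy; the contour below is a synthetic placeholder.
import numpy as np
import pywt

def curvature(x, y):
    """Approximate curvature of a closed contour given x/y samples."""
    dx, dy = np.gradient(x), np.gradient(y)
    ddx, ddy = np.gradient(dx), np.gradient(dy)
    return (dx * ddy - dy * ddx) / np.power(dx**2 + dy**2, 1.5)

def zero_crossing_features(signal, wavelet="db4", levels=4):
    """Return zero-crossing indices of each wavelet detail band."""
    coeffs = pywt.wavedec(signal, wavelet, level=levels)
    features = []
    for detail in coeffs[1:]:                      # skip the approximation band
        signs = np.sign(detail)
        crossings = np.where(np.diff(signs) != 0)[0]
        features.append(crossings)
    return features

# Toy example: a synthetic closed contour standing in for a traced signature stroke.
t = np.linspace(0, 2 * np.pi, 256, endpoint=False)
x, y = np.cos(t) + 0.1 * np.cos(5 * t), np.sin(t) + 0.1 * np.sin(7 * t)
feats = zero_crossing_features(curvature(x, y))
print([len(f) for f in feats])   # number of zero-crossings per scale
```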


IEEE Transactions on Multimedia | 2007

Motion Flow-Based Video Retrieval

Chih-Wen Su; Hong-Yuan Mark Liao; Hsiao-Rong Tyan; Chia-Wen Lin; Duan-Yu Chen; Kuo-Chin Fan

In this paper, we propose the use of motion vectors embedded in MPEG bitstreams to generate so-called "motion flows", which are applied to perform video retrieval. By using the motion vectors directly, we do not need to consider the shape of a moving object and its corresponding trajectory. Instead, we simply "link" the local motion vectors across consecutive video frames to form motion flows, which are then recorded and stored in a video database. In the video retrieval phase, we propose a new matching strategy to execute the video retrieval task. Motions that do not belong to the mainstream motion flows are filtered out by our proposed algorithm. The retrieval process can be triggered by query-by-sketch or query-by-example. The experimental results show that our method is effective for video retrieval.
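
As an illustration of the linking step, the sketch below chains per-block motion vectors across consecutive frames into flow trajectories. It assumes the motion-vector fields have already been extracted from the MPEG bitstream and are supplied as NumPy arrays; the block size, greedy linking rule, and minimum flow length are illustrative assumptions, not the paper's parameters.

```python
# A minimal sketch of the "linking" idea: chain per-macroblock motion vectors
# across consecutive frames into motion flows. Real MPEG motion vectors would
# come from the bitstream; here each per-frame field is assumed to be given as
# an array of shape (H_blocks, W_blocks, 2). Names are illustrative.
import numpy as np

def link_motion_flows(mv_fields, block_size=16, min_length=3):
    """Greedy linking of motion vectors across frames into flow trajectories."""
    flows = []
    h, w, _ = mv_fields[0].shape
    for by in range(h):
        for bx in range(w):
            # Start a candidate flow at this block position.
            pos = np.array([bx * block_size, by * block_size], dtype=float)
            traj = [pos.copy()]
            for field in mv_fields:
                gx = int(pos[0] // block_size)
                gy = int(pos[1] // block_size)
                if not (0 <= gy < h and 0 <= gx < w):
                    break                           # flow left the frame
                mv = field[gy, gx]
                if np.allclose(mv, 0):              # no motion: flow ends
                    break
                pos = pos + mv
                traj.append(pos.copy())
            if len(traj) >= min_length:
                flows.append(np.array(traj))
    return flows

# Toy example: 5 frames of a uniform rightward motion field.
fields = [np.tile(np.array([4.0, 0.0]), (9, 11, 1)) for _ in range(5)]
print(len(link_motion_flows(fields)), "flows linked")
```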


international conference on multimedia and expo | 2006

Internet Traffic Classification for Scalable QoS Provision

Junghun Park; Hsiao-Rong Tyan; C.-C.J. Kuo

A new scheme that classifies Internet traffic according to application type for scalable QoS provision is proposed in this work. The traditional port-based classification method does not yield satisfactory performance, since the same port can be shared by multiple applications. Furthermore, asymmetric routing and errors of modern measurement tools such as PCF and NetFlow degrade the classification performance. To address these issues, the proposed classification process consists of two steps: feature selection and classification. Candidate features that can be obtained easily by an ISP are examined. Then, we perform feature reduction so as to balance performance and complexity. For classification, the REPTree and bagging schemes are adopted and compared. It is demonstrated by simulations with real data that the proposed classification scheme outperforms existing techniques.
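
The classification step can be sketched with scikit-learn as a stand-in for Weka's REPTree and bagging schemes; the synthetic flow features, tree depth, and ensemble size below are placeholders, not the paper's feature set or parameters.

```python
# A minimal sketch of the classification step using scikit-learn as a stand-in
# for REPTree + bagging: a BaggingClassifier over small, pruned decision trees.
# The flow features and labels below are synthetic placeholders.
import numpy as np
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Fake flow records: 4 features (e.g. packet-size statistics), 3 application classes.
X = rng.normal(size=(600, 4))
y = rng.integers(0, 3, size=600)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

clf = BaggingClassifier(
    DecisionTreeClassifier(max_depth=5),  # REPTree-like shallow tree
    n_estimators=25,
    random_state=0,
)
clf.fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```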


international conference on multimedia and expo | 2002

A motion-tolerant dissolve detection algorithm

Chih-Wen Su; Hsiao-Rong Tyan; Hong-Yuan Mark Liao; Liang-Hua Chen

Gradual shot change detection is one of the most important research issues in the field of video indexing/retrieval. Among the numerous types of gradual transitions, the dissolve is considered the most common, but also the most difficult to detect. It is well known that an efficient dissolve detection algorithm that can be executed on real video is still lacking. In this paper, we present a novel dissolve detection algorithm that can efficiently detect dissolves of different durations. In addition, global motions caused by camera movement and local motions caused by object movement can be discriminated from a real dissolve by our algorithm. The experimental results demonstrate the effectiveness of the new method.


IEEE Transactions on Multimedia | 2008

Spatiotemporal Motion Analysis for the Detection and Classification of Moving Targets

Duan-Yu Chen; Kevin J. Cannons; Hsiao-Rong Tyan; Sheng-Wen Shih; Hong-Yuan Mark Liao

This paper presents a video surveillance system for a stationary-camera environment that can extract moving targets from a video stream in real time and classify them into predefined categories according to their spatiotemporal properties. Targets are detected by computing the pixel-wise difference between consecutive frames, and then classified with a temporally boosted classifier and "spatiotemporal-oriented energy" analysis. We demonstrate that the proposed classifier can successfully recognize five types of objects: a person, a bicycle, a motorcycle, a vehicle, and a person with an umbrella. In addition, we process targets that do not match any of the AdaBoost-based classifier's categories by using a secondary classification module that categorizes such targets as crowds of individuals or non-crowds. We show that the above classification task can be performed effectively by analyzing a target's spatiotemporal-oriented energies, which provide a rich description of the target's spatial and dynamic features. Our experimental results demonstrate that the proposed system is highly effective in recognizing all predefined object classes.
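
The detection step, pixel-wise differencing of consecutive frames, can be sketched as follows; the fixed threshold and the absence of morphological post-processing are simplifications, not the paper's exact settings.

```python
# A minimal sketch of the detection step the abstract describes: pixel-wise
# differencing of consecutive frames followed by thresholding to obtain a
# foreground mask. Frames are assumed to be grayscale NumPy arrays.
import numpy as np

def moving_target_mask(prev_frame, curr_frame, threshold=25):
    """Return a boolean mask of pixels that changed between two frames."""
    diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
    return diff > threshold

def bounding_box(mask):
    """Bounding box (x0, y0, x1, y1) of the foreground pixels, or None."""
    ys, xs = np.nonzero(mask)
    if len(xs) == 0:
        return None
    return xs.min(), ys.min(), xs.max(), ys.max()

# Toy example: a bright 20x10 block moves 5 pixels to the right.
prev = np.zeros((120, 160), dtype=np.uint8)
curr = np.zeros((120, 160), dtype=np.uint8)
prev[50:70, 40:50] = 200
curr[50:70, 45:55] = 200
print(bounding_box(moving_target_mask(prev, curr)))
```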


international symposium on neural networks | 1993

Camera-based bar code recognition system using neural net

Shu-Jen Liu; Hong-Yuan Liao; Liang-Hua Chen; Hsiao-Rong Tyan; Jun-Wei Hsieh

In this paper, a bar code recognition system using neural networks is proposed. It is well known that in many stores laser bar code readers are adopted at check-out counters. However, there is a major constraint when this tool is used: unlike traditional camera-based imaging, the distance between the laser reader (sensor) and the target object is close to zero when the reader is applied. This may result in inconvenience in store automation, because a human operator has to handle either the sensor or the objects (or both). For the purpose of store automation, the human operator has to be removed from the process, i.e., a robot with visual capability is required to play an important role in such a system. In this paper, we propose a camera-based bar code recognition system using backpropagation neural networks. The ultimate goal of this approach is to use a camera instead of a laser reader so that store automation can be achieved. There are a number of steps involved in the proposed system. The first step the system has to perform is to locate the position and orientation of the bar code in the acquired image. Secondly, the proposed system has to segment the bar code. Finally, we use a trained backpropagation neural network to perform the bar code recognition task. Experiments have been conducted to corroborate the proposed method.
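
The final recognition step can be sketched with a small backpropagation network; here scikit-learn's MLPClassifier stands in for the paper's network, and the bar/space-width encoding and synthetic training data are illustrative assumptions.

```python
# A minimal sketch of the recognition step with a backpropagation network,
# using scikit-learn's MLPClassifier as a stand-in. The input encoding (one
# normalized bar/space-width vector per digit) and the data are illustrative.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)

# Fake training data: a 7-element width pattern per digit class (0-9),
# perturbed with noise to mimic scanning variation.
prototypes = rng.random((10, 7))
X = np.vstack([p + rng.normal(scale=0.02, size=(50, 7)) for p in prototypes])
y = np.repeat(np.arange(10), 50)

net = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=1)
net.fit(X, y)

# Recognize a noisy copy of digit 3's pattern.
probe = prototypes[3] + rng.normal(scale=0.02, size=7)
print("predicted digit:", net.predict(probe.reshape(1, -1))[0])
```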


asian conference on computer vision | 1998

Face Recognition Using a Face-Only Database: A New Approach

Hong-Yuan Mark Liao; Chin-Chuan Han; Gwo-Jong Yu; Hsiao-Rong Tyan; Meng Chang Chen; Liang-Hua Chen

In this paper, a coarse-to-fine, LDA-based face recognition system is proposed. Through careful implementation, we found that the databases adopted by two state-of-the-art face recognition systems [1,2] were incorrect because they mistakenly use some non-face portions for face recognition. Hence, a face-only database is used in the proposed system. Since the facial organs on a human face differ only slightly from person to person, the decision-boundary determination process is tougher in this system than in conventional approaches. Therefore, in order to avoid the above-mentioned ambiguity problem, we propose to retrieve a closest subset of database samples instead of retrieving a single sample. The proposed face recognition system has several advantages. First, the system is able to deal with a very large database and can thus provide a basis for efficient search. Second, due to its design, the system can handle the defocus and noise problems. Third, the system is faster than the autocorrelation plus LDA approach [1] and the PCA plus LDA approach [2], which are believed to be two statistics-based, state-of-the-art face recognition systems. Experimental results show that the proposed method is better than traditional methods in terms of efficiency and accuracy.
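
The retrieve-a-closest-subset idea can be sketched with scikit-learn's LDA on synthetic data; the database, dimensionality, and Euclidean distance measure below are assumptions, not the paper's setup.

```python
# A minimal sketch of the retrieval idea: project face vectors with LDA and
# return the k nearest database samples rather than a single best match.
# The synthetic "face" vectors below stand in for a real face-only database.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(2)

# Fake database: 20 persons, 10 face vectors each, 100-dimensional.
n_persons, n_per_person, dim = 20, 10, 100
means = rng.normal(scale=3.0, size=(n_persons, dim))
X = np.vstack([m + rng.normal(size=(n_per_person, dim)) for m in means])
y = np.repeat(np.arange(n_persons), n_per_person)

lda = LinearDiscriminantAnalysis(n_components=n_persons - 1)
Z = lda.fit_transform(X, y)                      # projected database

def closest_subset(query_vec, k=5):
    """Labels of the k database samples nearest to the query in LDA space."""
    q = lda.transform(query_vec.reshape(1, -1))
    d = np.linalg.norm(Z - q, axis=1)
    return y[np.argsort(d)[:k]]

probe = means[7] + rng.normal(size=dim)          # a new image of person 7
print("labels of closest subset:", closest_subset(probe))
```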


Engineering Applications of Artificial Intelligence | 1995

A bar-code recognition system using backpropagation neural networks

Hong-Yuan Liao; Shu-Jen Liu; Liang-Hua Chen; Hsiao-Rong Tyan

In this paper, a bar-code recognition system using neural networks is proposed. It is well known that in many stores laser bar-code readers are used at check-out counters. However, there is a major constraint when this tool is used. That is, unlike traditional camera-based picturing, the distance between the laser reader (sensor) and the target object is close to zero when the reader is applied. This may result in inconvenience in store automation because the human operator has to manipulate either the sensor or the objects, or both. For the purpose of in-store automation, the human operator needs to be removed from the process, i.e. a robot with visual capability is required to play an important role in such a system. This paper proposes a camera-based bar-code recognition system using backpropagation neural networks. The ultimate goal of this approach is to use a camera instead of a laser reader so that in-store automation can be achieved. There are a number of steps involved in the proposed system. The first step the system has to perform is to locate the position and orientation of the bar code in the acquired image. Secondly, the proposed system has to segment the bar code. Finally, a trained backpropagation neural network is used to perform the bar-code recognition task. Experiments have been conducted to corroborate the efficiency of the proposed method.


international conference on multimedia and expo | 2013

Skyline localization for mountain images

Yao-Ling Hung; Chih-Wen Su; Yuan-Hsiang Chang; Jyh-Chian Chang; Hsiao-Rong Tyan

In this paper, we propose a novel method for automatically locating the skyline that represents the shape of mountains. The appearance of mountains and sky varies with the weather, season, and region. In order to extract the skyline of mountains under such complicated and variable circumstances, a support vector machine (SVM) is applied to predict parts of the skyline between the sky and mountain regions, using color, statistical features, and edge location information. Then, the linking of incomplete skyline fragments is formulated as a shortest-path problem and solved with a dynamic programming strategy. Our experimental results demonstrate that the proposed method is accurate and robust.
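
The linking step can be sketched as a dynamic-programming search for a minimum-cost left-to-right path over a per-pixel cost map (for instance, low cost where the SVM marks likely skyline pixels); the synthetic cost map and the one-row-per-column movement constraint below are simplifications of the paper's shortest-path formulation.

```python
# A minimal sketch of skyline linking by dynamic programming: find the
# minimum-cost left-to-right path across a cost map where likely skyline
# pixels (e.g. from an SVM) have low cost. The cost map here is synthetic.
import numpy as np

def skyline_by_dp(cost):
    """Minimum-cost path across columns, moving at most one row per column."""
    h, w = cost.shape
    acc = cost.copy()
    back = np.zeros((h, w), dtype=int)
    for x in range(1, w):
        for y in range(h):
            lo, hi = max(0, y - 1), min(h, y + 2)
            prev = acc[lo:hi, x - 1]
            k = int(np.argmin(prev))
            acc[y, x] += prev[k]
            back[y, x] = lo + k
    # Backtrack from the cheapest end point in the last column.
    path = np.zeros(w, dtype=int)
    path[-1] = int(np.argmin(acc[:, -1]))
    for x in range(w - 1, 0, -1):
        path[x - 1] = back[path[x], x]
    return path          # path[x] = estimated skyline row in column x

# Toy cost map: cheap pixels along a gently sloping "ridge".
h, w = 60, 80
cost = np.ones((h, w))
true_rows = (20 + 10 * np.sin(np.linspace(0, np.pi, w))).astype(int)
cost[true_rows, np.arange(w)] = 0.0
est = skyline_by_dp(cost)
print("mean row error:", np.abs(est - true_rows).mean())
```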


international conference on multimedia and expo | 2008

Dynamic visual saliency modeling based on spatiotemporal analysis

Duan-Yu Chen; Hsiao-Rong Tyan; Dun-Yu Hsiao; Sheng-Wen Shih; Hong-Yuan Mark Liao

Determining the appropriate extent of visually salient regions in video sequences is a challenging task. In this work, we propose a novel approach for modeling dynamic visual attention based on spatiotemporal analysis. Our model first detects salient points in three-dimensional video volumes, and then uses them as seeds to search the extent of salient regions in a motion attention map. To determine the extent of attended regions, the maximum entropy in the spatial domain is used to analyze the dynamics obtained from spatiotemporal analysis. The experimental results show that the proposed dynamic visual attention model can effectively detect visual saliency through successive video volumes.
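
As a rough illustration of using spatial entropy to choose the extent of an attended region, the sketch below grows a window around a salient seed point and keeps the size whose local histogram entropy is maximal; this is only one simplified reading of the abstract, not the authors' algorithm.

```python
# A heavily simplified sketch: pick the extent of an attended region by
# maximizing the Shannon entropy of the attention-map histogram inside
# windows of growing size around a salient seed point. Illustrative only.
import numpy as np

def shannon_entropy(values, bins=32):
    hist, _ = np.histogram(values, bins=bins)
    p = hist / max(hist.sum(), 1)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def attended_extent(att_map, seed, sizes=(4, 8, 12, 16, 20, 24, 28, 32)):
    """Return the half-width around `seed` whose patch entropy is maximal."""
    y, x = seed
    best_size, best_h = sizes[0], -1.0
    for s in sizes:
        patch = att_map[max(0, y - s):y + s, max(0, x - s):x + s]
        h = shannon_entropy(patch.ravel())
        if h > best_h:
            best_size, best_h = s, h
    return best_size

# Toy attention map: a single bright blob on a flat background.
yy, xx = np.mgrid[0:120, 0:160]
att = np.exp(-(((yy - 60) ** 2 + (xx - 80) ** 2) / (2 * 15.0 ** 2)))
print("chosen half-width:", attended_extent(att, seed=(60, 80)))
```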

Collaboration


Dive into Hsiao-Rong Tyan's collaboration.

Top Co-Authors

Sheng-Wen Shih
National Chi Nan University

Liang-Hua Chen
Fu Jen Catholic University

Hahn-Ming Lee
National Taiwan University of Science and Technology