Publication


Featured research published by Liang-Hua Chen.


Pattern Recognition | 2000

Fast face detection via morphology-based pre-processing

Chin-Chuan Han; Hong-Yuan Mark Liao; Gwo-Jong Yu; Liang-Hua Chen

An efficient face detection algorithm that can detect multiple faces oriented in any direction in a cluttered environment is proposed. A morphology-based technique is first devised to perform eye-analogue segmentation. Next, the located eye-analogue segments are used as guides to search for potential face regions. Each of these potential face images is then normalized to a standard size and fed into a trained backpropagation neural network for identification. In this detection system, the morphology-based eye-analogue segmentation process reduces the background portion of a cluttered image by up to 95%. This significantly speeds up the subsequent face detection procedure, because only 5–10% of the original image remains for further processing. Experiments demonstrate that a success rate of approximately 94% is reached and that the false detection rate is very low.
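The eye-analogue segmentation step lends itself to a compact illustration with standard grayscale morphology. The sketch below (a rough approximation using OpenCV's black top-hat operator; the kernel size, Otsu thresholding, and area limits are illustrative assumptions rather than the paper's parameters) highlights small dark blobs that could serve as eye candidates, which is what allows most of the cluttered background to be discarded before the neural network ever runs.

```python
import cv2
import numpy as np

def eye_analogue_candidates(image_path, kernel_size=7, min_area=20, max_area=400):
    """Rough sketch of morphology-based eye-analogue segmentation.

    The kernel size and area limits are illustrative guesses, not the
    parameters used in the original paper.
    """
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)

    # A black top-hat emphasizes small dark structures such as eyes and
    # eyebrows against the brighter surrounding face region.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (kernel_size, kernel_size))
    blackhat = cv2.morphologyEx(gray, cv2.MORPH_BLACKHAT, kernel)

    # Keep only strong responses, then extract connected components.
    _, mask = cv2.threshold(blackhat, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(mask)

    # Small, compact components are kept as eye-analogue candidates; the
    # rest of the cluttered background is discarded at this early stage.
    return [tuple(centroids[i]) for i in range(1, n)
            if min_area <= stats[i, cv2.CC_STAT_AREA] <= max_area]
```

Pairs of such candidates would then be grouped into hypothesized face regions and passed to the trained classifier, as the abstract describes.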


Image and Vision Computing | 2003

Mean quantization based image watermarking

Liang-Hua Chen; Jyh-Jiun Lin

In this paper, a mean quantization based watermarking technique for the copyright protection of still digital images is proposed. Watermark embedding is performed in the wavelet transform domain by encoding each watermark bit into a set of wavelet coefficients. Watermark extraction does not require the original image. The proposed technique also integrates human visual system characteristics and a complementary hiding strategy to achieve the highest possible robustness without degrading image quality. Experimental results show that the proposed watermarking scheme is robust to a wide range of image distortions and is superior to the conventional quantization based technique.
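The core embedding idea, quantizing the mean of a coefficient group so that the parity of its quantization cell encodes one watermark bit, can be sketched with PyWavelets. The code below is a minimal, assumption-laden illustration: the step size, group size, and use of the horizontal detail band are arbitrary choices, and the perceptual weighting and complementary hiding strategy of the paper are omitted.

```python
import numpy as np
import pywt

DELTA = 8.0  # quantization step; an illustrative value, not from the paper

def embed_bits(image, bits, group_size=4, wavelet="haar"):
    """Hide one bit in the mean of each group of detail-band wavelet coefficients."""
    cA, (cH, cV, cD) = pywt.dwt2(image.astype(float), wavelet)
    coeffs = cH.ravel()
    for i, bit in enumerate(bits):
        group = coeffs[i * group_size:(i + 1) * group_size]
        m = group.mean()
        q = int(np.floor(m / DELTA))
        if q % 2 != bit:              # force the parity of the quantization cell
            q += 1
        target = (q + 0.5) * DELTA    # centre of the chosen cell
        group += target - m           # shift the group so its mean lands there
    return pywt.idwt2((cA, (coeffs.reshape(cH.shape), cV, cD)), wavelet)

def extract_bits(image, n_bits, group_size=4, wavelet="haar"):
    """Blind extraction: only the step size and grouping need to be known."""
    _, (cH, _, _) = pywt.dwt2(image.astype(float), wavelet)
    coeffs = cH.ravel()
    return [int(np.floor(coeffs[i * group_size:(i + 1) * group_size].mean() / DELTA)) % 2
            for i in range(n_bits)]
```

Because each embedded mean sits at the centre of its quantization cell, it can drift by up to DELTA/2 under distortion before the extracted parity flips, which is where the robustness of quantization-based schemes comes from.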


Pattern Recognition | 2008

Movie scene segmentation using background information

Liang-Hua Chen; Yu-Chun Lai; Hong-Yuan Mark Liao

Scene extraction is the first step toward semantic understanding of a video. It also provides improved browsing and retrieval facilities to users of video databases. This paper presents an effective approach to movie scene extraction based on the analysis of background images. Our approach exploits the fact that shots belonging to one particular scene often have similar backgrounds. Although part of the video frame is covered by foreground objects, the background scene can still be reconstructed by a mosaic technique. The proposed scene extraction algorithm consists of two main components: determination of a shot similarity measure and a shot grouping process. In our approach, several low-level visual features are integrated to compute the similarity measure between two shots, while the rules of film-making are used to guide the shot grouping process. Experimental results show that our approach is promising and outperforms some existing techniques.
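As a very rough illustration of the two components (shot similarity plus rule-guided grouping), the sketch below groups temporally adjacent shots whose background colour histograms overlap strongly. A single histogram-intersection feature stands in for the several low-level features the paper integrates, and the fixed threshold and look-back window are only stand-ins for the film-making rules.

```python
import numpy as np

def shot_similarity(hist_a, hist_b):
    """Histogram intersection between two (L1-normalized) background histograms."""
    return np.minimum(hist_a, hist_b).sum()

def group_shots(shot_histograms, threshold=0.6, lookback=3):
    """Greedy temporal grouping: a shot joins the current scene if it is
    similar enough to any of the last few shots already in that scene.

    threshold and lookback are illustrative values, not the paper's rules.
    """
    scenes = [[0]]
    for i in range(1, len(shot_histograms)):
        recent = scenes[-1][-lookback:]
        if any(shot_similarity(shot_histograms[i], shot_histograms[j]) >= threshold
               for j in recent):
            scenes[-1].append(i)
        else:
            scenes.append([i])   # similarity dropped, so a new scene starts
    return scenes
```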


International Conference on Multimedia and Expo | 2002

A motion-tolerant dissolve detection algorithm

Chih-Wen Su; Hsiao-Rong Tyan; Hong-Yuan Mark Liao; Liang-Hua Chen

Gradual shot change detection is one of the most important research issues in the field of video indexing and retrieval. Among the numerous types of gradual transitions, the dissolve is considered the most common, but also the most difficult to detect. It is well known that an efficient dissolve detection algorithm that works on real video is still lacking. In this paper, we present a novel dissolve detection algorithm that can efficiently detect dissolves of different durations. In addition, global motion caused by camera movement and local motion caused by object movement can be discriminated from a real dissolve by our algorithm. The experimental results show that the new method is indeed powerful.
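For context, a classic dissolve-detection baseline looks for the parabolic dip that the per-frame intensity variance traces during an ideal dissolve. The sketch below implements only that baseline, not the motion-tolerant algorithm of this paper, and its window length and thresholds are arbitrary.

```python
import numpy as np

def dissolve_candidates(frame_variances, window=20, min_dip=0.15, max_residual=0.05):
    """Flag sliding windows whose variance curve fits a downward dip well."""
    v = np.asarray(frame_variances, dtype=float)
    hits = []
    t = np.arange(window)
    for start in range(len(v) - window):
        seg = v[start:start + window]
        a, b, c = np.polyfit(t, seg, 2)                  # quadratic fit to the variance curve
        fit = np.polyval((a, b, c), t)
        residual = np.mean((seg - fit) ** 2) / (seg.mean() ** 2 + 1e-9)
        dip = (seg[0] + seg[-1]) / 2 - seg.min()
        # a > 0 means the parabola opens upward, i.e. the variance dips mid-window.
        if a > 0 and residual < max_residual and dip > min_dip * seg.mean():
            hits.append((start, start + window))
    return hits
```

Camera or object motion also disturbs this variance curve, which is exactly the false-alarm source the paper's algorithm is designed to discriminate against.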


International Symposium on Neural Networks | 1993

Camera-based bar code recognition system using neural net

Shu-Jen Liu; Hong-Yuan Liao; Liang-Hua Chen; Hsiao-Rong Tyan; Jun-Wei Hsieh

In this paper, a bar code recognition system using neural networks is proposed. It is well known that in many stores laser bar code readers are used at check-out counters. However, there is a major constraint when this tool is used: unlike traditional camera-based imaging, the distance between the laser reader (sensor) and the target object is close to zero when the reader is applied. This is inconvenient for store automation because a human operator has to handle either the sensor or the objects (or both). For the purpose of store automation, the human operator has to be removed from the process, i.e., a robot with visual capability is required to play an important role in such a system. In this paper, we propose a camera-based bar code recognition system using backpropagation neural networks. The ultimate goal of this approach is to use a camera instead of a laser reader so that store automation can be achieved. There are a number of steps involved in the proposed system. The first step is to locate the position and orientation of the bar code in the acquired image. Secondly, the proposed system has to segment the bar code. Finally, a trained backpropagation neural network performs the bar code recognition task. Experiments have been conducted to corroborate the proposed method.
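The final recognition step, a backpropagation network classifying segmented bar/space width patterns, can be sketched with a small multilayer perceptron. The feature encoding below (normalized widths padded to a fixed length) is a guess for illustration; the paper's actual input representation and network layout are not reproduced.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def widths_to_features(bar_widths, n_features=16):
    """Normalize one character's bar/space widths into a fixed-length vector."""
    w = np.asarray(bar_widths, dtype=float)
    w = w / w.sum()                          # scale invariance across print sizes
    padded = np.zeros(n_features)
    n = min(len(w), n_features)
    padded[:n] = w[:n]
    return padded

def train_recognizer(train_widths, train_labels):
    """Fit a small backpropagation network mapping width patterns to symbols."""
    X = np.stack([widths_to_features(w) for w in train_widths])
    clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
    clf.fit(X, train_labels)
    return clf

def recognize(clf, segmented_widths):
    """Classify each segmented character of a located bar code."""
    X = np.stack([widths_to_features(w) for w in segmented_widths])
    return clf.predict(X)
```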


Asian Conference on Computer Vision | 1998

Face Recognition Using a Face-Only Database: A New Approach

Hong-Yuan Mark Liao; Chin-Chuan Han; Gwo-Jong Yu; Hsiao-Rong Tyan; Meng Chang Chen; Liang-Hua Chen

In this paper, a coarse-to-fine, LDA-based face recognition system is proposed. Through careful implementation, we found that the databases adopted by two state-of-the-art face recognition systems [1,2] were incorrect because they mistakenly used some non-face portions for face recognition. Hence, a face-only database is used in the proposed system. Since the facial organs on a human face differ only slightly from person to person, the decision-boundary determination process is tougher in this system than in conventional approaches. Therefore, in order to avoid the above-mentioned ambiguity problem, we propose to retrieve a closest subset of database samples instead of retrieving a single sample. The proposed face recognition system has several advantages. First, the system is able to deal with a very large database and can thus provide a basis for efficient search. Second, due to its design, the system can handle defocus and noise problems. Third, the system is faster than the autocorrelation plus LDA approach [1] and the PCA plus LDA approach [2], which are believed to be two statistics-based, state-of-the-art face recognition systems. Experimental results show that the proposed method is better than traditional methods in terms of efficiency and accuracy.
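The closest-subset retrieval idea can be illustrated with a generic LDA pipeline: project face-only vectors into the discriminant subspace and return the k nearest gallery samples instead of a single best match. The sketch below is only that generic pipeline; the coarse-to-fine search structure and the face-only database construction of the paper are not reproduced.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def fit_lda_gallery(face_vectors, labels):
    """Project flattened face-only crops into an LDA subspace."""
    lda = LinearDiscriminantAnalysis()
    projected = lda.fit_transform(np.asarray(face_vectors), labels)
    return lda, projected, np.asarray(labels)

def closest_subset(lda, projected_gallery, gallery_labels, probe_vector, k=5):
    """Return the k nearest gallery samples rather than a single match,
    mirroring the closest-subset retrieval described in the abstract."""
    probe = lda.transform(np.asarray(probe_vector).reshape(1, -1))
    dists = np.linalg.norm(projected_gallery - probe, axis=1)
    order = np.argsort(dists)[:k]
    return list(zip(gallery_labels[order], dists[order]))
```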


Journal of Visual Communication and Image Representation | 2003

On the preview of digital movies

Liang-Hua Chen; Chih-Wen Su; Hong-Yuan Mark Liao; Chun-Chieh Shih

In this paper, a new technique is proposed for the automatic generation of a preview sequence for a feature film. The input video is decomposed into a number of basic components called shots. In this step, the proposed shot change detection algorithm is able to detect both abrupt and gradual transition boundaries. Then, shots are grouped into semantically related scenes by taking into account the visual characteristics and temporal dynamics of the video. Finally, by making use of an empirically motivated approach, the intense-interaction and action scenes are extracted to form the preview video. Compared with related works that integrate visual and audio information, our visual-based approach is computationally simple yet effective.
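One way to make the final selection step concrete is a simple tempo heuristic: rank scenes by short average shot length and high motion, then keep the top few in story order. Both the scene representation and the scoring rule below are illustrative stand-ins for the paper's empirically motivated criteria.

```python
import numpy as np

def select_preview_scenes(scenes, n_select=4):
    """Rank scenes by a tempo-times-motion score and keep the best few.

    Each scene is assumed to be a dict with 'shot_lengths' (seconds) and
    'motion' (mean motion magnitude); this representation is hypothetical.
    """
    def score(scene):
        tempo = 1.0 / (np.mean(scene["shot_lengths"]) + 1e-6)  # shorter shots, higher tempo
        return tempo * scene["motion"]

    ranked = sorted(range(len(scenes)), key=lambda i: score(scenes[i]), reverse=True)
    return sorted(ranked[:n_select])   # keep selected scenes in chronological order
```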


Systems, Man and Cybernetics | 1999

Automatic data capture for geographic information systems

Liang-Hua Chen; Hong-Yuan Mark Liao; Jiing-Yuh Wang; Kuo-Chin Fan

We present a map interpretation system for the automatic extraction of high-level information from scanned images of Chinese land register maps. Our map interpretation system consists of three main components: text/graphics separation, parcel extraction, and rotated character recognition. Our approach to text/graphics separation is based on a simple yet effective rule: the feature points of characters are more compact than those of graphics. In the parcel extraction process, the proposed algorithm traces the branches between feature points to extract polygon structures from line drawings. Our character recognition method is based on the matching of extracted strokes using a neural network. The text/graphics separation and character recognition techniques are robust to the rotation and writing style of characters. Another advantage of our separation algorithm is that it can successfully extract a character connected to a graphical line. Experimental results show that the proposed system is effective for data capture in geographic information systems.
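The compactness rule quoted above can be approximated at the connected-component level: compact blobs are treated as character strokes, elongated or sparse components as graphic lines. The sketch below uses pixel density as a stand-in for feature-point compactness, and its size and density thresholds are illustrative rather than the paper's values.

```python
import cv2
import numpy as np

def separate_text_graphics(binary_map, max_text_size=40, min_density=0.2):
    """Split an 8-bit binarized map into text-like and graphics-like pixel masks."""
    n, labels, stats, _ = cv2.connectedComponentsWithStats(binary_map)
    text_mask = np.zeros_like(binary_map)
    graphics_mask = np.zeros_like(binary_map)
    for i in range(1, n):                          # label 0 is the background
        w = stats[i, cv2.CC_STAT_WIDTH]
        h = stats[i, cv2.CC_STAT_HEIGHT]
        density = stats[i, cv2.CC_STAT_AREA] / float(w * h)
        component = (labels == i).astype(binary_map.dtype) * 255
        if max(w, h) <= max_text_size and density >= min_density:
            text_mask |= component                 # compact blob: likely a character
        else:
            graphics_mask |= component             # long or sparse: likely a line
    return text_mask, graphics_mask
```

A refinement along the lines of the paper would additionally reclaim characters that touch graphic lines, which a plain component split like this one cannot do.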


Journal of Multimedia | 2009

Action Scene Detection with Support Vector Machines

Liang-Hua Chen; Chih-Wen Su; Chi-Feng Weng; Hong-Yuan Mark Liao

To entice the target audience into paying to see the full movie, the production of movie trailers is an integral part of the movie industry, and action scenes are the main component of a movie trailer. In this paper, we propose an automatic action scene detection algorithm based on the analysis of high-level video structure. The input video is first decomposed into a number of basic components called shots. Then, shots are grouped into semantically related scenes by taking into account the visual characteristics and temporal dynamics of the video. Based on the film-making characteristics of action scenes, several scene-level features are extracted and fed into a support vector machine for classification. Compared with related works that integrate visual and audio information, our visual-based approach is computationally simple yet effective.
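The classification stage maps each scene's features to an action or non-action label with a support vector machine. The sketch below assumes scene-level features such as average shot length, cut rate, and mean motion magnitude; the actual feature set of the paper may differ, and the RBF kernel and its parameters are default choices rather than the paper's configuration.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def train_action_classifier(scene_features, is_action):
    """Fit an SVM that labels scenes as action (1) or non-action (0)."""
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
    clf.fit(np.asarray(scene_features), np.asarray(is_action))
    return clf

def detect_action_scenes(clf, scene_features):
    """Return the indices of scenes classified as action scenes."""
    predictions = clf.predict(np.asarray(scene_features))
    return [i for i, p in enumerate(predictions) if p == 1]
```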


Engineering Applications of Artificial Intelligence | 1995

A bar-code recognition system using backpropagation neural networks

Hong-Yuan Liao; Shu-Jen Liu; Liang-Hua Chen; Hsiao-Rong Tyan

In this paper, a bar-code recognition system using neural networks is proposed. It is well known that in many stores laser bar-code readers are used at check-out counters. However, there is a major constraint when this tool is used: unlike traditional camera-based imaging, the distance between the laser reader (sensor) and the target object is close to zero when the reader is applied. This may result in inconvenience in store automation because the human operator has to manipulate either the sensor or the objects, or both. For the purpose of in-store automation, the human operator needs to be removed from the process, i.e., a robot with visual capability is required to play an important role in such a system. This paper proposes a camera-based bar-code recognition system using backpropagation neural networks. The ultimate goal of this approach is to use a camera instead of a laser reader so that in-store automation can be achieved. There are a number of steps involved in the proposed system. The first step the system has to perform is to locate the position and orientation of the bar code in the acquired image. Secondly, the proposed system has to segment the bar code. Finally, a trained backpropagation neural network is used to perform the bar-code recognition task. Experiments have been conducted to corroborate the efficiency of the proposed method.

Collaboration


Liang-Hua Chen's top co-authors and their affiliations:

Hsiao-Rong Tyan (Chung Yuan Christian University)
Jiing-Yuh Wang (National Central University)
Kuo-Chin Fan (National Central University)
Gwo-Jong Yu (National Central University)
Yu-Chun Lai (National Chiao Tung University)
Chi-Feng Weng (Fu Jen Catholic University)