
Publication


Featured research published by Haiyuan Wu.


IEEE Transactions on Pattern Analysis and Machine Intelligence | 1999

Face detection from color images using a fuzzy pattern matching method

Haiyuan Wu; Qian Chen; Masahiko Yachida

This paper describes a new method to detect faces in color images based on fuzzy theory. We build two fuzzy models to describe skin color and hair color, respectively. In these models, we use a perceptually uniform color space to describe the color information, which increases accuracy and stability. We use the two models to extract the skin-color regions and the hair-color regions, and then compare them with prebuilt head-shape models using a fuzzy-theory-based pattern-matching method to detect face candidates.
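
The fuzzy color-model idea can be illustrated with a toy membership function. This is a minimal sketch only, not the paper's model: the trapezoid shape, the single chromaticity value, and all thresholds below are illustrative assumptions (the real models are built in a perceptually uniform color space from training data).

```python
# Minimal sketch of a fuzzy skin-color membership function (illustrative).
# Pixels whose chromaticity falls inside [b, c] belong to the skin set with
# degree 1.0; degrees ramp linearly down to 0 outside [a, d].

def trapezoid_membership(x, a, b, c, d):
    """Degree in [0, 1] to which x belongs to the fuzzy set (a, b, c, d)."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    if x < b:
        return (x - a) / (b - a)
    return (d - x) / (d - c)

# Hypothetical skin-chromaticity set; the bounds are made-up numbers.
skin = lambda u: trapezoid_membership(u, 0.30, 0.38, 0.48, 0.56)

print(skin(0.43))            # inside the core -> 1.0
print(round(skin(0.34), 3))  # on the rising ramp -> 0.5
print(skin(0.20))            # outside the support -> 0.0
```

A pixel's final score would combine such memberships for skin and hair models before the head-shape matching step.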


European Conference on Computer Vision | 2004

Camera Calibration with Two Arbitrary Coplanar Circles

Qian Chen; Haiyuan Wu; Toshikazu Wada

In this paper, we describe a novel camera calibration method that estimates the extrinsic parameters and the focal length of a camera from a single image of two coplanar circles with arbitrary radii.


IEEE International Conference on Automatic Face and Gesture Recognition | 1998

3D head pose estimation without feature tracking

Qian Chen; Haiyuan Wu; Takeshi Fukumoto; Masahiko Yachida

We present a robust approach to estimating the 3D pose of human heads in a single image. In contrast to other approaches, this method uses only the information about the skin region and the hair region of the head. First, we use an efficient algorithm based on a perceptually uniform color system and fuzzy theory to extract the skin region and the hair region, which are then used to detect faces in images. After that, the areas, the centers, and the axes of least inertia of both the skin region and the hair region are calculated and used to estimate the 3D pose of the head. We tested our method by estimating the 3D pose of a head from live video sequences. The three angles describing the rotation of the head about the X, Y and Z axes are extracted from each frame of the image sequence and sent to a program that generates CG animations of a synthesized head in the estimated pose.
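
The centroid and axis-of-least-inertia computation can be sketched as follows. This is the generic second-central-moment calculation, not the authors' code, and the example region is made up.

```python
import math

def pose_moments(points):
    """Centroid and axis-of-least-inertia angle of a binary region.

    points: list of (x, y) pixel coordinates belonging to the region.
    Returns (cx, cy, theta) where theta (radians) is the orientation of the
    axis of least inertia, from the second central moments.
    """
    n = len(points)
    cx = sum(x for x, _ in points) / n
    cy = sum(y for _, y in points) / n
    mu20 = sum((x - cx) ** 2 for x, _ in points)
    mu02 = sum((y - cy) ** 2 for _, y in points)
    mu11 = sum((x - cx) * (y - cy) for x, y in points)
    theta = 0.5 * math.atan2(2 * mu11, mu20 - mu02)
    return cx, cy, theta

# A thin diagonal strip of pixels: its least-inertia axis is at ~45 degrees.
strip = [(i, i) for i in range(10)] + [(i, i + 1) for i in range(10)]
cx, cy, theta = pose_moments(strip)
print(round(math.degrees(theta)))  # -> 45
```

In the paper, such quantities from both the skin and hair regions feed the head-pose estimate; here only one region's moments are shown.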


International Conference on Automatic Face and Gesture Recognition | 1996

Face and facial feature extraction from color image

Haiyuan Wu; Taro Yokoyama; Dadet Pramadihanto; Masahiko Yachida

This paper presents automatic processing of human faces from color images. The system works hierarchically, from detecting the position of a human face and its features (such as eyes, nose, and mouth) to extracting contours and feature points. The position of the face and its parts is detected from the image by applying the integral projection method, which synthesizes the color information (skin and hair color) and the edge information (intensity and sign). To extract the contour lines of the facial features, we use a multiple active contour model with energy terms based on color information. Facial feature points are determined from the optimized contours. The proposed system is confirmed to be effective and robust for images of faces against complex backgrounds.
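
The integral projection idea can be sketched in a few lines. This is illustrative only: the paper combines color and signed edge information, whereas here a plain binary skin map is projected, and the example map and threshold are made up.

```python
# Minimal sketch of integral projection: sum each row of a binary skin map;
# the band of rows with large sums locates the face vertically (the same
# trick on columns locates it horizontally).

def horizontal_projection(binary_map):
    """Row sums of a 2-D 0/1 map."""
    return [sum(row) for row in binary_map]

# Hypothetical 8x8 skin map with a face-like blob spanning rows 2-5.
skin_map = [[1 if 2 <= r <= 5 and 2 <= c <= 6 else 0 for c in range(8)]
            for r in range(8)]
proj = horizontal_projection(skin_map)
face_rows = [r for r, s in enumerate(proj) if s >= max(proj) * 0.5]
print(face_rows)  # -> [2, 3, 4, 5]
```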


International Conference on Pattern Recognition | 2002

Glasses frame detection with 3D Hough transform

Haiyuan Wu; G. Yoshikawa; Tadayoshi Shioyama; T. Lao; T. Kawade

This paper describes a method to detect glasses frames for robust facial image processing. This method makes use of the 3D features obtained by a trinocular stereo vision system. The glasses frame detection is based on the fact that the rims of a pair of glasses lie on the same plane in 3D space. We use a 3D Hough transform to obtain a plane in which 3D features are concentrated. Then, based on the obtained 3D plane and with some geometry constraints, we can detect a group of 3D features belonging to the frame of the glasses. Using this approach, we can separate the 3D features of the glasses frame from those of facial features. This approach does not require any prior knowledge about face pose, eye positions, or the shape of the glasses.
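
The plane-voting idea behind a 3D Hough transform can be sketched as follows. This is not the paper's implementation: the slope-intercept plane parameterization, the grid resolution, and the example points are all illustrative assumptions (a real system would use a normal-form parameterization robust to vertical planes).

```python
# Minimal sketch of a 3D Hough transform for plane detection (illustrative).
# Each 3D point votes for all planes z = a*x + b*y + c passing through it,
# over a coarse grid of slopes (a, b); the accumulator peak gives the plane
# on which most 3D features are concentrated.
from collections import Counter

def hough_plane(points, slopes, c_step=0.5):
    votes = Counter()
    for x, y, z in points:
        for a in slopes:
            for b in slopes:
                c = round((z - a * x - b * y) / c_step) * c_step
                votes[(a, b, c)] += 1
    return votes.most_common(1)[0]  # ((a, b, c), vote count)

slopes = [i * 0.5 for i in range(-4, 5)]  # -2.0 .. 2.0
# Points on the plane z = 1.0*x - 0.5*y + 2.0, plus one off-plane outlier.
pts = [(x, y, 1.0 * x - 0.5 * y + 2.0) for x in range(4) for y in range(4)]
pts.append((0.0, 0.0, 9.0))
best, count = hough_plane(pts, slopes)
print(best, count)  # -> (1.0, -0.5, 2.0) 16
```

Once the dominant plane is found, points near it (the glasses-rim features in the paper) can be separated from the rest with a distance threshold.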


International Conference on Pattern Recognition | 2002

Optimal Gabor filters for high speed face identification

Haiyuan Wu; Yukio Yoshida; Tadayoshi Shioyama

This paper describes a fast face identification method using Gabor filters. Two efforts are made to achieve an acceptable processing speed: 1) we design optimal Gabor filters, using arrangement theory, that need only a few directions and layers; and 2) the transformation with Gabor filters (called the Gabor transformation) is performed only over the regions around the facial feature points, not over the whole input image. Facial feature point extraction is performed by detecting the facial organ regions with color and edge information, followed by corner detection in each detected facial organ region with the SUSAN operator.
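
A Gabor response at a single point can be sketched as below. This is a generic real-valued Gabor filter, not the paper's optimized filter bank; the kernel size, wavelength, and test patch are illustrative assumptions.

```python
import math

def gabor_kernel(size, wavelength, theta, sigma):
    """Real part of a Gabor kernel: a Gaussian-windowed cosine grating.
    size: odd kernel width/height; theta: grating orientation in radians."""
    half = size // 2
    kernel = []
    for y in range(-half, half + 1):
        row = []
        for x in range(-half, half + 1):
            xr = x * math.cos(theta) + y * math.sin(theta)
            g = math.exp(-(x * x + y * y) / (2 * sigma * sigma))
            row.append(g * math.cos(2 * math.pi * xr / wavelength))
        kernel.append(row)
    return kernel

def response(patch, kernel):
    """Filter response at the patch center (patch and kernel same size)."""
    return sum(p * k for prow, krow in zip(patch, kernel)
               for p, k in zip(prow, krow))

# Vertical stripes matching the filter's wavelength respond strongly to a
# 0-degree-oriented filter and weakly to a 90-degree one.
stripes = [[math.cos(2 * math.pi * x / 4) for x in range(-3, 4)]
           for _ in range(7)]
k0 = gabor_kernel(7, 4.0, 0.0, 2.0)
k90 = gabor_kernel(7, 4.0, math.pi / 2, 2.0)
print(response(stripes, k0) > abs(response(stripes, k90)))  # -> True
```

Evaluating such responses only at detected feature points, rather than convolving the whole image, is what buys the speed-up described in the abstract.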


Measurement Science and Technology | 2002

Measurement of the length of pedestrian crossings and detection of traffic lights from image data

Tadayoshi Shioyama; Haiyuan Wu; Naoki Nakamura; Suguru Kitawaki

This paper proposes a method for measurement of the length of a pedestrian crossing and for the detection of traffic lights from image data observed with a single camera. The length of a crossing is measured from image data of white lines painted on the road at a crossing by using projective geometry. Furthermore, the state of the traffic lights, green (go signal) or red (stop signal), is detected by extracting candidates for the traffic light region with colour similarity and selecting a true traffic light from them using affine moment invariants. From the experimental results, the length of a crossing is measured with an accuracy such that the maximum relative error of measured length is less than 5% and the rms error is 0.38 m. A traffic light is efficiently detected by selecting a true traffic light region with an affine moment invariant.


International Conference on Pattern Recognition | 1998

Spotting recognition of head gestures from color image series

Haiyuan Wu; Tadayoshi Shioyama; Hirotomo Kobayashi

This paper presents an approach for spotting recognition of human head gestures from color image series. First, we use an algorithm based on a perceptually uniform color system to detect the skin-color region and the hair-color region of the face in the input images. Then, the 3D pose of the head relative to the camera is estimated by calculating the first and second moments of the skin-color region and the hair-color region. The standard pattern of each gesture is represented by the sequence of rotation angles about the X, Y and Z axes together with the head area. The head gestures are then recognized by using the continuous dynamic programming algorithm to compare the input image series with the standard patterns. Extensive experiments show the effectiveness of this approach in recognizing head gestures from live video sequences with different people, even wearing glasses, with different head sizes and unknown complex backgrounds.
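
The matching step can be illustrated with plain dynamic time warping. Note the substitution: the paper uses continuous DP for spotting gestures in an unsegmented stream, whereas this sketch matches one pre-segmented angle sequence against standard patterns; the gesture patterns below are made up.

```python
# Illustrative DP matching of a rotation-angle sequence against standard
# gesture patterns (plain DTW, standing in for the paper's continuous DP).

def dtw(seq_a, seq_b):
    """Dynamic time warping distance between two 1-D sequences."""
    inf = float("inf")
    n, m = len(seq_a), len(seq_b)
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(seq_a[i - 1] - seq_b[j - 1])
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[n][m]

# Hypothetical standard patterns: pitch-angle sequences for "nod" and "shake".
nod = [0, 10, 20, 10, 0, -10, 0]
shake = [0, 0, 0, 0, 0, 0, 0]
observed = [0, 12, 19, 8, 0, -9, 1]  # a noisy nod
best = min([("nod", dtw(observed, nod)), ("shake", dtw(observed, shake))],
           key=lambda t: t[1])
print(best[0])  # -> nod
```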


International Conference on Pattern Recognition | 1996

Facial feature extraction and face verification

Haiyuan Wu; Qian Chen; Masahiko Yachida

The outputs of many face detection systems are face candidates that may contain some false faces. This paper describes an algorithm to verify the face candidates. Once a face candidate is detected in an image, the positions and sizes of the facial features on the face are predicted based on knowledge about the arrangement of facial features on a human face. The facial features are then detected at the predicted positions with a coarse-to-fine approach. The face candidates are verified by considering whether the facial features can be extracted and how well they match a relational face model that describes the geometric relationship among the facial features of a generic human face.


Journal of Multimedia | 2006

K-means Tracker: A General Algorithm for Tracking People

Chunsheng Hua; Haiyuan Wu; Qian Chen; Toshikazu Wada

In this paper, we present a clustering-based algorithm for tracking people (e.g. hand, head, eyeball, body, and lips). Tracking people in complex environments is a challenging task, because such targets often appear as concave objects or objects with apertures. In this case, many background areas are mixed into the tracking area and are difficult to remove by modifying the shape of the search area during tracking. Our method becomes a robust tracking algorithm by applying the following four key ideas simultaneously: 1) using a 5D feature vector to describe both the geometric feature “(x,y)” and the color feature “(Y,U,V)” of each pixel uniformly, which enables our method to follow both position and color changes during tracking; 2) realizing robust tracking of objects with apertures by classifying the pixels within the search area into “target” and “background” with the K-means clustering algorithm, using both “positive” and “negative” samples; 3) using a variable ellipse model (a) to approximately describe the shape of a nonrigid object (e.g. a hand), (b) to restrict the search area, and (c) to model the surrounding non-target background, which guarantees stable tracking of objects under various geometric transformations; 4) achieving automatic detection of and recovery from tracking failures with both the “positive” and “negative” samples, which makes our method distinctively more robust than conventional tracking algorithms. Through extensive experiments in various environments and conditions, the effectiveness and efficiency of the proposed algorithm are confirmed.
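
The 5D clustering at the core of the tracker can be sketched as follows. This is only the two-cluster k-means step; the ellipse model, negative-sample placement, and failure recovery from the paper are omitted, and the example pixels and initial centers are made up.

```python
# Minimal sketch of two-cluster k-means over 5-D pixel features (x, y, Y, U, V),
# separating "target" pixels from "background" pixels within a search area.

def squared_dist(p, q):
    return sum((a - b) ** 2 for a, b in zip(p, q))

def kmeans_2(pixels, c_target, c_bg, iters=10):
    """Assign each pixel to the nearer center, recompute centers, repeat.
    Returns the pixels assigned to the target cluster."""
    clusters = ([], [])
    for _ in range(iters):
        clusters = ([], [])
        for p in pixels:
            near = squared_dist(p, c_target) < squared_dist(p, c_bg)
            clusters[0 if near else 1].append(p)
        new_centers = []
        for group, fallback in zip(clusters, (c_target, c_bg)):
            if group:
                new_centers.append(tuple(sum(v) / len(group)
                                         for v in zip(*group)))
            else:
                new_centers.append(fallback)  # keep center if cluster emptied
        c_target, c_bg = new_centers
    return clusters[0]

# Hypothetical bright target blob (Y=200) amid a dark background (Y=50).
target_pix = [(x, y, 200, 120, 120) for x in range(4, 7) for y in range(4, 7)]
bg_pix = [(x, y, 50, 128, 128) for x in (0, 1, 9, 10) for y in (0, 1, 9, 10)]
found = kmeans_2(target_pix + bg_pix,
                 (5, 5, 180, 128, 128),   # "positive" seed center
                 (0, 0, 60, 128, 128))    # "negative" seed center
print(len(found))  # -> 9
```

Because each feature vector carries both position and color, the same assignment step tracks the target through simultaneous motion and appearance changes.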

Collaboration


Dive into Haiyuan Wu's collaboration.

Top Co-Authors


Tadayoshi Shioyama

Kyoto Institute of Technology
