Publication


Featured research published by Rui Ishiyama.


international conference on pattern recognition | 2002

Geodesic illumination basis: compensating for illumination variations in any pose for face recognition

Rui Ishiyama; Shizuo Sakamoto

This paper proposes a model of illumination variations in object appearance, called the geodesic illumination basis model. It calculates pose-independent illumination bases on a 3D model, and these bases are warped into view-dependent bases for any pose. We experimentally evaluate how many illumination samples and bases are necessary, and show that our model can compensate for any illumination variation in any pose. A face recognition system incorporating the proposed model is constructed, and its performance is tested using a database of 3D models and test images of 42 individuals captured under drastically differing pose and illumination conditions. Our system achieves a first-choice success ratio of 97.3% when the position and pose of the target face are known.
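The basis construction described above can be sketched as follows. This is a minimal illustration assuming the pose-independent bases are obtained by PCA over per-vertex irradiance samples; all names, shapes, and data are illustrative, not taken from the paper.

```python
import numpy as np

# Hypothetical sketch: build a pose-independent illumination basis by PCA
# over per-vertex irradiance samples computed under varied sample lightings.
rng = np.random.default_rng(0)
n_vertices, n_lights = 500, 30
# Irradiance of each 3D surface vertex under n_lights sample illuminations.
samples = rng.random((n_vertices, n_lights))

# Principal components of the centered samples serve as the basis.
mean = samples.mean(axis=1, keepdims=True)
u, s, _ = np.linalg.svd(samples - mean, full_matrices=False)
k = 9                      # number of basis vectors retained (illustrative)
basis = u[:, :k]           # pose-independent basis defined on the surface

# A new illumination condition is approximated as a linear combination.
new_irradiance = rng.random(n_vertices)
coeffs = basis.T @ (new_irradiance - mean.ravel())
approx = mean.ravel() + basis @ coeffs
```

Because the basis lives on the 3D surface rather than the image plane, it can be warped into any view, which is the property the abstract emphasizes.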


systems man and cybernetics | 2005

An appearance model constructed on 3-D surface for robust face recognition against pose and illumination variations

Rui Ishiyama; Masahiko Hamanaka; Shizuo Sakamoto

We propose a face recognition method that is robust against image variations due to arbitrary lighting and a large extent of pose variations, ranging from frontal to profile views. Existing appearance models defined on image planes are not applicable to such pose variations, which cause occlusions and changes of silhouette. In contrast, our method constructs an appearance model of a three-dimensional (3-D) object on its surface. Our proposed model consists of a 3-D shape and geodesic illumination bases (GIBs). GIBs can describe the irradiances of an object's surface under any illumination and generate an illumination subspace that can describe illumination variations of an image in an arbitrary pose. Our appearance model is automatically aligned to the target image by pose optimization based on a rough pose, and the residual error of this model fitting is used as the recognition score. We tested the recognition performance of our method with an extensive database that includes 14,000 images of 200 individuals with drastic illumination changes and pose variations up to 60° sideward and 45° upward. The method achieved a first-choice success ratio of 94.2% without knowing precise poses a priori.
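The residual-error scoring step can be sketched as a least-squares projection: fit the query image to a candidate's illumination subspace and use the residual norm as the (lower-is-better) score. The subspace here is random; in the paper it would come from the GIBs rendered at the estimated pose.

```python
import numpy as np

rng = np.random.default_rng(1)
n_pixels, k = 1000, 9
# Stand-in illumination subspace: an orthonormal basis of k columns.
subspace = np.linalg.qr(rng.standard_normal((n_pixels, k)))[0]

def residual_score(image_vec, basis):
    """Least-squares fit of the image to the subspace; residual norm is
    the recognition score (small residual = good model fit)."""
    coeffs, *_ = np.linalg.lstsq(basis, image_vec, rcond=None)
    return float(np.linalg.norm(image_vec - basis @ coeffs))

inside = subspace @ rng.standard_normal(k)   # image explained by the model
outside = rng.standard_normal(n_pixels)      # image of a different subject
score_in = residual_score(inside, subspace)
score_out = residual_score(outside, subspace)
```

An image generated by the model fits with near-zero residual, while an unrelated image leaves a large residual, which is what makes the residual usable as a score.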


international conference on pattern recognition | 2004

Fast and accurate facial pose estimation by aligning a 3D appearance model

Rui Ishiyama; Shizuo Sakamoto

This paper proposes a method to estimate pose, including large rotation in depth, by aligning a 3D appearance model with a target image captured under various illumination conditions. Pose estimation is formulated as the minimization of the error between the target image and an image reproduced by the model. In the experiments, the performance of the proposed method for static and real-time pose estimation was evaluated with test images including pose variations of up to 60 degrees from frontal and drastic illumination variations. The results show that the proposed method is fast enough for real-time estimation.
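The error-minimization formulation can be illustrated with a toy one-parameter version: search for the pose that minimizes the squared difference between the target image and the model's reproduction. The "renderer" below is a stand-in Gaussian blob, not the paper's 3D appearance model.

```python
import numpy as np

def render(pose, xs):
    # Toy model: a shifted Gaussian blob stands in for a rendered face
    # at a given pose parameter.
    return np.exp(-(xs - pose) ** 2)

xs = np.linspace(-5, 5, 201)
target = render(1.3, xs)          # image captured at an unknown pose (1.3)

# Search over candidate poses, minimizing the image reproduction error.
candidates = np.linspace(-3, 3, 61)
errors = [np.sum((target - render(p, xs)) ** 2) for p in candidates]
estimate = float(candidates[int(np.argmin(errors))])
```

In practice the paper uses iterative alignment rather than exhaustive search, but the objective, the image-reproduction error as a function of pose, is the same idea.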


international conference on pattern recognition | 2006

A Compact Model of Human Postures Extracting Common Motion from Individual Samples

Rui Ishiyama; Hiroo Ikeda; Shizuo Sakamoto

Model-based markerless human motion capture is often affected by unstable estimation, mainly due to high degrees of freedom and inaccuracies in the body model. The authors propose a compact model of human postures that extracts motion common across different persons from individual samples. Our analysis of motion-capture data shows that individualities appear as constant offsets that represent individual figures. The proposed model compactly describes the variations of postures in common motion using a low-dimensional linear model. Experimental results show that our model provides moderate constraints that improve the accuracy of posture estimation from a single image of an unknown person whose body size is unknown.
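The "individuality as a constant offset" observation suggests a simple construction: subtract each person's mean posture before fitting a linear model, so the retained components describe only the shared motion. The sketch below uses synthetic data with illustrative dimensions.

```python
import numpy as np

rng = np.random.default_rng(2)
n_persons, n_frames, n_joints = 5, 40, 12

# Shared motion: a rank-1 trajectory common to all persons.
common_motion = np.sin(np.linspace(0, 2 * np.pi, n_frames))[:, None] \
    * rng.standard_normal((1, n_joints))
# Each person = common motion plus a constant per-person offset (figure).
postures = np.stack([common_motion + rng.standard_normal((1, n_joints))
                     for _ in range(n_persons)])    # (persons, frames, joints)

# Subtract each person's mean posture: the offsets cancel exactly.
centered = postures - postures.mean(axis=1, keepdims=True)
flat = centered.reshape(-1, n_joints)
_, s, _ = np.linalg.svd(flat, full_matrices=False)
explained = float(s[0] ** 2 / np.sum(s ** 2))
```

After removing the offsets, the remaining variation is the common motion, so a single principal component explains essentially all of it, which is the sense in which the model is "compact".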


international conference on image processing | 2009

Face image enhancement using 3D and spectral information

Charles Dubout; Masato Tsukada; Rui Ishiyama; Chisato Funayama; Sabine Süsstrunk

This paper presents a novel method of enhancing image quality of face pictures using 3D and spectral information. Most conventional techniques directly work on the image data, shifting the skin color to a predefined skin tone, and thus do not take into account the effects of shape and lighting. The proposed method first recovers the 3D shape of a face in an input image using a 3D morphable model. Then, using color constancy and inverse rendering techniques, specularities and the true skin color, i.e., its spectral reflectance, are recovered. The quality of the input image is improved by matching the skin reflectance to a predefined reference and reducing the amount of specularities. The method realizes the enhancement in a more physically accurate manner compared to previous ones. Subjective experiments on image quality demonstrate the validity of the proposed method.
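The final correction step, matching the recovered skin reflectance to a predefined reference, can be sketched as a blend between the two spectra. The spectra and the blending weight below are assumptions for illustration; the paper derives the reflectance via color constancy and inverse rendering.

```python
import numpy as np

# Synthetic spectral reflectances sampled at 31 wavelengths (400-700 nm).
wavelengths = np.linspace(400, 700, 31)
recovered = 0.30 + 0.20 * (wavelengths - 400) / 300   # estimated skin spectrum
reference = 0.35 + 0.25 * (wavelengths - 400) / 300   # predefined target tone

strength = 0.8  # enhancement strength (hypothetical parameter)
enhanced = recovered + strength * (reference - recovered)
```

The enhanced spectrum lies between the recovered and reference spectra; re-rendering the face with it (and with specularities attenuated) yields the physically motivated enhancement the abstract describes.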


international conference on machine vision | 2017

Mass-produced parts traceability system based on automated scanning of “Fingerprint of Things”

Toru Takahashi; Yuta Kudo; Rui Ishiyama

This paper presents a prototype parts-traceability system that employs the “Fingerprint of Things” individual-identification technique. Traceability of mass-produced tiny parts such as bolts and nuts is required to ensure the quality and safety of large machines. However, conventional systems using ID tags or serial marking are not applicable because of the parts' quantity and tiny size. To overcome this problem, we propose a tag-less traceability system that uses the parts' appearance images as “fingerprints” to identify each of them. Our traceability system consists of three components: (i) an automated fingerprint-scanning machine for enrollment, (ii) a mobile device for queries, and (iii) a cloud server for identification from the database. The key to the success of our traceability system is capturing repeatable image features from the same part in both (i) enrollment and (ii) query. To this end, we designed two lighting mechanisms: one for fast scanning of numerous bolts by automatic feeding, and another for a mobile device to capture one part by hand. In our experiments, 1,000 metal bolts produced with the same mold are perfectly identified by matching their surface images captured with our automatic scanning machine and a smartphone.
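The enroll-then-identify pipeline can be sketched as nearest-neighbour matching of per-part feature vectors. Real features would be extracted from surface-texture images under the paper's controlled lighting; the vectors, dimensions, and noise level below are synthetic stand-ins.

```python
import numpy as np

rng = np.random.default_rng(3)
n_parts, dim = 1000, 64

# (i) Enrollment: one normalized "fingerprint" feature vector per part.
database = rng.standard_normal((n_parts, dim))
database /= np.linalg.norm(database, axis=1, keepdims=True)

def identify(query, db):
    """(iii) Server-side matching: index of the best cosine-similarity hit."""
    return int(np.argmax(db @ (query / np.linalg.norm(query))))

# (ii) Mobile query: a noisy re-capture of part number 42.
query = database[42] + 0.05 * rng.standard_normal(dim)
match = identify(query, database)
```

Identification succeeds as long as the re-captured features stay closer to the enrolled vector than to any other part's, which is exactly why the paper's repeatable lighting design matters.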


international conference on image processing | 2009

Specularity removal for enhancing face recognition

Rui Ishiyama; Masato Tsukada

This paper proposes a new method for removing specularity from face images so that albedo (diffuse reflectivity) can be accurately estimated. Our method utilizes common structural properties of face images to estimate the Lambertian component (including shadows) without using albedo; specularity is then separated as the positive component of the difference between the estimated Lambertian component and the source image. Experimental results show that face-recognition performance is significantly improved by applying our new algorithm. Numerous previous face-recognition methods use the 3D shape and albedo of an enrolled image for recognition under variable pose and illumination conditions. However, the albedo used by those methods contains a residual of specularity from the enrolled image, which produces matching errors. Our algorithm is effective at solving this problem.
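The separation step itself is simple once a Lambertian estimate exists: specularity is the positive part of the difference between the source image and that estimate. In the sketch below the Lambertian image is synthetic; the paper obtains it from structural properties of face images.

```python
import numpy as np

rng = np.random.default_rng(4)
lambertian = rng.random((8, 8))          # estimated Lambertian (diffuse) image
highlight = np.zeros((8, 8))
highlight[2:4, 2:4] = 0.5                # synthetic specular lobe
source = lambertian + highlight          # observed image = diffuse + specular

# Specularity = positive component of (source - estimated Lambertian).
specularity = np.maximum(source - lambertian, 0.0)
diffuse = source - specularity           # specular-free image for albedo
```

Clamping to the positive part means shadow-related under-estimates of the Lambertian component are never mistaken for (negative) specularity.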


international conference on computer vision | 2010

Extracting scene-dependent discriminant features for enhancing face recognition under severe conditions

Rui Ishiyama; Nobuyuki Yasukawa

This paper proposes a new method to compare the similarities of candidate models that are fitted to different areas of a query image. The method extracts the discriminant features that change with the pose and lighting conditions of the given query image, and the confidence of each model fitting is evaluated based on how many of the discriminant features are captured in each foreground. The confidence is fused with the similarity to enhance face-identification performance. In an experiment using 7,000 images of 200 subjects taken under largely varying pose and lighting conditions, our proposed method reduced recognition errors by more than 25% compared to the conventional method.
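The fusion step can be illustrated with a weighted combination of the two signals; the weighting scheme below is an assumption for illustration, not the paper's formula.

```python
# Hypothetical score fusion: combine raw similarity with model-fitting
# confidence so that a well-fitted candidate can overtake a slightly more
# similar but poorly fitted one.
def fused_score(similarity, confidence, alpha=0.5):
    """Weighted combination; alpha balances similarity vs. confidence."""
    return alpha * similarity + (1 - alpha) * confidence

# Candidate A: higher similarity, low fitting confidence.
# Candidate B: slightly lower similarity, high fitting confidence.
candidates = {"A": (0.80, 0.40), "B": (0.75, 0.90)}
best = max(candidates, key=lambda name: fused_score(*candidates[name]))
```

With equal weighting, candidate B wins despite the lower raw similarity, which mirrors the abstract's point that fitting confidence can correct similarity-only rankings.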


Archive | 2001

Device, method and record medium for image comparison

Rui Ishiyama


Archive | 2007

Method and apparatus for collating object

Rui Ishiyama
