
Publication


Featured research published by Yasushi Kanazawa.


British Machine Vision Conference | 2004

Detection of Planar Regions with Uncalibrated Stereo using Distributions of Feature Points

Yasushi Kanazawa; Hiroshi Kawakami

We propose a robust method for detecting local planar regions in a scene with an uncalibrated stereo pair. Our method is based on random sampling guided by distributions of feature point locations. For RANSAC, we define a distribution for each feature point from the distances between that point and the other points. We first choose one correspondence using a uniform distribution and then choose candidate correspondences using the distribution of the chosen point. We compute a homography from the chosen correspondences and find the largest consensus set of that homography. We repeat this procedure until all regions are detected. We demonstrate through simulations and real image examples that our method is robust to outliers in the scene.
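
As a rough illustration of the sampling scheme described in the abstract (one correspondence drawn uniformly, further correspondences drawn from a distribution based on distances to the chosen point), the following Python/OpenCV sketch estimates a local homography by RANSAC; the Gaussian weighting and all parameter values are assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' code): RANSAC homography estimation in
# which the first correspondence is drawn uniformly and the remaining
# samples are drawn with probability that decays with distance from it.
import numpy as np
import cv2

def sample_local_homography(pts1, pts2, n_iters=500, thresh=3.0, sigma=50.0):
    """pts1, pts2: (N, 2) arrays of corresponding feature points."""
    rng = np.random.default_rng(0)
    n = len(pts1)
    best_inliers = np.zeros(n, dtype=bool)
    best_H = None
    for _ in range(n_iters):
        seed = rng.integers(n)                      # uniform first choice
        d = np.linalg.norm(pts1 - pts1[seed], axis=1)
        w = np.exp(-d**2 / (2 * sigma**2))          # nearby points more likely
        w[seed] = 0.0
        others = rng.choice(n, size=3, replace=False, p=w / w.sum())
        idx = np.r_[seed, others]
        H, _ = cv2.findHomography(pts1[idx], pts2[idx], 0)  # exact 4-point fit
        if H is None:
            continue
        proj = cv2.perspectiveTransform(
            pts1.reshape(-1, 1, 2).astype(np.float64), H)
        err = np.linalg.norm(proj.reshape(-1, 2) - pts2, axis=1)
        inliers = err < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_H = inliers, H
    return best_H, best_inliers
```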


British Machine Vision Conference | 2006

Wide Baseline Matching using Triplet Vector Descriptor

Yasushi Kanazawa; Koki Uemura

We propose an image matching method using a triplet vector descriptor. The triplet vector descriptor consists of two different types of affine invariants: the gray-level profile between two feature points and the covariance matrices of those two points. To establish point matches, we first vote the similarities of the triplet vector descriptors into candidate matches and then verify the matches by normalized triangular region vectors, which are also affine invariant. After enforcing the uniqueness of the candidate matches, we finally apply RANSAC with the epipolar constraint to remove outliers. With our method we obtain correct matches in wide-baseline matching problems. We show the effectiveness of our method with real image examples.
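
The final step mentioned above, RANSAC with the epipolar constraint, can be illustrated with OpenCV's fundamental-matrix estimator; this sketch assumes the candidate matches have already been produced by the descriptor voting and verification stages, which are not reproduced here.

```python
# Minimal sketch of the final outlier-removal step: RANSAC with the
# epipolar constraint via OpenCV's fundamental-matrix estimator.
import numpy as np
import cv2

def filter_by_epipolar_ransac(pts1, pts2, thresh=1.5):
    """pts1, pts2: (N, 2) candidate matches after the uniqueness check."""
    F, mask = cv2.findFundamentalMat(pts1.astype(np.float64),
                                     pts2.astype(np.float64),
                                     cv2.FM_RANSAC, thresh, 0.99)
    if F is None:
        return pts1[:0], pts2[:0], None
    inliers = mask.ravel().astype(bool)
    return pts1[inliers], pts2[inliers], F
```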


RobVis'08: Proceedings of the 2nd International Conference on Robot Vision | 2008

Accurate image matching in scenes including repetitive patterns

Sunao Kamiya; Yasushi Kanazawa

We propose an accurate method for image matching in scenes that include repetitive patterns such as buildings and walls. Our matching method has two phases: matching between the elements of the repetitive regions, and matching between the points in the remaining regions. We first detect the elements of the repetitive patterns in each image and find matches between these elements without using any viewpoint-dependent descriptors. We then find matches between the points in the remaining regions of the two images using the information from the detected element matches. The advantage of our method is that it exploits the matching information contained in the repetitive patterns. We show the effectiveness of our method with real image examples.
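
The abstract does not specify how the element matches guide the second phase; one plausible reading is that they constrain the remaining point matches through the epipolar geometry they imply. The sketch below illustrates that assumption only and is not the paper's actual procedure.

```python
# Illustrative sketch (an assumption, not the paper's method): estimate a
# fundamental matrix from the phase-1 element matches and keep only the
# phase-2 candidate matches that satisfy the epipolar constraint.
import numpy as np
import cv2

def prune_with_element_matches(elem1, elem2, cand1, cand2, thresh=2.0):
    F, _ = cv2.findFundamentalMat(elem1.astype(np.float64),
                                  elem2.astype(np.float64), cv2.FM_LMEDS)
    ones = np.ones((len(cand1), 1))
    x1 = np.hstack([cand1, ones])            # homogeneous points, image 1
    x2 = np.hstack([cand2, ones])            # homogeneous points, image 2
    Fx1 = x1 @ F.T                           # epipolar lines in image 2
    # point-to-epipolar-line distance |x2^T F x1| / sqrt(a^2 + b^2)
    num = np.abs(np.sum(x2 * Fx1, axis=1))
    den = np.sqrt(Fx1[:, 0]**2 + Fx1[:, 1]**2)
    keep = num / den < thresh
    return cand1[keep], cand2[keep]
```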


Asian Conference on Computer Vision | 2016

Lip Reading from Multi View Facial Images Using 3D-AAM

Takuya Watanabe; Kouichi Katsurada; Yasushi Kanazawa

Lip reading is a technique for recognizing spoken words based on lip movement. In this process, it is important to detect the correct features of the facial images. However, detection is not easy in real situations because the facial images may be taken from various angles. To cope with this problem, lip reading from multi-view facial images has been studied at several research institutes. In this paper, we propose a lip-reading approach using 3D Active Appearance Model (AAM) features and a Hidden Markov Model (HMM)-based recognition model. The AAM is a parametric model constructed from both shape and appearance parameters. These parameters are compressed into the combined parameters of the AAM and are used in lip reading and other facial image processing applications. The 3D-AAM extends the traditional 2D shape model to a 3D shape model built from three different view angles (frontal, left, and right profile). It provides an effective algorithm for aligning the model with the RGB and 3D range images obtained by an RGB-D camera. The benefit of using 3D-AAM in lip reading is that it enables recognition of spoken words from facial images taken at any angle. In the experiments, we compared the accuracy of lip reading using 3D-AAM with that of the traditional 2D-AAM on facial images at various angles. Based on the results, we confirmed that 3D-AAM is effective in cross-view lip reading despite using only frontal images in the HMM training phase.
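
As a rough sketch of the HMM-based recognition stage, the following trains one Gaussian HMM per word on sequences of AAM parameter vectors and classifies by maximum log-likelihood; the hmmlearn library, the number of states, and the diagonal covariance choice are illustrative assumptions, not details from the paper.

```python
# Minimal sketch of word recognition with per-word Gaussian HMMs over
# (3D-)AAM parameter sequences. Library and model sizes are assumptions.
import numpy as np
from hmmlearn import hmm

def train_word_models(train_data, n_states=5):
    """train_data: dict word -> list of (T_i, D) AAM parameter sequences."""
    models = {}
    for word, seqs in train_data.items():
        X = np.vstack(seqs)
        lengths = [len(s) for s in seqs]
        m = hmm.GaussianHMM(n_components=n_states, covariance_type="diag",
                            n_iter=100)
        m.fit(X, lengths)
        models[word] = m
    return models

def recognize(models, seq):
    """Return the word whose HMM gives the highest log-likelihood."""
    return max(models, key=lambda w: models[w].score(seq))
```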


IPSJ Transactions on Computer Vision and Applications | 2014

Decomposing Three Fundamental Matrices for Initializing 3-D Reconstruction from Three Views

Yasushi Kanazawa; Yasuyuki Sugaya; Kenichi Kanatani

This paper focuses on initializing 3-D reconstruction from scratch without any prior scene information. Traditionally, this has been done from two-view matching, which is prone to the degeneracy called “imaginary focal lengths.” We overcome this difficulty by using three images, but we do not require three-view matching; all we need is three fundamental matrices separately computed from pairwise image matching. We exploit the redundancy of the three fundamental matrices to optimize the camera parameters and the 3-D structure. The main theme of this paper is to give an analytical procedure for computing the positions, orientations, and internal parameters of the three cameras from the three fundamental matrices. The emphasis is on resolving the ambiguity of the solution that results from the sign indeterminacy of the fundamental matrices. Numerical simulations show that imaginary focal lengths are less likely to occur with our three-view method, resulting in higher accuracy than the conventional two-view method. We also test the degeneracy tolerance of our method using endoscopic intestine tract images, for which the camera configuration is almost always nearly degenerate. We demonstrate that our method yields more detailed intestine structures than two-view reconstruction and observe how our three-view reconstruction is refined by bundle adjustment. Our method is expected to broaden medical applications of endoscopic images.
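
The sign indeterminacy mentioned above also appears in the familiar two-view case, where an essential matrix decomposes into four motion candidates and the valid one is selected by a cheirality check; the following OpenCV sketch shows only that standard two-view step, not the paper's analytical three-view procedure.

```python
# Standard textbook step, shown for illustration only: resolve the sign and
# twist ambiguity of an essential matrix by keeping the (R, t) candidate
# that places triangulated points in front of both cameras (cheirality).
import cv2

def resolve_pose(E, pts1, pts2, K):
    """pts1, pts2: (N, 2) matched points in pixel coordinates; K: 3x3 intrinsics."""
    # cv2.recoverPose tests the four decomposition candidates internally.
    n_good, R, t, mask = cv2.recoverPose(E, pts1, pts2, K)
    return R, t, mask.ravel() > 0
```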


Pacific-Rim Symposium on Image and Video Technology | 2013

Initializing 3-D Reconstruction from Three Views Using Three Fundamental Matrices

Yasushi Kanazawa; Yasuyuki Sugaya; Kenichi Kanatani

This paper focuses on initializing 3-D reconstruction from scratch without any prior scene information. Traditionally, this has been done from two-view matching, which is prone to the degeneracy called “imaginary focal lengths.” We overcome this difficulty by using three images, but we do not require three-view matching; all we need is three fundamental matrices separately computed from the image pairs. We exploit the redundancy of the three fundamental matrices to optimize the camera parameters and the 3-D structure. Numerical simulations show that imaginary focal lengths are less likely to occur, resulting in higher accuracy than two-view reconstruction. We also test the degeneracy tolerance of our method using endoscopic intestine tract images, for which the camera configuration is almost always nearly degenerate. We demonstrate that our method yields more detailed intestine structures than two-view reconstruction and hence leads to new medical applications of endoscopic image analysis.
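
The only inputs the initialization needs are the three pairwise fundamental matrices; a minimal way to obtain them is sketched below with an assumed ORB/RANSAC pipeline, which is an illustration rather than the estimator used in the paper.

```python
# Minimal sketch of producing the three inputs: one fundamental matrix per
# image pair, each computed independently. The feature pipeline here is an
# assumption for illustration only.
import numpy as np
import cv2

def pairwise_fundamental_matrices(img1, img2, img3):
    imgs = [img1, img2, img3]
    orb = cv2.ORB_create(4000)
    feats = [orb.detectAndCompute(im, None) for im in imgs]
    bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    Fs = {}
    for i, j in [(0, 1), (1, 2), (2, 0)]:
        (kp_i, des_i), (kp_j, des_j) = feats[i], feats[j]
        matches = bf.match(des_i, des_j)
        p_i = np.float64([kp_i[m.queryIdx].pt for m in matches])
        p_j = np.float64([kp_j[m.trainIdx].pt for m in matches])
        F, _ = cv2.findFundamentalMat(p_i, p_j, cv2.FM_RANSAC, 1.0, 0.999)
        Fs[(i, j)] = F
    return Fs   # the three fundamental matrices F12, F23, F31
```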


2016 International Conference on Advanced Informatics: Concepts, Theory and Application (ICAICTA) | 2016

Accurate 3-D reconstruction of sands from UAV image sequence

Ryotaro Matsunaga; Mizuki Hashimoto; Yasushi Kanazawa; Jun Sonoda

This paper proposes an accurate 3-D reconstruction method for nearly planar surfaces, such as sand, from image sequences taken by a UAV. In this setting there are two problems: computing time and the degenerate configuration in two-view reconstruction; the degenerate configuration causes distortion in the reconstructed shape. Our method reduces not only the computing time but also the distortion of the reconstructed shape. We select effective frames using optical flow to reduce computing time and adopt homography-based reconstruction for accuracy. We show the effectiveness of our method with real image examples.
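
The frame-selection step can be sketched as keeping a frame only once the optical-flow displacement from the last kept frame is large enough; the Lucas-Kanade tracker and the displacement threshold below are assumptions for illustration, not the paper's exact criterion.

```python
# Illustrative sketch: select "effective" frames by median sparse optical
# flow from the last selected frame. Threshold and tracker are assumptions.
import cv2
import numpy as np

def select_keyframes(gray_frames, min_disp=30.0):
    keyframes = [0]
    ref = gray_frames[0]
    ref_pts = cv2.goodFeaturesToTrack(ref, maxCorners=500,
                                      qualityLevel=0.01, minDistance=8)
    for i, frame in enumerate(gray_frames[1:], start=1):
        cur_pts, status, _ = cv2.calcOpticalFlowPyrLK(ref, frame, ref_pts, None)
        ok = status.ravel() == 1
        if ok.sum() < 20:
            continue
        disp = np.linalg.norm((cur_pts - ref_pts)[ok].reshape(-1, 2), axis=1)
        if np.median(disp) > min_disp:       # enough parallax: keep the frame
            keyframes.append(i)
            ref = frame
            ref_pts = cv2.goodFeaturesToTrack(ref, maxCorners=500,
                                              qualityLevel=0.01, minDistance=8)
    return keyframes
```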


Asian Conference on Pattern Recognition | 2013

Image Matching for Repetitive Patterns by Clustering and Transforming in Feature Space

Yoshiki Tanno; Yasushi Kanazawa

We propose a method for image matching in scenes that include repetitive patterns such as buildings and walls. By introducing a feature space for the elements of the repetitive patterns, we can not only detect the repetitive pattern regions in each image but also find correspondences between the two images. In the detection step we perform geometric clustering in the feature space using the geometric AIC; in the matching step we obtain correspondences by finding a proper transformation between the corresponding clusters of the two feature spaces. We show the effectiveness of our method with experimental results on simulated and real images.
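
As a loose illustration of clustering repetitive-pattern elements in a feature space with model selection, the sketch below uses a Gaussian mixture with BIC as a stand-in for the geometric AIC used in the paper; the substitution and all parameters are assumptions.

```python
# Illustrative sketch only: cluster element descriptors with the cluster
# count chosen by a model-selection criterion (BIC as a stand-in for the
# geometric AIC used in the paper).
import numpy as np
from sklearn.mixture import GaussianMixture

def cluster_elements(features, max_k=8, seed=0):
    """features: (N, D) descriptors of detected repetitive-pattern elements."""
    best_k, best_bic, best_labels = 1, np.inf, np.zeros(len(features), int)
    for k in range(1, max_k + 1):
        gm = GaussianMixture(n_components=k, random_state=seed).fit(features)
        bic = gm.bic(features)
        if bic < best_bic:
            best_k, best_bic, best_labels = k, bic, gm.predict(features)
    return best_k, best_labels
```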


IPSJ Transactions on Computer Vision and Applications | 2013

Color Image Enhancement for Dichromats by Additive Image Noise

Kakeru Wakimoto; Yasushi Kanazawa; Naoya Ohta

We present a method for enhancing the color recognition ability of dichromats. Whereas trichromats (people with normal color vision) perceive colors throughout a 3-D color space, dichromats perceive only the colors on a degenerate 2-D subspace of it. Our method compensates for the information lost along the degenerate direction of the color space with the amount of noise in the image. Dichromats perceive the lost color information as noisy texture, while the original color information seen by trichromats is preserved. Our method is applicable not only to artificial figures such as graphs but also to natural photographs. We show the effectiveness of our method by experiments.
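
One way to read the abstract is that the color component invisible to a dichromat is re-encoded as zero-mean noise whose amplitude follows that component, so it appears as texture to dichromats while the average color seen by trichromats is unchanged. The sketch below illustrates that reading only, with an assumed confusion axis and scaling rather than the paper's calibrated color model.

```python
# Illustrative sketch (our interpretation, not the paper's model): modulate
# zero-mean luminance noise by the magnitude of the color component along
# an assumed dichromat confusion axis.
import numpy as np

def add_encoding_noise(img, confusion_axis=(1.0, -1.0, 0.0),
                       strength=0.15, seed=0):
    """img: float RGB image in [0, 1], shape (H, W, 3)."""
    rng = np.random.default_rng(seed)
    a = np.asarray(confusion_axis, float)
    a /= np.linalg.norm(a)                           # "invisible" direction
    lum = np.ones(3) / np.sqrt(3.0)                  # direction dichromats see
    lost = img @ a                                   # lost color component
    noise = rng.standard_normal(lost.shape)          # zero-mean texture
    out = img + strength * (np.abs(lost) * noise)[..., None] * lum
    return np.clip(out, 0.0, 1.0)
```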


Asian Conference on Computer Vision | 2004

Robust Image Matching Preserving Global Consistency

Yasushi Kanazawa; Kenichi Kanatani

Collaboration


Dive into Yasushi Kanazawa's collaborations.

Top Co-Authors

Hiroshi Kawakami (Toyohashi University of Technology)
Koki Uemura (Toyohashi University of Technology)
Mizuki Hashimoto (Toyohashi University of Technology)
Yasuyuki Sugaya (Toyohashi University of Technology)
Kakeru Wakimoto (Toyohashi University of Technology)
Kouichi Katsurada (Toyohashi University of Technology)