
Publication


Featured research published by Kazuhisa Ishimaru.


British Machine Vision Conference | 2014

Real-time Dense Disparity Estimation based on Multi-Path Viterbi for Intelligent Vehicle Applications.

Qian Long; Qiwei Xie; Seiichi Mita; Hossein Tehrani Niknejad; Kazuhisa Ishimaru; Chunzhao Guo

This paper proposes a new real-time stereo matching algorithm paired with an online auto-rectification framework. The algorithm treats the disparities of a stereo image pair as hidden states and estimates them by running a Viterbi process along four bi-directional paths. Structural similarity, a total-variation constraint, and a hierarchical merging strategy are combined with the Viterbi process to improve robustness and accuracy. From the Viterbi results, a convex optimization equation is derived to estimate epipolar-line distortion, and the estimated distortion is used to compensate the Viterbi process online within the auto-rectification framework. Extensive experiments compare the proposed algorithm with other practical state-of-the-art methods for intelligent vehicle applications.
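To illustrate the disparity-as-hidden-state idea, here is a minimal single-path Viterbi over one scan line. The absolute-intensity emission cost and linear smoothness penalty are simplifying assumptions for the sketch, not the paper's cost terms, and the paper runs four bi-directional paths rather than one:

```python
import numpy as np

def viterbi_scanline_disparity(left, right, max_disp=16, smooth=0.1):
    """Estimate per-pixel disparity on one scan line with a Viterbi pass.

    States are candidate disparities; the emission cost is the absolute
    intensity difference, and disparity jumps pay a smoothness penalty.
    """
    w = len(left)
    # Emission cost: |L(x) - R(x - d)|, huge when the match is invalid.
    cost = np.full((w, max_disp), 1e9)
    for x in range(w):
        for d in range(min(max_disp, x + 1)):
            cost[x, d] = abs(float(left[x]) - float(right[x - d]))

    # Forward pass: accumulate the cheapest path cost to each (x, d) state.
    acc = cost.copy()
    back = np.zeros((w, max_disp), dtype=int)
    disp_idx = np.arange(max_disp)
    for x in range(1, w):
        # Transition penalty proportional to the disparity jump.
        trans = acc[x - 1][None, :] + smooth * np.abs(disp_idx[:, None] - disp_idx[None, :])
        back[x] = np.argmin(trans, axis=1)
        acc[x] += np.min(trans, axis=1)

    # Backtrack the optimal disparity path.
    path = np.zeros(w, dtype=int)
    path[-1] = int(np.argmin(acc[-1]))
    for x in range(w - 2, -1, -1):
        path[x] = back[x + 1, path[x + 1]]
    return path
```

On a synthetic pair where the left line is the right line shifted by three pixels, the recovered path settles at disparity 3 away from the border.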


IEEE Intelligent Vehicles Symposium | 2017

3D point cloud map based vehicle localization using stereo camera

Yuquan Xu; Vijay John; Seiichi Mita; Hossein Tehrani; Kazuhisa Ishimaru; Sakiko Nishino

Driverless automobiles have become a near reality and will soon be widely available. For autonomous navigation, a vehicle must localize itself precisely within a pre-defined map. In this paper, we propose a novel algorithm for three-dimensional (3D) point cloud (PCL) map-based localization using a stereo camera. The 3D point cloud map consists of dense 3D geometric information and surface-reflectivity intensity measures generated by a mapping system based on a 3D light detection and ranging (LIDAR) scanner. Although several LIDAR-based localization algorithms have been proposed, we present an algorithm of comparable centimeter-level accuracy that uses a much cheaper commodity stereo camera. Specifically, at each candidate position we transform the 3D points from the world coordinate system to the camera coordinate system and synthesize virtual depth and intensity images from the 3D PCL map. We localize the ego vehicle by estimating the transformation between the world and vehicle coordinates in each frame, matching these virtual images against the stereo depth and intensity images. In the experiments, we show that although the 3D map was generated three years earlier, the proposed algorithm still produces reliable localization results even in many difficult cases, such as shadows, dynamic objects, new lane markers, and nighttime.
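A toy sketch of the matching step: render a virtual depth image from map points for each candidate pose and score it against the observed stereo depth. The sketch assumes a translation-only pose and depth images only (the paper also estimates rotation and uses intensity images); all function names are hypothetical:

```python
import numpy as np

def render_virtual_depth(points, pose_t, K, shape):
    """Project map points (world frame) into a virtual depth image for a
    candidate camera translation `pose_t` (rotation omitted for brevity)."""
    h, w = shape
    depth = np.full((h, w), np.inf)
    pc = points - pose_t                 # world -> camera (translation only)
    z = pc[:, 2]
    valid = z > 0.1
    uv = (K @ (pc[valid] / z[valid, None]).T).T  # pinhole projection [u, v, 1]
    for (u, v, _), d in zip(uv, z[valid]):
        ui, vi = int(round(u)), int(round(v))
        if 0 <= vi < h and 0 <= ui < w:
            depth[vi, ui] = min(depth[vi, ui], d)  # keep the nearest surface
    return depth

def localize(points, observed_depth, candidates, K):
    """Pick the candidate translation whose virtual depth image best matches
    the observed stereo depth (sum of absolute depth differences)."""
    def score(t):
        virt = render_virtual_depth(points, t, K, observed_depth.shape)
        both = np.isfinite(virt) & np.isfinite(observed_depth)
        return np.abs(virt[both] - observed_depth[both]).sum() if both.any() else np.inf
    return min(candidates, key=score)
```

In practice the candidate set would come from odometry plus a local search rather than a fixed list.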


IEEE Intelligent Vehicles Symposium | 2016

Real-time stereo vision system at nighttime with noise reduction using simplified non-local matching cost

Yuquan Xu; Qian Long; Seiichi Mita; Hossein Tehrani; Kazuhisa Ishimaru; Noriaki Shirai

Reconstructing depth information from a 3D scene using stereo vision is a key element in the development of advanced driver assistance systems. We previously proposed a real-time stereo matching method based on a multi-path Viterbi process that outperforms the well-known semi-global block matching (SGBM) algorithm in both disparity accuracy and density. In this paper, we extend that framework to estimate depth in challenging environments such as nighttime. Depth estimation at nighttime is generally difficult because night images are dark and noisy, making the estimated depth inaccurate. In the proposed work, we modify the non-local means filter and propose a new non-local cost function that combines noise reduction and stereo vision within a single framework. We evaluate the proposed algorithm on both natural and synthetic datasets and show that it significantly improves the quality of the stereo results in low-light conditions. Moreover, the proposed method can be implemented in real time for autonomous driving applications.
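A minimal sketch of a non-local matching cost along one scan line: instead of comparing single noisy pixels, nearby pixels contribute matching differences weighted by the similarity of their neighborhoods, so denoising and matching share one cost. Patch size, search window, and the Gaussian weight are illustrative choices, not the paper's:

```python
import numpy as np

def nl_cost(left, right, x, d, search=5, h=8.0):
    """Simplified non-local matching cost for disparity d at pixel x.
    Pixels j near x contribute |L(j) - R(j - d)| weighted by how similar
    their 3-pixel neighborhoods are to that of x, averaging out noise."""
    def patch(img, c):
        return img[c - 1:c + 2].astype(float)
    num, den = 0.0, 0.0
    for j in range(max(1 + d, x - search), min(len(left) - 1, x + search + 1)):
        # Non-local-means style weight from neighborhood similarity.
        wgt = np.exp(-np.sum((patch(left, x) - patch(left, j)) ** 2) / (h * h))
        num += wgt * abs(float(left[j]) - float(right[j - d]))
        den += wgt
    return num / den if den else np.inf
```

On a noisy sinusoid shifted by two pixels, the cost at the true disparity stays well below the cost at a wrong one despite per-pixel noise.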


International Conference on Intelligent Transportation Systems | 2014

A real-time dense stereo matching method for critical environment sensing in autonomous driving

Qian Long; Qiwei Xie; Seiichi Mita; Kazuhisa Ishimaru; Noriaki Shirai

This paper proposes a novel real-time stereo matching algorithm paired with an online auto-rectification framework to solve environment sensing problems in autonomous driving, such as road, curb, and small-object detection. The algorithm treats the disparities of stereo images as hidden states and estimates them by running a Viterbi process along four bi-directional paths. Structural similarity, a total-variation constraint, and a hierarchical merging strategy are combined with the Viterbi process to improve robustness and accuracy. From the Viterbi results, a convex optimization equation is derived to estimate epipolar-line distortion, and the estimated distortion is used to compensate the Viterbi process online within the auto-rectification framework. The stereo matching result is used to generate a histogram map, to which a Radon transform and a Viterbi process are applied to detect roads and curbs. Disparities near the road are mapped to 3D space to detect small objects on the road. Real-world experiments show that this method helps driving vehicles sense the surrounding 3D environment robustly.
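The histogram map fed to the line detector can be sketched as a row-wise disparity histogram (a v-disparity map): a planar road projects to a straight line in this row/disparity histogram, which a Radon or Hough transform can then extract. This is a simplified stand-in for the paper's construction:

```python
import numpy as np

def v_disparity(disp, max_disp):
    """Bin the disparities of each image row into a (rows x max_disp)
    histogram map. A flat road forms a straight line in this map."""
    h, _ = disp.shape
    hist = np.zeros((h, max_disp), dtype=int)
    for v in range(h):
        row = disp[v]
        vals = row[(row >= 0) & (row < max_disp)]
        np.add.at(hist[v], vals.astype(int), 1)  # unbuffered bin counts
    return hist
```

For a synthetic ground plane whose disparity grows with the image row, each row's histogram peaks at the plane's disparity for that row.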


British Machine Vision Conference | 2013

High Frequency 3D Reconstruction from Unsynchronized Multiple Cameras

Yumi Kakumu; Fumihiko Sakaue; Jun Sato; Kazuhisa Ishimaru; Masayuki Imanishi

Stereo reconstruction generally requires image correspondence, such as point and line correspondences, across multiple images. Cameras need to be synchronized to obtain corresponding points of time-varying shapes. However, the image information obtained from synchronized multiple cameras is redundant and limited in resolution. In this paper, we show that by using a set of unsynchronized cameras instead of synchronized ones, we can obtain much more information on 3D motion and reconstruct higher-frequency 3D motions than with standard synchronized cameras. As a result, we attain super-resolution 3D reconstruction from unsynchronized cameras. In the standard reconstruction with multiple cameras, the cameras are synchronized and observe the same set of M sequential points in 3D space, as shown in Fig. 1(a). Thus, the maximum frequency f_S of 3D motion that can be recovered from the camera images is f_S < M/2. If we observe the same 3D motion using K unsynchronized cameras, we observe K×M different points in 3D space, as shown in Fig. 1(b). As a result, all 2KM observations from the K cameras are independent of one another, unlike those from synchronized cameras. Thus, we can reconstruct up to 2KM/3 3D points, and the maximum frequency f_U of recoverable 3D motion is f_U < KM/3. We therefore find that unsynchronized cameras can reconstruct 3D motion of up to 2K/3 times higher frequency than synchronized cameras.
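The counting argument can be written compactly (reconstructed from the inline expressions above; each 3D point has three unknown coordinates, so 2KM independent observations determine at most 2KM/3 points):

```latex
f_S < \tfrac{1}{2}M, \qquad
f_U < \tfrac{1}{3}KM, \qquad
\frac{f_U}{f_S} = \frac{\tfrac{1}{3}KM}{\tfrac{1}{2}M} = \tfrac{2}{3}K .
```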


International Conference on Computer Vision Theory and Applications | 2017

Real-time Stereo Vision System at Tunnel.

Yuquan Xu; Seiichi Mita; Hossein Tehrani Niknejad; Kazuhisa Ishimaru

Although stereo vision has made great progress in recent years, few works estimate disparity for challenging scenes such as tunnels. In such scenes, owing to low-light conditions and fast camera movement, the images are severely degraded by motion blur, which limits the performance of standard stereo vision algorithms. To address this issue, we combine stereo vision with image deblurring to improve the disparity result. The proposed algorithm consists of three phases: PSF estimation, image restoration, and stereo vision. In the PSF estimation phase, we introduce three methods for estimating the blur kernel: an optical-flow-based algorithm, a cepstrum-based algorithm, and a simple constant-kernel algorithm. In the image restoration phase, we propose a fast non-blind image deblurring algorithm to recover the latent image. In the last phase, we propose a multi-scale multi-path Viterbi algorithm to compute the disparity from the deblurred images. The advantages of the proposed algorithm are demonstrated by experiments with data sequences acquired in tunnels.
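A minimal non-blind restoration step in the spirit of the second phase, implemented here as plain Wiener deconvolution (the paper's fast algorithm may differ); the noise-to-signal ratio `snr` is an assumed regularizer:

```python
import numpy as np

def wiener_deblur(blurred, kernel, snr=0.01):
    """Non-blind deblurring via Wiener deconvolution in the Fourier domain:
    divide by the blur kernel's spectrum, regularized by an assumed
    noise-to-signal ratio so near-zero frequencies do not blow up."""
    h, w = blurred.shape
    # Pad the kernel to image size and shift its center to the origin.
    kh, kw = kernel.shape
    kpad = np.zeros((h, w))
    kpad[:kh, :kw] = kernel
    kpad = np.roll(kpad, (-(kh // 2), -(kw // 2)), axis=(0, 1))
    H = np.fft.fft2(kpad)
    G = np.fft.fft2(blurred)
    F = np.conj(H) / (np.abs(H) ** 2 + snr) * G
    return np.real(np.fft.ifft2(F))
```

Applied to an image circularly blurred with a known horizontal box kernel, the recovery is close to the original when the noise term is small.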


IEEE Intelligent Vehicles Symposium | 2017

Automated driving by monocular camera using deep mixture of experts

Vijay John; Seiichi Mita; Hossein Tehrani; Kazuhisa Ishimaru

In this paper, we propose a real-time vision-based filtering algorithm for steering angle estimation in autonomous driving. A novel scene-based particle filtering algorithm estimates and tracks the steering angle using images obtained from a monocular camera. Highly accurate proposal distributions and likelihoods are modeled for the second-order particle filter, at the scene level, using deep learning. For every road scene, an individual proposal distribution and likelihood model is learned for the corresponding particle filter. The proposal distribution is modeled using a novel long short-term memory network mixture-of-experts regression framework. To facilitate the learning of highly accurate proposal distributions, each road scene is partitioned into straight-driving, left-turning, and right-turning sub-partitions, so that each expert in the regression framework accurately models the expert driver's behavior within a specific partition of the given road scene. Owing to the accuracy of the modeled proposal distributions, the steering angle is robustly tracked even with a limited number of sampled particles. The sampled particles are assigned importance weights using a deep-learning-based likelihood, modeled with a convolutional neural network and an extra-trees regression framework that predicts the steering angle for a given image. We validate the proposed algorithm on multiple sequences, with a detailed parameter analysis and a comparison against several baseline algorithms. Experimental results show that the proposed algorithm robustly tracks steering angles with few particles in real time, even for challenging scenes.
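The predict-weight-resample loop can be sketched with Gaussian stand-ins for the learned models: the paper's LSTM mixture-of-experts proposal and CNN likelihood are replaced here by simple Gaussians around the previous state and a measured angle, and the noise levels are illustrative:

```python
import numpy as np

def particle_filter_step(particles, measured_angle, rng,
                         proposal_std=0.05, meas_std=0.1):
    """One predict-weight-resample step of a particle filter tracking a
    steering angle (learned proposal/likelihood replaced by Gaussians)."""
    # Predict: diffuse particles with the proposal noise.
    particles = particles + rng.normal(0.0, proposal_std, size=particles.shape)
    # Weight: Gaussian likelihood of the measured steering angle.
    w = np.exp(-0.5 * ((particles - measured_angle) / meas_std) ** 2)
    w /= w.sum()
    # Resample proportionally to the importance weights.
    idx = rng.choice(len(particles), size=len(particles), p=w)
    particles = particles[idx]
    return particles, float(particles.mean())
```

Even with 100 particles, the estimate follows a slowly ramping measured angle.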


International Conference on Pattern Recognition | 2016

3D reconstruction under light ray distortion from parametric focal cameras

Satoshi Morinaka; Fumihiko Sakaue; Jun Sato; Kazuhisa Ishimaru; Naoki Kawasaki

In this paper, we propose a new camera model for reconstructing 3D objects under light ray distortion caused by refractive media. The proposed method can reconstruct a 3D scene even when the light rays projected into the cameras are refracted by refractive media such as glass and raindrops. To this end, we represent the light ray projection of multiple cameras using a pair of planes shared by the cameras in the scene. With this model, the intrinsic and extrinsic camera parameters, as well as the properties of the refractive media, can be represented efficiently. Using the newly defined camera model, we propose a method for simultaneously recovering 3D points and camera parameters together with the refractive properties. The experimental results show the efficiency of the proposed camera model and reconstruction method.
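The bending such media apply to each ray follows Snell's law; a small vector-form helper (not from the paper) shows how a ray direction is refracted at an interface:

```python
import numpy as np

def refract(d, n, eta):
    """Refract unit direction d at a surface with unit normal n, for a
    relative refractive index eta = n1 / n2 (vector form of Snell's law).
    Returns None on total internal reflection."""
    d, n = np.asarray(d, float), np.asarray(n, float)
    cos_i = -np.dot(n, d)                  # incidence angle cosine
    sin2_t = eta ** 2 * (1.0 - cos_i ** 2)  # Snell: sin_t = eta * sin_i
    if sin2_t > 1.0:
        return None                         # total internal reflection
    return eta * d + (eta * cos_i - np.sqrt(1.0 - sin2_t)) * n
```

A ray at normal incidence passes straight through, while an oblique ray bends toward the normal when entering a denser medium.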


International Conference on Computer Vision Theory and Applications | 2015

Accurate 3D Reconstruction from Naturally Swaying Cameras

Yasunori Nishioka; Fumihiko Sakaue; Jun Sato; Kazuhisa Ishimaru; Naoki Kawasaki; Noriaki Shirai

In this paper, we propose a method for accurately reconstructing 3D structure from images taken by unintentionally swaying cameras. In this method, image super-resolution and 3D reconstruction are achieved simultaneously using a series of motion-blurred images. In addition, we utilize coded exposure to achieve stable super-resolution. Furthermore, we show an efficient stereo camera arrangement for stable 3D reconstruction from swaying cameras. The experimental results show that the proposed method reconstructs 3D shape very accurately.
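Why coded exposure stabilizes the inversion can be seen in the blur spectrum: a fluttered shutter keeps all frequencies of the motion blur away from zero, while a plain box shutter of the same length annihilates some. The binary code below is an arbitrary example, not the paper's:

```python
import numpy as np

def min_spectrum_magnitude(shutter, n=64):
    """Smallest DFT magnitude of the length-n blur kernel induced by a
    shutter pattern; larger minima make the motion blur easier to invert."""
    psf = np.zeros(n)
    psf[:len(shutter)] = np.asarray(shutter, float)
    psf /= psf.sum()
    return float(np.abs(np.fft.fft(psf)).min())

# A plain 16-sample box shutter versus an arbitrary 16-sample binary code.
box = [1] * 16
code = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 1, 0]
```

The box shutter's spectrum touches zero (those frequencies are unrecoverable), whereas the coded shutter's spectrum stays strictly positive.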


Archive | 2015

Apparatus for detecting boundary line of vehicle lane and method thereof

Kazuhisa Ishimaru; Naoki Kawasaki; Shunsuke Suzuki; Tetsuya Takafuji

Collaboration


Kazuhisa Ishimaru's frequent collaborators and their affiliations.

Top Co-Authors

Seiichi Mita
Toyota Technological Institute

Jun Sato
Nagoya Institute of Technology

Fumihiko Sakaue
Nagoya Institute of Technology

Masayuki Imanishi
Nagoya Institute of Technology