Publications


Featured research published by Koichiro Yamaguchi.


European Conference on Computer Vision | 2014

Efficient Joint Segmentation, Occlusion Labeling, Stereo and Flow Estimation

Koichiro Yamaguchi; David A. McAllester; Raquel Urtasun

In this paper we propose a slanted plane model for jointly recovering an image segmentation, a dense depth estimate, and boundary labels (such as occlusion boundaries) from a static scene given two frames of a stereo pair captured from a moving vehicle. Towards this goal we propose a new optimization algorithm for our SLIC-like objective which preserves connectedness of image segments and exploits shape regularization in the form of boundary length. We demonstrate the performance of our approach in the challenging stereo and flow KITTI benchmarks and show superior results to the state-of-the-art. Importantly, these results can be achieved an order of magnitude faster than competing approaches.
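The core building block of a slanted-plane model — assigning each segment a disparity plane d = a·x + b·y + c — can be sketched as a per-segment least-squares fit. This is a simplified illustration, not the paper's joint SLIC-like optimization, and the function name is our own:

```python
import numpy as np

def fit_slanted_plane(xs, ys, disps):
    # Least-squares fit of d = a*x + b*y + c over one segment's pixels.
    A = np.column_stack([xs, ys, np.ones_like(xs)])
    coeffs, *_ = np.linalg.lstsq(A, disps, rcond=None)
    return coeffs  # (a, b, c)

# Synthetic segment whose disparities lie exactly on a plane.
xs = np.array([0.0, 1.0, 2.0, 0.0, 1.0, 2.0])
ys = np.array([0.0, 0.0, 0.0, 1.0, 1.0, 1.0])
disps = 0.5 * xs - 0.25 * ys + 10.0
a, b, c = fit_slanted_plane(xs, ys, disps)
```

In the full model, such plane fits are coupled with boundary-ownership and junction potentials rather than solved independently per segment.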


Computer Vision and Pattern Recognition | 2013

Robust Monocular Epipolar Flow Estimation

Koichiro Yamaguchi; David A. McAllester; Raquel Urtasun

We consider the problem of computing optical flow in monocular video taken from a moving vehicle. In this setting, the vast majority of image flow is due to the vehicle's ego-motion. We propose to take advantage of this fact and estimate flow along the epipolar lines of the ego-motion. Towards this goal, we derive a slanted-plane MRF model which explicitly reasons about the ordering of planes and their physical validity at junctions. Furthermore, we present a bottom-up grouping algorithm which produces over-segmentations that respect flow boundaries. We demonstrate the effectiveness of our approach in the challenging KITTI flow benchmark [11], achieving half the error of the best competing general flow algorithm and one third of the error of the best epipolar flow algorithm.
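The epipolar constraint that this approach exploits can be sketched in a few lines: given the fundamental matrix of the ego-motion, each pixel's flow must end on its epipolar line in the next frame. The fundamental matrix below (pure sideways translation) is an illustrative example, not from the paper:

```python
import numpy as np

def epipolar_line(F, p):
    # Epipolar line l = F @ [x, y, 1] in the second image, scaled so
    # that point-line distances come out in pixels.
    l = F @ np.array([p[0], p[1], 1.0])
    return l / np.linalg.norm(l[:2])

def epipolar_distance(q, l):
    # Distance of a candidate match q from the (normalized) epipolar line.
    return abs(l[0] * q[0] + l[1] * q[1] + l[2])

# Fundamental matrix for a pure sideways translation: epipolar lines
# are horizontal, so flow is constrained to stay on the same image row.
F = np.array([[0.0, 0.0, 0.0],
              [0.0, 0.0, -1.0],
              [0.0, 1.0, 0.0]])
l = epipolar_line(F, (100.0, 40.0))
```

Restricting the flow search to these lines turns a 2D matching problem into a 1D one, which is what makes the epipolar-flow formulation attractive.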


International Conference on Pattern Recognition | 2006

Vehicle Ego-Motion Estimation and Moving Object Detection using a Monocular Camera

Koichiro Yamaguchi; Takeo Kato; Yoshiki Ninomiya

This paper proposes a method for estimating the ego-motion of the vehicle and for detecting moving objects on roads by using a vehicle-mounted monocular camera. There are two problems in ego-motion estimation. Firstly, a typical road scene contains moving objects such as other vehicles. Secondly, roads display fewer feature points compared to the number associated with background structures. In our approach, ego-motion is estimated from the correspondences of feature points extracted from various regions other than those in which objects are moving. After estimating the ego-motion, the three-dimensional structure of the scene is reconstructed and any moving objects are detected. In our experiments, it has been shown that the proposed method is able to detect moving objects such as vehicles and pedestrians.
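The estimation step above rests on recovering epipolar geometry from static-scene feature correspondences. A minimal sketch of that building block is the linear eight-point algorithm; the paper additionally excludes moving-object regions and handles the scarcity of road features, which this illustration omits:

```python
import numpy as np

def eight_point(p1, p2):
    # Linear eight-point estimate of the fundamental matrix from >= 8
    # static-scene correspondences; the rank-2 constraint is enforced
    # afterwards via SVD.
    A = np.array([[x2 * x1, x2 * y1, x2, y2 * x1, y2 * y1, y2, x1, y1, 1.0]
                  for (x1, y1), (x2, y2) in zip(p1, p2)])
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)
    U, S, Vt2 = np.linalg.svd(F)
    S[2] = 0.0
    F = U @ np.diag(S) @ Vt2
    return F / np.linalg.norm(F)

# Noise-free correspondences from a translating camera and random
# (non-coplanar) 3D points, in normalized image coordinates.
rng = np.random.default_rng(0)
X1 = rng.uniform(-1.0, 1.0, size=(12, 3)) + np.array([0.0, 0.0, 5.0])
X2 = X1 + np.array([0.5, 0.1, 0.0])  # pure camera translation
p1 = X1[:, :2] / X1[:, 2:3]
p2 = X2[:, :2] / X2[:, 2:3]
F = eight_point(p1, p2)
residuals = [abs(np.array([x2, y2, 1.0]) @ F @ np.array([x1, y1, 1.0]))
             for (x1, y1), (x2, y2) in zip(p1, p2)]
```

In practice such a linear estimate would be wrapped in a robust loop (e.g. RANSAC) so that correspondences on moving objects do not corrupt the ego-motion.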


European Conference on Computer Vision | 2012

Continuous Markov Random Fields for Robust Stereo Estimation

Koichiro Yamaguchi; Tamir Hazan; David A. McAllester; Raquel Urtasun

In this paper we present a novel slanted-plane model which reasons jointly about occlusion boundaries as well as depth. We formulate the problem as one of inference in a hybrid MRF composed of both continuous (i.e., slanted 3D planes) and discrete (i.e., occlusion boundaries) random variables. This allows us to define potentials encoding the ownership of the pixels that compose the boundary between segments, as well as potentials encoding which junctions are physically possible. Our approach outperforms the state-of-the-art on Middlebury high-resolution imagery [1] as well as in the more challenging KITTI dataset [2], while being more efficient than existing slanted-plane MRF methods, taking on average 2 minutes to perform inference on high-resolution imagery.


IEEE Transactions on Intelligent Transportation Systems | 2011

Multiband Image Segmentation and Object Recognition for Understanding Road Scenes

Yousun Kang; Koichiro Yamaguchi; Takashi Naito; Yoshiki Ninomiya

This paper presents a novel method for semantic segmentation and object recognition in a road scene using a hierarchical bag-of-textons method. Current driving-assistance systems rely on multiple vehicle-mounted cameras to perceive the road environment. The proposed method relies on integrated color and near-infrared images and uses the hierarchical bag-of-textons method to recognize the spatial configuration of objects and extract contextual information from the background. The histogram of the hierarchical bag-of-textons is concatenated to textons extracted from a multiscale grid window to automatically learn the spatial context for semantic segmentation. Experimental results show that the proposed method has better segmentation accuracy than the conventional bag-of-textons method. By integrating it with other scene interpretation systems, the proposed system can be used to understand road scenes for vehicle environment perception.
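The descriptor underlying any bag-of-textons method is a normalized histogram of texton ids over a region. A minimal sketch follows; the paper builds a hierarchical, multi-scale version over integrated color and near-infrared channels, which this single-level illustration does not capture:

```python
import numpy as np

def texton_histogram(texton_map, n_textons):
    # Normalized histogram of texton ids over an image region: each pixel
    # has already been assigned the id of its nearest cluster center
    # (texton) in filter-response space.
    hist = np.bincount(texton_map.ravel(), minlength=n_textons).astype(float)
    return hist / hist.sum()

# A 2x2 region whose pixels were assigned textons 0 and 1 (of 3).
h = texton_histogram(np.array([[0, 1], [1, 1]]), 3)
```

Concatenating such histograms computed over a multi-scale grid of windows is what lets the classifier learn spatial context rather than bare texture statistics.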


IEEE Intelligent Vehicles Symposium | 2006

Moving Obstacle Detection using Monocular Vision

Koichiro Yamaguchi; Takeo Kato; Yoshiki Ninomiya

This paper proposes a method for detecting moving obstacles on roads by using a vehicle-mounted monocular camera. To detect various moving obstacles, such as vehicles and pedestrians, the ego-motion of the vehicle is initially estimated from images captured by the camera. There are two problems in ego-motion estimation. Firstly, a typical road scene contains moving obstacles. This causes false estimation of the ego-motion. Secondly, roads possess fewer features when compared to the number associated with background structures. This reduces the accuracy of ego-motion estimation. In our approach, the ego-motion is estimated from the correspondences of dispersed feature points extracted from various regions other than those that contain moving obstacles. After estimating the ego-motion, any moving obstacles are detected by tracking the feature points over consecutive frames. In our experiments, it has been shown that the proposed method is able to detect moving obstacles.
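Once ego-motion is known, the detection step amounts to checking whether each tracked feature point is consistent with that motion: static points must satisfy the epipolar constraint, independently moving ones generally do not. A hedged sketch of that test, with an illustrative fundamental matrix and pixel threshold that are not from the paper:

```python
import numpy as np

def flag_moving_points(F, pts1, pts2, thresh=1.0):
    # Flag tracked feature points whose match violates the epipolar
    # constraint of the estimated ego-motion; the pixel threshold is an
    # illustrative choice, not the paper's.
    flags = []
    for (x1, y1), (x2, y2) in zip(pts1, pts2):
        l = F @ np.array([x1, y1, 1.0])
        l = l / np.linalg.norm(l[:2])
        flags.append(abs(l[0] * x2 + l[1] * y2 + l[2]) > thresh)
    return flags

# Ego-motion here is a pure sideways translation, so static points stay
# on their image row while independently moving points leave it.
F = np.array([[0.0, 0.0, 0.0],
              [0.0, 0.0, -1.0],
              [0.0, 1.0, 0.0]])
pts1 = [(100.0, 50.0), (200.0, 80.0)]
pts2 = [(110.0, 50.0), (195.0, 95.0)]  # second track drifts off its row
flags = flag_moving_points(F, pts1, pts2)
```

Aggregating flagged points over consecutive frames, rather than trusting a single-frame test, is what makes such a detector robust to tracking noise.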


International Conference on Pattern Recognition | 2008

Road Region Estimation Using a Sequence of Monocular Images

Koichiro Yamaguchi; Akihiro Watanabe; Takashi Naito; Yoshiki Ninomiya

In advanced driving assistance systems, it is important to be able to detect the region covered by the road in the images. This paper proposes a method for estimating the road region in images captured by a vehicle-mounted monocular camera. Our proposed method first estimates all of the relevant parameters for the camera motion and the 3D road plane from correspondence points between successive images. By calculating a homography matrix from the estimated camera motion and the estimated road plane parameters, and then warping the image from the previous frame, the road region can be determined. To achieve robustness in various road scenes, our method selects the threshold for determining the road region adaptively and incorporates the assumption of a simple road region boundary. In our experiments, it has been shown that the proposed method is able to estimate the road region in real road environments.
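The homography in question follows directly from the estimated motion and plane: for camera motion X2 = R·X1 + t and a road plane n·X = d in the first camera's frame, pixels on the plane map as H = K(R + t·nᵀ/d)K⁻¹. A minimal sketch with illustrative intrinsics and motion values (not from the paper):

```python
import numpy as np

def plane_homography(K, R, t, n, d):
    # Homography mapping pixels on the plane n^T X = d (in the first
    # camera's frame) into the second view, for camera motion X2 = R X1 + t.
    H = K @ (R + np.outer(t, n) / d) @ np.linalg.inv(K)
    return H / H[2, 2]

# Road plane z = 5 m, camera translating 0.2 m sideways; the intrinsics
# are illustrative values.
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
H = plane_homography(K, np.eye(3), np.array([0.2, 0.0, 0.0]),
                     np.array([0.0, 0.0, 1.0]), 5.0)
p1 = np.array([420.0, 290.0, 1.0])  # projection of road point (1, 0.5, 5)
p2 = H @ p1
p2 = p2 / p2[2]
```

Warping the previous frame with H and comparing it to the current frame makes road pixels agree and off-plane pixels disagree, which is the signal the adaptive threshold then segments.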


Asian Conference on Computer Vision | 2009

Pedestrian Recognition Using Second-Order HOG Feature

Hui Cao; Koichiro Yamaguchi; Takashi Naito; Yoshiki Ninomiya

Histogram of Oriented Gradients (HOG) is a well-known feature for pedestrian recognition which describes object appearance as local histograms of gradient orientation. However, it is incapable of describing higher-order properties of object appearance. In this paper we present a second-order HOG feature which attempts to capture second-order properties of object appearance by estimating the pairwise relationships among spatially neighboring components of the HOG feature. In our preliminary experiments, we found that using a harmonic-mean or min function to measure the pairwise relationship gives satisfactory results. We demonstrate that the proposed second-order HOG feature can significantly improve on the HOG feature on several pedestrian datasets, and that it is also competitive with other second-order features including GLAC and CoHOG.
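The pairwise measure itself is simple: for each pair of HOG components, compute their harmonic mean (or minimum). A minimal sketch, where the choice of which components count as "spatial neighbors" is an illustrative assumption:

```python
import numpy as np

def second_order_hog(hog, pairs):
    # Pairwise harmonic mean of HOG components -- one of the pairwise
    # measures the paper reports working well. `pairs` lists the index
    # pairs treated as spatial neighbors (an illustrative choice here).
    out = []
    for i, j in pairs:
        a, b = hog[i], hog[j]
        out.append(0.0 if a + b == 0.0 else 2.0 * a * b / (a + b))
    return np.array(out)

feats = second_order_hog(np.array([0.2, 0.4, 0.0]), [(0, 1), (1, 2)])
```

Because the harmonic mean is small whenever either component is small, each second-order feature fires only when both neighboring gradient bins are active together, which is the co-occurrence signal plain HOG cannot express.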


IPSJ Transactions on Computer Vision and Applications | 2009

Texture Segmentation of Road Environment Scene Using SfM Module and HLAC Features

Yousun Kang; Koichiro Yamaguchi; Takashi Naito; Yoshiki Ninomiya

This paper presents a new image segmentation method for the recognition of texture-based objects in a road environment scene. Using the proposed method, we can classify texture-based objects three-dimensionally using the SfM (Structure from Motion) module and HLAC (Higher-order Local Autocorrelation) features. By estimating the vehicle's ego-motion, the SfM module can reconstruct the three-dimensional structure of the road scene. Texture features of input images are extracted from HLAC functions according to their depth, as obtained using the SfM module. The proposed method can effectively recognize texture-based objects of a road scene by considering their three-dimensional structure in a perspective 2D image. Experimental results show that the proposed method can not only effectively classify the texture patterns of structures in a 2D road scene, but also represent classified texture patterns as three-dimensional structures. The proposed system can serve as the basis of a three-dimensional scene-understanding system for vehicle environment perception.
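HLAC features are sums, over the image, of products of pixel values at fixed offset patterns within a local window. A minimal sketch using a small subset of the standard 25 binary 3x3 patterns (the subset shown is illustrative):

```python
import numpy as np

def hlac_features(img, masks):
    # Higher-order Local Autocorrelation: each mask is a set of pixel
    # offsets inside a 3x3 window whose product is summed over the image.
    # Only a small subset of the standard 25 binary patterns is shown.
    h, w = img.shape
    feats = []
    for offsets in masks:
        acc = 0.0
        for y in range(1, h - 1):
            for x in range(1, w - 1):
                prod = 1.0
                for dy, dx in offsets:
                    prod *= img[y + dy, x + dx]
                acc += prod
        feats.append(acc)
    return np.array(feats)

masks = [[(0, 0)],                   # 0th order: sum of intensities
         [(0, 0), (0, 1)],           # 1st order: horizontal pair
         [(0, 0), (-1, 0), (1, 0)]]  # 2nd order: vertical triple
feats = hlac_features(np.ones((4, 4)), masks)
```

HLAC features are shift-invariant and additive over regions, which is what makes them convenient to pool separately per depth layer once the SfM module has assigned depths.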


International Conference on Intelligent Transportation Systems | 2014

Improved Lane Detection Based on Past Vehicle Trajectories

Chunzhao Guo; Jun-ichi Meguro; Koichiro Yamaguchi; Kiyosumi Kidono; Yoshiko Kojima

Knowing where the host lane lies is paramount to the effectiveness of many advanced driver assistance systems (ADAS), such as lane keep assist (LKA) and adaptive cruise control (ACC). This paper presents an approach for improving lane detection based on the past trajectories of vehicles. Instead of an expensive high-precision map, we use vehicle trajectory information to provide additional lane-level spatial support of the traffic scene, and combine it with the visual evidence to improve each step of the lane detection procedure, thereby overcoming typical challenges of normal urban streets. Such an approach could serve as an add-on to enhance the performance of existing lane detection systems in terms of both accuracy and robustness. Experimental results in various typical but challenging scenarios show the effectiveness of the proposed system.
