Publication


Featured research published by Ken Sakurada.


computer vision and pattern recognition | 2013

Detecting Changes in 3D Structure of a Scene from Multi-view Images Captured by a Vehicle-Mounted Camera

Ken Sakurada; Takayuki Okatani; Koichiro Deguchi

This paper proposes a method for detecting temporal changes in the three-dimensional structure of an outdoor scene from multi-view images captured at two separate times. For the images, we consider those captured by a camera mounted on a vehicle driving along a city street. The method estimates scene structures probabilistically, not deterministically, and based on these estimates it evaluates the probability of structural changes in the scene, where the inputs are the similarities of local image patches among the multi-view images. The aim of the probabilistic treatment is to maximize the accuracy of change detection; it is motivated by our conjecture that although it is difficult to estimate scene structures deterministically, it should be easier to detect their changes. The proposed method is compared with methods that use multi-view stereo (MVS) to reconstruct the scene structures at the two time points and then differentiate them to detect changes. The experimental results show that the proposed method outperforms such MVS-based methods.
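The core idea of evaluating change probabilistically, rather than differencing two deterministic reconstructions, can be sketched as follows. This is a minimal illustration under our own simplifying assumptions, not the paper's implementation: we assume per-pixel patch-similarity scores over a discretized set of depth hypotheses are already given for each epoch, and read change off as the probability that the two epochs disagree on the underlying depth.

```python
import numpy as np

def change_probability(sim_t1, sim_t2, eps=1e-12):
    """Per-pixel probability of a 3D structural change between two epochs.

    sim_t1, sim_t2: (H, W, D) non-negative patch-similarity scores over
    D discretized depth hypotheses at times t1 and t2 (hypothetical
    stand-ins for multi-view patch similarities).

    Instead of committing to one depth per epoch (as MVS would), we keep
    a full distribution over depth and report the probability that the
    two epochs are NOT explained by a common depth.
    """
    # Turn similarity scores into per-pixel depth distributions.
    p1 = sim_t1 / (sim_t1.sum(axis=2, keepdims=True) + eps)
    p2 = sim_t2 / (sim_t2.sum(axis=2, keepdims=True) + eps)
    # Probability that independently drawn depths coincide,
    # i.e. the structure is unchanged.
    p_same = (p1 * p2).sum(axis=2)
    return 1.0 - p_same


# Toy example: two pixels; one keeps its depth, one moves.
H, W, D = 1, 2, 8
s1 = np.zeros((H, W, D)); s2 = np.zeros((H, W, D))
s1[0, 0, 2] = 1.0; s2[0, 0, 2] = 1.0   # same depth -> unchanged
s1[0, 1, 2] = 1.0; s2[0, 1, 6] = 1.0   # depth moved -> changed
p = change_probability(s1, s2)
```

With sharply peaked depth distributions the unchanged pixel gets a change probability near 0 and the moved one near 1; with ambiguous distributions the output degrades gracefully instead of committing to a wrong reconstruction.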


british machine vision conference | 2015

Change Detection from a Street Image Pair using CNN Features and Superpixel Segmentation

Ken Sakurada; Takayuki Okatani

Figures 1–100 show the results for the scenes of the Panoramic Change Detection Dataset. The rows show, from top to bottom: the input image pair, the ground truth of change detection, the final change detection results, the superpixel segmentation results, the feature distance between each grid cell using features of the pool-5 layer, the feature distance projected onto the superpixel segmentation result of each input image, and the probabilities of the sky and the ground estimated using Geometric Context.
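The step of projecting grid-wise CNN feature distances onto superpixels can be sketched as below. This is an illustrative reconstruction, not the authors' code: we assume dense per-pixel feature maps (e.g. upsampled pool-5 activations) and a superpixel label map are already computed, and simply pool the pairwise feature distance inside each superpixel.

```python
import numpy as np

def superpixel_change_map(feat_a, feat_b, labels):
    """Project grid-wise CNN feature distances onto superpixels.

    feat_a, feat_b: (H, W, C) dense feature maps for the image pair
    (assumed given, e.g. upsampled pool-5 activations).
    labels: (H, W) int superpixel segmentation of one input image.
    Returns an (H, W) map where each superpixel carries the mean
    feature distance of the cells it covers.
    """
    # Cosine-style distance between corresponding feature vectors.
    na = np.linalg.norm(feat_a, axis=2) + 1e-12
    nb = np.linalg.norm(feat_b, axis=2) + 1e-12
    dist = 1.0 - (feat_a * feat_b).sum(axis=2) / (na * nb)
    # Average the distance inside each superpixel, then broadcast back.
    out = np.zeros_like(dist)
    for sp in np.unique(labels):
        mask = labels == sp
        out[mask] = dist[mask].mean()
    return out


# Toy example: top superpixel unchanged, bottom superpixel changed.
feat_a = np.ones((2, 2, 3))
feat_b = np.ones((2, 2, 3)); feat_b[1] = -1.0
labels = np.array([[0, 0], [1, 1]])
cm = superpixel_change_map(feat_a, feat_b, labels)
```

Pooling over superpixels snaps noisy grid-level distances to object boundaries, which is the role the segmentation plays in the pipeline described above.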


Computer Vision and Image Understanding | 2016

Hybrid macro-micro visual analysis for city-scale state estimation

Ken Sakurada; Takayuki Okatani; Kris M. Kitani

Highlights: We estimate large-scale land surface conditions using aerial and street images. These two types of images are captured from orthogonal viewpoints. The aerial image is used to learn the correspondence of land conditions. The street image is used to acquire high-resolution statistics of land conditions.

We address the task of estimating large-scale land surface conditions using overhead aerial (macro-level) images and street-view (micro-level) images. These two types of images are captured from orthogonal viewpoints and have different resolutions, thus conveying very different types of information that can be used in a complementary way. Moreover, their integration is necessary to enable an accurate understanding of changes in natural phenomena over massive city-scale landscapes. The key technical challenge is devising a method to integrate these two disparate types of image data in an effective manner, to leverage the wide coverage capabilities of macro-level images and the detailed resolution of micro-level images. The strategy proposed in this work uses macro-level imaging to learn the extent to which the land condition corresponds between land regions that share similar visual characteristics (e.g., mountains, streets, buildings, rivers), whereas micro-level images are used to acquire high-resolution statistics of land conditions (e.g., the amount of debris on the ground). By combining macro- and micro-level information about regional correspondences and surface conditions, our proposed method is capable of generating detailed estimates of land surface conditions over an entire city.
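The macro-micro strategy described above can be sketched as an appearance-weighted propagation: regions that look alike from the air are assumed to share similar ground conditions, so street-level statistics measured in a few regions are spread to the rest. This is our own minimal sketch; the Gaussian kernel and all parameter choices are illustrative assumptions, not the paper's method.

```python
import numpy as np

def propagate_conditions(aerial_feats, micro_vals, observed, sigma=1.0):
    """Estimate a surface-condition score for every region.

    aerial_feats: (N, F) macro-level appearance features, one per region.
    micro_vals:   (N,) street-level condition measurements (e.g. debris
                  amount); only entries where `observed` is True are valid.
    observed:     (N,) boolean mask of regions covered by street imagery.

    Each unobserved region receives an appearance-weighted average of the
    observed micro-level statistics (Gaussian kernel on aerial-feature
    distance); direct measurements are kept where they exist.
    """
    obs_feats = aerial_feats[observed]
    obs_vals = micro_vals[observed]
    # Pairwise squared feature distances: all regions vs observed ones.
    d2 = ((aerial_feats[:, None, :] - obs_feats[None, :, :]) ** 2).sum(axis=2)
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    est = (w * obs_vals).sum(axis=1) / (w.sum(axis=1) + 1e-12)
    # Keep the direct high-resolution measurement where it exists.
    est[observed] = obs_vals
    return est


# Toy example: regions 0/2 have street coverage; 1/3 look like them
# from the air and should inherit similar condition scores.
feats = np.array([[0.0], [0.1], [5.0], [5.1]])
vals = np.array([1.0, 0.0, 9.0, 0.0])
obs = np.array([True, False, True, False])
est = propagate_conditions(feats, vals, obs)
```

The split of roles matches the abstract: aerial imagery supplies the region-to-region correspondence, street imagery supplies the high-resolution statistics being propagated.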


asian conference on computer vision | 2014

Massive City-Scale Surface Condition Analysis Using Ground and Aerial Imagery

Ken Sakurada; Takayuki Okatani; Kris M. Kitani

Automated visual analysis is an effective method for understanding changes in natural phenomena over massive city-scale landscapes. However, the viewpoint spectrum across which image data can be acquired is extremely wide, ranging from macro-level overhead (aerial) images spanning several kilometers to micro-level front-parallel (street-view) images that might only span a few meters. This work presents a unified framework for robustly integrating image data taken at vastly different viewpoints to generate large-scale estimates of land surface conditions. To validate our approach we attempt to estimate the amount of post-tsunami damage over the entire city of Kamaishi, Japan (over 4 million square meters). Our results show that our approach can efficiently integrate both micro- and macro-level images, along with other forms of metadata, to estimate city-scale phenomena. We evaluate our approach on two modes of land condition analysis, namely, city-scale debris and greenery estimation, to show the ability of our method to generalize to a diverse set of estimation tasks.


ieee/sice international symposium on system integration | 2010

Real-time prediction of fall and collision of tracked vehicle for remote-control support

Ken Sakurada; Shihoko Suzuki; Kazunori Ohno; Eijiro Takeuchi; Satoshi Tadokoro; Akihiko Hata; Naoki Miyahara; Kazuyuki Higashi

This paper describes a new method that predicts falls and collisions in real time in order to support remote control of a tracked vehicle with sub-tracks. A tracked vehicle has a high ability to traverse rough terrain. However, it is difficult for an operator at a remote location to control the vehicle's moving direction and speed. Hence, we propose a new path evaluation system based on the measurement of environmental shapes around the vehicle. In this system, candidate paths are generated from operator inputs and terrain information. To evaluate the traversability of a path, we estimate the pose of the robot along the path and its contact points with the ground. Then, the best combination of translational and rotational velocity is chosen.
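The evaluation of a candidate path can be sketched as follows. This is a deliberately simplified stand-in for the system described above: a unicycle model rolls the pose forward on a heightmap, local terrain slope stands in for the vehicle's predicted tilt, and a large downward elevation jump stands in for a fall edge. The thresholds and the slope-based tilt proxy are our illustrative assumptions, not values from the paper.

```python
import numpy as np

def evaluate_path(heightmap, cell, x, y, yaw, v, omega, dt=0.2, steps=10,
                  max_tilt=np.radians(30), max_drop=0.3):
    """Check one candidate (v, omega) command for fall/collision risk.

    heightmap: (H, W) terrain elevation grid in meters; cell: grid pitch.
    Returns True if the simulated path stays on known, gently sloped
    terrain with no fall edge, False otherwise.
    """
    gy, gx = np.gradient(heightmap, cell)  # terrain slope components
    prev_h = None
    for _ in range(steps):
        # Roll the pose forward with a simple unicycle motion model.
        x += v * np.cos(yaw) * dt
        y += v * np.sin(yaw) * dt
        yaw += omega * dt
        i = int(round(y / cell)); j = int(round(x / cell))
        if not (0 <= i < heightmap.shape[0] and 0 <= j < heightmap.shape[1]):
            return False                      # leaves the known map
        tilt = np.arctan(np.hypot(gx[i, j], gy[i, j]))
        if tilt > max_tilt:
            return False                      # predicted tip-over
        h = heightmap[i, j]
        if prev_h is not None and prev_h - h > max_drop:
            return False                      # fall edge ahead
        prev_h = h
    return True


# Toy check: flat ground is safe, a 1 m cliff across the path is not.
flat = np.zeros((20, 20))
safe = evaluate_path(flat, 0.1, 0.2, 1.0, 0.0, v=0.5, omega=0.0)
cliff = np.zeros((20, 20)); cliff[:, 10:] = -1.0
risky = evaluate_path(cliff, 0.1, 0.2, 1.0, 0.0, v=0.5, omega=0.0)
```

In the real system each candidate (v, omega) pair generated from the operator's input would be scored this way, and the best admissible pair chosen.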


field and service robotics | 2014

Creating Multi-Viewpoint Panoramas of Streets with Sparsely Located Buildings

Takayuki Okatani; Jun Yanagisawa; Daiki Tetsuka; Ken Sakurada; Koichiro Deguchi

This paper presents a method for creating multi-viewpoint panoramas that is particularly targeted at streets with sparsely located buildings. As is known in the literature, it is impossible to create distortion-free panoramas of such scenes, which have a wide range of depths. To overcome this difficulty, our method renders sharp images only for the facades of buildings and the ground surface (e.g., vacant lots and sidewalks) along the target streets; it renders blurry images for other objects in the scene to make their geometric distortion less noticeable while maintaining their presence. To do this, our method first estimates the three-dimensional structure of the target scene using the results obtained by SfM (structure from motion), identifies which category (i.e., facade surface, ground surface, or other objects) each scene point belongs to based on MRF (Markov Random Field) optimization, and creates panoramic images of the scene by mosaicing the images of the three categories. The blurry images of objects are generated by a technique similar to the digital refocusing of light-field photography. We present several panoramic images created by our method for streets in the tsunami-devastated areas along the northeastern coastline of Japan, damaged by the Great East Japan Earthquake of March 11, 2011.
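The MRF labeling step (assigning each scene point to facade, ground, or other) can be sketched with iterated conditional modes (ICM) over a chain of points. ICM is our simple stand-in for the paper's MRF optimization, and the unary costs here are assumed to come from geometric cues such as point height and orientation; both choices are illustrative.

```python
import numpy as np

def icm_labels(unary, pairwise_weight=1.0, iters=10):
    """Assign one of K labels to each scene point along a scan line.

    unary: (N, K) cost of giving each label (e.g. facade / ground /
    other) to each point. Neighbouring points are encouraged to take
    the same label via a Potts penalty; ICM greedily minimizes the
    resulting energy, one point at a time.
    """
    n, k = unary.shape
    labels = unary.argmin(axis=1)          # start from the unary optimum
    for _ in range(iters):
        for i in range(n):
            cost = unary[i].copy()
            # Add the Potts penalty for disagreeing with each neighbour.
            for j in (i - 1, i + 1):
                if 0 <= j < n:
                    cost += pairwise_weight * (np.arange(k) != labels[j])
            labels[i] = cost.argmin()
    return labels


# A noisy point (index 2) weakly prefers the wrong label; the Potts
# smoothing term pulls it back in line with its neighbours.
unary = np.array([[0.0, 1.0], [0.0, 1.0], [0.6, 0.4], [0.0, 1.0], [0.0, 1.0]])
labels = icm_labels(unary)
```

The same smoothing idea, applied over a 2D neighbourhood of scene points, yields the coherent facade/ground/other segmentation the mosaicing step relies on.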


intelligent robots and systems | 2010

Development of motion model and position correction method using terrain information for tracked vehicles with sub-tracks

Ken Sakurada; Eijiro Takeuchi; Kazunori Ohno; Satoshi Tadokoro

Gyro-based odometry is an easy-to-use localization method for tracked vehicles because it uses only internal sensors. However, on account of track-terrain slippage and the transformation caused by changes in sub-track angles, gyro-based odometry for tracked vehicles with sub-tracks has difficulty estimating the exact location of the vehicle. To solve this problem, we propose an estimation method with 6 degrees of freedom (DOF) for determining the position and pose of tracked vehicles using terrain information. (In this study, "position" refers to the robot's position and pose.) In the proposed method, the position is estimated using a particle filter. The subsequent position of each particle is predicted using a motion model that separately considers each contact point of the vehicle with the ground. In addition, each particle is evaluated using terrain and gravity information. Experimental results demonstrate the effectiveness of this method.
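One predict/weight/resample cycle of such a terrain-aided particle filter can be sketched as below. The motion model here is a plain unicycle with additive slip noise, a simplified stand-in for the paper's per-contact-point model, and the height-agreement sensor model with its noise scales is our illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

def terrain_weighted_update(particles, v, yaw_rate, dt, heightmap, cell,
                            sigma_h=0.05):
    """One predict/weight/resample step of a terrain-aided particle filter.

    particles: (N, 4) array of [x, y, yaw, z] pose hypotheses.
    heightmap: (H, W) terrain elevation grid; cell: grid pitch.
    A tracked vehicle rests on the ground, so each particle is weighted
    by how well its height agrees with the terrain beneath it.
    """
    n = len(particles)
    # Predict: apply the commanded motion plus slip noise.
    yaw = particles[:, 2] + (yaw_rate + rng.normal(0, 0.02, n)) * dt
    vv = v + rng.normal(0, 0.05, n)
    x = particles[:, 0] + vv * np.cos(yaw) * dt
    y = particles[:, 1] + vv * np.sin(yaw) * dt
    # Weight: Gaussian agreement between particle height and terrain.
    i = np.clip(np.round(y / cell).astype(int), 0, heightmap.shape[0] - 1)
    j = np.clip(np.round(x / cell).astype(int), 0, heightmap.shape[1] - 1)
    w = np.exp(-0.5 * ((particles[:, 3] - heightmap[i, j]) / sigma_h) ** 2)
    w /= w.sum()
    # Resample proportionally to the weights; snap z onto the terrain.
    idx = rng.choice(n, size=n, p=w)
    return np.stack([x, y, yaw, heightmap[i, j]], axis=1)[idx]


# Toy map: a ramp rising along x. A vehicle that knows it sits at
# height 2.0 should localize near x = 2.0.
cell = 0.1
heightmap = np.tile(np.arange(50) * cell, (5, 1))
parts = np.zeros((500, 4))
parts[:, 0] = rng.uniform(0.0, 4.9, 500)   # unknown x position
parts[:, 1] = 0.2
parts[:, 3] = 2.0                          # known height on the ramp
parts = terrain_weighted_update(parts, 0.0, 0.0, 0.1, heightmap, cell)
```

The terrain weighting does the work that pure gyro-based odometry cannot: slip-corrupted pose hypotheses that disagree with the ground shape are culled at each step.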


ieee/sice international symposium on system integration | 2010

Development of a laser scan method to decrease hidden areas caused by objects like pole at whole 3-D shape measurement

Akihiko Hata; Kazunori Ohno; Eijiro Takeuchi; Satoshi Tadokoro; Ken Sakurada; Naoki Miyahara; Kazuyuki Higashi


The Proceedings of JSME annual Conference on Robotics and Mechatronics (Robomec) | 2010

1A1-D17 Localization using Terrain Information for Tracked Vehicles with Sub-Crawlers and Evaluation of Effectiveness for the Shape of the Land

Ken Sakurada; Eijiro Takeuchi; Kazunori Ohno; Satoshi Tadokoro


The Proceedings of JSME annual Conference on Robotics and Mechatronics (Robomec) | 2009

1A1-F01 Three-dimensional Self-position Estimation of Tracked Vehicles using Terrain Information

Ken Sakurada; Eijiro Takeuchi; Kazunori Ohno; Satoshi Tadokoro

Collaboration


Dive into Ken Sakurada's collaboration.

Top Co-Authors

Kris M. Kitani

Carnegie Mellon University
