Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Kiyoshi Irie is active.

Publication


Featured research published by Kiyoshi Irie.


Intelligent Robots and Systems (IROS) | 2010

A sensor platform for outdoor navigation using gyro-assisted odometry and roundly-swinging 3D laser scanner

Tomoaki Yoshida; Kiyoshi Irie; Eiji Koyanagi; Masahiro Tomono

This paper proposes a lightweight sensor platform consisting of gyro-assisted odometry and a 3D laser scanner for the localization of human-scale robots. The gyro-assisted odometry provides highly accurate positioning by dead reckoning alone. The 3D laser scanner has a wide field of view and a uniform measuring-point distribution. Robust and computationally inexpensive localization is implemented on the sensor platform using a particle filter on a 2D grid map generated by projecting 3D points onto the ground. The system uses small, low-cost sensors and can be applied to a variety of mobile robots in human-scale environments. Outdoor navigation experiments were performed at the Tsukuba Challenge 2009, an open proving ground for human-scale robots. Our robot successfully navigated the assigned 1-km course in fully autonomous mode multiple times.
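
The grid-map localization idea can be pictured with a short sketch. The following is a minimal illustration, not the authors' code: 3D scan points are projected onto the ground plane to build a 2D occupancy grid, and a particle's weight is the fraction of its transformed scan endpoints that land in occupied cells. The grid size, resolution, and height band are illustrative assumptions.

```python
import numpy as np

RES = 0.1    # meters per cell (assumed)
GRID = 500   # 50 m x 50 m map (assumed)

def build_grid(points_3d):
    """Project 3D points (map frame) onto a 2D occupancy grid."""
    grid = np.zeros((GRID, GRID), dtype=bool)
    # Keep points in a height band relevant to a human-scale robot.
    pts = points_3d[(points_3d[:, 2] > 0.1) & (points_3d[:, 2] < 2.0)]
    ij = np.floor(pts[:, :2] / RES).astype(int) + GRID // 2
    ok = (ij >= 0).all(axis=1) & (ij < GRID).all(axis=1)
    grid[ij[ok, 0], ij[ok, 1]] = True
    return grid

def particle_weight(grid, scan_xy, pose):
    """Score a particle pose (x, y, theta) by map matching."""
    x, y, th = pose
    c, s = np.cos(th), np.sin(th)
    world = scan_xy @ np.array([[c, -s], [s, c]]).T + np.array([x, y])
    ij = np.floor(world / RES).astype(int) + GRID // 2
    ok = (ij >= 0).all(axis=1) & (ij < GRID).all(axis=1)
    if not ok.any():
        return 1e-9
    return grid[ij[ok, 0], ij[ok, 1]].mean() + 1e-9
```

Collapsing the 3D data to 2D keeps each map-matching lookup to a single array access, which is what makes the particle filter computationally inexpensive.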


Advanced Robotics | 2012

Outdoor Localization Using Stereo Vision Under Various Illumination Conditions

Kiyoshi Irie; Tomoaki Yoshida; Masahiro Tomono

We present a mobile robot localization method that uses only a stereo camera. Vision-based localization in outdoor environments is a challenging problem because of extreme changes in illumination. To cope with varying illumination conditions, we use two-dimensional occupancy grid maps generated from three-dimensional point clouds obtained by a stereo camera, and we incorporate salient line segments extracted from the ground into the grid maps. The grid maps are not significantly affected by illumination conditions because occupancy information and salient line segments can be obtained robustly. On the grid maps, the robot's pose is estimated using a particle filter that combines visual odometry and map matching. We use edge-point-based stereo simultaneous localization and mapping to obtain occupancy information and robot ego-motion estimates simultaneously. We tested our method under various illumination and weather conditions, including sunny and rainy days. The experimental results showed the effectiveness and robustness of the proposed method, which enables localization under extremely poor illumination conditions that are challenging even for existing state-of-the-art methods.
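
A rough sketch of the "salient line segments from the ground" step, assuming OpenCV and a top-down, grid-aligned grayscale image of the ground (details the abstract does not specify): segments such as road paint are detected and stamped into the occupancy grid as additional, illumination-robust evidence.

```python
import cv2
import numpy as np

def add_ground_segments(grid, ground_img):
    """Stamp salient ground line segments into a boolean 2D grid.
    ground_img: 8-bit grayscale top-down view, pixel-aligned with grid."""
    edges = cv2.Canny(ground_img, 50, 150)          # thresholds assumed
    segs = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=40,
                           minLineLength=20, maxLineGap=5)
    if segs is None:
        return grid
    layer = np.zeros(ground_img.shape[:2], dtype=np.uint8)
    for x1, y1, x2, y2 in segs[:, 0]:
        cv2.line(layer, (x1, y1), (x2, y2), 255, 1)
    return grid | (layer > 0)                       # merge as occupancy
```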


International Conference on Robotics and Automation (ICRA) | 2011

3D laser scanner with gazing ability

Tomoaki Yoshida; Kiyoshi Irie; Eiji Koyanagi; Masahiro Tomono

This paper presents a 3D laser scanner that can gaze at an arbitrary region. By modulating the secondary rotation speed of a roundly swinging 3D laser scanner, it gazes at a specific region and measures it with a high density of measurement points. The proposed method uses the secondary rotation motor to control the measurement-point density, so no extra motor is required. The 3D scanner keeps its full field of view and scan cycle time while gazing at any region. The system is evaluated on experimental targets in different scenarios using a mobile robot.
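
The cycle-time-preserving gaze can be worked out with simple arithmetic. The sketch below is our reading of the abstract, not the paper's formulation: slow the secondary rotation inside the gaze window to multiply point density by a factor k, and speed it up elsewhere so one full rotation still takes the nominal period.

```python
def gaze_speeds(omega0, window_frac, k):
    """omega0: nominal secondary rotation speed; window_frac: fraction of
    the rotation covered by the gaze window; k: density boost inside it.
    Returns (slow, fast) speeds that preserve the total cycle time."""
    assert 0 < window_frac * k < 1, "cycle time cannot be preserved"
    slow = omega0 / k  # k-times denser sampling inside the window
    # Solve window/slow + (1 - window)/fast = 1/omega0 for fast:
    fast = omega0 * (1 - window_frac) / (1 - window_frac * k)
    return slow, fast

# Example: gaze at 20% of the rotation with 3x point density.
slow, fast = gaze_speeds(omega0=1.0, window_frac=0.2, k=3.0)  # (1/3, 2.0)
```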


International Conference on Robotics and Automation (ICRA) | 2011

A High Dynamic Range vision approach to outdoor localization

Kiyoshi Irie; Tomoaki Yoshida; Masahiro Tomono

We propose a novel localization method for outdoor mobile robots using High Dynamic Range (HDR) vision technology. To obtain an HDR image, multiple images at different exposures are typically captured and combined. However, since a mobile robot can be moving during the capture sequence, the images cannot be fused easily. Instead, we generate a set of keypoints that incorporates those detected in each image. The position of the robot is estimated by matching the keypoint sets against a map. We conducted experimental comparisons of HDR and auto-exposure images, and our HDR method showed higher robustness and localization accuracy.
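
A minimal sketch of the keypoint-set idea, assuming OpenCV ORB features (the abstract does not commit to a detector): keypoints are detected in each exposure of the burst and pooled into one set, rather than fusing the frames into an HDR image first. For simplicity the sketch ignores the inter-frame robot motion that the actual matching must account for.

```python
import cv2
import numpy as np

def keypoints_from_burst(images):
    """Detect keypoints in each exposure and pool them into one set.
    images: grayscale frames of the same scene at different exposures."""
    orb = cv2.ORB_create()
    kps, descs = [], []
    for img in images:
        k, d = orb.detectAndCompute(img, None)
        if d is not None:
            kps.extend(k)
            descs.append(d)
    # The pooled set covers both dark and bright regions; near-duplicate
    # keypoints could be pruned afterwards by spatial suppression.
    return kps, (np.vstack(descs) if descs else None)
```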


International Conference on Robotics and Automation (ICRA) | 2012

Localization and road boundary recognition in urban environments using digital street maps

Kiyoshi Irie; Masahiro Tomono

In this study, we aim to achieve autonomous navigation for robots in environments that they have not previously visited. Many existing methods for autonomous navigation require a map to be built beforehand, typically by manually navigating the robot, and navigation without maps, i.e., without any prior information about the environment, is very difficult. We propose to use existing digital street maps for autonomous navigation. Digital street maps (e.g., those provided by Google Maps) are now widely available and routinely used, and reusing them for robots eliminates the extra cost of building maps. One of the difficulties in using existing street maps is data association between a robot's observations and the map, because the physical entities that correspond to the boundary lines in the map are unknown. We address this issue by using region annotations, such as roads and buildings, together with prior knowledge. We introduce a probabilistic framework that simultaneously estimates the robot's position and the road boundaries. We evaluated our method in complex urban environments, where it successfully localized in areas that include both roadways and pedestrian walkways.
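
One way to picture the data-association step is as a likelihood over pose hypotheses. The sketch below is an illustrative simplification, not the paper's model: observed boundary points (e.g., detected curbs) are scored by their distance to densely sampled street-map boundary lines, with a Gaussian noise model whose 0.5 m scale is an assumption.

```python
import numpy as np

def boundary_likelihood(obs_xy, map_boundary_xy, sigma=0.5):
    """obs_xy: Nx2 detected boundary points, transformed into the map
    frame by a pose hypothesis; map_boundary_xy: Mx2 samples along the
    street map's boundary polylines. Brute-force nearest neighbors are
    fine for a sketch; a KD-tree would be used at scale."""
    d = np.linalg.norm(obs_xy[:, None, :] - map_boundary_xy[None, :, :],
                       axis=2).min(axis=1)
    return np.exp(-0.5 * (d / sigma) ** 2).mean()
```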


Intelligent Robots and Systems (IROS) | 2010

Mobile robot localization using stereo vision in outdoor environments under various illumination conditions

Kiyoshi Irie; Tomoaki Yoshida; Masahiro Tomono

This paper proposes a new localization method for outdoor navigation using only a stereo camera. Vision-based navigation in outdoor environments is still challenging because of large illumination changes. To cope with various illumination conditions, we use 2D occupancy grid maps generated from 3D point clouds obtained by a stereo camera, and we incorporate salient line segments extracted from the ground into the grid maps. This grid-map building is not much affected by illumination conditions. On the grid maps, the robot poses are estimated using a particle filter that combines visual odometry and map matching. Experimental results showed the effectiveness and robustness of the proposed method under various weather and illumination conditions.
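
Complementing the map-building and map-matching pieces sketched above, here is a minimal version of the filter loop itself, again an illustration rather than the authors' code: particles are propagated by the visual-odometry increment plus noise, weighted by a map-matching score, and resampled. The noise levels and the `match_score` callback are assumptions.

```python
import numpy as np

def pf_step(particles, vo_delta, match_score,
            rng=np.random.default_rng()):
    """particles: Nx3 poses (x, y, theta); vo_delta: (dx, dy, dtheta)
    in the robot frame from visual odometry; match_score: pose -> weight
    (e.g., a grid-map matching score)."""
    n = len(particles)
    dx, dy, dth = vo_delta
    c, s = np.cos(particles[:, 2]), np.sin(particles[:, 2])
    # Motion update: apply the odometry increment with added noise.
    particles[:, 0] += c * dx - s * dy + rng.normal(0, 0.05, n)
    particles[:, 1] += s * dx + c * dy + rng.normal(0, 0.05, n)
    particles[:, 2] += dth + rng.normal(0, 0.01, n)
    # Measurement update and resampling.
    w = np.array([match_score(p) for p in particles])
    w /= w.sum()
    return particles[rng.choice(n, size=n, p=w)]
```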


Intelligent Robots and Systems (IROS) | 2013

Road recognition from a single image using prior information

Kiyoshi Irie; Masahiro Tomono

In this study, we present a novel road recognition method using a single image for mobile robot navigation. Vision-based road recognition in outdoor environments remains a significant challenge. Our approach exploits digital street maps, the robot position, and prior knowledge of the environment. We segment an input image into superpixels, which are grouped into various object classes such as roadway, sidewalk, curb, and wall. We formulate the classification problem as an energy minimization problem and employ graph cuts to estimate the optimal object classes in the image. Although prior information assists recognition, erroneous information can lead to false recognition. Therefore, we incorporate localization into our recognition method to correct errors in robot position. The effectiveness of our method was verified through experiments using real-world urban datasets.
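
The energy-minimization step can be illustrated in the binary case. The sketch below, assuming the PyMaxflow library, solves a simplified road / not-road labeling on a pixel grid instead of the paper's multi-class superpixel problem; it shows the structure of unary data terms (from map priors and appearance) plus a pairwise smoothness term.

```python
import maxflow
import numpy as np

def segment_road(road_prob, smoothness=1.0):
    """road_prob: HxW per-pixel road probability (e.g., fused from
    street-map priors and appearance). Returns a boolean road mask."""
    eps = 1e-6
    g = maxflow.Graph[float]()
    nodes = g.add_grid_nodes(road_prob.shape)
    g.add_grid_edges(nodes, smoothness)           # pairwise smoothness
    # Unary terms: source capacity = cost of labeling "road" (sink side),
    # sink capacity = cost of labeling "not road".
    g.add_grid_tedges(nodes,
                      -np.log(road_prob + eps),
                      -np.log(1.0 - road_prob + eps))
    g.maxflow()
    return g.get_grid_segments(nodes)             # True = road
```

For the multi-class version described in the abstract, the same machinery is applied repeatedly via moves such as alpha-expansion.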


Advanced Robotics | 2016

Dependence maximization localization: a novel approach to 2D street-map-based robot localization

Kiyoshi Irie; Masashi Sugiyama; Masahiro Tomono

Recently, localization methods based on detailed maps constructed using simultaneous localization and mapping have been widely used for mobile robot navigation. However, the cost of building such maps increases rapidly with the size of the target environment. Here, we consider the problem of localizing a mobile robot using existing 2D street maps. Although a large amount of research on this topic has been reported, the majority of previous studies have focused on car-like vehicles that navigate on roadways; thus, the efficacy of such methods on sidewalks is not yet known. In this paper, we propose a novel localization approach that can be applied to sidewalks. Whereas roadways are typically marked, e.g., by white lines, sidewalks are not, and therefore road boundary detection is not straightforward; obtaining exact correspondences between sensor data and a street map is thus difficult. Our approach to overcoming this difficulty is to maximize the statistical dependence between the sensor data and the map: localization is achieved through maximization of a mutual-information-based criterion. Our method employs a computationally efficient estimator of squared-loss mutual information, through which we achieve near real-time performance. The efficacy of our method is evaluated through localization experiments using real-world data sets.
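
The criterion itself is squared-loss mutual information, SMI(X;Y) = (1/2) * sum over x,y of p(x)p(y) * (p(x,y)/(p(x)p(y)) - 1)^2, between discretized sensor features X and the map labels Y they fall on. The sketch below estimates it with a plain joint histogram; the paper instead uses LSMI, a direct least-squares estimator of the same quantity, which is what gives the near real-time performance.

```python
import numpy as np

def smi_histogram(x_labels, y_labels):
    """Plug-in SMI estimate for two sequences of small non-negative
    integer labels (sensor feature bins vs. street-map region labels)."""
    nx, ny = x_labels.max() + 1, y_labels.max() + 1
    pxy = np.zeros((nx, ny))
    np.add.at(pxy, (x_labels, y_labels), 1.0)
    pxy /= pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    ratio = pxy / (px * py + 1e-12)
    return 0.5 * np.sum(px * py * (ratio - 1.0) ** 2)
```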


Intelligent Robots and Systems (IROS) | 2015

A dependence maximization approach towards street map-based localization

Kiyoshi Irie; Masashi Sugiyama; Masahiro Tomono

In this paper, we present a novel approach to 2D street-map-based localization for mobile robots that navigate mainly in urban sidewalk environments. Recently, localization based on maps built by Simultaneous Localization and Mapping (SLAM) has been widely used with great success. However, such methods limit robot navigation to environments whose maps have been prebuilt; in other words, robots cannot navigate in environments that they have not previously visited. We aim to relax this restriction by employing existing 2D street maps for localization. Finding an exact match between sensor data and a street map is challenging because, unlike maps built by robots, street maps lack detailed information about the environment (such as height and color). Our approach to coping with this difficulty is to maximize the statistical dependence between the sensor data and the map: localization is achieved through maximization of a mutual-information-based criterion. Our method employs a computationally efficient estimator of squared-loss mutual information, through which we achieve near real-time performance. The effectiveness of our method is evaluated through localization experiments using real-world data sets.
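
The outer loop of such a dependence-maximization localizer might look like the sketch below: transform the sensor points by each candidate pose, read off the street-map label under each point, and keep the pose that maximizes the dependence between sensor labels and map labels. scikit-learn's Shannon mutual information is used here as a stand-in for the paper's squared-loss criterion, and `map_label_at` is a hypothetical map-lookup callback.

```python
import numpy as np
from sklearn.metrics import mutual_info_score

def localize(sensor_xy, sensor_labels, map_label_at, candidate_poses):
    """sensor_xy: Nx2 points in the robot frame; sensor_labels: N integer
    feature labels; map_label_at: (Nx2 map-frame points) -> N map labels;
    candidate_poses: iterable of (x, y, theta) hypotheses."""
    best_pose, best_score = None, -np.inf
    for x, y, th in candidate_poses:
        c, s = np.cos(th), np.sin(th)
        pts = sensor_xy @ np.array([[c, -s], [s, c]]).T + np.array([x, y])
        score = mutual_info_score(sensor_labels, map_label_at(pts))
        if score > best_score:
            best_pose, best_score = (x, y, th), score
    return best_pose
```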


Conference on Automation Science and Engineering (CASE) | 2016

Target-less camera-LiDAR extrinsic calibration using a bagged dependence estimator

Kiyoshi Irie; Masashi Sugiyama; Masahiro Tomono

The goal of this study is to achieve automatic extrinsic calibration of a camera-LiDAR system without calibration targets. Calibration through maximization of statistical dependence using mutual information (MI) is a promising approach; however, we observed that existing methods perform poorly on outdoor data sets. Because of their susceptibility to noise, the objective functions of previous methods tend to be non-smooth, and gradient-based searches get stuck in local optima. To overcome these issues, we introduce a novel dependence estimator called bagged least-squares mutual information (BLSMI). BLSMI combines a kernel-based dependence estimator with noise reduction by bootstrap aggregating (bagging), which can handle richer features and estimate dependence robustly. We compared our method with previous methods using indoor and outdoor data sets and observed that it performed best in terms of calibration accuracy. While previous methods showed degraded performance on outdoor data sets because of the local-optima problem, our method exhibited high calibration accuracy on both indoor and outdoor data sets.
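
The bagging step is the easiest piece to sketch. In the illustration below, which is not the paper's implementation, the dependence between image intensities and LiDAR reflectances sampled at the projected points is estimated on bootstrap resamples and averaged, which smooths the objective over the extrinsic parameters; `dependence` is a stand-in for the kernel-based least-squares MI estimator the paper uses.

```python
import numpy as np

def bagged_dependence(intensities, reflectances, dependence,
                      n_bags=20, rng=np.random.default_rng(0)):
    """Average a dependence estimate over bootstrap resamples.
    intensities, reflectances: paired 1D arrays sampled where LiDAR
    points project into the image under a candidate extrinsic."""
    n = len(intensities)
    scores = []
    for _ in range(n_bags):
        idx = rng.integers(0, n, size=n)  # bootstrap resample
        scores.append(dependence(intensities[idx], reflectances[idx]))
    return float(np.mean(scores))

# A calibration search would re-project the LiDAR points under each
# candidate extrinsic, evaluate bagged_dependence, and ascend the
# now-smoother objective, e.g. with a gradient-free optimizer.
```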

Collaboration


Dive into Kiyoshi Irie's collaboration.

Top Co-Authors

Masahiro Tomono (Chiba Institute of Technology)
Yasuo Hayashibara (Chiba Institute of Technology)
Hideaki Minakata (Chiba Institute of Technology)
Tomoaki Yoshida (Chiba Institute of Technology)
Eiji Koyanagi (Chiba Institute of Technology)
Hideaki Yamato (Aoyama Gakuin University)
Masayuki Ando (Chiba Institute of Technology)