Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Keisuke Yoneda is active.

Publication


Featured research published by Keisuke Yoneda.


International Conference on Intelligent Transportation Systems | 2014

Traffic light recognition in varying illumination using deep learning and saliency map

Vijay John; Keisuke Yoneda; Bin Qi; Zheng Liu; Seiichi Mita

The accurate detection and recognition of traffic lights is important for autonomous vehicle navigation and advanced driver assistance systems. In this paper, we present a traffic light recognition algorithm for varying illumination conditions using computer vision and machine learning. More specifically, a convolutional neural network is used to detect traffic lights and extract features from visual camera images. To improve the recognition accuracy, an on-board GPS sensor is employed to identify the region-of-interest in the visual image that contains the traffic light. In addition, a saliency map containing the traffic light location is generated from the normal-illumination recognition to assist recognition under low illumination conditions. The proposed algorithm was evaluated on our datasets acquired in a variety of real-world environments and compared with the performance of a baseline traffic signal recognition algorithm. The experimental results demonstrate the high recognition accuracy of the proposed algorithm under varied illumination conditions.


IEEE Transactions on Computational Imaging | 2015

Saliency Map Generation by the Convolutional Neural Network for Real-Time Traffic Light Detection Using Template Matching

Vijay John; Keisuke Yoneda; Zheng Liu; Seiichi Mita

A critical issue in autonomous vehicle navigation and advanced driver assistance systems (ADAS) is the accurate real-time detection of traffic lights. Typically, vision-based sensors are used to detect the traffic light. However, detecting traffic lights using computer vision, image processing, and learning algorithms is not trivial. The challenges include appearance variations, illumination variations, and reduced appearance information in low illumination conditions. To address these challenges, we present a visual camera-based real-time traffic light detection algorithm in which we identify the spatially constrained region-of-interest in the image containing the traffic light. Given the identified region-of-interest, we achieve high traffic light detection accuracy with few false positives, even in adverse environments. To perform robust traffic light detection in varying conditions with few false positives, the proposed algorithm consists of two steps: offline saliency map generation and real-time traffic light detection. In the offline step, a convolutional neural network, i.e., a deep learning framework, detects and recognizes the traffic lights in the image using region-of-interest information provided by an onboard GPS sensor. The detected traffic light information is then used to generate the saliency maps with a modified multidimensional density-based spatial clustering of applications with noise (M-DBSCAN) algorithm. The generated saliency maps are indexed using the vehicle GPS information. In the real-time step, traffic lights are detected by retrieving the relevant saliency maps and performing template matching using colour information. The proposed algorithm is validated with datasets acquired in varying conditions and different countries, e.g., the USA, Japan, and France. The experimental results report a high detection accuracy with negligible false positives under varied illumination conditions. More importantly, an average computational time of 10 ms/frame is achieved. A detailed parameter analysis is conducted, and the observations are summarized and reported in this paper.
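The core of the real-time step — matching a template against a retrieved region — can be sketched with plain normalized cross-correlation. This is a minimal illustration, not the paper's implementation: the `ncc_match` helper and its brute-force search are assumptions, and the actual method additionally uses colour information and GPS-indexed saliency maps.

```python
import numpy as np

def ncc_match(image, template):
    """Slide a template over a grayscale image and return the (row, col)
    of the best normalized cross-correlation score, plus that score."""
    ih, iw = image.shape
    th, tw = template.shape
    t = template - template.mean()
    t_norm = np.sqrt((t ** 2).sum())
    best_score, best_pos = -1.0, (0, 0)
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            patch = image[r:r + th, c:c + tw]
            p = patch - patch.mean()
            denom = np.sqrt((p ** 2).sum()) * t_norm
            if denom == 0:
                continue  # flat patch carries no correlation information
            score = (p * t).sum() / denom
            if score > best_score:
                best_score, best_pos = score, (r, c)
    return best_pos, best_score
```

In practice a library routine (e.g. an optimized matchTemplate) would replace the double loop; the brute-force version only makes the normalization explicit.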


IEEE Intelligent Vehicles Symposium | 2015

Urban road localization by using multiple layer map matching and line segment matching

Keisuke Yoneda; Chenxi Yang; Seiichi Mita; Tsubasa Okuya; Kenji Muto

In recent years, automated vehicle research has moved to its next stage: automated driving experiments on public roads. This study focuses on realizing accurate localization based on Lidar data and a precise map. On different road types, such as urban roads and expressways, the observable surroundings differ significantly. For example, on urban roads many buildings can be observed around the upper part of the vehicle, and such observations enable accurate map matching. On an expressway, in contrast, the upper part yields no distinctive observations, so the lower part must be observed for map matching. To adapt to these changing situations, we propose a localization method based on self-adaptive multi-layered scan matching and road line segment matching. The main idea is to effectively match the features observed at different heights and to refine the result by applying line segment matching in certain scenes. Localization experiments show the ability to estimate an accurate vehicle pose in urban driving.
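The multi-layered idea — let each height layer contribute to the match according to how informative it is — can be sketched as a toy 2-D grid search. The `multilayer_localize` helper, the point-count weighting, and the candidate-offset search are all assumptions for illustration; the paper's actual self-adaptive scan matching is far richer.

```python
import numpy as np

def layered_match_score(scan, map_pts, offset, tol=0.3):
    """Fraction of scan points that land within `tol` of some map point
    after applying the candidate 2-D offset."""
    shifted = scan + offset
    d = np.linalg.norm(shifted[:, None, :] - map_pts[None, :, :], axis=2)
    return (d.min(axis=1) < tol).mean()

def multilayer_localize(layers, map_layers, candidates):
    """Score each candidate offset as a point-count-weighted sum over
    height layers, so a sparse (uninformative) layer contributes little."""
    weights = np.array([len(s) for s in layers], dtype=float)
    weights /= weights.sum()
    best, best_score = None, -1.0
    for off in candidates:
        score = sum(w * layered_match_score(s, m, off)
                    for w, s, m in zip(weights, layers, map_layers))
        if score > best_score:
            best, best_score = off, score
    return np.array(best), best_score
```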


IEEE Intelligent Vehicles Symposium | 2015

General behavior and motion model for automated lane change

Hossein Tehrani; Quoc Huy Do; Masumi Egawa; Kenji Muto; Keisuke Yoneda; Seiichi Mita

Lane change maneuvers cause many severe highway accidents, and automated lane change has great potential to reduce the impact of human error and the number of accidents. Previous research has mostly tried to find an optimal trajectory while ignoring the behavior model. Existing methods can be applied to simple lane change scenarios but generally fail for complicated cases or in the presence of time/distance constraints. Through analysis of, and inspiration from, human driver lane change data, we propose a multi-segment lane change model that mimics the human driver in challenging scenarios. We also propose a method to convert behavior/motion selection into a time-based pattern recognition problem. We developed a simulation platform in PreScan and evaluated the proposed automated lane change method in challenging scenarios.


Artificial Life and Robotics | 2018

Mono-camera based vehicle localization using lidar intensity map for automated driving

Keisuke Yoneda; Ryo Yanase; Mohammad Aldibaja; Naoki Suganuma; Kei Sato

This paper reports an image-based localization method for an automated vehicle. The proposed method uses a mono-camera and an inertial measurement unit to estimate the vehicle pose. Self-localization is implemented by map matching between a reference digital map and sensor observations. In general, the same type of sensor is used for both the map data and the observations; this study instead focuses on a mono-camera method that uses a Lidar-based map, for the purpose of a low-cost implementation. Image template matching provides a correlation distribution between the captured image and the predefined orthogonal map, and the probability of the vehicle pose is then updated using the obtained correlation. Experiments were carried out on real driving data on an urban road. The results verify that the proposed method estimates the vehicle position with a 0.11 m positioning error in real time.
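The update of the pose probability from the correlation distribution can be illustrated as a one-step Bayesian grid update. This is a hedged sketch: the `update_pose_belief` helper and the treatment of correlation scores as likelihoods are assumptions, not the paper's exact formulation.

```python
import numpy as np

def update_pose_belief(prior, correlation):
    """Multiply a prior belief grid over candidate poses by a likelihood
    derived from image-to-map correlation scores, then renormalize."""
    likelihood = np.clip(correlation, 1e-6, None)  # keep strictly positive
    posterior = prior * likelihood
    return posterior / posterior.sum()
```

With a uniform prior the posterior simply mirrors the correlation surface; a peaked prior from the inertial measurement unit would pull the estimate toward the predicted pose.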


Artificial Life and Robotics | 2018

Trajectory optimization and state selection for urban automated driving

Keisuke Yoneda; Toshiki Iida; TaeHyon Kim; Ryo Yanase; Mohammad Aldibaja; Naoki Suganuma

Automated driving is an emerging technology in which a car performs recognition, decision making, and control. The decision-making system consists of route planning and trajectory planning. Route planning optimizes the shortest path to the destination, like an automotive navigation system. According to the static and dynamic obstacles around the vehicle, trajectory planning generates lateral and longitudinal profiles for the vehicle to drive along the given path. This study focuses on trajectory planning for vehicle maneuvers in urban traffic scenes. This paper proposes a trajectory generation method that extends an existing method to generate more natural behavior with small acceleration and deceleration, introducing an intermediate behavior that gradually switches from velocity keeping to distance keeping. The proposed method generates smooth trajectories with small acceleration and deceleration. Numerical experiments show that the vehicle produces smooth behaviors according to the surrounding vehicles.
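The intermediate behavior — gradually switching from velocity keeping to distance keeping — can be sketched as a blend of two longitudinal acceleration commands. The `blended_accel` helper and its gains are illustrative assumptions, not the paper's controller.

```python
def blended_accel(v, v_des, gap, gap_des, v_lead, alpha,
                  k_v=0.4, k_g=0.2, k_r=0.6):
    """Blend a velocity-keeping command with a distance-keeping command.
    alpha ramps from 0 (pure velocity keeping) to 1 (pure distance
    keeping), avoiding the abrupt switch that causes deceleration spikes."""
    a_vel = k_v * (v_des - v)                           # track desired speed
    a_gap = k_g * (gap - gap_des) + k_r * (v_lead - v)  # track desired gap
    return (1.0 - alpha) * a_vel + alpha * a_gap
```

Ramping `alpha` over time (rather than switching it from 0 to 1 in one step) is what makes the intermediate behavior smooth.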


The Proceedings of the JSME Annual Conference on Robotics and Mechatronics (Robomec) | 2017

Convolutional Neural Network Based Vehicle Turn Signal Recognition

Keisuke Yoneda; Yoshihiro Takagi; Naoki Suganuma

Automated driving is an emerging technology in which a car performs recognition, decision making, and control. Recognizing surrounding vehicles is a key technology for generating the trajectory of the ego vehicle. This paper focuses on detecting turn signal information as one indicator of the intentions of surrounding drivers. Such information helps to predict their behavior in advance, especially lane changes and left or right turns at intersections. Using these intentions, the automated vehicle can generate a safe trajectory before the surrounding vehicles begin to change their behavior. The proposed method recognizes the turn signal of a target vehicle with a mono-camera: it detects the lighting state using a convolutional neural network and then calculates the flashing frequency using the fast Fourier transform.
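The second stage — estimating the flashing frequency of the per-frame lighting state with the fast Fourier transform — can be sketched as follows. The `flash_frequency` helper and the square-wave example are assumptions for illustration; the CNN lighting-state detector that produces the on/off sequence is omitted.

```python
import numpy as np

def flash_frequency(states, fps):
    """Estimate the flashing frequency (Hz) of a per-frame on/off lamp
    sequence via the dominant non-DC peak of its FFT magnitude."""
    x = np.asarray(states, dtype=float)
    x = x - x.mean()                 # drop the DC component
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
    return freqs[np.argmax(spectrum)]
```

A turn signal flashing at roughly 1–2 Hz then shows up as a clear spectral peak, which separates it from steady brake lamps (no peak) and from frame-level detection noise.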


IEICE Transactions on Information and Systems | 2013

A Practical and Optimal Path Planning for Autonomous Parking Using Fast Marching Algorithm and Support Vector Machine

Quoc Huy Do; Seiichi Mita; Keisuke Yoneda


IEEE Intelligent Vehicles Symposium | 2018

Vehicle Localization using 76GHz Omnidirectional Millimeter-Wave Radar for Winter Automated Driving

Keisuke Yoneda; Naoya Hashimoto; Ryo Yanase; Mohammad Aldibaja; Naoki Suganuma


IEEE Intelligent Vehicles Symposium | 2018

LIDAR Based Altitude Estimation for Autonomous Vehicles using Elevation Maps

Ryo Yanase; Mohammad Aldibaja; Akisue Kuramoto; Kim Taehyon; Keisuke Yoneda; Naoki Suganuma

Collaboration


Dive into Keisuke Yoneda's collaborations.

Top Co-Authors

Seiichi Mita

Toyota Technological Institute

Quoc Huy Do

Toyota Technological Institute


Vijay John

Toyota Technological Institute


Zheng Liu

University of British Columbia


Bin Qi

Toyota Technological Institute
