Publication


Featured research published by Yukimasa Tamatsu.


ieee intelligent vehicles symposium | 2004

Solid or not solid: vision for radar target validation

Amir Sole; Ofer Mano; Gideon Stein; Hiroaki Kumon; Yukimasa Tamatsu; Amnon Shashua

In the context of combining radar and vision sensors for a fusion application in dense city traffic, one of the major challenges is validating radar targets. We take a high-level fusion approach, assuming that both sensor modalities can independently locate and identify targets of interest. In this context, a radar target either corresponds to a vision target, in which case it is validated without further processing, or it does not; the latter case is the focus of this paper. A non-matched radar target can correspond to a solid object that is not among the objects of interest of the vision sensor (such as a guard-rail), or it can be caused by reflections, in which case it is a ghost target that does not match any physical object in the real world. We describe a number of computational steps for deciding the status of non-matched radar targets. The computations combine direct motion parallax measurements, indirect motion analysis (which is not sufficient for computing parallax but is nevertheless quite effective), and pattern classification steps that cover situations in which motion analysis is weak or ineffective. One of the major advantages of our high-level fusion approach is that it allows the use of simpler (low-cost) radar technology to create a combined high-performance system.
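For illustration only, the decision flow described above could be sketched roughly as follows; this is not the authors' implementation, and the data types, thresholds, and score inputs are hypothetical placeholders.

```python
# Hypothetical sketch of high-level radar/vision fusion for radar-target validation.
# Not the paper's code; thresholds, scores, and the target type are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Target:
    range_m: float       # distance reported by the sensor
    azimuth_deg: float   # bearing reported by the sensor

def validate_radar_target(radar_target, vision_targets, parallax_score, classifier_score,
                          match_tol_m=2.0, parallax_thresh=0.5, class_thresh=0.5):
    """Return 'validated', 'solid (non-interest)', or 'ghost' for one radar target."""
    # 1. If some vision target matches the radar target, validate it without further processing.
    for v in vision_targets:
        if abs(v.range_m - radar_target.range_m) < match_tol_m:
            return "validated"
    # 2. Otherwise decide whether the non-matched target is a real solid object or a ghost:
    #    direct motion-parallax evidence first, then a pattern classifier as fallback.
    if parallax_score > parallax_thresh:
        return "solid (non-interest)"   # e.g. a guard-rail
    if classifier_score > class_thresh:
        return "solid (non-interest)"
    return "ghost"
```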


ieee intelligent vehicles symposium | 2007

Recognition of foggy conditions by in-vehicle camera and millimeter wave radar

Kenji Mori; Tomokazu Takahashi; Ichiro Ide; Hiroshi Murase; Takayuki Miyahara; Yukimasa Tamatsu

Recently, driving support techniques using in-vehicle sensors have attracted much attention and have been applied in practical systems. We focus on supporting drivers in poor visibility conditions; fog is one of the causes of reduced visibility. In this paper, we propose a method for judging fog density using in-vehicle camera images and millimeter-wave (mm-W) radar data. The method determines fog density by evaluating both the visibility of a preceding vehicle and the distance to it. Experiments showed that judgments made by the proposed method achieved a recognition rate of 84% compared to ground truth obtained from human judgments.
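As a minimal sketch of the idea (relating the apparent contrast of the preceding vehicle to the radar-measured distance), the snippet below is illustrative only; the contrast measure and the thresholds are assumptions, not the paper's actual parameters.

```python
# Illustrative sketch, not the paper's algorithm: judge fog density from the image
# contrast of the preceding vehicle and the mm-W radar distance to it.
import numpy as np

def vehicle_contrast(gray_patch: np.ndarray) -> float:
    """Michelson-like contrast of the image patch containing the preceding vehicle."""
    lo, hi = float(gray_patch.min()), float(gray_patch.max())
    return 0.0 if hi + lo == 0 else (hi - lo) / (hi + lo)

def judge_fog_density(gray_patch: np.ndarray, distance_m: float) -> str:
    # In fog, contrast decays roughly exponentially with distance, so a vehicle that is
    # close yet low-contrast indicates dense fog (thresholds below are made up).
    c = vehicle_contrast(gray_patch)
    if distance_m < 30 and c < 0.2:
        return "dense fog"
    if distance_m < 60 and c < 0.4:
        return "light fog"
    return "clear"
```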


international conference on innovative computing, information and control | 2006

Visibility Estimation in Foggy Conditions by In-Vehicle Camera and Radar

Kenji Mori; Terutoshi Kato; Tomokazu Takahashi; Ichiro Ide; Hiroshi Murase; Takayuki Miyahara; Yukimasa Tamatsu

We propose a method of judging fog density by using in-vehicle camera images and millimeter-wave (mm-W) radar data. This method determines fog density by evaluating both the visibility of a preceding vehicle and the distance to it. Experiments revealed that judgments made by the proposed method achieved an 85% precision rate compared to judgments made by human subjects.


ieee intelligent vehicles symposium | 2007

Measurement of Visibility Conditions toward Smart Driver Assistance for Traffic Signals

Fumika Kimura; Tomokazu Takahashi; Yoshito Mekada; Ichiro Ide; Hiroshi Murase; Takayuki Miyahara; Yukimasa Tamatsu

We propose a method to recognize the visibility of traffic signals from a driver's perspective. As more driver assistance systems are put into practical use, more information is provided to drivers, so each information provision system should select appropriate information based on the situation. Our goal is to realize a system that recognizes the visibility of traffic signals from images taken by in-vehicle cameras and appropriately provides information to drivers. In this paper, we propose a method to measure visibility using two criteria: detectability and discriminability. Each index is computed using image processing techniques. Experiments using actual images showed that the proposed indices correspond well to human perception.
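One possible way to compute two such indices from image patches is sketched below for illustration; the concrete formulas and the weighting are assumptions, not those defined in the paper.

```python
# Hypothetical sketch of a two-index visibility measure for a traffic signal region.
import numpy as np

def detectability(signal_patch: np.ndarray, surround_patch: np.ndarray) -> float:
    """How much the signal region stands out from its surroundings (mean-intensity gap)."""
    return abs(float(signal_patch.mean()) - float(surround_patch.mean())) / 255.0

def discriminability(signal_patch: np.ndarray) -> float:
    """How clearly the signal's internal structure (e.g. the lit lamp) can be distinguished."""
    return min(1.0, float(signal_patch.std()) / 128.0)

def visibility_index(signal_patch, surround_patch, w=0.5):
    # Combine the two criteria into a single score in [0, 1]; the weighting is an assumption.
    return w * detectability(signal_patch, surround_patch) + (1 - w) * discriminability(signal_patch)
```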


international conference on innovative computing, information and control | 2006

Raindrop Detection from In-Vehicle Video Camera Images for Rainfall Judgment

Hiroyuki Kurihata; Tomokazu Takahashi; Yoshito Mekada; Ichiro Ide; Hiroshi Murase; Yukimasa Tamatsu; Takayuki Miyahara

In this paper, we propose a method to detect raindrops from in-vehicle camera images and recognize rainfall using time-series information. We aim to improve the accuracy of raindrop detection by averaging the test images and by frame-matching the raindrop detection results across multiple adjoining frames. According to an evaluation experiment, the proposed method detected raindrops precisely enough for automatic wiper control.
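The two temporal steps mentioned above (frame averaging and matching detections across adjoining frames) could look roughly like the sketch below; this is an illustration under assumed data shapes, not the paper's implementation.

```python
# Illustrative sketch (not the paper's code): suppress spurious raindrop detections by
# averaging consecutive frames and keeping only detections that persist across frames.
import numpy as np

def average_frames(frames):
    """Average N consecutive grayscale frames: raindrops stuck on the windshield stay
    sharp while the moving background blurs out."""
    return np.mean(np.stack(frames, axis=0), axis=0)

def persistent_detections(detections_per_frame, min_frames=3, tol=5):
    """Keep candidate raindrop positions (x, y) that reappear in at least `min_frames` frames."""
    kept = []
    for (x, y) in detections_per_frame[0]:
        hits = sum(
            any(abs(x - xx) <= tol and abs(y - yy) <= tol for (xx, yy) in frame_dets)
            for frame_dets in detections_per_frame
        )
        if hits >= min_frames:
            kept.append((x, y))
    return kept
```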


ieee intelligent vehicles symposium | 2010

Estimation of traffic sign visibility toward smart driver assistance

Keisuke Doman; Daisuke Deguchi; Tomokazu Takahashi; Yoshito Mekada; Ichiro Ide; Hiroshi Murase; Yukimasa Tamatsu

We propose a visibility estimation method for traffic signs as part of work toward the realization of nuisance-free driving safety support systems. Recently, the number of driving safety support systems in a car has been increasing. As a result, it is becoming important to select appropriate information from them for safe and comfortable driving, because too much information may cause driver distraction and may increase the risk of a traffic accident. One approach to avoiding this problem is to alert the driver only with information that could easily be missed. To realize such a system, we focus on estimating the visibility of traffic signs. The proposed method is a model-based method that estimates the visibility of a traffic sign based on the difference in image features between the sign and its surrounding region. In this paper, we investigate the performance of the proposed method and show its effectiveness.
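As an illustration of comparing image features between a sign and its surroundings, here is a minimal sketch; the histogram feature and the distance metric are assumptions, not the paper's definitions.

```python
# Hypothetical sketch of model-based sign visibility: compare a simple intensity
# histogram of the sign region against that of its surrounding region.
import numpy as np

def intensity_histogram(patch: np.ndarray, bins=16):
    hist, _ = np.histogram(patch, bins=bins, range=(0, 255))
    return hist / max(hist.sum(), 1)   # normalize to a probability distribution

def sign_visibility(sign_patch: np.ndarray, surround_patch: np.ndarray) -> float:
    """Larger feature difference between sign and surroundings -> higher visibility."""
    h_sign = intensity_histogram(sign_patch)
    h_surr = intensity_histogram(surround_patch)
    # L1 distance between two probability vectors lies in [0, 2]; map it to [0, 1].
    return float(np.abs(h_sign - h_surr).sum()) / 2.0
```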


ieee intelligent vehicles symposium | 2011

Estimation of traffic sign visibility considering temporal environmental changes for smart driver assistance

Keisuke Doman; Daisuke Deguchi; Tomokazu Takahashi; Yoshito Mekada; Ichiro Ide; Hiroshi Murase; Yukimasa Tamatsu

We propose a visibility estimation method for traffic signs that considers temporal environmental changes, as part of work toward the realization of nuisance-free driver assistance systems. Recently, the number of driver assistance systems in a vehicle has been increasing. Accordingly, it is becoming important to sort out the appropriate information they provide, because providing too much information may cause driver distraction. To solve this problem, we focus on a visibility estimation method for controlling the information according to the visibility of a traffic sign. The proposed method sequentially captures a traffic sign with an in-vehicle camera and estimates its accumulative visibility by integrating a series of instantaneous visibility estimates. In this way, even if the environmental conditions change over time in complicated ways, we can still accurately estimate the visibility that the driver perceives in an actual traffic scene. We also investigate the performance of the proposed method and show its effectiveness.
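One simple way to integrate per-frame visibility into an accumulative value is sketched below; the exponential weighting toward recent frames is an assumption made for illustration and may differ from the paper's integration scheme.

```python
# Illustrative sketch: fold a sequence of instantaneous visibility scores into one
# accumulative visibility value, weighting recent frames more heavily.
def accumulative_visibility(instantaneous_scores, decay=0.8):
    """instantaneous_scores: per-frame visibility in [0, 1], oldest first."""
    acc, weight_sum, w = 0.0, 0.0, 1.0
    for score in reversed(instantaneous_scores):  # most recent frame gets weight 1
        acc += w * score
        weight_sum += w
        w *= decay
    return acc / weight_sum if weight_sum else 0.0

# Example: visibility improves as the vehicle approaches the sign.
print(accumulative_visibility([0.2, 0.4, 0.7, 0.9]))
```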


international conference on big data | 2016

Predicting statistics of asynchronous SGD parameters for a large-scale distributed deep learning system on GPU supercomputers

Yosuke Oyama; Akihiro Nomura; Ikuro Sato; Hiroki Nishimura; Yukimasa Tamatsu; Satoshi Matsuoka

Many studies have shown that Deep Convolutional Neural Networks (DCNNs) achieve high accuracy on image recognition tasks given large training datasets. The optimization technique known as asynchronous mini-batch Stochastic Gradient Descent (SGD) is widely used for deep learning because it gives fast training speed and good recognition accuracy, but it may increase the generalization error if training parameters fall in inappropriate ranges. We propose a performance model of a distributed DCNN training system called SPRINT that uses asynchronous GPU processing based on mini-batch SGD. The model considers the probability distributions of mini-batch size and gradient staleness, which are the core parameters of asynchronous SGD training. Our performance model takes the DCNN architecture and machine specifications as input parameters, and predicts the time to sweep the entire dataset, the mini-batch size, and the staleness with 5%, 9%, and 19% error on average, respectively, on several supercomputers with up to thousands of GPUs. Experimental results on two different supercomputers show that our model can consistently choose the fastest machine configuration that nearly meets a target mini-batch size.
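To illustrate one of the quantities the model predicts, the toy simulation below estimates a gradient-staleness distribution for asynchronous SGD; it is not the SPRINT performance model, and the per-gradient timing assumption is invented for the example.

```python
# Toy simulation (not SPRINT): estimate the distribution of gradient staleness in
# asynchronous SGD, i.e. how many global updates happen between the moment a worker
# reads the parameters and the moment its gradient is applied.
import random
from collections import Counter

def simulate_staleness(n_workers=8, n_updates=10_000, seed=0):
    rng = random.Random(seed)
    global_step = 0
    read_step = [0] * n_workers     # global step at which each worker last fetched parameters
    staleness = Counter()
    # Next completion time per worker, drawn from a made-up per-gradient compute time.
    finish = [rng.expovariate(1.0) for _ in range(n_workers)]
    for _ in range(n_updates):
        w = min(range(n_workers), key=lambda i: finish[i])  # next worker to finish
        staleness[global_step - read_step[w]] += 1          # its gradient is this stale
        global_step += 1
        read_step[w] = global_step                          # it re-reads the parameters
        finish[w] += rng.expovariate(1.0)
    return staleness

if __name__ == "__main__":
    for s, count in sorted(simulate_staleness().items())[:5]:
        print(f"staleness {s}: {count}")
```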


international conference on intelligent transportation systems | 2012

Visibility estimation of traffic signals under rainy weather conditions for smart driving support

Ryuhei Sato; Keisuke Doman; Daisuke Deguchi; Yoshito Mekada; Ichiro Ide; Hiroshi Murase; Yukimasa Tamatsu

The aim of this work is to support drivers by notifying them of traffic signal information in accordance with its visibility. To avoid traffic accidents, drivers should detect and recognize surrounding objects, especially traffic signals. However, when driving under rainy weather conditions, it is more difficult for drivers to detect or recognize objects in the road environment than under fine weather conditions. Therefore, this paper proposes a method for estimating the visibility of traffic signals for drivers under rainy weather conditions by image processing. The proposed method is based on the concept of visual noise known in the field of cognitive science, and extracts two types of visual noise features considered to affect the visibility of traffic signals. We expect to improve the accuracy of visibility estimation by combining the visual noise features with the texture feature introduced in a previous work. Experimental results showed that the proposed method could estimate the visibility of traffic signals more accurately under rainy weather conditions.
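For illustration, combining several scalar features into a single visibility estimate could be done with a simple linear model as sketched below; the feature names and training values are invented stand-ins, not the features or data used in the paper.

```python
# Hypothetical sketch: fuse two visual-noise stand-in features with a texture stand-in
# into one visibility score via least-squares linear regression.
import numpy as np

# Toy training data: [edge_density, raindrop_ratio, texture_contrast] -> visibility rating.
X_train = np.array([[0.1, 0.0, 0.8],
                    [0.5, 0.3, 0.4],
                    [0.8, 0.6, 0.2]])
y_train = np.array([0.9, 0.5, 0.1])

# Fit a linear model with least squares (bias column appended).
A = np.hstack([X_train, np.ones((len(X_train), 1))])
coef, *_ = np.linalg.lstsq(A, y_train, rcond=None)

def estimate_visibility(edge_density, raindrop_ratio, texture_contrast):
    """Linear combination of the three illustrative features plus a bias term."""
    x = np.array([edge_density, raindrop_ratio, texture_contrast, 1.0])
    return float(x @ coef)

print(estimate_visibility(0.3, 0.1, 0.6))
```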


intelligent vehicles symposium | 2005

ACC in consideration of visibility with sensor fusion technology under the concept of TACS

Hiroaki Kumon; Yukimasa Tamatsu; Takashi Ogawa; Ichiro Masaki

An ACC (adaptive cruise control) system maintains the distance to the preceding vehicle on behalf of the driver. However, drivers may feel uncomfortable, especially under bad visibility conditions, because, for example, a driver tends to keep a longer distance when a large preceding truck blocks the frontal view, whereas current ACC tries to keep the same distance regardless of such conditions. This paper proposes a novel Visibility-ACC (V-ACC) system, which enhances the current ACC system by considering such driver behavior under the concept of TACS (Three-layer Architecture for Comfort and Safety). A driver model is built from a driver behavior database. A new distance control algorithm for the V-ACC system is constructed based on the driver model and a newly developed sensor fusion system. A vehicle test was performed to evaluate its advantages.
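A minimal sketch of a visibility-aware following-distance policy is given below for illustration; it is not the paper's V-ACC controller or driver model, and the headway gains and limits are made-up assumptions.

```python
# Illustrative sketch: lengthen the target time headway when visibility is poor,
# e.g. when a large preceding truck blocks the frontal view.
def target_gap_m(ego_speed_mps: float, visibility: float,
                 base_headway_s: float = 1.5, max_extra_headway_s: float = 1.0,
                 min_gap_m: float = 5.0) -> float:
    """visibility in [0, 1]: 1 = clear view ahead, 0 = view fully blocked."""
    headway_s = base_headway_s + (1.0 - visibility) * max_extra_headway_s
    return max(min_gap_m, ego_speed_mps * headway_s)

# At 25 m/s (90 km/h): clear view vs. view blocked by a truck.
print(target_gap_m(25.0, visibility=1.0))  # 37.5 m
print(target_gap_m(25.0, visibility=0.2))  # 57.5 m
```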
