Mitsuhiko Ohta
Toyota
Publications
Featured research published by Mitsuhiko Ohta.
Information Visualization | 2002
Arata Takahashi; Yoshiki Ninomiya; Mitsuhiko Ohta; M. Nishida; M. Takayama
In this paper, a new lane detection method is proposed for a highway lane departure warning system whose hardware cost is reduced by using an existing wide-angle back-monitor camera and a commercially available embedded CPU. The extended Hough transform and a bird's-eye view image are applied effectively to achieve real-time lane detection. The method was implemented in software on an embedded CPU (Hitachi SH2E), on which the extended Hough transform executes in less than 66 milliseconds.
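As a rough illustration of the pipeline described above, the following sketch warps a camera frame to a bird's-eye view and applies a Hough transform to find lane candidates. It assumes OpenCV; the corner points and thresholds are illustrative, and the standard probabilistic Hough transform stands in for the paper's extended variant running on the SH2E.

```python
# Minimal sketch of Hough-based lane detection on a bird's-eye view image.
# Assumes OpenCV; corner points and thresholds are illustrative, not from
# the paper.
import cv2
import numpy as np

def detect_lanes(frame, src_corners, dst_corners, size=(320, 240)):
    # Warp the camera image to a bird's-eye view so lane markings
    # appear as (near-)parallel lines.
    M = cv2.getPerspectiveTransform(src_corners, dst_corners)
    top_view = cv2.warpPerspective(frame, M, size)

    # Edge detection followed by a probabilistic Hough transform.
    gray = cv2.cvtColor(top_view, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180,
                            threshold=40, minLineLength=60, maxLineGap=10)
    return lines  # candidate lane segments in top-view coordinates
```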
International Conference on Intelligent Transportation Systems | 1999
Arata Takahashi; Yoshiki Ninomiya; Mitsuhiko Ohta; Koichi Tange
We describe a lane detection method using the RVP-I (Real-time Voting Processor-I) for driver assistance and automated driving systems. For robust lane detection in varied environments, we propose a method based on a complete search of a parameter space. The performance of the proposed method was evaluated on 1000 road images captured under poor conditions; a 95% detection rate was obtained, compared with 81% for our previous method. The proposed method is thus effective for robust lane detection.
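The complete search over the lane-parameter space can be sketched as exhaustive Hough-style voting over a discretized (theta, rho) grid. This NumPy version only illustrates the idea; the RVP-I performs the voting in dedicated hardware, and the grid and tolerance below are assumed values.

```python
# Illustrative sketch of a complete search over a discretized (theta, rho)
# lane-parameter grid: every cell is scored, so the global maximum is found
# without pruning. Grid resolution and tolerance are assumptions.
import numpy as np

def vote_lane_parameters(edge_points, thetas, rhos, tol=1.5):
    # edge_points: (N, 2) array of (x, y) edge pixels.
    votes = np.zeros((len(thetas), len(rhos)), dtype=np.int32)
    x, y = edge_points[:, 0], edge_points[:, 1]
    for i, t in enumerate(thetas):
        r = x * np.cos(t) + y * np.sin(t)      # signed distance per point
        for j, rho in enumerate(rhos):
            votes[i, j] = np.count_nonzero(np.abs(r - rho) < tol)
    best = np.unravel_index(votes.argmax(), votes.shape)
    return thetas[best[0]], rhos[best[1]]      # globally best lane line
```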
International Conference on Data Engineering | 2005
Mineki Soga; Takeo Kato; Mitsuhiko Ohta; Yoshiki Ninomiya
This paper presents a stereo-vision-based pedestrian detection method for automotive applications. Vision-based pedestrian detection is difficult because of the diversity of pedestrian appearances. The proposed method overcomes this difficulty mainly through the following two contributions. First, employing four directional features with the classifier increases robustness against small affine deformations of objects. Second, merging classification and tracking improves robustness against temporal changes in appearance by exploiting the temporal continuity of the classification score. Experiments on approximately 16 minutes of video sequences confirmed that the method can detect pedestrians with a low false detection rate.
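The second contribution, merging classification and tracking, can be illustrated by smoothing the per-frame classifier score along a track. The abstract does not specify the fusion rule, so an exponential moving average is used here purely as a stand-in.

```python
# Illustrative sketch of fusing per-frame classifier scores along a track:
# temporal smoothing suppresses single-frame appearance changes. The
# exponential moving average and its parameters are stand-ins, not the
# paper's actual fusion rule.
def track_score(prev_score, frame_score, alpha=0.7):
    """Blend the track's running score with the current frame's score."""
    return alpha * prev_score + (1.0 - alpha) * frame_score

def is_pedestrian(scores, alpha=0.7, threshold=0.5):
    """Decide from a sequence of per-frame scores for one tracked object."""
    s = scores[0]
    for score in scores[1:]:
        s = track_score(s, score, alpha)
    return s > threshold
```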
Sensors | 2016
Isamu Takai; Hiroyuki Matsubara; Mineki Soga; Mitsuhiko Ohta; Masaru Ogawa; Tatsuya Yamashita
A single-photon avalanche diode (SPAD) with enhanced near-infrared (NIR) sensitivity has been developed, based on 0.18 μm CMOS technology, for use in future automotive light detection and ranging (LIDAR) systems. The newly proposed SPAD, operating in Geiger mode, achieves a high NIR photon detection efficiency (PDE) without compromising the fill factor (FF), together with a low breakdown voltage of approximately 20.5 V. These properties are obtained by employing two custom layers designed to provide a fully depleted layer with a high electric-field profile. Experimental evaluation of the proposed SPAD reveals an FF of 33.1% and a PDE of 19.4% at 870 nm, the laser wavelength of our LIDAR system. The dark count rate (DCR) measurements show that the DCR levels of the proposed SPAD have only a small effect on ranging performance, even when the sample with the worst DCR (12.7 kcps) is used. Furthermore, with an eye toward vehicle installation, the DCR is measured over a wide temperature range of 25–132 °C. The ranging experiment demonstrates that target distances are successfully measured over the range of 50–180 cm.
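For reference, direct time-of-flight ranging uses the relation d = c·Δt/2, so the 50–180 cm range above corresponds to round-trip times of roughly 3.3–12 ns, which is why single-photon timing resolution matters. The small helper below only illustrates this arithmetic.

```python
# Time-of-flight ranging: distance is half the round-trip time times the
# speed of light. A 10 ns round trip corresponds to about 1.5 m.
C = 299_792_458.0  # speed of light, m/s

def tof_distance(round_trip_seconds):
    return C * round_trip_seconds / 2.0

print(tof_distance(10e-9))  # -> ~1.499 m
```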
Sensors | 2018
Seigo Ito; Shigeyoshi Hiratsuka; Mitsuhiko Ohta; Hiroyuki Matsubara; Masaru Ogawa
We present our third prototype sensor and a localization method for Automated Guided Vehicles (AGVs), for which a small imaging Light Detection and Ranging (LIDAR) sensor and fusion-based localization are fundamentally important. Our small imaging LIDAR, named the Single-Photon Avalanche Diode (SPAD) LIDAR, uses the time-of-flight method and SPAD arrays. A SPAD is a highly sensitive photodetector capable of detection at the single-photon level, and the SPAD LIDAR has two SPAD arrays on the same chip, one detecting laser light and one detecting environmental light. The SPAD LIDAR therefore simultaneously outputs range image data and monocular image data in the same coordinate system and requires no external calibration between outputs. Because AGVs travel both indoors and outdoors and are subject to vibration, this calibration-free structure is particularly useful for AGV applications. We also introduce a fusion-based localization method, named SPAD DCNN, which uses the SPAD LIDAR and employs a Deep Convolutional Neural Network (DCNN). SPAD DCNN fuses the outputs of the SPAD LIDAR: range image data, monocular image data, and peak intensity image data. The network has two outputs: a regression of the SPAD LIDAR's position and a classification of whether a target to be approached is present. Our third prototype sensor and the localization method were evaluated in an indoor environment along various AGV trajectories. The results show that the sensor and localization method improve localization accuracy.
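Because the three SPAD LIDAR outputs share one coordinate system, they can be stacked as input channels to a single network with a regression head and a classification head. The following PyTorch sketch only illustrates that structure; the layer sizes and head dimensions are assumptions, not the paper's architecture.

```python
# Minimal two-headed fusion network in the spirit of SPAD DCNN: range,
# monocular, and peak-intensity images are stacked as input channels.
# All layer sizes are illustrative assumptions.
import torch
import torch.nn as nn

class FusionNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.position = nn.Linear(32, 3)  # regression head: assumed (x, y, yaw)
        self.target = nn.Linear(32, 2)    # classification head: target present?

    def forward(self, x):                 # x: (B, 3, H, W) stacked images
        h = self.features(x)
        return self.position(h), self.target(h)
```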
Archive | 2008
Kozo Kato; Mitsuhiko Ohta
IEICE Transactions on Information and Systems | 2010
Hui Cao; Koichiro Yamaguchi; Mitsuhiko Ohta; Takashi Naito; Yoshiki Ninomiya
Systems and Computers in Japan | 2004
Yoshiki Ninomiya; Arata Takahashi; Mitsuhiko Ohta
Archive | 2009
Kozo Kato; Mitsuhiko Ohta
SAE World Congress & Exhibition | 2007
Masayuki Usami; Kenichi Ohue; Hideo Ikai; Mitsuhiko Ohta; Tomoyasu Tamaoki