Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Hiroyuki Matsubara is active.

Publication


Featured research published by Hiroyuki Matsubara.


IEEE Journal of Solid-State Circuits | 2013

A 100-m Range 10-Frame/s 340 × 96-Pixel Time-of-Flight Depth Sensor in 0.18-μm CMOS

Cristiano Niclass; Mineki Soga; Hiroyuki Matsubara; Satoru Kato; Manabu Kagami

This paper introduces a single-photon detection technique for time-of-flight distance ranging based on the temporal and spatial correlation of photons. A proof-of-concept prototype achieving depth imaging up to 100 m with a resolution of 340 × 96 pixels at 10 frames/s was implemented. At the core of the system, a sensor chip comprising 32 macro-pixels based on an array of single-photon avalanche diodes featuring an optical fill factor of 70% was fabricated in a 0.18-μm CMOS technology. The chip also comprises an array of 32 circuits capable of generating precise triggers upon correlation events, as well as sampling the number of photons involved in each correlation event, and an array of 32 12-b time-to-digital converters. Characterization of the TDC array led to -0.52 LSB and 0.73 LSB of differential and integral nonlinearities, respectively. Quantitative evaluation of the TOF sensor under strong solar background light, i.e., 80 klux, revealed a repeatability error better than 10 cm throughout the distance range of 100 m, thus leading to a relative precision of 0.1%. Under the same conditions, the relative nonlinearity error was 0.37%. In order to show the suitability of our approach in a real-world situation, experimental results in which the depth sensor was operated in a typical traffic scenario are also reported.
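The coincidence idea behind this abstract can be sketched in a few lines. This is an illustrative model only, not the authors' circuit: a macro-pixel triggers when at least `min_photons` detections fall within a `window_ns` coincidence window, which suppresses uncorrelated background photons (all names and thresholds below are assumptions).

```python
# Illustrative sketch of a spatiotemporal correlation trigger: background
# photons arrive spread out in time, while a laser echo produces a tight
# cluster of detections inside one macro-pixel.

def correlation_trigger(timestamps_ns, window_ns=2.0, min_photons=3):
    """Return the trigger time (ns) of the first coincidence event, or None."""
    ts = sorted(timestamps_ns)
    for i in range(len(ts)):
        # count detections falling inside the window that starts at ts[i]
        j = i
        while j < len(ts) and ts[j] - ts[i] <= window_ns:
            j += 1
        if j - i >= min_photons:
            return ts[i]
    return None

# Isolated background counts plus a three-photon cluster near 300 ns:
events = [10.0, 55.3, 120.4, 300.0, 300.6, 301.1, 450.2]
print(correlation_trigger(events))  # the cluster near 300 ns fires the trigger
```

Only clustered detections produce a trigger, which is why the real sensor could operate under 80 klux of solar background.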


Optics Express | 2012

Design and characterization of a 256 × 64-pixel single-photon imager in CMOS for a MEMS-based laser scanning time-of-flight sensor

Cristiano Niclass; Kota Ito; Mineki Soga; Hiroyuki Matsubara; Isao Aoyagi; Satoru Kato; Manabu Kagami

We introduce an optical time-of-flight image sensor taking advantage of a MEMS-based laser scanning device. Unlike previous approaches, our concept benefits from the high timing resolution and the digital signal flexibility of single-photon pixels in CMOS to allow for a nearly ideal cooperation between the image sensor and the scanning device. This technique enables a high signal-to-background light ratio to be obtained, while simultaneously relaxing the constraint on the size of the MEMS mirror. These conditions are critical for devising practical and low-cost depth sensors intended to operate in uncontrolled environments, such as outdoors. A proof-of-concept prototype capable of operating in real time was implemented. This paper focuses on the design and characterization of a 256 × 64-pixel image sensor, which also comprises an event-driven readout circuit, an array of 64 row-level high-throughput time-to-digital converters, and a 16-Gbit/s global readout circuit. Quantitative evaluation of the sensor under 2 klux of background light revealed a repeatability error of 13.5 cm throughout the distance range of 20 m.
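The "cooperation" between sensor and scanner can be pictured as follows. This is a hedged sketch, not the paper's circuit: the field-of-view value and the mapping are illustrative assumptions; only the 256-column count comes from the abstract. The idea is that only the column the mirror currently illuminates is enabled, so each pixel integrates background light for a small fraction of the frame.

```python
# Illustrative mapping from instantaneous mirror angle to the enabled sensor
# column (fov_deg is an assumed value, not taken from the paper).

def active_column(mirror_angle_deg, fov_deg=45.0, n_columns=256):
    """Map the instantaneous mirror angle to the column to enable."""
    # normalize angle from [-fov/2, +fov/2] to a column index
    frac = (mirror_angle_deg + fov_deg / 2) / fov_deg
    col = int(frac * n_columns)
    return min(max(col, 0), n_columns - 1)

print(active_column(0.0))    # center of the field of view -> middle column
print(active_column(-22.5))  # left edge of the assumed FOV -> column 0
```

Because each column is active only while the laser spot crosses it, the effective background integration time per pixel drops by roughly the column count.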


IEEE Journal of Solid-State Circuits | 2014

A 0.18-μm CMOS SoC for a 100-m-Range 10-Frame/s 200 × 96-Pixel Time-of-Flight Depth Sensor

Cristiano Niclass; Mineki Soga; Hiroyuki Matsubara; Masaru Ogawa; Manabu Kagami

With the emerging need for high-resolution light detection and ranging (LIDAR) technologies in advanced driver assistance systems (ADAS), we introduce a system-on-a-chip (SoC) that performs time-correlated single-photon counting and complete digital signal processing for a time-of-flight (TOF) sensor. At the core of the 0.18-μm CMOS SoC, we utilize linear arrays of 16 TOF and 32 intensity-only macro-pixels based on single-photon avalanche diodes in an original look-ahead concept, thus acquiring active TOF and passive intensity images simultaneously. The SoC also comprises an array of circuits capable of generating precise triggers upon spatiotemporal correlation events, an array of 64 12-b time-to-digital converters, and 768 kb of SRAM memory. The SoC provides the system-level electronics with a serial and low-bit-rate digital interface for: 1) multi-echo distance; 2) distance reliability; 3) intensity; and 4) passive-only intensity, thus mitigating system-level complexity and cost. A proof-of-concept prototype that achieves depth imaging up to 100 m with a resolution of 202 × 96 pixels at 10 frames/s has been implemented. Quantitative evaluation of the TOF sensor under strong solar background illuminance, i.e., 70 klux, revealed a repeatability error of 14.2 cm throughout the distance range of 100 m, thus leading to a relative precision of 0.14%. Under the same conditions, the relative nonlinearity error was 0.11%. In order to show the suitability of our approach for ADAS-related applications, experimental results in which the depth sensor was operated in typical traffic situations are also reported.
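The "multi-echo distance" output mentioned above can be illustrated with a toy model. The on-chip DSP is not public, so this sketch simply finds local maxima in a TCSPC-style histogram and converts bin indices to distances; the bin width and threshold are assumptions for illustration only.

```python
# Toy multi-echo extraction: each local peak in the time-of-flight histogram
# corresponds to one reflecting surface along the line of sight.

C = 299_792_458.0  # speed of light, m/s

def multi_echo(histogram, bin_ns=1.0, threshold=5):
    """Return (distance_m, counts) for each local peak above threshold."""
    echoes = []
    for i in range(1, len(histogram) - 1):
        if (histogram[i] >= threshold
                and histogram[i] > histogram[i - 1]
                and histogram[i] >= histogram[i + 1]):
            t = i * bin_ns * 1e-9          # round-trip time of flight
            echoes.append((C * t / 2, histogram[i]))
    return echoes

# Two targets along one line of sight (e.g. a fence and a wall behind it):
hist = [0, 1, 0, 9, 2, 0, 0, 6, 1, 0]
for d, n in multi_echo(hist):
    print(f"echo at {d:.2f} m with {n} counts")
```

Reporting only the peak list rather than the raw histogram is what keeps the chip's output interface low-bit-rate.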


IEEE Photonics Journal | 2013

System Design and Performance Characterization of a MEMS-Based Laser Scanning Time-of-Flight Sensor Based on a 256 × 64-pixel Single-Photon Imager

Kota Ito; Cristiano Niclass; Isao Aoyagi; Hiroyuki Matsubara; Mineki Soga; Satoru Kato; Mitsutoshi Maeda; Manabu Kagami

This paper reports on a light detection and ranging (LIDAR) system that incorporates a microelectromechanical-system (MEMS) mirror scanner and a single-photon imager. The proposed architecture enables a high signal-to-background ratio due to pixel-level synchronization of the single-photon imager with the MEMS mirror. It also allows the receiving optics to feature a large aperture while utilizing a small MEMS device. The MEMS actuator achieves a mechanical scanning amplitude of ±4° horizontally and ±3° vertically, while the field of view of the overall sensor is 45 by 110. Distance images were acquired outdoors in order to qualitatively evaluate the sensor's imaging capabilities. Quantitative ranging-performance characterization carried out under 10 klx of ambient light revealed a precision of 14.5 cm throughout the distance range up to 25 m, thus leading to a relative precision of 0.58%.
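The relative-precision figures quoted throughout these abstracts follow from one simple ratio. A quick check of the 14.5 cm / 25 m number above, and of the JSSC 2013 sensor's 10 cm / 100 m figure:

```python
# One-sigma repeatability expressed as a fraction of the full distance range.

def relative_precision(sigma_m, full_range_m):
    """Relative precision = repeatability error / full range."""
    return sigma_m / full_range_m

print(f"{relative_precision(0.145, 25.0):.2%}")   # this sensor: 0.58%
print(f"{relative_precision(0.10, 100.0):.2%}")   # 100-m-range sensor: 0.10%
```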


Sensors | 2016

Single-Photon Avalanche Diode with Enhanced NIR-Sensitivity for Automotive LIDAR Systems

Isamu Takai; Hiroyuki Matsubara; Mineki Soga; Mitsuhiko Ohta; Masaru Ogawa; Tatsuya Yamashita

A single-photon avalanche diode (SPAD) with enhanced near-infrared (NIR) sensitivity has been developed, based on 0.18 μm CMOS technology, for use in future automotive light detection and ranging (LIDAR) systems. The newly proposed SPAD operating in Geiger mode achieves a high NIR photon detection efficiency (PDE) without compromising the fill factor (FF), and a low breakdown voltage of approximately 20.5 V. These properties are obtained by employing two custom layers that are designed to provide a full-depletion layer with a high electric field profile. Experimental evaluation of the proposed SPAD reveals an FF of 33.1% and a PDE of 19.4% at 870 nm, which is the laser wavelength of our LIDAR system. The dark count rate (DCR) measurements show that the DCR levels of the proposed SPAD have only a small effect on the ranging performance, even when the SPAD with the worst DCR (12.7 kcps) among the test samples is used. Furthermore, with an eye toward vehicle installation, the DCR is measured over a wide temperature range of 25–132 °C. The ranging experiment demonstrates that target distances are successfully measured in the distance range of 50–180 cm.
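To see why a 12.7 kcps dark count rate "has a small effect on the ranging performance", one can estimate the chance that a dark count lands inside a single measurement window at all. Assuming Poisson-distributed dark counts and a 1 µs window (the window length is an illustrative assumption, not a value from the paper):

```python
# Probability of at least one dark count inside one timing window,
# assuming Poisson arrival statistics for dark counts.
import math

def dark_count_probability(dcr_cps, window_s):
    """P(>=1 dark count) = 1 - exp(-DCR * T)."""
    return 1.0 - math.exp(-dcr_cps * window_s)

p = dark_count_probability(12_700, 1e-6)
print(f"{p:.4f}")  # roughly a 1.3% chance per 1 µs window
```

Even the worst test sample therefore contaminates only about one measurement window in eighty under this assumed gating.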


European Solid-State Circuits Conference | 2011

A 100m-range 10-frame/s 340 × 96-pixel time-of-flight depth sensor in 0.18μm CMOS

Cristiano Niclass; Mineki Soga; Hiroyuki Matsubara; Satoru Kato

This paper introduces a high-performance optical depth sensor in a 0.18-μm CMOS technology. At the core of the sensor, macro-pixels consisting of 6 × 2 single-photon detectors enable accurate and selective time-of-flight measurements by taking advantage of the temporal and spatial correlation of photons. An array of 32 high-throughput time-to-digital converters allows for the digitization of time-of-flight data with a resolution of 208 ps within a range of 853 ns, thus resolving distances up to 128 m. Quantitative characterization of the sensor chip is reported. Depth-map data acquired in a real-world situation illustrate the effectiveness of the approach in a road-traffic environment.
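The range figures in this abstract follow directly from the TDC numbers: a 208 ps LSB over an 853 ns full scale, with distance = c·t/2 for a round-trip time of flight t.

```python
# Converting the TDC's timing numbers into distances.

C = 299_792_458.0  # speed of light, m/s

def tof_distance(t_seconds):
    """Round-trip time of flight converted to target distance (m)."""
    return C * t_seconds / 2

print(f"distance LSB: {tof_distance(208e-12) * 100:.1f} cm")  # ~3.1 cm
print(f"max range:    {tof_distance(853e-9):.1f} m")          # ~128 m
```

The 853 ns full scale is simply 2^12 bins of 208 ps from the 12-b converters, which is why the sensor tops out near 128 m.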


IEEE Transactions on Intelligent Transportation Systems | 2013

Demonstration of In-Car Doppler Laser Radar at 1.55 μm for Range and Speed Measurement

Xuesong Mao; Daisuke Inoue; Hiroyuki Matsubara; Manabu Kagami

Laser radar provides better spatial resolution than millimeter-wave radar (MWR) due to the high directivity of the laser beam. However, commercial in-car laser radar approximates the target speed by a range-differentiation method, which is both time-consuming and prone to large errors. In this paper, a new Doppler laser radar scheme for simultaneously measuring target range and speed in automotive applications is presented. The scheme includes a new laser radar architecture, a new method of modulating the transmitted signal, and a method for calculating the range and speed from the signal returned from the target. The length of the transmitted signal is several microseconds, opening the possibility of realizing a high scan speed in automotive applications. In addition, MATLAB/Simulink simulations were carried out to validate the proposed scheme. Finally, an experimental demonstration of simultaneous range and speed measurement was performed. The moving target for the experiment was a highly reflective sheet attached to an electric grinder.
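The speed-measurement side of such a Doppler scheme rests on one relation: a target closing at speed v shifts the returned optical frequency by 2v/λ. A minimal sketch, assuming a 1550 nm operating wavelength purely for illustration (the wavelength is not stated in this abstract):

```python
# Doppler shift for a closing target, and the inverse used to recover speed.

def doppler_shift_hz(speed_mps, wavelength_m=1550e-9):
    """Doppler frequency shift for a target closing at speed_mps."""
    return 2.0 * speed_mps / wavelength_m

def speed_from_shift(shift_hz, wavelength_m=1550e-9):
    """Recover closing speed from a measured Doppler shift."""
    return shift_hz * wavelength_m / 2.0

f_d = doppler_shift_hz(30.0)   # ~108 km/h closing speed
print(f"{f_d / 1e6:.1f} MHz")  # tens of MHz: measurable with fast electronics
```

A direct frequency measurement like this avoids the errors of differentiating successive range samples.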


Proceedings of SPIE | 2011


Daisuke Inoue; Tadashi Ichikawa; Hiroyuki Matsubara; Xueon Mao; Mitsutoshi Maeda; Chie Nagashima; Manabu Kagami

We developed a LIDAR system with a sensor head as small as 22 cc, in spite of the inclusion of a scanning mechanism. This LIDAR system not only has a small body but is also highly sensitive. It is based on time-of-flight measurements and incorporates an optical fiber. The main feature of our system is the use of optical amplifiers for both the transmitter and the receiver, which enabled us to exceed the detection limit imposed by thermal noise. In conventional LIDAR systems, the detection limit is determined by thermal noise, because the avalanche photodiodes (APDs) and transimpedance amplifiers (TIAs) they use detect the received signals directly. In our LIDAR system, the received signal is amplified by an optical fiber amplifier in front of the photodiode and the TIA. Therefore, our system can boost the signal level before the weak incoming signal is buried in thermal noise. There are conditions under which the noise figure of the combination of an optical fiber amplifier and a photodiode is superior to that of an avalanche photodiode. We optimized the gains of the optical fiber amplifier and the TIA in our LIDAR system such that it is capable of detecting a single photon. As a result, the detection limit of our LIDAR system is determined by shot noise. This small and highly sensitive measurement technology shows great potential for use in LIDAR with an optical preamplifier.
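The thermal-noise versus shot-noise distinction above can be captured with a toy comparison. All numbers here are illustrative assumptions, not values from the paper: an ideal shot-noise-limited photon counter has SNR ≈ √N for N detected photons, while a thermal-noise-limited receiver must first clear a fixed electronic noise floor.

```python
# Toy SNR models contrasting shot-noise-limited and thermal-noise-limited
# detection (noise_floor_photons is an assumed, illustrative parameter).
import math

def shot_limited_snr(n_photons):
    """SNR of an ideal photon-counting receiver (Poisson statistics)."""
    return math.sqrt(n_photons)

def thermal_limited_snr(n_photons, noise_floor_photons=100.0):
    """SNR when a fixed thermal-noise floor dominates."""
    return n_photons / noise_floor_photons

# For a weak 25-photon echo, the shot-noise limit is far more favorable:
print(shot_limited_snr(25.0), thermal_limited_snr(25.0))
```

This is why amplifying the optical signal ahead of the photodiode, so that shot noise rather than thermal noise sets the floor, pays off for weak echoes.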


International Solid-State Circuits Conference | 2013

A 0.18-μm CMOS SoC for a 100-m-Range 10-Frame/s 200 × 96-Pixel Time-of-Flight Depth Sensor

Cristiano Niclass; Mineki Soga; Hiroyuki Matsubara; Masaru Ogawa; Manabu Kagami

A number of potentially low-cost time-of-flight (ToF) 3D image sensors aimed at consumer electronics applications have recently appeared in CMOS. Diffused-light sensors taking advantage of SPAD pixels, as well as conventional and pinned-photodiode lock-in pixels, demonstrate centimeter-level ranging performance at distances typically up to 6 m and, with few exceptions, under low background light (BG) conditions. In those approaches, however, performance tends to deteriorate rapidly in severe BG conditions, such as outdoors, and long-distance ranges have yet to be reported. Another common limitation is their inability to cope with multi-echo target environments. A higher optical signal-to-background ratio (SBR), and hence better performance, is typically achieved by laser-scanning approaches, e.g., those employing polygonal or MEMS mirrors. With the emerging need for high-resolution light detection and ranging (LIDAR) technologies in advanced driving-assistance systems (ADAS), we introduce an SoC that performs time-correlated single-photon counting (TCSPC) and complete DSP for a 100-m-range ToF sensor. The chip provides the system-level electronics with a serial and low-bit-rate digital interface for multi-echo distance, distance reliability, intensity, and BG-only intensity, thus mitigating system-level complexity and cost.


Sensors | 2018

Small Imaging Depth LIDAR and DCNN-Based Localization for Automated Guided Vehicle

Seigo Ito; Shigeyoshi Hiratsuka; Mitsuhiko Ohta; Hiroyuki Matsubara; Masaru Ogawa

We present our third prototype sensor and a localization method for Automated Guided Vehicles (AGVs), for which small imaging LIght Detection and Ranging (LIDAR) and fusion-based localization are fundamentally important. Our small imaging LIDAR, named the Single-Photon Avalanche Diode (SPAD) LIDAR, uses a time-of-flight method and SPAD arrays. A SPAD is a highly sensitive photodetector capable of detecting at the single-photon level, and the SPAD LIDAR has two SPAD arrays on the same chip for detection of laser light and environmental light. Therefore, the SPAD LIDAR simultaneously outputs range image data and monocular image data with the same coordinate system and does not require external calibration among outputs. As AGVs travel both indoors and outdoors under vibration, this calibration-less structure is particularly useful for AGV applications. We also introduce a fusion-based localization method, named SPAD DCNN, which uses the SPAD LIDAR and employs a Deep Convolutional Neural Network (DCNN). SPAD DCNN fuses the outputs of the SPAD LIDAR: range image data, monocular image data, and peak-intensity image data. The SPAD DCNN has two outputs: the regression result for the position of the SPAD LIDAR and the classification result for the existence of a target to be approached. Our third prototype sensor and the localization method are evaluated in an indoor environment over various assumed AGV trajectories. The results show that the sensor and localization method improve localization accuracy.
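The input-side fusion enabled by the co-registered outputs can be sketched simply. The image sizes below are illustrative, not the SPAD LIDAR's actual resolution: because the range, monocular, and peak-intensity images share one coordinate system, they can be stacked channel-wise with no external calibration before being fed to a network.

```python
# Channel-wise stacking of three co-registered images into an H x W x 3 tensor,
# the kind of fused input a DCNN could consume directly.

def stack_channels(range_img, mono_img, peak_img):
    """Stack three co-registered H x W images into an H x W x 3 structure."""
    assert len(range_img) == len(mono_img) == len(peak_img)
    stacked = []
    for r_row, m_row, p_row in zip(range_img, mono_img, peak_img):
        stacked.append([[r, m, p] for r, m, p in zip(r_row, m_row, p_row)])
    return stacked

# Tiny 2 x 2 example: each fused pixel carries (range, mono, peak) features.
rng = [[1.5, 2.0], [2.5, 3.0]]
mono = [[10, 20], [30, 40]]
peak = [[3, 1], [4, 2]]
fused = stack_channels(rng, mono, peak)
print(fused[0][0])  # [1.5, 10, 3]
```

With separately mounted sensors, this step would instead require an extrinsic calibration that drifts under vibration, which is the point of the single-chip design.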

Collaboration


Dive into Hiroyuki Matsubara's collaboration.
