Ichiro Masaki
Massachusetts Institute of Technology
Publication
Featured research published by Ichiro Masaki.
IEEE Transactions on Intelligent Transportation Systems | 2002
Takeo Kato; Yoshiki Ninomiya; Ichiro Masaki
To avoid collision with an object that blocks the course of a vehicle, it is necessary to measure the distance to the object and to detect the positions of its side boundaries. In this paper, an object detection method based on the fusion of millimeter-wave radar and a single video camera is proposed. We consider the method the least expensive solution, since at least one camera is already required for lane-marking detection. In the method, the distance is measured by the radar, and the boundaries are found from an image sequence using a motion stereo technique aided by the radar-measured distance. Since the method does not depend on the appearance of objects, it is capable of detecting not only automobiles but also other objects. Object detection by the method was confirmed through an experiment in which both a stationary and a moving object were detected, including a pedestrian as well as a vehicle.
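A minimal sketch of the core idea, assuming a pinhole camera with a known focus of expansion and per-column horizontal optical flow as inputs (both hypothetical here, not the paper's actual pipeline): columns whose image motion matches the expansion predicted from the radar range are marked as belonging to the object, and the outermost matching columns give its side boundaries.

```python
import numpy as np

def side_boundaries(flow_x, cols, radar_range, closing_speed, dt, foe_x, tol=0.5):
    """Mark columns whose horizontal image motion matches the expansion
    predicted from the radar range, then return the outermost matching
    columns as the object's side boundaries (illustrative only).

    flow_x        : measured horizontal motion per column (pixels/frame)
    cols          : column coordinates (pixels)
    radar_range   : distance to the object reported by the radar (m)
    closing_speed : relative speed toward the object (m/s)
    dt            : frame interval (s)
    foe_x         : column of the focus of expansion (pixels)
    """
    # Under forward motion, a point at depth Z drifts away from the focus
    # of expansion by roughly (x - foe_x) * closing_speed * dt / Z pixels.
    predicted = (cols - foe_x) * closing_speed * dt / radar_range
    on_object = np.abs(flow_x - predicted) < tol
    if not on_object.any():
        return None
    matching = cols[on_object]
    return matching.min(), matching.max()
```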
IEEE Journal of Solid-State Circuits | 2004
Pablo M. Acosta-Serafini; Ichiro Masaki; Charles G. Sodini
Many tasks performed by machine vision systems involve processing of natural scenes with large intra-frame illumination ratios. Thus, wide dynamic range visible spectrum image sensors are required to achieve adequate processing performance and reliability. An image sensor implementing an algorithm that linearly increases the illumination dynamic range of solid-state pixels is presented. Optimal exposure is achieved with a predictive pixel saturation decision that allows for multiple integration intervals of different duration to run concurrently for different pixels while keeping the sensor frame rate constant. A proof-of-concept chip was fabricated in a 0.18-µm CMOS process. Added functionality to standard imagers is mainly concentrated off-pixel so fill factor is not sacrificed. Measured data corroborates the algorithm functionality.
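A rough behavioural sketch of the predictive scheme for a single pixel, in normalised units; the check times, full-well value, and reset rule here are illustrative assumptions, not the published circuit's parameters.

```python
def predictive_multiple_sampling(light, t_frame=1.0, checks=(0.5, 0.75, 0.875), full_well=1.0):
    """One pixel under a predictive multiple-integration scheme: at each
    check time the pixel predicts whether finishing the frame would
    saturate it and, if so, resets and restarts with the shorter
    remaining interval.  The output is rescaled by t_frame / t_int,
    giving a linear response over a wider illumination range."""
    t_start = 0.0
    for t_check in checks:
        value = light * (t_check - t_start)
        predicted = value + light * (t_frame - t_check)   # end-of-frame forecast
        if predicted >= full_well:
            t_start = t_check                             # conditional reset
    t_int = t_frame - t_start
    value = min(light * t_int, full_well)
    return value * (t_frame / t_int)
```

With these check points the shortest interval is one eighth of the frame, so illuminations up to eight times the single-interval saturation level stay on the linear part of the response while the frame rate is unchanged.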
Intelligent Vehicles Symposium | 2003
Yajun Fang; Keiichi Yamada; Y. Ninomiya; Berthold K. P. Horn; Ichiro Masaki
In order to improve the safety of night driving, automatic pedestrian detection has received more and more attention. Since reliability is the most important issue in these systems, multi-dimensional-feature-based segmentation and classification needs to be introduced, and each feature axis should be efficient and as independent of the others as possible. To choose effective multi-dimensional features for infrared-image-based detection, this paper first investigates the possibility of reusing features developed for visible images by analyzing the differing properties of infrared and visible images. To take advantage of the unique properties of infrared images, we propose the following novel features: a special projection feature for segmentation, and a two-axis pixel-distribution feature for classification. The segmentation based on the new features does not depend on many assumptions and is shape-independent, thus avoiding brute-force multiple templates and multi-scale pyramid searching. The novel classification features include a histogram feature and an inertial feature that are independent and complementary, so the two-dimensional fusion-based classification significantly improves detection accuracy. These proposed features are independent of the conventional pixel-array feature and can be further fused with other general pedestrian detection features to improve simplicity, speed, and reliability.
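A hedged illustration of what such features might look like in code; the thresholds and the second-moment ("inertial") definition below are chosen for illustration, not taken from the paper.

```python
import numpy as np

def candidate_columns(ir_image, thresh=200, min_count=10):
    """Projection-style segmentation cue: count bright (warm) pixels per
    column and keep the columns where the count is large enough."""
    bright = ir_image > thresh
    projection = bright.sum(axis=0)
    return np.flatnonzero(projection > min_count)

def classification_features(window, thresh=200, bins=8):
    """Two complementary window features: an intensity histogram and a
    normalised second moment of the bright-pixel distribution about the
    window centre (a stand-in for an 'inertial' feature)."""
    hist, _ = np.histogram(window, bins=bins, range=(0, 255), density=True)
    ys, xs = np.nonzero(window > thresh)
    if xs.size == 0:
        return hist, 0.0
    cx = window.shape[1] / 2.0
    inertia = np.mean((xs - cx) ** 2) / (cx ** 2 + 1e-6)
    return hist, inertia
```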
IEEE Intelligent Systems & Their Applications | 1998
Ichiro Masaki
ITS can use machine vision to detect lane markings, vehicles, pedestrians, road signs, traffic conditions, traffic incidents, and even driver drowsiness. Challenges include making machine-vision systems less expensive, more compact, and more robust in various weather and traffic conditions.
IEEE Intelligent Vehicles Symposium | 2010
Berkin Bilgic; Berthold K. P. Horn; Ichiro Masaki
We present an integral image algorithm that can run in real-time on a Graphics Processing Unit (GPU). Our system exploits the parallelism in the computation via the NVIDIA CUDA programming model, a software platform for solving non-graphics problems in a massively parallel, high-performance fashion. The implementation builds on the work-efficient scan algorithm described elsewhere. By treating the rows and the columns of the target image as independent input arrays for the scan algorithm, our method exposes a second level of parallelism in the problem. We compare the performance of the parallel approach running on the GPU with a sequential CPU implementation across a range of image sizes and report a speedup by a factor of 8 for a 4-megapixel input. We further investigate the impact of packed vector data types on performance, as well as the effect of double-precision arithmetic on the GPU.
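A minimal NumPy stand-in for the row-then-column decomposition, with prefix sums replacing the CUDA scan kernels; the box_sum helper shows the constant-time rectangle sums the integral image exists to provide.

```python
import numpy as np

def integral_image(img):
    """Integral image from two independent scan passes: every row is
    scanned, then every column of the result.  On the GPU, each row
    (and then each column) is an independent input to the scan kernel."""
    rows = np.cumsum(img, axis=1, dtype=np.float64)
    return np.cumsum(rows, axis=0)

def box_sum(ii, r0, c0, r1, c1):
    """Sum of img[r0:r1+1, c0:c1+1] in constant time from the integral image."""
    total = ii[r1, c1]
    if r0 > 0:
        total -= ii[r0 - 1, c1]
    if c0 > 0:
        total -= ii[r1, c0 - 1]
    if r0 > 0 and c0 > 0:
        total += ii[r0 - 1, c0 - 1]
    return total
```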
IEEE Transactions on Intelligent Transportation Systems | 2002
Yajun Fang; Ichiro Masaki; Berthold K. P. Horn
Dynamic environment interpretation is of special interest for intelligent vehicle systems. It is expected to provide lane information, target depth, and the image positions of targets within given depth ranges. Typical segmentation algorithms cannot solve these problems satisfactorily, especially under the high-speed requirements of a real-time environment. Furthermore, the variation of image positions and sizes of targets creates difficulties for tracking. In this paper, we propose a sensor-fusion method that can make use of coarse target depth information to segment target locations in video images. Coarse depth ranges can be provided by radar systems or by a vision-based algorithm introduced in this paper. The new segmentation method offers more accuracy and robustness while decreasing the computational load.
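One way coarse depth can bound the search, sketched under a pinhole-camera assumption with a hypothetical focal length in pixels; this illustrates the principle rather than the paper's segmentation algorithm.

```python
import numpy as np

def expected_target_size(depth_range, target_height=1.5, target_width=1.8, focal_px=800):
    """Map a coarse depth range (e.g. from radar) to the band of image
    sizes a target of roughly known physical size can occupy, so that
    segmentation and tracking only search windows in that band."""
    z_near, z_far = depth_range
    heights = focal_px * target_height / np.array([z_far, z_near])
    widths = focal_px * target_width / np.array([z_far, z_near])
    return (heights.min(), heights.max()), (widths.min(), widths.max())
```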
IEEE Intelligent Vehicles Symposium | 2010
Berkin Bilgic; Berthold K. P. Horn; Ichiro Masaki
We investigate a fast pedestrian localization framework that integrates the cascade-of-rejectors approach with Histograms of Oriented Gradients (HoG) features on a data-parallel architecture. The salient features of humans are captured by HoG blocks of variable sizes and locations, which are chosen by the AdaBoost algorithm from a large set of possible blocks. We use the integral image representation for histogram computation and a rejection cascade in a sliding-window manner, both of which can be implemented in a data-parallel fashion. Utilizing the NVIDIA CUDA framework to realize this method on a Graphics Processing Unit (GPU), we report a speedup by a factor of 13 over our CPU implementation. For a 1280×960 image, our parallel technique attains a processing speed of 2.5 to 8 frames per second depending on the image scanning density, which is similar to the recent GPU implementation of the original HoG algorithm in [3].
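A small sketch of the rejection cascade itself; the per-window independence is what maps naturally onto a data-parallel GPU, and the score functions and thresholds below are placeholders.

```python
def detect(windows, stages):
    """Cascade-of-rejectors over candidate windows.  `windows` yields
    feature vectors (e.g. HoG block responses); `stages` is a list of
    (score_fn, threshold) pairs ordered from cheap to expensive.  A
    window is discarded at the first stage whose accumulated score falls
    below the stage threshold, so most windows exit early."""
    detections = []
    for idx, features in enumerate(windows):
        score, accepted = 0.0, True
        for score_fn, threshold in stages:
            score += score_fn(features)
            if score < threshold:      # early rejection
                accepted = False
                break
        if accepted:
            detections.append(idx)
    return detections
```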
International Conference on VLSI Design | 1998
Charles G. Sodini; Jeffrey C. Gealow; Zubair A. Talib; Ichiro Masaki
Typical low-level image processing tasks require thousands of operations per pixel for each input image. The structure of the tasks suggests employing an array of processing elements, one per pixel, sharing instructions issued by a single controller. To build pixel-parallel image processing hardware for microcomputer systems, large processing element arrays must be produced at low cost. Integrated circuit designers have had tremendous success creating dense and inexpensive semiconductor memories. They handcraft circuits to perform essential functions using very little silicon area, then replicate the circuits to form large memory arrays. This paper shows how the same technique may be applied to create a dense integrated processing element array.
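A software analogy of the single-controller, one-element-per-pixel model: whole-array NumPy operations stand in for the lock-step hardware array, here running a crude horizontal edge detector.

```python
import numpy as np

def pixel_parallel_edges(image, thresh=30):
    """Every 'processing element' executes the same short instruction
    sequence on its own pixel: read the right-hand neighbour, take the
    absolute difference, threshold."""
    pe = image.astype(np.int32)
    right = np.roll(pe, -1, axis=1)   # each element reads its right neighbour
    diff = np.abs(pe - right)         # the same subtract issued to every element
    return (diff > thresh).astype(np.uint8)
```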
Custom Integrated Circuits Conference | 1997
David A. Martin; Hae-Seung Lee; Ichiro Masaki
A programmable analog arithmetic circuit which can perform addition, subtraction, multiplication, and division at 7 bits of resolution is presented. This circuit is used as the ALU for a mixed-signal array processor designed for early vision applications. The analog arithmetic circuit enables the processor to operate with the low power and low area of a dedicated analog circuit while retaining the flexibility of a digital processor. The processor was tested with an edge detection algorithm and a sub-pixel resolution algorithm. A 1 cm square array of the mixed-signal processor cells in 0.8-µm CMOS with a 5 V power supply would dissipate 1 W at 420 MIPS.
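A purely behavioural, numerical model of a 7-bit-resolution arithmetic cell (clip to full scale, then quantise); it mimics the resolution limit, not the analog circuit itself, and the full-scale value is an assumption.

```python
def analog_alu(a, b, op, bits=7, full_scale=1.0):
    """Compute the exact result, clip it to the full-scale range, and
    round it to 2**bits levels, emulating the limited resolution of a
    programmable analog arithmetic cell."""
    results = {"add": a + b, "sub": a - b, "mul": a * b,
               "div": a / b if b != 0 else full_scale}
    value = max(-full_scale, min(full_scale, results[op]))
    step = 2 * full_scale / (2 ** bits)
    return round(value / step) * step
```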
IEEE Transactions on Intelligent Transportation Systems | 2004
Pablo M. Acosta-Serafini; Ichiro Masaki; Charles G. Sodini
Many tasks performed by intelligent transportation systems involve processing of natural scenes. Wide dynamic range visible spectrum image sensors are required to achieve optimal processing performance. Recently several approaches that rely on the multiple sampling technique to achieve wide dynamic range have been presented. The behavior of a novel predictive variant of these algorithms is analyzed together with its key assumptions and limits to its performance and flexibility.
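A back-of-the-envelope account of why multiple sampling widens dynamic range, with illustrative numbers rather than figures from the paper: each halving of the shortest permitted integration interval doubles the brightest illumination that still maps linearly, adding about 6 dB per check point.

```python
import math

def dynamic_range_extension(num_checks, base_dr_db=60.0):
    """Dynamic range of a multiple-sampling sensor whose shortest
    integration interval is the frame time divided by 2**num_checks,
    starting from an assumed base dynamic range in dB."""
    extension_db = 20.0 * math.log10(2 ** num_checks)
    return base_dr_db + extension_db
```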