Publication


Featured research published by Masataka Kagesawa.


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2004

Transparent surface modeling from a pair of polarization images

Daisuke Miyazaki; Masataka Kagesawa; Katsushi Ikeuchi

We propose a method for measuring the surface shape of transparent objects by using a polarizing filter. Generally, the light reflected from an object is partially polarized. The degree of polarization depends upon the incident angle, which, in turn, depends upon the surface normal. Therefore, we can obtain the surface normals of an object by observing the degree of polarization at each surface point. Unfortunately, the correspondence between the degree of polarization and the surface normal is not one to one, so to obtain the correct surface normal we have to resolve this ambiguity. In this paper, we introduce a method to resolve the ambiguity by comparing the polarization data from two object poses: the normal position and a position tilted by a small angle. We also discuss the geometrical features of the object surface and propose a method for matching the two sets of polarization data at identical points on the object surface.
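The relationship the abstract relies on, between the degree of polarization and the incident angle, follows from the Fresnel reflectances, and the resulting curve is not monotonic: it peaks at the Brewster angle, so one measured value generally corresponds to two candidate angles. The Python sketch below illustrates that ambiguity numerically; it assumes an unpolarized source and a refractive index of n = 1.5, and the function names are illustrative rather than taken from the paper.

```python
import numpy as np

def degree_of_polarization(theta, n=1.5):
    """Degree of polarization of light specularly reflected from a
    dielectric with refractive index n (theta: incident angle, radians),
    computed from the Fresnel reflectances."""
    sin_t = np.sin(theta) / n              # Snell's law: refraction angle
    cos_t = np.sqrt(1.0 - sin_t ** 2)
    cos_i = np.cos(theta)
    rs = ((cos_i - n * cos_t) / (cos_i + n * cos_t)) ** 2   # s-polarized reflectance
    rp = ((n * cos_i - cos_t) / (n * cos_i + cos_t)) ** 2   # p-polarized reflectance
    return (rs - rp) / (rs + rp + 1e-12)

def candidate_angles(rho_measured, n=1.5, samples=10000):
    """Invert the non-monotonic DOP curve: one measured value maps to two
    candidate incident angles, one on each side of the Brewster angle.
    Resolving this two-fold ambiguity is what the paper's small-tilt
    comparison is for."""
    thetas = np.linspace(1e-3, np.pi / 2 - 1e-3, samples)
    rhos = degree_of_polarization(thetas, n)
    peak = int(np.argmax(rhos))            # Brewster angle: DOP reaches 1
    below = thetas[np.argmin(np.abs(rhos[:peak] - rho_measured))]
    above = thetas[peak + np.argmin(np.abs(rhos[peak:] - rho_measured))]
    return below, above

if __name__ == "__main__":
    lo, hi = candidate_angles(0.6)
    print("candidate incident angles (deg):", np.degrees(lo), np.degrees(hi))
```

Comparing how the measured degree of polarization changes between the two object poses is what lets the paper pick the correct candidate at each surface point.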


International Conference on Intelligent Transportation Systems | 1999

Recognizing vehicles in infra-red images using IMAP parallel vision board

Masataka Kagesawa; Shinichi Ueno; Katsushi Ikeuchi

We describe a method to recognize vehicles, in particular to identify their make and type. Our system employs infra-red images so that the same algorithm can be used both in the daytime and at night. The algorithm is based on vector quantization, originally proposed by Krumm (1997), and is implemented on the IMAP parallel image-processing board. Our system builds a compressed database of local features of a target vehicle from training images given in advance, and then matches the set of local features found in the input image against those in the training images for recognition. This method has the following three advantages: it can detect a vehicle even if part of it is occluded; it can detect a vehicle even if it is translated because it has drifted out of its lane; and we do not need to segment the vehicle from the input images. Through outdoor experiments, we have confirmed these advantages.
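As a rough illustration of the recognition scheme described above, local window features compressed by vector quantization and matched independently, here is a minimal Python sketch. The window size, codebook size, and distance threshold are arbitrary illustrative values, and the code stands in for the general idea rather than the IMAP implementation.

```python
import numpy as np

def extract_windows(image, size=8, stride=4):
    """Collect small local windows (flattened patches) from a grayscale
    image; each window acts as one local feature."""
    h, w = image.shape
    feats = []
    for y in range(0, h - size + 1, stride):
        for x in range(0, w - size + 1, stride):
            feats.append(image[y:y + size, x:x + size].astype(np.float32).ravel())
    return np.array(feats)

def build_codebook(train_images, k=64, iters=20, seed=0):
    """Compress all training windows with plain k-means vector
    quantization; the resulting codebook plays the role of the
    'compressed database' of local features."""
    feats = np.vstack([extract_windows(im) for im in train_images])
    rng = np.random.default_rng(seed)
    codebook = feats[rng.choice(len(feats), size=k, replace=False)].copy()
    for _ in range(iters):
        dist = ((feats[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
        assign = dist.argmin(axis=1)
        for c in range(k):
            members = feats[assign == c]
            if len(members):
                codebook[c] = members.mean(axis=0)
    return codebook

def match_score(image, codebook, rms_threshold=20.0):
    """Count how many local windows of the input image lie close to some
    codeword. Each window votes independently, so a partially occluded
    or laterally shifted vehicle still accumulates votes."""
    feats = extract_windows(image)
    dist = ((feats[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    rms = np.sqrt(dist.min(axis=1) / feats.shape[1])
    return int((rms < rms_threshold).sum())
```

Because every window votes on its own, losing some windows to occlusion or lane drift only lowers the score instead of breaking the match, which is where the three advantages listed in the abstract come from.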


IEEE Transactions on Intelligent Transportation Systems | 2001

Recognizing vehicles in infrared images using IMAP parallel vision board

Masataka Kagesawa; Shinichi Ueno; Katsushi Ikeuchi

This paper describes a method for vehicle recognition, in particular for recognizing make and model. Our system takes into account the fact that vehicles of the same make and model come in different colors; it employs infrared (IR) images, thereby eliminating color differences. The use of IR images also enables us to use the same algorithm both day and night. This ability is particularly important because the algorithm must be able to locate many feature points, especially at night. Our algorithm is based on a configuration of local features. The system first builds a compressed database of local features of a target vehicle from training images given in advance; it then matches the set of local features found in the input image against those in the training images for recognition. This method has the following three advantages: (1) it can detect a vehicle even if part of it is occluded; (2) it can detect a vehicle even if it is translated because it has moved out of its lane; and (3) it does not require us to segment the vehicle from the input images. We have two implementations of the algorithm: one is referred to as the eigenwindow method, while the other is called the vector-quantization method. The former is good at recognition but not very fast; the latter is less accurate but suits the IMAP parallel image-processing board and can therefore be fast. In both implementations, the above-mentioned advantages have been confirmed by outdoor experiments.
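The abstract contrasts two implementations; the vector-quantization side is sketched above, and the eigenwindow side can be pictured as PCA over the same local windows, with matching done among low-dimensional coefficients. The following sketch assumes the extract_windows helper from the previous example and is, again, an illustrative stand-in, not the authors' code.

```python
import numpy as np

def build_eigenspace(train_windows, dims=16):
    """'Eigenwindows': PCA over the training windows. The top principal
    directions form a low-dimensional basis for matching."""
    mean = train_windows.mean(axis=0)
    centered = train_windows - mean
    # SVD of the centered data; rows of vt are the principal directions.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:dims]

def project(windows, mean, basis):
    """Project windows into the eigenspace; recognition then reduces to
    nearest-neighbour search among low-dimensional coefficients."""
    return (windows - mean) @ basis.T

def nearest_training_window(query_coeff, train_coeffs):
    """Index and squared distance of the closest training window."""
    dist = ((train_coeffs - query_coeff) ** 2).sum(axis=1)
    return int(dist.argmin()), float(dist.min())
```

Nearest-neighbour search over all training coefficients is more discriminative than comparing against a small codebook, but also more expensive, which matches the speed/accuracy trade-off the abstract describes.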


Computer Vision and Pattern Recognition | 2009

Fusion of a camera and a laser range sensor for vehicle recognition

Shirmila Mohottala; Shintaro Ono; Masataka Kagesawa; Katsushi Ikeuchi

This paper presents a system that fuses data from a vision sensor and a laser range sensor for vehicle detection and classification. Fusing a vision sensor and a laser range sensor lets us obtain 3D information about an object together with its texture, offering high reliability and robustness under outdoor conditions. To evaluate the performance of the system, it is applied to the recognition of on-street parked vehicles scanned from a moving probe vehicle. The evaluation experiments show clearly successful results, with a detection rate of 100% and an accuracy of over 95% in recognizing four vehicle classes.
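The central step in this kind of camera-laser fusion is bringing the two sensors into one frame: laser range points are projected into the image through the extrinsic and intrinsic calibration, so image regions acquire depth and 3D extent alongside texture. A minimal sketch of that projection is below; the calibration matrices R, t, and K are assumed to be known, and the function is illustrative rather than the system's actual pipeline.

```python
import numpy as np

def project_laser_to_image(points_laser, R, t, K):
    """Project 3D laser range points (N x 3, laser frame) into the image.
    R (3x3), t (3,): extrinsic transform from laser frame to camera frame.
    K (3x3):         camera intrinsic matrix.
    Returns pixel coordinates (M x 2) and depths (M,) for the points
    that lie in front of the camera."""
    pts_cam = points_laser @ R.T + t          # laser frame -> camera frame
    in_front = pts_cam[:, 2] > 0
    pts_cam = pts_cam[in_front]
    pix_h = pts_cam @ K.T                     # homogeneous pixel coordinates
    pix = pix_h[:, :2] / pix_h[:, 2:3]
    return pix, pts_cam[:, 2]
```

Once each detected region carries both texture and projected 3D points, a classifier can use shape cues such as height and length together with appearance, which is the robustness the abstract attributes to the fusion.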


Intelligent Robots and Systems | 1999

Local-feature based vehicle recognition in infra-red images using parallel vision board

Masataka Kagesawa; Shinichi Ueno; Katsushi Ikeuchi

The paper describes a method for vehicle recognition, in particular for recognizing a vehicle's make and model. Our system employs infra-red images so that we can use the same algorithm both day and night. Originally, the algorithm was the eigen-window method based on local features, but it has been changed to a vector-quantization-based algorithm, originally proposed by J. Krumm (1997), so that it can be implemented on an IMAP parallel image-processing board. Both systems, whether based on the eigen-window method or the vector-quantization method, build a compressed database of local features of a target vehicle from training images given in advance; the system then matches the set of local features found in the input image against those in the training images for recognition. This method has the following three advantages: (1) it can detect a vehicle even if part of it is occluded; (2) it can detect a vehicle even if it is translated because it has moved out of its lane; (3) it does not require us to segment the vehicle from the input images. These advantages have been confirmed by outdoor experiments.


Virtual Reality Continuum and its Applications in Industry | 2010

Outdoor gallery and its photometric issues

Katsushi Ikeuchi; Takeshi Oishi; Masataka Kagesawa; Atsuhiko Banno; Rei Kawakami; Tetsuya Kakuta; Yasuhide Okamoto; Boun Vinh Lu

We have been developing an outdoor gallery system in Asukakyo, one of Japan's ancient capitals, well known for its many temples, palaces, and other buildings. Most of these assets, however, have deteriorated over more than fourteen centuries. The outdoor gallery system presents the virtual appearance of ancient Asukakyo to visitors at the original site with the help of Mixed Reality (MR). To reconstruct virtual Asukakyo in the outdoor gallery system, the occlusion problem must be handled so that virtual objects are synthesized into the real scene correctly with respect to existing foregrounds and shadows. Furthermore, the outdoor environment makes the task more difficult because of unpredictable illumination changes. This paper proposes novel outdoor illumination constraints for resolving the foreground occlusion problem in outdoor environments for the outdoor gallery system. The constraints can also be integrated into a probabilistic model of multiple cues for better segmentation of the foreground. In addition, we introduce an effective method to resolve the shadow occlusion problem by using shadow detection and recasting with a spherical vision camera. We have applied the method in our outdoor gallery system in Asukakyo and verified its effectiveness.
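At composition time, the foreground occlusion problem the paper addresses reduces to a masking rule: virtual content must be drawn only where no real foreground object (a visitor, for instance) stands in front of it. The hard part, and the paper's contribution, is obtaining that foreground mask reliably under changing outdoor illumination; the sketch below simply shows the compositing step with the mask taken as given, using illustrative function and parameter names.

```python
import numpy as np

def composite_with_foreground(real, virtual, virtual_mask, foreground_mask):
    """Insert a rendered virtual layer into a real frame while keeping
    real foreground objects (e.g., visitors) in front of it.
    real, virtual: H x W x 3 images; masks: H x W boolean arrays."""
    out = real.copy()
    # Draw virtual pixels only where no real foreground occludes them.
    draw = virtual_mask & ~foreground_mask
    out[draw] = virtual[draw]
    return out
```

Shadow occlusion is treated analogously in the paper, by detecting real shadows and recasting them with the help of a spherical vision camera.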


Intelligent Vehicles Symposium | 2005

Sustainable ITS project overview: mixed reality traffic experiment space under interactive traffic environment for ITS

Katsushi Ikeuchi; Masao Kuwahara; Yoshihiro Suda; Yoshihisa Tanaka; Edward Chung; Takahiro Suzuki; Masataka Kagesawa; Shinji Tanaka; Isao Nishikawa; Yoshiyuki Takahashi; Ryota Horiguchi; Tomoyoshi Shiraishi; Hisatomo Hanabusa; Hiroshi Kawasaki; Hiroki Ishikawa; Katsuyuki Maruoka; Ken Honda; Makoto Furukawa; Makoto Kano; Hideki Ueno; Yoshikazu Ohba; Yoshihito Mashiyama; Toshihiko Oda; Keiichi Kenmotsu; Takatsugu Yamamoto; Masaaki Onuki; Mayumi Sakai; Motomu Tsuji

In this paper, we outline our research on an interactive traffic environment, within which we are developing a mixed-reality traffic experiment space. To develop sophisticated ITS applications, it is very important to analyze human factors, but few methods exist for obtaining them; we have therefore decided to create an interactive traffic environment in which human factors can be measured. As the first stage of our research, we have developed a mixed-reality traffic experiment space. It consists of a real-observation laboratory and a virtual-experiment laboratory. In the former, we gather raw data from the real world to build a model of the real environment. In the latter, we present users with a realistic driving environment based on that model. Building on this experiment space, we plan to proceed to the next stages, in which we will design and evaluate sustainable ITS applications. For this purpose, we have started the Sustainable ITS project at the University of Tokyo, a collaboration among industry, government, and academia.


International Symposium on Mixed and Augmented Reality | 2005

Driving view simulation synthesizing virtual geometry and real images in an experimental mixed-reality traffic space

Shintaro Ono; Koichi Ogawara; Masataka Kagesawa; Hiroshi Kawasaki; Masaaki Onuki; Ken Honda; Katsushi Ikeuchi

We propose an efficient and effective image generation system for an experimental mixed-reality traffic space. Our enhanced traffic/driving simulation system renders the view through a hybrid that combines virtual geometry with real images, achieving high photo-reality at low human cost. Images for the datasets are captured from the real world, and the view for the simulation system is created by synthesizing these image datasets with a conventional driving simulator.


IEEE Intelligent Transportation Systems | 2000

Local-feature based vehicle class recognition in infra-red images using IMAP parallel vision board

Masataka Kagesawa; Arihiro Nakamura; Katsushi Ikeuchi; Hiroaki Saito

This paper describes a method for classifying vehicle types, for example, small vehicles, sedans, and buses. For this classification, our system, based on local-feature configuration, needs many local features; it therefore employs infrared images, which allow us to use the same algorithm both day and night and eliminate concern about vehicle colors. The algorithm is based on our previous work, which is a generalization of the eigenwindow method. This method has the following three advantages: (1) it can detect a vehicle even in cases where parts of it are occluded; (2) it can detect a vehicle even if it is translated because it has moved out of its lane; (3) it does not require us to segment vehicle areas from the input images. We developed a vehicle segmentation system using a B-snake technique to obtain many training images. We then implemented our algorithm on the IMAP vision board and verified the above advantages of our vehicle classification method through outdoor experiments.


Vehicle Navigation and Information Systems Conference | 1994

Simulation of road traffic management system with dynamic information

Masataka Kagesawa; Sadao Takaba

In intelligent road traffic management systems, various kinds of dynamic road traffic information are available, so the behavior of traffic flow is affected by that information. An object-oriented road traffic software simulator has been developed as a tool for studying this flow. As applications of the simulator, several examples of intelligent traffic simulation are introduced and their results are reported. One example compares, on a small network, conventional signal control that minimizes total delay time with a control scheme that guarantees drivers a maximum delay time. The other examines the effect of parking availability information for three parking areas.
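The two signal-control objectives compared in the abstract, minimizing total delay versus guaranteeing each driver a maximum delay, can be made concrete with a toy object-oriented intersection model in the spirit of the simulator described; the classes and parameters below are invented for illustration and are not the simulator's actual design.

```python
import random

class Vehicle:
    """A vehicle accumulates one second of delay per step spent queued."""
    def __init__(self):
        self.delay = 0

class Signal:
    """A fixed-cycle signal: green for `green` seconds out of `cycle`."""
    def __init__(self, cycle=60, green=30, offset=0):
        self.cycle, self.green, self.offset = cycle, green, offset
    def is_green(self, t):
        return (t + self.offset) % self.cycle < self.green

class Intersection:
    """Vehicles arrive at random and are released only while the signal is green."""
    def __init__(self, signal, arrival_prob=0.3, seed=0):
        self.signal, self.arrival_prob = signal, arrival_prob
        self.queue, self.served = [], []
        self.rng = random.Random(seed)
    def step(self, t):
        if self.rng.random() < self.arrival_prob:
            self.queue.append(Vehicle())
        if self.signal.is_green(t) and self.queue:
            self.served.append(self.queue.pop(0))   # release one vehicle per green second
        for v in self.queue:
            v.delay += 1

def run(signal, steps=3600):
    inter = Intersection(signal)
    for t in range(steps):
        inter.step(t)
    delays = [v.delay for v in inter.served] or [0]
    return sum(delays), max(delays)

# Report both objectives for a few green-time settings:
# total delay (the conventional objective) and the worst per-driver delay.
for green in (20, 30, 40):
    total, worst = run(Signal(green=green))
    print(f"green={green}s  total delay={total}s  max delay={worst}s")
```

The sketch only reports the two metrics on a single approach; the trade-off between them appears when several approaches compete for green time, which is the situation the paper studies on a small network.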
