Takeo Miyasaka
Chukyo University
Publications
Featured research published by Takeo Miyasaka.
international conference on pattern recognition | 2000
Takeo Miyasaka; Kazuhiro Kuroda; Makoto Hirose; Kazuo Araki
We have developed a new type of 3D measurement system that obtains successive 3D range data at video rate with an error within ±0.3%. In this paper, we reconstruct a realistic colored 3D surface model and a 3D animation of a moving target from range images obtained by our 3D measurement system, combined with synchronized video images. A video camera fixed on the measurement system captures video images synchronized with the 3D measurement. The measured 3D points are then mapped onto the corresponding video images by transforming them from the coordinate system of the measurement system into that of the video camera and applying a perspective transformation, so that the color and brightness of the corresponding pixel can be attributed to each measured 3D point. Finally, a realistic colored 3D image and animation are reconstructed through texture mapping.
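The color mapping described above amounts to a rigid transformation into the video camera's coordinate frame, a perspective projection, and a pixel lookup. A minimal sketch of that step, assuming known extrinsics `R`, `t` and intrinsics `K` for the video camera (names hypothetical, not taken from the paper):

```python
import numpy as np

def colorize_points(points_3d, R, t, K, video_frame):
    """Map 3D points (in the range-sensor frame) into a synchronized video
    frame and attach the pixel color to each point.

    points_3d   : (N, 3) measured 3D points
    R, t        : assumed extrinsics from the sensor frame to the camera frame
    K           : 3x3 camera intrinsic matrix
    video_frame : (H, W, 3) color image taken at the same instant
    """
    # Transform into the video camera's coordinate system.
    pts_cam = points_3d @ R.T + t             # (N, 3)

    # Perspective projection onto the image plane.
    proj = pts_cam @ K.T                      # (N, 3)
    uv = proj[:, :2] / proj[:, 2:3]           # (N, 2) pixel coordinates

    # Sample the color of the corresponding pixel (nearest neighbour).
    h, w = video_frame.shape[:2]
    u = np.clip(np.round(uv[:, 0]).astype(int), 0, w - 1)
    v = np.clip(np.round(uv[:, 1]).astype(int), 0, h - 1)
    colors = video_frame[v, u]                # (N, 3) color per point

    return np.hstack([points_3d, colors])     # colored point set
```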
electronic imaging | 2006
Masanori Fukuda; Takeo Miyasaka; Kazuo Araki
We developed a 3D measurement system consisting of a camera and a projector that can be calibrated easily and quickly. In this system, the camera and the projector are calibrated in advance with Zhang's calibration method. The measurement procedure is as follows: the calibrated camera and projector are placed in front of a calibration plane, and the relative pose between them is computed by projecting a number of light patterns from the projector onto the plane and capturing those images with the camera. The system then performs 3D measurement with Gray-code pattern projection. Because calibration is easy, the system does not need to be fixed rigidly, and its configuration, i.e., the baseline and the measurement range, can be changed freely depending on the target and the purpose. Despite this ease of calibration, the system obtains range data with high accuracy, with an error of about 0.1%.
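Gray-code pattern projection encodes each projector column as a sequence of binary stripe patterns; decoding the captured images recovers the projector column for every camera pixel, which gives the camera-projector correspondences needed for triangulation. A minimal sketch of the pattern generation and decoding, independent of the paper's specific implementation (thresholding and triangulation are omitted):

```python
import numpy as np

def gray_code_patterns(width, n_bits):
    """Generate binary Gray-code stripe patterns for one projector axis.

    Returns an (n_bits, width) array; each row is one pattern to project.
    """
    cols = np.arange(width)
    gray = cols ^ (cols >> 1)                  # binary-reflected Gray code
    patterns = np.array([(gray >> b) & 1 for b in range(n_bits - 1, -1, -1)])
    return patterns.astype(np.uint8)

def decode_gray(captured_bits):
    """Recover the projector column index for every camera pixel.

    captured_bits : (n_bits, H, W) array of thresholded camera images,
                    most significant bit first.
    """
    n_bits = captured_bits.shape[0]
    # Reassemble the Gray code per pixel.
    gray = np.zeros(captured_bits.shape[1:], dtype=np.int64)
    for b in range(n_bits):
        gray = (gray << 1) | captured_bits[b].astype(np.int64)
    # Convert Gray code back to a binary column index.
    binary = gray.copy()
    shift = gray >> 1
    while shift.any():
        binary ^= shift
        shift >>= 1
    return binary                              # projector column per pixel
```

In practice each stripe pattern is often also projected inverted so that per-pixel thresholding is robust to surface albedo; that detail is left out of the sketch.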
electronic imaging | 2006
Takeo Miyasaka; Kazuo Araki
We describe a three-dimensional measurement system that acquires not only the three-dimensional shape of target objects but also their surface reflectance parameters. The system consists of one or more digital cameras, a digital projector, and a computer that controls them. For 3D geometric reconstruction, we use the well-known Gray-code structured-light method, which projects Gray-code light patterns from the projector and captures the illuminated scene with the cameras. For surface reflectance measurement we add extra patterns, an all-white and a gray light pattern. To recover the complete shape of the target object, the object is measured repeatedly from various viewpoints, or from a fixed viewpoint while being moved by hand or on a turntable. After the measurement, the relative positions of the obtained range data are computed with the ICP algorithm. For each small region of the object surface, we calculate reflectance parameters from the surface normal, the viewpoint (camera position), and the light position (projector position); once enough samples of these three quantities have been obtained for a surface region, its reflectance parameters are estimated. We demonstrate this geometric and reflectance measurement method through experiments on a few objects.
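The abstract does not state which reflectance model is fitted. As an illustration only, the sketch below fits a per-point Lambertian albedo from multiple observations of a surface point with a known surface normal and known projector (light) directions, which is the simplest instance of estimating a reflectance parameter from normal, viewpoint, and light position:

```python
import numpy as np

def estimate_albedo(normal, light_dirs, intensities):
    """Least-squares Lambertian albedo for one surface point (illustrative).

    normal      : (3,) unit surface normal of the point
    light_dirs  : (K, 3) unit vectors toward the projector for K observations
    intensities : (K,) observed intensities under the all-white pattern

    Minimizes sum_k (I_k - albedo * max(n . l_k, 0))^2 over the albedo.
    """
    cos_theta = np.clip(light_dirs @ normal, 0.0, None)   # foreshortening term
    denom = np.sum(cos_theta ** 2)
    if denom < 1e-12:
        return 0.0                                         # point never lit
    return float(np.sum(intensities * cos_theta) / denom)
```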
Intelligent Robots and Computer Vision XX: Algorithms, Techniques, and Active Vision | 2001
Yasuyuki Matsui; Takeo Miyasaka; Makoto Hirose; Kazuo Araki
In this research, the recognition of gestures in 3D space is examined using serial range images obtained by a real-time 3D measurement system developed in our laboratory. With this system, time sequences of range, intensity, and color data for a moving object can be obtained in real time without attaching markers to the targets. First, gestures are tracked in 2D space by computing 2D flow vectors at each point with a standard optical flow estimation method applied to the time sequence of intensity data. The location of each point after the 2D movement is then determined on the x-y plane from these 2D flow vectors, its depth after the movement is obtained from the range data, and a 3D flow vector is assigned to it. Time sequences of the resulting 3D flow vectors allow us to track the 3D movement of the target, and based on them the movement can be classified with a continuous DP matching technique. This tracking of 3D movement using time sequences of 3D flow vectors may be applicable to a robust gesture recognition system.
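Lifting the 2D flow to 3D flow vectors amounts to back-projecting each pixel before and after its 2D motion using the corresponding range images. A minimal sketch under a pinhole-camera assumption, with intrinsics `fx`, `fy`, `cx`, `cy` (hypothetical names, not specified in the paper):

```python
import numpy as np

def lift_flow_to_3d(flow_uv, range_prev, range_next, fx, fy, cx, cy):
    """Convert a dense 2D optical flow field into per-pixel 3D flow vectors
    using synchronized range images.

    flow_uv    : (H, W, 2) 2D flow (du, dv) estimated from intensity images
    range_prev : (H, W) depth at time t
    range_next : (H, W) depth at time t+1
    """
    h, w = range_prev.shape
    v, u = np.mgrid[0:h, 0:w].astype(np.float64)

    # Back-project each pixel at time t.
    z0 = range_prev
    x0 = (u - cx) * z0 / fx
    y0 = (v - cy) * z0 / fy

    # Pixel position after the 2D motion, back-projected with the depth
    # sampled from the next range image (nearest neighbour).
    u1 = np.clip(u + flow_uv[..., 0], 0, w - 1)
    v1 = np.clip(v + flow_uv[..., 1], 0, h - 1)
    z1 = range_next[np.round(v1).astype(int), np.round(u1).astype(int)]
    x1 = (u1 - cx) * z1 / fx
    y1 = (v1 - cy) * z1 / fy

    # 3D flow vector per pixel.
    return np.stack([x1 - x0, y1 - y0, z1 - z0], axis=-1)
```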
Archive | 2000
Takeo Miyasaka; Kazuhiro Kuroda; Makoto Hirose; Kazuo Araki
Computer Graphics and Imaging | 2003
Takeo Miyasaka; Kazuo Araki
Archive | 2000
Makoto Hirose; Takeo Miyasaka; Kazuhiro Kuroda; Kazuo Araki
Proceedings of the 31st Annual Conference of the Institute of Image Electronics Engineers of Japan | 2003
Takashi Arai; Takeo Miyasaka; Kazuo Araki
Proceedings of the 31st Annual Conference of the Institute of Image Electronics Engineers of Japan | 2003
Atsushi Oohashi; Takeo Miyasaka; Makoto Hirose; Kazuo Araki
Computer Graphics and Imaging | 2003
Makoto Hirose; Takeo Miyasaka; Kazuo Araki