Publication


Featured research published by Atsuhiko Banno.


International Journal of Computer Vision | 2008

Flying Laser Range Sensor for Large-Scale Site-Modeling and Its Applications in Bayon Digital Archival Project

Atsuhiko Banno; Tomohito Masuda; Takeshi Oishi; Katsushi Ikeuchi

We have been conducting a project to digitize the Bayon temple, located at the center of Angkor-Thom in the Kingdom of Cambodia. This is a huge structure, more than 150 meters long on each side and up to 45 meters high. Digitizing such a large-scale object in fine detail requires developing new types of sensors to obtain data from irregular positions, such as the very high parts of the structure that are occluded from the ground. In this article, we present a sensing system with a moving platform, referred to as the Flying Laser Range Sensor (FLRS), for obtaining data on these high structures from above. The FLRS, suspended beneath a balloon, can be maneuvered freely in the sky and can measure structures invisible from the ground. The obtained data, however, have some distortion due to the movement of the sensor during the scanning process. To remedy this issue, we have developed several new rectification algorithms for the FLRS. One method is an extension of the 3D alignment algorithm that estimates not only rotation and translation but also motion parameters. This algorithm compares range data of overlapping regions from ground-based sensors and our FLRS. Another method accurately estimates the FLRS's position by combining range data with image sequences from a video camera mounted on the FLRS. We evaluate these algorithms using an IS-based method and verify that both achieve much higher accuracy than previous methods.
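The 3D alignment step described above rests on estimating a rigid transform between overlapping point sets. As a minimal sketch of that core operation (the classical SVD-based rigid alignment, not the authors' extension with motion parameters; all names and values here are illustrative):

```python
import numpy as np

def rigid_align(src, dst):
    """Estimate rotation R and translation t minimizing ||R @ src + t - dst||.

    Classical Kabsch/Procrustes step used inside ICP-style alignment.
    src, dst: (N, 3) arrays of corresponding 3D points.
    """
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)            # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # Sign correction so R is a proper rotation, not a reflection.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = mu_d - R @ mu_s
    return R, t

# Recover a known rotation about the z-axis applied to random points.
rng = np.random.default_rng(0)
pts = rng.normal(size=(100, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.5, -1.0, 2.0])
R, t = rigid_align(pts, pts @ R_true.T + t_true)
# R and t match R_true, t_true up to numerical precision.
```

In the full method, this per-pair estimate would be augmented with the sensor's motion parameters during the scan, which the paper's extension handles.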


International Conference on Computer Vision | 2009

Disparity map refinement and 3D surface smoothing via Directed Anisotropic Diffusion

Atsuhiko Banno; Katsushi Ikeuchi

We propose a new binocular stereo algorithm and 3D reconstruction method from multiple disparity images. First, we present an accurate binocular stereo algorithm. In our algorithm, we use neither color segmentation nor plane fitting methods, which are common techniques among many algorithms nominated in the Middlebury ranking. These methods assume that the 3D world consists of a collection of planes and that each segment of a disparity map obeys a plane equation. We exclude these assumptions and introduce a Directed Anisotropic Diffusion technique for refining a disparity map. Second, we show a method to fill some holes in a distance map and smooth the reconstructed 3D surfaces by using another type of Anisotropic Diffusion technique. The evaluation results on the Middlebury datasets show that our stereo algorithm is competitive with other algorithms that adopt plane fitting methods. We present an experiment that shows the high accuracy of a reconstructed 3D model using our method, and the effectiveness and practicality of our proposed method in a real environment.
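The refinement idea can be illustrated with a generic Perona-Malik-style anisotropic diffusion on a disparity map: smoothing proceeds within surfaces but is suppressed across large disparity jumps. This is a simplified stand-in for the paper's Directed Anisotropic Diffusion, with parameter values chosen purely for illustration:

```python
import numpy as np

def anisotropic_diffusion(d, iters=20, kappa=2.0, dt=0.2):
    """Perona-Malik-style diffusion on a 2D disparity map `d`.

    The conductance g() is near 1 for small gradients (noise gets
    smoothed) and near 0 for large gradients (depth edges survive).
    """
    d = d.astype(float).copy()
    g = lambda x: np.exp(-(x / kappa) ** 2)   # edge-stopping conductance
    for _ in range(iters):
        # One-sided differences to the four neighbours (replicated border).
        n = np.vstack([d[:1], d[:-1]]) - d
        s = np.vstack([d[1:], d[-1:]]) - d
        w = np.hstack([d[:, :1], d[:, :-1]]) - d
        e = np.hstack([d[:, 1:], d[:, -1:]]) - d
        d += dt * (g(n) * n + g(s) * s + g(w) * w + g(e) * e)
    return d

noisy = np.zeros((32, 32))
noisy[:, 16:] = 10.0                       # a sharp disparity edge
noisy += np.random.default_rng(1).normal(scale=0.1, size=noisy.shape)
smoothed = anisotropic_diffusion(noisy)
# Noise within each surface shrinks; the 10-pixel disparity step remains.
```

The paper's "directed" variant steers diffusion using additional cues rather than the isotropic gradient magnitude alone.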


Computer Vision and Image Understanding | 2010

Omnidirectional texturing based on robust 3D registration through Euclidean reconstruction from two spherical images

Atsuhiko Banno; Katsushi Ikeuchi

We propose a semi-automatic omnidirectional texturing method that maps a spherical image onto a dense 3D model obtained by a range sensor. Accurate texturing requires accurate estimation of the extrinsic parameters. To estimate these parameters, we propose a robust 3D registration-based method between a dense range data set and a sparse spherical-image stereo data set. For measuring the distances between the two data sets, we introduce generalized distances that take into account the 3D error distributions of the stereo data. To reconstruct 3D models from images, we use two spherical images taken at arbitrary positions in arbitrary poses. We then propose a novel rectification method for spherical images that is derived from the E matrix and facilitates the estimation of disparities. The experimental results show that the proposed method can map the spherical image onto the dense 3D models effectively and accurately.
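The "generalized distance" idea, weighting residuals by each stereo point's 3D error distribution, is essentially a Mahalanobis distance. A minimal sketch with an illustrative covariance (in spherical stereo, depth is typically far noisier than the lateral directions):

```python
import numpy as np

def generalized_distance(p, q, cov_q):
    """Mahalanobis-style distance from point p to point q, where cov_q is
    the 3D error covariance of q. Offsets along uncertain directions are
    penalized less than equal offsets along well-constrained directions.
    """
    diff = p - q
    return float(np.sqrt(diff @ np.linalg.inv(cov_q) @ diff))

q = np.array([0.0, 0.0, 0.0])
cov = np.diag([0.01, 0.01, 1.0])       # depth (z) far noisier than x, y
p_depth = np.array([0.0, 0.0, 0.5])    # offset along the uncertain axis
p_lateral = np.array([0.5, 0.0, 0.0])  # same offset along a certain axis
# The depth offset scores 0.5; the lateral offset scores 5.0.
```

Registering range data against stereo points with this distance keeps noisy depth estimates from dominating the alignment.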


International Conference on Computer Vision | 2005

Shape recovery of 3D data obtained from a moving range sensor by using image sequences

Atsuhiko Banno; Katsushi Ikeuchi

For a large object, scanning from the air is one of the most efficient ways of obtaining 3D data. In the case of large cultural heritage objects, scanning poses some difficulties with respect to safety and efficiency. To remedy these problems, we have been developing a novel 3D measurement system, the floating laser range sensor (FLRS), in which a range sensor is suspended beneath a balloon. The obtained data, however, have some distortion due to sensor movement during the scanning process. In this paper, we propose a method to recover 3D range data obtained by a moving laser range sensor. This method is applicable not only to our FLRS but also to a general moving range sensor. Using image sequences from a video camera mounted on the FLRS enables us to estimate the motion of the FLRS without any physical sensors such as gyros or GPS. In the first stage, the initial values of the camera motion parameters are estimated by full-perspective factorization. The next stage refines the camera motion parameters using the relationships between the camera images and the range data distortion. Finally, using the refined parameters, the distorted range data are recovered. In addition, our method is applicable to an uncalibrated video camera and range sensor system. We applied this method to an actual scanning project, and the results showed the effectiveness of our method.
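The factorization stage can be illustrated with the classical orthographic Tomasi-Kanade rank theorem, a simpler relative of the full-perspective factorization the paper uses. Everything below is a toy setup with synthetic data:

```python
import numpy as np

rng = np.random.default_rng(2)
P, F = 20, 6                     # points, frames
X = rng.normal(size=(3, P))      # 3D shape

# Random orthographic cameras: first two rows of a rotation per frame,
# plus a per-frame 2D translation.
rows = []
for _ in range(F):
    Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
    rows.append(Q[:2] @ X + rng.normal(size=(2, 1)))
W = np.vstack(rows)                        # 2F x P measurement matrix

# Registering (subtracting per-row means) removes the translations;
# the rank theorem says the result factors as motion x shape, rank 3.
W0 = W - W.mean(axis=1, keepdims=True)
U, S, Vt = np.linalg.svd(W0)
M_hat = U[:, :3] * S[:3]                   # affine motion (2F x 3)
X_hat = Vt[:3]                             # affine shape (3 x P)
# M_hat @ X_hat reproduces W0; the 4th singular value is ~0.
```

The full-perspective version iterates this with estimated projective depths, and the paper further refines the result against the range-data distortion.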


Computer Vision and Image Understanding | 2011

Disparity map refinement and 3D surface smoothing via directed anisotropic diffusion

Atsuhiko Banno; Katsushi Ikeuchi



Information Sciences | 2012

Estimation of F-Matrix and image rectification by double quaternion

Atsuhiko Banno; Katsushi Ikeuchi

The fundamental matrix, or F-Matrix, is one of the most important and elementary tools in the field of computer vision. Conventional methods for estimating the F-Matrix adopt an eight-point algorithm. First, an approximate F-Matrix is calculated by a linear solver using at least eight corresponding point pairs. Since this linear optimization ignores an essential property, the rank-2 constraint, a method based on singular value decomposition (SVD) is then applied to impose the constraint. This last SVD step, however, introduces additional noise into the F-Matrix. Several methods introduce parameterizations that take the rank-2 constraint into account and optimize nonlinearly without SVD. In this paper, we propose a novel parameterization for nonlinear optimization that includes this constraint. We adopt a double quaternion (DQ) and a scalar as the parameter set. Experimental results show that nonlinear optimization with our parameterization is competitive with other parameterization methods. Moreover, through the proposed parameterization, we obtain two transformations for the two input images. These transformations lead to a novel method to estimate epipolar lines and rectify the image pairs. This rectification method can handle any image pair in the same manner, whether the epipoles are inside or outside the images.
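The conventional pipeline the abstract describes, a linear eight-point solve followed by SVD rank-2 enforcement, can be sketched as follows (the paper's double-quaternion parameterization is not reproduced here; the synthetic demo data are illustrative):

```python
import numpy as np

def eight_point(x1, x2):
    """Normalized eight-point estimate of the fundamental matrix F,
    with the rank-2 constraint imposed afterwards by SVD -- the very
    step the abstract notes injects extra error.

    x1, x2: (N, 2) corresponding image points, N >= 8, with x2.F.x1 = 0.
    """
    def normalize(x):
        # Hartley normalization: zero mean, average distance sqrt(2).
        mu = x.mean(axis=0)
        s = np.sqrt(2) / np.mean(np.linalg.norm(x - mu, axis=1))
        T = np.array([[s, 0, -s * mu[0]], [0, s, -s * mu[1]], [0, 0, 1]])
        return np.column_stack([x, np.ones(len(x))]) @ T.T, T

    p1, T1 = normalize(x1)
    p2, T2 = normalize(x2)
    # Each correspondence gives one row of the linear system A f = 0.
    A = np.column_stack([p2[:, i] * p1[:, j] for i in range(3) for j in range(3)])
    F = np.linalg.svd(A)[2][-1].reshape(3, 3)
    U, S, Vt = np.linalg.svd(F)
    F = U @ np.diag([S[0], S[1], 0.0]) @ Vt     # impose rank 2
    F = T2.T @ F @ T1                           # undo normalization
    return F / F[2, 2]

# Synthetic two-view geometry to exercise the estimator.
rng = np.random.default_rng(3)
X = rng.normal(size=(50, 3)) + np.array([0.0, 0.0, 5.0])
Xh = np.column_stack([X, np.ones(50)])
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
c, s = np.cos(0.1), np.sin(0.1)
R = np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])
P2 = np.hstack([R, np.array([[1.0], [0.2], [0.0]])])
x1 = Xh @ P1.T; x1 = x1[:, :2] / x1[:, 2:]
x2 = Xh @ P2.T; x2 = x2[:, :2] / x2[:, 2:]
F = eight_point(x1, x2)
res = [np.array([*b, 1.0]) @ F @ np.array([*a, 1.0]) for a, b in zip(x1, x2)]
# With noise-free data, all epipolar residuals are ~0 and F has rank 2.
```

The paper's contribution replaces the two-step solve-then-project scheme with a parameterization that satisfies the rank-2 constraint by construction.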


Virtual Reality Continuum and Its Applications in Industry | 2010

Outdoor gallery and its photometric issues

Katsushi Ikeuchi; Takeshi Oishi; Masataka Kagesawa; Atsuhiko Banno; Rei Kawakami; Tetsuya Kakuta; Yasuhide Okamoto; Boun Vinh Lu

We have been developing an outdoor gallery system in Asukakyo. Asukakyo is one of the ancient capitals of Japan, well known for its many temples, palaces, and buildings. Nevertheless, most of these assets have deteriorated over more than fourteen centuries. The outdoor gallery system presents the virtual appearance of ancient Asukakyo to visitors at the original site with the help of Mixed Reality (MR). To reconstruct virtual Asukakyo in the outdoor gallery system, it is necessary to handle the occlusion problem, synthesizing virtual objects correctly into the real scene with respect to existing foregrounds and shadows. Furthermore, the outdoor environment makes the task more difficult due to unpredictable illumination changes. This paper proposes novel outdoor illumination constraints for resolving the foreground occlusion problem in outdoor environments for the outdoor gallery system. The constraints can also be integrated into a probabilistic model of multiple cues for better segmentation of the foreground. In addition, we introduce an effective method to resolve the shadow occlusion problem using shadow detection and recasting with a spherical vision camera. We applied the method in our outdoor gallery system in Asukakyo and verified its effectiveness.


Virtual Reality Software and Technology | 2012

Achieving robust alignment for outdoor mixed reality using 3D range data

Masaki Inaba; Atsuhiko Banno; Takeshi Oishi; Katsushi Ikeuchi

Mixed reality (MR) technology can be applied to applications such as architecture, advertising, and navigation systems, so the desire to use MR in outdoor environments has been increasing. Using MR requires achieving alignment that superimposes virtual content in the desired position. However, because lighting changes continually in outdoor environments and the appearance of real objects changes with it, previous image-based alignment methods do not always work well. In this paper, a robust image-based alignment method for outdoor environments is proposed. In the proposed method, the albedo of real objects is estimated in advance using the 3D shapes of these objects, and their appearance is reproduced from the albedo and the current light environment. Because the reproduced image closely matches the current appearance of the real objects, robust image-based alignment is achieved.
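The appearance-reproduction step can be sketched with a deliberately simple Lambertian model; the actual method estimates a full outdoor light environment, and the function and values below are illustrative only:

```python
import numpy as np

def reproduce_appearance(albedo, normals, light_dir, ambient=0.1):
    """Reproduce a surface's current appearance from its (lighting-
    independent) albedo and the present light direction, using a toy
    Lambertian model: appearance = albedo * (ambient + max(0, n . l)).
    """
    l = np.asarray(light_dir, dtype=float)
    l /= np.linalg.norm(l)
    shading = ambient + np.clip(normals @ l, 0.0, None)
    return albedo * shading

# One upward-facing and one sideways-facing patch, constant albedo 0.8.
normals = np.array([[0.0, 0.0, 1.0], [1.0, 0.0, 0.0]])
albedo = np.array([0.8, 0.8])
noon = reproduce_appearance(albedo, normals, [0.0, 0.0, 1.0])
evening = reproduce_appearance(albedo, normals, [1.0, 0.0, 0.0])
# The appearance swaps between patches as the light moves,
# while the stored albedo stays fixed.
```

Matching the camera image against such a re-lit prediction, rather than against a fixed reference image, is what makes the alignment robust to lighting changes.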


International Journal of Intelligent Transportation Systems Research | 2010

Image-Based Ego-Motion Estimation Using On-Vehicle Omnidirectional Camera

Ryota Matsuhisa; Shintaro Ono; Hiroshi Kawasaki; Atsuhiko Banno; Katsushi Ikeuchi

Estimating the motion of a sensor, as well as the 3D shape of a scene, has been extensively researched, especially for Virtual Reality (VR) and robotics systems. For this purpose, systems consisting of a laser range sensor, a Global Positioning System (GPS) receiver, and a gyro sensor have been proposed, constructed, and used. However, it is usually difficult to produce a precise and detailed estimate of the 3D shape because of the limited ability of each sensor. The Structure from Motion (SfM) method is widely used for this estimation and can estimate these parameters at pixel-level accuracy. However, SfM is frequently unstable because of its dependency on initial parameters and its sensitivity to noise. In this paper, we propose an SfM method for omnidirectional image sequences that uses both factorization and bundle adjustment to achieve high accuracy and robustness.
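For an omnidirectional camera, the bundle-adjustment residual is naturally an angular error between observed and predicted rays rather than a pixel offset. A minimal sketch (the pose convention, names, and synthetic data are my own, not from the paper):

```python
import numpy as np

def bearing_residuals(R, C, X, bearings):
    """Angular reprojection residuals for an omnidirectional camera.

    R: world-to-camera rotation (3x3); C: camera center in world coords.
    X: (N, 3) world points; bearings: (N, 3) observed unit rays in the
    camera frame. Bundle adjustment minimizes the sum of squares of
    these angles over all poses and points.
    """
    rays = (X - C) @ R.T                        # points in the camera frame
    rays /= np.linalg.norm(rays, axis=1, keepdims=True)
    cosang = np.clip(np.sum(rays * bearings, axis=1), -1.0, 1.0)
    return np.arccos(cosang)

rng = np.random.default_rng(4)
X = rng.normal(size=(30, 3)) + np.array([0.0, 0.0, 4.0])
R_true, C_true = np.eye(3), np.zeros(3)
obs = (X - C_true) @ R_true.T
obs /= np.linalg.norm(obs, axis=1, keepdims=True)
res_true = bearing_residuals(R_true, C_true, X, obs)           # ~0 at truth
res_bad = bearing_residuals(R_true, np.array([0.5, 0.0, 0.0]), X, obs)
```

A factorization step (as in the paper) supplies the initial poses and points; the nonlinear minimization of these residuals then refines them.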


Intelligent Robots and Systems | 2005

Motion estimation of a moving range sensor by image sequences and distorted range data

Atsuhiko Banno; Kazuhide Hasegawa; Katsushi Ikeuchi

For a large-scale object, scanning from the air is one of the most efficient ways of obtaining 3D data. In the case of large cultural heritage objects, scanning poses some difficulties with respect to safety and efficiency. To remedy these problems, we have been developing a novel 3D measurement system, the floating laser range sensor (FLRS), in which a range sensor is suspended beneath a balloon. The obtained data, however, have some distortion due to sensor movement during scanning. In this paper, we propose a method to recover 3D range data obtained by a moving laser range sensor; this method is applicable not only to our FLRS, but also to a general moving range sensor. Using image sequences from a video camera mounted on the FLRS enables us to estimate the motion of the FLRS without any physical sensors such as gyros or GPS. First, the initial values of the camera motion parameters are estimated by perspective factorization. The next stage refines the camera motion parameters using the relationships between the camera images and the range data distortion. Finally, using the refined parameters, the distorted range data are recovered. We applied this method to an actual scanning project, and the results showed the effectiveness of our method.
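The final recovery step, mapping each scan line back through the pose the sensor had when that line was captured, can be sketched as follows. Pose estimation is assumed already done, and all names and values are illustrative:

```python
import numpy as np

def rectify_scan(lines, poses):
    """Map each scan line's local 3D points into the world frame using
    the per-line sensor pose (R, t) estimated for the moment that line
    was captured: p_world = R @ p_local + t.

    lines: list of (N_i, 3) arrays of points in the sensor frame.
    poses: list of (R, t) pairs, sensor pose in the world at each line.
    """
    return [pts @ R.T + t for pts, (R, t) in zip(lines, poses)]

# Toy case: a sensor translating along x while scanning a flat wall at
# z = 2. Ignoring the motion would shear the wall; applying the per-line
# poses places every line correctly and keeps the wall planar.
line_local = np.array([[0.0, y, 2.0] for y in np.linspace(-1, 1, 5)])
poses = [(np.eye(3), np.array([0.1 * i, 0.0, 0.0])) for i in range(4)]
world = rectify_scan([line_local] * 4, poses)
```

In the actual pipeline the poses come from the image-based motion estimate, interpolated to each line's timestamp.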
