Publication


Featured research published by Takafumi Taketomi.


Computers & Graphics | 2011

Mobile Augmented Reality: Real-time and accurate extrinsic camera parameter estimation using feature landmark database for augmented reality

Takafumi Taketomi; Tomokazu Sato; Naokazu Yokoya

In the field of augmented reality (AR), many kinds of vision-based extrinsic camera parameter estimation methods have been proposed to achieve geometric registration between the real and virtual worlds. Previously, a feature landmark-based camera parameter estimation method was proposed. This is an effective method for implementing outdoor AR applications because a feature landmark database can be constructed automatically using the structure-from-motion (SfM) technique. However, the previous method cannot work in real time because of the high computational cost of matching landmarks in the database with image features in an input image. In addition, the accuracy of the estimated camera parameters is insufficient for applications that need to overlay CG objects at positions close to the user's viewpoint, because it is difficult to compensate for visual pattern changes of close landmarks when only the sparse depth information obtained by SfM is available. In this paper, we achieve fast and accurate feature landmark-based camera parameter estimation by adopting two approaches. First, the number of matching candidates is reduced, to achieve fast camera parameter estimation, by tentative camera parameter estimation and by assigning priorities to landmarks. Second, image templates of landmarks are adequately compensated for by considering the local 3-D structure of each landmark using the dense depth information obtained by a laser range sensor. To demonstrate the effectiveness of the proposed method, we developed several AR applications that use it.
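
The core of such a pipeline is estimating the camera pose from 2D-3D correspondences between database landmarks and image features. Below is a minimal sketch of that step, assuming OpenCV and correspondences that have already been matched; the landmark database, the priority-based candidate reduction, and the template compensation described above are outside this snippet.

    import numpy as np
    import cv2

    def estimate_extrinsics(landmarks_3d, features_2d, K):
        """Estimate extrinsic camera parameters (R, t) from 2D-3D
        correspondences between feature landmarks and image features.
        landmarks_3d: (N, 3) world coordinates from the landmark database.
        features_2d:  (N, 2) matched keypoint positions in the input image.
        K:            (3, 3) intrinsic matrix (assumed pre-calibrated)."""
        ok, rvec, tvec, inliers = cv2.solvePnPRansac(
            landmarks_3d.astype(np.float32),
            features_2d.astype(np.float32),
            K, None)                       # None = no lens distortion assumed
        if not ok:
            raise RuntimeError("camera parameter estimation failed")
        R, _ = cv2.Rodrigues(rvec)         # axis-angle -> rotation matrix
        return R, tvec, inliers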


international conference on pattern recognition | 2010

Extrinsic Camera Parameter Estimation Using Video Images and GPS Considering GPS Positioning Accuracy

Hideyuki Kume; Takafumi Taketomi; Tomokazu Sato; Naokazu Yokoya

This paper proposes a method for estimating extrinsic camera parameters from video images and position data acquired by GPS. In conventional methods, the accuracy of the estimated camera position depends largely on the accuracy of the GPS positioning data, because these methods assume that the GPS position error is very small or normally distributed. However, the actual GPS positioning error easily grows to the 10 m level, and its distribution changes depending on satellite positions and environmental conditions. In order to achieve more accurate camera positioning in outdoor environments, this study employs the simple assumption that the true position lies within a certain range of the observed GPS position, where the size of the range depends on the GPS positioning accuracy. Concretely, the proposed method estimates camera parameters by minimizing an energy function defined using the reprojection error and a penalty term for the GPS position.
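
A hedged sketch of such an energy function follows; the quadratic hinge penalty, the weight w, and the project callback are illustrative assumptions, not the paper's exact formulation.

    import numpy as np

    def energy(cam_pos, cam_rot, pts_3d, pts_2d, project,
               gps_pos, gps_radius, w=1.0):
        """Energy combining reprojection error with a GPS penalty term.
        The penalty is zero while the camera position stays within
        gps_radius of the observed GPS position (the assumed accuracy-
        dependent range) and grows quadratically once it leaves it."""
        proj = project(cam_rot, cam_pos, pts_3d)        # (N, 2) projections
        e_reproj = np.sum((proj - pts_2d) ** 2)         # reprojection error
        d = np.linalg.norm(cam_pos - gps_pos)           # distance to GPS fix
        e_gps = max(0.0, d - gps_radius) ** 2           # hinge penalty
        return e_reproj + w * e_gps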


IPSJ Transactions on Computer Vision and Applications | 2017

Visual SLAM algorithms: a survey from 2010 to 2016

Takafumi Taketomi; Hideaki Uchiyama; Sei Ikeda

SLAM is an abbreviation for simultaneous localization and mapping, a technique for estimating sensor motion and reconstructing the structure of an unknown environment. In particular, SLAM using cameras is referred to as visual SLAM (vSLAM) because it is based on visual information only. vSLAM can be used as a fundamental technology for various types of applications and has been discussed in the computer vision, augmented reality, and robotics literature. This paper aims to categorize and summarize recent vSLAM algorithms proposed in different research communities from both technical and historical points of view. In particular, we focus on vSLAM algorithms proposed mainly from 2010 to 2016, because major advances occurred in that period. The technical categories are summarized as follows: feature-based, direct, and RGB-D camera-based approaches.
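
As a rough orientation, the systems the survey covers share a common processing loop; the skeleton below is a heavily simplified sketch with placeholder stubs, not any particular algorithm. The surveyed approaches differ mainly in how tracking works: feature matching (feature-based), photometric error minimization (direct), or depth-image alignment (RGB-D).

    def initialize(frame):
        # Placeholder: define the world coordinate frame and seed the map,
        # e.g. by two-view triangulation in feature-based systems.
        return {"landmarks": [frame]}, "initial_pose"

    def track(frame, world_map):
        # Placeholder: estimate the camera pose of `frame` against the map.
        return "estimated_pose"

    def expand_map(frame, pose, world_map):
        # Placeholder: triangulate or fuse new structure into the map.
        world_map["landmarks"].append(frame)
        return world_map

    def run_vslam(frames):
        """Initialization, tracking, and mapping loop common to the
        surveyed vSLAM systems (relocalization and global map
        optimization are omitted for brevity)."""
        poses, world_map = [], None
        for frame in frames:
            if world_map is None:
                world_map, pose = initialize(frame)
            else:
                pose = track(frame, world_map)
                world_map = expand_map(frame, pose, world_map)
            poses.append(pose)
        return poses, world_map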


international conference on pattern recognition | 2008

Real-time camera position and posture estimation using a feature landmark database with priorities

Takafumi Taketomi; Tomokazu Sato; Naokazu Yokoya

In the field of computer vision, many kinds of camera parameter estimation methods have been proposed. One of them is an extrinsic camera parameter estimation method that uses a pre-constructed feature landmark database. In this method, extrinsic camera parameters of video images are estimated from correspondences between landmarks and image features. Although the method can work in a large outdoor environment, the computational cost of its matching process is high and it cannot work in real time. In this paper, to achieve real-time camera parameter estimation, the number of matching candidates is reduced by using priorities of landmarks that are determined from previously captured video sequences.
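
A minimal sketch of the priority idea, under the assumption that priority is the empirical rate at which each landmark was correctly matched in the previously captured sequences; the exact statistic used in the paper may differ.

    import numpy as np

    def landmark_priorities(match_counts, frame_counts):
        """Priority per landmark: how often it was matched correctly
        in the training sequences, normalized by how often it was
        visible (assumed statistic)."""
        return match_counts / np.maximum(frame_counts, 1)

    def select_candidates(priorities, k):
        """Keep only the k highest-priority landmarks, shrinking the
        per-frame matching cost from all landmarks down to k."""
        return np.argsort(priorities)[::-1][:k]

    # Example: five landmarks, each visible in 100 training frames.
    prio = landmark_priorities(np.array([80, 5, 60, 30, 90]),
                               np.array([100] * 5))
    print(select_candidates(prio, 3))   # -> [4 0 2]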


international conference on advanced learning technologies | 2013

Authoring Augmented Reality Learning Experiences as Learning Objects

Marc Ericson C. Santos; Goshiro Yamamoto; Takafumi Taketomi; Jun Miyazaki; Hirokazu Kato

Engineers and educators alike have prototyped a variety of augmented reality learning experiences (ARLEs). However, adopting ARLEs in educational practice requires an interdisciplinary approach that considers learning theory, pedagogy, and instructional design. To address this requirement, we model ARLEs as learning objects by outlining the necessary components, and we propose a participatory design to demonstrate the authoring process of an augmented reality learning object (ARLO). ARLOs can be made useful in many scenarios if teachers are empowered to edit their context elements, content, and instructional activity. Lastly, we point to the research questions entailed in modeling ARLEs as ARLOs.


international conference on human-computer interaction | 2009

A Novel Approach to On-Site Camera Calibration and Tracking for MR Pre-visualization Procedure

Wataru Toishita; Yutaka Momoda; Ryuhei Tenmoku; Fumihisa Shibata; Hideyuki Tamura; Takafumi Taketomi; Tomokazu Sato; Naokazu Yokoya

This paper presents a camera calibration and tracking method for a mixed-reality-based pre-visualization system for filmmaking. The proposed calibration method efficiently collects the environmental information required for tracking, since the rough camera path and the target environment are known before actual shooting. Previous camera tracking methods using natural features are suitable for outdoor environments; however, constructing the database requires substantial human effort. Our method reduces the cost of the calibration process by using fiducial markers: the markers serve as reference points, and the feature landmark database is constructed automatically. Moreover, in the shooting phase, the speed and robustness of tracking are improved by using SIFT descriptors.
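
A sketch of the two ingredients this combines, assuming opencv-contrib-python (OpenCV 4.7+ for the ArucoDetector API): fiducial markers supply reference points without manual surveying, while SIFT descriptors make the landmark database robust for the tracking phase. The marker dictionary and database layout are illustrative choices.

    import cv2

    aruco_dict = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
    detector = cv2.aruco.ArucoDetector(aruco_dict)
    sift = cv2.SIFT_create()

    def build_database_entry(image):
        """One calibration image -> marker reference points plus
        natural-feature landmarks with SIFT descriptors."""
        gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
        corners, ids, _ = detector.detectMarkers(gray)    # reference points
        keypoints, descriptors = sift.detectAndCompute(gray, None)
        return {"marker_ids": ids, "marker_corners": corners,
                "keypoints": keypoints, "descriptors": descriptors}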


international conference on image processing | 2013

Detection of 3D points on moving objects from point cloud data for 3D modeling of outdoor environments

Tsunetake Kanatani; Hideyuki Kume; Takafumi Taketomi; Tomokazu Sato; Naokazu Yokoya

3D modeling techniques for urban environments can be applied to several applications such as landscape simulation, navigation systems, and mixed reality systems. In this field, the target environment is first measured using several types of sensors (laser rangefinders, cameras, GPS receivers, and gyroscopes), and a 3D model of the environment is then constructed from the 3D measurements. In this modeling process, 3D points that lie on moving objects become outliers that prevent the construction of an accurate 3D model. To solve this problem, we propose a method for detecting 3D points on moving objects in 3D point cloud data, based on photometric consistency and knowledge of the road environment. In our method, 3D points on moving objects are detected from the luminance variations obtained by projecting the 3D points onto omnidirectional images; the detection is then refined using prior knowledge of the road environment.
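
A hedged sketch of the photometric-consistency test: a 3D point on a static surface should keep roughly the same luminance across the omnidirectional images that see it, while a point on a moving object projects onto different objects in different frames. The project callback and the variance threshold are assumptions; the road-knowledge refinement stage is omitted.

    import numpy as np

    def moving_point_mask(points_3d, views, project, var_thresh):
        """Flag 3D points whose luminance varies strongly across views.
        views:      list of (grayscale image, camera pose) pairs.
        project:    maps (point, pose) -> (u, v) pixel or None if unseen.
        var_thresh: luminance-variance threshold (tuning parameter)."""
        moving = np.zeros(len(points_3d), dtype=bool)
        for i, X in enumerate(points_3d):
            samples = []
            for img, pose in views:
                uv = project(X, pose)
                if uv is not None:                  # point visible here
                    u, v = int(uv[0]), int(uv[1])
                    samples.append(float(img[v, u]))
            # High variance across views suggests a moving object.
            if len(samples) >= 2 and np.var(samples) > var_thresh:
                moving[i] = True
        return moving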


ieee virtual reality conference | 2009

Real-time geometric registration using feature landmark database for augmented reality applications

Takafumi Taketomi; Tomokazu Sato; Naokazu Yokoya

In the field of augmented reality, it is important to solve the geometric registration problem between the real and virtual worlds. To solve this problem, many kinds of image-based online camera parameter estimation methods have been proposed. Among them, we have proposed a feature landmark-based camera parameter estimation method, in which extrinsic camera parameters are estimated from correspondences between landmarks and image features. Although the method can work in large and complex environments, our previous method cannot work in real time due to the high computational cost of the matching process. Additionally, the initial camera parameters for the first frame must be given manually. In this study, we realize camera parameter estimation that runs in real time and is free of manual initialization, based on a feature landmark database. To reduce the computational cost of the matching process, the number of matching candidates is reduced by using priorities of landmarks determined from previously captured video sequences. The initial camera parameters for the first frame are determined by a voting scheme over the target space using the matching candidates. To demonstrate the effectiveness of the proposed method, applications of landmark-based real-time camera parameter estimation are demonstrated in outdoor environments.
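
A hedged sketch of voting-based initialization, assuming the database stores, for each landmark, the camera positions from which it was observed during database construction; each tentatively matched landmark then votes for those positions on a grid, and the most-voted cell initializes the first frame. The cell size and database layout are assumptions.

    import numpy as np
    from collections import Counter

    def vote_initial_position(matched_landmarks, cell_size=1.0):
        """matched_landmarks: list of dicts with an 'observed_from'
        list of (x, y) camera positions recorded when the landmark
        database was built. Returns the center of the winning cell."""
        votes = Counter()
        for lm in matched_landmarks:
            for x, y in lm["observed_from"]:
                cell = (round(x / cell_size), round(y / cell_size))
                votes[cell] += 1                 # one vote per observation
        (cx, cy), _ = votes.most_common(1)[0]    # most-voted grid cell
        return np.array([cx * cell_size, cy * cell_size])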


14th Symposium on Virtual and Augmented Reality | 2012

Local Quadrics Surface Approximation for Real-Time Tracking of Textureless 3D Rigid Curved Objects

Marina Atsumi Oikawa; Takafumi Taketomi; Goshiro Yamamoto; Makoto Fujisawa; Toshiyuki Amano; Jun Miyazaki; Hirokazu Kato

This paper addresses the problem of tracking textureless rigid curved objects. A common approach represents curved objects with polygonal meshes and uses them in an edge-based tracking system. However, accurately recovering their shape requires high-quality meshes, creating a trade-off between computational efficiency and tracking accuracy. To solve this issue, we suggest using a quadric for each patch in the mesh to give a local approximation of the object shape. The novelty of our research lies in using the curves that represent the quadrics' projections in the current viewpoint for distance evaluation, instead of the standard method, which compares edges from the mesh with edges detected in the video image. This representation makes it possible to considerably reduce the level of detail of the polygonal mesh, and it led us to develop a novel method for evaluating the distance between projected and detected features. Experimental results compare our approach with the traditional method using sparse and dense meshes, on both synthetic and real image data.
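
The projection of a quadric's outline is a conic, via the standard dual relation C* = P Q* P^T. Below is a minimal sketch of that projection and an algebraic point-to-conic distance, assuming a full-rank quadric matrix; the paper's exact distance evaluation may differ.

    import numpy as np

    def project_quadric(Q, P):
        """Project a quadric surface to its image outline (a conic).
        Q: 4x4 symmetric quadric matrix in homogeneous coordinates.
        P: 3x4 camera projection matrix.
        Uses the dual relation C* = P Q* P^T; for a full-rank Q the
        dual (adjoint) is proportional to the inverse."""
        Q_dual = np.linalg.inv(Q)
        C_dual = P @ Q_dual @ P.T            # dual conic of the outline
        C = np.linalg.inv(C_dual)            # 3x3 point conic
        return C / np.linalg.norm(C)         # normalize for stable scale

    def algebraic_distance(C, uv):
        """Algebraic distance x^T C x of image point (u, v) to the
        conic, with x = (u, v, 1); a common proxy for the geometric
        distance minimized during tracking."""
        x = np.array([uv[0], uv[1], 1.0])
        return float(x @ C @ x)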


IEEE Transactions on Visualization and Computer Graphics | 2018

Handheld Guides in Inspection Tasks: Augmented Reality versus Picture

Jarkko Polvi; Takafumi Taketomi; Atsunori Moteki; Toshiyuki Yoshitake; Toshiyuki Fukuoka; Goshiro Yamamoto; Christian Sandor; Hirokazu Kato

Inspection tasks focus on observation of the environment and are required in many industrial domains. Inspectors usually execute these tasks by using a guide, such as a paper manual, while directly observing the environment. The effort required to match the information in the guide with the information in the environment, and the constant gaze shifts required between the two, can severely lower inspectors' work efficiency. Augmented reality (AR) allows the information in a guide to be overlaid directly on the environment, which can decrease the effort required for information matching and thus increase work efficiency. AR guides on head-mounted displays (HMDs) have been shown to increase efficiency. Handheld AR (HAR) is not as efficient as HMD-AR in terms of manipulability, but it is more practical and offers better information input and sharing capabilities. In this study, we compared two handheld guides: an AR interface that shows 3D registered annotations, that is, annotations having a fixed 3D position in the AR environment, and a non-AR picture interface that displays non-registered annotations on static images. We focused on inspection tasks that involve high information density and require the user to move and perform several viewpoint alignments. The results of our comparative evaluation showed that the AR interface resulted in lower task completion times, fewer errors, fewer gaze shifts, and a lower subjective workload. We are the first to present findings from a comparative study of an HAR and a picture interface in tasks that require the user to move and execute viewpoint alignments, focusing only on direct observation. Our findings can be useful for AR practitioners and psychology researchers.

Collaboration


Dive into Takafumi Taketomi's collaborations.

Top Co-Authors

Hirokazu Kato (Nara Institute of Science and Technology)
Goshiro Yamamoto (Nara Institute of Science and Technology)
Christian Sandor (Nara Institute of Science and Technology)
Jun Miyazaki (Tokyo Institute of Technology)
Alexander Plopski (Nara Institute of Science and Technology)
Damien Constantine Rompapas (Nara Institute of Science and Technology)
Marc Ericson C. Santos (Nara Institute of Science and Technology)
Sei Ikeda (Nara Institute of Science and Technology)