Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Takashi Okuma is active.

Publication


Featured research published by Takashi Okuma.


ieee virtual reality conference | 2000

A stereoscopic video see-through augmented reality system based on real-time vision-based registration

Masayuki Kanbara; Takashi Okuma; Haruo Takemura; Naokazu Yokoya

In an augmented reality system, the position and orientation of the user's viewpoint must be obtained in order to display the composed image while maintaining correct registration between the real and virtual worlds, and all of these procedures must run in real time. This paper proposes a method for augmented reality using a stereo vision sensor and a video see-through head-mounted display (HMD). It synchronizes the display timing between the virtual and real worlds so that the alignment error is reduced. The method calculates camera parameters from three markers in image sequences captured by a pair of stereo cameras mounted on the HMD. In addition, it estimates real-world depth from the stereo image pair in order to generate a composed image that maintains consistent occlusions between real and virtual objects. The depth estimation region is efficiently limited by computing the position of the virtual object from the camera parameters. Finally, we have developed a video see-through augmented reality system consisting mainly of a pair of stereo cameras mounted on the HMD and a standard graphics workstation. The feasibility of the system has been successfully demonstrated in experiments.
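The abstract does not include implementation details, but the composition step it describes, keeping occlusions consistent by comparing real-world depth against virtual-object depth, comes down to a per-pixel depth test. The sketch below illustrates that idea with synthetic NumPy arrays; the image size, depth values, and far-plane constant are placeholder assumptions, not values from the paper.

```python
import numpy as np

H, W = 480, 640
FAR = 1e6  # depth assigned to pixels not covered by any virtual object

# Synthetic stand-ins for the system's real data:
#   real_rgb / real_depth  - captured camera image and stereo-estimated depth (metres)
#   virt_rgb / virt_depth  - rendered virtual objects and their depth buffer
real_rgb = np.random.randint(0, 256, (H, W, 3), dtype=np.uint8)
real_depth = np.full((H, W), 2.0)          # e.g. a wall two metres away
virt_rgb = np.zeros((H, W, 3), dtype=np.uint8)
virt_depth = np.full((H, W), FAR)

# A virtual patch rendered one metre away in the centre of the view.
virt_rgb[200:280, 280:360] = (0, 255, 0)
virt_depth[200:280, 280:360] = 1.0

# A real object 0.5 m away partially overlaps the virtual patch,
# so it should occlude it in the composed image.
real_depth[240:280, 280:360] = 0.5

# Per-pixel depth test: keep the virtual pixel only where it is closer
# than the real scene at that pixel.
virtual_wins = virt_depth < real_depth
composed = np.where(virtual_wins[..., None], virt_rgb, real_rgb)

print("virtual pixels kept:", int(virtual_wins.sum()))
print("composed image shape:", composed.shape)
```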


soft computing and pattern recognition | 2009

Economic and Synergistic Pedestrian Tracking System for Indoor Environments

Tomoya Ishikawa; Masakatsu Kourogi; Takashi Okuma; Takeshi Kurata

We describe an indoor pedestrian tracking system that economically improves tracking performance, as well as the quality and value of related services, by incorporating other services synergistically. Our tracking system utilizes existing security-service infrastructure, such as surveillance cameras and active RFID tags that are sparsely installed in plants, offices, and commercial facilities, to estimate the user's walking parameters and to correct tracking errors without significantly increasing costs, thereby improving tracking performance. Furthermore, by sharing not only surveillance videos and RFID signals from security services but also tracking information and models from 3D environment modeling services, each service can enhance its quality and value and reduce the costs of creating data and implementing functions.
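As a loose illustration of the kind of correction described above (not the authors' actual algorithm), the sketch below dead-reckons a pedestrian position from step events and pulls the estimate toward the known installed position of an active RFID tag whenever one is detected. The class, the constant step length, and the blending weight are assumptions made for the example.

```python
import math

class PedestrianTracker:
    """Toy pedestrian dead-reckoning (PDR) with absolute corrections."""

    def __init__(self, x=0.0, y=0.0, step_length=0.7):
        self.x, self.y = x, y
        self.step_length = step_length  # metres per detected step (assumed constant)

    def on_step(self, heading_rad):
        # Advance one step in the current heading (e.g. from a gyro/compass).
        self.x += self.step_length * math.cos(heading_rad)
        self.y += self.step_length * math.sin(heading_rad)

    def on_rfid(self, tag_x, tag_y, weight=0.5):
        # An active RFID tag with a known installed position was detected nearby:
        # pull the PDR estimate toward it to bound accumulated drift.
        self.x += weight * (tag_x - self.x)
        self.y += weight * (tag_y - self.y)

tracker = PedestrianTracker()
for _ in range(10):                      # walk ten steps roughly eastwards
    tracker.on_step(heading_rad=0.05)
tracker.on_rfid(tag_x=7.0, tag_y=0.0)    # correction from a tag installed at (7, 0)
print(f"estimated position: ({tracker.x:.2f}, {tracker.y:.2f})")
```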


international conference on pattern recognition | 1998

An augmented reality system using a real-time vision based registration

Takashi Okuma; Kiyoshi Kiyokawa; Haruo Takemura; Naokazu Yokoya

Describes a prototype of an augmented reality system using vision-based registration. To build an augmented reality system with video see-through image composition, camera parameters for generating virtual objects must be obtained at video rate. The system estimates camera parameters from four known markers in an image sequence captured by a small CCD camera mounted on an HMD (head-mounted display). Virtual objects are overlaid on the captured image sequence in real time.
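Estimating camera parameters from known markers is an instance of the perspective-n-point problem. As a rough sketch of that per-frame step (using OpenCV's solvePnP as a stand-in for the paper's estimator), the example below projects four coplanar markers with a known pose and then recovers that pose from the 2D-3D correspondences; the intrinsics and marker layout are invented placeholders.

```python
import numpy as np
import cv2

# Assumed pinhole intrinsics of the head-mounted CCD camera (pixels).
K = np.array([[700.0, 0.0, 320.0],
              [0.0, 700.0, 240.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)  # assume an undistorted image

# Four known markers on a plane (z = 0), 20 cm apart, in world coordinates (metres).
markers_3d = np.array([[0.0, 0.0, 0.0],
                       [0.2, 0.0, 0.0],
                       [0.2, 0.2, 0.0],
                       [0.0, 0.2, 0.0]])

# Simulate detections: project the markers with a known camera pose ...
rvec_true = np.array([0.1, -0.2, 0.05])
tvec_true = np.array([-0.1, 0.05, 1.0])
detected_2d, _ = cv2.projectPoints(markers_3d, rvec_true, tvec_true, K, dist)

# ... then recover the pose from the 2D-3D correspondences (the per-frame step
# an AR system would run at video rate on real marker detections).
ok, rvec, tvec = cv2.solvePnP(markers_3d, detected_2d, K, dist,
                              flags=cv2.SOLVEPNP_ITERATIVE)
print("pose recovered:", ok)
print("rotation (Rodrigues):", rvec.ravel())
print("translation:", tvec.ravel())
```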


international conference on pattern recognition | 2000

Real-time camera parameter estimation from images for a mixed reality system

Takashi Okuma; Katsuhiko Sakaue; Haruo Takemura; Naokazu Yokoya

This paper describes a method of estimating the position and orientation of a camera for constructing a mixed reality (MR) system. In an MR system, 3D virtual objects should be merged into a 3D real environment at the right position in real time, and acquiring the user's viewing position and orientation is the main technical problem in constructing such a system. The user's viewpoint can be determined by estimating the position and orientation of a camera from images taken at that viewpoint. Our method estimates the camera pose from the screen coordinates of captured color fiducial markers whose 3D positions are known. The method comprises three algorithms for the perspective n-point problem and uses each algorithm selectively. It also estimates the screen coordinates of untracked markers that are occluded or out of view. An experimental MR system based on the proposed method can seamlessly merge 3D virtual objects into a 3D real environment at the right position in real time and allows users to look around an area in which a number of markers are placed.
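The abstract mentions predicting the screen coordinates of markers that are occluded or out of view. Given the current pose estimate and the markers' known 3D positions, this is a straightforward reprojection, sketched below with OpenCV's projectPoints; the pose, intrinsics, and marker coordinates are placeholder values, not those of the experimental system.

```python
import numpy as np
import cv2

# Assumed intrinsics and current camera pose (e.g. from the PnP stage).
K = np.array([[700.0, 0.0, 320.0],
              [0.0, 700.0, 240.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)
rvec = np.array([0.0, 0.3, 0.0])   # placeholder pose for this frame
tvec = np.array([0.0, 0.0, 1.5])

# Known 3D positions of fiducial markers that were NOT detected this frame
# (occluded or outside the field of view).
untracked_3d = np.array([[0.5, 0.1, 0.0],
                         [-2.0, 0.2, 0.0]])

# Predict where they should appear on screen, so the tracker can keep
# searching for (or re-acquire) them near the predicted locations.
predicted_2d, _ = cv2.projectPoints(untracked_3d, rvec, tvec, K, dist)
W, H = 640, 480
for p3d, (u, v) in zip(untracked_3d, predicted_2d.reshape(-1, 2)):
    visible = 0 <= u < W and 0 <= v < H
    print(f"marker at {p3d} -> predicted pixel ({u:.1f}, {v:.1f}), "
          f"{'inside' if visible else 'outside'} the image")
```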


international conference on multimedia computing and systems | 1999

Real-time composition of stereo images for video see-through augmented reality

Masayuki Kanbara; Takashi Okuma; Haruo Takemura; Naokazu Yokoya

This paper describes a method of stereo image composition for video see-through augmented reality. To implement an augmented reality system, we must acquire the position and orientation of the user's viewpoint in order to display the composed image while maintaining correct registration of the real and virtual worlds, and all procedures must run in real time. We have built a prototype augmented reality system that combines a vision-based tracking technique with a video see-through head-mounted display (HMD). Display timing is synchronized between the real and virtual environments so that the alignment error is reduced. The system calculates camera parameters from three markers, whose physical relationship is unknown, in image sequences captured by a pair of stereo cameras mounted on the HMD. In addition, the user's hands are treated as real-world objects that may occlude virtual objects; the system estimates the depth of the hands in the images and generates a composed image that maintains consistent occlusions between the hands and virtual objects.
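The hand-depth estimation described above relies on stereo matching between the two HMD cameras. As a loose illustration of the principle (not the paper's method), the sketch below runs OpenCV's block matcher on a synthetic textured stereo pair and converts disparity to depth via depth = focal_length * baseline / disparity; the focal length, baseline, and image contents are invented.

```python
import numpy as np
import cv2

FOCAL_PX = 700.0   # assumed focal length in pixels
BASELINE_M = 0.06  # assumed distance between the two HMD cameras (metres)
TRUE_DISP = 16     # horizontal shift used to build the synthetic pair

# Build a synthetic textured stereo pair: the left view is the right view
# shifted horizontally by TRUE_DISP pixels (a fronto-parallel scene).
rng = np.random.default_rng(0)
right = rng.integers(0, 256, (240, 320), dtype=np.uint8)
left = np.roll(right, TRUE_DISP, axis=1)

# Block matching returns a fixed-point disparity map (scaled by 16).
matcher = cv2.StereoBM_create(numDisparities=32, blockSize=15)
disp = matcher.compute(left, right).astype(np.float32) / 16.0

valid = disp > 0
depth = np.zeros_like(disp)
depth[valid] = FOCAL_PX * BASELINE_M / disp[valid]   # depth = f * B / d

print("median disparity (px):", float(np.median(disp[valid])))
print("median depth (m):", float(np.median(depth[valid])))
```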


international symposium on mixed and augmented reality | 2015

[POSTER] Road Maintenance MR System Using LRF and PDR

Ching-Tzun Chang; Ryosuke Ichikari; Koji Makita; Takashi Okuma; Takeshi Kurata

We have been developing a mixed reality system that supports road maintenance using overlaid visual aids. Such a system requires a positioning method that can provide sub-meter accuracy and keep functioning even if the appearance of the road surface changes significantly due to factors such as the construction phase, time, and weather. We are therefore developing a real-time worker positioning method that can handle these situations by integrating laser range finder (LRF) and pedestrian dead-reckoning (PDR) data. In the field, multiple workers move around the workspace, so it is necessary to determine corresponding pairs of PDR-based and LRF-based trajectories by identifying similar trajectories. In this study, we propose a method to calculate the similarity between trajectories and a procedure to integrate corresponding pairs of trajectories to acquire the position and movement direction of each worker.
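The abstract does not specify the similarity measure. As one simple illustration, the sketch below scores each PDR/LRF trajectory pair by the mean point-to-point distance over time-aligned samples and pairs them greedily; the synthetic trajectories and the greedy assignment are assumptions for the example.

```python
import numpy as np

def trajectory_distance(a, b):
    """Mean Euclidean distance between two time-aligned trajectories (N x 2)."""
    return float(np.linalg.norm(a - b, axis=1).mean())

# Synthetic example: two workers, each observed both by PDR and by the LRF.
t = np.linspace(0.0, 10.0, 50)
worker_a = np.stack([t, 0.2 * t], axis=1)                     # walks diagonally
worker_b = np.stack([10.0 - t, np.full_like(t, 3.0)], axis=1)  # walks back along y = 3

pdr_tracks = {"pdr_0": worker_a + 0.15, "pdr_1": worker_b - 0.10}  # drifted PDR
lrf_tracks = {"lrf_0": worker_b, "lrf_1": worker_a}                # LRF detections

# Score every PDR/LRF pair, then greedily take the most similar pairs first.
scores = sorted(
    (trajectory_distance(p, q), pn, ln)
    for pn, p in pdr_tracks.items()
    for ln, q in lrf_tracks.items()
)
used_pdr, used_lrf, pairs = set(), set(), []
for d, pn, ln in scores:
    if pn not in used_pdr and ln not in used_lrf:
        pairs.append((pn, ln, d))
        used_pdr.add(pn)
        used_lrf.add(ln)

for pn, ln, d in pairs:
    print(f"{pn} <-> {ln}  (mean distance {d:.2f} m)")
```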


Archive | 1999

Stereo vision based video see-through mixed reality

Naokazu Yokoya; Haruo Takemura; Takashi Okuma; Masayuki Kanbara


Archive | 2002

Device and method for authenticating password

Takekazu Katou; Masakatsu Korogi; Takeshi Kurata; Takashi Okuma


Archive | 2003

Location information processor

Takekazu Katou; Masakatsu Korogi; Takeshi Kurata; Takashi Okuma


Archive | 2003

Attitude angle processing unit and attitude angle processing method

Takekazu Katou; Masakatsu Korogi; Takeshi Kurata; Takashi Okuma; Nobuchika Sakata

Collaboration


Dive into Takashi Okuma's collaborations.

Top Co-Authors

Takeshi Kurata, National Institute of Advanced Industrial Science and Technology
Tomoya Ishikawa, Nara Institute of Science and Technology
Koji Makita, National Institute of Advanced Industrial Science and Technology
Naokazu Yokoya, Nara Institute of Science and Technology
Haruo Takemura, Nara Institute of Science and Technology
Laurence Nigay, Joseph Fourier University
Katsuhiko Sakaue, National Institute of Advanced Industrial Science and Technology
Masayuki Kanbara, Nara Institute of Science and Technology