Publication


Featured research published by Fumio Okura.


Multimedia Tools and Applications | 2017

Addressing temporal inconsistency in indirect augmented reality

Fumio Okura; Takayuki Akaguma; Tomokazu Sato; Naokazu Yokoya

Indirect augmented reality (IAR) employs a unique approach to achieve high-quality synthesis of the real world and the virtual world, unlike traditional augmented reality (AR), which superimposes virtual objects in real time. IAR uses pre-captured omnidirectional images and offline superimposition of virtual objects for achieving jitter- and drift-free geometric registration as well as high-quality photometric registration. However, one drawback of IAR is the inconsistency between the real world and the pre-captured image. In this paper, we present a new classification of IAR inconsistencies and analyze the effect of these inconsistencies on the IAR experience. Accordingly, we propose a novel IAR system that reflects real-world illumination changes by selecting an appropriate image from among multiple pre-captured images obtained under various illumination conditions. The results of experiments conducted at an actual historical site show that the consideration of real-world illumination changes improves the realism of the IAR experience.
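The image-selection step lends itself to a short illustration. The following is a minimal, hypothetical Python sketch (not the authors' implementation) of choosing the pre-captured omnidirectional image whose global illumination best matches a live reference photo, by comparing luminance histograms; all names and the chi-squared criterion are assumptions made for illustration.

import numpy as np

def luminance_histogram(image, bins=32):
    # Normalized luminance histogram of an RGB image with values in [0, 1].
    lum = image @ np.array([0.2126, 0.7152, 0.0722])  # Rec. 709 weights
    hist, _ = np.histogram(lum, bins=bins, range=(0.0, 1.0))
    return hist / (hist.sum() + 1e-8)

def select_precaptured_image(live_probe, captured_set):
    # Pick the pre-captured image whose illumination best matches the
    # live probe, using chi-squared distance between luminance histograms.
    probe = luminance_histogram(live_probe)
    def chi2(h):
        return np.sum((h - probe) ** 2 / (h + probe + 1e-8))
    return int(np.argmin([chi2(luminance_histogram(im)) for im in captured_set]))

# Toy usage: three captures of the same scene at different brightness levels.
rng = np.random.default_rng(0)
base = rng.random((64, 64, 3))
captures = [np.clip(base * s, 0, 1) for s in (0.4, 0.8, 1.2)]
print(select_precaptured_image(np.clip(base * 0.75, 0, 1), captures))  # expect 1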


Intelligent Robots and Systems | 2013

Teleoperation of mobile robots by generating augmented free-viewpoint images

Fumio Okura; Yuko Ueda; Tomokazu Sato; Naokazu Yokoya

This paper proposes a teleoperation interface by which an operator can control a robot from freely configured viewpoints using realistic images of the physical world. The viewpoints generated by the proposed interface provide human operators with intuitive control using a head-mounted display and head tracker, and help them grasp the environment surrounding the robot. A state-of-the-art free-viewpoint image generation technique is employed to generate the scene presented to the operator. In addition, an augmented reality technique is used to superimpose a 3D model of the robot onto the generated scenes. Through evaluations in virtual and physical environments, we confirmed that the proposed interface improves the accuracy of teleoperation.
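The overlay step can be sketched compactly: given the robot's 3D model and the pose of the operator's virtual camera, the model is projected into the generated view with a standard pinhole model. This is a generic illustration, not the authors' code; the names and values below are hypothetical.

import numpy as np

def project_points(points_world, R, t, K):
    # Project 3D world points into the image of a pinhole camera.
    # R and t map world coordinates to camera coordinates; K holds intrinsics.
    cam = (R @ points_world.T + t.reshape(3, 1)).T   # world -> camera frame
    cam = cam[cam[:, 2] > 0]                         # keep points in front
    uv = (K @ cam.T).T
    return uv[:, :2] / uv[:, 2:3]                    # perspective division

# Toy usage: project a cube of "robot model" vertices seen from 3 m away.
cube = np.array([[x, y, z] for x in (-0.2, 0.2)
                           for y in (-0.2, 0.2)
                           for z in (-0.2, 0.2)])
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.array([0.0, 0.0, 3.0])  # camera 3 m in front of the robot
print(project_points(cube, R, t, K).round(1))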


Eurographics | 2015

Unifying color and texture transfer for predictive appearance manipulation

Fumio Okura; Kenneth Vanhoey; Adrien Bousseau; Alexei A. Efros; George Drettakis

Recent color transfer methods use local information to learn the transformation from a source to an exemplar image, and then transfer this appearance change to a target image. These solutions achieve very successful results for general mood changes, e.g., changing the appearance of an image from “sunny” to “overcast”. However, such methods have a hard time creating new image content, such as leaves on a bare tree. Texture transfer, on the other hand, can synthesize such content but tends to destroy image structure. We propose the first algorithm that unifies color and texture transfer, outperforming both by leveraging their respective strengths. A key novelty in our approach resides in teasing apart appearance changes that can be modeled simply as changes in color versus those that require new image content to be generated. Our method starts with an analysis phase which evaluates the success of color transfer by comparing the exemplar with the source. This analysis then drives a selective, iterative texture transfer algorithm that simultaneously predicts the success of color transfer on the target and synthesizes new content where needed. We demonstrate our unified algorithm by transferring large temporal changes between photographs, such as change of season – e.g., leaves on bare trees or piles of snow on a street – and flooding.
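The analysis phase lends itself to a small illustration: compare the color-transferred source against the exemplar patch by patch and flag high-residual patches as needing synthesized content. The sketch below is a simplified stand-in for the paper's analysis, with hypothetical names and an arbitrary threshold.

import numpy as np

def color_transfer_failure_mask(transferred, exemplar, patch=8, thresh=0.05):
    # Flag patches where color transfer alone fails to reproduce the
    # exemplar, i.e., where texture synthesis must generate new content.
    h, w = exemplar.shape[:2]
    mask = np.zeros((h // patch, w // patch), dtype=bool)
    for i in range(mask.shape[0]):
        for j in range(mask.shape[1]):
            a = transferred[i*patch:(i+1)*patch, j*patch:(j+1)*patch]
            b = exemplar[i*patch:(i+1)*patch, j*patch:(j+1)*patch]
            mask[i, j] = np.mean((a - b) ** 2) > thresh  # high residual
    return mask

# Toy usage: transfer succeeds everywhere except one corner patch.
exemplar = np.zeros((32, 32, 3))
transferred = exemplar.copy()
transferred[:8, :8] += 0.5   # a region that color transfer cannot explain
print(color_transfer_failure_mask(transferred, exemplar))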


ACM Journal on Computing and Cultural Heritage | 2015

Mixed-Reality World Exploration Using Image-Based Rendering

Fumio Okura; Masayuki Kanbara; Naokazu Yokoya

This article describes a Mixed-Reality (MR) application that superimposes lost buildings of a historical site onto real scenes virtualized using spherical aerial images. The proposed application is set at a UNESCO World Heritage site in Japan, and is based on a novel framework that supports the photorealistic superimposition of virtual objects onto virtualized real scenes. The proposed framework utilizes Image-Based Rendering (IBR), which enables users to freely change their viewpoint in a real-world virtualization constructed using precaptured images. This framework combines the offline rendering of virtual objects and IBR to take advantage of the higher quality of offline rendering without the additional computational cost of online processing; that is, it incurs only the cost of online lightweight IBR, which is simplified through the pregeneration of structured viewpoints (e.g., at grid points).
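The pregenerated-viewpoint idea reduces the online step to a lookup. A minimal sketch under that reading (hypothetical names, not the authors' implementation): quantize the requested viewpoint to the nearest grid point and fetch the panorama pre-rendered for it.

import numpy as np

def nearest_grid_viewpoint(user_pos, grid_origin, spacing):
    # Snap a requested viewpoint to the nearest pregenerated grid point,
    # so the online renderer only loads the panorama stored there.
    idx = np.round((np.asarray(user_pos) - grid_origin) / spacing).astype(int)
    return tuple(int(i) for i in idx), grid_origin + idx * spacing

# Toy usage: a 1 m viewpoint grid anchored at the origin.
idx, snapped = nearest_grid_viewpoint([2.3, 0.0, 4.8], np.zeros(3), 1.0)
print(idx, snapped)   # (2, 0, 5) and [2. 0. 5.]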


International Symposium on Mixed and Augmented Reality | 2010

Augmented telepresence using autopilot airship and omni-directional camera

Fumio Okura; Masayuki Kanbara; Naokazu Yokoya

This study is concerned with a large-scale telepresence system based on remote control of a mobile robot or aerial vehicle. The proposed system provides a user with not only a view of the remote site but also related information through AR techniques; such systems are referred to as augmented telepresence in this paper. Aerial imagery can capture a wider area at once than imaging from the ground. However, it is difficult for a user to change the position and direction of the viewpoint freely because of the difficulty of remote control and hardware limitations. To overcome these problems, the proposed system uses an autopilot airship to support changing the user's viewpoint and employs an omnidirectional camera for changing the viewing direction easily. This paper describes the hardware configuration for aerial imagery, an approach for overlaying virtual objects, and automatic control of the airship, as well as experimental results using a prototype system.


International Conference on Biometrics | 2016

Gait collector: An automatic gait data collection system in conjunction with an experience-based long-run exhibition

Yasushi Makihara; Fumio Okura; Ikuhisa Mitsugami; Masataka Niwa; Chihiro Aoki; Atsuyuki Suzuki; Daigo Muramatsu; Yasushi Yagi

Biometric data collection is an important first step toward biometrics research practice, although it is a considerably laborious task, particularly for behavioral biometrics such as gait. We therefore propose an automatic gait data collection system in conjunction with an experience-based exhibition. In the exhibition, participants enjoy an attractive online demonstration of state-of-the-art video-based gait analysis comprising intuitive gait feature measurement and gait-based age estimation while we simultaneously collect their gait data along with informed consent. At the time of this publication, we are holding the exhibition in association with a science museum and have successfully collected the gait data of 47,615 subjects over 246 days, which has already exceeded the size of the largest existing gait database in the world.


IPSJ Transactions on Computer Vision and Applications | 2014

Background Estimation for a Single Omnidirectional Image Sequence Captured with a Moving Camera

Norihiko Kawai; Naoya Inoue; Tomokazu Sato; Fumio Okura; Yuta Nakashima; Naokazu Yokoya

This paper proposes a background estimation method for a single omnidirectional image sequence that removes undesired regions, such as moving objects, specular regions, and uncaptured regions caused by the camera's blind spot, without manual specification. The proposed method aligns multiple frames using a reconstructed 3D model of the environment and generates background images by minimizing an energy function for selecting a frame for each pixel. In the energy function, we introduce patch similarity and camera positions to remove undesired regions more correctly and generate high-resolution images. In experiments, we demonstrate the effectiveness of the proposed method by comparing its results with those from conventional approaches.
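As a rough illustration of per-pixel frame selection, the sketch below picks, at each pixel of the aligned stack, the frame closest to the temporal median. This greedy data term is only a stand-in for the paper's energy, which also incorporates patch similarity and camera positions; names are hypothetical.

import numpy as np

def select_background(aligned_frames):
    # Greedy per-pixel frame selection: choose, at each pixel, the frame
    # whose value is closest to the temporal median (a crude background cue).
    stack = np.asarray(aligned_frames)                   # (F, H, W)
    median = np.median(stack, axis=0)
    labels = np.argmin(np.abs(stack - median), axis=0)   # frame id per pixel
    return np.take_along_axis(stack, labels[None], axis=0)[0], labels

# Toy usage: a static 0.5 background with a bright object sweeping across.
frames = [np.full((4, 4), 0.5) for _ in range(3)]
for col, f in enumerate(frames):
    f[:, col] = 1.0
background, labels = select_background(frames)
print(background)   # the 0.5 background is recovered everywhere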


International Conference on Computer Graphics and Interactive Techniques | 2013

Mobile AR using pre-captured omnidirectional images

Takayuki Akaguma; Fumio Okura; Tomokazu Sato; Naokazu Yokoya

In the field of augmented reality (AR), geometric and photometric registration is routinely achieved in real time. However, real-time geometric registration often leads to misalignment (e.g., jitter and drift) due to the error from camera pose estimation. Due to limited resources on mobile devices, it is also difficult to implement state-of-the-art techniques for photometric registration on mobile AR systems. In order to solve these problems, we developed a mobile AR system in a significantly different way from conventional systems. In this system, captured omnidirectional images and virtual objects are registered geometrically and photometrically in an offline rendering process. The appropriate part of the prerendered omnidirectional AR image is shown to a user through a mobile device with online registration between the real world and the pre-captured image. In order to investigate the validity of our new framework for mobile AR, we conducted experiments using the prototype system on a real site in Todai-ji Temple, a famous world cultural heritage site in Japan.
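The online step, showing the appropriate part of the prerendered panorama, reduces to mapping the device's viewing direction into equirectangular coordinates. A minimal sketch under that assumption, with hypothetical names:

import math

def direction_to_equirect(yaw, pitch, pano_w, pano_h):
    # Map a viewing direction (radians) to the pixel of an equirectangular
    # panorama at the center of the current view.
    # yaw in (-pi, pi], pitch in [-pi/2, pi/2].
    u = (yaw / (2 * math.pi) + 0.5) * pano_w   # longitude -> column
    v = (0.5 - pitch / math.pi) * pano_h       # latitude  -> row
    return int(u) % pano_w, min(max(int(v), 0), pano_h - 1)

# Toy usage on a 4096x2048 prerendered AR panorama.
print(direction_to_equirect(0.0, 0.0, 4096, 2048))          # center (2048, 1024)
print(direction_to_equirect(math.pi / 2, 0.3, 4096, 2048))  # right and up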


International Conference on Multimedia and Expo | 2012

Full Spherical High Dynamic Range Imaging from the Sky

Fumio Okura; Masayuki Kanbara; Naokazu Yokoya

This paper describes a method for acquiring full spherical high dynamic range (HDR) images with no missing areas by using two omnidirectional cameras mounted on the top and bottom of an unmanned airship. The full spherical HDR images are generated by combining multiple omnidirectional images captured with different shutter speeds. The generated images are intended for use in telepresence, augmented telepresence, and image-based lighting.
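The exposure-combination step can be illustrated with a standard weighted HDR merge (Debevec-Malik style, assuming a linear camera response for simplicity); this is a generic sketch, not the authors' pipeline.

import numpy as np

def merge_hdr(exposures, shutter_times):
    # Merge multi-exposure images (linear response assumed) into an HDR
    # radiance map, downweighting under- and over-exposed pixels.
    acc = np.zeros_like(exposures[0], dtype=np.float64)
    wsum = np.zeros_like(acc)
    for img, t in zip(exposures, shutter_times):
        w = 1.0 - np.abs(2.0 * img - 1.0)   # hat weight, peaks at mid-gray
        acc += w * img / t                  # per-image radiance estimate
        wsum += w
    return acc / np.maximum(wsum, 1e-8)

# Toy usage: the same scene captured at three shutter speeds.
radiance = np.array([[0.1, 1.0], [4.0, 10.0]])   # true scene radiance
times = [1/30, 1/125, 1/500]
shots = [np.clip(radiance * t, 0, 1) for t in times]
print(merge_hdr(shots, times))   # approximately recovers `radiance`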


International Symposium on Mixed and Augmented Reality | 2014

Indirect augmented reality considering real-world illumination change

Fumio Okura; Takayuki Akaguma; Tomokazu Sato; Naokazu Yokoya

Indirect augmented reality (IAR) utilizes pre-captured omnidirectional images and offline superimposition of virtual objects to achieve high-quality geometric and photometric registration. However, IAR causes inconsistency between the real world and the pre-captured image. This paper describes the first study focusing on the temporal inconsistency issue in IAR. We propose a novel IAR system that reflects real-world illumination changes by selecting an appropriate image from a set of images pre-captured under various illumination conditions. Results of a public experiment show that the proposed system improves the realism of the IAR experience.

Collaboration


Dive into Fumio Okura's collaborations.

Top Co-Authors

Naokazu Yokoya, Nara Institute of Science and Technology
Masayuki Kanbara, Nara Institute of Science and Technology
Takayuki Akaguma, Nara Institute of Science and Technology
Norihiko Kawai, Nara Institute of Science and Technology