Publication


Featured research published by Akihiko Iketani.


electronic imaging | 2004

High-resolution video mosaicing for documents and photos by estimating camera motion

Tomokazu Sato; Sei Ikeda; Masayuki Kanbara; Akihiko Iketani; Noboru Nakajima; Naokazu Yokoya; Keiji Yamada

Digitizing documents and photographs from paper is increasingly important for digital archiving and for personal data transmission over the internet. Although many people wish to digitize paper documents easily, high-quality digitization currently requires heavy, bulky image scanners. To enable easy, high-quality digitization of documents and photographs, we propose a novel digitization method that uses video captured by a hand-held camera. In our method, the 6-DOF (degree-of-freedom) position and posture parameters of the mobile camera are first estimated in each frame by automatically tracking image features. Next, feature points that re-appear in the image sequence are detected and stitched to minimize accumulated estimation errors. Finally, all the images are merged into a high-resolution mosaic image using the optimized parameters. Experiments have successfully demonstrated the feasibility of the proposed method. Our prototype system can acquire initial estimates of the extrinsic camera parameters in real time while capturing images.
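For a flat document, the per-frame camera motion estimated above induces a homography between each frame and the mosaic plane, so merging reduces to chaining frame-to-frame transforms into a common reference frame. A minimal numpy sketch of that chaining (illustrative only, not the authors' implementation; function names are assumptions):

```python
import numpy as np

def warp_points(H, pts):
    """Transfer Nx2 points through a 3x3 homography (homogeneous divide)."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])
    out = pts_h @ H.T
    return out[:, :2] / out[:, 2:3]

def chain_homographies(pairwise):
    """Accumulate per-frame homographies into frame-to-reference transforms,
    so every frame can be warped into one mosaic coordinate system."""
    to_ref = [np.eye(3)]
    for H in pairwise:
        to_ref.append(to_ref[-1] @ H)
    return to_ref
```

For example, if each frame is shifted 5 pixels from the previous one, the second frame's origin lands 10 pixels into the mosaic.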


international symposium on mixed and augmented reality | 2010

Task support system by displaying instructional video onto AR workspace

Michihiko Goto; Yuko Uematsu; Hideo Saito; Shuji Senda; Akihiko Iketani

This paper presents an instructional support system based on augmented reality (AR). The system helps a user work intuitively by overlaying visual information, much like a navigation system. In typical AR systems, the contents overlaid onto real space are created with 3D computer graphics, and in most cases such contents are newly created for each application. However, many existing 2D videos show how to take apart or assemble electric appliances and PCs, how to cook, and so on. Our system therefore employs such existing 2D videos as instructional videos. By transforming an instructional video according to the user's view and overlaying it onto the user's view space, the proposed system intuitively provides the user with visual guidance. To avoid visual confusion between the displayed instructional video and the user's view, we add visual effects to the instructional video, such as transparency and contour enhancement. By dividing the instructional video into sections according to the operations required to complete a task, we ensure that the user can interactively move to the next step after each operation is completed, and can thus carry out the task at his/her own pace. In a usability test, users evaluated the instructional video in our system through two tasks: a building-blocks task and an origami task. We found that the user's visibility improves when the instructional video is transformed according to his/her view. Furthermore, by evaluating the visual effects, we can classify them according to the task and obtain guidelines for using our system as instructional support for various other tasks.
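The transparency effect and the section-by-section playback described above can be sketched as follows (a toy numpy illustration under assumed names, not the authors' code):

```python
import numpy as np

def overlay(view, instr, alpha=0.5):
    """Blend a (view-aligned) instructional frame onto the user's view;
    alpha controls the transparency effect described in the paper."""
    return (1.0 - alpha) * view + alpha * instr

class SectionedVideo:
    """Plays the instructional video one operation (section) at a time;
    advance() is called when the user completes the current operation."""
    def __init__(self, sections):
        self.sections = sections
        self.index = 0

    def current(self):
        return self.sections[self.index]

    def advance(self):
        # Stay on the last section once the task is finished.
        self.index = min(self.index + 1, len(self.sections) - 1)
```

At alpha = 0.5 both layers contribute equally; raising alpha makes the instruction more prominent at the cost of occluding the real workspace.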


asian conference on computer vision | 2007

Video mosaicing based on structure from motion for distortion-free document digitization

Akihiko Iketani; Tomokazu Sato; Sei Ikeda; Masayuki Kanbara; Noboru Nakajima; Naokazu Yokoya

This paper presents a novel video mosaicing method capable of generating a geometric-distortion-free mosaic image using a hand-held camera. For a document composed of curved pages, mosaic images of virtually flattened pages are generated. Our method consists of two stages: a real-time stage and an off-line stage. In the real-time stage, image features are automatically tracked over the input images, and the viewpoint of each image as well as the 3-D position of each image feature is estimated by a structure-from-motion technique. In the off-line stage, the estimated viewpoints and 3-D feature positions are refined and used to generate a geometric-distortion-free mosaic image. We demonstrate our prototype system on curved documents to show the feasibility of our approach.
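The off-line refinement of viewpoints and 3-D feature positions typically minimizes the reprojection error of the tracked features. A hedged numpy sketch of that error term, assuming a pinhole camera with intrinsics K (an illustration, not the paper's exact formulation):

```python
import numpy as np

def reprojection_error(K, R, t, X, x_obs):
    """Sum of squared distances between observed 2-D features x_obs and the
    projections of their estimated 3-D positions X under viewpoint (R, t)."""
    Xc = X @ R.T + t            # world -> camera coordinates
    proj = Xc @ K.T             # pinhole projection
    proj = proj[:, :2] / proj[:, 2:3]
    return float(np.sum((proj - x_obs) ** 2))
```

Bundle-adjustment-style refinement searches over (R, t) per frame and over X per feature to drive this sum down across the whole sequence.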


asian conference on computer vision | 2010

Image inpainting based on probabilistic structure estimation

Takashi Shibata; Akihiko Iketani; Shuji Senda

A novel inpainting method based on probabilistic structure estimation has been developed. The method consists of two steps. First, an initial image, which captures the rough structure and colors in the missing region, is estimated. This image is generated by probabilistically interpolating the gradient inside the missing region and then flooding the colors on the boundary into the missing region using a Markov random field. Second, the inpainted image is synthesized by locally replacing the missing region with patches similar to both the adjacent patches and the initial image. Since the patch replacement process is guided by the initial image, the inpainted image is guaranteed to preserve the underlying structure. This also enables patches to be replaced in a greedy manner, i.e., without optimization. Experiments show that the proposed method outperforms previous methods in terms of both subjective image quality and computational speed.
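The guided greedy replacement in the second step can be sketched as a combined score (a toy numpy illustration; names and the exact weighting are assumptions): each candidate patch is scored against both the known neighbouring pixels and the initial estimate, and the best one is taken without any global optimization.

```python
import numpy as np

def select_patch(context, initial, candidates, w=0.5):
    """Pick the candidate minimizing distance to the adjacent known pixels
    (context) plus w times distance to the initial-image estimate."""
    scores = [np.sum((c - context) ** 2) + w * np.sum((c - initial) ** 2)
              for c in candidates]
    return candidates[int(np.argmin(scores))]
```

Because the initial image already fixes the rough structure, a locally best patch is also globally consistent, which is what makes the greedy pass sufficient.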


international conference on pattern recognition | 2006

Video Mosaicing for Curved Documents Based on Structure from Motion

Akihiko Iketani; Tomokazu Sato; Sei Ikeda; Masayuki Kanbara; Noboru Nakajima; Naokazu Yokoya

Various methods for video mosaicing have already been investigated by many researchers. Most of these methods, however, assume that the target object is flat or very far from the camera in order to avoid the disparity problem. This paper describes a novel video mosaicing method for curved documents based on 3-D reconstruction. With the proposed method, a mosaic image of the geometrically restored target document is generated even if the document has a curved surface. Experiments on curved documents have shown the feasibility of the proposed method.
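If the curved page is modelled as part of a cylinder, geometric restoration amounts to unrolling each reconstructed 3-D surface point onto a plane. A minimal sketch of that mapping (pure Python; the cylindrical model and axis choice are assumptions for illustration, not the paper's surface model):

```python
import math

def unroll_cylinder(x, y, z, r):
    """Map a 3-D point on a cylindrical page of radius r (axis along y) to
    flattened 2-D coordinates: arc length along the curve, and height y."""
    theta = math.atan2(x, z)   # angular position around the cylinder axis
    return r * theta, y        # arc length preserves distances on the page
```

Because arc length is preserved, text lines that are bent in the input appear straight and evenly spaced in the flattened mosaic.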


european conference on computer vision | 2014

Visualization of Temperature Change Using RGB-D Camera and Thermal Camera

Wataru Nakagawa; Kazuki Matsumoto; Francois de Sorbier; Maki Sugimoto; Hideo Saito; Shuji Senda; Takashi Shibata; Akihiko Iketani

In this paper, we present a system for visualizing temperature changes in a scene using an RGB-D camera coupled with a thermal camera. The system has applications in the maintenance of power equipment. We propose a two-stage approach consisting of an offline phase and an online phase. In the offline phase, after calibration, we generate a 3D reconstruction of the scene with both color and thermal data. We then apply the Viewpoint Generative Learning (VGL) method to the colored 3D model to create a database of descriptors computed from features robust to strong viewpoint changes. In the online phase, we compare the descriptors extracted from the current view against those in the database to estimate the pose of the camera. We can then display the current thermal data and compare it with the data saved during the offline phase.
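Online pose estimation hinges on matching current-view descriptors to the VGL database. A minimal numpy sketch of nearest-neighbour matching with a Lowe-style ratio test (L2 distance assumed; this stands in for whatever matcher the system actually uses):

```python
import numpy as np

def match_descriptors(query, database, ratio=0.8):
    """Return (query_idx, db_idx) pairs whose nearest database descriptor is
    clearly closer than the second nearest; the ratio test rejects ambiguous
    matches before they reach pose estimation."""
    matches = []
    for i, q in enumerate(query):
        d = np.linalg.norm(database - q, axis=1)
        nearest, second = np.argsort(d)[:2]
        if d[nearest] < ratio * d[second]:
            matches.append((i, int(nearest)))
    return matches
```

The surviving 2-D/3-D correspondences would then feed a PnP-style solver to recover the camera pose against the colored 3D model.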


asian conference on computer vision | 2012

Single image super resolution reconstruction in perturbed exemplar sub-space

Takashi Shibata; Akihiko Iketani; Shuji Senda

This paper presents a novel single-image super-resolution method that reconstructs a super-resolution image in an exemplar sub-space. The proposed method first synthesizes LR patches by perturbing the image formation model and stores them in a dictionary. An SR image is generated by replacing the input image patch-wise with the HR patch in the dictionary whose LR patch best matches the input. The abundance of exemplars enables the proposed method to synthesize SR images within the exemplar sub-space, which gives it numerous advantages over previous methods, such as robustness against noise. Experiments on document images show that the proposed method outperforms previous methods not only in image quality but also in recognition rate, which is about 30% higher than that of previous methods.
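The patch-wise replacement can be sketched as a nearest-neighbour lookup in the perturbed-exemplar dictionary (a toy numpy illustration; the data layout and names are assumptions):

```python
import numpy as np

def super_resolve_patch(lr_patch, lr_dict, hr_dict):
    """Return the HR exemplar whose stored LR counterpart is nearest to the
    input LR patch; the SR image is assembled from such HR patches."""
    dists = [np.sum((lr - lr_patch) ** 2) for lr in lr_dict]
    return hr_dict[int(np.argmin(dists))]
```

Perturbing the image formation model when building lr_dict densifies the exemplar sub-space, so a noisy input patch still lands near a stored LR exemplar.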


international symposium on mixed and augmented reality | 2014

[DEMO] RGB-D-T camera system for AR display of temperature change

Kazuki Matsumoto; Wataru Nakagawa; Francois de Sorbier; Maki Sugimoto; Hideo Saito; Shuji Senda; Takashi Shibata; Akihiko Iketani

Anomalies in power equipment can be detected from temperature changes relative to its normal state. In this paper, we present a system for visualizing temperature changes in a scene using a thermal 3D model. Our approach is based on two precomputed 3D models of the target scene, acquired with an RGB-D camera coupled with a thermal camera. The first model contains the RGB information, while the second contains the thermal information. To compare the temperature between the model and the current time, we accurately estimate the pose of the camera by finding keypoint correspondences between the current view and the RGB 3D model. Knowing the pose of the camera, we can then compare the thermal 3D model with the current temperature from any viewpoint.


asian conference on computer vision | 2006

Super-Resolved video mosaicing for documents based on extrinsic camera parameter estimation

Akihiko Iketani; Tomokazu Sato; Sei Ikeda; Masayuki Kanbara; Noboru Nakajima; Naokazu Yokoya

This paper describes a novel video mosaicing method based on extrinsic camera parameter estimation. With our method, a mosaic image without perspective distortion can be generated even if none of the input image planes are parallel to the target document. Thus, users no longer have to take special care to hold the camera so that the image plane in the reference frame is parallel to the target. First, extrinsic camera parameters are estimated by tracking image features. Next, by utilizing re-appearing features, the estimated extrinsic camera parameters are globally optimized to minimize the estimation error over the whole input sequence. Finally, all the images are projected onto the mosaic image plane, and a super-resolved mosaic image is generated by applying an iterative back-projection algorithm. Experiments have successfully demonstrated the feasibility of the proposed method.
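The final step uses iterative back projection. A 1-D toy version (numpy; box downsampling and nearest-neighbour upsampling are simplifying assumptions) showing how the SR estimate is repeatedly corrected by the back-projected residual:

```python
import numpy as np

def downsample(x, f=2):
    """Box-average every f samples (simulates imaging of the SR estimate)."""
    return x.reshape(-1, f).mean(axis=1)

def upsample(x, f=2):
    """Nearest-neighbour upsampling (the back-projection operator here)."""
    return np.repeat(x, f)

def iterative_back_projection(lr, f=2, iters=10):
    """Refine an SR estimate until its simulated LR version matches the
    observed LR signal: back-project the residual and add it in."""
    sr = upsample(lr, f)
    for _ in range(iters):
        residual = lr - downsample(sr, f)
        sr = sr + upsample(residual, f)
    return sr
```

In the paper's setting the "downsample" step is the warp of each input frame onto the mosaic plane, so the residual simultaneously enforces consistency with every registered image.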


Archive | 2004

Image combining system, image combining method, and program

Akihiko Iketani; Noboru Nakajima; Tomokazu Sato; Sei Ikeda; Masayuki Kanbara; Naokazu Yokoya
