Publication


Featured research published by Shuji Senda.


International Conference on Document Analysis and Recognition | 2001

A maximum-likelihood approach to segmentation-based recognition of unconstrained handwriting text

Shuji Senda; Keiji Yamada

We propose a maximum-likelihood approach to segmentation-based recognition of unconstrained handwritten text. The segmentation scores and recognition scores are transformed into posterior probabilities, and a likelihood function composed of these probabilities and character n-gram probabilities is derived from Bayes' theorem. The recognition result that maximizes this function is obtained by a Viterbi search. Experiments have shown that the proposed likelihood function is effective for the recognition of online Japanese text.
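
A minimal sketch, in LaTeX, of the kind of objective the abstract describes; the exact formulation in the paper may differ. For a candidate segmentation s = (s_1, ..., s_n) of the text image X, character hypotheses c = (c_1, ..., c_n), and a character bigram model:

\[
(\hat{c}, \hat{s}) = \arg\max_{c,\, s} \prod_{i=1}^{n} P(s_i \mid X)\, P(c_i \mid s_i)\, P(c_i \mid c_{i-1})
\]

Here P(s_i | X) and P(c_i | s_i) are the segmentation and recognition scores converted into posterior probabilities, and the maximizing hypothesis is found with a Viterbi search over the segmentation lattice.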


International Symposium on Mixed and Augmented Reality | 2010

Task support system by displaying instructional video onto AR workspace

Michihiko Goto; Yuko Uematsu; Hideo Saito; Shuji Senda; Akihiko Iketani

This paper presents an instructional support system based on augmented reality (AR). This system helps a user to work intuitively by overlaying visual information in the same way of a navigation system. In usual AR systems, the contents to be overlaid onto real space are created with 3D Computer Graphics. In most cases, such contents are newly created according to applications. However, there are many 2D videos that show how to take apart or build electric appliances and PCs, how to cook, etc. Therefore, our system employs such existing 2D videos as instructional videos. By transforming an instructional video to display, according to the users view, and by overlaying the video onto the users view space, the proposed system intuitively provides the user with visual guidance. In order to avoid the problem that the display of the instructional video and the users view may be visually confused, we add various visual effects to the instructional video, such as transparency and enhancement of contours. By dividing the instructional video into sections according to the operations to be carried out in order to complete a certain task, we ensure that the user can interactively move to the next step in the instructional video after a certain operation is completed. Therefore, the user can carry on with the task at his/her own pace. In the usability test, users evaluated the use of the instructional video in our system through two tasks: a task involving building blocks and an origami task. As a result, we found that a users visibility improves when the instructional video is transformed to display according to his/her view. Further, for the evaluation of visual effects, we can classify these effects according to the task and obtain the guideline for the use of our system as an instructional support system for performing various other tasks.
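
A minimal Python/OpenCV sketch of the overlay step described above, not the authors' implementation. It assumes a 3x3 homography H that maps the instructional-video frame into the user's current view has already been estimated (e.g. from markers or feature tracking), and illustrates the transparency and contour-enhancement effects.

import cv2
import numpy as np

def overlay_instruction(view_bgr, video_frame_bgr, H, alpha=0.5):
    # Warp the instructional frame into the user's current viewpoint.
    h, w = view_bgr.shape[:2]
    warped = cv2.warpPerspective(video_frame_bgr, H, (w, h))
    # Enhance contours so the instruction stays readable once made transparent.
    edges = cv2.Canny(cv2.cvtColor(warped, cv2.COLOR_BGR2GRAY), 50, 150)
    warped[edges > 0] = (0, 255, 0)
    # Blend the semi-transparent instruction over the live camera view,
    # but only where the warped frame actually has content.
    blended = cv2.addWeighted(view_bgr, 1.0 - alpha, warped, alpha, 0)
    mask = warped.sum(axis=2) > 0
    out = view_bgr.copy()
    out[mask] = blended[mask]
    return out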


Asian Conference on Computer Vision | 2010

Image inpainting based on probabilistic structure estimation

Takashi Shibata; Akihiko Iketani; Shuji Senda

A novel inpainting method based on probabilistic structure estimation has been developed. The method consists of two steps. First, an initial image, which captures the rough structure and colors in the missing region, is estimated. This image is generated by probabilistically interpolating the gradient inside the missing region and then flooding the colors on the boundary into the missing region using a Markov random field. Second, the inpainted image is synthesized by locally replacing the missing region with patches similar to both the adjacent patches and the initial image. Since the patch replacement process is guided by the initial image, the inpainted image is guaranteed to preserve the underlying structure. This also enables patches to be replaced in a greedy manner, i.e., without optimization. Experiments show that the proposed method outperforms previous methods in terms of both subjective image quality and computational speed.
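
A minimal sketch, not the authors' implementation, of the guided, greedy patch selection the second step describes: each candidate exemplar patch is scored against both the initial structure estimate and the already-filled neighborhood, and the cheapest one is taken without any global optimization. The names and the weighting are illustrative assumptions.

import numpy as np

def best_patch(init_patch, known_patch, known_mask, sources, w=0.5):
    """init_patch: target patch cut from the initial structure image;
    known_patch / known_mask: already-filled pixels around the target and their mask;
    sources: candidate exemplar patches taken from the known image region."""
    def cost(s):
        # Agreement with the initial estimate keeps the underlying structure;
        # agreement with the filled neighbors keeps the patchwork seamless.
        return (w * np.sum((s - init_patch) ** 2) +
                (1.0 - w) * np.sum(((s - known_patch) ** 2)[known_mask]))
    # Greedy choice: take the single cheapest exemplar, no optimization.
    return min(sources, key=cost)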


Pervasive Computing and Communications | 2004

Camera-typing interface for ubiquitous information services

Shuji Senda; Kyosuke Nishiyama; Toshiyuki Asahi; Keiji Yamada

We propose a new technology called the camera-typing interface, which can read printed characters such as URLs with a low-resolution camera. It enables a type of ubiquitous information service using camera-equipped smart phones. The method offers two main advantages: automatic concatenation of sequential shots and automatic error correction by re-shooting. The automatic concatenation allows a user to take segmented images of a character string, so a low-resolution camera can be used as the interface device. The automatic error correction allows the user to correct misrecognized characters simply by retaking images around them, providing an easy and natural way of correcting errors. We present two experimental results to demonstrate the effectiveness of our method; both indicate that the proposed method is helpful.
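
A minimal sketch, not the authors' algorithm, of the two ideas named above: joining per-shot recognition results at their overlap, and correcting characters by keeping whichever of the original and re-shot hypotheses is more confident. The overlap detection here is a simple longest-common-substring heuristic.

from difflib import SequenceMatcher

def concatenate(left: str, right: str) -> str:
    """Join two per-shot recognition results that overlap at the seam."""
    m = SequenceMatcher(None, left, right).find_longest_match(0, len(left), 0, len(right))
    if m.size == 0:
        return left + right          # no overlap found; plain concatenation
    return left[:m.a] + right[m.b:]  # keep the overlapping characters once

def correct(chars, confidences, reshot_chars, reshot_confidences):
    """Replace characters where the re-shot recognition is more confident."""
    return "".join(r if rc > c else ch
                   for ch, c, r, rc in zip(chars, confidences, reshot_chars, reshot_confidences))

For example, concatenate("http://exam", "xample.com") returns "http://example.com".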


International Geoscience and Remote Sensing Symposium | 2007

A fast progressive lossless image compression method for space and satellite images

Jun Takada; Shuji Senda; Hiroki Hihara; Masahiro Hamai; Takeshi Oshima; Shinji Hagino

This paper presents a fast lossless image compression method for space and satellite images. The method, which we call HIREW, is based on hierarchical interpolating prediction and adaptive Golomb-Rice coding, and achieves 7-35 times faster compression than existing methods such as JPEG2000 and JPEG-LS at similar compression ratios. Unlike JPEG-LS, it also supports progressive decompression with resolution scaling. An implementation of this codec will be used in the Japan Aerospace Exploration Agency (JAXA)'s Venus Climate Orbiter mission (PLANET-C).
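
A minimal sketch, not the HIREW codec, of the two ingredients the abstract names: an interpolating prediction between coarser-level samples and Golomb-Rice coding of the signed residual. In the real codec the Rice parameter k is adapted to the local residual statistics.

def interpolate_predict(left: int, right: int) -> int:
    # Hierarchical interpolating prediction: predict the in-between sample
    # as the rounded average of its two coarser-level neighbors.
    return (left + right + 1) // 2

def rice_encode(residual: int, k: int) -> str:
    """Return the Golomb-Rice code word for one signed prediction residual."""
    # Zig-zag map signed residuals onto non-negative integers: 0,-1,1,-2,... -> 0,1,2,3,...
    value = 2 * residual if residual >= 0 else -2 * residual - 1
    quotient, remainder = value >> k, value & ((1 << k) - 1)
    # Unary-coded quotient, then k binary bits for the remainder.
    return "1" * quotient + "0" + (format(remainder, f"0{k}b") if k > 0 else "")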


European Conference on Computer Vision | 2014

Visualization of Temperature Change Using RGB-D Camera and Thermal Camera

Wataru Nakagawa; Kazuki Matsumoto; Francois de Sorbier; Maki Sugimoto; Hideo Saito; Shuji Senda; Takashi Shibata; Akihiko Iketani

In this paper, we present a system for visualizing temperature changes in a scene using an RGB-D camera coupled with a thermal camera. This system has applications in the maintenance of power equipment. We propose a two-stage approach consisting of an offline phase and an online phase. In the offline phase, after calibration, we generate a 3D reconstruction of the scene with both the color and the thermal data. We then apply the Viewpoint Generative Learning (VGL) method to the colored 3D model to create a database of descriptors obtained from features that are robust to strong viewpoint changes. In the online phase, we compare the descriptors extracted from the current view against those in the database to estimate the pose of the camera. We can then display the current thermal data and compare it with the data saved during the offline phase.
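
A minimal Python/OpenCV sketch of the online pose-estimation step described above, not the authors' code. It assumes the offline stage already produced a database of feature descriptors with their 3D positions on the colored model (db_descriptors, db_points3d) and that the camera intrinsics K are known.

import cv2
import numpy as np

def estimate_pose(frame_gray, db_descriptors, db_points3d, K):
    """db_descriptors: NxD float32 descriptors from the VGL-style database;
    db_points3d: Nx3 corresponding 3D points on the colored model;
    K: 3x3 camera intrinsic matrix."""
    # Extract features in the current view and match them against the database.
    detector = cv2.SIFT_create()
    kps, desc = detector.detectAndCompute(frame_gray, None)
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matches = matcher.match(desc, db_descriptors)
    # 2D-3D correspondences give the camera pose via PnP with RANSAC.
    img_pts = np.float32([kps[m.queryIdx].pt for m in matches])
    obj_pts = np.float32([db_points3d[m.trainIdx] for m in matches])
    ok, rvec, tvec, _ = cv2.solvePnPRansac(obj_pts, img_pts, K, None)
    return (rvec, tvec) if ok else None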


Asian Conference on Computer Vision | 2012

Single image super resolution reconstruction in perturbed exemplar sub-space

Takashi Shibata; Akihiko Iketani; Shuji Senda

This paper presents a novel single-image super resolution method that reconstructs a super resolution image in an exemplar sub-space. The proposed method first synthesizes LR patches by perturbing the image formation model and stores them in a dictionary. An SR image is generated by replacing the input image patchwise with the HR patch in the dictionary whose LR patch best matches the input. The abundance of exemplars enables the proposed method to synthesize SR images within the exemplar sub-space, which gives numerous advantages over previous methods, such as robustness against noise. Experiments on document images show that the proposed method outperforms previous methods not only in image quality but also in recognition rate, which is about 30% higher than that of previous methods.
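
A minimal sketch, not the authors' method, of the patchwise replacement described above. It assumes the dictionary has already been built by degrading HR exemplars through (perturbed) image formation models into paired LR/HR patches.

import numpy as np

def super_resolve_patch(lr_patch, lr_dictionary, hr_dictionary):
    """lr_patch: flattened low-resolution input patch;
    lr_dictionary: N x d array of synthesized LR exemplar patches;
    hr_dictionary: N x D array of the corresponding HR exemplar patches."""
    # Find the LR exemplar that best matches the input patch ...
    idx = int(np.argmin(np.sum((lr_dictionary - lr_patch) ** 2, axis=1)))
    # ... and replace the input patch with its high-resolution counterpart.
    return hr_dictionary[idx]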


International Symposium on Mixed and Augmented Reality | 2014

[DEMO] RGB-D-T camera system for AR display of temperature change

Kazuki Matsumoto; Wataru Nakagawa; Francois de Sorbier; Maki Sugimoto; Hideo Saito; Shuji Senda; Takashi Shibata; Akihiko Iketani

Anomalies in power equipment can be found from temperature changes relative to its normal state. In this paper we present a system for visualizing temperature changes in a scene using a thermal 3D model. Our approach is based on two precomputed 3D models of the target scene acquired with an RGB-D camera coupled with a thermal camera: the first model contains the RGB information, while the second contains the thermal information. To compare the temperature between the model and the current time, we accurately estimate the pose of the camera by finding keypoint correspondences between the current view and the RGB 3D model. Knowing the pose of the camera, we are then able to compare the thermal 3D model with the current temperature from any viewpoint.
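
A minimal Python/OpenCV sketch, not the demo's code, of the final comparison step: once the camera pose is known and the thermal 3D model has been rendered from that pose, the temperature change is a per-pixel difference displayed as a color map.

import cv2
import numpy as np

def temperature_change_map(current_thermal, rendered_reference_thermal):
    """Both inputs are float32 temperature images aligned to the same viewpoint."""
    diff = current_thermal - rendered_reference_thermal
    # Normalize the signed difference to 0..255 and apply a color map for display.
    norm = cv2.normalize(diff, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    return cv2.applyColorMap(norm, cv2.COLORMAP_JET)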


Archive | 2003

Character input device, character input method and character input program

Kyosuke Nishiyama; Shuji Senda


Archive | 2008

Screen data transmitting system, screen data transmitting server, screen data transmitting method and program recording medium

Akitake Mitsuhashi; Shuji Senda
