Publication


Featured research published by Atsunori Moteki.


User Interface Software and Technology (UIST) | 2011

Gesture keyboard requiring only one camera

Taichi Murase; Atsunori Moteki; Noriaki Ozawa; Nobuyuki Hara; Takehiro Nakai; Katsuhito Fujimoto

In this paper, we propose a novel gesture-based virtual keyboard (Gesture Keyboard) with a QWERTY key layout that requires only one camera. Gesture Keyboard tracks the user's fingers and recognizes gestures as input, and each virtual key follows its corresponding finger. It is therefore possible to input characters at the user's preferred hand position even if the hands are displaced during input. Because Gesture Keyboard requires only one camera to obtain sensor information, keyboard-less devices can incorporate it easily.


Augmented Human International Conference | 2012

Gesture keyboard with a machine learning requiring only one camera

Taichi Murase; Atsunori Moteki; Genta Suzuki; Takahiro Nakai; Nobuyuki Hara; Takahiro Matsuda

In this paper, the authors propose a novel gesture-based virtual keyboard (Gesture Keyboard) that uses the standard QWERTY layout, requires only one camera, and employs a machine learning technique. Gesture Keyboard tracks the user's fingers and recognizes finger motions to judge key input in the horizontal direction. Real AdaBoost (Adaptive Boosting), a machine learning technique, uses HOG (Histograms of Oriented Gradients) features extracted from an image of the user's hands to estimate keys in the depth direction. Each virtual key follows its corresponding finger, so it is possible to input characters at the user's preferred hand position even if the user displaces his hands while typing. Additionally, because Gesture Keyboard requires only one camera, keyboard-less devices can implement this system easily. We show the effectiveness of using a machine learning technique for estimating depth.
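The HOG features mentioned in the abstract are orientation histograms of image gradients, computed per cell and normalized; a boosted classifier such as Real AdaBoost is then trained on these vectors. The sketch below is a generic, minimal HOG computation in NumPy for illustration only, not the authors' implementation; the function name `hog_features`, the 8-pixel cell size, and the 9 orientation bins are assumptions (common defaults, not taken from the paper).

```python
import numpy as np

def hog_features(img, cell=8, bins=9):
    """Minimal HOG descriptor: per-cell gradient-orientation histograms,
    each L2-normalized, concatenated into one feature vector."""
    # Central-difference gradients (borders left at zero).
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    gx[:, 1:-1] = img[:, 2:] - img[:, :-2]
    gy[1:-1, :] = img[2:, :] - img[:-2, :]

    mag = np.hypot(gx, gy)                       # gradient magnitude
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0  # unsigned orientation

    h, w = img.shape
    feats = []
    for y in range(0, h - cell + 1, cell):
        for x in range(0, w - cell + 1, cell):
            m = mag[y:y + cell, x:x + cell].ravel()
            a = ang[y:y + cell, x:x + cell].ravel()
            # Magnitude-weighted orientation histogram for this cell.
            hist, _ = np.histogram(a, bins=bins, range=(0.0, 180.0), weights=m)
            feats.append(hist / (np.linalg.norm(hist) + 1e-6))
    return np.concatenate(feats)

# A 32x32 image with a vertical step edge yields 4x4 cells x 9 bins = 144 features.
edge = np.zeros((32, 32))
edge[:, 16:] = 1.0
f = hog_features(edge)
```

A boosted classifier (e.g. scikit-learn's `AdaBoostClassifier`) could then map such vectors of the hand region to discrete depth classes, in the spirit of the paper's depth estimation step.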


IEEE Transactions on Visualization and Computer Graphics | 2018

Handheld Guides in Inspection Tasks: Augmented Reality versus Picture

Jarkko Polvi; Takafumi Taketomi; Atsunori Moteki; Toshiyuki Yoshitake; Toshiyuki Fukuoka; Goshiro Yamamoto; Christian Sandor; Hirokazu Kato

Inspection tasks focus on observation of the environment and are required in many industrial domains. Inspectors usually execute these tasks by using a guide, such as a paper manual, while directly observing the environment. The effort required to match the information in a guide with the information in the environment, and the constant gaze shifts between the two, can severely lower an inspector's work efficiency. Augmented reality (AR) allows the information in a guide to be overlaid directly on the environment. This can decrease the effort required for information matching, thus increasing work efficiency. AR guides on head-mounted displays (HMDs) have been shown to increase efficiency. Handheld AR (HAR) is not as efficient as HMD-AR in terms of manipulability, but is more practical and offers better information input and sharing capabilities. In this study, we compared two handheld guides: an AR interface that shows 3D-registered annotations, that is, annotations having a fixed 3D position in the AR environment, and a non-AR picture interface that displays non-registered annotations on static images. We focused on inspection tasks that involve high information density and require the user to move and to perform several viewpoint alignments. The results of our comparative evaluation showed that the AR interface yielded lower task completion times, fewer errors, fewer gaze shifts, and a lower subjective workload. We are the first to present findings of a comparative study of an HAR and a picture interface in tasks that require the user to move and execute viewpoint alignments, focusing only on direct observation. Our findings can be useful for AR practitioners and psychology researchers.


Symposium on 3D User Interfaces | 2012

Poster: Head gesture 3D interface using a head mounted camera

Atsunori Moteki; Nobuyuki Hara; Taichi Murase; Noriaki Ozawa; Takehiro Nakai; Takahiro Matsuda; Katsuhito Fujimoto

In this paper, we propose a real-world UI that uses head gestures. The UI detects the user's head motion from images obtained by a head-mounted camera (HMC). It estimates the relative position and distance between the user's head and the objects the user is viewing. To prevent erroneous judgments, a head-specific motion model is applied during gesture recognition. As feedback to the user, detailed object information is displayed on a head-mounted display (HMD). This UI allows hands-free interaction with surrounding objects. We demonstrate the UI's effectiveness through experiments.


IEEE Virtual Reality Conference | 2016

Fast and accurate relocalization for keyframe-based SLAM using geometric model selection

Atsunori Moteki; Nobuyasu Yamaguchi; Ayu Karasudani; Toshiyuki Yoshitake

In this paper, we propose a relocalization method for keyframe-based SLAM that enables real-time, accurate recovery from tracking failures. To realize an AR application in a real-world situation, not only accurate camera tracking but also fast and accurate relocalization after tracking failure is required. Previous keyframe-based relocalization methods have drawbacks in speed and accuracy. The proposed method adaptively selects between two algorithms depending on the relative camera pose between the current frame and a target keyframe. In addition, it estimates the proportion of false matches to speed up RANSAC-based model estimation. We demonstrate the effectiveness of our method through an evaluation on a public tracking dataset.
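The speed-up from estimating the proportion of false matches follows the standard adaptive-RANSAC idea: once an inlier ratio w is observed, the number of iterations needed to sample an all-inlier minimal set with probability p is log(1-p) / log(1-w^s), so the iteration budget can be shrunk on the fly. The sketch below illustrates this on a toy 2D line-fitting problem; it is a generic illustration of adaptive RANSAC, not the paper's relocalization pipeline, and the function names and thresholds are assumptions.

```python
import math
import random

def fit_line(p1, p2):
    """Line through two points as normalized (a, b, c) with a*x + b*y + c = 0."""
    (x1, y1), (x2, y2) = p1, p2
    a, b = y2 - y1, x1 - x2
    n = math.hypot(a, b)
    if n == 0:
        return None  # degenerate sample
    return a / n, b / n, -(a * x1 + b * y1) / n

def adaptive_ransac_line(points, thresh=0.2, p=0.999, seed=0):
    """RANSAC line fit whose iteration budget shrinks as the estimated
    inlier ratio (i.e. 1 - false-match proportion) improves."""
    rng = random.Random(seed)
    best_model, best_inliers = None, []
    max_iters, i = 1000, 0
    while i < max_iters:
        i += 1
        model = fit_line(*rng.sample(points, 2))  # minimal sample, s = 2
        if model is None:
            continue
        a, b, c = model
        inliers = [q for q in points if abs(a * q[0] + b * q[1] + c) < thresh]
        if len(inliers) > len(best_inliers):
            best_model, best_inliers = model, inliers
            w = len(inliers) / len(points)  # current inlier-ratio estimate
            if 0.0 < w < 1.0:
                # Standard adaptive stopping rule: enough iterations to hit
                # one all-inlier sample with probability p.
                need = math.ceil(math.log(1 - p) / math.log(1 - w * w))
                max_iters = min(max_iters, need)
    return best_model, best_inliers

# Toy data: 20 points on y = 2x + 1 plus three gross outliers.
pts = [(x, 2 * x + 1) for x in range(20)] + [(5, 40), (7, -30), (12, 60)]
line, inliers = adaptive_ransac_line(pts)
```

In the relocalization setting, the same stopping rule applies with the minimal sample size s of whichever geometric model (e.g. homography or essential matrix) was selected.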


Archive | 2013

Image processing device, image processing method, and a computer-readable non-transitory medium

Nobuyuki Hara; Katsuhito Fujimoto; Taichi Murase; Atsunori Moteki


Archive | 2012

Device and method for detecting finger position

Taichi Murase; Nobuyuki Hara; Atsunori Moteki


Archive | 2013

Character input method and information processing apparatus

Taichi Murase; Nobuyuki Hara; Atsunori Moteki; Takahiro Matsuda; Katsuhito Fujimoto


Archive | 2013

Method for inputting character and information processing apparatus

Taichi Murase; Nobuyuki Hara; Atsunori Moteki; Takahiro Matsuda


Archive | 2015

Orientation estimation apparatus, orientation estimation method, and computer-readable recording medium storing orientation estimation computer program

Atsunori Moteki; Nobuyasu Yamaguchi; Takahiro Matsuda
