Publication


Featured research published by Taichi Murase.


User Interface Software and Technology | 2011

Gesture keyboard requiring only one camera

Taichi Murase; Atsunori Moteki; Noriaki Ozawa; Nobuyuki Hara; Takehiro Nakai; Katsuhito Fujimoto

In this paper, we propose a novel gesture-based virtual keyboard (Gesture Keyboard) with a QWERTY key layout that requires only one camera. Gesture Keyboard tracks the user's fingers and recognizes their gestures as input, and each virtual key follows its corresponding finger. It is therefore possible to input characters at the user's preferred hand position, even if the hands are displaced while typing. Because Gesture Keyboard requires only one camera to obtain sensor information, keyboard-less devices can easily incorporate it.
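The abstract does not detail how the finger tracking is implemented. The sketch below shows one generic way to find fingertip candidates from a single camera frame with OpenCV, using skin-color segmentation and convex-hull analysis; the color range and distance threshold are illustrative assumptions, not the paper's method.

```python
# Minimal sketch: single-camera fingertip detection via skin-color
# segmentation and convex-hull analysis (OpenCV). Thresholds are illustrative.
import cv2
import numpy as np

def detect_fingertips(frame_bgr):
    """Return candidate fingertip points (x, y) found in one camera frame."""
    # Segment skin-colored pixels in HSV space (illustrative range).
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (0, 40, 60), (25, 255, 255))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))

    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return []
    hand = max(contours, key=cv2.contourArea)  # assume the largest blob is the hand
    moments = cv2.moments(hand)
    if moments["m00"] == 0:
        return []
    cx, cy = moments["m10"] / moments["m00"], moments["m01"] / moments["m00"]

    # Fingertips tend to be convex-hull points far from the palm centroid.
    hull = cv2.convexHull(hand)
    tips = [tuple(p[0]) for p in hull
            if np.hypot(p[0][0] - cx, p[0][1] - cy) > 0.6 * np.sqrt(cv2.contourArea(hand))]
    return tips

cap = cv2.VideoCapture(0)            # the single camera the system relies on
ok, frame = cap.read()
if ok:
    print(detect_fingertips(frame))
cap.release()
```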


Augmented Human International Conference | 2012

Gesture keyboard with a machine learning requiring only one camera

Taichi Murase; Atsunori Moteki; Genta Suzuki; Takahiro Nakai; Nobuyuki Hara; Takahiro Matsuda

In this paper, the authors propose a novel gesture-based virtual keyboard (Gesture Keyboard) that uses a standard QWERTY key layout, requires only one camera, and employs a machine learning technique. Gesture Keyboard tracks the user's fingers and recognizes finger motions to determine key input in the horizontal direction. Real AdaBoost (Adaptive Boosting), a machine learning technique, uses HOG (Histograms of Oriented Gradients) features of an image of the user's hands to estimate keys in the depth direction. Each virtual key follows its corresponding finger, so it is possible to input characters at the user's preferred hand position even if the user displaces their hands while typing. Additionally, because Gesture Keyboard requires only one camera, keyboard-less devices can implement this system easily. We show the effectiveness of using a machine learning technique for estimating depth.
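As a rough illustration of the depth-direction estimation described above, the sketch below extracts HOG features from a hand patch and trains a boosted classifier to predict a key row. scikit-learn's standard AdaBoostClassifier stands in for Real AdaBoost, and the training data is synthetic placeholder data, so this is only a sketch of the idea rather than the paper's pipeline.

```python
# Sketch of the depth-direction key estimation idea: HOG features of a hand
# image fed to a boosted classifier. AdaBoostClassifier stands in for
# Real AdaBoost; the training data here is synthetic placeholder data.
import numpy as np
from skimage.feature import hog
from sklearn.ensemble import AdaBoostClassifier

def hog_feature(gray_patch):
    """HOG descriptor of a 64x64 grayscale hand patch."""
    return hog(gray_patch, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2), feature_vector=True)

# Placeholder training set: patches labeled by key row in the depth direction
# (0 = near row, 1 = middle row, 2 = far row).
rng = np.random.default_rng(0)
patches = rng.random((90, 64, 64))
labels = np.repeat([0, 1, 2], 30)
X = np.array([hog_feature(p) for p in patches])

clf = AdaBoostClassifier(n_estimators=100)
clf.fit(X, labels)

test_patch = rng.random((64, 64))
print("estimated depth row:", clf.predict([hog_feature(test_patch)])[0])
```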


Intelligent User Interfaces | 2016

Projecting Recorded Expert Hands at Real Size, at Real Speed, and onto Real Objects for Manual Work

Genta Suzuki; Taichi Murase; Yusaku Fujii

Expert manual workers in factories assemble more efficiently than novices because their movements are optimized for the tasks. In this paper, we present an approach that projects the hand movements of experts at real size, at real speed, and onto real objects in order to match the manual work movements of novices to those of experts. We prototyped a projector-camera system that projects the virtual hands of experts. We conducted a user study in which participants performed tasks after watching experts work under two conditions: using a display and using our prototype system. The results show that our prototype's users worked more precisely and felt the tasks were easier. User ratings also show that, compared with display users, our prototype's users watched the expert videos more attentively, memorized them more clearly, and more deliberately tried to work in the way shown in the videos.
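A core step in such a projector-camera system is mapping the recorded video into projector coordinates so the hands land on the real objects at real size. The sketch below assumes a pre-calibrated homography between the recorded work area and the projector image; the point correspondences, file name, and resolutions are placeholders, not values from the paper.

```python
# Sketch: warp a recorded expert-hand frame into projector coordinates with a
# pre-calibrated homography, so hands appear at real size on the work surface.
# The four point correspondences below are placeholders for a real calibration.
import cv2
import numpy as np

# Corners of the work area in the recorded video (pixels) ...
src_pts = np.float32([[0, 0], [1280, 0], [1280, 720], [0, 720]])
# ... and where those corners land in the projector image (pixels).
dst_pts = np.float32([[140, 90], [1180, 80], [1200, 700], [120, 710]])
H = cv2.getPerspectiveTransform(src_pts, dst_pts)

video = cv2.VideoCapture("expert_hands.mp4")   # hypothetical recorded demonstration
while True:
    ok, frame = video.read()
    if not ok:
        break
    # Playing back frame-by-frame at the source FPS keeps the motion at real speed.
    projected = cv2.warpPerspective(frame, H, (1920, 1080))
    cv2.imshow("projector", projected)          # window shown full-screen on the projector
    if cv2.waitKey(33) == 27:                   # ~30 fps; Esc to quit
        break
video.release()
cv2.destroyAllWindows()
```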


Augmented Human International Conference | 2016

Projection Based Virtual Tablets System Involving Robust Tracking of Rectangular Objects and Hands

Yasushi Sugama; Taichi Murase; Yusaku Fujii

We propose a novel projection-based markerless AR system that realizes multiple virtual tablets. The system detects the position and posture of arbitrary rectangular objects, projects a GUI onto them, and detects touch gestures on their surfaces. As a result, no smart device is needed, only an ordinary rectangular object such as a tissue box, book, cushion, or table. Whether for browsing the internet, playing games, or controlling consumer devices such as a TV or air conditioner, a dedicated tablet computer in the living room becomes unnecessary. To realize this system, we developed a novel algorithm that detects arbitrary rectangular objects and recognizes their position and posture without markers. We measured the detection error when objects overlap, and the experimental results show that our algorithm is more robust than existing algorithms.
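The paper's rectangle-detection algorithm is not described in this abstract; the sketch below shows only the generic baseline idea of finding quadrilateral contours with OpenCV, which the proposed method improves on (e.g., robustness to overlap). The file name and area threshold are illustrative.

```python
# Minimal sketch of markerless rectangle detection: find quadrilateral contours
# in an edge map and return their corner points. The real system's algorithm is
# more robust (e.g., to overlap); this is only the generic baseline idea.
import cv2
import numpy as np

def find_rectangles(frame_bgr, min_area=5000):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    rects = []
    for c in contours:
        approx = cv2.approxPolyDP(c, 0.02 * cv2.arcLength(c, True), True)
        if len(approx) == 4 and cv2.contourArea(approx) > min_area and cv2.isContourConvex(approx):
            rects.append(approx.reshape(4, 2))   # 4 corner points per candidate tablet
    return rects

frame = cv2.imread("scene.png")                  # hypothetical camera frame
if frame is not None:
    for corners in find_rectangles(frame):
        print("candidate virtual tablet corners:", corners.tolist())
```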


Symposium on 3D User Interfaces | 2012

Poster: Head gesture 3D interface using a head mounted camera

Atsunori Moteki; Nobuyuki Hara; Taichi Murase; Noriaki Ozawa; Takehiro Nakai; Takahiro Matsuda; Katsuhito Fujimoto

In this paper, we propose a real-world UI that uses head gestures. This UI detects the user's head motion from images captured by a head-mounted camera (HMC). It estimates the relative position and distance between the user's head and the objects the user is viewing. To prevent erroneous judgments, a head-specific motion model is applied in gesture recognition. As feedback to the user, detailed object information is displayed on a head-mounted display (HMD). This UI allows hands-free interaction with surrounding objects. We show the UI's effectiveness through experiments.
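The head-specific motion model is not given in this abstract. As a hedged illustration, the sketch below approximates head motion from a head-mounted camera by averaging sparse optical flow between consecutive frames and thresholding it into a coarse gesture; the thresholds and the use of a webcam in place of an HMC are assumptions.

```python
# Sketch: approximate head motion from a head-mounted camera by the average
# sparse optical flow between consecutive frames, then threshold it into a
# coarse gesture. The paper's head-specific motion model is not reproduced.
import cv2
import numpy as np

def head_motion(prev_gray, cur_gray):
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200, qualityLevel=0.01, minDistance=7)
    if pts is None:
        return np.zeros(2)
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, cur_gray, pts, None)
    flow = (nxt - pts)[status.ravel() == 1]
    return flow.reshape(-1, 2).mean(axis=0) if len(flow) else np.zeros(2)

def classify(motion, thresh=3.0):
    dx, dy = motion
    if abs(dx) < thresh and abs(dy) < thresh:
        return "still"
    return "shake" if abs(dx) > abs(dy) else "nod"   # horizontal vs. vertical head motion

cap = cv2.VideoCapture(0)                            # stands in for the head-mounted camera
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY) if ok else None
while ok:
    ok, cur = cap.read()
    if not ok:
        break
    cur_gray = cv2.cvtColor(cur, cv2.COLOR_BGR2GRAY)
    print(classify(head_motion(prev_gray, cur_gray)))
    prev_gray = cur_gray
cap.release()
```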


International Conference on Consumer Electronics | 2017

Projection-based AR book system involving book posture detection and robust page recognition

Yasushi Sugama; Taichi Murase

We propose a novel projection-based AR book system and the algorithms it requires. Several studies on projection-based AR books exist; however, they force users to place the book on a desktop because they do not exploit affine-invariant recognition, whereas the most comfortable reading position is holding the book with both hands. To solve this problem and facilitate comfortable AR book reading, we develop a novel markerless book-posture detection system that relies on an affine-invariant recognition method.
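The specific affine-invariant recognition method is not described in this abstract. The sketch below illustrates the general approach of recognizing a page and recovering its posture by feature matching plus a RANSAC homography; ORB is used only as a generic stand-in, and the file names are placeholders.

```python
# Sketch: recognize a page and estimate its posture by matching features
# between a stored page image and the camera frame, then fitting a homography.
# ORB stands in here for the paper's affine-invariant recognition method.
import cv2
import numpy as np

orb = cv2.ORB_create(nfeatures=1000)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def page_pose(page_gray, frame_gray, min_matches=15):
    kp1, des1 = orb.detectAndCompute(page_gray, None)
    kp2, des2 = orb.detectAndCompute(frame_gray, None)
    if des1 is None or des2 is None:
        return None
    matches = matcher.match(des1, des2)
    if len(matches) < min_matches:
        return None                      # page not recognized in this frame
    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return H                             # maps page coordinates into the camera frame

page = cv2.imread("page_07.png", cv2.IMREAD_GRAYSCALE)   # hypothetical stored page
frame = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)    # hypothetical camera frame
if page is not None and frame is not None:
    print(page_pose(page, frame))
```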


Proceedings of the 2016 ACM International Conference on Interactive Surfaces and Spaces | 2016

Pose Estimation for a Cuboid with Regular Patterns in an Interactive Assembly-support Projection System

Yuya Obinata; Genta Suzuki; Taichi Murase; Yusaku Fujii

Workers in factories often have to stop an operation to confirm various assembly instructions, for example, component numbers and/or the location to place a component; this is particularly the case with exceptional or inexperienced operations on a mixed-flow production line. Such interruptions are among the most significant factors behind decreasing productivity. In this study, we propose a novel method that estimates, in real time, the pose of a manufactured product on a production line without any augmented reality (AR) markers. The system projects instructions and/or component positions to help a worker process production information quickly. In this paper, we built an experimental assembly-support system using projection-based AR and developed a highly accurate object-pose estimation method for manufactured products. The experimental evaluation indicates that the combination of ORB and our algorithm can detect an object's pose more precisely than ORB alone. We also developed an algorithm that is robust even if part of an object is occluded by a worker's hand. We consider that this system helps workers understand instructions and component positions without the need to stop and confirm assembly instructions, thus enabling more efficient task operation.
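For context, the sketch below implements only the ORB-plus-PnP baseline that the abstract compares against: ORB features from a reference image of one cuboid face are matched to the live frame, and the face pose is recovered with solvePnP. The face size, camera intrinsics, and file names are placeholders, and the paper's own refinement and occlusion handling are not reproduced.

```python
# Sketch of the ORB-only baseline: match ORB features from a reference image of
# one cuboid face to the live frame, then recover the pose with solvePnPRansac
# using the face's known physical size. Intrinsics and sizes are placeholders.
import cv2
import numpy as np

FACE_W, FACE_H = 0.20, 0.12            # physical face size in meters (illustrative)
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])   # placeholder camera intrinsics

orb = cv2.ORB_create()
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def estimate_pose(ref_gray, frame_gray):
    kp_r, des_r = orb.detectAndCompute(ref_gray, None)
    kp_f, des_f = orb.detectAndCompute(frame_gray, None)
    if des_r is None or des_f is None:
        return None
    matches = matcher.match(des_r, des_f)
    if len(matches) < 6:
        return None
    h, w = ref_gray.shape
    # Map reference-image pixels to 3D points on the cuboid face (z = 0 plane).
    obj = np.float32([[kp_r[m.queryIdx].pt[0] / w * FACE_W,
                       kp_r[m.queryIdx].pt[1] / h * FACE_H, 0.0] for m in matches])
    img = np.float32([kp_f[m.trainIdx].pt for m in matches])
    ok, rvec, tvec, _ = cv2.solvePnPRansac(obj, img, K, None)
    return (rvec, tvec) if ok else None

ref = cv2.imread("face_ref.png", cv2.IMREAD_GRAYSCALE)    # hypothetical reference face
frame = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)     # hypothetical camera frame
if ref is not None and frame is not None:
    print(estimate_pose(ref, frame))
```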


Archive | 2013

Image processing device, image processing method, and a computer-readable non-transitory medium

Nobuyuki Hara; Katsuhito Fujimoto; Taichi Murase; Atsunori Moteki


Archive | 2012

Device and method for detecting finger position

Taichi Murase; Nobuyuki Hara; Atsunori Moteki


Archive | 2013

Character input method and information processing apparatus

Taichi Murase; Nobuyuki Hara; Atsunori Moteki; Takahiro Matsuda; Katsuhito Fujimoto
