Taichi Murase
Fujitsu
Publication
Featured research published by Taichi Murase.
user interface software and technology | 2011
Taichi Murase; Atsunori Moteki; Noriaki Ozawa; Nobuyuki Hara; Takehiro Nakai; Katsuhito Fujimoto
In this paper, we propose a novel gesture-based virtual keyboard (Gesture Keyboard) with a QWERTY key layout that requires only one camera. Gesture Keyboard tracks the user's fingers and recognizes gestures as input, and each virtual key follows its corresponding finger. It is therefore possible to input characters at the user's preferred hand position, even if the hands move during typing. Because Gesture Keyboard requires only one camera for sensor input, keyboard-less devices can incorporate it easily.
augmented human international conference | 2012
Taichi Murase; Atsunori Moteki; Genta Suzuki; Takahiro Nakai; Nobuyuki Hara; Takahiro Matsuda
In this paper, the authors propose a novel gesture-based virtual keyboard (Gesture Keyboard) that uses a standard QWERTY layout, requires only one camera, and employs a machine learning technique. Gesture Keyboard tracks the user's fingers and recognizes finger motions to determine key input in the horizontal direction. Real AdaBoost (Adaptive Boosting), a machine learning technique, uses HOG (Histograms of Oriented Gradients) features from an image of the user's hands to estimate keys in the depth direction. Each virtual key follows its corresponding finger, so it is possible to input characters at the user's preferred hand position even if the user moves their hands while typing. Additionally, because Gesture Keyboard requires only one camera, keyboard-less devices can implement this system easily. We show the effectiveness of using a machine learning technique to estimate depth.
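The abstract above pairs HOG features with Real AdaBoost to estimate keystrokes in the depth direction. As an illustrative sketch only (not the authors' implementation), a minimal HOG descriptor over a grayscale patch can be computed with NumPy as follows; the cell size, bin count, and the omission of block normalization are assumptions made for brevity:

```python
import numpy as np

def hog_descriptor(patch, cell=8, bins=9):
    """Minimal HOG: per-cell histograms of gradient orientation,
    weighted by gradient magnitude (no block normalization)."""
    patch = patch.astype(np.float64)
    # Central-difference gradients (zero at the borders)
    gx = np.zeros_like(patch)
    gy = np.zeros_like(patch)
    gx[:, 1:-1] = patch[:, 2:] - patch[:, :-2]
    gy[1:-1, :] = patch[2:, :] - patch[:-2, :]
    mag = np.hypot(gx, gy)
    # Unsigned orientation in [0, 180)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0
    h, w = patch.shape
    desc = []
    for i in range(0, h - cell + 1, cell):
        for j in range(0, w - cell + 1, cell):
            a = ang[i:i + cell, j:j + cell].ravel()
            m = mag[i:i + cell, j:j + cell].ravel()
            hist, _ = np.histogram(a, bins=bins, range=(0, 180), weights=m)
            desc.append(hist)
    desc = np.concatenate(desc)
    n = np.linalg.norm(desc)
    return desc / n if n > 0 else desc

# A 16x16 patch yields 2x2 cells -> 4 histograms of 9 bins = 36 values
patch = np.tile(np.arange(16), (16, 1))  # horizontal ramp: all-vertical edges
d = hog_descriptor(patch)
print(d.shape)  # (36,)
```

In the actual system such descriptors would feed the trained Real AdaBoost classifier; the descriptor itself is the standard Dalal-Triggs construction reduced to its core.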
intelligent user interfaces | 2016
Genta Suzuki; Taichi Murase; Yusaku Fujii
Expert manual workers in factories assemble more efficiently than novices because their movements are optimized for the task. In this paper, we present an approach that projects the hand movements of experts at real size and real speed onto real objects, in order to match the manual work movements of novices to those of experts. We prototyped a projector-camera system that projects the virtual hands of experts. We conducted a user study in which participants worked after watching experts work under two conditions: using a display and using our prototype system. The results show that users of our prototype worked more precisely and felt the tasks were easier. User ratings also show that, compared with display users, our prototype users watched the expert videos more attentively, memorized them more clearly, and deliberately tried to work in the same way shown in the videos.
augmented human international conference | 2016
Yasushi Sugama; Taichi Murase; Yusaku Fujii
We propose a novel projection-based markerless AR system that realizes multiple virtual tablets. The system detects the position and posture of arbitrary rectangular objects, projects a GUI onto those objects, and detects touch gestures on them. As a result, no smart devices are needed, only an ordinary rectangular object, e.g., a tissue box, book, cushion, or table. A tablet computer is no longer required in the living room for browsing the internet, playing games, or controlling consumer devices such as a TV or air conditioner. To realize this system, we developed a novel algorithm that detects arbitrary rectangular objects, recognizing their position and posture without markers. We measured detection error under partial occlusion; the experimental results show that our algorithm is more robust than existing algorithms.
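Once a rectangular object's four corners have been found, its pose for projection purposes can be summarized by a centroid and an orientation. The following sketch is a hypothetical illustration of that summarization step, not the paper's detection algorithm; the corner ordering and the longer-edge orientation convention are assumptions:

```python
import math

def rect_pose(corners):
    """Given four corner points ordered around a rectangle, return
    (center, angle_deg): the centroid and the orientation of the
    longer edge relative to the x-axis, in [0, 180)."""
    cx = sum(p[0] for p in corners) / 4.0
    cy = sum(p[1] for p in corners) / 4.0
    # Two adjacent edges sharing the first corner
    (x0, y0), (x1, y1), _, (x3, y3) = corners
    e1 = (x1 - x0, y1 - y0)
    e2 = (x3 - x0, y3 - y0)
    # Orient along the longer of the two edges
    ex, ey = e1 if math.hypot(*e1) >= math.hypot(*e2) else e2
    angle = math.degrees(math.atan2(ey, ex)) % 180.0
    return (cx, cy), angle

# Axis-aligned 4x2 rectangle centered at (2, 1)
pose = rect_pose([(0, 0), (4, 0), (4, 2), (0, 2)])
print(pose)  # ((2.0, 1.0), 0.0)
```

A real system would obtain the corners from contour detection in camera images and would also need a homography to map between camera and projector coordinates; both are outside this sketch.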
symposium on 3d user interfaces | 2012
Atsunori Moteki; Nobuyuki Hara; Taichi Murase; Noriaki Ozawa; Takehiro Nakai; Takahiro Matsuda; Katsuhito Fujimoto
In this paper, we propose a real-world UI that uses head gestures. The UI detects the user's head motion from images captured by a head-mounted camera (HMC) and estimates the relative position and distance between the user's head and the objects being viewed. To prevent erroneous judgments, a head-specific motion model is applied during gesture recognition. As feedback to the user, detailed object information is displayed on a head-mounted display (HMD). This UI allows hands-free interaction with surrounding objects. We demonstrate the UI's effectiveness through experiments.
international conference on consumer electronics | 2017
Yasushi Sugama; Taichi Murase
We propose a novel projection-based AR book system and its required algorithms. There are several studies on projection-based AR books; however, users always have to place the book on a desktop because affine-invariant recognition is not exploited, whereas the most comfortable position is holding the book with both hands. To solve this problem and enable comfortable AR book reading, we develop a novel markerless book-posture detection system that relies on an affine-invariant recognition method.
Proceedings of the 2016 ACM International Conference on Interactive Surfaces and Spaces | 2016
Yuya Obinata; Genta Suzuki; Taichi Murase; Yusaku Fujii
Workers in factories often have to stop an operation to confirm assembly instructions, for example, component numbers or where to place a component; this is particularly the case with exceptional or unfamiliar operations on a mixed-flow production line. Such interruptions are one of the most significant factors in reduced productivity. In this study, we propose a novel method that estimates, in real time, the pose of a manufactured product on a production line without any augmented reality (AR) markers. The system projects instructions and/or component positions to help workers absorb production information quickly. In this paper, we built an experimental assembly-support system using projection-based AR and developed a highly accurate pose estimation method for manufactured products. The experimental evaluation indicates that combining ORB with our algorithm detects an object's pose more precisely than ORB alone. We also developed an algorithm that is robust even when part of the object is occluded by a worker's hand. We consider that this system helps workers understand instructions and component positions without having to stop and confirm assembly instructions, thus enabling more efficient task execution.
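Occlusion by a worker's hand corrupts some feature matches, so a robust estimator must discard them before fitting a pose. The following sketch illustrates that general idea with a RANSAC-style rigid 2D pose fit over point correspondences in NumPy; it is not the paper's algorithm, and the minimal sample size, iteration count, and inlier threshold are assumptions:

```python
import numpy as np

def rigid_2d(src, dst):
    """Least-squares rotation + translation mapping src -> dst (2D Kabsch)."""
    sc, dc = src.mean(0), dst.mean(0)
    H = (src - sc).T @ (dst - dc)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # forbid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dc - R @ sc
    return R, t

def ransac_pose(src, dst, iters=200, thresh=0.5, seed=0):
    """RANSAC over 2-point minimal samples; refits the pose on the
    largest inlier set, discarding occlusion-corrupted matches."""
    rng = np.random.default_rng(seed)
    best = None
    for _ in range(iters):
        idx = rng.choice(len(src), 2, replace=False)
        R, t = rigid_2d(src[idx], dst[idx])
        err = np.linalg.norm(src @ R.T + t - dst, axis=1)
        inliers = err < thresh
        if best is None or inliers.sum() > best.sum():
            best = inliers
    return rigid_2d(src[best], dst[best])

# Square rotated 90 degrees, with one match corrupted (as if occluded)
src = np.array([[0, 0], [1, 0], [1, 1], [0, 1], [0.5, 0.5]], float)
theta = np.pi / 2
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
dst = src @ R_true.T
dst[4] += 10.0                        # outlier simulating occlusion
R, t = ransac_pose(src, dst)
print(np.round(R, 3))
```

The recovered rotation matches the true 90-degree rotation despite the corrupted match; a full 3D pose estimator over ORB matches follows the same reject-then-refit pattern with a perspective model instead of a rigid 2D one.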
Archive | 2013
Nobuyuki Hara; Katsuhito Fujimoto; Taichi Murase; Atsunori Moteki
Archive | 2012
Taichi Murase; Nobuyuki Hara; Atsunori Moteki
Archive | 2013
Taichi Murase; Nobuyuki Hara; Atsunori Moteki; Takahiro Matsuda; Katsuhito Fujimoto