Genta Suzuki
Fujitsu
Publication
Featured research published by Genta Suzuki.
Human Computer Interaction with Mobile Devices and Services | 2008
Genta Suzuki; Nobuyasu Yamaguchi; Shigeyoshi Nakamura; Hirotaka Chiba
This demonstration shows a novel interaction between mobile devices and nearby devices using digital images on a mobile display. The interaction needs neither special hardware for communication nor additional software for existing mobile devices. Users interact with nearby devices in three steps: 1) selecting an image, 2) displaying the image in full-screen mode, and 3) holding the display over a camera connected to the nearby device. The key technology is FPcode (Fine Picture code), a kind of steganography that can be invisibly embedded in printed images. To apply FPcode to images on mobile displays, we developed a camera control method that reduces the influence of moiré stripes on the display and of variations in display brightness. In the demonstration, we show an application using FPcode on a mobile display.
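The abstract does not describe FPcode's actual encoding, so the following is only a generic illustration of the steganography idea it names: hiding bits invisibly by nudging least-significant bits of pixel values. The function names and the LSB scheme are assumptions for illustration, not FPcode itself.

```python
# Illustrative sketch only: FPcode's encoding is proprietary, so this shows
# the generic idea of invisible embedding via least-significant-bit (LSB)
# tweaks on grayscale pixel values (0-255 ints).

def embed_bits(pixels, bits):
    """Hide a bit string in the LSBs of a list of pixel values."""
    assert len(bits) <= len(pixels)
    out = list(pixels)
    for i, b in enumerate(bits):
        out[i] = (out[i] & ~1) | int(b)   # overwrite only the lowest bit
    return out

def extract_bits(pixels, n):
    """Recover the first n hidden bits."""
    return "".join(str(p & 1) for p in pixels[:n])

msg = "1011001"
cover = [120, 121, 119, 200, 201, 50, 51, 52]
stego = embed_bits(cover, msg)
print(extract_bits(stego, len(msg)))                  # -> 1011001
print(max(abs(a - b) for a, b in zip(cover, stego)))  # per-pixel change <= 1
```

Because each pixel changes by at most one gray level, the payload is invisible to the eye, which is the property the abstract attributes to FPcode on printed images and displays.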
Augmented Human International Conference | 2012
Taichi Murase; Atsunori Moteki; Genta Suzuki; Takahiro Nakai; Nobuyuki Hara; Takahiro Matsuda
In this paper, the authors propose a novel gesture-based virtual keyboard (Gesture Keyboard) that uses a standard QWERTY keyboard layout, requires only one camera, and employs a machine learning technique. Gesture Keyboard tracks the user's fingers and recognizes finger motions to determine key input in the horizontal direction. Real-AdaBoost (Adaptive Boosting), a machine learning technique, uses HOG (Histograms of Oriented Gradients) features from an image of the user's hands to estimate keys in the depth direction. Each virtual key follows its corresponding finger, so it is possible to input characters at the user's preferred hand position even if the user moves his or her hands while typing. Additionally, because Gesture Keyboard requires only one camera, keyboard-less devices can implement the system easily. We show the effectiveness of using a machine learning technique to estimate depth.
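The abstract names HOG features as the input to Real-AdaBoost. As a minimal sketch of what a HOG feature is, the toy function below computes one cell's gradient-orientation histogram from a grayscale patch; the cell size, bin count, and patch are illustrative assumptions, not the authors' exact feature pipeline.

```python
import math

def hog_cell(patch):
    """Gradient-orientation histogram (8 unsigned bins) for one HOG cell.

    `patch` is a 2-D list of grayscale values; a toy stand-in for the
    HOG features the paper feeds into Real-AdaBoost.
    """
    h, w = len(patch), len(patch[0])
    hist = [0.0] * 8
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = patch[y][x + 1] - patch[y][x - 1]          # horizontal gradient
            gy = patch[y + 1][x] - patch[y - 1][x]          # vertical gradient
            mag = math.hypot(gx, gy)
            ang = math.degrees(math.atan2(gy, gx)) % 180.0  # unsigned orientation
            hist[int(ang // 22.5) % 8] += mag               # vote weighted by magnitude
    return hist

# A vertical edge: all gradient energy falls in the 0-degree bin.
patch = [[0, 0, 100, 100]] * 4
print(hog_cell(patch))  # -> [400.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
```

In a full detector, many such cell histograms are concatenated and block-normalized before a classifier such as Real-AdaBoost votes over them.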
Intelligent User Interfaces | 2016
Genta Suzuki; Taichi Murase; Yusaku Fujii
Expert manual workers in factories assemble more efficiently than novices because their movements are optimized for the tasks. In this paper, we present an approach that projects the hand movements of experts at real size and real speed, onto real objects, in order to match the manual work movements of novices to those of experts. We prototyped a projector-camera system that projects the virtual hands of experts. We conducted a user study in which participants worked after watching experts work under two conditions: using a display and using our prototype system. The results show that our prototype users worked more precisely and felt the tasks were easier. User ratings also show that prototype users watched the expert videos more attentively, memorized them more clearly, and more deliberately tried to work in the way shown in the videos, compared with display users.
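Projecting recorded hands "at real size" implies converting between camera pixels, physical millimetres, and projector pixels. The sketch below shows one plausible scale chain using a reference object of known size; the helper names and scale values are hypothetical, not the paper's calibration procedure.

```python
def px_to_mm_scale(marker_px, marker_mm):
    """Millimetres per camera pixel, from a reference object of known size."""
    return marker_mm / marker_px

def to_projector_px(points_px, cam_scale_mm, proj_scale_mm):
    """Map camera-space hand points to projector pixels at real (1:1) size.

    cam_scale_mm: mm per camera pixel; proj_scale_mm: mm per projector pixel.
    Hypothetical helper, not the paper's calibration pipeline.
    """
    k = cam_scale_mm / proj_scale_mm
    return [(x * k, y * k) for x, y in points_px]

cam_scale = px_to_mm_scale(marker_px=200, marker_mm=50)     # 0.25 mm per camera px
proj_scale = 0.5                                            # 0.5 mm per projector px
print(to_projector_px([(400, 120)], cam_scale, proj_scale)) # -> [(200.0, 60.0)]
```

A fingertip recorded 400 camera pixels wide (100 mm) is thus drawn 200 projector pixels wide, i.e. 100 mm on the workbench, preserving real size.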
Human Computer Interaction with Mobile Devices and Services | 2012
Genta Suzuki; Scott R. Klemmer
Remote support for physical tasks often takes longer and involves more mistakes than on-site support, because it is difficult for remote supporters to know what is happening at the site and to demonstrate operations concisely. In this paper, we propose a remote support system named TeleTorchlight that works between a tablet and a mobile camera-projector unit. The system extends traditional voice-chat-based remote support by letting remote supporters draw instructions directly onto physical objects. With our system, on-site workers can easily understand the instructions and efficiently learn tasks, while remote supporters can show both where and how to operate.
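Relaying a supporter's drawn annotation from the tablet to the on-site projector requires mapping stroke coordinates between the two displays. The toy sketch below normalizes tablet pixels to resolution-independent coordinates and re-scales them to projector pixels; the function names and resolutions are assumptions, not TeleTorchlight's actual protocol, which would also need the projector-to-scene calibration.

```python
def normalize(stroke, tab_w, tab_h):
    """Tablet pixels -> resolution-independent [0, 1] coordinates."""
    return [(x / tab_w, y / tab_h) for x, y in stroke]

def to_projector(norm_stroke, proj_w, proj_h):
    """[0, 1] coordinates -> projector pixels at the work site."""
    return [(u * proj_w, v * proj_h) for u, v in norm_stroke]

# A supporter circles a screw on a 1024x768 tablet view...
stroke = [(512, 384), (540, 400)]
# ...and the same annotation is rendered by an 854x480 projector.
print(to_projector(normalize(stroke, 1024, 768), 854, 480))
```

Normalizing first keeps the annotation aligned even when the tablet and projector have different resolutions or aspect ratios.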
Proceedings of the 2016 ACM International Conference on Interactive Surfaces and Spaces | 2016
Yuya Obinata; Genta Suzuki; Taichi Murase; Yusaku Fujii
Workers in factories often have to stop an operation to confirm various assembly instructions, for example, component numbers and/or the location at which to place a component; this is particularly the case with exceptional or unfamiliar operations on a mixed-flow production line. Such interruptions are one of the most significant factors behind decreased productivity. In this study, we propose a novel method that estimates, in real time, the pose of a manufactured product on a production line without any augmented reality (AR) markers. The system projects instructions and/or component positions to help a worker process production information quickly. We built an experimental assembly-support system using projection-based AR and developed a highly accurate pose estimation method for manufactured products. The experimental evaluation indicates that the combination of ORB and our algorithm detects an object's pose more precisely than ORB alone. We also developed an algorithm that remains robust even if part of an object is occluded by a worker's hand. We consider that this system helps workers understand instructions and component positions without needing to stop and confirm assembly instructions, enabling more efficient task operation.
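ORB, which the abstract builds on, produces binary feature descriptors that are matched by Hamming distance. As a minimal self-contained sketch of that matching step (8-bit toy descriptors instead of ORB's 256 bits, and a simple distance threshold rather than the paper's full occlusion-robust pose estimation):

```python
def hamming(a, b):
    """Hamming distance between two binary descriptors stored as ints."""
    return bin(a ^ b).count("1")

def match(query, train, max_dist=40):
    """Nearest-neighbour matching of ORB-style binary descriptors.

    Returns (query_index, train_index) pairs whose best Hamming
    distance is below max_dist; a toy stand-in for a full matcher.
    """
    pairs = []
    for i, q in enumerate(query):
        dists = [hamming(q, t) for t in train]
        j = min(range(len(train)), key=dists.__getitem__)
        if dists[j] < max_dist:
            pairs.append((i, j))
    return pairs

# Toy 8-bit "descriptors": the second query pattern has no close match,
# as happens when a worker's hand occludes part of the object.
query = [0b10110010, 0b01111111]
train = [0b10110011, 0b00000000]
print(match(query, train, max_dist=2))  # -> [(0, 0)]
```

In a marker-less pipeline, surviving matches like these feed a geometric step (e.g. RANSAC over a pose model) that rejects outliers and recovers the object's pose.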
Archive | 2007
Genta Suzuki; Kenichiro Sakai; Tsugio Noda
Archive | 2013
Genta Suzuki
Archive | 2008
Genta Suzuki; Kenichiro Sakai; Hirotaka Chiba
Archive | 2012
Genta Suzuki; Hirotaka Chiba; Kenichiro Sakai
Archive | 2009
Genta Suzuki; Hirotaka Chiba