Network

Latest external collaborations at the country level.

Hotspot

Research topics in which Buntarou Shizuki is active.

Publication

Featured research published by Buntarou Shizuki.


user interface software and technology | 2013

Touch & activate: adding interactivity to existing objects using active acoustic sensing

Makoto Ono; Buntarou Shizuki; Jiro Tanaka

In this paper, we present a novel acoustic touch sensing technique called Touch & Activate. It recognizes a rich set of touches, including grasps, on existing objects by attaching only a vibration speaker and a piezoelectric microphone paired as a sensor, providing an easy hardware configuration for prototyping interactive objects with touch input capability. We conducted a controlled experiment to measure the accuracy of our technique and the trade-off between accuracy and the number of training rounds. Per-user recognition accuracy was 99.6% for five touch gestures on a plastic toy (a simple example) and 86.3% for six hand postures (a complex example). Walk-up-user recognition accuracies for the two applications were 97.8% and 71.2%, respectively. Since these results show promising accuracy for recognizing touch gestures and hand postures, Touch & Activate should be feasible for prototyping interactive objects with touch input capability.
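
The recognition pipeline described above lends itself to a short sketch: excite the object through the vibration speaker, convert the microphone's response to a frequency spectrum, and classify spectra with an SVM. The sampling rate, FFT size, and SVM settings below are illustrative assumptions, not the paper's exact configuration.

```python
# Sketch of an active-acoustic-sensing classifier: spectrum features + SVM.
# FS, N_FFT, and the RBF kernel are assumptions for illustration only.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

FS = 48_000   # sampling rate of the piezo microphone (assumed)
N_FFT = 4096  # analysis window length in samples (assumed)

def spectrum_features(response: np.ndarray) -> np.ndarray:
    """Magnitude spectrum of the mic response to the sweep excitation."""
    windowed = response[:N_FFT] * np.hanning(N_FFT)
    return np.abs(np.fft.rfft(windowed))

def train(responses: list[np.ndarray], labels: list[str]):
    """Fit an SVM on one labeled response per training round."""
    X = np.array([spectrum_features(r) for r in responses])
    return make_pipeline(StandardScaler(), SVC(kernel="rbf")).fit(X, labels)

def recognize(clf, response: np.ndarray) -> str:
    """Classify a new response as one of the trained touch gestures."""
    return clf.predict([spectrum_features(response)])[0]
```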


advanced visual interfaces | 2006

Laser pointer interaction techniques using peripheral areas of screens

Buntarou Shizuki; Takaomi Hisamatsu; Shin Takahashi; Jiro Tanaka

This paper presents new interaction techniques that use a laser pointer to directly manipulate applications displayed on a large screen. The techniques are based on goal crossing; the key idea is that the crossing goals are the four peripheral areas of the screen, which are extremely large. This makes commands very easy to execute, and the crossing-based interaction lets users issue commands quickly and continuously.
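
A minimal sketch of the crossing test, assuming the tracked laser dot arrives as (x, y) pixel coordinates: a command fires when the dot moves from the central region into one of the four peripheral bands. The screen size and band width below are illustrative assumptions.

```python
# Sketch of goal crossing into peripheral screen areas; a command fires
# when the laser dot crosses from the center into one of the four bands.
WIDTH, HEIGHT = 1920, 1080  # screen size in pixels (assumed)
MARGIN = 120                # width of each peripheral band (assumed)

def peripheral_area(x: float, y: float) -> str | None:
    """Return which peripheral band contains the point, if any."""
    if x < MARGIN:
        return "left"
    if x > WIDTH - MARGIN:
        return "right"
    if y < MARGIN:
        return "top"
    if y > HEIGHT - MARGIN:
        return "bottom"
    return None

def detect_crossing(prev: tuple[float, float],
                    curr: tuple[float, float]) -> str | None:
    """Fire a band's command when the dot enters it from the center."""
    if peripheral_area(*prev) is None:
        return peripheral_area(*curr)
    return None
```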


ieee symposium on visual languages | 1997

Supporting design patterns in a visual parallel data-flow programming environment

Masashi Toyoda; Buntarou Shizuki; Shin Takahashi; Satoshi Matsuoka; Etsuya Shibayama

We propose the notion of a visual design pattern (VDP), a visual abstraction representing design aspects of parallel data-flow programs. A VDP serves as a flexible, high-level unit of reuse for visual parallel programming. We introduced support for this notion into the visual parallel programming environment KLIEG, allowing patterns to be defined and used through a simple, easy-to-use interface.


human factors in computing systems | 2016

B2B-Swipe: Swipe Gesture for Rectangular Smartwatches from a Bezel to a Bezel

Yuki Kubo; Buntarou Shizuki; Jiro Tanaka

We present B2B-Swipe, a single-finger swipe gesture for rectangular smartwatches that starts at one bezel and ends at a bezel to enrich the input vocabulary. Because a rectangular smartwatch has four bezels, there are 16 possible B2B-Swipes. Moreover, B2B-Swipe can be implemented on a single-touch screen with no additional hardware. Our study shows that B2B-Swipe can coexist with Bezel Swipe and Flick, with error rates of 3.7% under the sighted condition and 8.0% under the eyes-free condition. Furthermore, B2B-Swipe is potentially highly accurate (error rates of 0% and 0.6% under the sighted and eyes-free conditions, respectively) if the system uses only B2B-Swipes for touch gestures.
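
Because the start and end bezels fully determine the gesture, recognition reduces to attributing each endpoint of the stroke to a bezel. A minimal sketch, assuming touches arrive in screen coordinates; the edge threshold is an illustrative assumption.

```python
# Sketch of B2B-Swipe classification: map the stroke's endpoints to bezels.
W, H = 320, 320  # assumed screen resolution of the smartwatch
EDGE = 20        # assumed distance (px) within which a bezel is credited

def nearest_bezel(x: float, y: float) -> str | None:
    """Map a point near the screen border to one of the four bezels."""
    candidates = {"left": x, "right": W - x, "top": y, "bottom": H - y}
    bezel, dist = min(candidates.items(), key=lambda kv: kv[1])
    return bezel if dist <= EDGE else None

def classify_b2b(down: tuple[float, float],
                 up: tuple[float, float]) -> tuple[str, str] | None:
    """Return (start_bezel, end_bezel) if both endpoints touch a bezel."""
    start, end = nearest_bezel(*down), nearest_bezel(*up)
    return (start, end) if start and end else None

# e.g. classify_b2b((2, 150), (318, 160)) -> ("left", "right")
```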


tangible and embedded interaction | 2015

Sensing Touch Force using Active Acoustic Sensing

Makoto Ono; Buntarou Shizuki; Jiro Tanaka

We present a lightweight technique with which creators can prototype force-sensitive objects by attaching a pair of piezoelectric elements: one used as a vibration speaker and the other as a contact microphone. The key idea behind our technique is that touch force, in addition to the way the object is touched, can be observed as different resonant frequency spectra. We also show that recognizing a touch and estimating its force can be implemented by combining support vector classification (SVC) and support vector regression (SVR). An experiment with an additional pressure sensor suggested that our technique performs well in estimating touch force. We also present a machine-learning tool based on our technique that uses an animated guide, allowing creators to supply both the training data and the labels needed for models with continuous-valued output such as SVR.
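
The SVC + SVR combination maps directly to a short sketch: the classifier decides how the object is touched, and a regressor trained on the same spectral features estimates the continuous force. Feature extraction and model parameters are illustrative assumptions, not the paper's configuration.

```python
# Sketch of combined touch classification (SVC) and force estimation (SVR)
# on resonant-frequency spectra; kernels and scaling are assumptions.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC, SVR

def train(spectra: np.ndarray, touch_labels, forces):
    """spectra: (n_samples, n_bins) array of resonant-frequency spectra."""
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf")).fit(spectra, touch_labels)
    reg = make_pipeline(StandardScaler(), SVR(kernel="rbf")).fit(spectra, forces)
    return clf, reg

def sense(clf, reg, spectrum: np.ndarray):
    """Return (touch type, estimated force) for one new spectrum."""
    x = spectrum.reshape(1, -1)
    return clf.predict(x)[0], float(reg.predict(x)[0])
```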


human factors in computing systems | 2014

VibraInput: two-step PIN entry system based on vibration and visual information

Takuro Kuribara; Buntarou Shizuki; Jiro Tanaka

Current standard PIN entry systems for mobile devices are vulnerable to shoulder surfing. In this paper, we present VibraInput, a two-step PIN entry system for mobile devices based on a combination of vibration and visual information. The system uses only four vibration patterns, with which users enter a digit via two distinct selections. We believe this design secures PIN entry and lets users easily remember and recognize the patterns. Moreover, it can be implemented on current off-the-shelf mobile devices. We designed two prototypes of VibraInput. Our experiment shows a mean failure rate of 4.0%; moreover, the system exhibits good security properties.
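
With four patterns and two selections per digit there are 4 x 4 = 16 codes, enough for ten digits. The particular code table below is purely an illustrative assumption; the abstract does not specify the actual assignment.

```python
# Hypothetical two-step decoding for VibraInput-style entry: two pattern
# selections resolve to one digit. The pattern names and code table are
# assumptions for illustration, not the system's actual mapping.
PATTERNS = ["short", "long", "double", "pulse"]  # assumed pattern names

# Assign digits 0-9 to the first ten (first, second) pattern pairs.
CODE_TABLE = {(PATTERNS[i // 4], PATTERNS[i % 4]): str(i) for i in range(10)}

def decode(first: str, second: str) -> str | None:
    """Resolve two successive pattern selections to a PIN digit."""
    return CODE_TABLE.get((first, second))

# e.g. decode("short", "double") -> "2"
```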


human computer interaction with mobile devices and services | 2013

No-look flick: single-handed and eyes-free Japanese text input system on touch screens of mobile devices

Yoshitomo Fukatsu; Buntarou Shizuki; Jiro Tanaka

We present a single-handed, eyes-free Japanese kana text input system for the touch screens of mobile devices. We first conducted preliminary experiments to investigate how accurately subjects could point and flick single-handedly without looking. The results showed that users can point at a screen divided into a 2 x 2 grid with 100% accuracy, and can flick on that grid without looking with 96.1% accuracy using our flick recognition algorithm. The system uses two-stroke kana input with three keys to enable accurate eyes-free typing: users first flick to input a consonant, then flick similarly to input a vowel. We conducted a long-term user study to measure text entry speed, error rate under eyes-free conditions, and the readability of transcribed phrases. The mean text entry speed was 51.2 characters per minute (cpm) with a mean error rate of 0.6% of all characters in the 10th session, and 33.9 cpm with a mean error rate of 4.8% in the 11th session, which was conducted under totally eyes-free conditions. Besides cpm and error rate, we also measured the error rate of reading, a novel metric we devised to measure how accurately users can read transcribed phrases; the mean error rate of reading in the 11th session was 5.7% of all phrases.
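
The two primitives the study measured, pointing at a 2 x 2 grid and flicking, can be sketched as follows; the screen size, movement threshold, and direction test are illustrative assumptions, not the paper's own recognition algorithm.

```python
# Sketch of 2 x 2 grid pointing and flick-direction recognition.
import math

W, H = 720, 1280  # assumed screen size in pixels
FLICK_MIN = 60    # assumed minimum flick travel in pixels

def grid_cell(x: float, y: float) -> int:
    """Return the 2 x 2 cell index (0..3) under the touch point."""
    return (1 if x >= W / 2 else 0) + (2 if y >= H / 2 else 0)

def flick_direction(down: tuple[float, float],
                    up: tuple[float, float]) -> str | None:
    """Classify the flick by its dominant axis of movement."""
    dx, dy = up[0] - down[0], up[1] - down[1]
    if math.hypot(dx, dy) < FLICK_MIN:
        return None  # treat short movements as a tap, not a flick
    if abs(dx) > abs(dy):
        return "right" if dx > 0 else "left"
    return "down" if dy > 0 else "up"
```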


international symposium on multimedia | 2008

Browsing 3D Media Using Cylindrical Multi-touch Interface

Buntarou Shizuki; Masaki Naito; Jiro Tanaka

We describe interaction techniques for browsing 3D media using our cylindrical multi-touch interface (CMTI). CMTI uses a cylinder wall as its controlling surface. Because the control area lies in cylindrical polar coordinates, using the depth along the surface lets the user interact in 3D space even though the controlling surface is still 2D. Moreover, the multi-touch nature of the interface lets the user manipulate objects and the viewpoint with both bare hands. Using these interaction techniques, the user can easily examine complex 3D media with CMTI.
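
The coordinate mapping that makes the cylinder wall a 3D controller can be sketched in a few lines: a touch at circumferential position s and height z on a cylinder of radius R corresponds to a point in cylindrical polar coordinates. The radius and axis conventions are illustrative assumptions.

```python
# Sketch of mapping a touch on the cylinder wall to a 3D point.
import math

R = 0.3  # cylinder radius in meters (assumed)

def surface_to_3d(s: float, z: float) -> tuple[float, float, float]:
    """Map arc length s along the circumference and height z to (x, y, z)."""
    theta = s / R  # arc length -> angle around the cylinder axis
    return (R * math.cos(theta), R * math.sin(theta), z)
```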


australasian computer-human interaction conference | 2015

Back-of-Device Interaction based on the Range of Motion of the Index Finger

Hiroyuki Hakoda; Yoshitomo Fukatsu; Buntarou Shizuki; Jiro Tanaka

We present a back-of-device (BoD) interaction technique based on the range of motion of the index finger, intended to improve the usability of a touchscreen mobile device held in one hand. To design this interaction, we conducted two experiments investigating the range of motion of the index finger on the back of mobile devices. On the basis of the results, we designed a prototype system with a hole in the back; users perform our BoD interaction by covering the hole with their index finger. This design gives users tactile feedback from the hole and allows them to naturally control the front and back of the device simultaneously.
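
The abstract does not say how covering the hole is sensed; as a purely hypothetical sketch, assume a light sensor behind the hole whose normalized reading drops when the index finger covers it, detected with a hysteresis threshold to avoid flicker.

```python
# Hypothetical hole-cover detector: a light sensor behind the hole with
# hysteresis thresholds. Both the sensor and the thresholds are assumptions.
COVER_ON, COVER_OFF = 0.2, 0.4  # assumed normalized brightness thresholds

class HoleSensor:
    def __init__(self):
        self.covered = False

    def update(self, brightness: float) -> bool:
        """Feed a 0..1 brightness sample; return True while covered."""
        if not self.covered and brightness < COVER_ON:
            self.covered = True
        elif self.covered and brightness > COVER_OFF:
            self.covered = False
        return self.covered
```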


international conference on human-computer interaction | 2013

Long-Term Study of a Software Keyboard That Places Keys at Positions of Fingers and Their Surroundings

Yuki Kuno; Buntarou Shizuki; Jiro Tanaka

In this paper, we present a software keyboard called Leyboard that enables users to type faster. Leyboard makes typing easier by placing keys at the positions of the fingers and their surroundings; to this end, it automatically adjusts key positions and sizes to the user's hands. This design allows users to type faster and more accurately than with ordinary software keyboards, whose keys cannot be felt. We implemented a prototype and performed a long-term user study, which demonstrated Leyboard's usefulness and revealed its pros and cons.
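
The core idea, keys that follow the user's hand, can be sketched as a calibration step that binds keys to detected finger positions plus a nearest-center lookup for taps; the calibration details are illustrative assumptions.

```python
# Sketch of hand-calibrated key placement and nearest-key tap resolution.
import math

def calibrate(finger_points: list[tuple[float, float]],
              labels: list[str]) -> dict[str, tuple[float, float]]:
    """Bind one key label to each detected finger position."""
    return dict(zip(labels, finger_points))

def key_for_tap(layout: dict[str, tuple[float, float]],
                tap: tuple[float, float]) -> str:
    """Nearest-center lookup: key regions grow with finger spacing."""
    return min(layout, key=lambda k: math.dist(layout[k], tap))

# e.g. layout = calibrate([(100, 400), (180, 380)], ["f", "j"])
#      key_for_tap(layout, (175, 390)) -> "j"
```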

Collaboration


An overview of Buntarou Shizuki's collaborations.

Top Co-Authors

Motoki Miura
Japan Advanced Institute of Science and Technology

Yuki Kubo
University of Tsukuba