Kiyohiko Abe
Kanto Gakuin University
Publication
Featured research published by Kiyohiko Abe.
International Conference on Universal Access in Human-Computer Interaction | 2007
Kiyohiko Abe; Shoichi Ohi; Minoru Ohyama
We have developed an eye-gaze input system for people with severe physical disabilities such as amyotrophic lateral sclerosis. The system utilizes a personal computer and a home video camera to detect eye gaze under natural light. It also compensates for measurement errors caused by head movements; in other words, it can detect eye gaze with a high degree of accuracy. We have also developed a new gaze selection method based on the eye movement history of a user. Using this method, users can rapidly input text with their eye gaze.
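The history-based selection idea can be sketched as a majority test over recent gaze samples (a hypothetical illustration only; the window length, agreement ratio, and the `make_selector` helper are assumptions, not the authors' algorithm):

```python
from collections import deque

def make_selector(history_len=30, agree_ratio=0.8):
    """Hypothetical sketch: an indicator is selected once it dominates
    the last `history_len` gaze samples. Both parameters are
    illustrative assumptions, not values from the paper."""
    history = deque(maxlen=history_len)

    def feed(indicator):
        history.append(indicator)
        if len(history) == history_len:
            samples = list(history)
            top = max(set(samples), key=samples.count)
            if samples.count(top) / history_len >= agree_ratio:
                return top  # selection decided
        return None  # not enough agreement yet

    return feed
```

Keeping a short rolling history, rather than reacting to each gaze sample, suppresses selections caused by brief, unintended eye movements.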
International Conference on Human-Computer Interaction | 2009
Kiyohiko Abe; Shoichi Ohi; Minoru Ohyama
We propose a new eye blink detection method that uses NTSC video cameras. This method utilizes split interlaced images of the eye. These split images are the odd- and even-field images of the NTSC format and are generated from NTSC frames (interlaced images). The proposed method yields a time resolution double that of the NTSC format; that is, the detailed temporal change that occurs during an eye blink can be measured. To verify the accuracy of the proposed method, experiments were performed using a high-speed digital video camera, and the measurements obtained with the NTSC camera were compared with those obtained with the high-speed camera.
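The field-splitting step can be sketched in a few lines (a minimal illustration, not the authors' implementation; `split_fields` is a hypothetical helper):

```python
def split_fields(frame):
    """Split one interlaced frame (a list of scan lines) into its even-
    and odd-field images. Rows 0, 2, 4, ... form one field and rows
    1, 3, 5, ... the other; because the two fields are exposed at
    different instants, each frame yields two temporal samples,
    doubling the effective time resolution."""
    even_field = frame[0::2]  # even-numbered scan lines
    odd_field = frame[1::2]   # odd-numbered scan lines
    return even_field, odd_field

# A 480-line NTSC frame yields two 240-line field images.
frame = [[0] * 640 for _ in range(480)]
even, odd = split_fields(frame)
```

Each field has half the vertical resolution of the frame, which is the trade-off for the doubled temporal sampling rate.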
International Conference on Human-Computer Interaction | 2011
Kiyohiko Abe; Shoichi Ohi; Minoru Ohyama
We have developed an eye-gaze input system for people with severe physical disabilities, such as amyotrophic lateral sclerosis (ALS). The system utilizes a personal computer and a home video camera to detect eye gaze under natural light. Our practical eye-gaze input system classifies the horizontal eye gaze of users with a high degree of accuracy; however, it can detect only three directions of vertical eye gaze. If the detection resolution in the vertical direction is increased, more indicators can be displayed on the screen. To increase the resolution of vertical eye-gaze detection, we apply a limbus tracking method, the conventional method used for horizontal eye-gaze detection. In this paper, we present a new eye-gaze detection method based on image analysis using limbus tracking, and report experimental results for the new method.
International Conference on Human-Computer Interaction | 2015
Hironobu Sato; Kiyohiko Abe; Shoichi Ohi; Minoru Ohyama
Several input systems using eye blinking for communication with the severely disabled have been proposed. Eye blinking is either voluntary or involuntary. Previously, we developed an image analysis method that yields the open-eye area as a measurement value, from which a blinking wave pattern can be extracted using statistical parameters. Based on this method, we also proposed a method for automatically classifying involuntary blinking and one type of voluntary blinking. In this paper, we aim to classify a new type of voluntary blinking in addition to the two previously known types. To classify these three blinking types, we propose a new feature parameter and a new classification method based on the measurement results. Our experimental results indicate a successful classification rate of approximately 95% for a sample of seven subjects when distinguishing involuntary blinking from the two types of voluntary blinking.
International Conference on Engineering Psychology and Cognitive Ergonomics | 2013
Kiyohiko Abe; Hironobu Sato; Shogo Matsuno; Shoichi Ohi; Minoru Ohyama
Human eye blinks include voluntary (conscious) blinks and involuntary (unconscious) blinks. If voluntary blinks can be detected automatically, then input decisions can be made when voluntary blinks occur. Previously, we proposed a novel eye blink detection method using a Hi-Vision video camera. This method utilizes split interlaced images of the eye, which are generated from 1080i Hi-Vision format images. The proposed method yields a time resolution that is twice as high as that of the 1080i Hi-Vision format. We refer to this approach as the frame-splitting method. In this paper, we propose a new method for automatically classifying eye blink types on the basis of specific characteristics using the frame-splitting method.
International Conference on Human-Computer Interaction | 2015
Kiyohiko Abe; Hironobu Sato; Shogo Matsuno; Shoichi Ohi; Minoru Ohyama
We have developed an eye-gaze input system for people with severe physical disabilities. The system utilizes a personal computer and a home video camera to detect eye-gaze under natural light, and users can easily move the mouse cursor to any point on the screen to which they direct their gaze. We constructed this system by first confirming a large difference in the duration of voluntary (conscious) and involuntary (unconscious) blinks through a precursor experiment. Consequently, on the basis of the results obtained, we developed our eye-gaze input interface, which uses the information received from voluntary blinks. More specifically, users can decide on their input by performing voluntary blinks as substitutes for mouse clicks. In this paper, we discuss the eye-gaze and blink information input interface developed and the results of evaluations conducted.
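Because the paper reports a large duration difference between voluntary and involuntary blinks, the click decision reduces to a duration threshold. A minimal sketch, assuming a hypothetical 300 ms cutoff (the value here is illustrative, not a figure from the paper):

```python
def is_voluntary_blink(duration_ms, threshold_ms=300.0):
    """Treat a blink as voluntary (conscious) if it lasts longer than a
    duration threshold. The 300 ms default is an illustrative
    assumption, not a value reported in the paper."""
    return duration_ms > threshold_ms

def handle_blink(duration_ms):
    # Voluntary blinks substitute for mouse clicks at the current
    # gaze point; involuntary blinks are ignored.
    return "click" if is_voluntary_blink(duration_ms) else "ignore"
```

In practice the threshold would be calibrated per user, since blink durations vary between individuals.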
2nd IEEE International Workshop on Usability and Accessibility Focused Requirements Engineering (UsARE) | 2014
Shogo Matsuno; Naoaki Itakura; Minoru Ohyama; Shoichi Ohi; Kiyohiko Abe
This paper presents an analysis of trends in the occurrence of eyeblinks, aimed at devising new input channels for handheld and wearable information devices. Engineering a system that can distinguish between voluntary and spontaneous blinks is difficult because the differences between the two blink types vary noticeably from subject to subject. The study analyzes the eyeblinks of 50 subjects in classification experiments and identifies three types of trends based on the shape feature parameters (duration and amplitude) of eyeblinks. From these findings, the study determines that a system can automatically and effectively classify voluntary and spontaneous eyeblinks.
1st and 2nd International Workshop on Usability- and Accessibility-Focused Requirements Engineering (UsARE 2012 / UsARE 2014) | 2012
Shogo Matsuno; Minoru Ohyama; Kiyohiko Abe; Shoichi Ohi; Naoaki Itakura
In this paper, we propose and evaluate a new conscious-eyeblink differentiation method, built on an algorithm that accounts for individual differences, for use in a prospective eyeblink user interface. Because measuring eyeblinks with sufficient accuracy using a conventional NTSC video camera (30 fps) is difficult, the proposed method uses a frame-splitting technique that improves the time resolution by splitting a single interlaced image into two fields, even and odd. The method uses both eyeblink amplitude and eyeblink duration as classification thresholds, and the algorithm differentiates eyeblinks automatically by accounting for individual differences and selecting, for each user, the feature parameter of greater significance. The results of evaluation experiments conducted with 30 subjects indicate that the proposed method automatically differentiates conscious eyeblinks with an average accuracy of 83.6%. These results indicate that automatic differentiation of conscious eyeblinks using a conventional video camera and our proposed method is feasible.
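The per-user parameter selection described above can be sketched as follows; the separation score (gap between class means over pooled spread) and the function names are illustrative assumptions, not the paper's exact significance measure:

```python
import statistics

def select_feature(voluntary, spontaneous):
    """Choose, per subject, whichever feature ('duration' or 'amplitude')
    separates voluntary from spontaneous blinks more clearly. Each blink
    is a dict of feature values; the scoring criterion here is an
    illustrative assumption."""
    def score(key):
        v = [blink[key] for blink in voluntary]
        s = [blink[key] for blink in spontaneous]
        spread = statistics.pstdev(v + s) or 1.0
        return abs(statistics.mean(v) - statistics.mean(s)) / spread
    return max(("duration", "amplitude"), key=score)

def classify(blink, feature, threshold):
    # A blink exceeding the per-user threshold on the chosen feature
    # is labeled a conscious (voluntary) blink.
    return "voluntary" if blink[feature] > threshold else "spontaneous"
```

Selecting the more discriminative feature per user is what lets a single algorithm cope with the individual differences the paper highlights.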
Archive | 2016
Kiyohiko Abe; Hironobu Sato; Shogo Matsuno; Shoichi Ohi; Minoru Ohyama
Recently, a novel human-machine interface, the eye-gaze input system, has been reported. This system is operated solely through the user's eye movements. Many communication-aid systems based on it have been developed for people with severe physical disabilities, such as amyotrophic lateral sclerosis (ALS). We observed that many such people can perform only very limited head movements. Therefore, we designed an eye-gaze input system that requires no special device to track the user's head movement. The proposed system uses a personal computer (PC) and a home video camera to detect the user's eye gaze through image analysis under natural light. Eye-gaze detection methods that use natural light require only everyday devices, such as home video cameras and PCs; however, the accuracy of such systems is frequently low, and they are therefore capable of classifying only a few indicators. In contrast, our proposed system detects eye gaze with high accuracy and confidence; that is, users can easily move the mouse cursor to their point of gaze. In addition, we developed a classification method for eye blink types using the system's feature parameters. This method allows the detection of voluntary (conscious) blinks, so users can confirm their input by performing voluntary blinks as substitutes for mouse clicks. In this chapter, we present our eye-gaze and blink detection methods. We also discuss the communication-aid systems in which our proposed methods are applied.
IEEE Region 10 Conference | 2014
Kiyohiko Abe; Hironobu Sato; Shoichi Ohi; Minoru Ohyama
Human eye blinks include voluntary (conscious) blinks and involuntary (unconscious) blinks. If voluntary blinks can be detected automatically, a decision can be made on whether to use an eye blink as application input. If the entire eye blink process is captured, the wave pattern of an eye blink can be generated. We have developed a new method for measuring the wave pattern of an eye blink; from these wave patterns, feature parameters for eye blink type classification can be estimated. To develop an eye blink input interface suitable for practical use, the interface system utilizes a standard video camera, such as an NTSC-based model. Specifically, this system requires feature parameters that can be estimated with a standard video camera. In addition, if other application programs run while the eye blink detection program is in use, the video capture sampling rate decreases. In this paper, we present the feature parameters of voluntary and involuntary eye blinks, and discuss the changes that occur when the sampling rate is decreased.