Network


Latest external collaboration at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Minoru Ohyama is active.

Publication


Featured research published by Minoru Ohyama.


International Conference on Universal Access in Human-Computer Interaction | 2007

An eye-gaze input system using information on eye movement history

Kiyohiko Abe; Shoichi Ohi; Minoru Ohyama

We have developed an eye-gaze input system for people with severe physical disabilities such as amyotrophic lateral sclerosis. The system utilizes a personal computer and a home video camera to detect eye gaze under natural light. It also compensates for measurement errors caused by head movements; in other words, it can detect eye gaze with a high degree of accuracy. We have also developed a new gaze selection method based on the eye movement history of the user. Using this method, users can rapidly input text by eye gaze.


Human Factors in Computing Systems | 2002

Estimating communication context through location information and schedule information: a study with home office workers

Yasuto Nakanishi; Noriko Kitaoka; Katsuya Hakozaki; Minoru Ohyama

We have developed a communication support system that estimates a person's situation by using location information from a PHS (Personal Handy-phone System) and schedule information. The system supports communication among dispersed and mobile individuals by using the estimated situation. In this paper, we describe the system and a study with a small group of home office workers.
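
The estimation rules are not specified in the abstract; purely as an illustration, here is a minimal sketch under our own assumed rules and situation labels:

```python
def estimate_situation(current_place: str,
                       scheduled_place: str | None,
                       scheduled_event: str | None) -> str:
    """Estimate a person's situation from PHS location and schedule info.

    Hypothetical rules for illustration only; the actual system's rules
    and situation labels are not given in the abstract.
    """
    if scheduled_event is not None and current_place == scheduled_place:
        return f"busy ({scheduled_event})"  # location matches the schedule
    if current_place == "home office":
        return "working, reachable"
    return "out of office"

print(estimate_situation("meeting room A", "meeting room A", "design review"))
```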


International Conference on Human-Computer Interaction | 2009

Automatic Method for Measuring Eye Blinks Using Split-Interlaced Images

Kiyohiko Abe; Shoichi Ohi; Minoru Ohyama

We propose a new eye blink detection method that uses NTSC video cameras. This method utilizes split-interlaced images of the eye. These split images are the odd- and even-field images of the NTSC format and are generated from NTSC frames (interlaced images). The proposed method yields a time resolution double that of the NTSC format; that is, the detailed temporal change that occurs during an eye blink can be measured. To verify the accuracy of the proposed method, we performed experiments using a high-speed digital video camera and compared the measurements obtained with the NTSC camera against those obtained with the high-speed camera.
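
The field-splitting step at the heart of this method can be pictured with a short sketch. A minimal Python example, assuming the interlaced frame is stored as a NumPy array (the function name and shapes are ours, not the authors'):

```python
import numpy as np

def split_interlaced_frame(frame: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """Split an interlaced frame into its two half-height field images.

    In NTSC, the two fields of a frame are captured 1/60 s apart, so
    treating each field as a separate image doubles the effective
    temporal resolution (60 fields/s instead of 30 frames/s).
    """
    odd_field = frame[0::2, ...]   # scan lines 1, 3, 5, ... (odd field)
    even_field = frame[1::2, ...]  # scan lines 2, 4, 6, ... (even field)
    return odd_field, even_field

# A 480-line NTSC frame yields two 240-line field images.
frame = np.zeros((480, 640), dtype=np.uint8)
odd, even = split_interlaced_frame(frame)
assert odd.shape == even.shape == (240, 640)
```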


International Conference on Human-Computer Interaction | 2011

Eye-gaze detection by image analysis under natural light

Kiyohiko Abe; Shoichi Ohi; Minoru Ohyama

We have developed an eye-gaze input system for people with severe physical disabilities, such as amyotrophic lateral sclerosis (ALS). The system utilizes a personal computer and a home video camera to detect eye gaze under natural light. Our practical eye-gaze input system classifies the horizontal eye gaze of users with a high degree of accuracy; however, it can only detect three directions of vertical eye gaze. If the detection resolution in the vertical direction is increased, more indicators can be displayed on the screen. To increase the resolution of vertical eye-gaze detection, we apply a limbus tracking method, which is also the conventional method used for horizontal eye-gaze detection. In this paper, we present a new eye-gaze detection method based on image analysis using limbus tracking, and we report experimental results for the new method.


International Conference on Human-Computer Interaction | 2015

Automatic Classification Between Involuntary and Two Types of Voluntary Blinks Based on an Image Analysis

Hironobu Sato; Kiyohiko Abe; Shoichi Ohi; Minoru Ohyama

Several input systems using eye blinking for communication with the severely disabled have been proposed. Eye blinking is either voluntary or involuntary. Previously, we developed an image analysis method that yields the open-eye area as a measurement value; from these measurements, we can extract a blinking wave pattern using statistical parameters. Based on this method, we also proposed an automatic classification method for involuntary blinking and one type of voluntary blinking. In this paper, we aim to classify a new type of voluntary blinking in addition to the two previously known types. To classify these three blinking types, we propose a new feature parameter, along with a new classification method based on the measurement results. Our experimental results indicate a successful classification rate of approximately 95% for a sample of seven subjects when classifying involuntary blinking and the two types of voluntary blinking with the new method.
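
The abstract does not name the new feature parameter, but the shape of such a three-way classifier can be sketched. The features and threshold values below are hypothetical placeholders, not the paper's:

```python
def classify_blink(duration_ms: float, amplitude: float) -> str:
    """Classify a blink wave pattern into one involuntary and two
    voluntary types.

    Hypothetical sketch: the actual feature parameter and thresholds
    are not given in the abstract; these numbers are placeholders.
    """
    if duration_ms < 150.0:
        return "involuntary"         # short, reflexive blink
    if amplitude < 0.8:
        return "voluntary (type 1)"  # long blink, partial eye closure
    return "voluntary (type 2)"      # long blink, full eye closure

print(classify_blink(100.0, 0.9))  # -> involuntary
print(classify_blink(300.0, 0.5))  # -> voluntary (type 1)
```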


International Conference on Engineering Psychology and Cognitive Ergonomics | 2013

Automatic Classification of Eye Blink Types Using a Frame-Splitting Method

Kiyohiko Abe; Hironobu Sato; Shogo Matsuno; Shoichi Ohi; Minoru Ohyama

Human eye blinks include voluntary (conscious) blinks and involuntary (unconscious) blinks. If voluntary blinks can be detected automatically, then input decisions can be made when voluntary blinks occur. Previously, we proposed a novel eye blink detection method using a Hi-Vision video camera. This method utilizes split-interlaced images of the eye, which are generated from 1080i Hi-Vision format images, and yields a time resolution twice as high as that of the 1080i Hi-Vision format. We refer to this approach as the frame-splitting method. In this paper, we propose a new method for automatically classifying eye blink types on the basis of characteristics extracted using the frame-splitting method.


International Conference on Human-Computer Interaction | 2015

Input Interface Using Eye-Gaze and Blink Information

Kiyohiko Abe; Hironobu Sato; Shogo Matsuno; Shoichi Ohi; Minoru Ohyama

We have developed an eye-gaze input system for people with severe physical disabilities. The system utilizes a personal computer and a home video camera to detect eye gaze under natural light, and users can easily move the mouse cursor to any point on the screen to which they direct their gaze. In a preliminary experiment, we confirmed a large difference between the durations of voluntary (conscious) and involuntary (unconscious) blinks. On the basis of these results, we developed an eye-gaze input interface that uses the information received from voluntary blinks: users make input decisions by performing voluntary blinks as substitutes for mouse clicks. In this paper, we discuss the developed eye-gaze and blink input interface and the results of the evaluations conducted.
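
The blink-as-click decision rule can be sketched in a few lines. The duration threshold and the click helper below are hypothetical; the paper reports a large duration difference between blink types but this sketch does not use its actual values:

```python
def click(xy: tuple[int, int]) -> None:
    # Stand-in for issuing a mouse click at screen coordinates xy.
    print(f"click at {xy}")

# Hypothetical cut-off separating voluntary from involuntary blink durations.
VOLUNTARY_BLINK_MIN_MS = 300.0

def on_blink(gaze_xy: tuple[int, int], duration_ms: float) -> None:
    """Treat a long (voluntary) blink as a click at the gaze point;
    short (involuntary) blinks are ignored."""
    if duration_ms >= VOLUNTARY_BLINK_MIN_MS:
        click(gaze_xy)

on_blink((512, 384), 420.0)  # voluntary blink -> click
on_blink((512, 384), 120.0)  # involuntary blink -> no action
```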


2014 IEEE 2nd International Workshop on Usability and Accessibility Focused Requirements Engineering (UsARE) | 2014

Analysis of trends in the occurrence of eyeblinks for an eyeblink input interface

Shogo Matsuno; Naoaki Itakura; Minoru Ohyama; Shoichi Ohi; Kiyohiko Abe

This paper presents an analysis of trends in the occurrence of eyeblinks, aimed at devising new input channels for handheld and wearable information devices. Engineering a system that can distinguish between voluntary and spontaneous blinks is difficult. We therefore analyzed the eyeblinks of 50 subjects in experiments to classify blink types. Noticeable differences between voluntary and spontaneous blinks exist for each subject, and we discovered three types of trends based on the shape feature parameters (duration and amplitude) of the eyeblinks. This study determines that a system can automatically and effectively classify voluntary and spontaneous eyeblinks.


International Conference on Human-Computer Interaction | 2016

Advancement of a To-Do Reminder System Focusing on Context of the User

Masatoshi Tanaka; Keisuke Yoshida; Shogo Matsuno; Minoru Ohyama

In recent years, smartphones have rapidly become popular and their performance has improved remarkably. It is therefore possible to estimate user context by using the sensors and functions built into smartphones. We propose a To-Do reminder system that uses the user's indoor position information and moving state. In conventional reminder systems, users have to input the place information (the resolution place), that is, the place where the To-Do item can be solved and where the user receives a reminder. These conventional systems are built on outdoor position information from GPS. In this paper, we propose a new reminder system that makes it unnecessary to input the resolution place. In this newly developed system, we introduce a rule-based system for estimating the resolution place of a To-Do item. The estimation is based on an object word and a verb, which are included in most tasks in a To-Do list. In addition, we propose an automatic judgment method to determine whether a To-Do task has been completed.
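
A rule base mapping an object word and verb to a resolution place can be pictured as a simple lookup. The rules below are invented for illustration; the paper's actual rule base is not given in the abstract:

```python
# Hypothetical (object word, verb) -> resolution place rules.
RESOLUTION_RULES: dict[tuple[str, str], str] = {
    ("report", "print"): "printer room",
    ("book", "return"): "library",
    ("dishes", "wash"): "kitchen",
}

def estimate_resolution_place(object_word: str, verb: str) -> str | None:
    """Estimate where a To-Do item can be solved; None if no rule matches."""
    return RESOLUTION_RULES.get((object_word, verb))

print(estimate_resolution_place("book", "return"))  # -> library
```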


2015 International Conference on Intelligent Informatics and Biomedical Sciences (ICIIBMS) | 2015

Determining mobile device indoor and outdoor location in various environments: Estimation of user context

Katsuyoshi Ozaki; Shogo Matsuno; Keisuke Yoshida; Minoru Ohyama

To obtain a person's location with high accuracy on a mobile device, the device must switch its localization method depending on whether the user is indoors or outdoors. We propose a method to determine indoor and outdoor location using only the sensors on a mobile device. To obtain a decision with high accuracy across many devices, the method must consider individual differences between devices. We confirmed that using a majority decision method reduces the influence of individual device differences. Moreover, for highly accurate decisions in various environments, it is necessary to consider differences between environments, such as large cities surrounded by high-rise buildings versus suburban areas. We measured classification features in different environments, and the accuracy of a classifier constructed using these features was 99.6%.
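
The majority-decision step can be illustrated directly. A minimal sketch, assuming each sensor-based classifier casts an "indoor" or "outdoor" vote (the sensor set named in the comment is our assumption, not the paper's):

```python
from collections import Counter

def indoor_outdoor_majority(votes: list[str]) -> str:
    """Combine per-sensor indoor/outdoor decisions by majority vote,
    dampening the influence of any one device-specific sensor."""
    return Counter(votes).most_common(1)[0][0]

# e.g. a GPS-based vote says outdoor; Wi-Fi and light-sensor votes say indoor.
print(indoor_outdoor_majority(["outdoor", "indoor", "indoor"]))  # -> indoor
```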

Collaboration


Dive into Minoru Ohyama's collaboration.

Top Co-Authors

Kiyohiko Abe
Kanto Gakuin University

Shoichi Ohi
Tokyo Denki University

Shogo Matsuno
University of Electro-Communications

Naoaki Itakura
University of Electro-Communications

Katsuya Hakozaki
University of Electro-Communications