Shogo Matsuno
University of Electro-Communications
Publications
Featured researches published by Shogo Matsuno.
international conference on human interface and management of information | 2015
Shogo Matsuno; Kota Akehi; Naoaki Itakura; Tota Mizuno; Kazuyuki Mito
We have developed a real-time Eye Glance input interface that uses a Web camera to capture eye-movement inputs. In previous studies, an eye-control input interface was developed using an electro-oculograph (EOG) amplified by AC coupling. Unlike conventional eye-gaze input methods, our earlier Eye Gesture input interface used combinations of eye movements and did not restrict head movement; however, it required a start operation before input capture could commence. This led us to propose the Eye Glance input method, which uses pairs of contradirectional eye movements as inputs and therefore needs no start operation. Because the EOG-based approach required electrodes that were uncomfortable to attach, the interface was changed to a camera that records eye movements from facial images, yielding a noncontact, low-restraint interface. The Eye Glance method measures the direction of movement and the time required for the eye to move a specified distance using optical flow, computed with OpenCV. In this study, we analyzed the waveforms obtained from eye movements using a purpose-built detection algorithm, and examined the causes of waveform detection failures.
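The core of the Eye Glance idea is recognizing a pair of opposite-direction eye movements. The following is an illustrative sketch, not the authors' code: it assumes a per-frame sequence of mean horizontal optical-flow values has already been extracted (e.g. with OpenCV), and the thresholds are hypothetical.

```python
# Illustrative sketch: detect an "Eye Glance" as two consecutive eye
# movements in opposite directions within a short time window.
# flow_x holds the mean horizontal flow (pixels/frame) for each frame.

def detect_eye_glance(flow_x, move_thresh=2.0, max_gap=10):
    """Return (start_frame, end_frame) of the first out-and-back
    movement pair, or None if no glance is found."""
    events = []  # (frame index, direction) of detected movements
    for i, v in enumerate(flow_x):
        if abs(v) >= move_thresh:
            d = 1 if v > 0 else -1
            # collapse consecutive frames of the same movement
            if not events or events[-1][1] != d:
                events.append((i, d))
    # a glance = two successive movements in opposite directions
    for (i1, d1), (i2, d2) in zip(events, events[1:]):
        if d1 == -d2 and (i2 - i1) <= max_gap:
            return (i1, i2)
    return None
```

A single movement without the return sweep produces no detection, which is how the method avoids needing an explicit input-start operation.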
international conference on human-computer interaction | 2015
Kiyohiko Abe; Hironobu Sato; Shogo Matsuno; Shoichi Ohi; Minoru Ohyama
We have developed an eye-gaze input system for people with severe physical disabilities. The system uses a personal computer and a home video camera to detect eye gaze under natural light, and users can easily move the mouse cursor to any point on the screen at which they direct their gaze. We constructed this system after first confirming, in a preliminary experiment, a large difference in duration between voluntary (conscious) and involuntary (unconscious) blinks. On the basis of these results, we developed an eye-gaze input interface that uses information from voluntary blinks: users confirm their input by performing voluntary blinks as substitutes for mouse clicks. In this paper, we describe the developed eye-gaze and blink input interface and report the results of its evaluation.
Usability and Accessibility Focused Requirements Engineering (UsARE), 2014 IEEE 2nd International Workshop on | 2014
Shogo Matsuno; Naoaki Itakura; Minoru Ohyama; Shoichi Ohi; Kiyohiko Abe
This paper presents an analysis of trends in the occurrence of eyeblinks, aimed at devising new input channels for handheld and wearable information devices. Engineering a system that can distinguish between voluntary and spontaneous blinks is difficult, because the differences between the two vary noticeably from subject to subject. We therefore analyzed the eyeblinks of 50 subjects in experiments designed to classify blink types, and discovered three types of trends based on the shape feature parameters (duration and amplitude) of eyeblinks. This study indicates that such a system can automatically and effectively classify voluntary and spontaneous eyeblinks.
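Because the discriminative parameter differs between subjects, a classifier can pick, per subject, whichever shape parameter separates the two blink classes more cleanly. This is a hypothetical sketch of that idea; the selection rule and numbers are invented for illustration and are not the paper's algorithm.

```python
import statistics

def choose_parameter(vol, spont):
    """vol/spont: lists of (duration, amplitude) calibration blinks
    for one subject. Return the index (0=duration, 1=amplitude) whose
    class means are farthest apart relative to the overall spread."""
    best, best_gap = 0, -1.0
    for k in (0, 1):
        mean_vol = statistics.mean(b[k] for b in vol)
        mean_spont = statistics.mean(b[k] for b in spont)
        spread = statistics.pstdev([b[k] for b in vol + spont]) or 1.0
        gap = abs(mean_vol - mean_spont) / spread
        if gap > best_gap:
            best, best_gap = k, gap
    return best

def classify_voluntary(blink, param, threshold):
    """True if the blink counts as voluntary on the chosen parameter."""
    return blink[param] >= threshold
```

For a subject whose voluntary blinks are much longer but barely larger than spontaneous ones, duration would be selected; for another subject the amplitude axis might win.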
international conference on human-computer interaction | 2016
Shogo Matsuno; Takahiro Terasaki; Shogo Aizawa; Tota Mizuno; Kazuyuki Mito; Naoaki Itakura
This paper proposes a practical method for measuring skin potential activity (SPA) while driving a car, by installing electrodes on the outer periphery of the steering wheel. Evaluating the psychophysiological state of the driver is important for accident prevention, so we investigated whether the driver's physiological and psychological state can be evaluated by measuring SPA during driving. To this end, we devised a way to measure SPA with electrodes embedded in the steering wheel. The electrodes are made of tin foil and placed along the outer periphery of the wheel, since the positions of the hands while driving are not fixed. The potential difference is increased by adjusting the impedance through the width of the electrodes. An experiment comparing the proposed method with the conventional method was conducted with five healthy adult males, applying a physical stimulus to the forearm of each subject. The proposed method was able to measure SPA, although the measured response was slightly smaller than with the conventional method of affixing electrodes directly to the hands.
international conference on human-computer interaction | 2016
Masatoshi Tanaka; Keisuke Yoshida; Shogo Matsuno; Minoru Ohyama
In recent years, smartphones have rapidly become popular and their performance has improved remarkably, making it possible to estimate user context from their built-in sensors and functions. We propose a To-Do reminder system that uses the user's indoor position and moving state. In conventional reminder systems, users must enter the place where each To-Do item can be resolved (the resolution place), at which the reminder is then delivered; such systems are typically built on outdoor positioning using GPS. In this paper, we propose a new reminder system that makes entering the resolution place unnecessary: a rule-based system estimates the resolution place of a To-Do item from its object word and verb, which are included in most tasks on a To-Do list. In addition, we propose a method for automatically judging whether a To-Do task has been completed.
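The rule-based estimation step can be pictured as a lookup from an (object word, verb) pair to a category of place. This is a toy sketch of that idea only; the rules and vocabulary below are invented for illustration, not taken from the paper.

```python
# Toy rule base mapping (object word, verb) -> resolution-place category.
# All entries are hypothetical examples.
RULES = {
    ("milk", "buy"): "grocery store",
    ("book", "return"): "library",
    ("parcel", "send"): "post office",
}

def estimate_resolution_place(obj, verb):
    """Return the estimated resolution place for a To-Do item,
    or None if no rule matches."""
    return RULES.get((obj.lower(), verb.lower()))
```

An item such as "return the book" would then trigger its reminder when the user's indoor position matches a library, without the user ever entering the place by hand.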
2015 International Conference on Intelligent Informatics and Biomedical Sciences (ICIIBMS) | 2015
Katsuyoshi Ozaki; Shogo Matsuno; Keisuke Yoshida; Minoru Ohyama
To obtain a person's location with high accuracy on a mobile device, the device must switch its localization method depending on whether the user is indoors or outdoors. We propose a method to determine whether a device is indoors or outdoors using only its onboard sensors. To achieve accurate decisions across many devices, the method must account for individual differences between devices; we confirmed that a majority-decision scheme reduces the influence of these differences. Moreover, accurate decisions in various environments must account for environmental differences, such as large cities surrounded by high-rise buildings versus suburban areas. We measured classification features in different environments, and the accuracy of a classifier constructed from these features was 99.6%.
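The majority-decision idea can be shown with a minimal sketch, assuming each sensor-based detector has already produced an "indoor"/"outdoor" vote; the detector names in the comment are hypothetical examples, not the paper's feature set.

```python
from collections import Counter

def majority_decision(votes):
    """votes: list of 'indoor'/'outdoor' labels from individual
    detectors (e.g. GPS signal quality, light sensor, magnetometer).
    Returns the majority label; ties default to 'outdoor'."""
    counts = Counter(votes)
    if counts["indoor"] > counts["outdoor"]:
        return "indoor"
    return "outdoor"
```

Because the final label depends on the vote count rather than any single sensor reading, a device whose light sensor or GPS chip is unusually biased is outvoted by the other detectors, which is how the scheme reduces the influence of individual device differences.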
1st and 2nd International Workshop on Usability- and Accessibility-Focused Requirements Engineering (UsARE 2012 / UsARE 2014) | 2012
Shogo Matsuno; Minoru Ohyama; Kiyohiko Abe; Shoichi Ohi; Naoaki Itakura
In this paper, we propose and evaluate a new method for distinguishing conscious eyeblinks, based on an algorithm that accounts for individual differences, for use in a prospective eyeblink user interface. Measuring eyeblinks with sufficient accuracy using a conventional NTSC video camera (30 fps) is difficult, so the proposed method uses a frame-splitting technique that improves the time resolution by splitting each interlaced image into its two fields (even and odd). The method uses both eyeblink amplitude and eyeblink duration as distinction thresholds, and the algorithm automatically distinguishes eyeblinks by considering individual differences and selecting the more significant parameter for each user. Evaluation experiments with 30 subjects show that the proposed method automatically distinguishes conscious eyeblinks with an average accuracy of 83.6%. These results indicate that automatic distinction of conscious eyeblinks using a conventional video camera together with the proposed method is feasible.
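The frame-splitting step itself is simple: an interlaced NTSC frame contains two half-height fields captured about 1/60 s apart, so separating the even and odd scan lines doubles the effective temporal resolution from 30 frames to 60 fields per second. A minimal sketch, with plain nested lists standing in for image rows:

```python
def split_fields(frame):
    """frame: list of scan lines (rows) of one interlaced image.
    Returns (even_field, odd_field), each at half vertical resolution
    but representing two distinct capture instants."""
    even = frame[0::2]  # lines 0, 2, 4, ...
    odd = frame[1::2]   # lines 1, 3, 5, ...
    return even, odd
```

Blink duration can then be measured in 1/60 s steps rather than 1/30 s steps, which is what makes thresholding on duration viable with an ordinary camera.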
international conference on user modeling adaptation and personalization | 2018
Shogo Matsuno; Reiji Suzumura; Minoru Ohyama
Tour planning is a difficult task for visitors to unfamiliar destination cities, and building an itinerary becomes harder as the number of options that can be incorporated into a trip increases. The authors aim to recommend points of interest (POIs) according to the narrative strategy of a tour guide, in order to realize a better personalized mobile tour-guide system, and to establish a method that supports efficient route scheduling. As a first step, we consider a method of naturally collecting users' context information through interaction between users and their information terminals. We also introduce a POI recommendation application, currently under development, that uses this context information.
innovative mobile and internet services in ubiquitous computing | 2018
Shogo Matsuno; Masatoshi Tanaka; Keisuke Yoshida; Kota Akehi; Naoaki Itakura; Tota Mizuno; Kazuyuki Mito
This paper examines an input method for ocular analysis that combines eye-motion and eye-blink features, enabling an eye-controlled input interface that works without gaze-position measurement. This is achieved by analyzing visible-light images captured without special equipment. We apply two methods: one detects eye motions using optical flow, and the other classifies voluntary eye blinks. Experimental evaluations assessed both detection algorithms simultaneously and examined their applicability to an input interface. The results are consolidated and evaluated, and the paper concludes by considering future directions for this topic.
Artificial Life and Robotics | 2018
Shogo Matsuno; Tota Mizuno; Hirotoshi Asano; Kazuyuki Mito; Naoaki Itakura
In this paper, we propose a novel method for evaluating mental workload (MWL) using the variance of facial temperature, aiming to evaluate autonomic nervous activity from a single facial thermal image. The autonomic nervous system is active under MWL. Previous studies used the temperature difference between the nasal and forehead regions of the face for MWL evaluation and estimation: nasal skin temperature (NST) is considered a reliable indicator of autonomic nervous activity, while forehead temperature is hardly affected by it, so the difference between the two has been shown to be a good indicator alongside other physiological measures such as EEG and heart rate. However, these approaches do not consider temperature changes in other parts of the face. We therefore propose a novel method that uses the variance of temperature over the entire face. By capturing additional facial regions, the proposed method increases evaluation and estimation accuracy with higher sensitivity than conventional methods. Finally, we examined whether further high-precision evaluation and estimation is feasible; our results show that the proposed method is highly accurate compared with previous NST-based studies using NST.
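The basic quantity behind this approach, the spatial variance of temperature over the whole face, can be sketched as follows. This is a minimal illustration assuming a 2-D grid of facial temperature readings from a thermal image; the paper's actual feature extraction (face-region segmentation, temporal comparison) is more involved.

```python
def temperature_variance(grid):
    """Spatial variance of a 2-D grid of facial temperatures (deg C).
    A uniform face gives 0; localized cooling (e.g. of the nose under
    workload) increases the variance."""
    values = [t for row in grid for t in row]
    mean = sum(values) / len(values)
    return sum((t - mean) ** 2 for t in values) / len(values)
```

Comparing this variance between a rest frame and a task frame then gives a whole-face indicator, rather than one restricted to the nasal and forehead regions.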