Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Katsutoshi Masai is active.

Publication


Featured research published by Katsutoshi Masai.


ubiquitous computing | 2015

Quantifying reading habits: counting how many words you read

Kai Kunze; Katsutoshi Masai; Masahiko Inami; Ömer Sacakli; Marcus Liwicki; Andreas Dengel; Shoya Ishimaru; Koichi Kise

Reading is a very common learning activity, and many people do it every day, even while standing in the subway or waiting in the doctor's office. However, we know little about our everyday reading habits; quantifying them can yield insights that support better language skills, more effective learning, and ultimately critical thinking. This paper presents a first contribution toward establishing a reading log that tracks how much reading you are doing and when. We present an approach capable of estimating the number of words a user reads and evaluate it in a user-independent manner over 3 experiments with 24 users and 5 different devices (e-ink reader, smartphone, tablet, paper, computer screen). We achieve an error rate as low as 5% (using a medical electrooculography system) or 15% (based on eye movements captured by optical eye tracking) over a total of 30 hours of recordings. Our method works with both optical eye tracking and an electrooculography system. We provide first indications that the method also works on smart glasses that will soon be commercially available.
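The abstract does not spell out the estimation algorithm, so the following is only a minimal sketch of one plausible approach, assuming that words read are approximated by counting short forward saccades in the horizontal gaze signal and that large backward sweeps mark line changes. All constants and function names here are illustrative assumptions, not the authors' method.

```python
import numpy as np

def estimate_words_read(gaze_x, words_per_forward_saccade=1.1,
                        forward_min=5, forward_max=80, sweep_threshold=-150):
    """Rough word-count estimate from a horizontal gaze trace (in pixels).

    Small forward jumps are counted as reading saccades; large backward
    jumps are counted as line-change sweeps. Both thresholds and the
    words-per-saccade factor are placeholder assumptions.
    """
    dx = np.diff(np.asarray(gaze_x, dtype=float))
    forward_saccades = np.sum((dx > forward_min) & (dx < forward_max))
    line_sweeps = np.sum(dx < sweep_threshold)
    words = forward_saccades * words_per_forward_saccade
    return int(round(words)), int(line_sweeps)

# Example: a synthetic gaze trace covering two lines of text.
trace = list(range(100, 700, 30)) + [120] + list(range(120, 700, 30))
print(estimate_words_read(trace))
```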


intelligent user interfaces | 2016

Facial Expression Recognition in Daily Life by Embedded Photo Reflective Sensors on Smart Eyewear

Katsutoshi Masai; Yuta Sugiura; Masa Ogata; Kai Kunze; Masahiko Inami; Maki Sugimoto

This paper presents a novel smart eyewear that uses embedded photo reflective sensors and machine learning to recognize a wearers facial expressions in daily life. We leverage the skin deformation when wearers change their facial expressions. With small photo reflective sensors, we measure the proximity between the skin surface on a face and the eyewear frame where 17 sensors are integrated. A Support Vector Machine (SVM) algorithm was applied for the sensor information. The sensors can cover various facial muscle movements and can be integrated into everyday glasses. The main contributions of our work are as follows. (1) The eyewear recognizes eight facial expressions (92.8% accuracy for one time use and 78.1% for use on 3 different days). (2) It is designed and implemented considering social acceptability. The device looks like normal eyewear, so users can wear it anytime, anywhere. (3) Initial field trials in daily life were undertaken. Our work is one of the first attempts to recognize and evaluate a variety of facial expressions in the form of an unobtrusive wearable device.
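As a rough illustration of the classification step described above, the sketch below trains an SVM on 17-dimensional photo-reflective sensor vectors labeled with eight expression classes. The synthetic data, preprocessing, and hyperparameters are assumptions for illustration, not the paper's actual pipeline.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
N_SENSORS, N_EXPRESSIONS = 17, 8  # 17 sensors in the frame, 8 expression classes

# Placeholder data: each expression shifts the skin-to-frame proximity pattern.
centers = rng.normal(size=(N_EXPRESSIONS, N_SENSORS))
X = np.vstack([c + 0.3 * rng.normal(size=(200, N_SENSORS)) for c in centers])
y = np.repeat(np.arange(N_EXPRESSIONS), 200)

X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X_train, y_train)
print(f"held-out accuracy on synthetic data: {clf.score(X_test, y_test):.2f}")
```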


international conference on computer graphics and interactive techniques | 2015

AffectiveWear: toward recognizing facial expression

Katsutoshi Masai; Yuta Sugiura; Masa Ogata; Katsuhiro Suzuki; Fumihiko Nakamura; Sho Shimamura; Kai Kunze; Masahiko Inami; Maki Sugimoto

Facial expressions are a powerful way for us to exchange information nonverbally. They can give us insights into how people feel and think. There are a number of works on facial expression detection in computer vision. However, most focus on camera-based systems installed in the environment. With this approach, it is difficult to track the user's face if the user moves around constantly. Moreover, the user's facial expressions can be recognized only in a limited set of places.


human factors in computing systems | 2016

Empathy Glasses

Katsutoshi Masai; Kai Kunze; Maki Sugimoto; Mark Billinghurst

In this paper, we describe Empathy Glasses, a head-worn prototype designed to create an empathic connection between remote collaborators. The main novelty of our system is that it is the first to combine the following technologies, with a focus on remote collaboration: (1) wearable facial expression capture hardware, (2) eye tracking, (3) a head-worn camera, and (4) a see-through head-mounted display. Using the system, a local user can send their information and a view of their environment to a remote helper, who can send back visual cues on the local user's see-through display to help them perform a real-world task. A pilot user study was conducted to explore how effective the Empathy Glasses were at supporting remote collaboration. We describe the implications that can be drawn from this user study.


ieee virtual reality conference | 2017

Recognition and mapping of facial expressions to avatar by embedded photo reflective sensors in head mounted display

Katsuhiro Suzuki; Fumihiko Nakamura; Jiu Otsuka; Katsutoshi Masai; Yuta Itoh; Yuta Sugiura; Maki Sugimoto

We propose a facial expression mapping technology between virtual avatars and Head-Mounted Display (HMD) users. HMDs allow people to enjoy an immersive Virtual Reality (VR) experience, and a virtual avatar can act as the user's representative in the virtual environment. However, synchronization of the virtual avatar's expressions with those of the HMD user is limited. The major problem of wearing an HMD is that a large portion of the user's face is occluded, making facial recognition difficult in an HMD-based virtual environment. To overcome this problem, we propose a facial expression mapping technology using retro-reflective photoelectric sensors. The sensors attached inside the HMD measure the distance between the sensors and the user's face. The distance values of five basic facial expressions (Neutral, Happy, Angry, Surprised, and Sad) are used to train a neural network that estimates the facial expression of a user. We achieved an overall accuracy of 88% in recognizing the facial expressions. Our system can also reproduce facial expression changes in real time on an existing avatar using regression. Consequently, our system enables estimation and reconstruction of facial expressions that correspond to the user's emotional changes.
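A minimal sketch of the classification stage, assuming the per-sensor distance readings inside the HMD are fed to a small feed-forward network with the five expression labels above. The sensor count, network size, and training data are illustrative placeholders; the abstract does not specify the actual architecture.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

EXPRESSIONS = ["Neutral", "Happy", "Angry", "Surprised", "Sad"]
N_SENSORS = 16  # illustrative number of photo-reflective sensors inside the HMD

rng = np.random.default_rng(1)
# Placeholder training data: distance readings for each of the five expressions.
centers = rng.uniform(0.0, 1.0, size=(len(EXPRESSIONS), N_SENSORS))
X = np.vstack([c + 0.05 * rng.normal(size=(300, N_SENSORS)) for c in centers])
y = np.repeat(np.arange(len(EXPRESSIONS)), 300)

model = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=1),
)
model.fit(X, y)

# Classify a new distance reading (here simulated near the "Happy" pattern).
sample = centers[1] + 0.05 * rng.normal(size=N_SENSORS)
print("predicted expression:", EXPRESSIONS[model.predict([sample])[0]])
```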


user interface software and technology | 2016

Facial Expression Mapping inside Head Mounted Display by Embedded Optical Sensors

Katsuhiro Suzuki; Fumihiko Nakamura; Jiu Otsuka; Katsutoshi Masai; Yuta Itoh; Yuta Sugiura; Maki Sugimoto

A Head-Mounted Display (HMD) provides an immersive experience in virtual environments for various purposes, such as games and communication. However, it is difficult to capture facial expressions in an HMD-based virtual environment because the upper half of the user's face is covered by the HMD. In this paper, we propose a facial expression mapping technology between a user and a virtual avatar using embedded optical sensors and machine learning. The distance between each sensor and the surface of the face is measured by optical sensors attached inside the HMD. Our system learns the sensor values of each facial expression with a neural network and creates a classifier to estimate the current facial expression.


international conference on computer graphics and interactive techniques | 2018

3D facial geometry analysis and estimation using embedded optical sensors on smart eyewear

Nao Asano; Katsutoshi Masai; Yuta Sugiura; Maki Sugimoto

Facial performance capture is used in animation production to project a performer's facial expressions onto a computer graphics model. Retro-reflective markers and cameras are widely used for performance capture. To capture expressions, markers must be placed on the performer's face and the intrinsic and extrinsic parameters of the cameras calibrated in advance; the measurable space is therefore limited to the calibrated area. In this study, we propose a system that captures facial performance using smart eyewear with photo-reflective sensors and machine learning techniques. We also present the results of a principal component analysis of facial geometry used to determine a good set of estimation parameters.
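As a hedged sketch of the analysis step, the code below applies PCA to a matrix of flattened 3D facial landmark coordinates to extract the dominant deformation components. The frame count, landmark count, and synthetic deformation modes are placeholders, not the study's capture data.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(2)
N_FRAMES, N_LANDMARKS = 500, 40  # placeholder capture length and landmark count

# Each frame is the flattened (x, y, z) coordinates of the facial landmarks.
neutral = rng.normal(size=3 * N_LANDMARKS)
modes = rng.normal(size=(5, 3 * N_LANDMARKS))          # 5 latent expression modes
weights = rng.normal(size=(N_FRAMES, 5))
frames = neutral + weights @ modes + 0.01 * rng.normal(size=(N_FRAMES, 3 * N_LANDMARKS))

pca = PCA(n_components=10)
scores = pca.fit_transform(frames)
print("explained variance of leading components:",
      np.round(pca.explained_variance_ratio_[:5], 3))
# The leading components could then serve as the low-dimensional targets
# that the eyewear's optical sensors are trained to estimate.
```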


augmented human international conference | 2018

FaceRubbing: Input Technique by Rubbing Face using Optical Sensors on Smart Eyewear for Facial Expression Recognition

Katsutoshi Masai; Yuta Sugiura; Maki Sugimoto

With the emergence of wearable devices, methods that make use of their limited input space are required. This paper presents an input technique for a computer based on rubbing the face, using optical sensors on smart eyewear. Since rubbing gestures occur naturally in daily life, our system enables subtle interaction between the user and a computer. We used the smart eyewear based on the work by [5]. Although that device was developed for facial expression recognition, our method can recognize rubbing gestures independently of facial expression recognition. The embedded optical sensors measure the skin deformation caused by rubbing the face. We detect the gestures using principal component analysis (PCA) and peak detection, and we classify the gesture area with a random forest classifier. The accuracy of detecting rubbing gestures is 97.5%, and the classification accuracy across 10 gesture areas is 88.7% with user-independent training. The system opens up a new interaction method for smart glasses.
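A minimal sketch of the detection-and-classification pipeline as named in the abstract: PCA compresses the multi-sensor stream, peak detection flags rubbing events, and a random forest labels the face area. All data, thresholds, and window choices here are placeholder assumptions.

```python
import numpy as np
from scipy.signal import find_peaks
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(3)
N_SENSORS = 17  # optical sensors embedded in the eyewear frame

# Placeholder sensor stream with a few injected "rubbing" bursts.
stream = 0.02 * rng.normal(size=(2000, N_SENSORS))
for start in (300, 900, 1500):
    stream[start:start + 60] += np.sin(np.linspace(0, 6 * np.pi, 60))[:, None]

# 1) PCA: project the multi-sensor signal onto its first principal component.
pc1 = PCA(n_components=1).fit_transform(stream).ravel()

# 2) Peak detection: peaks in the component magnitude mark candidate gestures.
peaks, _ = find_peaks(np.abs(pc1), height=0.5, distance=50)
print("detected rubbing events near samples:", peaks[:5])

# 3) Random forest: classify which face area was rubbed from the sensor values
#    at the detected peak (training data here is synthetic).
X_train = rng.normal(size=(200, N_SENSORS))
y_train = rng.integers(0, 10, size=200)  # 10 gesture areas, as in the paper
area_clf = RandomForestClassifier(random_state=3).fit(X_train, y_train)
print("predicted area for first event:", area_clf.predict(stream[peaks[:1]])[0])
```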


Proceedings of the 1st International Workshop on Multimedia Content Analysis in Sports - MMSports'18 | 2018

Development of a Virtual Environment for Motion Analysis of Tennis Service Returns

Kei Saito; Katsutoshi Masai; Yuta Sugiura; Toshitaka Kimura; Maki Sugimoto

In sports performance analysis, it is important to understand the differences between experts and novices in order to train novices efficiently. To understand these differences in tennis, we developed a virtual environment for analyzing how experts and novices respond to services. By capturing the actual service motions of an expert, it is possible to reproduce virtualized services in the environment. We conducted experiments on different service types and courses. As a result, we found differences between experts and novices in preparation, leg movement, take-back of returns, and degree of spine twist.


human robot interaction | 2017

RacketAvatar that Expresses Intention of Avatar and User

Katsutoshi Masai; Yuta Sugiura; Michita Imai; Maki Sugimoto

This paper provides a video prototype of RacketAvatar, a racket avatar that expresses intention through motion. The avatar can take on two characters: the racket itself and the user who holds it. This ambiguous character can create a new relationship between a human and a robot avatar. The merit of animating a racket held by the user is that it can communicate through haptic feedback in addition to visual feedback.

Collaboration


Dive into Katsutoshi Masai's collaborations.

Top Co-Authors

Bruce H. Thomas

University of South Australia
