Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Shoya Ishimaru is active.

Publication


Featured research published by Shoya Ishimaru.


Augmented Human International Conference | 2014

In the blink of an eye: combining head motion and eye blink frequency for activity recognition with Google Glass

Shoya Ishimaru; Kai Kunze; Koichi Kise; Jens Weppner; Andreas Dengel; Paul Lukowicz; Andreas Bulling

We demonstrate how information about eye blink frequency and head motion patterns derived from Google Glass sensors can be used to distinguish different types of high-level activities. While it is well known that eye blink frequency is correlated with user activity, our aim is to show that (1) eye blink frequency data from an unobtrusive, commercial platform which is not a dedicated eye tracker is good enough to be useful and (2) adding head motion pattern information significantly improves the recognition rates. The method is evaluated on a data set from an experiment with eight participants covering five activity classes (reading, talking, watching TV, mathematical problem solving, and sawing), showing 67% recognition accuracy for eye blinking only and 82% when extended with head motion patterns.
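As an illustration of the pipeline sketched in the abstract, the Python fragment below shows how per-window blink frequency and head-motion statistics could be combined into a feature vector for a standard classifier. This is a minimal sketch under stated assumptions: the paper does not publish code, and the window length, feature set, and random-forest classifier here are illustrative choices, not the authors' method.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def window_features(blink_times, gyro_window, win_seconds):
    """Blink frequency plus head-motion statistics for one time window.

    blink_times : 1-D array of blink timestamps inside the window (s)
    gyro_window : (n_samples, 3) gyroscope readings for the window
    """
    blink_rate = len(blink_times) / win_seconds          # blinks per second
    motion_mean = gyro_window.mean(axis=0)               # average rotation rate
    motion_std = gyro_window.std(axis=0)                 # head-motion variability
    energy = np.mean(np.sum(gyro_window ** 2, axis=1))   # overall motion energy
    return np.hstack([blink_rate, motion_mean, motion_std, energy])

# Toy training data: rows of X are per-window feature vectors, y the activity
# labels (e.g. reading, talking, watching TV, problem solving, sawing).
rng = np.random.default_rng(0)
X = np.array([window_features(rng.uniform(0, 30, rng.integers(2, 15)),
                              rng.normal(size=(600, 3)), 30.0)
              for _ in range(40)])
y = rng.integers(0, 5, size=40)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(clf.predict(X[:3]))
```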


IEEE Pervasive Computing | 2015

Making Regular Eyeglasses Smart

Oliver Amft; Florian Wahl; Shoya Ishimaru; Kai Kunze

The authors discuss the vast application potential of multipurpose smart eyeglasses that integrate into the form factor of traditional spectacles and provide frequently needed sensing and interaction. In combination with software apps running on smart eyeglasses, the authors develop universal assistance systems that remain unobtrusive and thus can support wearers throughout their daily life. They describe a blueprint of the embedded architecture of smart eyeglasses and identify various software app clusters. They discuss findings from using smart eyeglasses prototypes in three case studies: to recognize cognitive workload, quantify reading habits, and monitor light exposure to estimate the circadian phase. This article is part of a special issue on digitally enhanced reality.


Ubiquitous Computing | 2015

Quantifying reading habits: counting how many words you read

Kai Kunze; Katsutoshi Masai; Masahiko Inami; Ömer Sacakli; Marcus Liwicki; Andreas Dengel; Shoya Ishimaru; Koichi Kise

Reading is a very common learning activity; many people read every day, even while standing in the subway or waiting in the doctor's office. However, we know little about our everyday reading habits. Quantifying them enables us to gain more insights about better language skills, more effective learning, and ultimately critical thinking. This paper presents a first contribution towards establishing a reading log that tracks how much reading you are doing at what time. We present an approach capable of estimating the number of words read by a user and evaluate it in a user-independent manner over 3 experiments with 24 users and 5 different devices (e-ink reader, smartphone, tablet, paper, computer screen). We achieve an error rate as low as 5% (using a medical electrooculography system) or 15% (based on eye movements captured by optical eye tracking) over a total of 30 hours of recording. Our method works for both an optical eye tracking and an electrooculography system. We provide first indications that the method also works on smart glasses that will soon be commercially available.
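One plausible reading of the word-count idea, sketched below in Python: short forward jumps in the horizontal gaze position are treated as word-to-word saccades and counted. The thresholds and the 1-D gaze representation are assumptions for illustration; the published approach and its parameters are not reproduced here.

```python
import numpy as np

def estimate_words_read(gaze_x, forward_min=15, forward_max=120):
    """Count forward saccades in a horizontal gaze trace (pixels).

    Jumps between forward_min and forward_max pixels are treated as
    word-to-word saccades; large negative jumps are line-return sweeps
    and everything else is fixation jitter. Thresholds are illustrative.
    """
    dx = np.diff(gaze_x)
    word_saccades = np.sum((dx >= forward_min) & (dx <= forward_max))
    return int(word_saccades)

# Toy trace: five words on one line, a return sweep, then three more words.
trace = np.array([100, 102, 150, 152, 200, 250, 298, 340, 100, 148, 190, 240])
print(estimate_words_read(trace))  # counts the forward word-to-word jumps
```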


Ubiquitous Computing | 2014

Smarter eyewear: using commercial EOG glasses for activity recognition

Shoya Ishimaru; Kai Kunze; Yuji Uema; Koichi Kise; Masahiko Inami; Katsuma Tanaka

Smart eyewear computing is a relatively new subcategory in ubiquitous computing research with enormous potential. In this paper we present a first evaluation of soon-to-be commercially available electrooculography (EOG) glasses (J!NS MEME) for use in activity recognition. We discuss the potential of EOG glasses and other smart eyewear. Afterwards, we show a first signal-level assessment of MEME and present a classification task using the glasses. We are able to distinguish 4 activities for 2 users (typing, reading, eating, and talking) using the sensor data (EOG and acceleration) from the glasses, with an accuracy of 70% for 6-second windows and up to 100% for a 1-minute majority decision. The classification is done user-independently. The results encourage us to further explore the EOG glasses as a platform for more complex, real-life activity recognition systems.
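The 6-second window plus 1-minute majority decision reported above reduces to very little code. The sketch below assumes per-window predictions already exist (the per-window EOG/acceleration classifier itself is not shown) and only illustrates the majority vote, which smooths transient per-window misclassifications:

```python
from collections import Counter

def majority_decision(window_labels):
    """Most frequent label among per-window predictions (ties broken arbitrarily)."""
    return Counter(window_labels).most_common(1)[0][0]

# One minute = ten 6-second windows; pretend these came from a per-window model.
window_preds = ["typing", "typing", "reading", "typing", "typing",
                "eating", "typing", "typing", "typing", "reading"]
print(majority_decision(window_preds))  # -> "typing"
```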


Ubiquitous Computing | 2013

My reading life: towards utilizing eyetracking on unmodified tablets and phones

Kai Kunze; Shoya Ishimaru; Yuzuko Utsumi; Koichi Kise

As reading is an integral part of our knowledge lives, we should know more about our reading activities. This paper introduces a reading application for smartphones and tablets that aims at giving users more quantified information about their reading habits. We present our work towards building an open library for eye tracking on unmodified tablets and smartphones to support some of the application's advanced functionality. We have already implemented several eye tracking algorithms from previous work; unfortunately, none of them seems robust enough for our application case. We give an overview of our challenges and potential solutions.


International Conference on Document Analysis and Recognition | 2013

Reading Activity Recognition Using an Off-the-Shelf EEG -- Detecting Reading Activities and Distinguishing Genres of Documents

Kai Kunze; Yuki Shiga; Shoya Ishimaru; Koichi Kise

The document analysis community spends substantial resources towards computer recognition of any type of text (e.g. characters, handwriting, document structure, etc.). In this paper, we introduce a new paradigm focusing on recognizing the activities and habits of users while they are reading. We describe the differences to the traditional approaches of document analysis and present initial work towards recognizing reading activities. We report our initial findings using a commercial, dry-electrode electroencephalography (EEG) system. We show the feasibility of distinguishing reading tasks for 3 different document genres with one user at near-perfect accuracy; distinguishing reading tasks for 3 different document types, we achieve 97% with user-specific training. We present evidence that reading and non-reading related activities can be separated over 3 users using 6 classes, perfectly separating reading from non-reading. A simple EEG system seems promising for distinguishing the reading of different document genres.
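A common pipeline for this kind of EEG task, offered here only as a hedged sketch, is to compute spectral band powers per window and feed them to a classifier. The sampling rate, frequency bands, and SVM below are assumptions; the paper's actual features and model may differ.

```python
import numpy as np
from sklearn.svm import SVC

def band_powers(signal, fs=128, bands=((4, 8), (8, 13), (13, 30))):
    """Mean spectral power in theta/alpha/beta bands for one EEG window."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2
    return np.array([psd[(freqs >= lo) & (freqs < hi)].mean() for lo, hi in bands])

# Toy data standing in for windowed single-channel EEG recordings.
rng = np.random.default_rng(1)
X = np.array([band_powers(rng.normal(size=256)) for _ in range(30)])
y = rng.integers(0, 3, size=30)  # three document genres

clf = SVC(kernel="rbf").fit(X, y)
print(clf.predict(X[:5]))
```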


Document Analysis Systems | 2016

Semi-automatic Text and Graphics Extraction of Manga Using Eye Tracking Information

Christophe Rigaud; Thanh-Nam Le; Jean-Christophe Burie; Jean-Marc Ogier; Shoya Ishimaru; Motoi Iwata; Koichi Kise

The popularity of storing, distributing and reading comic books electronically has made the task of comics analysis an interesting research problem. Various works have been carried out aiming at understanding their layout structure and graphic content. However, the results are still far from universally applicable, largely due to the huge variety in expression styles and page arrangement, especially in manga (Japanese comics). In this paper, we propose a comic image analysis approach using eye-tracking data recorded during manga reading sessions. As humans are extremely capable of interpreting structured drawing content, and show different reading behaviors based on the nature of the content, their eye movements follow distinguishable patterns over text and graphic regions. Therefore, eye gaze data can add rich information to the understanding of manga content. Experimental results show that the fixations and saccades indeed form consistent patterns among readers and can be used for manga textual and graphical analysis.
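The distinction the abstract draws between gaze patterns over text and over graphics can be caricatured with a simple heuristic: fixations over text tend to be short and line-aligned, while fixations over artwork are longer and more scattered. The thresholds and the rule below are illustrative assumptions, not the paper's method.

```python
import numpy as np

def label_region(fix_durations, fix_points, dur_thresh=0.25, spread_thresh=30.0):
    """Heuristically label a page region 'text' or 'graphics' from its fixations.

    fix_durations : fixation durations in seconds
    fix_points    : (n, 2) fixation coordinates in pixels
    """
    mean_dur = np.mean(fix_durations)
    spread = np.std(fix_points[:, 1])  # vertical scatter; text lines stay aligned
    return "text" if mean_dur < dur_thresh and spread < spread_thresh else "graphics"

# Toy fixations over a speech balloon versus a large artwork panel.
balloon = (np.array([0.18, 0.21, 0.15, 0.20]),
           np.array([[100, 50], [130, 52], [160, 51], [190, 49]]))
panel_art = (np.array([0.40, 0.55, 0.30]),
             np.array([[300, 120], [420, 260], [350, 400]]))
print(label_region(*balloon), label_region(*panel_art))  # text graphics
```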


Archive | 2018

Augmented Learning on Anticipating Textbooks with Eye Tracking

Shoya Ishimaru; Syed Saqib Bukhari; Carina Heisel; Nicolas Großmann; Pascal Klein; Jochen Kuhn; Andreas Dengel

This paper demonstrates how eye tracking technologies can help content providers realize personalized learning. Although curiosity is an important factor for learning, textbooks have been static and identical across diverse learners. The motivation of our work is to develop a digital textbook which displays content dynamically based on students' interests. As interest is a positive predictor of learning, we hypothesize that students' learning and understanding will improve when they are presented with information that is in line with their current cognitive state. As a first step, we investigate students' reading behaviors with an eye tracker and propose attention and comprehension prediction approaches. These methods were evaluated on a dataset comprising eight participants' readings of a learning material in physics. We classified participants' comprehension levels into three classes (novice, intermediate, and expert), finding significant differences in reading behavior and task solving.
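To make the comprehension-prediction step concrete, here is a minimal sketch assuming gaze-derived reading features such as mean fixation duration and regression rate; the feature set, toy data, and logistic-regression classifier are illustrative assumptions, not the paper's pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

LEVELS = ["novice", "intermediate", "expert"]

# Toy data standing in for per-participant reading features.
# Columns: mean fixation duration (s), regressions per 100 words, rereads per page.
rng = np.random.default_rng(2)
X = rng.normal(loc=[0.25, 12.0, 1.5], scale=[0.05, 4.0, 0.7], size=(24, 3))
y = rng.integers(0, 3, size=24)  # comprehension-level labels

clf = LogisticRegression(max_iter=1000).fit(X, y)
print([LEVELS[i] for i in clf.predict(X[:4])])
```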


International Symposium on Wearable Computers | 2015

MEME: eye wear computing to explore human behavior

Kai Kunze; Katsuma Tanaka; Shoya Ishimaru; Yuji Uema; Koichi Kise; Masahiko Inami

In this demonstration, we focus on eyewear that assists people by sensing their physical, social, and mental activities. Detecting and quantifying our behavior can raise awareness of unhealthy practices. We use J!NS MEME prototypes, smart glasses with integrated electrodes to detect eye movements, in application cases ranging from reading detection and ergonomics to talking recognition for social interaction tracking.


Ubiquitous Computing | 2014

Shiny: an activity logging platform for Google Glass

Jens Weppner; Andreas Poxrucker; Paul Lukowicz; Shoya Ishimaru; Kai Kunze; Koichi Kise

We describe an activity logging platform for Google Glass based on our previous work. We introduce new multi-modal methods for quick, non-disturbing interactions for activity logging control and real-time ground truth labeling, consisting of swipe gesture, head gesture, and laser pointer tagging methods. The methods are evaluated in user studies to estimate their effectiveness.
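The labeling control flow described above might look like the following hypothetical sketch, where an incoming interaction event (swipe, head gesture, or laser-pointer tag) closes the running activity segment and opens a newly labeled one. The event names and the ActivityLogger API are inventions for illustration, not the platform's actual interface.

```python
import time

class ActivityLogger:
    """Ground-truth label stream driven by quick wearer interactions."""

    def __init__(self):
        self.segments = []   # finished (label, start, end) tuples
        self.current = None  # (label, start) of the currently open segment

    def on_event(self, gesture, label):
        now = time.time()
        if self.current is not None:          # close the running segment
            open_label, start = self.current
            self.segments.append((open_label, start, now))
            self.current = None
        if gesture in ("swipe", "head_nod", "laser_tag"):
            self.current = (label, now)       # start the newly labeled segment

log = ActivityLogger()
log.on_event("swipe", "reading")
log.on_event("head_nod", "talking")
print(log.segments)  # the closed "reading" segment with its timestamps
```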

Collaboration


Dive into Shoya Ishimaru's collaborations.

Top Co-Authors

Koichi Kise, Osaka Prefecture University
Katsuma Tanaka, Osaka Prefecture University
Yuzuko Utsumi, Osaka Prefecture University
Andreas Dengel, German Research Centre for Artificial Intelligence
Carina Heisel, Kaiserslautern University of Technology
Jochen Kuhn, Kaiserslautern University of Technology