Publication


Featured research published by Mamoru Iwabuchi.


International Journal of Computer Processing of Languages | 2006

A Cross-Cultural Study on the Interpretation of Picture-Based Sentences

Kenryu Nakamura; Mamoru Iwabuchi; Norman Alm

The purpose of this study was to clarify how Japanese and English speakers interpret picture-based sentences. Two studies were conducted, one with adults and one with children. The main task was to interpret eight picture-based sentences in two word-order conditions, SVO and SOV, which reflect the natural word orders of English and Japanese, respectively. The results suggest that word order was a more important cue for native English speakers than for native Japanese speakers in interpreting picture-based sentences. English speakers had more difficulty with picture symbols arranged in SOV order than in SVO order. Japanese speakers had the most difficulty with missing syntactic particles in both word orders; successful interpretation sometimes depended on the particular meaning of the nouns and verbs in the sentences. This may be because English word order is relatively fixed, whereas Japanese word order varies more and relies on syntactic markers to clarify the meaning of a sentence.


Systems, Man and Cybernetics | 2001

A rapid multi-lingual communicator for non-speaking people and others

Norman Alm; Mamoru Iwabuchi; John L. Arnott; Peter N. Andreasen; Kenryu Nakamura

Computer-based systems that are developed to assist people with severe disabilities can often have interesting wider applications. A computer-based communication system has been developed to give non-speaking people multi-lingual capability. It is based on developments in this field in conversational modelling and utterance prediction, making use of pre-stored material. The system could also be used by people whose only communication disadvantage is not being able to speak a foreign language. The system consists of a large store of reusable conversational material duplicated in several languages and a model of conversation which allows the system to link the items together into appropriate sequences. A unique feature of the system is that both the non-speaking person and the communication partner use the communicator in their dialogue. In comparison with a multi-lingual phrase book, the system helped users to have a more natural conversation, and to take more control of the interaction.
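
The abstract describes the architecture only at a high level; the following is a minimal sketch, assuming a hypothetical phrase store keyed by utterance ID and language, of how pre-stored conversational items could be duplicated across languages and linked into sequences by a simple conversation model. The IDs, phrases, and stage links are illustrative, not from the paper.

```python
# Minimal sketch of a multi-lingual phrase store (hypothetical data layout,
# not the authors' implementation). Each utterance ID maps to the same
# conversational item expressed in several languages, so the speaker's
# selection can be rendered in the partner's language.

PHRASES = {
    "greet.hello":    {"en": "Hello, nice to meet you.",           "ja": "こんにちは、はじめまして。"},
    "smalltalk.trip": {"en": "How was your journey?",              "ja": "旅はいかがでしたか。"},
    "close.thanks":   {"en": "Thank you for talking with me.",     "ja": "お話しいただきありがとうございました。"},
}

# Toy "conversation model": each item suggests which items usually follow it.
NEXT_ITEMS = {
    "greet.hello": ["smalltalk.trip"],
    "smalltalk.trip": ["close.thanks"],
    "close.thanks": [],
}

def render(utterance_id: str, language: str) -> str:
    """Return the stored text for one conversational item in one language."""
    return PHRASES[utterance_id][language]

def suggest_next(utterance_id: str) -> list[str]:
    """Suggest the items the conversation model links to the current one."""
    return NEXT_ITEMS[utterance_id]

if __name__ == "__main__":
    current = "greet.hello"
    # The non-speaking user selects in English; the partner sees/hears Japanese.
    print(render(current, "en"), "->", render(current, "ja"))
    print("Suggested next items:", suggest_next(current))
```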


International Conference on Computers for Handicapped Persons | 2014

Visualizing Motion History for Investigating the Voluntary Movement and Cognition of People with Severe and Multiple Disabilities

Mamoru Iwabuchi; Guang Yang; Kimihiko Taniguchi; Syoudai Sano; Takamitsu Aoki; Kenryu Nakamura

Two case studies were conducted with two children with severe physical and cognitive disabilities, and a computer-vision-based technique called Motion History was applied to visualize their movement. By changing the intervention conditions, Motion History helped to identify the children's voluntary movements and the stimuli that effectively attracted their attention. It was concluded that detecting changes in movement is essential for extracting voluntary movement and that Motion History is well suited to that purpose. This offers greater scope for evidence-based interaction with people with severe and multiple disabilities.
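
The paper does not give implementation details; the sketch below shows the standard motion-history-image (MHI) update rule, assuming simple frame differencing with OpenCV and NumPy, as one plausible way to visualize where and how recently movement occurred. It is an assumed illustration of the technique, not the authors' exact pipeline.

```python
# Minimal motion-history sketch (standard MHI update rule, assumed here).
import cv2
import numpy as np

MHI_DURATION = 2.0   # seconds a motion trace stays visible
DIFF_THRESHOLD = 32  # frame-difference threshold for "motion"

cap = cv2.VideoCapture(0)
ok, frame = cap.read()
prev_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
mhi = np.zeros(prev_gray.shape, dtype=np.float32)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    timestamp = cv2.getTickCount() / cv2.getTickFrequency()

    # Pixels that changed since the previous frame count as motion.
    motion_mask = cv2.absdiff(gray, prev_gray) > DIFF_THRESHOLD
    prev_gray = gray

    # MHI update: stamp moving pixels with the current time and
    # clear pixels whose last motion is older than MHI_DURATION.
    mhi[motion_mask] = timestamp
    mhi[(~motion_mask) & (timestamp - mhi > MHI_DURATION)] = 0

    # Scale recent motion to 0-255 for display (brighter = more recent).
    vis = np.clip((mhi - (timestamp - MHI_DURATION)) / MHI_DURATION, 0, 1)
    cv2.imshow("motion history", (vis * 255).astype(np.uint8))
    if cv2.waitKey(1) == 27:  # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
```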


International Conference on Computers Helping People with Special Needs | 2018

Study on Automated Audio Descriptions Overlapping Live Television Commentary

Manon Ichiki; Toshihiro Shimizu; Atsushi Imai; Tohru Takagi; Mamoru Iwabuchi; Kiyoshi Kurihara; Taro Miyazaki; Tadashi Kumano; Hiroyuki Kaneko; Shoei Sato; Nobumasa Seiyama; Yuko Yamanouchi; Hideki Sumiyoshi

We are conducting research on “automated audio description (AAD)”, which automatically generates audio descriptions from real-time competition data so that visually impaired people can enjoy live sports programs. However, AAD overlaps with the live television commentary, making it difficult to hear either clearly. In this paper, we first show that the game situation is conveyed effectively when visually impaired people listen to the AAD alone. We then report the results of experiments on the following points, aimed at solving the overlap issue: (1) the optimum volume level differs between live commentary and AAD; (2) ease of listening differs depending on the characteristics of the text-to-speech synthesizer used for AAD; (3) playing back AAD through a speaker placed separately from the TV speaker makes both voices easier to listen to. These results suggest that, depending on the presentation method, AAD can be made easy to listen to even when it overlaps the live television commentary.
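
The paper reports listening experiments rather than a mixing algorithm. As a rough illustration of point (1) only, the sketch below mixes a hypothetical AAD track with a commentary track at a relative gain offset, assuming both are mono floating-point NumPy arrays at the same sample rate; the -6 dB value is made up, since the paper determined preferred levels experimentally.

```python
# Rough illustration only: mix AAD speech with live commentary at a relative
# gain offset (hypothetical value; not a level prescribed by the paper).
import numpy as np

def mix_with_offset(commentary: np.ndarray, aad: np.ndarray,
                    aad_gain_db: float = -6.0) -> np.ndarray:
    """Mix two mono float signals of the same sample rate, attenuating AAD."""
    n = min(len(commentary), len(aad))
    gain = 10.0 ** (aad_gain_db / 20.0)
    mixed = commentary[:n] + gain * aad[:n]
    return np.clip(mixed, -1.0, 1.0)  # keep the result within full scale

# Synthetic tones standing in for real audio, just to exercise the function.
sr = 16000
t = np.linspace(0, 1.0, sr, endpoint=False)
commentary = 0.5 * np.sin(2 * np.pi * 220 * t)
aad = 0.5 * np.sin(2 * np.pi * 440 * t)
print(mix_with_offset(commentary, aad).shape)
```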


EAI Endorsed Transactions on Ambient Systems | 2017

Feasibility Study of Smartphone-Based Tear Volume Measurement System

Yoshiro Okazaki; Tatsuki Takenaga; Taku Miyake; Mamoru Iwabuchi; Toshiyuki Okubo; Norihiko Yokoi

Evaluation of tear volume is important for diagnosing dry eye disease. In clinical settings, dedicated devices such as slit-lamp microscopy or Schirmer’s test strips have been used by ophthalmologists to quantify tear volume. However, these devices are accessible only in medical offices and therefore have limited availability for the public. Tear volume changes with environmental, physical, and psychological conditions, so being able to measure it regardless of location, time, or circumstances would be beneficial. In this study, a tear volume measurement system based on the principle of meniscometry was developed and implemented on a smartphone, and its feasibility was evaluated. The tear meniscus radii of 22 human subjects had a mean of 0.31 mm (SD 0.06), which was within the range reported in a previous study. The results suggest the feasibility of a smartphone-based tear volume measurement system, built on the principle of meniscometry, as an IoT sensor.
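
The abstract states only that the system follows the principle of meniscometry; the worked sketch below assumes the general mirror-magnification relation, treating the tear meniscus surface as a small curved mirror so that its radius of curvature can be estimated from the size of a reflected target of known height at a known distance, |R| ≈ 2·d·(h'/h). This is an assumed illustration of the principle, not the authors' algorithm, and the numbers are invented.

```python
# Hedged sketch: estimate tear meniscus radius of curvature from the size of a
# reflected target image, using the mirror-magnification approximation
# |R| ~= 2 * d * (h_image / h_target) for a target far beyond the focal length.
# This is an assumed illustration of "the principle of meniscometry"; the
# values below are made up and are not measurements from the paper.

def meniscus_radius_mm(target_height_mm: float,
                       image_height_mm: float,
                       target_distance_mm: float) -> float:
    """Radius of curvature from mirror magnification: |R| = 2 * d * (h'/h)."""
    magnification = image_height_mm / target_height_mm
    return 2.0 * target_distance_mm * magnification

# Illustrative values only: a 40 mm target held 100 mm from the eye whose
# reflection in the meniscus measures 0.06 mm tall.
r = meniscus_radius_mm(target_height_mm=40.0,
                       image_height_mm=0.06,
                       target_distance_mm=100.0)
print(f"estimated tear meniscus radius: {r:.2f} mm")  # ~0.30 mm
```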


International Conference on IoT Technologies for HealthCare | 2016

Design and Implementation of Smartphone-Based Tear Volume Measurement System

Yoshiro Okazaki; Tatsuki Takenaga; Taku Miyake; Mamoru Iwabuchi; Toshiyuki Okubo; Norihiko Yokoi

Evaluation of tear volume is important for diagnosing dry eye disease. In clinical settings, dedicated devices such as slit-lamp microscopy or a meniscometer have been used by ophthalmologists to quantify tear volume. However, these devices are accessible only in medical offices and therefore have limited availability for the public. Tear volume changes with environmental, physical, and psychological conditions, so being able to measure it regardless of location, time, or circumstances would be beneficial, not only for healthcare professionals but also for patients. If tear volume could be measured with a smartphone, the smartphone could be used as an IoT sensor for healthcare applications. In this study, a tear volume measurement system was designed and implemented on a smartphone. Further applications of the smartphone as an IoT device are also discussed.


Conference on Computers and Accessibility | 2014

Motion history to improve communication and switch access for people with severe and multiple disabilities

Guang Yang; Mamoru Iwabuchi; Rumi Hirabayashi; Kenryu Nakamura; Kimihiko Taniguchi; Syoudai Sano; Takamitsu Aoki

In this study, a computer-vision-based technique called Motion History, which visualizes the history of a user's movement, was applied to support communication and switch access for people with severe and multiple disabilities. Seven non-speaking children with severe physical and intellectual disabilities participated in the study, and Motion History helped to investigate their voluntary movement and cognition. In addition, based on feedback from the study, a new system was developed that uses the built-in camera of a tablet PC to observe Motion History, making the system easier to use and more mobile. One feature of the system converts recognized body movement into a switch control, in which a good switch fitting is automatically established from the motion history.
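
The switch-fitting algorithm is not published in the abstract; the sketch below assumes one simple approach, thresholding the amount of recent motion inside a region of a motion history image to produce an on/off switch signal, and placing that region over wherever the most recent motion occurred. Both functions are hypothetical illustrations, not the authors' implementation.

```python
# Hypothetical sketch: turn motion inside a region of interest into a binary
# switch signal. Assumes a motion-history image `mhi` (per-pixel timestamps,
# as in the MHI sketch above) is already being updated every frame.
import numpy as np

def switch_state(mhi: np.ndarray, roi: tuple, now: float,
                 window_s: float = 0.5, on_fraction: float = 0.05) -> bool:
    """ON when enough ROI pixels moved within the last window_s seconds."""
    x, y, w, h = roi
    patch = mhi[y:y + h, x:x + w]
    recently_moved = (now - patch) < window_s
    return bool(recently_moved.mean() >= on_fraction)

def fit_roi(mhi: np.ndarray, now: float, window_s: float = 2.0,
            size: int = 80) -> tuple:
    """Crude 'switch fitting': centre a fixed-size ROI on the recent motion."""
    recent = (now - mhi) < window_s
    ys, xs = np.nonzero(recent)
    if len(xs) == 0:
        return (0, 0, size, size)
    cx, cy = int(xs.mean()), int(ys.mean())
    return (max(cx - size // 2, 0), max(cy - size // 2, 0), size, size)
```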


IEEE Symposium on Computational Intelligence in Rehabilitation and Assistive Technologies (CIRAT) | 2013

Real-time upper-body detection and orientation estimation via depth cues for assistive technology

Guang Yang; Mamoru Iwabuchi; Kenryu Nakamura

Automatic and efficient human pose estimation has great practical value in video surveillance. In this paper, we explore how a consumer depth sensor can assist with more precise upper-body detection and pose estimation in the field of assistive technology for people with disabilities, and we present a novel real-time upper-body pose (orientation) estimation method. First, Haar-cascade-based upper-body detection is performed, and the depth information in a fixed subregion is extracted as the input feature vector. Then, a support vector machine (SVM) and a naive Bayes classifier are compared for estimating the upper-body orientation. Further, to acquire continuous estimates over long periods for behavioral analysis, we also adopt support vector regression (SVR) to train a regression model. The experimental results show the effectiveness of the proposed method.
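
As a rough sketch of the described pipeline, the code below uses OpenCV's stock upper-body Haar cascade and off-the-shelf scikit-learn models in place of the authors' trained detectors; the depth feature is simplified to a downsampled, flattened subregion, and the training data is random, so this only shows the structure of the approach, not the paper's results.

```python
# Sketch of the detection + orientation-estimation pipeline with stock
# components; random toy data replaces the paper's depth recordings.
import cv2
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC, SVR

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_upperbody.xml")

def detect_upper_body(gray: np.ndarray):
    """Return the first detected upper-body box (x, y, w, h), or None."""
    boxes = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=3)
    return boxes[0] if len(boxes) else None

def depth_feature(depth_map: np.ndarray, box, grid: int = 8) -> np.ndarray:
    """Downsample the depth values inside the box into a fixed-length vector."""
    x, y, w, h = box
    roi = depth_map[y:y + h, x:x + w].astype(np.float32)
    return cv2.resize(roi, (grid, grid)).flatten()

# Toy training data: random depth features with discrete orientation labels
# (e.g. left / front / right) and continuous angles for the regression model.
rng = np.random.default_rng(0)
X = rng.normal(size=(90, 64))
y_class = rng.integers(0, 3, size=90)     # discrete orientation classes
y_angle = rng.uniform(-90, 90, size=90)   # continuous orientation in degrees

svm = SVC().fit(X, y_class)
nb = GaussianNB().fit(X, y_class)
svr = SVR().fit(X, y_angle)

sample = rng.normal(size=(1, 64))
print("SVM class:", svm.predict(sample)[0],
      "| NB class:", nb.predict(sample)[0],
      "| SVR angle:", round(float(svr.predict(sample)[0]), 1))
```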


Wireless, Mobile and Ubiquitous Technologies in Education | 2012

Mainstream but Specialized: Mobile Technology for Cognitive Support in Education

Mamoru Iwabuchi; Maiko Takahashi; Kenryu Nakamura; E.A. Draffan

In this study, two software development projects were introduced to support timekeeping and reading for students with cognitive disabilities using mainstream mobile technology. In the first project, two versions of a countdown timer were developed that show the remaining time graphically, as an area. A unique feature was added to the prototypes to prevent the user from unintentionally interrupting a running timer. The ebook reader developed in the second project let students point to a phrase and have it read aloud with a highlight box around the characters. It was important for the students to have a digital replica of the printed textbook being used at the same time by others in the class. The study highlighted a key consideration for assistive technology development for people with cognitive disabilities: the essential balance between technical features and human factors, such as the system's ease of use, look and feel, and cognitive fit, when applying mainstream technology to specialized support. The study also showed that solutions to time and reading difficulties should be considered in relation to the available technology and the users' surroundings.
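
The abstract describes the timer's behaviour rather than its code; below is a minimal console-based sketch, assumed rather than taken from the project, of showing remaining time as a shrinking filled area and guarding the stop action behind an explicit confirmation so the timer is not interrupted by accident.

```python
# Minimal sketch (assumed behaviour, not the project's code): show remaining
# time as a shrinking bar whose filled area is proportional to time left.
import time

def run_timer(total_s: float, width: int = 40) -> None:
    start = time.monotonic()
    while True:
        remaining = total_s - (time.monotonic() - start)
        if remaining <= 0:
            print("\rTime is up!" + " " * width)
            return
        filled = int(width * remaining / total_s)  # filled area shrinks with time left
        print("\r[" + "#" * filled + " " * (width - filled) + "]", end="", flush=True)
        time.sleep(0.2)

def confirm_stop() -> bool:
    """Guard against accidental interruption: a stop-button handler would call
    this and cancel the timer only after an explicit confirmation."""
    return input("Type STOP to cancel the running timer: ").strip() == "STOP"

if __name__ == "__main__":
    run_timer(10.0)
```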


Lecture Notes in Computer Science | 2003

A multi-lingual augmentative communication system

Norman Alm; Mamoru Iwabuchi; Peter N. Andreasen; Kenryu Nakamura

Collaboration


Top co-authors of Mamoru Iwabuchi include Norihiko Yokoi (Kyoto Prefectural University of Medicine).