Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Tomoyuki Nishioka is active.

Publication


Featured research published by Tomoyuki Nishioka.


PLOS ONE | 2011

Eye Gaze during Observation of Static Faces in Deaf People

Katsumi Watanabe; Tetsuya Matsuda; Tomoyuki Nishioka; Miki Namatame

Knowing where people look when viewing faces provides an objective measure of the information entering the visual system, as well as of the cognitive strategies involved in facial perception. In the present study, we recorded the eye movements of 20 congenitally deaf (10 male and 10 female) and 23 normal-hearing (11 male and 12 female) Japanese participants while they evaluated the emotional valence of static face stimuli. While no difference was found in the evaluation scores, the eye movements during facial observation differed between the participant groups. The deaf group looked at the eyes more frequently and for longer durations than the nose, whereas the hearing group focused on the nose (or the central region of the face) more than the eyes. These results suggest that the strategy employed to extract visual information when viewing static faces may differ between deaf and hearing people.


International Conference on Computers for Handicapped Persons | 2004

A Preparatory Study for Designing Web-Based Educational Materials for the Hearing-Impaired

Miki Namatame; Muneo Kitajima; Tomoyuki Nishioka; Fumihiko Fukamauchi

Our aim is to design web-based interactive educational materials for the hearing-impaired based on their interaction style. We describe the results of an eye-tracking experiment that demonstrates behavioral differences between hearing-impaired and hearing persons when using web-based materials. We found that the hearing-impaired exhibited a smaller strategic scan pattern and shallower, more intuitive text processing. These findings suggest that the current design of web-based educational materials, which considers only textual or image substitutes for auditory information, is insufficient for the hearing-impaired.


Conference on Human Interface | 2007

The activation mechanism for dynamically generated procedures in Hyperlogo

Nobuhito Yamamoto; Tomoyuki Nishioka

Higher-order programming is one of the most attractive and powerful ways of expressing algorithmic/procedural solution methods in many application fields, since it represents ideas naturally in program form. The authors have studied and implemented support for the higher-order programming paradigm in the Hyperlogo language system. The supporting functions are: creating procedure closures, handling procedure closures in the same way as numerical values and character strings, and activating generated procedure closures. This paper focuses mainly on the third of these functions. Two ways of activating closures in Hyperlogo are presented: (1) attach a name to a generated closure and call it by that name as the need arises, or (2) use an assistant procedure that activates a target procedure. The procedure activate is introduced to provide this function.
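The paper's Hyperlogo code is not reproduced here; as a minimal sketch of the idea only, the Python below illustrates the two activation styles described above, with make_adder standing in for a hypothetical closure generator and activate for the assistant procedure.

# Illustrative sketch only (Python, not Hyperlogo): the two ways of
# activating a dynamically generated procedure closure described above.

def make_adder(n):
    """Generate a procedure closure that adds n to its argument."""
    def adder(x):
        return x + n
    return adder

# (1) Attach a name to the generated closure and call it by that name as needed.
add5 = make_adder(5)
print(add5(10))                      # -> 15

# (2) Use an assistant procedure that activates a target closure directly.
def activate(proc, *args):
    """Assistant procedure: apply the given closure to the supplied arguments."""
    return proc(*args)

print(activate(make_adder(7), 10))   # -> 17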


International Conference on Computers Helping People with Special Needs | 2002

The See-through Head Mount Display as the Information Offering Device for the Hearing Impaired Students

Tomoyuki Nishioka

Augmented reality (or mixed reality) is a natural extension of virtual reality: information acquired by external sensors is displayed overlaid on the physical world, which extends human cognition of the physical world.


International Conference on Computers for Handicapped Persons | 2004

Head Mounted Display as an Information Guarantee Device for Hearing Impaired Students

Tomoyuki Nishioka

Augmented reality (AR), which extends human cognition, is also useful for providing equal accessibility for people with disabilities. Recently, head-mounted displays, the key devices of AR technology, have become consumer products that are easy to obtain. In this paper, the feasibility of using these products to guarantee information access for hearing-impaired students in college lectures is examined.


Journal of Advanced Computational Intelligence and Intelligent Informatics | 2017

Development of Web-Based Remote Speech-to-Text Interpretation System captiOnline

Daisuke Wakatsuki; Nobuko Kato; Takeaki Shionome; Sumihiro Kawano; Tomoyuki Nishioka; Ichiro Naito

1. Introduction. The ISeee project [1] advocates the concept of "open information accessibility in which anyone can be of help to someone," applying crowdsourcing techniques to information accessibility technology and aiming to build an environment in which people, with and without disabilities alike, support one another by making use of what each person does best. The authors envision the competition venues of the 2020 Tokyo Olympic and Paralympic Games as a setting for applying this technology. Where people with diverse characteristics gather, it becomes possible to create a place that those who need support and those who can provide it can enjoy together; that is, by having an unspecified number of people serve as supporters (workers) in charge of information accessibility, a place is created in which people help one another and enjoy themselves. Indeed, methods in which multiple non-experts produce transcriptions via crowdsourcing have already been proposed [2]. For example, the studies by 張 et al. [3] and 白石 et al. [4] demonstrate the feasibility of a crowdsourced interpretation system in which the information source (sign language) is divided into task units. Such techniques are expected to be applied not only to translating Japanese into sign language and sign language into Japanese, but also to vocalizing written Japanese and to multilingual interpretation. In a preceding study [5], through a literature survey and a questionnaire survey of sports organizations for athletes with disabilities, the crowdsourcing of live sports commentary …


International Conference on Universal Access in Human-Computer Interaction | 2013

Handling structural models composed of objects and their mutual relations in the spatial cognition experiments

Nobuhito Yamamoto; Shoko Shiroma; Tomoyuki Nishioka

Using graphical representations of problem spaces is one of the basic approaches in spatial cognition experiments with hard-of-hearing students. Virtual items and a virtual space are considered practical both for building up questions and for assembling answers between experimenters and subjects. Objects and their mutual relations are the basic components of the structural models that have to be managed when forming problems. Object-oriented processing is a significant and useful framework in modern programming languages; theoretically, an object is a functional abstract closure, but the idea of a closure can easily be extended to practical items. In this article, object-oriented representation and its application to constraint-relation problems for interactive experiments are discussed.
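The article does not spell out its data structures; as an assumption-labelled Python sketch, the snippet below models a structural model as items plus binary relations and checks whether a candidate spatial arrangement satisfies a left-of constraint. The class and relation names are illustrative only, not the authors' design.

# Hypothetical sketch: a structural model as objects plus mutual relations,
# with a check that a candidate spatial arrangement satisfies the relations.

from dataclasses import dataclass

@dataclass(frozen=True)
class Item:
    name: str

@dataclass(frozen=True)
class Relation:
    kind: str        # e.g. "left_of"
    subject: Item
    object: Item

def satisfies(arrangement, relations):
    """arrangement maps each Item to an x-position; check every left_of relation."""
    for r in relations:
        if r.kind == "left_of" and not arrangement[r.subject] < arrangement[r.object]:
            return False
    return True

ball, box = Item("ball"), Item("box")
model = [Relation("left_of", ball, box)]

print(satisfies({ball: 0, box: 1}, model))   # True: ball is left of box
print(satisfies({ball: 2, box: 1}, model))   # False: constraint violated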


International Conference on Computers Helping People with Special Needs | 2012

Meeting support system for the person with hearing impairment using tablet devices and speech recognition

Makoto Kobayashi; Hiroki Minagawa; Tomoyuki Nishioka; Shigeki Miyoshi

In this paper, we propose a support system for a hearing-impaired person attending a small meeting in which the other members are hearing people, a situation in which following the discussion is difficult. To solve this problem, the system is designed to show what members are saying in real time. The system consists of tablet devices and a PC acting as a server. The PC runs speech recognition software and distributes the recognized results to the tablets. The main feature of the system is its method for correcting the initial speech recognition results, which cannot be assumed to be perfectly recognized: meeting members themselves, rather than support staff, correct the results by handwriting on the tablet devices. Every meeting member can correct every recognized result at any time. In this way, the system has the potential to serve as a low-cost hearing aid because it does not require extra support staff.
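The abstract gives the architecture only at a high level; the following is a hypothetical Python sketch (standard-library asyncio, with an assumed message format and port) of the distribution step, in which a server broadcasts recognized text segments to connected tablet clients and relays the correction messages they send back. It is not the authors' implementation.

# Hypothetical sketch of the distribution step only: a server broadcasts
# recognized text segments to all connected tablet clients and relays the
# correction messages they send back. Message format and port are assumptions.

import asyncio
import json

clients = set()   # one open StreamWriter per connected tablet

async def broadcast(message: dict):
    data = (json.dumps(message) + "\n").encode()
    for writer in list(clients):
        try:
            writer.write(data)
            await writer.drain()
        except ConnectionError:
            clients.discard(writer)

async def handle_client(reader, writer):
    clients.add(writer)
    try:
        async for line in reader:                 # a tablet sends a correction as JSON
            correction = json.loads(line)
            await broadcast({"type": "correction", **correction})
    finally:
        clients.discard(writer)

async def feed_dummy_segments():
    # In the real system these would come from the speech recognition software.
    i = 0
    while True:
        await broadcast({"type": "segment", "id": i, "text": f"recognized text {i}"})
        i += 1
        await asyncio.sleep(3)

async def main():
    server = await asyncio.start_server(handle_client, "0.0.0.0", 8765)
    async with server:
        await asyncio.gather(server.serve_forever(), feed_dummy_segments())

asyncio.run(main())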


Journal of the Flow Visualization Society of Japan | 2004

Visualization Support of Remote Sign Language Interpreting System on Lecture

Ichiro Naito; Nobuko Kato; Hiroki Minagawa; Tomoyuki Nishioka; Hiroshi Murakami; Sumihiro Kawano; Mayumi Shirasawa; Shigeki Miyoshi; Yasushi Ishihara

Recently, it has become possible for persons with hearing impairments in remote locations to communicate via sign language using video phones and videoconferencing systems. Video interpreting makes use of videoconferencing technology to allow remote sign language interpreting services to take place without an interpreter on site. In this paper, we describe our experimental system for remote sign language interpreting services for lectures, and we discuss how, in such a service, a sign language interpreter can provide more effective interpretation when given visualization support during the lecture.


TCT Education of Disabilities | 2002

Computer education and assistive equipment for hearing impaired people

Hiroshi Murakami; Hiroki Minagawa; Tomoyuki Nishioka; Yutaka Shimizu

Collaboration


Dive into Tomoyuki Nishioka's collaborations.

Top Co-Authors

Nobuko Kato

National University Corporation Tsukuba University of Technology

Shigeki Miyoshi

National University Corporation Tsukuba University of Technology

Sumihiro Kawano

National University Corporation Tsukuba University of Technology

Yasushi Ishihara

National University Corporation Tsukuba University of Technology

Mayumi Shirasawa

National University Corporation Tsukuba University of Technology

Muneo Kitajima

National Institute of Advanced Industrial Science and Technology
