Publications


Featured research published by Nobuyuki Hiruma.


Vision Research | 1999

Characteristics of accommodation toward apparent depth.

Tsunehiro Takeda; Keizo Hashimoto; Nobuyuki Hiruma; Yukio Fukui

This paper examines the characteristics of accommodation evoked by perceived depth sensation and the dynamic relationship between accommodation and vergence, using newly developed optical measurement apparatuses. Five subjects viewed three different two-dimensional stimuli and two different three-dimensional stimuli, namely a real image and a stereoscopic image. For the two-dimensional stimuli, manifest accommodation without any accompanying vergence was found, driven by an apparent depth sensation even though the target distance was kept constant. For the three-dimensional stimuli, larger accommodation and clear vergence were evoked because of binocular parallax and a stronger depth sensation. For the stereoscopic image, a manifest overshoot (accommodation peaked first and then receded considerably) was found while vergence remained constant; the overshoot of accommodation was smaller when subjects watched the real image. These results reveal that depth perception in the brain has a greater effect on accommodation than expected. The relationship between accommodation and vergence toward the stereoscopic image suggests a reason why severe visual fatigue is commonly experienced by viewers of stereoscopic displays. It also paves the way for numerical analysis of the oculomotor triad system.


Applied Optics | 1995

Three-dimensional visual stimulator

Tsunehiro Takeda; Yukio Fukui; Keizo Hashimoto; Nobuyuki Hiruma

We describe a newly developed three-dimensional visual stimulator (TVS) that can independently change the directions, distances, sizes, luminance, and varieties of two sets of targets for both eyes. It consists of liquid-crystal projectors (LCPs) that generate flexible target images, Badal optometers that change target distances without changing the visual angles, and relay-lens systems that change target directions. A special control program was developed for real-time control of the six motors and two LCPs in the TVS, together with a three-dimensional optometer III that simultaneously measures eye movement, accommodation, pupil diameter, and head movement. The TVS ranges are as follows: distance, 0 to -20 D; direction, ±16° horizontally and ±15° vertically; size, 0-2° visual angle; and luminance, 10⁻²-10² cd/m². The target images are refreshed at 60 Hz, and the speeds at which a target can change smoothly (ramp stimuli) are as follows: distance, 5 D/s; direction, 30°/s; size, 10°/s. A simple application demonstrates the performance.


PLOS ONE | 2013

Decoding humor experiences from brain activity of people viewing comedy movies.

Yasuhito Sawahata; Kazuteru Komine; Toshiya Morita; Nobuyuki Hiruma

Humans naturally have a sense of humor. Experiencing humor not only encourages social interaction but also produces positive physiological effects on the human body, such as lowering blood pressure. Recent neuroimaging studies have shown evidence for distinct mental-state changes at work in people experiencing humor; however, the temporal characteristics of these changes remain elusive. In this paper, we objectively measured humor-related mental states from single-trial functional magnetic resonance imaging (fMRI) data obtained while subjects viewed comedy TV programs. The measured fMRI data were labeled on the basis of the lag before or after the viewer's perception of humor (humor onset), determined by the viewer-reported humor experiences during the fMRI scans. We trained multiple binary classifiers, or decoders, to distinguish fMRI data obtained at each lag from data obtained during a neutral state in which subjects were not experiencing humor. In the right dorsolateral prefrontal cortex and the right temporal area, the decoders showed significant classification accuracies even two seconds before the humor onsets. Furthermore, given a time series of fMRI data obtained during movie viewing, we found that the decoders with significant performance were also able to predict upcoming humor events on a volume-by-volume basis. Taking the hemodynamic delay into account, our results suggest that upcoming humor events are encoded in specific brain areas up to about five seconds before the awareness of experiencing humor. Our results provide evidence that a mental state exists for a few seconds before actual humor perception, as if the viewer were anticipating the upcoming humorous events.
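The lag-wise decoding scheme described above can be sketched as follows. This is a minimal illustration on synthetic data, not the authors' pipeline: the ROI size, trial counts, and signal model are all assumptions, and real fMRI decoding operates on preprocessed voxel data with cross-validation tailored to the scan design. One binary classifier is trained per lag to separate humor-labeled volumes from neutral-state volumes:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_voxels = 50              # hypothetical ROI size
n_trials = 80              # trials per condition
lags = [-2, -1, 0, 1, 2]   # seconds relative to humor onset

# Synthetic stand-in for fMRI volumes: neutral-state data, and for each
# lag, "humor" data with a weak voxel pattern added (entirely made up).
neutral = rng.normal(size=(n_trials, n_voxels))
pattern = rng.normal(size=n_voxels)

accuracy_by_lag = {}
for lag in lags:
    strength = 0.3 if lag >= -1 else 0.1   # assumed: pattern grows near onset
    humor = rng.normal(size=(n_trials, n_voxels)) + strength * pattern
    X = np.vstack([humor, neutral])
    y = np.array([1] * n_trials + [0] * n_trials)
    # One binary decoder per lag, scored by 5-fold cross-validation.
    clf = LogisticRegression(max_iter=1000)
    accuracy_by_lag[lag] = cross_val_score(clf, X, y, cv=5).mean()

for lag, acc in accuracy_by_lag.items():
    print(f"lag {lag:+d} s: accuracy {acc:.2f}")
```

Above-chance accuracy at negative lags is what would correspond, in the study, to decoding an upcoming humor event before the viewer reports perceiving it.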


Scientific Reports | 2015

Higher resolution stimulus facilitates depth perception: MT+ plays a significant role in monocular depth perception.

Yoshiaki Tsushima; Kazuteru Komine; Yasuhito Sawahata; Nobuyuki Hiruma

Today, we are facing high-quality virtual worlds of a completely new nature. For example, digital displays now have resolution high enough that they cannot be distinguished from the real world. However, little is known about how such high-quality representation contributes to the sense of realness, especially to depth perception. What is the neural mechanism for processing such fine but virtual representation? Here, we psychophysically and physiologically examined the relationship between stimulus resolution and depth perception, using luminance contrast (shading) as a monocular depth cue. We found that a higher-resolution stimulus facilitates depth perception even when the difference in stimulus resolution is undetectable. This finding contradicts the traditional cognitive hierarchy of visual information processing, in which visual input is processed continuously in a bottom-up cascade of cortical regions that analyze increasingly complex information, such as depth information. In addition, functional magnetic resonance imaging (fMRI) results reveal that the human middle temporal complex (MT+) plays a significant role in monocular depth perception. These results may provide not only new insight into the neural mechanism of depth perception but also a view of how our neural system may develop alongside state-of-the-art technologies.


Conference on Computers and Accessibility | 2017

Sign Language Support System for Viewing Sports Programs

Tsubasa Uchida; Taro Miyazaki; Makiko Azuma; Shuichi Umeda; Naoto Kato; Hideki Sumiyoshi; Yuko Yamanouchi; Nobuyuki Hiruma

To expand services based on Japanese Sign Language (JSL) for deaf and hard of hearing people, we developed a support system for viewing sports programs. The system provides sign language computer graphics (CG) animations and other auxiliary information, such as text, images, and notifications, automatically generated from game metadata. Opinions gathered from viewers showed that the system is effective for understanding the situation when a game is interrupted.


International Conference on Universal Access in Human-Computer Interaction | 2007

An evaluation of accessibility of hierarchical data structures in data broadcasting: using tactile interface for visually-impaired people

Takuya Handa; Tadahiro Sakai; Kinji Matsumura; Yasuaki Kanatsugu; Nobuyuki Hiruma; Takayuki Ito

We have been developing a barrier-free information receiving system for communicating information in digital broadcasting to visually impaired people. In data broadcasting services within digital broadcasting, many items constitute the menu screen. This report briefly explains presentation and access methods that use touch in combination with audio to effectively communicate the menu screen structure. It then discusses the results of evaluation experiments, focused on the hierarchical structure of menu screens in data broadcasting, conducted to obtain guidelines for designing hierarchical presentation structures that visually impaired people can easily access using the tactile interface in combination with audio presentation.


International Conference on Computers Helping People with Special Needs | 2018

Development and Evaluation of System for Automatically Generating Sign-Language CG Animation Using Meteorological Information

Makiko Azuma; Nobuyuki Hiruma; Hideki Sumiyoshi; Tsubasa Uchida; Taro Miyazaki; Shuichi Umeda; Naoto Kato; Yuko Yamanouchi

People who are born with hearing difficulties often use sign language as their mother tongue. The vocabulary and grammar of sign language differ from those of aural languages, so it is important to convey information in sign language. To expand sign-language services, we developed a system that accurately generates Japanese Sign Language computer-graphics (CG) animation from weather data. The system reads weather-forecast data coded in XML format, distributed by the Japan Meteorological Agency, and automatically generates CG animation clips that present it in sign language. We conducted two experiments to evaluate how well the system conveys weather information in sign-language CG animation to deaf participants: a comprehension test with multiple-choice questions on the content, and a subjective evaluation of how easy the sign language was to understand and how natural it was, on a 5-point scale (1: not understandable and unnatural; 5: understandable and natural). The overall percentage of correct answers was 96.5%. In the subjective evaluation, average understandability was 4.43 and average naturalness was 4.13, suggesting that the participants highly appreciated the quality of the sign-language CG animation. In 2017, we also published a website on which anyone can evaluate such animation for the latest weather forecast.
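The template-based generation step can be sketched as follows. This is a minimal illustration under assumed element names and clip identifiers; the actual JMA XML schema and the system's phrase templates are not specified in the abstract. Forecast values are parsed from XML and mapped to an ordered sequence of JSL animation-clip templates:

```python
import xml.etree.ElementTree as ET

# Hypothetical weather-forecast XML (the real JMA schema differs).
forecast_xml = """
<forecast>
  <area>Tokyo</area>
  <weather>rain</weather>
  <tempMax unit="C">18</tempMax>
</forecast>
"""

# Hypothetical mapping from parsed weather values to JSL clip templates.
WEATHER_TEMPLATES = {
    "rain": "jsl_clip_rain",
    "sunny": "jsl_clip_sunny",
    "cloudy": "jsl_clip_cloudy",
}

def generate_clip_sequence(xml_text: str) -> list[str]:
    """Read forecast XML and emit an ordered list of JSL clip IDs."""
    root = ET.fromstring(xml_text)
    area = root.findtext("area")
    weather = root.findtext("weather")
    temp = root.findtext("tempMax")
    clips = [f"jsl_clip_area_{area}"]         # sign the place name first,
    clips.append(WEATHER_TEMPLATES[weather])  # then the weather condition,
    clips.append(f"jsl_clip_temp_{temp}")     # then the high temperature
    return clips

print(generate_clip_sequence(forecast_xml))
# ['jsl_clip_area_Tokyo', 'jsl_clip_rain', 'jsl_clip_temp_18']
```

In the described system, each clip ID would correspond to a prerecorded or motion-captured JSL phrase animation, concatenated into the final CG sequence.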


Conference on Computers and Accessibility | 2018

Evaluation of a Sign Language Support System for Viewing Sports Programs

Tsubasa Uchida; Hideki Sumiyoshi; Taro Miyazaki; Makiko Azuma; Shuichi Umeda; Naoto Kato; Yuko Yamanouchi; Nobuyuki Hiruma

As information support for deaf and hard of hearing people viewing sports programs, we have developed a sign language support system. The system automatically generates Japanese Sign Language (JSL) computer graphics (CG) animation and subtitles from prepared templates of JSL phrases corresponding to fixed-format game data. To verify the system's performance, we carried out demonstration experiments on generating and displaying content using real-time match data from actual games. From the experimental results, we concluded that the automatically generated JSL CG is practical enough for understanding the information. We also found that, among several display methods, the one providing game video and JSL CG on a single tablet screen was most preferred in this small-scale experiment.


International Conference on Computers Helping People with Special Needs | 2016

Experimenting with Tactile Sense and Kinesthetic Sense Assisting System for Blind Education

Junji Onishi; Tadahiro Sakai; Masatsugu Sakajiri; Akihiro Ogata; Takahiro Miura; Takuya Handa; Nobuyuki Hiruma; Toshihiro Shimizu; Tsukasa Ono

In most cases, multimedia-based communication is inaccessible to the visually impaired, so persons lacking eyesight are eager for methods that give them access to technological progress. We consider a key requirement for inclusive education to be providing, in real time, the materials a teacher shows in a lesson. In this study, we present a tactile- and kinesthetic-sense assisting system that provides figures and graphical information without any assistant. This system enables more effective teaching in an inclusive education setting.


Virtual Environments, Human-Computer Interfaces and Measurement Systems | 2006

Objectively Evaluating TV Programs by Using a Viewer's Gaze Direction

Yasuhito Sawahata; Kazuteru Komine; Nobuyuki Hiruma; Takayuki Ito; Seiji Watanabe; Yuji Suzuki; Yumiko Hara; Nobuo Issiki

We conducted experiments on 26 elementary school pupils to measure their gaze movements while they watched a news TV program for children, and we analyzed the relationship between the measurements and the pupils' comprehension of the program's contents. The comprehension data were acquired with a quiz-style examination after the experimental TV program. We evaluated the variances of the gaze directions by calculating the entropy of estimated gaze-direction probability distributions, represented as mixtures of two-dimensional normal distributions. The results indicate that the variance of gaze direction tended to be lower for scenes that produced better comprehension. The tendency was noticeable after the utterance of a keyword related to the answer to the corresponding question.
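The gaze-variance measure described above can be sketched as follows, using made-up gaze samples; the paper's exact fitting and entropy-estimation procedure is not given in the abstract. A two-dimensional Gaussian mixture is fitted to gaze points, and its differential entropy, which has no closed form for a mixture, is estimated by Monte Carlo sampling:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

def gaze_entropy(points: np.ndarray, n_components: int = 2,
                 n_samples: int = 20000) -> float:
    """Fit a 2-D Gaussian mixture to gaze points and estimate its
    differential entropy H = -E[log p(x)] by Monte Carlo sampling."""
    gmm = GaussianMixture(n_components=n_components, random_state=0).fit(points)
    samples, _ = gmm.sample(n_samples)
    return -gmm.score_samples(samples).mean()

# Made-up gaze data: a "focused" scene (one tight cluster) versus a
# "scattered" scene (two widely separated, broader clusters).
focused = rng.normal(loc=[0, 0], scale=0.5, size=(500, 2))
scattered = np.vstack([
    rng.normal(loc=[-5, 0], scale=2.0, size=(250, 2)),
    rng.normal(loc=[5, 0], scale=2.0, size=(250, 2)),
])

h_focused = gaze_entropy(focused)
h_scattered = gaze_entropy(scattered)
print(f"focused scene entropy:   {h_focused:.2f}")
print(f"scattered scene entropy: {h_scattered:.2f}")
```

Under the study's interpretation, a scene where all pupils' gazes converge would yield a low-entropy distribution, and that lower entropy was associated with better comprehension.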

Collaboration

Top co-authors of Nobuyuki Hiruma:

Takayuki Ito (Nagoya Institute of Technology)
Yukio Fukui (Massachusetts Institute of Technology)