Naoyuki Matsuzaki
Toyohashi University of Technology
Publication
Featured research published by Naoyuki Matsuzaki.
Perception | 2008
Naoyuki Matsuzaki; Takao Sato
We examined the contribution of motion information to the perception of facial expressions using point-light displays of faces. First, we established the minimum number of feature points necessary for perceiving facial expression from a single image. Next, we examined the effects of motion with a stimulus using an insufficient number of dots, under two conditions. In the motion condition, apparent motion was induced by a preceding neutral face image followed by an emotional face image. In the repetition condition, the same emotional face image was presented twice. Performance was higher in the motion condition than in the repetition condition. This advantage was reduced by inserting a blank white field between the neutral and emotional faces, confirming that the improvement was due to motion.
virtual reality software and technology | 2006
Hiroaki Shigemasu; Toshiya Morita; Naoyuki Matsuzaki; Takao Sato; Masamitsu Harasawa; Kiyoharu Aizawa
The viewing environment is an important factor in understanding the mechanism of visually induced motion sickness (VIMS). In Experiment 1, we investigated whether the symptoms of VIMS changed depending on viewing angle and physical display size. Our results showed that a larger viewing angle made the sickness symptoms more severe, and that nausea symptoms varied with physical display size even at identical viewing angles. In Experiment 2, we investigated the effects of viewing angle and oscillation amplitude. The results showed that the effects of viewing angle were related not only to oscillation amplitude but also to other factors of viewing angle.
virtual reality software and technology | 2008
Shin'ichi Onimaru; Taro Uraoka; Naoyuki Matsuzaki; Michiteru Kitazaki
We developed a driving simulator with a visual and/or auditory information display to enhance the perception of the driving car's lateral position in real time. The purpose of this study was to test the effects of cross-modal assistance information on driving performance. We found that discrete visual assistance improved driving accuracy but increased driving load. For auditory and audio-visual assistance, continuous information improved accuracy without increasing the load. Thus, cross-modal information is useful for assisting and improving drivers' performance with less load.
Perception | 2008
Takao Sato; Yasuyuki Inoue; T Tani; Naoyuki Matsuzaki; K Kawamura; Michiteru Kitazaki
ITE Technical Report | 2000
Takao Sato; Naoyuki Matsuzaki
virtual reality software and technology | 2006
Michiteru Kitazaki; Tomoaki Nakano; Naoyuki Matsuzaki; Hiroaki Shigemasu
Transactions of the Virtual Reality Society of Japan | 2010
Jun Ando; Junichi Toyama; Hiroaki Shigemasu; Naoyuki Matsuzaki; Michiteru Kitazaki
SAE World Congress & Exhibition | 2009
Eri Kishida; Kenya Uenuma; Keijiro Iwao; Naoyuki Matsuzaki; Hiroaki Shigemasu; Michiteru Kitazaki
Transactions of the Japan Society of Mechanical Engineers. C | 2008
Eri Kishida; Naoyuki Matsuzaki; Kenya Uenuma; Hiroaki Shigemasu; Michiteru Kitazaki; Keijiro Iwao
Journal of Vision | 2010
Naoyuki Matsuzaki; Michiteru Kitazaki