
Publication


Featured research published by Yukio Iwaya.


PLOS ONE | 2009

Alternation of Sound Location Induces Visual Motion Perception of a Static Object

Souta Hidaka; Yuko Manaka; Wataru Teramoto; Yoichi Sugita; Ryota Miyauchi; Jiro Gyoba; Yôiti Suzuki; Yukio Iwaya

Background: Audition provides important cues regarding stimulus motion, although vision may provide the most salient information. It has been reported that a sound of fixed intensity tends to be judged as decreasing in intensity after adaptation to looming visual stimuli, or as increasing in intensity after adaptation to receding visual stimuli. This audiovisual interaction in motion aftereffects indicates that there are multimodal contributions to motion perception at early levels of sensory processing. However, there has been no report that sounds can induce the perception of visual motion.

Methodology/Principal Findings: A visual stimulus blinking at a fixed location was perceived to be moving laterally when the flash onset was synchronized to an alternating left-right sound source. This illusory visual motion was strengthened with increasing retinal eccentricity (2.5 deg to 20 deg) and occurred more frequently when the onsets of the audio and visual stimuli were synchronized.

Conclusions/Significance: We clearly demonstrated that the alternation of sound location induces illusory visual motion when vision cannot provide accurate spatial information. The present findings strongly suggest that the neural representations of auditory and visual motion processing can bias each other, which yields the best estimates of external events in a complementary manner.


Neuroscience Letters | 2010

Visual motion perception induced by sounds in vertical plane

Wataru Teramoto; Yuko Manaka; Souta Hidaka; Yoichi Sugita; Ryota Miyauchi; Shuichi Sakamoto; Jiro Gyoba; Yukio Iwaya; Yôiti Suzuki

The alternation of sounds between the left and right ears induces motion perception of a static visual stimulus (SIVM: Sound-Induced Visual Motion). In that case, binaural cues are of considerable benefit in perceiving the locations and movements of the sounds. The present study investigated how a spectral cue, another important cue for sound localization and motion perception, contributes to the SIVM. In our experiments, two alternating sound sources aligned in the vertical plane were presented, synchronized with a static visual stimulus. We found that the proportion of SIVM responses and the magnitude of the perceived movements of the static visual stimulus increased with increasing retinal eccentricity (1.875-30 degrees), indicating the influence of the spectral cue on the SIVM. These findings suggest that the SIVM generalizes to the whole two-dimensional audio-visual space, and strongly imply that there are common neural substrates for auditory and visual motion perception in the brain.


Advanced Information Networking and Applications | 2004

Design of network management support system based on active information resource

Susumu Konno; Yukio Iwaya; Toru Abe; Tetsuo Kinoshita

Generally, managing and maintaining a network system requires a series of operations: assessing the network status, determining network errors, and selecting, approving, and applying countermeasures. Since network systems are becoming larger and more complicated, network operations increasingly require specialized professional knowledge and effort, imposing heavy workloads on network administrators. In this paper, we propose and design AIR-NMS (Active Information Resource architecture-based Network Management Support System) to decrease network administrators' workloads.


Journal of the Acoustical Society of America | 2007

Estimation of interaural level difference based on anthropometry and its effect on sound localization

Kanji Watanabe; Kenji Ozawa; Yukio Iwaya; Yôiti Suzuki; Kenji Aso

Individualization of head-related transfer functions (HRTFs) is important for highly accurate sound localization systems such as virtual auditory displays. A method to estimate interaural level differences (ILDs) from a listener's anthropometry is presented in this paper to avoid the burden of directly measuring HRTFs. The main result is that localization improves with a nonindividualized HRTF if the ILD is fitted to the listener. First, the relationship between ILDs and the anthropometric parameters was analyzed using multiple regression analysis. The azimuthal variation of the ILD in each 1/3-octave band was then estimated from the listener's anthropometric parameters. A psychoacoustical experiment was carried out to evaluate the method's effectiveness. The experimental results show that adjusting the frequency characteristics of the ILDs to a listener with the proposed method improves localization accuracy.
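The regression step can be sketched as follows. This is a minimal illustration with synthetic data, not the paper's model: the anthropometric parameters, their values, and the weights are all made up, and a single frequency band and azimuth are assumed.

```python
import numpy as np

# Hypothetical setup: estimate the ILD (dB) in one 1/3-octave band at one
# azimuth from a few anthropometric parameters via multiple regression.
rng = np.random.default_rng(0)

# Training set: head width, head depth, pinna height (cm) for 20 listeners.
anthropometry = rng.normal(loc=[15.0, 19.0, 6.5], scale=0.8, size=(20, 3))

# Synthetic "measured" ILDs generated from known weights plus noise,
# standing in for ILDs derived from measured HRTFs.
true_weights = np.array([0.9, -0.4, 1.2])
ilds = anthropometry @ true_weights + 2.0 + rng.normal(scale=0.1, size=20)

# Multiple regression: solve for weights and intercept by least squares.
design = np.column_stack([anthropometry, np.ones(len(ilds))])
coef, *_ = np.linalg.lstsq(design, ilds, rcond=None)

# Predict the ILD for a new listener from anthropometry alone,
# with no HRTF measurement needed.
new_listener = np.array([15.5, 18.5, 7.0, 1.0])
predicted_ild = new_listener @ coef
print(f"predicted ILD: {predicted_ild:.2f} dB")
```

In the paper this would be repeated per band and per azimuth; the sketch shows only the shape of one such fit.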


Journal of Vision | 2012

Sounds can alter the perceived direction of a moving visual object.

Wataru Teramoto; Souta Hidaka; Yoichi Sugita; Shuichi Sakamoto; Jiro Gyoba; Yukio Iwaya; Yôiti Suzuki

Auditory temporal or semantic information often modulates visual motion events. However, the effects of auditory spatial information on visual motion perception have been reported to be absent, or smaller, at the perceptual level. This could be caused by the superior reliability of visual motion information over auditory motion information. Here, we manipulated the retinal eccentricity of visual motion and challenged the previous findings. Visual apparent motion stimuli were presented in conjunction with a sound delivered alternately from two horizontally or vertically aligned loudspeakers; the direction of visual apparent motion was always perpendicular to the direction in which the sound alternated. We found that the perceived direction of visual motion could be consistent with the direction in which the sound alternated, or could lie between this direction and that of the actual visual motion. The deviation of the perceived direction of motion from the actual direction was more likely to occur at larger retinal eccentricities. These findings suggest that the auditory and visual modalities can influence one another in motion processing so that the brain obtains the best estimates of external events.


Japanese Journal of Applied Physics | 2013

A Hardware-Oriented Finite-Difference Time-Domain Algorithm for Sound Field Rendering

Tan Yiyu; Yasushi Inoguchi; Yukinori Sato; Makoto Otani; Yukio Iwaya; Hiroshi Matsuoka; Takao Tsuchiya

Sound field rendering is a data-intensive and computation-intensive application. An alternative to purely software-based computation is to implement sound field rendering algorithms directly in hardware. In this paper, a hardware-oriented finite-difference time-domain (FDTD) algorithm named HO-FDTD is proposed for sound field rendering; it involves no complex operations and consumes few hardware resources. In a sound space with 32,768 elements surrounded by rigid walls, the hardware simulation results are in good agreement with the software simulation results except for a one-cycle delay. In the software simulation, when the element scale is 32×32×32 and the number of time steps is 20,000, HO-FDTD speeds up computation by 19% against the updated digital Huygens model (DHM) and Yee-FDTD, and by 132% against the original DHM. Compared with the software simulation, the hardware systems with the parallel architecture and the time-sharing architecture significantly enhance calculation performance at different element scales and provide higher data throughput.
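To illustrate why FDTD schemes map well onto hardware, here is a minimal 1-D sketch of sound propagation between rigid walls. This is not the paper's HO-FDTD itself; it only shows the general idea that, with the Courant number fixed at 1, the whole update reduces to additions and subtractions, with no multiplications in the interior at all.

```python
import numpy as np

# Minimal 1-D FDTD for the acoustic wave equation with Courant number 1.
n_cells, n_steps = 65, 25
p_prev = np.zeros(n_cells)      # pressure at time step n-1
p_curr = np.zeros(n_cells)      # pressure at time step n
p_curr[n_cells // 2] = 1.0      # impulse excitation at the centre cell

for _ in range(n_steps):
    p_next = np.empty(n_cells)
    # Interior update: p[i]^(n+1) = p[i-1]^n + p[i+1]^n - p[i]^(n-1)
    # (additions and subtractions only -- hardware-friendly)
    p_next[1:-1] = p_curr[:-2] + p_curr[2:] - p_prev[1:-1]
    # Rigid walls (dp/dx = 0), modelled with a mirrored ghost cell.
    p_next[0] = 2.0 * p_curr[1] - p_prev[0]
    p_next[-1] = 2.0 * p_curr[-2] - p_prev[-1]
    p_prev, p_curr = p_curr, p_next

# The impulse spreads symmetrically outward from the centre.
print(p_curr[n_cells // 2 - 3 : n_cells // 2 + 4])
```

A real sound field renderer works on a 3-D grid and a multiplier-free update like HO-FDTD, but the time-stepping structure is the same.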


Japanese Journal of Applied Physics | 2014

A real-time sound rendering system based on the finite-difference time-domain algorithm

Tan Yiyu; Yasushi Inoguchi; Yukinori Sato; Makoto Otani; Yukio Iwaya; Hiroshi Matsuoka; Takao Tsuchiya

Real-time sound rendering applications are memory-intensive and computation-intensive. To speed up computation and extend the simulated area, a real-time sound rendering system based on the hardware-oriented finite-difference time-domain algorithm (HO-FDTD) and a time-sharing architecture is proposed and implemented on a field-programmable gate array (FPGA) in this study. Compared with the traditional rendering system with a parallel architecture, the proposed system extends the simulated area by about 37 times because data are stored in on-chip block memories instead of D flip-flops. The hardware system becomes stable after 400 time steps of the impulse response. To render a three-minute Beethoven classical music clip, the hardware system runs in real time, whereas the software simulation takes about 63 min on a computer with 4 GB RAM and an AMD Phenom 9500 quad-core processor running at 2.2 GHz.


IEEE International Conference on Network Infrastructure and Digital Content | 2010

Implementation of a high-definition 3D audio-visual display based on higher-order ambisonics using a 157-loudspeaker array combined with a 3D projection display

Takuma Okamoto; Zheng Lie Cui; Yukio Iwaya; Yôiti Suzuki

We have implemented a high-definition 3D audiovisual reproduction system based on higher-order ambisonics (HOA), using a surrounding 157-loudspeaker array combined with a 3D projection display to reproduce information with a high sense of presence. In this report, we give an overview of the system and describe our investigation into achieving good system synchronization. The results show that all 157 loudspeaker channels were synchronized to within one sample (at 48 kHz) when controlled by a single PC with three Multichannel Audio Digital Interface (MADI) systems. Moreover, the latency between the 157 audio signals and the video signal was only about 1.1 ms.
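The synchronization figures above can be put on a common scale with plain arithmetic; the 48 kHz rate and 1.1 ms latency are taken from the report, the conversion is ours:

```python
# One-sample synchronization level at 48 kHz, and the reported
# 1.1 ms audio-video latency expressed in samples.
sample_rate_hz = 48_000
one_sample_us = 1e6 / sample_rate_hz
latency_ms = 1.1
latency_samples = latency_ms * 1e-3 * sample_rate_hz

print(f"one sample at 48 kHz = {one_sample_us:.1f} us")   # 20.8 us
print(f"1.1 ms latency = {latency_samples:.1f} samples")  # 52.8 samples
```

So inter-channel sync is roughly 50 times tighter than the audio-to-video latency.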


IEEE/WIC/ACM International Conference on Intelligent Agent Technology | 2006

Knowledge-Based Support of Network Management Tasks Using Active Information Resource

Susumu Konno; Abar Sameera; Yukio Iwaya; Toru Abe; Tetsuo Kinoshita

A network system is large and complex, and network administrators must perform exhaustive work to maintain its quality and functions. To reduce the administrators' load, systematic and intelligent facilities for network management tasks should be realized and provided. In this paper, we propose a knowledge-based support method for network management tasks using the active information resource (AIR), which couples knowledge and functions with its information resource. Furthermore, a novel network management support system based on this method, called AIR-NMS, is proposed using agent-based computing technologies. In AIR-NMS, many AIRs are defined and utilized to monitor and collect the status information of the network automatically. The AIRs collaborate with each other to inspect the behavior of the network, and the administrator obtains useful management support from the responses of AIR-NMS. Moreover, a prototype system is implemented to demonstrate and evaluate the essential functions of AIR-NMS.
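The AIR idea can be sketched in a few lines: each AIR couples a piece of network status information with knowledge about how to inspect it, and the AIRs' individual inspections are aggregated into a report. The class, resource names, and thresholds below are illustrative, not the paper's implementation.

```python
# Toy sketch of "active information resources" for network management.
class AIR:
    def __init__(self, name, status, check):
        self.name = name        # which resource this AIR watches
        self.status = status    # collected status information
        self.check = check      # knowledge: an inspection rule over the status

    def inspect(self):
        """Apply this AIR's own knowledge to its own information."""
        return self.check(self.status)

# AIRs monitoring different resources of a hypothetical network.
airs = [
    AIR("dns", {"resolve_ms": 3500}, lambda s: s["resolve_ms"] > 1000),
    AIR("link", {"loss_pct": 0.1}, lambda s: s["loss_pct"] > 5.0),
    AIR("web", {"http_errors": 40}, lambda s: s["http_errors"] > 10),
]

# Collaboration: aggregate the inspections into the kind of report the
# administrator would otherwise have to assemble by hand.
anomalies = [air.name for air in airs if air.inspect()]
print("suspected resources:", anomalies)  # ['dns', 'web']
```

In AIR-NMS the AIRs are realized as agents that also exchange information with one another; this sketch keeps only the status-plus-knowledge coupling.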


Archive | 2011

Effects of microphone arrangements on the accuracy of a spherical microphone array (SENZI) in acquiring high-definition 3D sound space information

Jun'ichi Kodama; Shuichi Sakamoto; Satoshi Hongo; Takuma Okamoto; Yukio Iwaya; Yôiti Suzuki

We propose a three-dimensional sound space sensing system using a microphone array on a solid, human-head-sized sphere with numerous microphones, called SENZI (Symmetrical object with ENchased ZIllion microphones). It can acquire 3D sound space information accurately for recording and/or transmission to a distant place. Moreover, once recorded, the accurate information can be reproduced for any listener at any time. This study investigated the effects of the microphone arrangement and the number of controlled directions on the accuracy of the sound space information acquired by SENZI. The results of a computer simulation indicate that the microphones should be arranged at intervals of 5.7 degrees or narrower to avoid the effect of spatial aliasing, and that the controlled directions should be set densely, at intervals of less than 5 degrees, when the microphone array radius is 85 mm.
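The 5.7-degree figure has a simple sanity check (ours, not from the paper's simulation): on an 85 mm-radius sphere that angular interval gives an arc spacing close to the half wavelength of 20 kHz sound, the classic spatial aliasing limit. The speed of sound and the 20 kHz band edge are assumptions.

```python
import math

# Arc spacing between neighbouring microphones on the sphere.
radius_m = 0.085                    # SENZI array radius (from the text)
interval_deg = 5.7                  # microphone interval (from the text)
spacing_m = radius_m * math.radians(interval_deg)

# Half wavelength at the top of the audible band (assumed values).
c = 343.0                           # speed of sound in air, m/s
f_max = 20_000.0                    # upper band edge, Hz
half_wavelength_m = c / f_max / 2

print(f"microphone spacing: {spacing_m * 1000:.2f} mm")              # 8.46 mm
print(f"half wavelength at 20 kHz: {half_wavelength_m * 1000:.2f} mm")
```

The two lengths nearly coincide, which is consistent with 5.7 degrees being the aliasing-free limit for full-audio-band capture at this radius.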

Collaboration


Dive into Yukio Iwaya's collaborations.

Top Co-Authors

Takuma Okamoto

National Institute of Information and Communications Technology

Akio Honda

Tohoku Fukushi University
