Publications


Featured research published by Yôiti Suzuki.


PLOS ONE | 2009

Alternation of Sound Location Induces Visual Motion Perception of a Static Object

Souta Hidaka; Yuko Manaka; Wataru Teramoto; Yoichi Sugita; Ryota Miyauchi; Jiro Gyoba; Yôiti Suzuki; Yukio Iwaya

Background: Audition provides important cues with regard to stimulus motion, although vision may provide the most salient information. It has been reported that a sound of fixed intensity tends to be judged as decreasing in intensity after adaptation to looming visual stimuli, or as increasing in intensity after adaptation to receding visual stimuli. This audiovisual interaction in motion aftereffects indicates that there are multimodal contributions to motion perception at early levels of sensory processing. However, there has been no report that sounds can induce the perception of visual motion.

Methodology/Principal Findings: A visual stimulus blinking at a fixed location was perceived to be moving laterally when the flash onset was synchronized to an alternating left-right sound source. This illusory visual motion was strengthened with increasing retinal eccentricity (2.5 deg to 20 deg) and occurred more frequently when the onsets of the audio and visual stimuli were synchronized.

Conclusions/Significance: We clearly demonstrated that the alternation of sound location induces illusory visual motion when vision cannot provide accurate spatial information. The present findings strongly suggest that the neural representations of auditory and visual motion processing can bias each other, which yields the best estimates of external events in a complementary manner.


Neuroscience Letters | 2010

Visual motion perception induced by sounds in vertical plane

Wataru Teramoto; Yuko Manaka; Souta Hidaka; Yoichi Sugita; Ryota Miyauchi; Shuichi Sakamoto; Jiro Gyoba; Yukio Iwaya; Yôiti Suzuki

The alternation of sounds in the left and right ears induces motion perception of a static visual stimulus (SIVM: Sound-Induced Visual Motion). In that case, binaural cues were of considerable benefit in perceiving the locations and movements of the sounds. The present study investigated how a spectral cue, another important cue for sound localization and motion perception, contributed to the SIVM. In the experiments, two alternating sound sources aligned in the vertical plane were presented, synchronized with a static visual stimulus. We found that the proportion of the SIVM and the magnitude of the perceived movements of the static visual stimulus increased with increasing retinal eccentricity (1.875-30 degrees), indicating the influence of the spectral cue on the SIVM. These findings suggest that the SIVM can be generalized to the whole two-dimensional audio-visual space, and strongly imply that there are common neural substrates for auditory and visual motion perception in the brain.


PLOS ONE | 2012

Compression of Auditory Space during Forward Self-Motion

Wataru Teramoto; Shuichi Sakamoto; Fumimasa Furune; Jiro Gyoba; Yôiti Suzuki

Background: Spatial inputs from the auditory periphery can change with movements of the head or whole body relative to a sound source. Nevertheless, humans can perceive a stable auditory environment and react appropriately to a sound source. This suggests that the inputs are reinterpreted in the brain while being integrated with information about the movements. Little is known, however, about how these movements modulate auditory perceptual processing. Here, we investigate the effect of linear acceleration on auditory space representation.

Methodology/Principal Findings: Participants were passively transported forward or backward at constant accelerations using a robotic wheelchair. An array of loudspeakers was aligned parallel to the motion direction along a wall to the right of the listener. A short noise burst was presented during the self-motion from one of the loudspeakers when the listener's physical coronal plane reached the location of one of the speakers (null point). In Experiments 1 and 2, the participants indicated whether the sound was presented forward or backward relative to their subjective coronal plane. The results showed that the sound position aligned with the subjective coronal plane was displaced ahead of the null point only during forward self-motion, and that the magnitude of the displacement increased with increasing acceleration. Experiment 3 investigated the structure of auditory space in the traveling direction during forward self-motion. The sounds were presented at various distances from the null point, and the participants indicated the perceived sound location by pointing with a rod. All the sounds that were actually located in the traveling direction were perceived as biased towards the null point.

Conclusions/Significance: These results suggest a distortion of auditory space in the direction of movement during forward self-motion. The underlying mechanism might involve anticipatory spatial shifts in auditory receptive field locations driven by afferent signals from the vestibular system.


Attention Perception & Psychophysics | 2010

Auditory temporal cues can modulate visual representational momentum

Wataru Teramoto; Souta Hidaka; Jiro Gyoba; Yôiti Suzuki

In representational momentum (RM), the final position of a moving target is mislocalized in the direction of motion. Here, we demonstrate the effect of a concurrent sound on visual RM. A visual stimulus moved horizontally and disappeared at unpredictable positions. A complex tone without any motion cues was presented continuously from the beginning of the visual motion. Compared with a silent condition, the RM magnitude increased when the sound lasted longer than the visual motion and decreased when it ended before the visual motion did. However, the RM was unchanged when a brief complex tone was presented before or after the target disappeared (Experiment 2), or when the onset of the long-lasting sound was not synchronized with that of the visual motion (Experiments 3 and 4). These findings suggest that visual motion representation can be modulated by a sound if the visual motion information is firmly associated with the auditory information.


Neuroreport | 2009

Visual speech improves the intelligibility of time-expanded auditory speech.

Akihiro Tanaka; Shuichi Sakamoto; Komi Tsumura; Yôiti Suzuki

This study investigated the effects of intermodal timing differences and speed differences on the word intelligibility of auditory–visual speech. Words were presented under visual-only, auditory-only, and auditory–visual conditions. Two types of auditory–visual conditions were used: asynchronous and expansion conditions. In the asynchronous conditions, the audio lag was 0–400 ms. In the expansion conditions, the auditory signal was time expanded (0–400 ms), whereas the visual signal was kept at the original speed. Results showed that word intelligibility was higher in the auditory–visual conditions than in the auditory-only condition. The analysis of auditory–visual benefit revealed that the benefit at the end of words declined as the amount of time expansion increased, although it did not decline in the asynchronous conditions.


Vision Research | 2010

Sound can prolong the visible persistence of moving visual objects

Souta Hidaka; Wataru Teramoto; Jiro Gyoba; Yôiti Suzuki

An abrupt change in a visual attribute (size) of apparently moving visual stimuli extends the time the changed stimulus remains visible even after its physical termination (visible persistence). In this study, we show that this elongation of visible persistence is enhanced by an abrupt change in an attribute (frequency) of the sounds presented along with the size-changed, apparently moving visual stimuli. This auditory effect disappears when the sounds are not associated with the visual stimuli. These results suggest that an auditory attribute change can contribute to the establishment of a new object representation, and that object-level audio-visual interactions can occur in motion perception.


I-perception | 2013

Effects of head movement and proprioceptive feedback in training of sound localization.

Akio Honda; Hiroshi Shibata; Souta Hidaka; Jiro Gyoba; Yukio Iwaya; Yôiti Suzuki

We investigated the effects of listeners' head movements and proprioceptive feedback during sound localization practice on the subsequent accuracy of sound localization performance. The effects were examined under both restricted and unrestricted head movement conditions in the practice stage. In both cases, the participants were divided into two groups: a feedback group performed a sound localization drill with accurate proprioceptive feedback, whereas a control group performed it without feedback. Results showed that (1) sound localization practice allowing free head movement improved sound localization performance and decreased actual angular errors in the horizontal plane, and that (2) proprioceptive feedback during practice decreased actual angular errors in the vertical plane. Our findings suggest that unrestricted head movement and proprioceptive feedback during sound localization training enhance perceptual motor learning by enabling listeners to use variable auditory cues and proprioceptive information.


Journal of the Acoustical Society of America | 2013

Accuracy of head-related transfer functions synthesized with spherical microphone arrays

Cesar D. Salvador Castaneda; Shuichi Sakamoto; Jorge A. Trevino Lopez; Junfeng Li; Yonghong Yan; Yôiti Suzuki

The spherical harmonic decomposition can be applied to present realistically localized sound sources over headphones. The acoustic field, measured by a spherical microphone array, is first decomposed into a weighted sum of spherical harmonics evaluated at the microphone positions. The resulting decomposition is used to generate a set of virtual sources at various angles. The virtual sources are then presented binaurally by applying the corresponding head-related transfer functions (HRTFs). Reproduction accuracy depends heavily on the spatial distribution of the microphones and virtual sources. Nearly uniform sphere samplings are used to position the microphones so as to improve spatial accuracy; however, no previous studies have examined the optimal arrangement of the virtual sources. We evaluate the effects of the virtual source distribution on the accuracy of the synthesized HRTFs. Furthermore, our study considers the impact of spatial aliasing for a 252-channel spherical microphone array. The array's body is modeled as a human-head-sized rigid sphere. We evaluate the synthesis error by comparing with the target HRTFs using the logarithmic spectral distance. Our study finds that 362 virtual sources, distributed on an icosahedral grid, can synthesize the HRTFs in the horizontal plane up to 9 kHz with a log-spectral distance below 5 dB.
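The logarithmic spectral distance used as the error measure above can be computed directly from two frequency responses. The following is a minimal sketch rather than the authors' code; the function name log_spectral_distance and the 512-bin frequency grid are assumptions for illustration.

    # Minimal sketch (assumed, not the authors' implementation): log-spectral
    # distance in dB between a synthesized HRTF and a measured target, given
    # both as complex frequency responses on the same frequency grid.
    import numpy as np

    def log_spectral_distance(h_target, h_synth, eps=1e-12):
        mag_t = np.abs(h_target) + eps
        mag_s = np.abs(h_synth) + eps
        diff_db = 20.0 * np.log10(mag_t / mag_s)   # per-bin level difference
        return np.sqrt(np.mean(diff_db ** 2))      # RMS over frequency bins

    # Hypothetical usage: two responses sampled on the same 512-bin grid.
    rng = np.random.default_rng(0)
    h_ref = rng.standard_normal(512) + 1j * rng.standard_normal(512)
    h_est = h_ref * (1.0 + 0.05 * rng.standard_normal(512))
    print(f"LSD = {log_spectral_distance(h_ref, h_est):.2f} dB")

A distance below 5 dB, as reported above for the icosahedral grid of 362 virtual sources, corresponds to a small average deviation between the synthesized and target magnitude spectra.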


Journal of the Acoustical Society of America | 2013

Improvement of accuracy of 3D sound space synthesized by real-time “SENZI,” a sound space information acquisition system using spherical array with numerous microphones

Shuichi Sakamoto; Satoshi Hongo; Takuma Okamoto; Yukio Iwaya; Yôiti Suzuki

We proposed a method for sensing 3D sound-space information based on symmetrically and densely arranged microphones mounted on a solid sphere. We call this method SENZI [Sakamoto et al., ISUC2008 (2008)]. In SENZI, the signal sensed by each microphone is simply weighted and summed to synthesize a listener's HRTF, reflecting the listener's facing direction. Weighting coefficients are calculated for individual listeners based on their HRTFs. These coefficients are changed according to the listener's head movement, which is known to provide an important dynamic perceptual cue for sound localization. Therefore, accurate sound-space information can be presented to an unlimited number of listeners, not only at remote locations but also at later times. Recently, we implemented this method as a real-time system using a 252-channel spherical microphone array and FPGAs. With this system, accurate sound-space information up to around 10 kHz can be synthesized for any listener. However, the SNR of the microphones affected...
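To make the weighted-sum idea concrete, the sketch below assumes broadband (frequency-independent) weights selected by the listener's quantized facing direction; in the actual system the coefficients are derived from the listener's HRTFs and are frequency-dependent, so all names and shapes here are hypothetical.

    # Minimal sketch of the SENZI-style synthesis described above (an
    # assumption, not the authors' implementation): one ear signal is a
    # weighted sum of the microphone signals, with the weight vector looked
    # up for the listener's current facing direction.
    import numpy as np

    def synthesize_ear_signal(mic_signals, weights_by_direction, facing_deg):
        # mic_signals: (num_mics, num_samples)
        # weights_by_direction: dict mapping a quantized angle in degrees
        # to a (num_mics,) weight vector for one ear.
        key = int(round(facing_deg)) % 360     # quantize head direction
        w = weights_by_direction[key]          # pick precomputed coefficients
        return w @ mic_signals                 # weighted sum over microphones

    # Hypothetical usage with a 252-channel array and placeholder weights.
    num_mics, num_samples = 252, 4800
    mic_signals = np.random.randn(num_mics, num_samples)
    weights_by_direction = {d: np.random.randn(num_mics) * 1e-2 for d in range(360)}
    left_ear = synthesize_ear_signal(mic_signals, weights_by_direction, 37.4)
    print(left_ear.shape)   # (4800,)

Because the weights, not the recorded signals, encode the listener's HRTFs and head orientation, the same recording can be rendered later for any listener, which is the point made in the abstract.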


Journal of the Acoustical Society of America | 2008

Numerical Analysis of the Effects of Pinna Shape and Position on the Characteristics of Head‐Related Transfer Functions

Yukio Iwaya; Yôiti Suzuki

There are distinctive notches and peaks in head-related transfer functions (HRTFs). Some of them are considered important cues for the perception of elevation angle, and thus the roles of these peaks and notches should be clarified. It is known that the characteristics of HRTFs are deeply related to the listener's anthropometry. It is therefore natural to expect that the frequencies of the peaks and notches also change according to individual differences in anthropometry. In this study, the effects of pinna shape and position on the frequencies of the peaks and notches are examined by numerical analysis using the boundary element method (BEM). A three-dimensional model of a dummy head was constructed with a three-dimensional laser scanner, and the HRTFs of the model were numerically computed with a BEM solver. The model was modified with respect to the following features: 1) pinna position, 2) pinna size, 3) angle of the pinna relative to the listener's head, and 4) presence of pinna wrinkles. HRT...
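As an illustrative aside rather than part of the study, the notch frequencies that such an analysis tracks can be extracted from a computed HRTF magnitude response; the helper find_hrtf_notches, the linear frequency grid, and the toy spectrum below are all assumptions.

    # Illustrative sketch only: locate spectral notches in an HRTF magnitude
    # response sampled on a linear grid from 0 Hz to fs/2.
    import numpy as np
    from scipy.signal import find_peaks

    def find_hrtf_notches(h, fs, depth_db=6.0):
        mag_db = 20.0 * np.log10(np.abs(h) + 1e-12)
        # Notches are dips, so search for peaks in the negated spectrum.
        idx, _ = find_peaks(-mag_db, prominence=depth_db)
        freqs = np.linspace(0.0, fs / 2.0, len(h))
        return freqs[idx]

    # Hypothetical usage: a toy response with an artificial dip near 8 kHz.
    fs, n = 48000, 1024
    freqs = np.linspace(0.0, fs / 2.0, n)
    mag = 1.0 - 0.9 * np.exp(-((freqs - 8000.0) / 300.0) ** 2)
    print(find_hrtf_notches(mag, fs))   # roughly 8000 Hz

Tracking how such notch frequencies shift as the pinna model is repositioned, resized, or rotated is exactly the kind of comparison the numerical analysis above is set up to make.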

Collaboration


Dive into Yôiti Suzuki's collaborations.

Top Co-Authors

Yukio Iwaya (Tohoku Gakuin University)
Takuma Okamoto (National Institute of Information and Communications Technology)
Akio Honda (Tohoku Fukushi University)