Publication


Featured research published by Maori Kobayashi.


BMC Neuroscience | 2011

Sound-contingent visual motion aftereffect

Souta Hidaka; Wataru Teramoto; Maori Kobayashi; Yoichi Sugita

Background: After prolonged exposure to a paired presentation of different types of signals (e.g., color and motion), one of the signals (color) becomes a driver for the other (motion). This phenomenon, known as the contingent motion aftereffect, indicates that the brain can establish new neural representations even in the adult brain. However, contingent motion aftereffects have been reported only within the visual or the auditory domain. Here, we demonstrate that a visual motion aftereffect can be contingent on a specific sound. Results: Dynamic random dots moving alternately rightward or leftward were presented to the participants. Each direction of motion was accompanied by an auditory tone of a unique, specific frequency. After a 3-minute exposure, the tones began to exert a marked influence on visual motion perception, and the percentage of dots required to trigger motion perception changed systematically depending on the tones. Furthermore, this effect lasted for at least two days. Conclusions: These results indicate that a new neural representation can be rapidly established between the auditory and visual modalities.
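To make the adaptation procedure concrete, here is a minimal sketch (not the authors' code) of how direction-contingent dot motion paired with tones could be generated with NumPy. The dot counts, tone frequencies, speeds, and durations are placeholders, not the values used in the study.

```python
import numpy as np

def dot_frames(n_dots=200, n_frames=90, coherence=1.0, direction=+1,
               speed=0.01, rng=None):
    """Generate x,y dot positions for one adaptation sweep.

    A fraction `coherence` of dots moves coherently in `direction`
    (+1 right, -1 left); the rest are replotted at random positions.
    """
    rng = rng or np.random.default_rng(0)
    xy = rng.uniform(0, 1, size=(n_dots, 2))
    frames = []
    for _ in range(n_frames):
        coherent = rng.uniform(0, 1, n_dots) < coherence
        xy[coherent, 0] = (xy[coherent, 0] + direction * speed) % 1.0
        xy[~coherent] = rng.uniform(0, 1, size=((~coherent).sum(), 2))
        frames.append(xy.copy())
    return np.stack(frames)

def paired_tone(direction, dur=1.5, fs=44100):
    """Pure tone whose frequency is contingent on motion direction."""
    freq = 2000.0 if direction > 0 else 500.0   # placeholder frequencies
    t = np.arange(int(dur * fs)) / fs
    return np.sin(2 * np.pi * freq * t)

# One adaptation cycle: rightward dots paired with the "rightward" tone.
frames = dot_frames(direction=+1)
tone = paired_tone(+1)
```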


Scientific Reports | 2012

Indiscriminable sounds determine the direction of visual motion

Maori Kobayashi; Wataru Teramoto; Souta Hidaka; Yoichi Sugita

In cross-modal interactions, top-down controls such as attention and explicit identification of the cross-modal inputs have been assumed to play crucial roles in optimizing perception. Here we show that cross-modal associations can be established without such top-down controls. The onsets of two circles producing apparent motion perception were accompanied by sounds that were indiscriminable from one another, each consisting of six identical frequency components and one unique component. After adaptation to the visual apparent motion with these sounds, the sounds acquired a driving effect for illusory visual apparent motion perception. Moreover, pure tones at each sound's unique frequency acquired the same effect after adaptation, indicating that the difference between the indiscriminable sounds was implicitly coded. We further confirmed that the aftereffect did not transfer between eyes. These results suggest that the brain establishes new neural representations between sound frequency and visual motion, without clear identification of the specific relationship between cross-modal stimuli, in early perceptual processing stages.
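As an illustration of the sound design described above, the following NumPy sketch synthesizes two complex sounds that share six identical frequency components and differ only in one unique component. All frequencies, durations, and ramps are assumed placeholder values, not the stimuli used in the paper.

```python
import numpy as np

FS = 44100
SHARED = [400, 800, 1200, 1600, 2000, 2400]   # six shared components (Hz), placeholders
UNIQUE_A, UNIQUE_B = 3000, 3400               # unique components (Hz), placeholders

def complex_tone(freqs, dur=0.05, fs=FS):
    """Sum of equal-amplitude sinusoids with raised-cosine on/off ramps."""
    t = np.arange(int(dur * fs)) / fs
    x = sum(np.sin(2 * np.pi * f * t) for f in freqs)
    ramp = int(0.005 * fs)
    env = np.ones_like(x)
    env[:ramp] = 0.5 * (1 - np.cos(np.pi * np.arange(ramp) / ramp))
    env[-ramp:] = env[:ramp][::-1]
    return (x * env) / len(freqs)

sound_a = complex_tone(SHARED + [UNIQUE_A])   # paired with one motion direction
sound_b = complex_tone(SHARED + [UNIQUE_B])   # paired with the other direction
```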


PLOS ONE | 2012

Sound frequency and aural selectivity in sound-contingent visual motion aftereffect

Maori Kobayashi; Wataru Teramoto; Souta Hidaka; Yoichi Sugita

Background: One possible strategy for evaluating whether signals in different modalities originate from a common external event or object is to form associations between inputs from the different senses. This strategy would be quite effective, because signals in different modalities arising from a common external event are aligned spatially and temporally. Indeed, it has been demonstrated that after adaptation to visual apparent motion paired with alternating auditory tones, the tones begin to trigger illusory motion perception of a static visual stimulus, where the perceived direction of visual lateral motion depends on the order in which the tones are replayed. The mechanisms underlying this phenomenon remain unclear. One important approach to understanding these mechanisms is to examine whether the effect shows selectivity in auditory processing. However, it has not yet been determined whether this aftereffect transfers across sound frequencies and between ears. Methodology/Principal Findings: Two circles placed side by side were presented in alternation, producing apparent motion perception, and each onset was accompanied by a tone burst of a specific and unique frequency. After exposure to this visual apparent motion with tones for a few minutes, the tones became drivers for illusory motion perception. However, the aftereffect was observed only when the adapter and test tones were presented at the same frequency and to the same ear. Conclusions/Significance: These findings suggest that the auditory processing underlying the establishment of novel audiovisual associations is selective, potentially but not necessarily indicating that this processing occurs at an early stage.
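To illustrate the kind of selectivity analysis implied by this design, here is a small sketch that computes an aftereffect index per test condition from hypothetical response counts (not the paper's data): the proportion of trials on which a static test stimulus was reported to move in the direction predicted by the adapted tone-motion pairing, relative to the 0.5 chance level.

```python
# Hypothetical response counts per condition: (congruent reports, total trials).
# "same"/"diff" refer to whether the test tone's frequency and ear match the adapter.
counts = {
    ("same_freq", "same_ear"): (68, 80),
    ("same_freq", "diff_ear"): (42, 80),
    ("diff_freq", "same_ear"): (39, 80),
    ("diff_freq", "diff_ear"): (41, 80),
}

for cond, (congruent, total) in counts.items():
    index = congruent / total - 0.5   # 0 = no aftereffect, positive = aftereffect
    print(f"{cond}: aftereffect index = {index:+.2f}")
```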


Teleoperators and Virtual Environments | 2015

The effects of spatialized sounds on the sense of presence in auditory virtual environments: A psychological and physiological study

Maori Kobayashi; Kanako Ueno; Shiro Ise

Although many studies have indicated that spatialized sounds increase the subjective sense of presence in virtual environments, few studies have examined the effects of such sounds objectively. In this study, we examined whether three-dimensionally reproduced sounds increase the sense of presence in auditory virtual environments, using both physiological and psychological measures. We presented sounds of people approaching the listener, reproduced through a three-dimensional reproduction system using 96 loudspeakers. There were two conditions, spatialized and non-spatialized, which differed in the spatial accuracy of the reproduction. The experimental results showed that presence ratings were greater for spatialized than for non-spatialized sounds. Furthermore, the physiological measures showed that the sympathetic nervous system was activated to a greater extent by the spatialized sounds than by the non-spatialized sounds, and that the responses to the three-dimensionally reproduced sounds were similar to those that occur during intrusions into personal space in the real world. Additionally, a correlation was found between the psychological and physiological responses in the spatialized sound condition. These results suggest that the physiological measures correlate with perceived presence in acoustic environments.
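A minimal sketch of the type of psychophysiological correlation analysis mentioned above, using hypothetical per-participant values (not the study's data): presence ratings correlated with a sympathetic-arousal measure such as skin conductance response amplitude.

```python
import numpy as np

# Hypothetical per-participant values (placeholders, not the study's data).
presence_rating = np.array([3.1, 4.5, 2.8, 5.0, 4.2, 3.7, 4.8, 3.3])        # e.g. 7-point scale
scr_amplitude   = np.array([0.12, 0.31, 0.09, 0.40, 0.27, 0.18, 0.35, 0.15])  # microsiemens

r = np.corrcoef(presence_rating, scr_amplitude)[0, 1]
print(f"Pearson r between presence ratings and SCR amplitude: {r:.2f}")
```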


Experimental Brain Research | 2013

Vision contingent auditory pitch aftereffects

Wataru Teramoto; Maori Kobayashi; Souta Hidaka; Yoichi Sugita

Visual motion aftereffects can occur contingent on arbitrary sounds. Two circles, placed side by side, were alternately presented, and their onsets were accompanied by tone bursts of high and low frequencies, respectively. After a few minutes of exposure to the visual apparent motion with the tones, a circle blinking at a fixed location was perceived as moving laterally in the same direction as the previously exposed apparent motion (Teramoto et al. in PLoS One 5:e12255, 2010). In the present study, we attempted to reverse this contingency (pitch aftereffects contingent on visual information). Results showed that after prolonged exposure to the audio-visual stimuli, the apparent visual motion systematically affected the perceived pitch of the auditory stimuli. When leftward apparent visual motion was paired with the high–low-frequency sequence during the adaptation phase, a test tone sequence was more frequently perceived as a high–low-pitch sequence when leftward apparent visual motion was presented, and vice versa. Furthermore, the effect was specific to the exposed visual field and did not transfer to the other side, ruling out an explanation in terms of simple response bias. These results suggest that new audiovisual associations can be established within a short time, and that visual and auditory information processing can mutually influence each other.
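A small sketch (with hypothetical trial records, not the paper's data) of how test-phase responses could be summarized: for each visual motion direction shown with an ambiguous test tone sequence, count how often listeners reported a high-low pitch order.

```python
from collections import Counter

# Hypothetical test-phase trials: (visual motion shown, reported pitch order).
trials = [
    ("left", "high-low"), ("left", "high-low"), ("left", "low-high"),
    ("right", "low-high"), ("right", "high-low"), ("right", "low-high"),
    # ... more trials would follow in a real experiment
]

tally = Counter(trials)
for direction in ("left", "right"):
    hl = tally[(direction, "high-low")]
    total = hl + tally[(direction, "low-high")]
    print(f"{direction}: P(report high-low) = {hl / total:.2f}  (n = {total})")
```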


Journal of the Acoustical Society of America | 2013

Subjective evaluation of a virtual acoustic system: Trials with three-dimensional sound field reproduced by the “Sound Cask”

Maori Kobayashi; Kanako Ueno; Mai Yamashita; Shiro Ise; Seigo Enomoto

Subjective measures need to be established for evaluating the performance of virtual acoustic systems. In this paper, we report our trials to evaluate the performance of a three-dimensional sound field reproduction system based on the boundary surface control principle, the “Sound Cask.” First, we describe our investigations with audio engineering experts, conducted to clarify the differences in auditory impression between the Sound Cask and conventional audio systems. Second, we report psychological and physiological experiments focusing on the advantages of the Sound Cask identified by the experts: localization performance and a clear sense of reality. Finally, we discuss issues to be considered in the subjective evaluation of virtual acoustic systems in future studies.
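The boundary surface control (BoSC) principle aims to reproduce, with a loudspeaker array, the sound pressures recorded at microphones on a boundary surface enclosing the listening region; in the frequency domain this amounts to inverting, in a least-squares sense, the transfer matrix from the loudspeakers to the control points. The toy NumPy sketch below illustrates that idea at a single frequency, with a random transfer matrix standing in for measured responses; it is a sketch of the principle, not the Sound Cask's actual filter design (which involves regularization and per-frequency-bin inverse filters built from measured responses).

```python
import numpy as np

rng = np.random.default_rng(1)

n_speakers = 96       # loudspeakers in the reproduction room (as in the Sound Cask)
n_controls = 80       # control microphones on the boundary surface (placeholder)

# Stand-in complex transfer matrix G[m, s]: speaker s -> control mic m, at one frequency.
G = rng.standard_normal((n_controls, n_speakers)) \
    + 1j * rng.standard_normal((n_controls, n_speakers))

# Desired sound pressures at the control points (recorded in the original field).
p_desired = rng.standard_normal(n_controls) + 1j * rng.standard_normal(n_controls)

# Least-squares (pseudo-inverse) loudspeaker driving signals for this frequency bin.
d = np.linalg.pinv(G) @ p_desired

reproduction_error = np.linalg.norm(G @ d - p_desired) / np.linalg.norm(p_desired)
print(f"relative reproduction error at this frequency: {reproduction_error:.2e}")
```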


Intelligent Information Hiding and Multimedia Signal Processing | 2009

The Effects of Ambient Sounds on the Quality of 3D Virtual Sound Space

Satoshi Yairi; Yukio Iwaya; Maori Kobayashi; Makoto Otani; Yôiti Suzuki; Takeru Chiba

Immersive sound spaces synthesized using virtual reality techniques have recently been developed to realize highly realistic telecommunication interactions. In real-world soundscapes, the sounds we usually listen to include background or “ambient” sounds, such as the sound of a room's ventilation system. However, current virtual auditory display systems generate point sound sources that are typically attached to specific object locations. They can therefore reproduce sound direction, but convey no information about the surrounding sound space, so the output often sounds dry and unnatural. In this research, a rendering method for ambient sounds and its effects are investigated. An optimal rendering algorithm for ambient sounds is proposed, and its effects on the quality of the sound space are examined.
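As a sketch of the general idea, the following code mixes a binaurally rendered point source with a decorrelated ambient bed at a chosen level. The head-related impulse responses and signals are random placeholders, and this is only an illustration of the concept, not the rendering algorithm proposed in the paper.

```python
import numpy as np

FS = 44100
rng = np.random.default_rng(2)

dry = rng.standard_normal(FS)                                     # 1 s placeholder source
hrir_l = rng.standard_normal(256) * np.exp(-np.arange(256) / 64)  # hypothetical HRIRs
hrir_r = rng.standard_normal(256) * np.exp(-np.arange(256) / 64)

# Point source: convolve the dry signal with left/right HRIRs.
src_l = np.convolve(dry, hrir_l)[:FS]
src_r = np.convolve(dry, hrir_r)[:FS]

# Ambient bed: decorrelated noise in each ear (e.g., ventilation-like background).
amb_l = rng.standard_normal(FS)
amb_r = rng.standard_normal(FS)

def mix(src, amb, ambient_db=-18.0):
    """Add the ambient bed at a given level relative to the point source."""
    gain = 10 ** (ambient_db / 20) * np.std(src) / np.std(amb)
    return src + gain * amb

out = np.stack([mix(src_l, amb_l), mix(src_r, amb_r)], axis=1)  # (samples, 2)
```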


PLOS ONE | 2012

Effect of Flanking Sounds on the Auditory Continuity Illusion

Maori Kobayashi; Makio Kashino

Background: The auditory continuity illusion, or the perceptual restoration of a target sound briefly interrupted by an extraneous sound, has been shown to depend on masking. However, little is known about factors other than masking. Methodology/Principal Findings: We examined whether a sequence of flanking transient sounds affects the apparent continuity of a target tone alternating with a bandpass noise at regular intervals. The flanking sounds significantly raised the limit for perceiving apparent continuity, measured as the maximum target level at a fixed noise level, irrespective of the frequency separation between the target and flanking sounds: that is, the flanking sounds enhanced the continuity illusion. This effect depended on the temporal relationship between the flanking sounds and the noise bursts. Conclusions/Significance: The spectrotemporal characteristics of the enhancement effect suggest that a mechanism compensating for exogenous attentional distraction may contribute to the continuity illusion.
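A minimal NumPy sketch of the kind of stimulus described above: a target tone alternating with bandpass noise bursts at regular intervals, with brief flanking transients added around the noise bursts. The levels, frequencies, and timing are assumptions, not the paper's values.

```python
import numpy as np

FS = 44100

def tone(freq, dur):
    t = np.arange(int(dur * FS)) / FS
    return np.sin(2 * np.pi * freq * t)

def bandpass_noise(low, high, dur, rng):
    """White noise restricted to [low, high] Hz via FFT masking."""
    n = int(dur * FS)
    x = rng.standard_normal(n)
    spec = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(n, 1 / FS)
    spec[(freqs < low) | (freqs > high)] = 0
    return np.fft.irfft(spec, n)

rng = np.random.default_rng(3)
target = 0.2 * tone(1000, 0.2)                       # target tone segment
noise = 0.5 * bandpass_noise(500, 2000, 0.2, rng)    # interrupting noise burst
click = 0.5 * tone(4000, 0.005)                      # brief flanking transient

cycle = np.concatenate([target, click, noise, click])  # one alternation cycle
stimulus = np.tile(cycle, 4)                           # several regular cycles
```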


I-perception | 2011

Crossmodal Contingent Aftereffect

Wataru Teramoto; Maori Kobayashi; Souta Hidaka; Yoichi Sugita

Sounds containing no motion or positional cues can induce illusory visual motion perception for static visual stimuli. Two identical visual stimuli placed side by side were presented in alternation, producing apparent motion perception, and each stimulus was accompanied by a tone burst of a specific and unique frequency. After prolonged exposure to the apparent motion, the tones acquired a driving effect for motion perception: a visual stimulus blinking at a fixed location was perceived as moving laterally. The effect lasted for at least a few days and was observed only at the retinal position that had previously been exposed to the apparent motion with the tones. Furthermore, the effect was specific to the ear and sound frequency presented during the exposure period. These results indicate that a strong association between visual motion and a sound sequence can be formed within a short period, and that very early stages of sensory processing might be the loci responsible for the current phenomenon.


Journal of the Acoustical Society of America | 2008

Synchrony‐asynchrony discrimination of audio‐visual signals in auditory streaming

Maori Kobayashi; Shuichi Sakamoto; Yôiti Suzuki

Temporal synchrony is a critical condition for integrating information presented in different sensory modalities. In this study, the effect of tonal organization on synchrony‐asynchrony discrimination was examined. The auditory sequences were four repetitions of a triplet pattern comprising a low‐frequency tone (L) and a high‐frequency tone (H). The frequency difference (ΔF) between L and H was either approximately 1/12, 1/6, 1/3, 1/2, or 1 octave, centered at 1 kHz. Each tone was of 33.2 ms duration including rising and falling raised‐cosine ramps of 5 ms. The stimulus onset asynchrony (SOA) of adjacent tones was randomized between 33.2 and 332 ms. The tone sequences were presented diotically via headphones at 65 dB SPL. The visual stimulus was a luminance‐modulated Gaussian blob presented on a CRT monitor. The visual stimulus duration was 8.3 ms. Synchrony‐asynchrony discrimination thresholds of visual‐auditory stimulus onsets were measured using the 2IFC paradigm with a 2‐up 1‐down method under six ΔF ...
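The threshold procedure named above, a 2-up 1-down adaptive staircase in a two-interval forced-choice (2IFC) task, converges on roughly the 70.7%-correct point. Below is a generic sketch of such a staircase driven by a simulated observer; the step size, starting SOA, and simulated psychometric function are placeholders, not the study's values.

```python
import numpy as np

rng = np.random.default_rng(4)

def simulated_observer(soa_ms, threshold_ms=120.0, slope=25.0):
    """Probability of a correct 2IFC response grows with audio-visual SOA."""
    p_correct = 0.5 + 0.5 / (1 + np.exp(-(soa_ms - threshold_ms) / slope))
    return rng.uniform() < p_correct

soa = 300.0            # starting audio-visual onset asynchrony (ms), placeholder
step = 30.0            # step size (ms), placeholder
consecutive_correct = 0
reversals, last_direction = [], None

while len(reversals) < 8:
    if simulated_observer(soa):
        consecutive_correct += 1
        if consecutive_correct == 2:            # 2 correct in a row -> harder (smaller SOA)
            consecutive_correct = 0
            soa = max(soa - step, 0.0)
            if last_direction == "up":
                reversals.append(soa)
            last_direction = "down"
    else:                                       # 1 wrong -> easier (larger SOA)
        consecutive_correct = 0
        soa += step
        if last_direction == "down":
            reversals.append(soa)
        last_direction = "up"

print(f"estimated discrimination threshold ~ {np.mean(reversals[-6:]):.1f} ms SOA")
```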

Collaboration


Dive into Maori Kobayashi's collaborations.

Top Co-Authors

Yoichi Sugita
National Institute of Advanced Industrial Science and Technology

Makio Kashino
Nippon Telegraph and Telephone

Shiro Ise
Tokyo Denki University

Seigo Enomoto
National Institute of Information and Communications Technology