Network


Latest external collaboration at the country level. Dive into details by clicking on the dots.

Hotspot


Dive into the research topics where Sachiko Koyama is active.

Publication


Featured research published by Sachiko Koyama.


Neuroscience | 2007

Neural correlates of auditory feedback control in human

Akira Toyomura; Sachiko Koyama; Tamaki Miyamoto; Atsushi Terao; Takashi Omori; Harumitsu Murohashi; Shinya Kuriki

Auditory feedback plays an important role in natural speech production. We conducted a functional magnetic resonance imaging (fMRI) experiment using a transformed auditory feedback (TAF) method to delineate the neural mechanism for auditory feedback control of pitch. Twelve right-handed subjects were required to vocalize /a/ for 5 s while hearing their own voice through headphones. In the TAF condition, the pitch of the feedback voice was randomly shifted either up or down from the original pitch two or three times in each trial. The subjects were required to hold the pitch of the feedback voice constant by changing the pitch of their original voice. In the non-TAF condition, the pitch of the feedback voice was not modulated and the subjects just vocalized /a/ continuously. The contrast between the TAF and non-TAF conditions revealed significant activations in the supramarginal gyrus, the prefrontal area, the anterior insula, the superior temporal area and the intraparietal sulcus in the right hemisphere, but only in the premotor area in the left hemisphere. This result suggests that auditory feedback control of pitch is mainly supported by a right-hemispheric network.
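The TAF manipulation can be illustrated with a small simulation. The sketch below is a hypothetical Python/NumPy toy, not the authors' stimulus code: it synthesizes a 5 s tone standing in for the /a/ vocalization and applies two or three random pitch shifts, up or down, at random onsets. The base pitch, sampling rate, onset window, and one-semitone step size are all assumptions; the abstract does not specify them.

```python
import numpy as np

rng = np.random.default_rng(0)
fs, dur, f0 = 8000, 5.0, 120.0            # sampling rate, trial length, base pitch (Hz)
t = np.arange(0, dur, 1 / fs)

# Two or three pitch shifts per trial, each up or down by one semitone,
# applied relative to the original pitch at random onsets
n_shifts = int(rng.integers(2, 4))
onsets = np.sort(rng.uniform(0.5, dur - 0.5, n_shifts))
ratio = 2 ** (1 / 12)                      # one-semitone step (assumed size)
shift = np.ones_like(t)
for onset in onsets:
    shift[t >= onset] = ratio ** rng.choice([-1, 1])

# Integrate the instantaneous frequency for a click-free feedback signal
phase = 2 * np.pi * np.cumsum(f0 * shift) / fs
feedback = np.sin(phase)                   # stand-in for the pitch-shifted /a/
```

Integrating the instantaneous frequency, rather than switching between fixed-frequency sinusoids, keeps the phase continuous at each shift onset.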


Neuroscience Research | 2007

Speech comprehension assessed by electroencephalography: A new method using m-sequence modulation

Hiroshige Takeichi; Sachiko Koyama; Ayumu Matani; Andrzej Cichocki

Electroencephalograms (EEGs) were recorded from eight Japanese speakers while they listened to Japanese and Spanish sentences (approximately 51 s each). The sentences were modulated in amplitude by a binary m-sequence and played forward or backward. A circular cross-correlation function was computed between the EEG signals and the m-sequence and averaged across subjects. Independent component analysis of the averaged function revealed a component source response that was obtained only for the comprehensible Japanese sentences and not for the incomprehensible ones. The present study has thus shown that a 1-min-long EEG signal is sufficient for the assessment of speech comprehension.
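The core of the method, amplitude modulation by an m-sequence followed by circular cross-correlation, can be sketched numerically. Below is a hypothetical Python/NumPy toy (not the authors' code): a length-127 m-sequence is generated with a linear feedback shift register, a synthetic "response" is formed by circularly convolving it with an impulse response, and the circular cross-correlation recovers the response latency thanks to the m-sequence's nearly ideal autocorrelation. The tap positions, sequence length, and lag are illustrative choices.

```python
import numpy as np

def m_sequence(taps, length):
    """+/-1 m-sequence from a Fibonacci LFSR with the given feedback taps."""
    state = [1] * max(taps)
    out = []
    for _ in range(length):
        out.append(state[-1])
        fb = 0
        for tap in taps:
            fb ^= state[tap - 1]
        state = [fb] + state[:-1]
    return np.array(out) * 2 - 1

# Degree-7 LFSR with a primitive trinomial gives a length-127 m-sequence
m = m_sequence((7, 6), 127)

# Toy "EEG": the modulation sequence circularly convolved with an impulse
# response peaking at lag 5 (a stand-in for the comprehension component)
h = np.zeros(127)
h[5] = 1.0
eeg = np.real(np.fft.ifft(np.fft.fft(m) * np.fft.fft(h)))

# Circular cross-correlation via FFT; the peak lag recovers the latency
xcorr = np.real(np.fft.ifft(np.fft.fft(eeg) * np.conj(np.fft.fft(m)))) / len(m)
peak_lag = int(np.argmax(xcorr))  # -> 5
```

The recovery works because the circular autocorrelation of a full-period m-sequence is two-valued (N at lag 0, -1 elsewhere), so cross-correlating against the modulation sequence acts as an approximate deconvolution.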


International Conference on Acoustics, Speech, and Signal Processing | 2006

Smoothness Constraint for the Estimation of Current Distribution from EEG/MEG Data

Wakako Nakamura; Sachiko Koyama; Shinya Kuriki; Yujiro Inouye

Separation of EEG (electroencephalography) or MEG (magnetoencephalography) data into activations of small dipoles or a current density distribution is an ill-posed problem in which the number of parameters to estimate is larger than the dimension of the data. Several constraints have been proposed and used to avoid this problem, such as minimization of the L1-norm of the current distribution or minimization of the Laplacian of the distribution. In this paper, we propose another biologically plausible constraint: sparseness of the spatial difference of the current distribution. By numerical experiments, we show that the proposed method estimates the current distribution well, both from data generated by strongly localized current distributions and from data generated by broadly distributed currents.
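The sparse-spatial-difference idea can be illustrated on a toy 1-D inverse problem. The sketch below is a hypothetical Python/NumPy implementation that approximates an L1 penalty on the first spatial difference (a total-variation-style constraint) via iteratively reweighted least squares; the problem sizes, regularization weight, and solver are assumed for illustration and are not the authors' exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)

# Ill-posed forward model: fewer sensors (rows) than sources (columns)
n_src, n_sen = 60, 20
A = rng.standard_normal((n_sen, n_src))

# Piecewise-constant "true" current distribution: one broad active patch
x_true = np.zeros(n_src)
x_true[20:40] = 1.0
y = A @ x_true

# First-difference operator D: penalizing ||D x||_1 favors a distribution
# that changes at only a few locations, and those changes may be large
D = np.diff(np.eye(n_src), axis=0)

def sparse_diff_estimate(A, y, D, lam=1e-2, n_iter=50, eps=1e-6):
    """IRLS sketch: approximately minimize ||y - A x||^2 + lam * ||D x||_1."""
    x = np.linalg.lstsq(A, y, rcond=None)[0]   # minimum-norm starting point
    for _ in range(n_iter):
        w = 1.0 / (np.abs(D @ x) + eps)        # reweight the L1 term
        H = A.T @ A + lam * D.T @ (w[:, None] * D)
        x = np.linalg.solve(H, A.T @ y)
    return x

x_hat = sparse_diff_estimate(A, y, D)
```

In this underdetermined setting (20 measurements, 60 sources), the sparse-difference penalty recovers the broad patch that a plain minimum-norm solution smears out, which mirrors the abstract's claim that the method handles broadly distributed currents.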


IEEE/ICME International Conference on Complex Medical Engineering | 2007

Measuring Sentence Processing by Electroencephalography (EEG): New Technique Using M-Sequence Modulation

Hiroshige Takeichi; Sachiko Koyama; Andrzej Cichocki

Studies using event-related brain potentials (ERPs) have for decades reported electrophysiological responses to semantically and/or grammatically anomalous words embedded in a sentence. Here we have successfully developed a technique with which we can objectively estimate the level of listeners' speech comprehension using continuous speech sounds without linguistic anomalies. We used minute-long speech sounds whose amplitudes were modulated by an m-sequence (a pseudorandom binary sequence). Electroencephalograms (EEGs) were recorded from Japanese speakers and were cross-correlated with the m-sequence. We identified a signal peak that was found only for comprehensible but not for incomprehensible (backward-played Japanese and Spanish) speech stimuli in an independent component cross-correlation function. The correlation time of the signal peak was 400 ms and the peak location on the scalp was Cz-Pz. The present study has thus shown that a minute-long EEG signal is sufficient for the assessment of speech comprehension.


Neuroscience Research | 2007

Neural substrates involved in auditory feedback and feed-forward controls of pitch modulation

Akira Toyomura; Sachiko Koyama; Tamaki Miyamoto; Atsushi Terao; Takashi Omori; Harumitsu Murohashi; Shinya Kuriki

Subjects (n = 17, right-handed) were required to vocalize /a/ for 5 s in an fMRI experiment. In the transformed auditory feedback (TAF) condition, the feedback voice pitch was shifted and the subjects were required to hold the pitch of the feedback voice constant by changing the original pitch. In the pitch modulation condition under masking (Mm), they were required to modulate their pitch as indicated by visual cues, without vocal feedback. In the non-TAF condition (with vocal feedback) and the vocalization condition under masking (M, without vocal feedback), they merely vocalized /a/ continuously. The comparison between the TAF and non-TAF conditions revealed significant activations in the superior temporal gyrus, the insula and the inferior parietal lobule, while that between the Mm and M conditions revealed a significant activation in the basal ganglia. Feedback and feed-forward controls of pitch modulation are thus suggested to be supported by different neural networks. Research funds: JST to S. Koyama.


International Symposium on Circuits and Systems | 2006

Estimation of current density distributions from EEG/MEG data by maximizing sparseness of spatial difference

Wakako Nakamura; Sachiko Koyama; Shinya Kuriki; Yujiro Inouye

Separation of EEG (electroencephalography) or MEG (magnetoencephalography) data into activations of small dipoles or a current density distribution is an ill-posed problem in which the number of parameters to estimate is larger than the dimension of the data. Several constraints have been proposed and used to avoid this problem, such as minimization of the L1-norm of the current distribution or minimization of the Laplacian of the distribution. In this paper, we propose another constraint: the current density distribution changes in only a small number of areas, and those changes can be large. By numerical experiments, we show that the proposed method estimates the current distribution well, both from data generated by strongly localized current distributions and from data generated by broadly distributed currents.


Cerebral Cortex | 2007

Persistent Responsiveness of Long-Latency Auditory Cortical Activities in Response to Repeated Stimuli of Musical Timbre and Vowel Sounds

Shinya Kuriki; Keisuke Ohta; Sachiko Koyama


The Proceedings of the Conference on Information, Intelligence and Precision Equipment : IIP | 2008

1608 A new technique to analyze brain responses for speech comprehension

Hiroshige Takeichi; Sachiko Koyama; Fumiya Takeuchi; Hidehiko Matsumoto; Takashi Morotomi


Cognitive Neuroscience Society Annual Meeting, San Francisco, California, United States, 12-15 April 2008 | 2008

A coherence analysis of EEG response to speech modulated by m-sequence

Hiroshige Takeichi; Sachiko Koyama; Brett L. Foster; David T. J. Liley


International Congress Series | 2007

Speech comprehension assessed by electroencephalography with m-sequence technique

Hiroshige Takeichi; Sachiko Koyama; Ayumu Matani; A. Cichocki

Collaboration


Dive into Sachiko Koyama's collaboration.

Top Co-Authors


Andrzej Cichocki

RIKEN Brain Science Institute
