Tsutomu Murata
National Institute of Information and Communications Technology
Publication
Featured research published by Tsutomu Murata.
NeuroImage | 2009
Norio Fujimaki; Tomoe Hayakawa; Aya Ihara; Qiang Wei; Shinji Munetsuna; Yasushi Terazono; Ayumu Matani; Tsutomu Murata
To determine the time and location of lexico-semantic access, we measured neural activations by magnetoencephalography (MEG) and functional magnetic resonance imaging (fMRI) and estimated the neural sources by fMRI-assisted MEG multidipole analysis. Since the activations for phonological processing and lexico-semantic access were reported to overlap in many brain areas, we compared the activations in lexical and phonological decision tasks. The former task required visual form processing, phonological processing, and lexico-semantic access, while the latter task required only visual form and phonological processing, with similar phonological task demands for both tasks. The activation areas observed in 9 or 10 of the 10 subjects were the superior temporal and inferior parietal areas, anterior temporal area, and inferior frontal area of both hemispheres, and the left ventral occipitotemporal area. The activations showed a significant difference between the 2 tasks in the left anterior temporal area in all 50-ms time windows between 200 and 400 ms from the onset of visual stimulus presentation. Previous studies on semantic dementia and neuroimaging studies on normal subjects have shown that this area plays a key role in accessing semantic knowledge. A difference between the tasks appeared in common across all areas in the time windows of 100-150 ms and 400-450 ms, suggesting early differences in visual form processing and late differences in the decision process, respectively. The present results demonstrate that the activations for lexico-semantic access in the left anterior temporal area start in the time window of 200-250 ms, after early visual form processing.
Journal of the Neurological Sciences | 2010
Aya Ihara; Masayuki Hirata; Norio Fujimaki; Tetsu Goto; Yuka Umekawa; Norihiko Fujita; Yasushi Terazono; Ayumu Matani; Qiang Wei; Toshiki Yoshimine; Shiro Yorifuji; Tsutomu Murata
Situs inversus totalis (SI) is a rare condition in which all visceral organs are arranged as mirror images of the usual pattern. The objective of this study was to determine whether SI individuals have reversed brain asymmetries. We performed a neuroimaging study on 3 SI subjects and 11 control individuals with normally arranged visceral organs. The language-dominant hemisphere was determined by magnetoencephalography. Left-hemispheric dominance was observed in 1 SI subject and all controls, whereas right-hemispheric dominance was observed in the remaining 2 SI subjects. Statistical analysis revealed that language dominance patterns in SI subjects were different from those in the controls, suggesting that the developmental mechanisms underlying visceral organ asymmetries are related to those underlying functional brain asymmetry. Anatomical brain asymmetries were determined by magnetic resonance imaging. SI subjects had the same planum temporale (PT) asymmetry pattern as the controls, but a reversed petalia asymmetry pattern. The inferior frontal gyrus (IFG) asymmetry pattern varied within both groups, indicating a relationship between rightward IFG asymmetry and right-hemispheric language dominance. These results suggest that the developmental mechanisms underlying visceral organ asymmetries are related to those underlying petalia asymmetry but not to those underlying PT and IFG asymmetries, and that brain asymmetries might develop via multiple region-dependent mechanisms.
International Conference on Natural Language Processing | 2007
Shunji Mitsuyoshi; Kouichi Shibasaki; Yasuto Tanaka; Makoto Kato; Tsutomu Murata; Tetsuto Minami; Haruko Yagura; Fuji Ren
To investigate human brain activity associated with emotional speech, we developed a novel voice analysis system connected to a functional magnetic resonance imaging (fMRI) machine. Participants spoke inside the MR magnet while BOLD activity of the brain was measured. The speech was transmitted through a newly developed mask microphone inside the magnet to an external computer and was processed by the emotional voice analysis system. Two participants conversed without hindrance, and their emotional states were analyzed. Using the system, we were able to detect brain activity during speech and simultaneously evaluate the emotional content of the voice.
Neuroreport | 2007
Qiang Wei; Aya Ihara; Tomoe Hayakawa; Tsutomu Murata; Eriko Matsumoto; Norio Fujimaki
To investigate the phonological influences on the lexicosemantic process with a strong orthographic constraint, we used kanji (morphogram) homophone words and measured, using magnetoencephalography, the neural activities during the silent reading of prime-target pairs. The primes were phonologically the same as or different from the targets or pseudocharacters. The neural activities in the left posterior temporal and inferior parietal areas became weaker with phonological repetition. Furthermore, stronger activities for the different condition in the left anterior temporal area and for the same condition in the left inferior frontal cortex, respectively, suggest the roles of these areas of the brain in the semantic processing of words and in the selection of appropriate meanings. We conclude that phonological information affects the lexicosemantic process even with a strong orthographic constraint.
IEEE Transactions on Biomedical Engineering | 2010
Ayumu Matani; Yasushi Naruse; Yasushi Terazono; Taro Iwasaki; Norio Fujimaki; Tsutomu Murata
Stimulus-locked averaging of electroencephalography and/or magnetoencephalography (EEG/MEG) epochs cancels out ongoing spontaneous activities by treating them as noise. However, such spontaneous activities are the object of interest for EEG/MEG researchers who study phase-related phenomena, e.g., long-distance synchronization, phase-reset, and event-related synchronization/desynchronization (ERS/ERD). We propose a complex-weighted averaging method, called phase-compensated averaging, to investigate phase-related phenomena. In this method, any EEG/MEG channel is used as a trigger for averaging by setting the instantaneous phases at the trigger timings to 0, so that cross-channel averages are obtained. First, we evaluated the fundamental characteristics of this method by performing simulations. The results showed that this method could selectively average ongoing spontaneous activity phase-locked in each channel; that is, it evaluates the directional phase-synchronizing relationship between channels. We then analyzed flash evoked potentials. This method clarified the directional phase-synchronizing relationship from the frontal to occipital channels and recovered another piece of information, perhaps regarding the sequence of experiments, which is lost when using only conventional averaging. This method can also be used to reconstruct EEG/MEG time series to visualize long-distance synchronization and phase-reset directly, and on the basis of the potentials, ERS/ERD can be explained as a side effect of phase-reset.
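The core operation is a complex-weighted average: each epoch's analytic signal is rotated so that the instantaneous phase of a chosen trigger channel is zero at the trigger timing, and the rotated epochs are then averaged. The Python sketch below illustrates that idea only; it is not the authors' implementation, and the array layout, the use of the Hilbert transform to obtain instantaneous phase, and the function name are assumptions.

```python
import numpy as np
from scipy.signal import hilbert

def phase_compensated_average(epochs, trigger_channel, trigger_sample):
    """Complex-weighted ("phase-compensated") average across epochs (sketch).

    epochs          : ndarray (n_epochs, n_channels, n_samples) of
                      band-pass-filtered EEG/MEG data (hypothetical layout)
    trigger_channel : index of the channel used as the phase trigger
    trigger_sample  : sample index of the trigger within each epoch
    """
    analytic = hilbert(epochs, axis=-1)  # analytic signal per channel
    # Instantaneous phase of the trigger channel at the trigger timing
    phi = np.angle(analytic[:, trigger_channel, trigger_sample])
    # Rotate each epoch so that this phase becomes 0, then average:
    # activity phase-locked to the trigger channel survives, the rest cancels.
    weights = np.exp(-1j * phi)[:, None, None]
    return (weights * analytic).mean(axis=0)
```

Repeating this with each channel in turn as the trigger yields the cross-channel averages from which the directional phase-synchronizing relationships can be read off.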
PLOS ONE | 2014
Tsutomu Murata; Takashi Hamada; Tetsuya Shimokawa; Manabu Tanifuji; Toshio Yanagida
When a degraded two-tone image such as a “Mooney” image is seen for the first time, it is unrecognizable in the initial seconds. The recognition of such an image is facilitated by giving prior information on the object, which is known as top-down facilitation and has been intensively studied. Even in the absence of any prior information, however, we experience sudden perception of the emergence of a salient object after continued observation of the image, a process that remains poorly understood. This emergent recognition is characterized by a comparatively long reaction time ranging from seconds to tens of seconds. In this study, to explore this time-consuming process of emergent recognition, we investigated the properties of the reaction times for recognition of degraded images of various objects. The results show that the time-consuming component of the reaction times follows a specific exponential function related to the level of image degradation and the subjects' capability. Because an exponential time is generally required for multiple stochastic events to co-occur, we constructed a descriptive mathematical model inspired by the neurophysiological idea of combination coding of visual objects. Our model assumed that the coincidence of stochastic events complements the information loss of a degraded image, leading to recognition of its hidden object, and it could successfully explain the experimental results. Furthermore, to see whether the present results are specific to the task of emergent recognition, we also conducted a comparison experiment with a task of perceptual decision making on degraded images, which is well known to be modeled by the stochastic diffusion process. The results indicate that the exponential dependence on the level of image degradation is specific to emergent recognition. The present study suggests that emergent recognition is caused by an underlying stochastic process based on the coincidence of multiple stochastic events.
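The key premise behind the descriptive model is that the waiting time for several independent stochastic events to co-occur grows roughly exponentially with the number of events that must coincide. The toy simulation below, in Python, illustrates that premise only; it is not the paper's model, and the Poisson rates, coincidence window, and function name are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def time_to_coincidence(n_streams, rate=1.0, window=0.2, t_max=1e7):
    """Waiting time until n independent Poisson event streams have all
    fired within `window` seconds of each other (illustrative toy model)."""
    t = 0.0
    last = np.full(n_streams, -np.inf)                   # last event time per stream
    while t < t_max:
        t += rng.exponential(1.0 / (rate * n_streams))   # next event of the merged stream
        last[rng.integers(n_streams)] = t                # assign it to a random stream
        if t - last.min() <= window:                     # every stream fired within the window
            return t
    return np.inf

# Mean waiting time grows roughly exponentially with the number of
# events that must coincide.
for n in range(1, 6):
    print(n, np.mean([time_to_coincidence(n) for _ in range(100)]))
```

Printing the mean waiting times for increasing n shows the roughly exponential growth that motivates the exponential dependence of reaction times on the level of image degradation.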
Neuroscience Research | 2012
Aya Ihara; Qiang Wei; Ayumu Matani; Norio Fujimaki; Haruko Yagura; Takeshi Nogai; Hiroaki Umehara; Tsutomu Murata
In communication, language can be interpreted differently depending upon the emotional context. To clarify the effect of emotional context on language processing, we performed experiments using a cross-modal priming paradigm with an auditorily presented prime and a visually presented target. The primes were the names of people that were spoken with a happy, sad, or neutral intonation; the targets were interrogative one-word sentences with emotionally neutral content. Using magnetoencephalography, we measured neural activities during silent reading of the targets presented in a happy, sad, or neutral context. We identified two conditional differences: the happy and sad conditions produced less activity than the neutral condition in the right posterior inferior and middle frontal cortices in the latency window from 300 to 400 ms; the happy and neutral conditions produced greater activity than the sad condition in the left posterior inferior frontal cortex in the latency window from 400 to 500 ms. These results suggest that the use of emotional context stored in the right frontal cortex starts at ∼300 ms, that integration of linguistic information with emotional context starts at ∼400 ms in the left frontal cortex, and that language comprehension dependent on emotional context is achieved by ∼500 ms.
2011 Defense Science Research Conference and Expo (DSR) | 2011
Shunji Mitsuyoshi; Yasuto Tanaka; Fumiaki Monnma; Tetsuto Minami; Makoto Kato; Tsutomu Murata
To evaluate the neural components of emotional utterance, functional magnetic resonance imaging (fMRI) was performed during free conversation. The timing and types of emotional elements, such as anger, sorrow, joy, excitement, and calmness, were identified by the Voice Emotion Analysis (VEA) system. By conducting a modified event-related analysis, we found increased blood oxygen level dependent (BOLD) activity during free conversation in the lateral frontal cortex (BA47). Furthermore, the dorsolateral frontal cortex (BA45) and the limbic cortex were activated when the VEA system indicated excitement and anger, respectively. Since these areas are consistent with the neural circuits subserving emotional speech, the results confirm the neurophysiological correlates of emotion extracted by specific patterns of phonetic parameters during speech production.
Inverse Problems | 2010
Yasushi Terazono; Norio Fujimaki; Tsutomu Murata; Ayumu Matani
Exact point source reconstruction for underdetermined linear inverse problems with a block-wise structure was studied. In a block-wise problem, elements of a source vector are partitioned into blocks. Accordingly, a leadfield matrix, which represents the forward observation process, is also partitioned into blocks. A point source is a source having only one nonzero block. An example of such a problem is current distribution estimation in electroencephalography and magnetoencephalography, where a source vector represents a vector field and a point source represents a single current dipole. In this study, the block-wise norm, a block-wise extension of the lp-norm, was defined as the family of cost functions of the inverse method. The main result is that a set of three conditions was found to be necessary and sufficient for block-wise norm minimization to ensure exact point source reconstruction for any leadfield matrix that admits such reconstruction. The block-wise norm that satisfies the conditions is the sum of the costs of the observations of all source blocks, or in other words, the block-wisely extended leadfield-weighted l1-norm. Additional results are that minimization of such a norm always provides block-wisely sparse solutions and that its solutions form cones in source space.
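One plausible reading of the winning cost function, stated as a notational sketch under assumed definitions rather than the paper's exact formulation: partition the source vector into blocks x = (x_1, ..., x_K), partition the leadfield column-wise as L = [L_1, ..., L_K] so that the observation is y = L_1 x_1 + ... + L_K x_K, and minimize the sum of the observation costs of the blocks subject to the data.

```latex
% Notational sketch; the block partition, norm choice, and symbols are
% assumptions, not taken verbatim from the paper.
\min_{x_1,\dots,x_K} \; \sum_{k=1}^{K} \bigl\lVert L_k x_k \bigr\rVert_2
\quad \text{subject to} \quad \sum_{k=1}^{K} L_k x_k = y .
```

Under this reading, a point source corresponds to a solution with exactly one nonzero block x_k.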
Neuroscience Research | 2009
Norio Fujimaki; Shinji Munetsuna; Toyofumi Sasaki; Tomoe Hayakawa; Aya Ihara; Qiang Wei; Yasushi Terazono; Tsutomu Murata
Functional magnetic resonance imaging was used to measure neural activations in subjects instructed to silently read novels at ordinary and rapid speeds. Among the 19 subjects, 8 were experts in a rapid reading technique. Subjects pressed a button to turn pages during reading, and the interval between page turns was recorded to evaluate reading speed. For each subject, we evaluated activations in 14 areas at the 2 instructed reading speeds. Neural activations decreased with increasing reading speed in the left middle and posterior superior temporal area, left inferior frontal area, left precentral area, and the anterior temporal areas of both hemispheres, which have been reported to be active for linguistic processes, while neural activation increased with increasing reading speed in the right intraparietal sulcus, which is considered to reflect visuo-spatial processes. Despite the considerable differences in reading speed, correlation analysis showed no significant difference in the dependence of activation on reading speed with respect to subject groups or instructed reading speeds. The reduction in activation with increasing speed in language-related areas was opposite to previous reports for low reading speeds. The present results suggest that subjects reduced linguistic processing as reading speed increased from ordinary to rapid.
Collaboration
Dive into Tsutomu Murata's collaborations.
National Institute of Information and Communications Technology