
Publication


Featured research published by Takako Mitsudo.


Neuroscience Research | 2008

Early ERP components differentially extract facial features: evidence for spatial frequency-and-contrast detectors.

Taisuke Nakashima; Kunihiko Kaneko; Yoshinobu Goto; Tomotaka Abe; Takako Mitsudo; Katsuya Ogata; Akifumi Makinouchi; Shozo Tobimatsu

It is generally accepted that the N170 component of the event-related potential (ERP) reflects the structural encoding of faces and is specialized for face processing. Recent neuroimaging and ERP studies have demonstrated that spatial frequency is a crucial factor in face recognition. To clarify which early ERP components reflect coarse (low spatial frequency, LSF) versus fine (high spatial frequency, HSF) processing of faces, we recorded ERPs induced by manipulated face stimuli. By spatially filtering the original grayscale faces (broadband spatial frequency), we created LSF and HSF face stimuli. Next, we created physically equiluminant (PEL) face stimuli to eliminate the effects of lower-order information such as luminance and contrast. The P1 amplitude at the occipital region was augmented by LSF faces, while the N170 amplitude increased for HSF faces. The occipital P1 amplitude for PEL faces was relatively unaffected compared with that for PEL houses. In addition, the occipital N2 for PEL faces was spatiotemporally separable from the N170 in a time window between P1 and N170. These results indicate that P1 reflects coarse processing of faces, and that the robustness of the face response further supports face-specific processing in this early component. Moreover, the N2 reflects early contrast processing of faces, whereas the N170 analyzes fine facial features. Our findings suggest the presence of spatial frequency and contrast detectors for face processing.


PLOS ONE | 2013

EEG investigations of duration discrimination: the intermodal effect is induced by an attentional bias.

Emilie Gontier; Emi Hasuo; Takako Mitsudo; Simon Grondin

Previous studies indicated that empty time intervals are discriminated better in the auditory than in the visual modality, and better when delimited by signals from the same sensory modality (intramodal intervals) than from different modalities (intermodal intervals). The present electrophysiological study was conducted to determine the mechanisms that modulate performance in inter- and intramodal conditions. Participants were asked to categorise empty intervals marked by auditory (A) and/or visual (V) signals (intramodal intervals: AA, VV; intermodal intervals: AV, VA) as short or long. Behavioural data revealed that performance was better for AA intervals than for the three other interval types, and worse for inter- than for intramodal intervals. Electrophysiological results indicated that the CNV amplitude recorded at fronto-central electrodes increased significantly until the end of the presentation of the long intervals in the AA condition, whereas no significant change in the time course of this component was observed for the other three presentation conditions. They also indicated that the N1 and P2 amplitudes recorded after the signals that delimited the beginning of the intervals were higher for inter- (AV/VA) than for intramodal intervals (AA/VV). The time course of the CNV suggests that the high performance observed with AA intervals is related to the effectiveness of the neural mechanisms underlying the processing of the ongoing interval. The greater amplitude of the N1 and P2 components during intermodal intervals suggests that the weak performance observed in these conditions is caused by an attentional bias induced by the cognitive load and the need to switch between modalities.


Clinical Neurophysiology | 2011

Neural responses in the occipital cortex to unrecognizable faces

Takako Mitsudo; Yoko Kamio; Yoshinobu Goto; Taisuke Nakashima; Shozo Tobimatsu

Objective: Event-related potentials (ERPs) were recorded to examine neural responses to face stimuli in a masking paradigm. Methods: Images of faces (neutral or fearful) and objects were presented in subthreshold, threshold, and suprathreshold conditions (exposure durations of approximately 20, 30 and 300 ms, respectively), followed by a 1000-ms pattern mask. We recorded ERP responses at Oz, T5, T6, Cz and Pz. The effects of physical stimulus features were examined with inverted stimuli. Results: The occipital N1 amplitude (approximately 160 ms) was significantly smaller in response to faces than to objects presented at a subthreshold duration. In contrast, the occipitotemporal N170 amplitude was significantly greater in the threshold and suprathreshold conditions than in the subthreshold condition for faces, but not for objects. The P1 amplitude (approximately 120 ms) elicited by upright faces in the subthreshold condition was significantly larger than for inverted faces. Conclusions: The P1 and N1 components at Oz were sensitive to subthreshold faces, which suggests the presence of fast face-specific processes prior to face encoding. The N170 reflects the robustness of the face-selective response in the occipitotemporal area. Significance: Even when presented for a subthreshold duration, faces were processed differently from objects at an early stage of visual processing.


Frontiers in Integrative Neuroscience | 2012

An electroencephalographic investigation of the filled-duration illusion.

Takako Mitsudo; Caroline Gagnon; Hiroshige Takeichi; Simon Grondin

The study investigated how brain activity changed while participants were engaged in a temporal production task involving the “filled-duration illusion.” Twelve right-handed participants were asked to memorize and reproduce the duration of time intervals (600 or 800 ms) bounded by two flashes. Random trials contained auditory stimuli in the form of three 20-ms sounds between the flashes. In one session, the participants were asked to ignore the presence of the sounds, and in the other, they were instructed to pay attention to the sounds. The behavioral results showed that duration reproduction was clearly affected by the presence of the sounds and by the duration of the time intervals. The filled-duration illusion occurred when sounds were present: the participants overestimated the intervals in the 600-ms condition with sounds. On the other hand, the participants underestimated the intervals in the 800-ms condition without sounds. During the presentation of the interval to be encoded, the contingent negative variation (CNV) appeared around the prefrontal scalp site, and the P300 appeared around the parieto-central scalp site. The CNV grew larger when the intervals contained sounds, whereas the P300 grew larger when the intervals were 800 ms and did not contain sounds. During the reproduction of the presented interval, the Bereitschaftspotential (BP) appeared over the fronto-central scalp site from 1000 ms before the participants’ response. The BP may reflect the decision-making process associated with duration reproduction. The occurrence of these three event-related potentials (ERPs), the P300, CNV, and BP, suggests that the fronto-parietal area, together with the supplementary motor area (SMA), is associated with timing and time perception, and that the magnitude of these potentials is modulated by the “filled-duration illusion.”


Frontiers in Psychology | 2014

Perceptual inequality between two neighboring time intervals defined by sound markers: correspondence between neurophysiological and psychological data

Takako Mitsudo; Yoshitaka Nakajima; Hiroshige Takeichi; Shozo Tobimatsu

Brain activity related to time estimation processes in humans was analyzed using a perceptual phenomenon called auditory temporal assimilation. In a typical stimulus condition, two neighboring time intervals (T1 and T2 in this order) are perceived as equal even when the physical lengths of these time intervals are considerably different. Our previous event-related potential (ERP) study demonstrated that a slow negative component (SNCt) appears in the right-frontal brain area (around the F8 electrode) after T2, which is associated with judgment of the equality/inequality of T1 and T2. In the present study, we conducted two ERP experiments to further confirm the robustness of the SNCt. The stimulus patterns consisted of two neighboring time intervals marked by three successive tone bursts. Thirteen participants only listened to the patterns in the first session, and judged the equality/inequality of T1 and T2 in the next session. Behavioral data showed typical temporal assimilation. The ERP data revealed that three components (N1; contingent negative variation, CNV; and SNCt) emerged related to the temporal judgment. The N1 appeared in the central area, and its peak latencies corresponded to the physical timing of each marker onset. The CNV component appeared in the frontal area during T2 presentation, and its amplitude increased as a function of T1. The SNCt appeared in the right-frontal area after the presentation of T1 and T2, and its magnitude was larger for the temporal patterns causing perceptual inequality. The SNCt was also correlated with the perceptual equality/inequality of the same stimulus pattern, and continued up to about 400 ms after the end of T2. These results suggest that the SNCt can be a signature of equality/inequality judgment, which derives from the comparison of the two neighboring time intervals.


Frontiers in Human Neuroscience | 2014

Cortical activity associated with the detection of temporal gaps in tones: a magnetoencephalography study.

Takako Mitsudo; Naruhito Hironaga; Shuji Mori

We used magnetoencephalography (MEG) in two experiments to investigate spatio-temporal profiles of brain responses to gaps in tones. Stimuli consisted of leading and trailing markers with gaps between the two markers of 0, 30, or 80 ms. Leading and trailing markers were 300 ms pure tones at 800 or 3200 Hz. Two conditions were examined: the within-frequency (WF) condition, in which the leading and trailing markers had identical frequencies, and the between-frequency (BF) condition, in which they had different frequencies. Using minimum norm estimates (MNE), we localized the source activations at the time of the peak response to the trailing markers. Results showed that MEG signals in response to 800 and 3200 Hz tones were localized in different regions within the auditory cortex, indicating that the frequency pathways activated by the two markers were spatially represented. The time course of regional activity (RA) was extracted from each localized region for each condition. In Experiment 1, which used a continuous tone for the WF 0-ms stimulus, the N1m amplitude for the trailing marker in the WF condition differed depending on gap duration but not tonal frequency. In contrast, N1m amplitude in the BF condition differed depending on the frequency of the trailing marker. In Experiment 2, in which the 0-ms gap stimulus in the WF condition was made from two markers and included an amplitude reduction in the middle, the amplitude in the WF and BF conditions changed depending on frequency, but not gap duration. The difference in temporal characteristics between the WF and BF conditions could be observed in the RA.


Neural Computing and Applications | 2011

A neural decoding approach to auditory temporal assimilation

Hiroshige Takeichi; Takako Mitsudo; Yoshitaka Nakajima; Gerard B. Remijn; Yoshinobu Goto; Shozo Tobimatsu

By constructing Gaussian Naïve Bayes classifiers, we re-analyzed data from an earlier event-related potential (ERP) study of an illusion in time perception known as auditory temporal assimilation. In auditory temporal assimilation, two neighboring, physically unequal time intervals marked by three successive tone bursts are illusorily perceived as equal if the two time intervals satisfy a certain relationship. The classifiers could discriminate whether or not the subject was engaged in the task, which was judgment of the subjective equality of the two intervals, at an accuracy of >79%; from principal component scores of individual average ERP waveforms, we were able to predict subjective judgments of each stimulus at an accuracy of >70%. Chernoff information, unlike accuracy or Kullback–Leibler (KL) distance, suggested brain activation associated with auditory temporal assimilation at an early, pre-decision stage. This may provide a simple and useful neural decoding scheme for analyzing the processing of temporal patterns in the brain.
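The core of the decoding approach described in this abstract, a Gaussian Naïve Bayes classifier over per-trial feature scores, can be sketched in a few lines. This is a minimal, generic illustration, not the study's analysis code: the two-dimensional feature vectors below are synthetic stand-ins for ERP principal-component scores, and the class labels (0 = "unequal" judgment, 1 = "equal" judgment) are hypothetical.

```python
import math
import random

def fit_gnb(X, y):
    """Estimate per-class log-priors and per-feature means/variances."""
    model = {}
    for label in set(y):
        rows = [x for x, t in zip(X, y) if t == label]
        n = len(rows)
        means = [sum(col) / n for col in zip(*rows)]
        variances = [sum((v - m) ** 2 for v in col) / n + 1e-9
                     for col, m in zip(zip(*rows), means)]
        model[label] = (math.log(n / len(y)), means, variances)
    return model

def predict_gnb(model, x):
    """Classify one feature vector by maximum class log-posterior."""
    def log_post(params):
        log_prior, means, variances = params
        return log_prior + sum(
            -0.5 * math.log(2 * math.pi * var) - (v - m) ** 2 / (2 * var)
            for v, m, var in zip(x, means, variances))
    return max(model, key=lambda label: log_post(model[label]))

# Synthetic stand-in for per-trial principal-component scores:
# class 0 ("unequal" judgment) vs. class 1 ("equal" judgment).
random.seed(0)
X = ([[random.gauss(0, 1), random.gauss(0, 1)] for _ in range(50)] +
     [[random.gauss(2, 1), random.gauss(2, 1)] for _ in range(50)])
y = [0] * 50 + [1] * 50

model = fit_gnb(X, y)
accuracy = sum(predict_gnb(model, x) == t for x, t in zip(X, y)) / len(y)
print(f"training accuracy: {accuracy:.2f}")
```

The "naïve" assumption is that features are conditionally independent given the class, so each class is modeled by a diagonal Gaussian; this keeps the parameter count low, which matters when the number of trials per subject is small, as in ERP decoding.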


Scientific Reports | 2017

Spatiotemporal brain dynamics of auditory temporal assimilation

Naruhito Hironaga; Takako Mitsudo; Mariko Hayamizu; Yoshitaka Nakajima; Hiroshige Takeichi; Shozo Tobimatsu

Time is a fundamental dimension, but millisecond-level judgments sometimes lead to perceptual illusions. We previously introduced a “time-shrinking illusion” using a psychological paradigm that induces auditory temporal assimilation (ATA). In ATA, the duration of two successive intervals (T1 and T2), marked by three auditory stimuli, can be perceived as equal when they are not. Here, we investigate the spatiotemporal profile of human temporal judgments using magnetoencephalography (MEG). Behavioural results showed typical ATA: participants judged T1 and T2 as equal when T2 − T1 ≤ +80 ms. MEG source-localisation analysis demonstrated that regional activity differences between judgment and no-judgment conditions emerged in the temporoparietal junction (TPJ) during T2. This observation in the TPJ may indicate its involvement in the encoding process when T1 ≠ T2. Activation in the inferior frontal gyrus (IFG) was enhanced irrespective of the stimulus patterns when participants engaged in temporal judgment. Furthermore, just after the final marker, activity in the IFG was enhanced specifically for the time-shrinking pattern. This indicates that activity in the IFG is also related to the illusory perception of time-interval equality. Based on these observations, we propose neural signatures for judgments of temporal equality in the human brain.


Ear and Hearing | 2015

Between-Frequency and Between-Ear Gap Detections and Their Relation to Perception of Stop Consonants.

Shuji Mori; Kazuki Oyama; Yousuke Kikuchi; Takako Mitsudo; Nobuyuki Hirose

Objectives: The objective of this study was to examine the hypothesis that between-channel gap detection, which includes between-frequency and between-ear gap detection, and perception of stop consonants, which is mediated by the length of voice-onset time (VOT), share common mechanisms, namely relative-timing operation in monitoring separate perceptual channels. Design: The authors measured gap detection thresholds and identification functions of /ba/ and /pa/ along VOT in 49 native young adult Japanese listeners. There were three gap detection tasks. In the between-frequency task, the leading and trailing markers differed in terms of center frequency (Fc). The leading marker was a broadband noise of 10 to 20,000 Hz. The trailing marker was a 0.5-octave band-passed noise of 1000-, 2000-, 4000-, or 8000-Hz Fc. In the between-ear task, the two markers were spectrally identical but presented to separate ears. In the within-frequency task, the two spectrally identical markers were presented to the same ear. The /ba/-/pa/ identification functions were obtained in a task in which the listeners were presented synthesized speech stimuli of varying VOTs from 10 to 46 msec and asked to identify them as /ba/ or /pa/. Results: The between-ear gap thresholds were significantly positively correlated with the between-frequency gap thresholds (except those obtained with the trailing marker of 4000-Hz Fc). The between-ear gap thresholds were not significantly correlated with the within-frequency gap thresholds, which were significantly correlated with all the between-frequency gap thresholds. The VOT boundaries and slopes of /ba/-/pa/ identification functions were not significantly correlated with any of these gap thresholds. Conclusions: There was a close relation between the between-ear and between-frequency gap detection, supporting the view that these two types of gap detection share common mechanisms of between-channel gap detection. However, there was no evidence for a relation between the perception of stop consonants and the between-frequency/ear gap detection in native Japanese speakers.


International Conference on Neural Information Processing | 2009

Auditory Temporal Assimilation: A Discriminant Analysis of Electrophysiological Evidence

Hiroshige Takeichi; Takako Mitsudo; Yoshitaka Nakajima; Gerard B. Remijn; Yoshinobu Goto; Shozo Tobimatsu

A portion of the data from an event-related potential (ERP) experiment [1] on auditory temporal assimilation [2, 3] was reanalyzed by constructing Gaussian Naive Bayes classifiers [4]. In auditory temporal assimilation, two neighboring, physically unequal time intervals marked by three successive tone bursts are illusorily perceived to have the same duration if the two time intervals satisfy a certain relationship. The classifiers could discriminate the subject's task, which was judgment of the equivalence between the two intervals, at an accuracy of 86–96%, as well as the subjects' judgments of the physically equivalent stimulus at an accuracy of 82–86%, from individual ERP average waveforms. Chernoff information [5] provided more consistent interpretations than classification errors with respect to selecting the component most strongly associated with the perceptual judgment. This may provide a simple but reasonably robust neural decoding scheme.
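The Chernoff information used here to rank ERP components measures the best achievable error exponent when distinguishing two class-conditional distributions. As a generic illustration, unrelated to the study's actual data, it can be evaluated numerically for two one-dimensional Gaussians; for equal variances the closed form is (μ₁ − μ₂)² / (8σ²), which gives a sanity check.

```python
import math

def chernoff_information(mu1, s1, mu2, s2, n_lambda=99, n_x=4001, span=12.0):
    """Chernoff information C = -min_a log ∫ p^a q^(1-a) dx between two
    1-D Gaussians, estimated by a Riemann sum over x and a grid over the
    mixing exponent a in (0, 1)."""
    lo = min(mu1 - span * s1, mu2 - span * s2)
    hi = max(mu1 + span * s1, mu2 + span * s2)
    dx = (hi - lo) / (n_x - 1)
    xs = [lo + i * dx for i in range(n_x)]

    def logpdf(x, mu, s):
        return -0.5 * math.log(2 * math.pi * s * s) - (x - mu) ** 2 / (2 * s * s)

    best = float("inf")
    for k in range(1, n_lambda + 1):
        a = k / (n_lambda + 1)  # mixing exponent
        total = sum(math.exp(a * logpdf(x, mu1, s1) + (1 - a) * logpdf(x, mu2, s2))
                    for x in xs) * dx
        best = min(best, math.log(total))
    return -best

# Equal-variance sanity check: closed form is (mu1 - mu2)^2 / (8 * sigma^2) = 0.5.
c = chernoff_information(0.0, 1.0, 2.0, 1.0)
print(round(c, 3))
```

Unlike raw classification accuracy, this quantity is symmetric in the two distributions (the optimizing exponent balances the two error types), which is one reason it can give more stable component rankings than error counts on small samples.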

Collaboration


Dive into Takako Mitsudo's collaborations.

Top Co-Authors

Yoshinobu Goto

International University of Health and Welfare

Hiroshige Takeichi

RIKEN Brain Science Institute
