Yuko Yotsumoto
University of Tokyo
Publications
Featured research published by Yuko Yotsumoto.
Neuron | 2008
Yuko Yotsumoto; Takeo Watanabe; Yuka Sasaki
Perceptual learning is regarded as a manifestation of experience-dependent plasticity in the sensory systems, yet the underlying neural mechanisms remain unclear. We measured the dynamics of performance on a visual task and brain activation in the human primary visual cortex (V1) across the time course of perceptual learning. Within the first few weeks of training, brain activation in a V1 subregion corresponding to the trained visual field quadrant and task performance both increased. However, while performance levels then saturated and were maintained at a constant level, brain activation in the corresponding areas decreased to the level observed before training. These findings indicate that there are distinct temporal phases in the time course of perceptual learning, related to differential dynamics of BOLD activity in visual cortex.
Current Biology | 2009
Yuko Yotsumoto; Yuka Sasaki; Patrick Chan; Christos Vasios; Giorgio Bonmassar; Nozomi Ito; José E. Náñez; Shinsuke Shimojo; Takeo Watanabe
Visual perceptual learning is defined as performance enhancement on a sensory task and is distinguished from other types of learning and memory in that it is highly specific for the location of the trained stimulus. This location specificity has been shown to be paralleled by enhancement of the functional magnetic resonance imaging (fMRI) signal in the trained region of V1 after visual training. Although the role of sleep in strengthening visual perceptual learning has recently attracted much attention, its underlying neural mechanism has yet to be clarified. Here, for the first time, fMRI measurement of human V1 activation was conducted concurrently with polysomnography during sleep, with and without preceding training for visual perceptual learning. A predetermined region-of-interest analysis showed that activation enhancement during non-rapid-eye-movement sleep after training was specific to the trained region of V1. Furthermore, the improvement in task performance measured after the post-training sleep session was significantly correlated with the amount of trained-region-specific fMRI activation in V1 during sleep. These results suggest that, as far as V1 is concerned, only the trained region is involved in improving task performance after sleep.
Memory & Cognition | 2007
Yuko Yotsumoto; Michael J. Kahana; Hugh R. Wilson; Robert Sekuler
A series of experiments examined short-term recognition memory for trios of briefly presented, synthetic human faces derived from three real human faces. The stimuli were a graded series of faces, which differed by varying known amounts from the face of the average female. Faces based on each of the three real faces were transformed so as to lie along orthogonal axes in a 3-D face space. Experiment 1 showed that the synthetic faces’ perceptual similarity structure strongly influenced recognition memory. Results were fit by a noisy exemplar model (NEMO) of perceptual recognition memory (Kahana & Sekuler, 2002). The fits revealed that recognition memory was influenced both by the similarity of the probe to the series items and by the similarities among the series items themselves. Nonmetric multidimensional scaling (MDS) showed that the faces’ perceptual representations largely preserved the 3-D space in which the face stimuli were arrayed. NEMO gave a better account of the results when similarity was defined as perceptual MDS similarity, rather than as the physical proximity of one face to another. Experiment 2 confirmed the importance of within-list homogeneity directly, without mediation of a model. We discuss the affinities and differences between visual memory for synthetic faces and memory for simpler stimuli.
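The noisy exemplar model (NEMO; Kahana & Sekuler, 2002) cited here treats recognition as a judgment based on the summed similarity of the probe to noisy memory representations of the study items, with an additional contribution from the similarity among the study items themselves. As a rough illustration only, the sketch below implements a generic summed-similarity decision rule in that spirit; the exponential similarity function, the logistic decision stage, and all parameter names (tau, alpha, criterion, beta, noise_sd) are assumptions for the example, not the paper's fitted model.

```python
import numpy as np

def nemo_yes_probability(probe, study_items, tau=1.0, alpha=0.5,
                         criterion=1.5, beta=3.0, noise_sd=0.1, seed=0):
    """Toy summed-similarity recognition rule in the spirit of NEMO.

    probe and study_items are points in a face space (e.g. MDS coordinates).
    Similarity decays exponentially with distance; `alpha` weights the
    similarity among the study items themselves (within-list homogeneity),
    which the paper reports also shapes recognition.
    """
    rng = np.random.default_rng(seed)
    items = np.asarray(study_items, dtype=float)
    items = items + rng.normal(0.0, noise_sd, items.shape)   # noisy memory traces
    probe = np.asarray(probe, dtype=float)

    # summed similarity of the probe to the (noisy) study-item traces
    d_probe = np.linalg.norm(items - probe, axis=1)
    summed_sim = np.exp(-tau * d_probe).sum()

    # pairwise similarity among the study items (within-list homogeneity)
    inter = sum(np.exp(-tau * np.linalg.norm(items[i] - items[j]))
                for i in range(len(items)) for j in range(i + 1, len(items)))

    evidence = summed_sim + alpha * inter
    return 1.0 / (1.0 + np.exp(-beta * (evidence - criterion)))  # P(respond "old")

# Three study faces and a probe, given as 3-D face-space coordinates
study = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
print(nemo_yes_probability([1.0, 0.1, 0.0], study))   # probe near a study face
print(nemo_yes_probability([2.0, 2.0, 2.0], study))   # distant lure
```

With these illustrative parameters, a probe close to one of the study faces yields a much higher "old" probability than a distant lure, capturing the basic summed-similarity intuition.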
Vision Research | 2009
Yuko Yotsumoto; Li-Hung Chang; Takeo Watanabe; Yuka Sasaki
Perceptual learning (PL) often shows specificity to a trained feature. We investigated whether feature specificity is related to disruption of PL using the texture discrimination task (TDT), which shows learning specificity to the background elements but not to the target elements. Learning was disrupted when the orientation of the background elements changed between two successive training sessions (interference), but not when it changed in a random order from trial to trial (roving). The presentation of the target elements seemed to have the reverse effect: learning occurred with two-part training but not with roving. These results suggest that interference in the TDT is feature specific, whereas disruption by roving is not.
The Journal of Neuroscience | 2013
Masako Tamaki; Tsung-Ren Huang; Yuko Yotsumoto; Matti Hämäläinen; Fa-Hsuan Lin; José E. Náñez; Takeo Watanabe; Yuka Sasaki
Sleep is beneficial for various types of learning and memory, including a finger-tapping motor-sequence task. However, methodological issues have hindered clarification of the cortical regions crucial for sleep-dependent consolidation of motor-sequence learning. Here, to investigate the core cortical region for sleep-dependent consolidation of finger-tapping motor-sequence learning, we measured spontaneous cortical oscillations by magnetoencephalography together with polysomnography while human subjects were asleep, and source-localized the origins of the oscillations using individual anatomical brain information from MRI. First, we confirmed that performance of the task at a retest session after sleep significantly increased compared with performance at the training session before sleep. Second, spontaneous δ and fast-σ oscillations significantly increased in the supplementary motor area (SMA) during post-training sleep compared with pretraining sleep, showing a significant and high correlation with the performance increase. Third, the increase in spontaneous oscillations in the SMA that correlated with performance improvement was specific to slow-wave sleep. We also found that correlations of δ oscillations between the SMA and prefrontal regions, and between the SMA and parietal regions, tended to decrease after training. These results suggest that a core brain region for sleep-dependent consolidation of finger-tapping motor-sequence learning resides in the SMA contralateral to the trained hand and is mediated by spontaneous δ and fast-σ oscillations, especially during slow-wave sleep. The consolidation may arise along with possible reorganization of a larger-scale cortical network that involves the SMA and cortical regions outside the motor regions, including prefrontal and parietal regions.
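For readers unfamiliar with the band-power quantities mentioned above, the sketch below shows one conventional way to estimate spectral power in the δ and fast-σ ranges from a single time series using a Welch periodogram. The band boundaries (0.5–4 Hz for δ, 13–16 Hz for fast σ), the sampling rate, and the synthetic test signal are assumptions for illustration; the study's actual MEG source-space analysis is not reproduced here.

```python
import numpy as np
from scipy.signal import welch

def band_power(signal, fs, band):
    """Mean power spectral density of `signal` within `band` (Hz), via Welch."""
    freqs, psd = welch(signal, fs=fs, nperseg=int(4 * fs))
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return psd[mask].mean()

fs = 200.0                                  # assumed sampling rate in Hz
t = np.arange(0, 30, 1 / fs)
rng = np.random.default_rng(0)
trace = np.sin(2 * np.pi * 2 * t) + 0.5 * rng.standard_normal(t.size)  # toy 2 Hz-dominated signal

print(band_power(trace, fs, (0.5, 4.0)))    # assumed delta band
print(band_power(trace, fs, (13.0, 16.0)))  # assumed fast-sigma band
```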
Psychology and Aging | 2006
Robert Sekuler; Chris McLaughlin; Michael J. Kahana; Arthur Wingfield; Yuko Yotsumoto
Increased difficulty with memory for recent events is a well-documented consequence of normal aging, but not all aspects of memory are equally affected. To compare the impact of aging on short-term recognition and temporal order memory, young and older adults were asked to identify the serial position that a probe item had occupied in a study set, or to judge that the probe was novel (had not been in the study set). Stimuli were compound sinusoidal gratings, which resist verbal description and rehearsal. With retention intervals of 1 or 4 seconds, young and older adults produced highly similar overall performance, serial position curves, and proportions of trials on which a correct recognition response was accompanied by an incorrect temporal order judgment. Temporal order errors, which occurred on about one quarter of trials, were traced to two factors: perceptual similarity between the wrongly identified study item and the correct item, and temporal similarity between the wrongly identified item and the correct one. Our results show that short-term visual temporal order memory is well-preserved in normal aging, and when temporal order errors do occur, they arise from similar causes for young and older people.
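The compound sinusoidal gratings used as stimuli here are superpositions of simple sine-wave gratings that differ in spatial frequency, which is what makes them resistant to verbal description and rehearsal. The sketch below generates such a stimulus; the particular spatial frequencies, contrasts, and image geometry are placeholder values for illustration, not the study's parameters.

```python
import numpy as np

def compound_grating(size=256, freqs_cpd=(1.0, 3.0), contrasts=(0.5, 0.5),
                     orientation_deg=0.0, pixels_per_deg=32.0):
    """Illustrative compound grating: a sum of two sinusoidal gratings of
    different spatial frequencies at a common orientation, returned as a
    luminance image scaled to [0, 1]."""
    coords = (np.arange(size) - size / 2.0) / pixels_per_deg   # degrees of visual angle
    y, x = np.meshgrid(coords, coords, indexing="ij")
    theta = np.deg2rad(orientation_deg)
    u = x * np.cos(theta) + y * np.sin(theta)                  # axis along the grating
    img = sum(c * np.sin(2 * np.pi * f * u) for f, c in zip(freqs_cpd, contrasts))
    return 0.5 + 0.5 * img / sum(contrasts)

grating = compound_grating()
print(grating.shape, grating.min().round(3), grating.max().round(3))
```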
Timing & Time Perception | 2015
Yuki Hashimoto; Yuko Yotsumoto
When a visually presented stimulus flickers, its perceived duration exceeds its actual duration. This effect is called ‘time dilation’. On the basis of recent electrophysiological findings, we hypothesized that this flicker-induced time dilation is caused by distortions of the internal clock, which is composed of many oscillators with different intrinsic frequencies. To examine this hypothesis, we conducted behavioral experiments and a neural simulation. In the behavioral experiments, we measured flicker-induced time dilation at various flicker frequencies. The stimulus was either a steadily presented patch or a flickering patch. The temporal frequency spectrum of the flickering patch had either a single peak at 10.9, 15, or 30 Hz, a narrow-band peak at 8–12 or 12–16 Hz, or a broad-band peak at 4–30 Hz. Time dilation was observed with 10.9 Hz, 15 Hz, 30 Hz, and 8–12 Hz flickers, but not with 12–16 Hz or 4–30 Hz flickers. These results indicate that both the peak frequency and the width of the frequency distribution contribute to time dilation. To explain our behavioral results in the context of a physiological model, we proposed a model that combines the Striatal Beat Frequency model with neural entrainment. The simulation successfully predicted the effects of flicker frequency locality and frequency specificity on time dilation, as observed in the behavioral experiments.
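The Striatal Beat Frequency account mentioned above reads elapsed time from a bank of cortical oscillators with different intrinsic frequencies, and the authors combine it with neural entrainment by the flicker. The toy sketch below is a drastic simplification intended only to convey the intuition that entraining (and slightly speeding up) a subset of oscillators inflates the decoded duration; the readout rule, the entrainment bandwidth and gain, and the oscillator frequencies are all assumptions, not the paper's simulation.

```python
import numpy as np

def perceived_duration(true_s, flicker_hz=None, entrain_bw=2.0, gain=0.05,
                       base_freqs=np.linspace(5, 40, 60)):
    """Toy oscillator-bank clock with flicker entrainment.

    Each oscillator accumulates phase over the interval. Flicker is assumed
    to capture oscillators whose intrinsic frequency lies within entrain_bw
    of the flicker rate and to speed them up by `gain`. Duration is read out
    as the total accumulated phase, normalized by the phase the baseline bank
    would accumulate per second, so entrainment inflates the estimate.
    """
    freqs = base_freqs.copy()
    if flicker_hz is not None:
        entrained = np.abs(freqs - flicker_hz) < entrain_bw
        freqs[entrained] *= (1.0 + gain)        # entrained oscillators run faster
    accumulated_phase = 2 * np.pi * freqs * true_s
    return accumulated_phase.sum() / (2 * np.pi * base_freqs.sum())

print(perceived_duration(0.6))                   # steady patch: 0.6 s
print(perceived_duration(0.6, flicker_hz=10.9))  # 10.9 Hz flicker: slightly dilated
```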
Memory & Cognition | 2006
Yuko Yotsumoto; Robert Sekuler
Does visual information enjoy automatic, obligatory entry into memory, or, after such information has been seen, can it still be actively excluded? To characterize the process by which visual information could be excluded from memory, we used Sternberg’s (1966, 1975) recognition paradigm, measuring visual episodic memory for compound grating stimuli. Because recognition declines as additional study items enter memory, episodic recognition performance provides a sensitive index of memory’s contents. Three experiments showed that an item occupying a fixed serial position in a series of study items could be intentionally excluded from memory. In addition, exclusion does not depend on low-level information, such as the stimulus’s spatial location, orientation, or spatial frequency, and does not depend on the precise timing of irrelevant information, which suggests that the exclusion process is triggered by some event during a trial. The results, interpreted within the framework of a summed-similarity model for visual recognition, suggest that exclusion operates after considerable visual processing of the to-be-excluded item.
Nature Communications | 2014
Yuko Yotsumoto; Li-Hung Chang; Rui Ni; Russell S. Pierce; George J. Andersen; Takeo Watanabe; Yuka Sasaki
Visual perceptual learning (VPL) with younger subjects is associated with changes in functional activation of the early visual cortex. Although overall brain properties decline with age, it is unclear whether these declines are associated with visual perceptual learning. Here we use diffusion tensor imaging to test whether changes in white matter are involved in VPL for older adults. After training on a texture discrimination task for 3 daily sessions, both older and younger subjects show performance improvements. While the older subjects show significant changes in fractional anisotropy (FA) in the white matter beneath the early visual cortex after training, no significant change in FA is observed for younger subjects. These results suggest that the mechanism for VPL in older individuals is considerably different from that in younger individuals and that VPL of older individuals involves re-organization of white matter.
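Fractional anisotropy, the white-matter measure reported to change in the older group, is a standard scalar computed from the three eigenvalues of the diffusion tensor; it approaches 0 for isotropic diffusion and 1 for diffusion confined to a single axis. A minimal implementation of the standard formula (the example eigenvalues are arbitrary, chosen only to contrast low and high anisotropy):

```python
from math import sqrt

def fractional_anisotropy(l1, l2, l3):
    """Standard FA of a diffusion tensor, given its three eigenvalues."""
    num = sqrt((l1 - l2) ** 2 + (l2 - l3) ** 2 + (l3 - l1) ** 2)
    den = sqrt(2.0 * (l1 ** 2 + l2 ** 2 + l3 ** 2))
    return num / den

print(fractional_anisotropy(1.0, 0.95, 0.9))   # nearly isotropic diffusion: FA ≈ 0.05
print(fractional_anisotropy(1.7, 0.3, 0.3))    # strongly directional diffusion: FA ≈ 0.8
```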
PLOS ONE | 2015
Kenichi Yuasa; Yuko Yotsumoto
When an object is presented visually and moves or flickers, its duration tends to be overestimated. Such overestimation is called time dilation. Perceived time can also be distorted when a stimulus is presented aurally as an auditory flutter, but the mechanisms and their relationship to visual processing remain unclear. In the present study, we measured interval timing perception while modulating the temporal characteristics of visual and auditory stimuli, and investigated whether the interval timing of visually and aurally presented objects shares a common mechanism. In these experiments, participants compared the durations of flickering or fluttering stimuli to standard stimuli, which were presented continuously. Perceived durations of auditory flutters were underestimated, while perceived durations of visual flickers were overestimated. When auditory flutters and visual flickers were presented simultaneously, these distortion effects cancelled out. When auditory flutters were presented with a constantly presented visual stimulus, interval timing perception of the visual stimulus was affected by the auditory flutters. These results indicate that interval timing perception is governed by independent mechanisms for visual and auditory processing, and that there are some interactions between the two processing systems.