
Publication


Featured research published by Tamami Nakano.


The Journal of Neuroscience | 2010

Development of Global Cortical Networks in Early Infancy

Fumitaka Homae; Hama Watanabe; Takayuki Otobe; Tamami Nakano; Tohshin Go; Yukuo Konishi; Gentaro Taga

Human cognition and behaviors are subserved by global networks of neural mechanisms. Although the organization of the brain is a subject of interest, the process of development of global cortical networks in early infancy has not yet been clarified. In the present study, we explored developmental changes in these networks from several days to 6 months after birth by examining spontaneous fluctuations in brain activity, using multichannel near-infrared spectroscopy. We set up 94 measurement channels over the frontal, temporal, parietal, and occipital regions of the infant brain. The obtained signals showed complex time-series properties, which were characterized as 1/f fluctuations. To reveal the functional connectivity of the cortical networks, we calculated the temporal correlations of continuous signals between all the pairs of measurement channels. We found that the cortical network organization showed regional dependency and dynamic changes in the course of development. In the temporal, parietal, and occipital regions, connectivity increased between homologous regions in the two hemispheres and within hemispheres; in the frontal regions, it decreased progressively. Frontoposterior connectivity changed to a “U-shaped” pattern within 6 months: it decreased from the neonatal period to the age of 3 months and increased from the age of 3 months to the age of 6 months. We applied cluster analyses to the correlation coefficients and showed that the bilateral organization of the networks begins to emerge during the first 3 months of life. Our findings suggest that these developing networks, which form multiple clusters, are precursors of the functional cerebral architecture.
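The core connectivity analysis described in this abstract, computing temporal correlations between all pairs of measurement channels, can be sketched as follows. This is an illustrative example only, not the study's analysis code: the channel count (94) comes from the paper, but the signal length and the random data are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for multichannel NIRS time series:
# 94 channels (as in the study) x 1000 time samples (arbitrary here).
signals = rng.standard_normal((94, 1000))

# Functional connectivity as pairwise temporal correlations:
# np.corrcoef treats each row as one channel's time series and
# returns the full 94 x 94 symmetric correlation matrix.
corr = np.corrcoef(signals)
```

Cluster analyses such as those mentioned in the abstract would then operate on this correlation matrix, grouping channels whose signals covary.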


Proceedings of the National Academy of Sciences of the United States of America | 2013

Blink-related momentary activation of the default mode network while viewing videos

Tamami Nakano; Makoto Kato; Yusuke Morito; Seishi Itoi; Shigeru Kitazawa

It remains unknown why we generate spontaneous eyeblinks every few seconds, more often than necessary for ocular lubrication. Because eyeblinks tend to occur at implicit breakpoints while viewing videos, we hypothesized that eyeblinks are actively involved in the release of attention. We show that while viewing videos, cortical activity momentarily decreases in the dorsal attention network after blink onset but increases in the default-mode network implicated in internal processing. In contrast, physical blackouts of the video do not elicit such reciprocal changes in brain networks. The results suggest that eyeblinks are actively involved in the process of attentional disengagement during a cognitive behavior by momentarily activating the default-mode network while deactivating the dorsal attention network.


Frontiers in Psychology | 2011

Large-Scale Brain Networks Underlying Language Acquisition in Early Infancy

Fumitaka Homae; Hama Watanabe; Tamami Nakano; Gentaro Taga

A critical issue in human development is that of whether the language-related areas in the left frontal and temporal regions work as a functional network in preverbal infants. Here, we used 94-channel near-infrared spectroscopy to reveal the functional networks in the brains of sleeping 3-month-old infants with and without presenting speech sounds. During the first 3 min, we measured spontaneous brain activation (period 1). After period 1, we provided stimuli by playing Japanese sentences for 3 min (period 2). Finally, we measured brain activation for 3 min without providing the stimulus (period 3), as in period 1. We found that not only the bilateral temporal and temporoparietal regions but also the prefrontal and occipital regions showed oxygenated hemoglobin signal increases and deoxygenated hemoglobin signal decreases when speech sounds were presented to infants. By calculating time-lagged cross-correlations and coherences of oxy-Hb signals between channels, we tested the functional connectivity for the three periods. The oxy-Hb signals in neighboring channels, as well as their homologous channels in the contralateral hemisphere, showed high correlation coefficients in period 1. Similar correlations were observed in period 2; however, the number of channels showing high correlations was higher in the ipsilateral hemisphere, especially in the anterior–posterior direction. The functional connectivity in period 3 showed a close relationship between the frontal and temporal regions, which was less prominent in period 1, indicating that these regions form functional networks and work as a hysteresis system that retains memory of previous inputs. We propose the hypothesis that spatiotemporally large-scale brain networks, including the frontal and temporal regions, underlie speech processing in infants and that they might play important roles in language acquisition during infancy.
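The time-lagged cross-correlation analysis mentioned in this abstract can be sketched generically. This is a minimal illustrative implementation, not the study's code; the normalization (z-scoring over the full signal) and the lag convention (a positive peak lag means the second signal lags behind the first) are assumptions of this sketch.

```python
import numpy as np

def lagged_xcorr(x, y, max_lag):
    """Normalized cross-correlation between two equal-length signals
    for integer lags in [-max_lag, max_lag]. The lag of the peak value
    estimates by how many samples y lags behind x."""
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    n = len(x)
    lags = np.arange(-max_lag, max_lag + 1)
    r = np.empty(len(lags), dtype=float)
    for i, lag in enumerate(lags):
        if lag >= 0:
            # Compare x[t] with y[t + lag], averaged over the overlap.
            r[i] = np.dot(x[:n - lag], y[lag:]) / (n - lag)
        else:
            r[i] = np.dot(x[-lag:], y[:n + lag]) / (n + lag)
    return lags, r
```

Applied to every channel pair, the peak correlation and its lag together characterize both the strength and the temporal ordering of the coupling between regions.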


Human Brain Mapping | 2011

Effect of auditory input on activations in infant diverse cortical regions during audiovisual processing

Hama Watanabe; Fumitaka Homae; Tamami Nakano; Daisuke Tsuzuki; Lkhamsuren Enkhtur; Kiyotaka Nemoto; Ippeita Dan; Gentaro Taga

A fundamental question with regard to perceptual development is how multisensory information is processed in the brain during the early stages of development. Although a growing body of evidence has shown the early emergence of modality‐specific functional differentiation of the cortical regions, the interplay between sensory inputs from different modalities in the developing brain is not well understood. To study the effects of auditory input during audio‐visual processing in 3‐month‐old infants, we evaluated the spatiotemporal cortical hemodynamic responses of 50 infants while they perceived visual objects with or without accompanying sounds. The responses were measured using 94‐channel near‐infrared spectroscopy over the occipital, temporal, and frontal cortices. The effects of sound manipulation were pervasive throughout the diverse cortical regions and were specific to each cortical region. Visual stimuli co‐occurring with sound induced the early‐onset activation of the early auditory region, followed by activation of the other regions. Removal of the sound stimulus resulted in focal deactivation in the auditory regions and reduced activation in the early visual region, the association region of the temporal and parietal cortices, and the anterior prefrontal regions, suggesting multisensory interplay. In contrast, equivalent activations were observed in the lateral occipital and lateral prefrontal regions, regardless of sound manipulation. Our findings indicate that auditory input did not generally enhance overall activation in relation to visual perception, but rather induced specific changes in each cortical region. The present study implies that 3‐month‐old infants may perceive audio‐visual multisensory inputs by using the global network of functionally differentiated cortical regions. Hum Brain Mapp, 2013.


Pediatrics | 2012

How Children With Specific Language Impairment View Social Situations: An Eye Tracking Study

Mariko Hosozawa; Kyoko Tanaka; Toshiaki Shimizu; Tamami Nakano; Shigeru Kitazawa

OBJECTIVE: Children with specific language impairment (SLI) face risks for social difficulties. However, the nature and developmental course of these difficulties remain unclear. Gaze behaviors have been studied by using eye tracking among those with autism spectrum disorders (ASDs). Using this method, we compared the gaze behaviors of children with SLI with those of individuals with ASD and typically developing (TD) children to explore the social perception of children with SLI. METHODS: The eye gazes of 66 children (16 with SLI, 25 with ASD, and 25 TD) were studied while viewing videos of social interactions. Gaze behaviors were summarized with multidimensional scaling, and participants with similar gaze behaviors were represented proximally in a 2-dimensional plane. RESULTS: The SLI and TD groups each formed a cluster near the center of the multidimensional scaling plane, whereas the ASD group was distributed around the periphery. Frame-by-frame analyses showed that children with SLI and TD children viewed faces in a manner consistent with the story line, but children with ASD devoted less attention to faces and social interactions. During speech scenes, children with SLI were significantly more fixated on the mouth, whereas TD children viewed the eyes and the mouth. CONCLUSIONS: Children with SLI viewed social situations in ways similar to those of TD children but different from those of children with ASD. However, children with SLI concentrated on the speaker’s mouth, possibly to compensate for audiovisual processing deficits. Because eyes carry important information, this difference may influence the social development of children with SLI.
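The multidimensional-scaling step in this study, placing participants with similar gaze behaviors near each other in a 2-dimensional plane, can be illustrated with classical (Torgerson) MDS. This is a generic sketch of the technique, not the study's actual pipeline, and the input dissimilarity matrix here is synthetic.

```python
import numpy as np

def classical_mds(dist, n_components=2):
    """Classical (Torgerson) MDS: embed points in n_components dimensions
    so that pairwise Euclidean distances approximate the given
    dissimilarity matrix `dist` (square, symmetric, zero diagonal)."""
    n = dist.shape[0]
    d2 = dist ** 2
    J = np.eye(n) - np.ones((n, n)) / n   # centering matrix
    B = -0.5 * J @ d2 @ J                 # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(B)        # eigenvalues in ascending order
    idx = np.argsort(vals)[::-1][:n_components]   # take the largest
    scale = np.sqrt(np.clip(vals[idx], 0, None))
    return vecs[:, idx] * scale           # coordinates, one row per point
```

Given a matrix of pairwise dissimilarities between participants' gaze patterns, the returned 2-D coordinates can be plotted directly; tight clusters (like the SLI and TD groups in this study) appear as nearby points, while outlying behaviors land on the periphery.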


Human Brain Mapping | 2012

Functional development in the infant brain for auditory pitch processing.

Fumitaka Homae; Hama Watanabe; Tamami Nakano; Gentaro Taga

Understanding how the developing brain processes auditory information is a critical step toward clarifying infants' perception of speech and music. We have reported that the infant brain perceives pitch information in speech sounds. Here, we used multichannel near‐infrared spectroscopy to examine whether the infant brain is sensitive to information of pitch changes in auditory sequences. Three types of auditory sequences with distinct temporal structures of pitch changes were presented to 3‐ and 6‐month‐old infants: a long condition of 12 successive tones constructing a chromatic scale (600 ms), a short condition of four successive tones constructing a chromatic scale (200 ms), and a random condition of random tone sequences (50 ms per tone). The difference among the conditions was only in the sequential order of the tones, which causes pitch changes between the successive tones. We found that the bilateral temporal regions of infants of both ages showed significant activation under the three conditions. The stimulus‐dependent activation was observed in the right temporoparietal region of both infant groups; the 3‐ and 6‐month‐old infants showed the most prominent activation under the random and short conditions, respectively. Our findings indicate that the infant brain, which shows functional differentiation and lateralization in auditory‐related areas, is capable of responding to more than single tones of pitch information. These results suggest that the right temporoparietal region of infants increases its sensitivity to auditory sequences, which have temporal structures similar to those of syllables in speech sounds, in the course of development.


Neuropsychologia | 2011

Lack of eyeblink entrainments in autism spectrum disorders.

Tamami Nakano; Nobumasa Kato; Shigeru Kitazawa

Interpersonal synchrony is the temporal coordination of movements between individuals during social interactions. For example, it has been shown that listeners synchronize their eyeblinks to a speaker's eyeblinks, especially at breakpoints of speech, when viewing a close-up video clip of the speaker's face. We hypothesized that this interpersonal synchronous behavior would not be observed in individuals with autism spectrum disorders (ASD), which are characterized by impaired social communication. To test this hypothesis, we examined eyeblink entrainments in adults with ASD. As we reported previously, the eyeblinks of adults without ASD were significantly synchronized with the speaker's eyeblinks at pauses in his speech when they viewed the speaker's entire face. However, the significant eyeblink synchronization disappeared when adults without ASD viewed only the speaker's eyes or mouth, suggesting that information from the whole face, including information from both the eyes and the mouth, was necessary for eyeblink entrainment. By contrast, the ASD participants did not show any eyeblink synchronization with the speaker, even when viewing the speaker's eyes and mouth simultaneously. The lack of eyeblink entrainment to the speaker in individuals with ASD suggests that they are not able to temporally attune themselves to others' behaviors. These deficits in temporal coordination may impair effective social communication with others.


Neuropsychologia | 2012

Superior haptic-to-visual shape matching in autism spectrum disorders

Tamami Nakano; Nobumasa Kato; Shigeru Kitazawa

The weak central coherence theory of autism spectrum disorder (ASD) proposes that the cognitive bias toward local processing in ASD derives from a weakness in integrating local elements into a coherent whole. On the basis of this theory, we hypothesized that shape perception through active touch, which requires sequential integration of sensorimotor traces of exploratory finger movements into a shape representation, would be impaired in ASD. Contrary to our expectation, adults with ASD showed superior performance in a haptic-to-visual delayed shape-matching task compared to adults without ASD. Accuracy in discriminating haptic lengths or haptic orientations, which lies within the somatosensory modality, did not differ between adults with ASD and adults without ASD. Moreover, this superior ability in inter-modal haptic-to-visual shape matching was not explained by the score in a unimodal visuospatial rotation task. These results suggest that individuals with ASD are not impaired in integrating sensorimotor traces into a global visual shape and that their multimodal shape representations and haptic-to-visual information transfer are more accurate than those of individuals without ASD.


Neuropsychologia | 2013

Facilitation of face recognition through the retino-tectal pathway.

Tamami Nakano; Noriko Higashida; Shigeru Kitazawa

Humans can shift their gazes faster to human faces than to non-face targets during a task in which they are required to choose between face and non-face targets. However, it remains unclear whether a direct projection from the retina to the superior colliculus is specifically involved in this facilitated recognition of faces. To address this question, we presented a pair of face and non-face pictures to participants modulated in greyscale (luminance-defined stimuli) in one condition and modulated in a blue-yellow scale (S-cone-isolating stimuli) in another. The information of the S-cone-isolating stimuli is conveyed through the retino-geniculate pathway rather than the retino-tectal pathway. For the luminance stimuli, the reaction time was shorter towards a face than towards a non-face target. The facilitatory effect while choosing a face disappeared with the S-cone stimuli. Moreover, fearful faces elicited a significantly larger facilitatory effect relative to neutral faces, when the face (with or without emotion) and non-face stimuli were presented in greyscale. The effect of emotional expressions disappeared with the S-cone stimuli. In contrast to the S-cone stimuli, the face facilitatory effect was still observed with negated stimuli that were prepared by reversing the polarity of the original colour pictures and looked as unusual as the S-cone stimuli but still contained luminance information. These results demonstrate that the face facilitatory effect requires the facial and emotional information defined by luminance, suggesting that the luminance information conveyed through the retino-tectal pathway is responsible for the faster recognition of human faces.


Proceedings of the Royal Society of London B: Biological Sciences | 2014

Cortical networks for face perception in two-month-old infants

Tamami Nakano; Kazuko Nakatani

Newborns have an innate system for preferentially looking at an upright human face. This face preference behaviour disappears at approximately one month of age and reappears a few months later. However, the neural mechanisms underlying this U-shaped behavioural change remain unclear. Here, we isolate the functional development of the cortical visual pathway for face processing using S-cone-isolating stimulation, which blinds the subcortical visual pathway. Using luminance stimuli, which are conveyed by both the subcortical and cortical visual pathways, the preference for upright faces was not observed in two-month-old infants, but it was observed in four- and six-month-old infants, confirming the recovery phase of the U-shaped development. By contrast, using S-cone stimuli, two-month-old infants already showed a preference for upright faces, as did four- and six-month-old infants, demonstrating that the cortical visual pathway for face processing is already functioning at the bottom of the U-shape at two months of age. The present results suggest that the transient functional deterioration stems from a conflict between the subcortical and cortical functional pathways, and that the recovery thereafter involves establishing a level of coordination between the two pathways.

Collaboration


Dive into Tamami Nakano's collaborations.

Top Co-Authors
Fumitaka Homae

Tokyo Metropolitan University
