
Publications


Featured research published by Takemasa Yokoyama.


Neuropsychologia | 2013

Unconscious processing of direct gaze: Evidence from an ERP study

Takemasa Yokoyama; Yasuki Noguchi; Shinichi Kita

Humans detect faces with direct gaze more rapidly than they do faces with averted gaze. Evidence suggests that the visual information of faces with direct gaze reaches conscious awareness faster than that of faces with averted gaze. This suggests that faces with direct gaze are effectively processed in the brain before they reach conscious awareness; however, it is unclear how the unconscious perception of faces with direct gaze is processed in the brain. To address this unanswered question, we recorded event-related potentials while observers viewed faces with direct or averted gaze that were either visible or rendered invisible during continuous flash suppression. We observed that invisible faces with direct gaze elicited significantly larger negative deflections than did invisible faces with averted gaze at 200, 250, and 350 ms over the parietofrontal electrodes, whereas we did not observe such effects when facial images were visible. Our results suggest that the visual information of faces with direct gaze is preferentially processed in the brain when they are presented unconsciously.


Scientific Reports | 2015

Perception of Direct Gaze Does Not Require Focus of Attention

Takemasa Yokoyama; Hiroki Sakai; Yasuki Noguchi; Shinichi Kita

Previous research using averted (e.g., leftward or rightward) gaze indicates that gaze perception requires a focus of attention. However, direct gaze, compared with averted gaze, is processed preferentially in the brain and enhances cognitive functions. Thus, it is necessary to use direct gaze to investigate whether gaze perception is possible without focused attention. We used a dual-task paradigm in which attention was drawn away from gaze. Results showed that performance on gaze-direction discrimination (direct vs. averted gaze) in the dual-task condition was only slightly lower than in the single-task condition: participants discriminated direct from averted gaze without focused attention almost as well as when attention was fully engaged. In contrast, when participants discriminated between averted gazes (leftward vs. rightward), performance dropped to near-chance levels. We conclude that perception of direct gaze does not require a focus of attention.


Perception | 2011

Attentional Capture by Change in Direct Gaze

Takemasa Yokoyama; Kazuya Ishibashi; Yuki Hongoh; Shinichi Kita

In three experiments, we examined whether change in direct gaze captures visuospatial attention better than non-direct gaze change. Change in direct gaze can be categorised into two types: ‘look toward’, in which gaze changes to look toward observers, and ‘look away’, in which gaze changes to look away from observers. Thus, we also investigated which type of change in direct gaze was more effective in capturing visuospatial attention. Each experiment employed a change-detection task, and we compared detection accuracy between the ‘look away’, ‘look toward’, and non-direct gaze-change conditions. In experiment 1, we found a detection advantage for change in direct gaze relative to non-direct gaze change, and for ‘look toward’ compared with ‘look away’. In experiment 2, we conducted control experiments to exclude the possibilities of simple motion detection and geometrical factors of the eyes, and confirmed that the detection advantage in experiment 1 occurred only when the stimuli were processed as faces and gazes. In experiment 3, we manipulated head orientation, and the results of experiment 1 persisted despite these changes. The findings establish that individuals are more sensitive to change in direct gaze than to non-direct gaze change, and to ‘look toward’ compared with ‘look away’.


Journal of Cognitive Neuroscience | 2012

Temporal dynamics of neural activity at the moment of emergence of conscious percept

Yasuki Noguchi; Takemasa Yokoyama; Megumi Suzuki; Shinichi Kita; Ryusuke Kakigi

From which regions of the brain do conscious representations of visual stimuli emerge? This is an important but controversial issue in neuroscience because some studies have reported a major role of the higher visual regions of the ventral pathway in conscious perception, whereas others have found neural correlates of consciousness as early as in the primary visual areas and in the thalamus. One reason for this controversy has been the difficulty in focusing on neural activity at the moment when conscious percepts are generated in the brain, excluding any bottom–up responses (not directly related to consciousness) that are induced by stimuli. In this study, we address this issue with a new approach that can induce a rapid change in conscious perception with little influence from bottom–up responses. Our results reveal that the first consciousness-related activity emerges from the higher visual region of the ventral pathway. However, this activity is rapidly diffused to the entire brain, including the early visual cortex. These results thus integrate previous “higher” and “lower” views on the emergence of neural correlates of consciousness, providing a new perspective for the temporal dynamics of consciousness.


Frontiers in Human Neuroscience | 2014

A critical role of holistic processing in face gender perception.

Takemasa Yokoyama; Yasuki Noguchi; Ryosuke Tachibana; Shigeru Mukaida; Shinichi Kita

Whether face gender perception relies on encoding holistic (whole) or featural (parts) information is a controversial issue. Although neuroimaging studies have identified brain regions related to face gender perception, the temporal dynamics of this process remain under debate. Here, we identified the mechanism and temporal dynamics of face gender perception. We used stereoscopic depth manipulation to create two conditions: a front condition and a behind condition. In the front condition, facial patches were presented stereoscopically in front of an occluder and participants perceived them as disjoint parts (featural cues). In the behind condition, facial patches were presented stereoscopically behind the occluder and were amodally completed and unified into a coherent face (holistic cues). We performed three behavioral experiments and one electroencephalography (EEG) experiment, and compared the results of the front and behind conditions. We found faster reaction times (RTs) in the behind condition than in the front condition, and observed priming effects and aftereffects only in the behind condition. Moreover, the EEG experiment revealed that face gender perception occurs in a relatively late phase of visual recognition (200–285 ms). Our results indicate that holistic information is critical for face gender perception, and that this process occurs at a relatively late latency.


European Journal of Neuroscience | 2011

Modulation of neuromagnetic responses to face stimuli by preceding biographical information.

Satoshi Tsujimoto; Takemasa Yokoyama; Yasuki Noguchi; Shinichi Kita; Ryusuke Kakigi

When we encode faces in memory, we often do so in association with biographical information regarding the person. To examine the neural dynamics underlying such encoding processes, we devised a face recognition task and recorded cortical activity using magnetoencephalography. The task included two conditions. In the experimental condition, face stimuli were preceded by biographical information regarding the person whose face was to be memorized, whereas in the control condition, nonsense syllables were presented before face stimuli. Behavioral results indicated that the biographical information about a person facilitated the recognition memory of their face. Magnetoencephalography signals showed clear visually evoked magnetic fields mainly in the occipitotemporal cortex, in response to the face stimuli that were to be encoded. The phasic peak was observed at 100–200 ms after onset of a face stimulus, which was followed by late latency deflections (200–400 ms). Comparison of the signal between conditions revealed that the preceding semantic information does modulate the neuromagnetic responses to the face stimuli. This modulation occurred primarily at the late latency component in the sensors over the occipitotemporal cortex. In addition, the effects of conditions were also observed in the signals from more anterior sensors, which occurred earlier than the effects in the occipitotemporal cortex. These results provide insights into the neural dynamics underlying the encoding of faces in association with their biographical information.


Scientific Reports | 2016

Self-Other Distinction Enhanced Empathic Responses in Individuals with Alexithymia

Natsuki Saito; Takemasa Yokoyama; Hideki Ohira

Although empathy is important for social interactions, individuals with alexithymia have low empathic ability, particularly where advanced empathy is concerned (empathic concern, perspective taking). It has been argued that awareness of the self-other distinction enhances advanced empathy, and alexithymics are thought to inadequately distinguish the self from others. We therefore tested whether the self-other distinction increases advanced empathy in alexithymics. To this end, we presented painful hand images over participants’ own hands, and asked participants to estimate the intensity of the pain felt and to report their affective states. Half of the participants received explicit instructions to distinguish themselves from the other person in the images. Felt pain intensity (perspective taking) and other-oriented affective responses (empathic concern) were increased by the instructions only in participants with high alexithymia scores as measured by questionnaire, whereas self-oriented affective responses (personal distress) were not affected by the instructions. These findings indicate that enhancing the self-other distinction improves alexithymics’ capacity for advanced empathy, but not for primitive empathy.


Journal of Vision | 2015

Reward vs. Emotion in Visual Selective Attention

Takemasa Yokoyama; Srikanth Padmala; Luiz Pessoa

Learned stimulus-reward associations influence how visuospatial attention is allocated, such that stimuli previously paired with reward are favored in situations involving limited resources and competition. At the same time, task-irrelevant emotional stimuli grab attention and divert resources away from tasks, resulting in poor behavioral performance. However, investigations of how reward learning and negative stimuli affect visual perception and attention have been conducted largely independently. We have recently reported that performance-based monetary rewards reduce interference from negative stimuli during visual perception. Here, we investigated how stimuli associated with past monetary rewards compete with negative emotional stimuli during a subsequent visual attention task when, critically, no performance-based rewards were at stake. We conducted two experiments to address this question. In Experiment 1, during the initial learning phase, participants selected between two stimulus categories that were paired with high- and low-reward probabilities. In the test phase, we conducted an RSVP task in which a target stimulus was preceded by a task-irrelevant neutral or negative image. We found that target stimuli previously associated with high reward reduced the interference effect of potent negative images. In Experiment 2, with a related design, this response pattern persisted even though the reward manipulation was irrelevant to the task at hand. Similar to our recent findings with performance-based rewards, across two experiments our results demonstrate that reward-associated stimuli reduce the deleterious impact of negative stimuli on behavior. Meeting abstract presented at VSS 2015.


Perception | 2014

Location probability learning requires focal attention.

Takashi Kabata; Takemasa Yokoyama; Yasuki Noguchi; Shinichi Kita

Target identification is related to the frequency with which targets appear at a given location, with greater frequency enhancing identification. This phenomenon suggests that location probability learned through repeated experience with the target modulates cognitive processing. However, it remains unclear whether attentive processing of the target is required to learn location probability. Here, we used a dual-task paradigm to test the location probability effect for attended and unattended stimuli. Observers performed an attentionally demanding central-letter task and a peripheral-bar discrimination task in which location probability was manipulated. Thus, we were able to compare performance on the peripheral task when attention was fully engaged on the target (single-task condition) versus when attentional resources were drawn away by the central task (dual-task condition). The location probability effect occurred only in the single-task condition, when attentional resources were fully available. This suggests that location probability learning requires attention to the target stimuli.


Experimental Brain Research | 2012

Attentional shifts by gaze direction in voluntary orienting: evidence from a microsaccade study

Takemasa Yokoyama; Yasuki Noguchi; Shinichi Kita

Collaboration


Top Co-Authors

Ryusuke Kakigi

Graduate University for Advanced Studies