
Publication


Featured research published by Joan Liu-Shuang.


Neuropsychologia | 2013

The 6 Hz fundamental stimulation frequency rate for individual face discrimination in the right occipito-temporal cortex.

Esther Alonso-Prieto; Goedele Van Belle; Joan Liu-Shuang; Anthony M. Norcia; Bruno Rossion

What is the stimulus presentation rate at which the human brain can discriminate each exemplar of a familiar visual category? We presented faces at 14 frequency rates (1.0-16.66 Hz) to human observers while recording high-density electroencephalogram (EEG). Different face exemplars elicited a larger steady-state visual evoked potential (ssVEP) response than when the same face was repeated, but only for stimulation frequencies between 4 and 8.33 Hz, with a maximal difference at 5.88 Hz (170 ms cycle). The effect was confined to the exact stimulation frequency and localized over the right occipito-temporal cortex. At high frequency rates (>10 Hz), the responses to different and identical exemplars did not differ, suggesting that the fine-grained analysis needed for individual face discrimination cannot be completed before the next face interrupts, or competes with, the face currently being processed. At low rates (<3 Hz), repetition suppression could not be identified at the stimulation frequency, suggesting that the neural response to an individual face is temporally dispersed and distributed over different processes. These observations indicate that at a temporal rate of 170 ms per face (≈6 faces/s) the face perception network is able to fully discriminate each individual face presented, providing information about the temporal bottleneck of individual face discrimination in humans. These results also have important practical implications for optimizing paradigms that rely on repetition suppression, and open an avenue for investigating complex visual processes at an optimal range of stimulation frequencies.
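The frequency-domain logic shared by these studies (a response measured exactly at the tagged stimulation frequency and quantified against neighbouring noise bins) can be sketched with synthetic data. Only the 5.88 Hz stimulation frequency comes from the study; the sampling rate, recording length, signal amplitude, and noise level are illustrative assumptions:

```python
import numpy as np

fs = 512.0            # assumed EEG sampling rate (Hz)
f_stim = 5.88         # stimulation frequency from the study
dur = 25.0            # assumed recording length (s); puts 5.88 Hz exactly on an FFT bin
t = np.arange(0, dur, 1 / fs)
rng = np.random.default_rng(0)

# Synthetic "EEG": a small periodic response at f_stim buried in broadband noise.
eeg = 0.5 * np.sin(2 * np.pi * f_stim * t) + rng.normal(0.0, 1.0, t.size)

# Amplitude spectrum and the bin closest to the stimulation frequency.
spectrum = np.abs(np.fft.rfft(eeg)) / t.size
freqs = np.fft.rfftfreq(t.size, 1 / fs)
target = int(np.argmin(np.abs(freqs - f_stim)))

# SNR: amplitude at the target bin relative to surrounding noise bins,
# skipping the bins immediately adjacent to the target.
noise = np.r_[spectrum[target - 11:target - 1], spectrum[target + 2:target + 12]]
snr = spectrum[target] / noise.mean()
print(f"SNR at {freqs[target]:.2f} Hz: {snr:.1f}")
```

Because the periodic response is confined to a single frequency bin while noise spreads across the whole spectrum, even a weak response stands out clearly at the tagged frequency, which is what makes the approach objective (the frequency of interest is known in advance).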


Neuropsychologia | 2014

An objective index of individual face discrimination in the right occipito-temporal cortex by means of fast periodic oddball stimulation.

Joan Liu-Shuang; Anthony M. Norcia; Bruno Rossion

We introduce an approach based on fast periodic oddball stimulation that provides objective, high signal-to-noise ratio (SNR), and behavior-free measures of the human brain's discriminative response to complex visual patterns. High-density electroencephalogram (EEG) was recorded for human observers presented with 60-s sequences containing a base-face (A) sinusoidally contrast-modulated at a frequency of 5.88 Hz (F), with face size varying every cycle. Different oddball-faces (B, C, D...) were introduced at fixed intervals (every 5th stimulus = F/5 = 1.18 Hz: AAAABAAAACAAAAD...). Individual face discrimination was indexed by responses at this 1.18 Hz oddball frequency. Following only 4 min of recording, significant responses emerged at exactly 1.18 Hz and its harmonics (e.g., 2F/5 = 2.35 Hz, 3F/5 = 3.53 Hz...), with up to a 300% signal increase over the right occipito-temporal cortex. This response was present in all participants, for both color and greyscale faces, providing a robust implicit neural measure of individual face discrimination. Face inversion or contrast-reversal did not affect the basic 5.88 Hz periodic response over medial occipital channels. However, these manipulations substantially reduced the 1.18 Hz oddball discrimination response over the right occipito-temporal region, indicating that this response reflects high-level processes that are partly face-specific. These observations indicate that fast periodic oddball stimulation can be used to rapidly and objectively characterize the discrimination of visual patterns and may become invaluable in characterizing this process in typical adult, developmental, and neuropsychological patient populations.
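The oddball design described above follows simple arithmetic: oddballs at every 5th position yield a response at F/5 and its harmonics. A minimal sketch, where the helper function and sequence length are illustrative and only the 5.88 Hz base rate and the every-5th-stimulus rule come from the study:

```python
base_freq = 5.88                  # base stimulation frequency F (Hz), from the study
oddball_every = 5                 # oddball at every 5th stimulus
oddball_freq = base_freq / oddball_every   # F/5 ≈ 1.18 Hz

def make_sequence(n_stimuli, oddballs="BCDEFG"):
    """Build a stimulus sequence such as AAAABAAAACAAAAD..."""
    seq = []
    for i in range(n_stimuli):
        if (i + 1) % oddball_every == 0:
            # A different oddball identity at every 5th position.
            seq.append(oddballs[(i // oddball_every) % len(oddballs)])
        else:
            seq.append("A")
    return "".join(seq)

print(make_sequence(15))          # → AAAABAAAACAAAAD

# Oddball frequency and its first harmonics, as reported in the abstract.
harmonics = [round(k * oddball_freq, 2) for k in range(1, 4)]
print(harmonics)                  # → [1.18, 2.35, 3.53]
```

Any response at 1.18 Hz (or its harmonics) can only reflect a process that distinguishes the oddball faces from the base face, since everything else in the sequence repeats at 5.88 Hz.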


Journal of Vision | 2015

Fast periodic presentation of natural images reveals a robust face-selective electrophysiological response in the human brain

Bruno Rossion; Katrien Torfs; Corentin Jacques; Joan Liu-Shuang

We designed a fast periodic visual stimulation approach to identify an objective signature of face categorization incorporating both visual discrimination (from nonface objects) and generalization (across widely variable face exemplars). Scalp electroencephalographic (EEG) data were recorded in 12 human observers viewing natural images of objects at a rapid frequency of 5.88 images/s for 60 s. Natural images of faces were interleaved every five stimuli, i.e., at 1.18 Hz (5.88/5). Face categorization was indexed by a high signal-to-noise ratio response, specifically at an oddball face stimulation frequency of 1.18 Hz and its harmonics. This face-selective periodic EEG response was highly significant for every participant, even for a single 60-s sequence, and was generally localized over the right occipitotemporal cortex. The periodicity constraint and the large selection of stimuli ensured that this selective response to natural face images was free of low-level visual confounds, as confirmed by the absence of any oddball response for phase-scrambled stimuli. Without any subtraction procedure, time-domain analysis revealed a sequence of differential face-selective EEG components between 120 and 400 ms after oddball face image onset, progressing from medial occipital (P1-faces) to occipitotemporal (N1-faces) and anterior temporal (P2-faces) regions. Overall, this fast periodic visual stimulation approach provides a direct signature of natural face categorization and opens an avenue for efficiently measuring categorization responses of complex visual stimuli in the human brain.


Proceedings of the National Academy of Sciences of the United States of America | 2016

A face-selective ventral occipito-temporal map of the human brain with intracerebral potentials

Jacques Jonas; Corentin Jacques; Joan Liu-Shuang; Hélène Brissart; Sophie Colnat-Coulbois; Louis Maillard; Bruno Rossion

Significance: Understanding the neural basis of face perception, arguably the most important visual function for human social ecology, is of the utmost importance. With an original fast periodic visual stimulation approach, we provide a comprehensive quantification of selective brain responses to faces throughout the ventral visual stream with direct recordings in the gray matter. Selective responses to faces are distributed in the whole ventral occipito-temporal cortex, with a right hemispheric and regional specialization supporting two decades of indirect recordings of human brain activity in neuroimaging. We also disclose three distinct face-selective regions in the anterior temporal lobe, an undersampled region in neuroimaging, and reveal exclusive responses to faces at the neural population level in these regions.

Human neuroimaging studies have identified a network of distinct face-selective regions in the ventral occipito-temporal cortex (VOTC), with a right hemispheric dominance. To date, there is no evidence for this hemispheric and regional specialization with direct measures of brain activity. To address this gap in knowledge, we recorded local neurophysiological activity from 1,678 contact electrodes implanted in the VOTC of a large group of epileptic patients (n = 28). They were presented with natural images of objects at a rapid fixed rate (six images per second: 6 Hz), with faces interleaved as every fifth stimulus (i.e., 1.2 Hz). High signal-to-noise ratio face-selective responses were objectively (i.e., exactly at the face stimulation frequency) identified and quantified throughout the whole VOTC. Face-selective responses were widely distributed across the whole VOTC, but also spatially clustered in specific regions. Among these regions, the lateral section of the right middle fusiform gyrus showed the largest face-selective response by far, offering, to our knowledge, the first supporting evidence of two decades of neuroimaging observations with direct neural measures. In addition, three distinct regions with a high proportion of face-selective responses were disclosed in the right ventral anterior temporal lobe, a region that is undersampled in neuroimaging because of magnetic susceptibility artifacts. A high proportion of contacts responding only to faces (i.e., “face-exclusive” responses) were found in these regions, suggesting that they contain populations of neurons involved in dedicated face-processing functions. Overall, these observations provide a comprehensive mapping of visual category selectivity in the whole human VOTC with direct neural measures.


Proceedings of the National Academy of Sciences of the United States of America | 2015

Neural microgenesis of personally familiar face recognition

Meike Ramon; Luca Vizioli; Joan Liu-Shuang; Bruno Rossion

Significance: We addressed the open question of how the human brain recognizes personally familiar faces. A dynamic visual-stimulation paradigm revealed that familiar face recognition is achieved first and foremost in medial and anterior temporal regions of the extended face-processing system. These regions, including the amygdala, respond categorically to individual familiar faces. In contrast, activation in posterior core face-preferential regions is associated with the amount of visual information available, irrespective of familiarity. Through integration of core and extended face-processing systems, these observations provide a common framework for understanding the neural basis of familiar face recognition.

Despite a wealth of information provided by neuroimaging research, the neural basis of familiar face recognition in humans remains largely unknown. Here, we isolated the discriminative neural responses to unfamiliar and familiar faces by slowly increasing visual information (i.e., high-spatial frequencies) to progressively reveal faces of unfamiliar or personally familiar individuals. Activation in ventral occipitotemporal face-preferential regions increased with visual information, independently of long-term face familiarity. In contrast, medial temporal lobe structures (perirhinal cortex, amygdala, hippocampus) and anterior inferior temporal cortex responded abruptly when sufficient information for familiar face recognition was accumulated. These observations suggest that following detailed analysis of individual faces in core posterior areas of the face-processing network, familiar face recognition emerges categorically in medial temporal and anterior regions of the extended cortical face network.


Neuropsychologia | 2016

An objective electrophysiological marker of face individualisation impairment in acquired prosopagnosia with fast periodic visual stimulation.

Joan Liu-Shuang; Katrien Torfs; Bruno Rossion

One of the most striking pieces of evidence for a specialised face processing system in humans is acquired prosopagnosia, i.e. the inability to individualise faces following brain damage. However, a sensitive and objective non-behavioural marker for this deficit is difficult to provide with standard event-related potentials (ERPs), such as the well-known face-related N170 component reported and investigated in-depth by our late distinguished colleague Shlomo Bentin. Here we demonstrate that fast periodic visual stimulation (FPVS) in electrophysiology can quantify face individualisation impairment in acquired prosopagnosia. In Experiment 1 (Liu-Shuang et al., 2014), identical faces were presented at a rate of 5.88 Hz (i.e., ≈ 6 images/s, SOA=170 ms, 1 fixation per image), with different faces appearing every 5th face (5.88 Hz/5=1.18 Hz). Responses of interest were identified at these predetermined frequencies (i.e., objectively) in the EEG frequency-domain data. A well-studied case of acquired prosopagnosia (PS) and a group of age- and gender-matched controls completed only 4 × 1-min stimulation sequences, with an orthogonal fixation cross task. Contrary to controls, PS did not show face individualisation responses at 1.18 Hz, in line with her prosopagnosia. However, her response at 5.88 Hz, reflecting general visual processing, was within the normal range. In Experiment 2 (Rossion et al., 2015), we presented natural (i.e., unsegmented) images of objects at 5.88 Hz, with face images shown every 5th image (1.18 Hz). In accordance with her preserved ability to categorise a face as a face, and despite extensive brain lesions potentially affecting the overall EEG signal-to-noise ratio, PS showed 1.18 Hz face-selective responses within the normal range.
Collectively, these findings show that fast periodic visual stimulation provides objective and sensitive electrophysiological markers of preserved and impaired face processing abilities in the neuropsychological population.


Vision Research | 2015

The effect of contrast polarity reversal on face detection: evidence of perceptual asymmetry from sweep VEP

Joan Liu-Shuang; Justin Ales; Bruno Rossion; Anthony M. Norcia

Contrast polarity inversion (i.e., turning dark regions light and vice versa) impairs face perception. We investigated the perceptual asymmetry between positive and negative polarity faces (matched for overall luminance) using a sweep VEP approach in the context of face detection (Journal of Vision 12 (2012) 1-18). Phase-scrambled face stimuli alternated at a rate of 3 Hz (6 images/s). The phase coherence of every other stimulus was parametrically increased so that a face gradually emerged over a 20-s stimulation sequence, leading to a 3 Hz response reflecting face detection. Contrary to the 6 Hz response, reflecting low-level visual processing, this 3 Hz response was larger and emerged earlier over right occipito-temporal channels for positive than negative polarity faces. Moreover, the 3 Hz response emerged abruptly to positive polarity faces, whereas it increased linearly for negative polarity faces. In another condition, alternating between a positive and a negative polarity face also elicited a strong 3 Hz response, indicating an asymmetrical representation of positive and negative polarity faces even at supra-threshold levels (i.e., when both stimuli were perceived as faces). Overall, these findings demonstrate distinct perceptual representations of positive and negative polarity faces, independently of low-level cues, and suggest qualitatively different detection processes (template-based matching for positive polarity faces vs. linear accumulation of evidence for negative polarity faces).


Journal of Cognitive Neuroscience | 2017

Individual Differences in Face Identity Processing with Fast Periodic Visual Stimulation

Buyun Xu; Joan Liu-Shuang; Bruno Rossion; James W. Tanaka

A growing body of literature suggests that human individuals differ in their ability to process face identity. These findings mainly stem from explicit behavioral tasks, such as the Cambridge Face Memory Test (CFMT). However, it remains an open question whether such individual differences can be found in the absence of an explicit face identity task and when faces have to be individualized at a single glance. In the current study, we tested 49 participants with a recently developed fast periodic visual stimulation (FPVS) paradigm [Liu-Shuang, J., Norcia, A. M., & Rossion, B. An objective index of individual face discrimination in the right occipitotemporal cortex by means of fast periodic oddball stimulation. Neuropsychologia, 52, 57–72, 2014] in EEG to rapidly, objectively, and implicitly quantify face identity processing. In the FPVS paradigm, one face identity (A) was presented at the frequency of 6 Hz, allowing only one gaze fixation, with different face identities (B, C, D) presented every fifth face (1.2 Hz; i.e., AAAABAAAACAAAAD…). Results showed a face individuation response at 1.2 Hz and its harmonics, peaking over occipitotemporal locations. The magnitude of this response showed high reliability across different recording sequences and was significant in all but two participants, with the magnitude and lateralization differing widely across participants. There was a modest but significant correlation between the individuation response amplitude and the performance of the behavioral CFMT task, despite the fact that CFMT and FPVS measured different aspects of face identity processing. Taken together, the current study highlights the FPVS approach as a promising means for studying individual differences in face identity processing.


Journal of Vision | 2015

Separable effects of inversion and contrast-reversal on face detection thresholds and response functions: A sweep VEP study

Joan Liu-Shuang; Justin Ales; Bruno Rossion; Anthony M. Norcia

The human brain rapidly detects faces in the visual environment. We recently presented a sweep visual evoked potential approach to objectively define face detection thresholds as well as suprathreshold response functions (Ales, Farzin, Rossion, & Norcia, 2012). Here we determined how these parameters are affected by orientation (upright vs. inverted) and contrast polarity (positive vs. negative), two manipulations that disproportionately disrupt the perception of faces relative to other object categories. Face stimuli parametrically increased in visibility through phase-descrambling while alternating with scrambled images at a fixed presentation rate of 3 Hz (6 images/s). The power spectrum and mean luminance of all stimuli were equalized. As a face gradually emerged during a stimulation sequence, EEG responses at 3 Hz appeared at ≈35% phase coherence over right occipito-temporal channels, replicating previous observations. With inversion and contrast-reversal, the 3-Hz amplitude decreased by ≈20%-50% and the face detection threshold increased by ≈30%-60% coherence. Furthermore, while the 3-Hz response emerged abruptly and saturated quickly for normal faces, suggesting a categorical neural response, the response profile for inverted and negative polarity faces was shallower and more linear, indicating gradual and continuously increasing activation of the underlying neural population. These findings demonstrate that inversion and contrast-reversal increase the threshold and modulate the suprathreshold response function of face detection.
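The sweep design in the two studies above can be illustrated with a minimal coherence schedule. The number and duration of coherence steps here are assumptions; only the 3 Hz alternation rate and the ≈35% detection threshold come from the abstracts:

```python
import numpy as np

seq_dur = 20.0                 # sweep sequence duration (s), from the 2015 study
n_steps = 10                   # assumed number of coherence steps in the sweep
alt_rate = 3.0                 # face/scrambled alternation rate (Hz), from the study

# Phase coherence ramps from fully scrambled (0) to a fully coherent face (1).
coherence = np.linspace(0.0, 1.0, n_steps)
step_dur = seq_dur / n_steps   # seconds spent at each coherence level

# The 3 Hz face-detection response is expected once coherence crosses ~35%.
threshold = 0.35
onset_step = int(np.argmax(coherence >= threshold))
onset_time = onset_step * step_dur
print(f"3 Hz detection response expected from step {onset_step} (~{onset_time:.0f} s)")
```

Reading the coherence level at which the 3 Hz response first appears off such a schedule is what turns the sweep into an objective threshold estimate: manipulations like inversion or contrast-reversal shift the onset to a later (higher-coherence) step.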


Journal of Vision | 2018

An objective signature of emotional expressions and context integration within a single glance: evidence from electroencephalographic frequency-tagging

Stéphanie Matt; Joan Liu-Shuang; Louis Maillard; Joëlle Lighezzolo-Alnot; Bruno Rossion; Stéphanie Caharel

The ability to quickly and accurately extract someone's emotional state from their face is crucial for social interaction. Over the last decades, the processing of emotional expressions has been studied mainly using isolated faces. However, at the behavioral level, contextual information often leads to radical changes in the categorization of facial expressions, yet the underlying mechanisms are not well understood (Aviezer et al., 2017, Current Opinion in Psychology, 17, 47–54; Barrett et al., 2011, Current Directions in Psychological Science, 20, 286–290). Here we examined the impact of emotional visual scenes on the perception of emotional expressions within a single glance by means of fast periodic visual stimulation (FPVS). We recorded 128-channel EEG while participants viewed 60-s sequences with a dual frequency-tagging paradigm (Boremanse et al., 2013, Journal of Vision (11):6, 1-18). We presented faces and scenes simultaneously, with each stimulus set flickering at a specific frequency (f1=4.61 Hz and f2=5.99 Hz; frequencies were counterbalanced across stimuli). Each sequence displayed different faces with the same emotional expression (disgust, fear, or joy) within either positive or negative valence visual scenes. Periodic EEG responses at the image presentation frequencies (4.61 Hz and 5.99 Hz) captured general visual processing of the emotional faces and scenes, while intermodulation components (e.g. f2-f1: 5.99 – 4.61 Hz = 1.38 Hz) captured the integration between the emotional expressions and their context. At the group-level, emotional expressions elicited right-lateralized occipito-temporal electrophysiological responses that were stronger for negative valence expressions (especially disgust). Similarly, negative scenes elicited stronger neural responses than positive scenes over the medial occipital region.
Finally, and critically, we observed intermodulation components that were prominent over right occipito-temporal sites and showed increased response amplitude for negative scenes, thereby providing an objective demonstration of the perceptual integration of emotional facial expressions with their emotional context.
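The intermodulation arithmetic behind this dual frequency-tagging design can be made explicit. The two tagging frequencies are from the study; the helper function and the order cut-off are illustrative assumptions:

```python
# Faces flicker at f1 and scenes at f2; a neural population that combines
# both inputs nonlinearly produces responses at combination frequencies
# n*f2 ± m*f1, which neither stimulus alone can generate.
f1, f2 = 4.61, 5.99   # face and scene tagging frequencies (Hz), from the study

def im_components(f1, f2, order=2):
    """Low-order intermodulation frequencies |n*f2 - m*f1| for n, m >= 1."""
    comps = set()
    for n in range(1, order + 1):
        for m in range(1, order + 1):
            f = round(abs(n * f2 - m * f1), 2)
            if f > 0:
                comps.add(f)
    return sorted(comps)

print(im_components(f1, f2))  # → [1.38, 2.76, 3.23, 7.37]; f2 - f1 = 1.38 Hz
```

Because 1.38 Hz is present in neither input stream, a response there is direct evidence that face and scene information were integrated by a common neural population, which is the logic the abstract relies on.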

Collaboration


Dive into Joan Liu-Shuang's collaborations.

Top Co-Authors

Bruno Rossion
Catholic University of Leuven

Justin Ales
University of St Andrews

Meike Ramon
University of Fribourg

Corentin Jacques
Catholic University of Leuven

Genevieve Quek
Catholic University of Leuven

Katrien Torfs
Catholic University of Leuven