Julie Bertels
Université libre de Bruxelles
Publications
Featured research published by Julie Bertels.
Acta Psychologica | 2010
Julie Bertels; Régine Kolinsky; Jose Morais
The influence of the affective content of speech on the spatial orienting of auditory attention was examined by adapting the dot probe task. Two words, one of which was emotional in one quarter of the trials, were played simultaneously from left- and right-located loudspeakers and followed (or not) by a lateralized beep to be detected (Experiments 1 and 2) or localized (Experiment 3). Taboo words induced attentional biases towards their spatial location in all experiments, as did negative words in Experiment 3, but positive words did not. Thus, in audition, the identification of an emotional word automatically activates information about its spatial origin. Moreover, for both word types, attentional biases were only observed when the emotional word was presented on the participants' right side, suggesting that the dominant left-hemisphere processing of words constrains the occurrence of spatial congruency effects.
Emotion | 2011
Julie Bertels; Régine Kolinsky; Elise P.E. Pietrons; Jose Morais
Using an auditory adaptation of the emotional and taboo Stroop tasks, the authors compared the effects of negative and taboo spoken words in mixed and blocked designs. Both types of words elicited carryover effects with mixed presentations and interference with blocked presentations, suggesting similar long-lasting attentional effects. Both were also relatively resilient to the long-lasting influence of the preceding emotional word. Hence, contrary to what has been assumed (Schmidt & Saari, 2007), negative and taboo words do not seem to differ in terms of the temporal dynamics of the interdimensional shifting, at least in the auditory modality.
PLOS ONE | 2013
Julie Bertels; Catherine Demoulin; Ana Franco; Arnaud Destrebecqz
It is well established that mood influences many cognitive processes, such as learning and executive functions. Although statistical learning, like mood, is assumed to be part of our daily life, the influence of mood on statistical learning had never been investigated before. In the present study, a sad vs. neutral mood was induced in participants by having them listen to stories while they were exposed to a stream of visual shapes made up of the repeated presentation of four triplets, i.e., sequences of three shapes presented in a fixed order. Given that the inter-stimulus interval was held constant within and between triplets, the only cues available for triplet segmentation were the transitional probabilities between shapes. Direct and indirect measures of learning taken either immediately or 20 minutes after the exposure/mood induction phase revealed that participants learned the statistical regularities between shapes. Interestingly, although participants from the sad and neutral groups performed similarly in these tasks, subjective measures (confidence judgments taken after each trial) revealed that participants who experienced the sad mood induction showed increased conscious access to their statistical knowledge. These effects were not modulated by the time delay between the exposure/mood induction and the test phases. These results are discussed within the scope of the robustness principle and the influence of negative affect on processing style.
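The segmentation cue used in this kind of stream can be made concrete with a small sketch. The shape labels and stream length below are hypothetical stand-ins, but the logic is the one the abstract describes: within a triplet each shape perfectly predicts the next, whereas transitions across triplet boundaries are far less predictable.

```python
import random
from collections import Counter

# Hypothetical shape labels; in the study these were abstract visual shapes.
triplets = [("A", "B", "C"), ("D", "E", "F"), ("G", "H", "I"), ("J", "K", "L")]

# Build a stream of triplets in pseudo-random order, avoiding immediate repeats.
random.seed(0)
stream, prev = [], None
for _ in range(200):
    t = random.choice([t for t in triplets if t is not prev])
    stream.extend(t)
    prev = t

# Transitional probability P(next | current) from bigram counts.
pair_counts = Counter(zip(stream, stream[1:]))
first_counts = Counter(stream[:-1])
tp = {pair: n / first_counts[pair[0]] for pair, n in pair_counts.items()}

# Within-triplet transitions are deterministic (TP = 1.0) ...
assert tp[("A", "B")] == 1.0 and tp[("B", "C")] == 1.0
# ... while transitions across triplet boundaries are much weaker (~1/3 here),
# which is the only cue available for segmenting the stream into triplets.
assert all(tp[("C", x)] < 0.5 for x in "DGJ" if ("C", x) in tp)
```

With a constant inter-stimulus interval, this drop in transitional probability at triplet boundaries is the sole statistical regularity learners can exploit.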
Journal of Cognitive Psychology | 2011
Julie Bertels; Régine Kolinsky; Aurélie Bernaerts; Jose Morais
Attentional biases linked to emotional stimuli were investigated in healthy people using an auditory adaptation of the cueing paradigm. Specifically, we investigated whether both validity effects elicited by predictive, endogenous cues and the Inhibition of Return phenomenon (IOR; Posner & Cohen, 1984) elicited by unpredictive, exogenous cues are influenced by the emotional content of spoken words. Supporting the idea that exogenous orienting is not an encapsulated phenomenon (Stolz, 1996), we found abolished IOR for negative words (Experiments 3 and 4). Thus, attention does not appear to be prevented from returning to the previously explored location of a negative word. In contrast, no emotional modulation of the validity effects was observed (Experiments 1 and 2), suggesting that the intervention of resource-demanding orienting strategies increased cognitive load and thus prevented any emotional modulation. Still, facilitative, nonspatial effects of negative words were found when initial attentional shifts elicited by the cue were both exogenous and endogenous (Experiment 1), but not when they were exclusively endogenous (Experiment 2). These results highlight the importance of both the negativity of a stimulus and the automaticity of attentional shifts in eliciting spatial and nonspatial attentional effects.
Frontiers in Psychology | 2015
Julie Bertels; Emeline Boursain; Arnaud Destrebecqz; Vinciane Gaillard
Visual statistical learning (VSL) is the ability to extract the joint and conditional probabilities of shapes co-occurring during passive viewing of complex visual configurations. Evidence indicates that even infants are sensitive to these regularities (e.g., Kirkham et al., 2002). However, there is continuing debate as to whether VSL is accompanied by conscious awareness of the statistical regularities between sequence elements. Bertels et al. (2012) addressed this question in young adults. Here, we adapted their paradigm to investigate VSL and conscious awareness in children. Using the same version of the paradigm, we also tested young adults so as to directly compare results from both age groups. Fifth graders and undergraduates were exposed to a stream of visual shapes arranged in triplets. Learning of these sequences was then assessed using both direct and indirect measures. In order to assess the extent to which learning occurred explicitly, we also measured confidence through subjective measures in the direct task (i.e., binary confidence judgments). Results revealed that both children and young adults learned the statistical regularities between shapes. In both age groups, participants who performed above chance in the completion task had conscious access to their knowledge. Nevertheless, although adults performed above chance even when they claimed to guess, there was no evidence of implicit knowledge in children. These results suggest that the role of implicit and explicit influences in VSL may follow a developmental trajectory.
Experimental Psychology | 2015
Ana Franco; Julia Eberlen; Arnaud Destrebecqz; Axel Cleeremans; Julie Bertels
The Rapid Serial Visual Presentation procedure is a method widely used in visual perception research. In this paper, we propose an adaptation of this method that can be used with auditory material and enables the assessment of statistical learning in speech segmentation. Adult participants were exposed to an artificial speech stream composed of statistically defined trisyllabic nonsense words. They were subsequently instructed to detect a target syllable within a short Rapid Serial Auditory Presentation (RSAP) speech stream. Results showed that reaction times varied as a function of the statistical predictability of the syllable: second and third syllables of each word were responded to faster than first syllables. This result suggests that the RSAP procedure provides a reliable and sensitive indirect measure of auditory statistical learning.
Frontiers in Psychology | 2015
Julie Bertels; Arnaud Destrebecqz; Ana Franco
The statistical regularities of a sequence of visual shapes can be learned incidentally. Arciuli et al. (2014) recently argued that intentional instructions only improve learning at slow presentation rates, as these favor the use of explicit strategies. The aim of the present study was (1) to test this assumption directly by investigating how instructions (incidental vs. intentional) and presentation rate (fast vs. slow) affect the acquisition of knowledge and (2) to examine how these factors influence the conscious vs. unconscious nature of the knowledge acquired. To this aim, we exposed participants to four triplets of shapes, presented sequentially in a pseudo-random order, and assessed their degree of learning in a subsequent completion task that integrated confidence judgments. Supporting Arciuli et al.'s (2014) claim, participants' performance benefited from intentional instructions only at slow presentation rates. Moreover, informing participants beforehand about the existence of statistical regularities increased their explicit knowledge of the sequences, an effect that was not modulated by presentation speed. These results indicate that, although visual statistical learning can take place incidentally and, to some extent, outside conscious awareness, factors such as presentation rate and prior knowledge can boost learning of these regularities, presumably by favoring the acquisition of explicit knowledge.
Cognition & Emotion | 2012
Julie Bertels; Régine Kolinsky; Jose Morais
Following a suggestion made by Aquino and Arnell (2007), we assumed that the processing of emotional words is influenced by their context of presentation. Supporting this idea, previous studies using the emotional Stroop task in its visual or auditory variant revealed different results depending on the mixed versus blocked presentation of the stimuli (Bertels, Kolinsky, Pietrons, & Morais, 2011; Richards, French, Johnson, Naparstek, & Williams, 1992). In the present study, we investigated the impact of these presentation designs on the occurrence of spatial attentional biases in a modified version of the beep-probe task (Bertels, Kolinsky, & Morais, 2010). Attentional vigilance to taboo words, as well as non-spatial slowing effects of these words, was observed regardless of whether the design was mixed or blocked, whereas attentional vigilance to positive words was only observed in the mixed design. Together with the results from our previous study (Bertels et al., 2010), the present data support the reliability of the effects of shocking stimuli, whereas vigilance to positive words appears to emerge only in a threatening context.
NeuroImage | 2019
Florian Destoky; Morgane Philippe; Julie Bertels; Marie Verhasselt; Nicolas Coquelet; Marc vander Ghinst; Vincent Wens; Xavier De Tiege; Mathieu Bourguignon
During connected speech listening, brain activity tracks speech rhythmicity at delta (~0.5 Hz) and theta (4-8 Hz) frequencies. Here, we compared the potential of magnetoencephalography (MEG) and high-density electroencephalography (EEG) to uncover such speech brain tracking. Ten healthy right-handed adults listened to two different 5-min audio recordings, either without noise or mixed with a cocktail-party noise of equal loudness. Their brain activity was simultaneously recorded with MEG and EEG. We quantified speech brain tracking channel-by-channel using coherence, and with all channels at once by speech temporal envelope reconstruction accuracy. In both conditions, speech brain tracking was significant at delta and theta frequencies and peaked in the temporal regions with both modalities (MEG and EEG). However, in the absence of noise, speech brain tracking estimated from MEG data was significantly higher than that obtained from EEG. Furthermore, to uncover significant speech brain tracking, recordings needed to be ~3 times longer in EEG than in MEG, depending on the frequency considered (delta or theta) and the estimation method. In the presence of noise, both EEG and MEG recordings replicated the previous finding that speech brain tracking at delta frequencies is stronger with attended speech (i.e., the sound subjects are attending to) than with the global sound (i.e., the attended speech and the noise combined). Other previously reported MEG findings were replicated based on MEG but not EEG recordings: 1) speech brain tracking at theta frequencies is stronger with attended speech than with the global sound, 2) speech brain tracking at delta frequencies is stronger in noiseless than noisy conditions, and 3) when noise is added, speech brain tracking at delta frequencies dampens less in the left hemisphere than in the right hemisphere.
Finally, sources of speech brain tracking reconstructed from EEG data were systematically deeper and more posterior than those derived from MEG. The present study demonstrates that speech brain tracking is better seen with MEG than EEG. Quantitatively, EEG recordings need to be ~3 times longer than MEG recordings to uncover significant speech brain tracking. As a consequence, MEG appears more suited than EEG to pinpoint subtle effects related to speech brain tracking in a given recording time.
Highlights: Speech brain tracking was analysed from simultaneous MEG and EEG data. Uncovering speech brain tracking requires ~3 times shorter MEG than EEG recordings. Some previous MEG findings were replicated with MEG but not with EEG.
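The channel-by-channel coherence measure used in this study can be illustrated with a small sketch. Everything below is synthetic and hypothetical: the sampling rate, the sinusoidal "envelope", and the noisy "channel" are stand-ins for the real speech temporal envelope and MEG/EEG data, not the authors' pipeline.

```python
import numpy as np
from scipy.signal import coherence

fs = 100.0                      # assumed sampling rate (Hz); real MEG/EEG is higher
t = np.arange(0, 300, 1 / fs)   # a 5-min "recording", as in the study

rng = np.random.default_rng(0)
# Synthetic speech envelope with delta (~0.5 Hz) and theta (~5 Hz) rhythmicity.
envelope = np.sin(2 * np.pi * 0.5 * t) + 0.5 * np.sin(2 * np.pi * 5.0 * t)
# A synthetic sensor channel that weakly tracks the envelope, buried in noise.
channel = 0.3 * envelope + rng.standard_normal(t.size)

# Magnitude-squared coherence between envelope and channel (Welch's method).
f, cxy = coherence(envelope, channel, fs=fs, nperseg=int(20 * fs))

delta_c = cxy[np.argmin(np.abs(f - 0.5))]      # coherence at the delta peak
theta_c = cxy[np.argmin(np.abs(f - 5.0))]      # coherence at the theta peak
floor = np.median(cxy[(f >= 20) & (f <= 40)])  # frequencies with no tracking

# Tracking shows up as coherence peaks at the speech rhythm frequencies.
assert delta_c > floor and theta_c > floor
```

Coherence is bounded between 0 and 1, so a channel that tracks the speech envelope stands out as a peak at the delta and theta frequencies against a low broadband floor, which is what makes the measure usable channel-by-channel.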
Quarterly Journal of Experimental Psychology | 2018
Arnaud Destrebecqz; Michaël Vande Velde; Estibaliz San Anton; Axel Cleeremans; Julie Bertels
In a partial reinforcement schedule where a cue repeatedly predicts the occurrence of a target in consecutive trials, reaction times to the target tend to decrease in a monotonic fashion, while participants' expectancies for the target decrease at the same time. This dissociation between reaction times and expectancies, the so-called Perruchet effect, challenges the propositional view of learning, which posits that human conditioned responses result from conscious inferences about the relationships between events. However, whether the reaction time pattern reflects the strength of a putative cue-target link, or only non-associative processes, such as motor priming, remains unclear. To address this issue, we implemented the Perruchet procedure in a two-choice reaction time task and compared reaction time patterns in an Experimental condition, in which a tone systematically preceded a visual target, and in a Control condition, in which the onsets of the two stimuli were uncoupled. Participants' expectancies regarding the target were recorded separately in an initial block. Reaction times decreased with the succession of identical trials in both conditions, reflecting the impact of motor priming. Importantly, reaction time slopes were steeper in the Experimental than in the Control condition, indicating an additional influence of the associative strength between the two stimuli. Interestingly, slopes were less steep for participants who showed the gambler's fallacy in the initial block. In sum, our results suggest the mutual influences of motor priming, associative strength, and expectancies on performance. They are in line with a dual-process model of learning involving both a propositional reasoning process and an automatic link-formation mechanism.