Eleonora Bartoli
Istituto Italiano di Tecnologia
Publications
Featured research published by Eleonora Bartoli.
Philosophical Transactions of the Royal Society B | 2014
Alessandro D'Ausilio; Laura Maffongelli; Eleonora Bartoli; Martina Campanella; Elisabetta Ferrari; Jeffrey Berry; Luciano Fadiga
The activation of listeners' motor system during speech processing was first demonstrated by the enhancement of electromyographic tongue potentials evoked by single-pulse transcranial magnetic stimulation (TMS) over the tongue motor cortex. This technique is, however, technically challenging and allows only a rather coarse measurement of this motor mirroring. Here, we applied TMS to listeners' tongue motor area in combination with ultrasound tissue Doppler imaging to describe the fine-grained tongue kinematic synergies evoked by passive listening to speech. Subjects listened to syllables requiring different patterns of dorso-ventral and antero-posterior movements (/ki/, /ko/, /ti/, /to/). Results show that passive listening to speech sounds evokes a pattern of motor synergies mirroring those occurring during speech production. Moreover, mirror motor synergies were more evident in subjects who performed well in discriminating speech in noise, demonstrating a role of the speech-related mirror system in feed-forward processing of the speaker's ongoing motor plan.
Neuropsychologia | 2014
Alessandro D’Ausilio; Eleonora Bartoli; Laura Maffongelli; Jeffrey Berry; Luciano Fadiga
Audiovisual speech perception is likely based on the association of auditory and visual information into stable audiovisual maps. Conflicting audiovisual inputs generate perceptual illusions such as the McGurk effect. Audiovisual mismatch effects could be driven either by the detection of violations of the standard audiovisual statistics or by the sensorimotor reconstruction of the distal articulatory event that generated the audiovisual ambiguity. To disambiguate between the two hypotheses, we exploited the fact that the tongue is hidden from vision. For this reason, tongue movement encoding can only be learned via speech production, not via perception of others' speech alone. Here we asked participants to identify speech sounds while viewing matching or mismatching visual representations of tongue movements. Vision of congruent tongue movements facilitated auditory speech identification relative to incongruent trials. This result suggests that direct visual experience of an articulator's movement is not necessary for the generation of audiovisual mismatch effects. Furthermore, we suggest that audiovisual integration in speech may benefit from speech production learning.
Neuropsychologia | 2014
Eleonora Bartoli; Laura Maffongelli; Marco Jacono; Alessandro D’Ausilio
The term affordance defines a property of objects relating to the possible interactions that an agent can carry out on that object. In monkeys, canonical neurons encode both the visual and the motor properties of objects with high specificity. However, it is not clear whether a similarly fine-grained description of these visuomotor transformations exists in humans. In particular, it has not yet been shown that processing the visual features related to specific affordances induces visuomotor transformations that are both specific and early, given that complete specificity has been reported to emerge rather late (300–450 ms). In this study, we applied an adaptation-stimulation paradigm to investigate early cortico-spinal facilitation and hand movement synergies evoked by the observation of tools. We adapted, through passive observation of finger movements, neuronal populations coding for either precision or power grip actions. We then presented the picture of a tool affording one of the two grasp types and applied single-pulse Transcranial Magnetic Stimulation (TMS) to the hand primary motor cortex 150 ms after image onset. Cortico-spinal excitability of the Abductor Digiti Minimi and Abductor Pollicis Brevis showed a detailed pattern of modulations matching the tools' affordances. Similarly, TMS-induced hand movements showed a pattern of grip-specific whole-hand synergies. These results offer direct evidence that an early visuomotor transformation emerges when tools are observed, one that retains the same degree of synergistic motor detail as the actions we can perform on them.
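As a minimal sketch of the kind of excitability measure described above (not the authors' analysis code), the snippet below extracts peak-to-peak MEP amplitudes from EMG sweeps and expresses condition-specific cortico-spinal facilitation relative to a baseline condition; the MEP time window, sampling rate, and variable names are illustrative assumptions.

```python
# Hedged example (not the study's pipeline): peak-to-peak MEP amplitude and
# facilitation relative to baseline. Time window and names are assumptions.
import numpy as np

def mep_amplitude(emg_sweep, fs, window=(0.015, 0.045)):
    """Peak-to-peak amplitude in an assumed MEP window after the TMS pulse (at t = 0)."""
    t = np.arange(len(emg_sweep)) / fs
    seg = emg_sweep[(t >= window[0]) & (t <= window[1])]
    return seg.max() - seg.min()

def facilitation(condition_sweeps, baseline_sweeps, fs):
    """Mean MEP amplitude in a condition, normalized to baseline (1.0 = no change)."""
    cond = np.mean([mep_amplitude(s, fs) for s in condition_sweeps])
    base = np.mean([mep_amplitude(s, fs) for s in baseline_sweeps])
    return cond / base

# e.g. compare APB vs ADM facilitation when viewing precision-grip vs power-grip tools:
# apb_precision = facilitation(apb_sweeps_precision_tool, apb_sweeps_baseline, fs=5000)
```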
Neuropsychologia | 2015
Laura Maffongelli; Eleonora Bartoli; Daniela Sammler; Stefan Koelsch; Claudio Campus; Etienne Olivier; Luciano Fadiga; Alessandro D’Ausilio
Sentences, musical phrases and goal-directed actions are composed of elements that are linked by specific rules to form meaningful outcomes. In goal-directed actions, including a non-canonical element or scrambling the order of the elements alters the action's content or structure, respectively. In the present study we investigated event-related potentials (ERPs) of the electroencephalographic (EEG) activity recorded during observation of both alterations of action content (obtained by violating the semantic components of an action, e.g. making coffee with cola) and alterations of action structure (obtained by inverting the order of two temporally adjacent pictures in sequences depicting daily-life actions), which interfere with the normal flow of the motor acts composing an action. Action content alterations elicited a bilaterally and posteriorly distributed EEG negativity, peaking at around 400 ms after stimulus onset, similar to the ERPs evoked by semantic violations in language studies. Alteration of the action structure elicited an early left anterior negativity followed by a late left anterior positivity, closely resembling the ERP pattern found in language syntax violation studies. Our results suggest a functional dissociation between the processing of action content and structure, reminiscent of a similar dissociation found in the language and music domains. Importantly, this study provides further support for the hypothesis that some basic mechanisms, such as the rule-based structuring of sequential events, are shared between different cognitive domains.
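For readers unfamiliar with the ERP measures mentioned above, the following is a generic sketch (not the study's pipeline) of how condition-averaged ERPs and a violation-minus-control difference wave can be computed from epoched EEG; array shapes, the baseline window, and the sampling rate are assumptions.

```python
# Illustrative sketch only: baseline-corrected condition-average ERPs.
import numpy as np

def erp(epochs, fs, baseline=(-0.2, 0.0), t0=0.2):
    """epochs: (n_trials, n_channels, n_samples), stimulus onset t0 s into the epoch.
    Returns the baseline-corrected trial average (n_channels, n_samples)."""
    t = np.arange(epochs.shape[-1]) / fs - t0
    base = epochs[..., (t >= baseline[0]) & (t < baseline[1])].mean(axis=-1, keepdims=True)
    return (epochs - base).mean(axis=0)

# Difference wave used to visualize, e.g., a ~400 ms content-violation negativity:
# diff = erp(violation_epochs, fs=500) - erp(control_epochs, fs=500)
```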
Scientific Reports | 2016
Eleonora Bartoli; Laura Maffongelli; Claudio Campus; Alessandro D’Ausilio
During speech listening, motor regions are activated somatotopically, resembling the activity that underlies actual speech production and suggesting that motor commands can be retrieved from sensory inputs. Crucially, efficient motor control of the articulators relies on accurate anticipation of the somatosensory reafference. Nevertheless, evidence on somatosensory activity elicited by auditory speech processing is sparse. The present work looked for specific interactions between auditory speech presentation and somatosensory cortical information processing. We used an auditory speech identification task with sounds having different places of articulation (bilabials and dentals). We tested whether coupling the auditory task with peripheral electrical stimulation of the lips would affect the pattern of sensorimotor electroencephalographic rhythms. Peripheral electrical stimulation elicits a series of spectral perturbations, of which the beta rebound reflects the return-to-baseline stage of somatosensory processing. We show a left-lateralized and selective reduction in the beta rebound following lip somatosensory stimulation when listening to speech sounds produced with the lips (i.e. bilabials). Thus, somatosensory processing could not return to baseline because the same neural resources were recruited by the speech stimuli. Our results are a clear demonstration that heard speech sounds are somatotopically mapped onto somatosensory cortices, according to place of articulation.
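The beta rebound quantification referred to above can be illustrated with a minimal sketch that is not taken from the paper: beta-band power is estimated via band-pass filtering and a Hilbert envelope, then expressed as percent change of a post-stimulation window relative to a pre-stimulation baseline. The band limits, window bounds, epoch timing, and sampling rate are all assumptions.

```python
# Minimal sketch (not the authors' pipeline): baseline-normalized beta-band
# power around a somatosensory stimulation pulse.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def beta_rebound_index(epochs, fs, baseline=(-0.5, 0.0), rebound=(0.5, 1.0),
                       band=(15.0, 25.0)):
    """epochs: (n_trials, n_samples), time-locked to the lip stimulation.
    Returns mean % power change in the rebound window relative to baseline."""
    nyq = fs / 2.0
    b, a = butter(4, [band[0] / nyq, band[1] / nyq], btype="band")
    filtered = filtfilt(b, a, epochs, axis=-1)
    power = np.abs(hilbert(filtered, axis=-1)) ** 2      # instantaneous beta power
    t = np.arange(epochs.shape[-1]) / fs - 1.0           # assumes the epoch starts at -1 s
    base = power[:, (t >= baseline[0]) & (t < baseline[1])].mean()
    reb = power[:, (t >= rebound[0]) & (t < rebound[1])].mean()
    return 100.0 * (reb - base) / base                   # positive value = rebound above baseline

# Usage idea: compare the index between bilabial and dental listening conditions
# rebound_bilabial = beta_rebound_index(epochs_bilabial, fs=1000)
# rebound_dental   = beta_rebound_index(epochs_dental, fs=1000)
```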
Human Brain Mapping | 2018
Eleonora Bartoli; Adam R. Aron; Nitin Tandon
Stopping an incipient action activates both the right inferior frontal cortex (rIFC) and the anterior insula (rAI). Controversy has arisen as to whether these comprise a unitary cortical cluster (the rIFC/rAI) or whether the rIFC is the primary stopping locus. To address this, we recorded directly from these structures while taking advantage of the high spatiotemporal resolution of closely spaced stereo-electro-encephalographic (SEEG) electrodes. We studied 12 patients performing a stop-signal task. On each trial they initiated a motor response (Go) and tried to stop it in response to an occasional stop signal. Both the rIFC and rAI exhibited an increase in broadband gamma activity (BGA) after the stop signal and within the time of stopping (stop-signal reaction time, SSRT), regardless of the success of stopping. The proportion of electrodes with this response was significantly greater in the rIFC than in the rAI. Also, the rIFC response preceded that in the rAI. Last, while the BGA increase in the rIFC occurred mainly prior to the SSRT, the rAI showed a sustained increase in the beta and low gamma bands after the SSRT. In summary, the rIFC was activated soon after the stop signal, prior to and more robustly than the rAI, which on the other hand showed a more prolonged response after the onset of stopping. Our results are most compatible with the notion that the rIFC is involved in triggering outright stopping in concert with a wider network, while the rAI is likely engaged by other processes, such as arousal, saliency, or behavioral adjustments. Hum Brain Mapp 39:189–203, 2018.
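The stop-signal reaction time mentioned above is conventionally estimated from the race model; the sketch below shows the standard integration method in a generic form (not the authors' code), with variable names and trial structure assumed for illustration.

```python
# Hedged illustration: SSRT via the integration method of the race model.
# go_rts: RTs (s) on go trials; ssd: staircased stop-signal delays (s);
# stop_success: booleans for stop trials. All names are assumptions.
import numpy as np

def estimate_ssrt(go_rts, ssd, stop_success):
    go_rts = np.sort(np.asarray(go_rts, dtype=float))
    p_respond = 1.0 - np.mean(stop_success)          # probability of failing to stop
    # nth quantile of the go-RT distribution, where n = P(respond | stop signal)
    nth_rt = np.quantile(go_rts, p_respond)
    return nth_rt - np.mean(ssd)                     # SSRT = nth go RT - mean SSD

# Example with made-up numbers:
# ssrt = estimate_ssrt(go_rts=[0.41, 0.45, 0.52], ssd=[0.20, 0.25, 0.25],
#                      stop_success=[True, False, True])
```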
Cerebral Cortex | 2018
Eleonora Bartoli; Christopher R. Conner; Cihan Mehmet Kadipasaoglu; Sudha Yellapantula; Matthew Rollo; Cameron S. Carter; Nitin Tandon
Cognitive control refers to the ability to produce flexible, goal-oriented behavior in the face of changing task demands and conflicting response tendencies. A classic cognitive control experiment is the Stroop color-naming task, which requires participants to name the color in which a word is written while inhibiting the tendency to read the word. By comparing stimuli with conflicting word-color associations to congruent ones, control processes over response tendencies can be isolated. We assessed the spatial specificity and temporal dynamics in the theta and gamma bands of regions engaged in detecting and resolving conflict in a cohort of 13 patients, using a combination of high-resolution surface and depth recordings. We show that cognitive control manifests as a sustained increase in gamma-band power, which correlates with response time. Conflict elicits a sustained gamma power increase but a transient theta power increase, specifically localized to the left cingulate sulcus and bilateral dorsolateral prefrontal cortex (DLPFC). Additionally, activity in the DLPFC is affected by trial-by-trial modulation of cognitive control (the Gratton effect). Altogether, the sustained local neural activity in dorsolateral and medial regions determines the timing of the correct response.
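As an illustration of the behavioral quantities referenced above, the sketch below computes the Stroop congruency effect and the trial-by-trial Gratton effect (congruency sequence effect) from response times; it is a generic example under assumed column names, not the analysis used in the paper.

```python
# Illustrative sketch only: Stroop congruency effect and Gratton effect.
import pandas as pd

def stroop_effects(df):
    """df: one row per trial with columns 'rt' (s) and 'congruent' (bool),
    in presentation order for a single participant."""
    congruency_effect = df.loc[~df.congruent, "rt"].mean() - df.loc[df.congruent, "rt"].mean()
    prev = df.congruent.shift(1)                       # congruency of the preceding trial
    valid = prev.notna()                               # drop the first trial
    prev = prev.fillna(False).astype(bool)
    # conflict cost after congruent vs. after incongruent trials
    effect_after_c = (df[valid & prev & ~df.congruent].rt.mean()
                      - df[valid & prev & df.congruent].rt.mean())
    effect_after_i = (df[valid & ~prev & ~df.congruent].rt.mean()
                      - df[valid & ~prev & df.congruent].rt.mean())
    gratton_effect = effect_after_c - effect_after_i   # smaller conflict cost after incongruent trials
    return congruency_effect, gratton_effect
```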
Physics of Life Reviews | 2015
Alessandro D'Ausilio; Eleonora Bartoli; Laura Maffongelli
We are grateful to all commentators for their insightful commentaries and observations, which enrich our proposal. One of our aims was indeed to bridge the gap between fields of research that, progressing independently, are facing similar issues regarding the neural representation of motor knowledge. In this respect, we were pleased to receive feedback from eminent researchers in both the mirror neuron and motor control fields. Their expertise covers animal and human neurophysiology, as well as the computational modeling of neural and behavioral processes. Given their heterogeneous cultural perspectives and research approaches, a number of important open questions were raised. For simplicity we separated these issues into four sections. In the first section we present methodological aspects regarding how synergies can be measured in paradigms investigating the human mirror system. The second section concerns the fundamental definition of what exactly synergies might be. The third concerns how synergies can generate testable predictions in mirror neuron research. Finally, the fourth section deals with the ultimate question regarding the function of the mirror neuron system. Before discussing the important observations raised by commentators (Enticott [1], Frey and Chen [2], Naish and Holmes [3], Casile [4], Pezzulo, Donnarumma, Iodice, Prevete and Dindo [5], Santello [6], Swinnen and Alaerts [7], Cattaneo [8], Candidi, Sacheli and Aglioti [9], Cavallo, Ansuini and Becchio [10], de C. Hamilton [11]), we wish to stress the almost unanimous awareness that we do indeed have a problem. Human mirror neuron research has almost ended up in a theoretical cul-de-sac, and we are in need of new falsifiable models of the function of this system [12]. We are very pleased to observe that our aim to infuse some fresh blood, coming from more mature fields of research, was appreciated by almost all commentators, giving rise to intriguing new suggestions.
Neuropsychologia | 2018
Judith Schmitz; Eleonora Bartoli; Laura Maffongelli; Luciano Fadiga; Núria Sebastián-Gallés; Alessandro D’Ausilio
Listening to speech has been shown to activate motor regions, as measured by corticobulbar excitability. In this experiment, we explored whether motor regions are also recruited while listening to non-native speech, for which we lack both sensory and motor experience. By administering Transcranial Magnetic Stimulation (TMS) over the left motor cortex, we recorded corticobulbar excitability of the lip muscles while Italian participants listened to native-like and non-native German vowels. Results showed that lip corticobulbar excitability increased for a combination of lip use during articulation and non-nativeness of the vowels. Lip corticobulbar excitability was further related to measures obtained in perception and production tasks, showing a negative relationship with nativeness ratings and a positive relationship with the uncertainty of lip movement during production of the vowels. These results suggest an active and compensatory role of the motor system when listening to perceptually and articulatorily unfamiliar phonemes.
IEEE Transactions on Neural Systems and Rehabilitation Engineering | 2017
Eleonora Bartoli; Francesca Caso; Giuseppe Magnani; Gabriel Baud-Bovy
A low-cost robotic interface was used to assess the visuo-motor performance of patients with Alzheimer's disease (AD). Twenty AD patients and twenty age-matched controls participated in this work. The battery of tests included simple reaction time, position tracking, and stabilization tasks performed with both hands. Movement regularity, velocity, and visual and haptic feedback were manipulated to vary movement complexity. Reaction times and movement tracking errors were analyzed. Results show a marked group effect on a subset of conditions, in particular when patients could not rely on visual feedback of hand movement. Visuo-motor performance correlated with measures of global cognitive functioning and with different memory-related abilities. Our results support the hypothesis that the ability to recall and use visuo-spatial associations might underlie the impairment in complex motor behavior that has been reported in AD patients. Importantly, the patients showed preserved learning effects across sessions, which might explain why visuo-motor deficits are less evident in everyday life and clinical assessments. This robotic assessment, lasting less than 1 h, provides detailed information about the integrity of visuo-motor abilities. The data can aid the understanding of the complex pattern of deficits that characterizes this pervasive disease.
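A minimal sketch, under assumed data formats, of the two behavioral measures described above (simple reaction time and position-tracking error); the variable names and array layouts are illustrative and not taken from the published setup.

```python
# Hedged sketch only: behavioral measures for a robotic visuo-motor assessment.
import numpy as np

def reaction_time(stimulus_onsets, response_times):
    """Mean latency (s) between each stimulus onset and the first following response.
    Assumes every stimulus is eventually followed by a response."""
    rts = [min(r - s for r in response_times if r > s) for s in stimulus_onsets]
    return float(np.mean(rts))

def tracking_error(target_pos, hand_pos):
    """Root-mean-square Euclidean distance between the moving target and the
    cursor controlled by the robotic handle; inputs are (n_samples, 2) arrays."""
    target_pos, hand_pos = np.asarray(target_pos), np.asarray(hand_pos)
    return float(np.sqrt(np.mean(np.sum((target_pos - hand_pos) ** 2, axis=-1))))
```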