
Publication


Featured research published by Paolo Bernardis.


Neuropsychologia | 2006

Speech and gesture share the same communication system

Paolo Bernardis; Maurizio Gentilucci

Humans speak and produce symbolic gestures. Do these two forms of communication interact, and how? First, we tested whether the two communication signals influenced each other when emitted simultaneously. Participants either pronounced words, or executed symbolic gestures, or emitted the two communication signals simultaneously. Relative to the unimodal conditions, multimodal voice spectra were enhanced by gestures, whereas multimodal gesture parameters were reduced by words. In other words, gesture reinforced word, whereas word inhibited gesture. In contrast, aimless arm movements and pseudo-words had no comparable effects. Next, we tested whether observing word pronunciation during gesture execution affected verbal responses in the same way as emitting the two signals. Participants responded verbally to either spoken words, or to gestures, or to the simultaneous presentation of the two signals. We observed the same reinforcement in the voice spectra as during simultaneous emission. These results suggest that spoken word and symbolic gesture are coded as a single signal by a unique communication system. This signal represents the intention to engage in a closer interaction with a hypothetical interlocutor, and it may carry a meaning different from that of word and gesture encoded singly.


Neuroscience & Biobehavioral Reviews | 2008

Visually guided pointing, the Müller-Lyer illusion, and the functional interpretation of the dorsal-ventral split: Conclusions from 33 independent studies

Nicola Bruno; Paolo Bernardis; Maurizio Gentilucci

Models of human vision propose a division of labor between vision-for-action (identified with the V1-PPT dorsal stream) and vision-for-perception (the V1-IT ventral stream). The idea has been successful in explaining a host of neuropsychological and behavioral data, but has remained controversial in predicting that visually guided actions should be immune from visual illusions. Here we evaluate this prediction by reanalyzing 33 independent studies of rapid pointing involving the Müller-Lyer or related illusions. We find that illusion effects vary widely across studies, from around zero to comparable to perceptual effects. After examining several candidate factors both between and within participants, we show that almost 80% of this variability is well explained by two general concepts. The first is that the illusion has little effect when pointing is programmed from viewing the target rather than from memory. The second is that the illusion effect is weakened when participants learn to selectively attend to target locations over repeated trials. These results are largely in accord with the vision-for-action vs. vision-for-perception distinction. However, they also suggest a potential involvement of learning and attentional processes during motor preparation. Whether these are specific to visuomotor mechanisms or shared with vision-for-perception remains to be established.


Journal of Cognitive Neuroscience | 2006

Repetitive Transcranial Magnetic Stimulation of Broca's Area Affects Verbal Responses to Gesture Observation

Maurizio Gentilucci; Paolo Bernardis; Girolamo Crisi; Riccardo Dalla Volta

The aim of the present study was to determine whether Broca's area is involved in translating some aspects of arm gesture representations into mouth articulation gestures. In Experiment 1, we applied low-frequency repetitive transcranial magnetic stimulation over Broca's area and over the symmetrical loci of the right hemisphere of participants responding verbally to communicative spoken words, to gestures, or to the simultaneous presentation of the two signals. We also performed sham stimulation over the left stimulation loci. In Experiment 2, we applied the same stimulations as in Experiment 1 to participants responding with words congruent and incongruent with gestures. After sham stimulation, voicing parameters were enhanced when responding to communicative spoken words or to gestures as compared to a control condition of word reading. This effect increased when participants responded to the simultaneous presentation of both communicative signals. In contrast, voicing was subject to interference when the verbal responses were incongruent with gestures. The left stimulation neither enhanced the voicing parameters of words congruent with gestures nor interfered with words incongruent with gestures. We interpreted the enhancement of the verbal response to gesturing in terms of an intention to interact directly. Consequently, we proposed that Broca's area is involved in the process of translating into speech aspects of the social intention coded by the gesture. Moreover, we discussed the results in evolutionary terms, supporting the theory [Corballis, M. C. (2002). From hand to mouth: The origins of language. Princeton, NJ: Princeton University Press] that spoken language evolved from an ancient communication system using arm gestures.


Cognitive Neuropsychology | 2008

Behavioural and neurophysiological evidence of semantic interaction between iconic gestures and words

Paolo Bernardis; Elena Salillas; Nicoletta Caramelli

We report two experiments that provide converging behavioural and neurophysiological evidence on the relationship between the meaning of iconic gestures and words. Experiment 1 exploited a semantic priming paradigm and revealed interference between gestures and words when they were not related in meaning, but no facilitation when they were. This result was confirmed in Experiment 2, where ERPs were recorded during silent word reading with the same paradigm. The analysis showed a negative deflection peaking near 400 ms (N400) and, in the left hemisphere, greater negative values for verbs than for nouns. Unlike the classical distribution obtained with verbal stimuli, we found an N400 that spread more over the central-anterior areas of the scalp, suggesting that the meaning systems of gesture and language do not overlap completely. These results are consistent with the view that the meaning systems for gesture and speech are tightly integrated.


Experimental Brain Research | 2005

How does action resist visual illusion? Uncorrected oculomotor information does not account for accurate pointing in peripersonal space

Paolo Bernardis; Paul C. Knox; Nicola Bruno

Using spatially identical displays (variants of the Müller-Lyer illusion), we compared the accuracy of spatial verbal judgments with that of saccadic (eye) and pointing (hand) movements. Verbal judgments showed a clear effect of the illusion. The amplitude of the primary saccade from one endpoint of the pattern (at fixation) to the other also showed an effect of the illusion. Conversely, movement amplitudes when pointing from one endpoint (initial finger position) to the other were significantly more accurate than both saccades and verbal responses. In a control experiment in which the viewing conditions between the saccade and pointing experiments were equalized, saccade amplitude was again affected by the illusion. In several studies, systematic biases in conscious spatial judgments have been contrasted with accurate open-loop pointing in peripersonal space. It has been proposed that such seeming dissociations between vision-for-action and vision-for-consciousness might in fact be due to a simple oculomotor strategy: saccade to the target before it disappears, then use the efference copy of the (accurate) saccadic movement to drive pointing. The present data do not support the hypothesis in this simple form.


Cognition | 2014

Functional dissociations in temporal preparation: Evidence from dual-task performance

Antonino Vallesi; Sandra Arbula; Paolo Bernardis

Implicit preparation over time is a complex cognitive capacity important to optimize behavioral responses to a target occurring after a temporal interval, the so-called foreperiod (FP). If the FP occurs randomly and with the same a priori probability, shorter response times are usually observed with longer FPs than with shorter ones (FP effect). Moreover, responses are slower when the preceding FP was longer than the current one (sequential effects). It is still a matter of debate how different processes influence these temporal preparation phenomena. The present study used a dual-task procedure to understand how different processes, along the automatic-controlled continuum, may contribute to these temporal preparation phenomena. Dual-task demands were manipulated in two experiments using a subtraction task during the FP. This secondary task was administered in blocks (Experiment 1) or was embedded together with a baseline single-task in the same experimental session (Experiment 2). The results consistently showed that the size of the FP effect, but not that of sequential effects, is sensitive to dual-task manipulations. This functional dissociation unveils the multi-faceted nature of implicit temporal preparation: while the FP effect is due to a controlled, resource-consuming preparatory mechanism, a more automatic mechanism underlies sequential effects.


Attention Perception & Psychophysics | 1997

Lightness, equivalent backgrounds, and anchoring

Nicola Bruno; Paolo Bernardis; James A. Schirillo

Observers compared two center/surround configurations haploscopically. One configuration consisted of a standard surface surrounded by two, three, or four surfaces, each with a different luminance. The other configuration consisted of a comparison surface surrounded by a single annulus that varied in luminance. Center surfaces always had the same luminance but only appeared to have the same lightness with certain annuli (equivalent backgrounds). For most displays, the luminance needed to obtain an equivalent background was close to the highest luminance in the standard surround configuration. Models based on the space-average luminance or the space-average contrast of the standard surround configuration yielded poorer fits. Implications for computational models of lightness and for candidate solutions to the anchoring problem are discussed.


Psychonomic Bulletin & Review | 2002

Dissociating perception and action in Kanizsa's compression illusion

Nicola Bruno; Paolo Bernardis

When a horizontally elongated surface is occluded in the middle by a larger surface, it appears narrower than its true width (Kanizsa’s compression illusion). We report that a similar compression effect occurs for closed-loop visuomotor matches of size, but not for otherwise comparable open-loop “mimed” reaching or size-matching visuomotor responses. Our study is the first in which a comparison of size perception in personal space with bilateral actions performed with both hands (instead of precision grips employing the thumb and the index finger) is used to investigate motor responses to Kanizsa’s compression illusion. Implications for the current debate on the existence of dissociations between spatial perception and visually controlled actions in personal space are discussed.


Neuropsychologia | 2009

The observation of manual grasp actions affects the control of speech: A combined behavioral and Transcranial Magnetic Stimulation study

Maurizio Gentilucci; Giovanna Cristina Campione; Riccardo Dalla Volta; Paolo Bernardis

Does the mirror system affect the control of speech? This issue was addressed in behavioral and Transcranial Magnetic Stimulation (TMS) experiments. In behavioral Experiment 1, participants pronounced the syllable /da/ while observing (1) a hand grasping large and small objects with power and precision grasps, respectively, (2) a foot interacting with large and small objects, and (3) differently sized objects presented alone. Voice formant 1 was higher when observing power as compared to precision grasp, whereas it remained unaffected by observation of the different types of foot interaction and of objects alone. In TMS Experiment 2, we stimulated hand motor cortex while participants observed the two types of grasp. Motor Evoked Potentials (MEPs) of hand muscles active during the two types of grasp were greater when observing power than precision grasp. In Experiments 3-5, TMS was applied to tongue motor cortex of participants silently pronouncing the syllable /da/ and simultaneously observing power and precision grasps, pantomimes of the two types of grasps, and differently sized objects presented alone. Tongue MEPs were greater when observing power than precision grasp, whether executed or pantomimed. Finally, in TMS Experiment 6, the observation of foot interaction with large and small objects did not modulate tongue MEPs. We hypothesized that grasp observation activated motor commands to the mouth as well as to the hand that were congruent with the hand kinematics implemented in the observed type of grasp. The commands to the mouth selectively affected postures of phonation organs and consequently basic features of phonological units.


Experimental Brain Research | 2007

On the relations between affordance and representation of the agent’s effector

Filippo Barbieri; Antimo Buonocore; Paolo Bernardis; Riccardo Dalla Volta; Maurizio Gentilucci

The present study aimed to determine whether the representation of object affordances requires specification of the effector potentially interacting with the object: specifically, in this study, vision of the interacting hand. In Experiment 1 we used an apparatus by which a fruit to be reached and grasped was identified by word reading, whereas another (interfering) fruit was visually perceived at the same location as the target. The apparatus allowed visual presentation of the agent’s interacting hand or prevented it. When visually presented, the hand was perceived as still at the start position even when it moved to grasp the fruit. An interference effect on the grasp congruent with the distractor size was observed only when the hand was visible. In Experiment 2, interference was observed also when a hand different from the agent’s own was visually presented. In both Experiments 1 and 2 the visible fruit interfered with the arm’s reach, but the effect was independent of its size and less dependent on the visually-presented hand. A control experiment (Experiment 3) enabled comparison of the interference of visual stimuli on targets identified by word reading (Experiments 1 and 2) with that of objects identified by word reading on visually-presented targets (Experiment 3). The interference induced by visual stimuli was stronger than the interference induced by objects identified by words (i.e. affordances evoked by visual stimuli were stronger than affordances evoked by semantics). Taken together, the results of the present study suggest that the specification of the agent’s effector is necessary for the elicitation of affordances. However, the elicitation of these affordances was observed for interactions between object and hand (grasp), rather than for interactions between object and arm (reach). Finally, our data confirm the influence of semantics on the control of arm movements, though less strong than that due to visual input.

Collaboration


Dive into Paolo Bernardis's collaborations.
