Publication


Featured research published by Maria Grazia Di Bono.


NeuroImage | 2011

Distinct representations of numerical and non-numerical order in the human intraparietal sulcus revealed by multivariate pattern recognition.

Marco Zorzi; Maria Grazia Di Bono; Wim Fias

Neuroimaging studies of numerical cognition have pointed to the horizontal segment of the intraparietal sulcus (hIPS) as the neural correlate of numerical representations in humans. However, the specificity of hIPS for numbers remains controversial. For example, its activation during numerical comparison cannot be distinguished from activation during ordinal judgments on non-numerical sequences such as letters (Fias et al., 2007, J. Neuroscience). Based on the hypothesis that the fine-grained distinction between representations of numerical vs. letter order in hIPS might simply be invisible to conventional fMRI data analysis, we used support vector machines (SVM) to reanalyse the data of Fias et al. (2007). We show that classifiers trained on hIPS voxels can discriminate between number comparison and letter comparison, even though the two tasks produce the same behavioural metric. Voxels discriminating between the two conditions were consistent across subjects, and contribution analysis revealed maps of distinct sets of voxels implicated in the processing of numerical vs. alphabetical order in bilateral hIPS. These results reconcile the neuroimaging data with the neuropsychological evidence suggesting dissociations between numbers and other non-numerical ordered sequences, and demonstrate that multivariate analyses are fundamental for addressing fine-grained theoretical issues in fMRI studies.
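
The decoding approach summarized above can be sketched in a few lines of code. The following is a minimal, illustrative example of SVM-based multivoxel pattern classification with leave-one-run-out cross-validation; the synthetic data, array shapes, and run structure are assumptions for illustration and do not reproduce the authors' analysis pipeline.

```python
# Minimal sketch of SVM-based multivoxel pattern decoding on region-of-interest
# voxel patterns, with leave-one-run-out cross-validation. The data below are
# synthetic stand-ins; shapes, run structure, and labels are assumptions.
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Hypothetical dataset: 80 trials x 200 hIPS voxels, two conditions
# (0 = number comparison, 1 = letter comparison), collected over 8 runs.
X = rng.normal(size=(80, 200))                  # trial-wise activity patterns
y = np.repeat([0, 1], 40)                       # condition labels
runs = np.tile(np.repeat(np.arange(8), 5), 2)   # run label for each trial

# Linear SVM with per-voxel standardisation; leave-one-run-out cross-validation
# keeps training and test trials from the same run strictly separated.
clf = make_pipeline(StandardScaler(), SVC(kernel="linear", C=1.0))
scores = cross_val_score(clf, X, y, groups=runs, cv=LeaveOneGroupOut())

print(f"mean decoding accuracy: {scores.mean():.2f} (chance = 0.50)")
```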


Journal of Experimental Psychology: Human Perception and Performance | 2012

Priming the mental time line.

Maria Grazia Di Bono; Marco Casarotti; Konstantinos Priftis; Lucia Gava; Carlo Umiltà; Marco Zorzi

Growing experimental evidence suggests that temporal events are represented on a mental time line, spatially oriented from left to right. Support for the spatial representation of time comes mostly from studies that have used spatially organized responses. Moreover, many of these studies did not avoid possible confounds attributable to target stimuli that simultaneously convey both spatial and temporal dimensions. Here we show that task-irrelevant, lateralized visuospatial primes affect auditory duration judgments. Responses to short durations were faster when the auditory target was paired with left- than with right-sided primes, whereas responses to long durations were faster when paired with right- than with left-sided primes. Thus, when the representations of physical space and time are concurrently activated, physical space may influence time even when a lateralized, spatially encoded response is not required by the task. The time-space interaction reported here cannot be ascribed to any Spatial-Temporal Association of Response Codes effect. It supports the hypothesis that the representation of time is spatially organized, with short durations represented in left space and longer ones in right space.
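
As an illustration of how such a prime-target congruency effect can be quantified, the sketch below compares mean reaction times on congruent trials (left prime with a short duration, right prime with a long duration) against incongruent trials; the trial data and column names are fabricated for illustration and are not taken from the study.

```python
# Minimal sketch of a prime-target congruency analysis for duration judgments,
# assuming trial-level reaction times with prime side and judged duration
# recorded per trial. The data below are fabricated for illustration.
import pandas as pd

trials = pd.DataFrame({
    "prime_side": ["left", "left", "right", "right", "left", "right", "left", "right"],
    "duration":   ["short", "long", "short", "long", "short", "short", "long", "long"],
    "rt_ms":      [512, 588, 579, 521, 505, 570, 595, 515],
})

# Congruent trials: left prime with a short duration, right prime with a long one.
is_congruent = (
    ((trials.prime_side == "left") & (trials.duration == "short"))
    | ((trials.prime_side == "right") & (trials.duration == "long"))
)
trials["congruency"] = is_congruent.map({True: "congruent", False: "incongruent"})

mean_rt = trials.groupby("congruency")["rt_ms"].mean()
print(mean_rt)
print(f"congruency effect: {mean_rt['incongruent'] - mean_rt['congruent']:.1f} ms")
```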


Frontiers in Psychology | 2013

Deep generative learning of location-invariant visual word recognition.

Maria Grazia Di Bono; Marco Zorzi

It is widely believed that orthographic processing implies an approximate, flexible coding of letter position, as shown by relative-position and transposition priming effects in visual word recognition. These findings have inspired alternative proposals about the representation of letter position, ranging from noisy coding across the ordinal positions to relative position coding based on open bigrams. This debate can be cast within the broader problem of learning location-invariant representations of written words, that is, a coding scheme abstracting the identity and position of letters (and combinations of letters) from their eye-centered (i.e., retinal) locations. We asked whether location-invariance would emerge from deep unsupervised learning on letter strings and what type of intermediate coding would emerge in the resulting hierarchical generative model. We trained a deep network with three hidden layers on an artificial dataset of letter strings presented at five possible retinal locations. Though word-level information (i.e., word identity) was never provided to the network during training, linear decoding from the activity of the deepest hidden layer yielded near-perfect accuracy in location-invariant word recognition. Conversely, decoding from lower layers yielded a large number of transposition errors. Analyses of emergent internal representations showed that word selectivity and location invariance increased as a function of layer depth. Word-tuning and location-invariance were found at the level of single neurons, but there was no evidence for bigram coding. Finally, the distributed internal representation of words at the deepest layer showed higher similarity to the representation elicited by the two exterior letters than by other combinations of two contiguous letters, in agreement with the hypothesis that word edges have special status. These results reveal that the efficient coding of written words—which was the model's learning objective—is largely based on letter-level information.
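
The pipeline described above, unsupervised learning of hierarchical representations followed by linear decoding of word identity, can be illustrated with a toy sketch. The study used a deep generative model trained on letter strings; here, two stacked Bernoulli RBMs from scikit-learn stand in for that architecture, and the miniature "retina", word list, and layer sizes are assumptions chosen only to make the example self-contained and runnable.

```python
# Toy sketch of unsupervised feature learning on letter strings followed by
# linear decoding of word identity across retinal positions. Two stacked RBMs
# stand in for the deep generative model used in the study; all sizes and the
# one-hot "retina" encoding are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import BernoulliRBM

ALPHABET = "abcdefghijklmnopqrstuvwxyz"
WORDS = ["cat", "act", "tac", "dog", "god", "rat", "tar", "art"]
N_SLOTS = 7                # retinal slots; a 3-letter word can start at slots 0-4
SLOT_DIM = len(ALPHABET)

def encode(word, start):
    """Place a word on a one-hot 'retina' at a given start slot."""
    retina = np.zeros((N_SLOTS, SLOT_DIM))
    for i, ch in enumerate(word):
        retina[start + i, ALPHABET.index(ch)] = 1.0
    return retina.ravel()

# Dataset: each word presented at every possible retinal location.
X = np.array([encode(w, s) for w in WORDS for s in range(N_SLOTS - 2)])
y = np.array([wi for wi, _ in enumerate(WORDS) for _ in range(N_SLOTS - 2)])

# Unsupervised pre-training: two stacked RBMs (word identity never used here).
rbm1 = BernoulliRBM(n_components=100, learning_rate=0.05, n_iter=50, random_state=0)
rbm2 = BernoulliRBM(n_components=50, learning_rate=0.05, n_iter=50, random_state=0)
H1 = rbm1.fit_transform(X)
H2 = rbm2.fit_transform(H1)

# Linear read-out from the deepest hidden layer: can word identity be decoded
# irrespective of retinal location?
Xtr, Xte, ytr, yte = train_test_split(H2, y, test_size=0.3, stratify=y, random_state=0)
readout = LogisticRegression(max_iter=1000).fit(Xtr, ytr)
print(f"location-invariant word decoding accuracy: {readout.score(Xte, yte):.2f}")
```

With such a small toy dataset the exact accuracy is not meaningful; the point is the logic of the analysis: word identity is never used during unsupervised training and is only read out linearly from the deepest hidden layer, pooled across retinal positions.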


PLOS ONE | 2011

Numerosity Estimation in Visual Stimuli in the Absence of Luminance-Based Cues

Peter Kramer; Maria Grazia Di Bono; Marco Zorzi

Background: Numerosity estimation is a basic preverbal ability that humans share with many animal species and that is believed to be foundational of numeracy skills. It is notoriously difficult, however, to establish whether numerosity estimation is based on numerosity itself, or on one or more non-numerical cues like—in visual stimuli—spatial extent and density. Frequently, different non-numerical cues are held constant on different trials. This strategy, however, still allows numerosity estimation to be based on a combination of non-numerical cues rather than on any particular one by itself.

Methodology/Principal Findings: Here we introduce a novel method, based on second-order (contrast-based) visual motion, to create stimuli that exclude all first-order (luminance-based) cues to numerosity. We show that numerosities can be estimated almost as well in second-order motion as in first-order motion.

Conclusions/Significance: The results show that numerosity estimation need not be based on first-order spatial filtering, first-order density perception, or any other processing of luminance-based cues to numerosity. Our method can be used as an effective tool to control non-numerical variables in studies of numerosity estimation.
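
The core idea of a second-order (contrast-defined) stimulus can be illustrated with a short sketch: items are defined by local increases in the contrast of a zero-mean noise carrier, so mean luminance carries no information about item number. The frame below is static and all parameters are illustrative assumptions; the study used second-order motion, i.e., a moving contrast envelope over dynamic noise.

```python
# Minimal sketch of a second-order (contrast-defined) stimulus frame: items are
# regions where the contrast of a luminance-balanced noise carrier is raised.
# Sizes, item counts, and contrast levels are illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(0)

H, W = 200, 200
N_ITEMS = 6
ITEM_RADIUS = 12
LOW_CONTRAST, HIGH_CONTRAST = 0.1, 0.9

# Carrier: zero-mean binary noise (+1/-1), so mean luminance is constant.
carrier = rng.choice([-1.0, 1.0], size=(H, W))

# Contrast envelope: high contrast inside item discs, low contrast elsewhere.
envelope = np.full((H, W), LOW_CONTRAST)
yy, xx = np.mgrid[0:H, 0:W]
centres = rng.integers(ITEM_RADIUS, [H - ITEM_RADIUS, W - ITEM_RADIUS], size=(N_ITEMS, 2))
for cy, cx in centres:
    envelope[(yy - cy) ** 2 + (xx - cx) ** 2 <= ITEM_RADIUS ** 2] = HIGH_CONTRAST

frame = 0.5 + 0.5 * envelope * carrier   # luminance values in [0, 1]

# Sanity check: mean luminance inside and outside the items is (nearly) the same,
# so first-order (luminance-based) cues to numerosity are absent.
inside = envelope == HIGH_CONTRAST
print(f"mean luminance inside items:  {frame[inside].mean():.3f}")
print(f"mean luminance outside items: {frame[~inside].mean():.3f}")
```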


Quarterly Journal of Experimental Psychology | 2013

The spatial representation of numerical and non-numerical ordered sequences: Insights from a random generation task

Maria Grazia Di Bono; Marco Zorzi

It is widely believed that numbers are spatially represented from left to right on the mental number line. Whether this spatial format of representation is specific to numbers or is shared by non-numerical ordered sequences remains controversial. When healthy participants are asked to randomly generate digits they show a systematic small-number bias that has been interpreted in terms of “pseudoneglect in number space”. Here we used a random generation task to compare numerical and non-numerical order. Participants performed the task at three different pacing rates and with three types of stimuli (numbers, letters, and months). In addition to a small-number bias for numbers, we observed a bias towards “early” items for letters and no bias for months. The spatial biases for numbers and letters were rate independent and similar in size, but they did not correlate across participants. Moreover, letter generation was qualified by a systematic forward direction along the sequence, suggesting that the ordinal dimension was more salient for letters than for numbers in a task that did not require its explicit processing. The dissociation between numerical and non-numerical orders is consistent with electrophysiological and neuroimaging studies and suggests that they rely on at least partially different mechanisms.
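
As a simple illustration of how a small-number (or "early-item") bias can be quantified in a random generation task, the sketch below compares the mean of a generated digit sequence against the midpoint of the range. The sequence is fabricated for illustration, and the one-sample test is only one plausible way to measure the bias, not necessarily the analysis used in the study.

```python
# Minimal sketch of quantifying a small-number bias in random digit generation:
# compare the mean of the generated items against the midpoint of the range
# (digits 1-9, midpoint 5). The example sequence is fabricated, not study data.
import numpy as np
from scipy import stats

generated = np.array([2, 4, 1, 6, 3, 2, 5, 7, 1, 3, 4, 2, 8, 3, 5, 2, 6, 1, 4, 3])

midpoint = 5.0
bias = generated.mean() - midpoint        # negative = small-number bias
t, p = stats.ttest_1samp(generated, midpoint)

print(f"mean generated digit = {generated.mean():.2f}, bias = {bias:+.2f}")
print(f"one-sample t-test vs. midpoint: t = {t:.2f}, p = {p:.3f}")

# The same logic applies to letters or months after mapping items to ordinal
# positions (A = 1, ..., January = 1, ...), so a bias towards "early" items
# shows up as a negative deviation from the midpoint of the range.
```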


Brain and Behavior | 2015

Probing the reaching-grasping network in humans through multivoxel pattern decoding

Maria Grazia Di Bono; Chiara Begliomini; Umberto Castiello; Marco Zorzi

The quest for a putative human homolog of the reaching-grasping network identified in monkeys has been the focus of many neuropsychological and neuroimaging studies in recent years. These studies have shown that the network underlying reaching-only and reach-to-grasp movements includes the superior parieto-occipital cortex (SPOC), the anterior part of the human intraparietal sulcus (hAIP), the ventral and the dorsal portion of the premotor cortex, and the primary motor cortex (M1). Recent evidence for a wider frontoparietal network coding for different aspects of reaching-only and reach-to-grasp actions calls for a more fine-grained assessment of the reaching-grasping network in humans by exploiting pattern decoding methods (multivoxel pattern analysis, MVPA).


PLOS ONE | 2017

Decoding social intentions in human prehensile actions: Insights from a combined kinematics-fMRI study

Maria Grazia Di Bono; Chiara Begliomini; Sanja Budisavljevic; Luisa Sartori; Diego Miotto; Raffaella Motta; Umberto Castiello

Consistent evidence suggests that the way we reach and grasp an object is modulated not only by object properties (e.g., size, shape, texture, fragility and weight), but also by the type of intention driving the action, including the intention to interact with another agent (i.e., social intention). Action observation studies ascribe the neural substrate of this ‘intentional’ component to the putative mirror neuron system (pMNS) and the mentalizing system (MS). How social intentions are translated into executed actions, however, has yet to be addressed. We conducted a kinematic and a functional Magnetic Resonance Imaging (fMRI) study considering a reach-to-grasp movement performed towards the same object positioned at the same location but with different intentions: passing it to another person (social condition) or putting it on a concave base (individual condition). Kinematics showed that individual and social intentions are characterized by different profiles, with a slower movement at the level of both the reaching (i.e., arm movement) and the grasping (i.e., hand aperture) components. fMRI results showed that: (i) distinct voxel activity patterns for the social and the individual condition are present within the pMNS and the MS during action execution; (ii) decoding accuracies of regions belonging to the pMNS and the MS are correlated, suggesting that these two systems could interact for the generation of appropriate motor commands. Results are discussed in terms of motor simulation and inferential processes as part of a hierarchical generative model for action intention understanding and generation of appropriate motor commands.
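
Point (ii) above refers to a relationship between decoding accuracies across regions. A minimal sketch of that kind of analysis is shown below: per-subject decoding accuracies from one pMNS region and one MS region are correlated across subjects. The accuracy values are fabricated placeholders, not the study's data.

```python
# Minimal sketch of correlating per-subject decoding accuracies (social vs.
# individual intention) between a putative mirror-neuron system (pMNS) region
# and a mentalizing system (MS) region. Values are fabricated placeholders.
import numpy as np
from scipy.stats import pearsonr

# Hypothetical decoding accuracies for 15 subjects (chance = 0.50).
acc_pmns = np.array([0.62, 0.55, 0.71, 0.58, 0.66, 0.53, 0.69, 0.60,
                     0.57, 0.64, 0.72, 0.56, 0.61, 0.67, 0.59])
acc_ms = np.array([0.60, 0.52, 0.68, 0.57, 0.63, 0.55, 0.66, 0.58,
                   0.54, 0.65, 0.70, 0.53, 0.62, 0.64, 0.56])

r, p = pearsonr(acc_pmns, acc_ms)
print(f"pMNS-MS decoding-accuracy correlation: r = {r:.2f}, p = {p:.4f}")
```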


Infant Behavior & Development | 2012

Discrimination and ordinal judgments of temporal durations at 3 months.

Lucia Gava; Eloisa Valenza; Maria Grazia Di Bono; Chiara Tosatto

This study presents the first evidence that 3-month-old infants succeed in a timing matching task and in an ordinal timing task when numerical information is controlled. Three-month-old infants discriminated brief temporal durations that differed by a 1:3 ratio, relying solely on temporal information. Moreover, at 3 months of age infants were able to discriminate between monotonic and non-monotonic time-based series when numerical and temporal information were inconsistent. These findings strengthen the hypothesis that a magnitude representational system for temporal quantities is operating very early in ontogenetic development.


Frontiers in Neuroscience | 2017

Bridging the Gap between Brain Activity and Cognition: Beyond the Different Tales of fMRI Data Analysis

Maria Grazia Di Bono; Konstantinos Priftis; Carlo Umiltà

The human brain is an extremely complex system of interacting physical and functional units, ranging from single neurons to complex networks. Cognition is a network phenomenon because it does not exist in isolated synapses, neurons, or even brain areas. In spite of that, a great number of functional magnetic resonance imaging (fMRI) studies have explored what areas are involved in a variety of cognitive processes, merely localizing where in the brain those processes occur. Instead, the very notion of network phenomena requires understanding spatiotemporal dynamics, which, in turn, depends on the way fMRI data are analyzed. What are the mechanisms for simulating different cognitive functions and their spatiotemporal activity patterns? In order to bridge the gap between brain network activity and the emerging cognitive functions, we need more plausible computational models, which should reflect putative neural mechanisms and the properties of brain network dynamics.


Frontiers in Neuroscience | 2018

The Neural Correlates of Grasping in Left-Handers: When Handedness Does Not Matter

Chiara Begliomini; Luisa Sartori; Maria Grazia Di Bono; Sanja Budisavljević; Umberto Castiello

Neurophysiological studies showed that in macaques, grasp-related visuomotor transformations are supported by a circuit involving the anterior part of the intraparietal sulcus and the ventral and dorsal regions of the premotor area. In humans, a similar grasp-related circuit has been revealed by means of neuroimaging techniques. However, the majority of “human” studies considered movements performed by right-handers only, leaving open the question of whether the dynamics underlying motor control during grasping are simply reversed in left-handers with respect to right-handers. To address this question, a group of left-handed participants was scanned with functional magnetic resonance imaging while performing a precision grasping task with the left or the right hand. Dynamic causal modeling was used to assess how brain regions of the two hemispheres contribute to grasping execution and whether intra- and inter-hemispheric connectivity is modulated by the choice of the performing hand. Results showed enhanced inter-hemispheric connectivity between anterior intraparietal and dorsal premotor cortices during grasping execution with the left dominant hand (LDH) (i.e., the right hemisphere) compared to the right hand (i.e., the left hemisphere). These findings suggest that the left hand, although dominant and theoretically more skilled in left-handers, might need additional resources in terms of visuomotor control and on-line monitoring to accomplish a precision grasping movement. The results are discussed in light of theories on the modulation of parieto-frontal networks during the execution of prehensile movements, providing novel evidence supporting the hypothesis of a handedness-independent specialization of the left hemisphere in visuomotor control.
