Publication


Featured research published by Giancarlo Valente.


NeuroImage | 2008

Combining multivariate voxel selection and support vector machines for mapping and classification of fMRI spatial patterns

Federico De Martino; Giancarlo Valente; Noël Staeren; John Ashburner; Rainer Goebel; Elia Formisano

In functional brain mapping, pattern recognition methods allow detecting multivoxel patterns of brain activation which are informative with respect to a subject's perceptual or cognitive state. The sensitivity of these methods, however, is greatly reduced when the proportion of voxels that convey the discriminative information is small compared to the total number of measured voxels. To reduce this dimensionality problem, previous studies employed univariate voxel selection or region-of-interest-based strategies as a preceding step to the application of machine learning algorithms. Here we employ a strategy for classifying functional imaging data based on a multivariate feature selection algorithm, Recursive Feature Elimination (RFE), which uses the training algorithm (support vector machine) recursively to eliminate irrelevant voxels and estimate informative spatial patterns. Generalization performance on test data increases as features/voxels are pruned based on their discrimination ability. In this article we evaluate RFE in terms of sensitivity of discriminative maps (Receiver Operating Characteristic analysis) and generalization performance, and compare it to previously used univariate voxel selection strategies based on activation and discrimination measures. Using simulated fMRI data, we show that the recursive approach is suitable for mapping discriminative patterns and that the combination of an initial univariate activation-based (F-test) reduction of voxels and multivariate recursive feature elimination produces the best results, especially when differences between conditions have a low contrast-to-noise ratio. Furthermore, we apply our method to high-resolution (2 × 2 × 2 mm³) data from an auditory fMRI experiment in which subjects were stimulated with sounds from four different categories. With these real data, our recursive algorithm proves able to detect and accurately classify multivoxel spatial patterns, highlighting the role of the superior temporal gyrus in encoding the information of sound categories. In line with the simulation results, our method outperforms univariate statistical analysis and statistical learning without feature selection.
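
A minimal sketch of the RFE-plus-SVM combination described above, assuming simulated "voxel" data and using scikit-learn's generic RFE wrapper rather than the authors' own implementation; all array shapes, signal strengths, and parameter values are illustrative.

```python
# Recursive feature elimination with a linear SVM on simulated fMRI patterns.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.feature_selection import RFE
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_voxels, n_informative = 100, 500, 30

X = rng.normal(size=(n_trials, n_voxels))           # mostly noise voxels
y = rng.integers(0, 2, size=n_trials)               # two conditions
X[:, :n_informative] += 0.5 * (2 * y[:, None] - 1)  # weak signal in a few voxels

# Recursively drop the 10% least informative voxels per iteration, refitting
# the linear SVM each time, until 50 voxels remain.
svm = LinearSVC(max_iter=10000)
rfe = RFE(estimator=svm, n_features_to_select=50, step=0.1)

score = cross_val_score(rfe, X, y, cv=5).mean()
print(f"Cross-validated accuracy after RFE: {score:.2f}")
```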


The Journal of Neuroscience | 2011

Auditory cortex encodes the perceptual interpretation of ambiguous sound

Niclas Kilian-Hütten; Giancarlo Valente; Jean Vroomen; Elia Formisano

The confounding of physical stimulus characteristics and perceptual interpretations of stimuli poses a problem for most neuroscientific studies of perception. In the auditory domain, this pertains to the entanglement of acoustics and percept. Traditionally, most study designs have relied on cognitive subtraction logic, which demands the use of one or more comparisons between stimulus types. This does not allow for a differentiation between effects due to acoustic differences (i.e., sensation) and those due to conscious perception. To overcome this problem, we used functional magnetic resonance imaging (fMRI) in humans and pattern-recognition analysis to identify activation patterns that encode the perceptual interpretation of physically identical, ambiguous sounds. We show that it is possible to retrieve the perceptual interpretation of ambiguous phonemes (information that is fully subjective to the listener) from fMRI measurements of brain activity in auditory areas in the superior temporal cortex, most prominently on the posterior bank of the left Heschl's gyrus and sulcus and in the adjoining left planum temporale. These findings suggest that, beyond the basic acoustic analysis of sounds, constructive perceptual processes take place in these relatively early cortical auditory networks. This disagrees with hierarchical models of auditory processing, which generally conceive of these areas as sets of feature detectors, whose task is restricted to the analysis of physical characteristics and the structure of sounds.
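
A sketch of the core decoding logic, assuming simulated data: stimuli are physically identical, labels are the reported percepts, and a permutation test checks that above-chance decoding reflects genuine percept information. The percept signal strength, region size, and classifier settings are illustrative assumptions, not the study's pipeline.

```python
# Decode subjective percept labels from responses to identical stimuli.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import permutation_test_score

rng = np.random.default_rng(0)
n_trials, n_voxels = 80, 300

percept = rng.integers(0, 2, size=n_trials)         # reported interpretation
X = rng.normal(size=(n_trials, n_voxels))           # identical acoustic input
X[:, :20] += 0.4 * (2 * percept[:, None] - 1)       # weak percept-related pattern

# Shuffling the labels builds a null distribution, guarding against
# mistaking chance-level decoding for percept information.
clf = LinearSVC(max_iter=10000)
acc, _, p = permutation_test_score(clf, X, percept, cv=5,
                                   n_permutations=200, random_state=0)
print(f"accuracy = {acc:.2f}, p = {p:.3f}")
```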


PLOS ONE | 2015

The default mode network and the working memory network are not anti-correlated during all phases of a working memory task

Tommaso Piccoli; Giancarlo Valente; David Edmund Johannes Linden; Marta Re; Fabrizio Esposito; Alexander T. Sack; Francesco Di Salle

Introduction: The default mode network and the working memory network are known to be anti-correlated during sustained cognitive processing, in a load-dependent manner. We hypothesized that functional connectivity among nodes of the two networks could be dynamically modulated by task phases across time.

Methods: To address the dynamic links between the default mode network and the working memory network, we used a delayed visuo-spatial working memory paradigm, which allowed us to separate three phases of working memory (encoding, maintenance, and retrieval), and analyzed the functional connectivity during each phase within and between the two networks.

Results: We found that the two networks are anti-correlated only during the maintenance phase of working memory, i.e. when attention is focused on a memorized stimulus in the absence of external input. Conversely, during the encoding and retrieval phases, when external stimulation is present, the default mode network is positively coupled with the working memory network, suggesting a dynamic switching of functional connectivity between “task-positive” and “task-negative” brain networks.

Conclusions: Our results demonstrate that the well-established dichotomy of the human brain (anti-correlated networks during rest and balanced activation-deactivation during cognition) has a more nuanced organization than previously thought, engaging in different patterns of correlation and anti-correlation during specific sub-phases of a cognitive task. This nuanced organization reinforces the hypothesis of a direct involvement of the default mode network in cognitive functions, as represented by a dynamic rather than static interaction with specific task-positive networks, such as the working memory network.
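
A minimal sketch of phase-resolved functional connectivity, assuming simulated network-averaged time courses: each network's signal is correlated with the other's within each task phase window. The phase boundaries, coupling strengths, and variable names are hypothetical.

```python
# Correlate two network time courses separately within each task phase.
import numpy as np

rng = np.random.default_rng(0)
n_scans = 300
phases = {"encoding": slice(0, 100),
          "maintenance": slice(100, 200),
          "retrieval": slice(200, 300)}

dmn = rng.normal(size=n_scans)       # default mode network mean signal
wmn = rng.normal(size=n_scans)       # working memory network mean signal

# Plant the pattern reported above: positive coupling during encoding and
# retrieval, anti-correlation during maintenance.
for ph in ("encoding", "retrieval"):
    wmn[phases[ph]] += dmn[phases[ph]]
wmn[phases["maintenance"]] -= dmn[phases["maintenance"]]

for name, window in phases.items():
    r = np.corrcoef(dmn[window], wmn[window])[0, 1]
    print(f"{name:12s} r = {r:+.2f}")
```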


The Journal of Neuroscience | 2014

Task-Dependent Decoding of Speaker and Vowel Identity from Auditory Cortical Response Patterns

Milene Bonte; Lars Hausfeld; Wolfgang Scharke; Giancarlo Valente; Elia Formisano

Selective attention to relevant sound properties is essential for everyday listening situations. It enables the formation of different perceptual representations of the same acoustic input and is at the basis of flexible and goal-dependent behavior. Here, we investigated the role of the human auditory cortex in forming behavior-dependent representations of sounds. We used single-trial fMRI and analyzed cortical responses collected while subjects listened to the same speech sounds (vowels /a/, /i/, and /u/) spoken by different speakers (boy, girl, male) and performed a delayed-match-to-sample task on either speech sound or speaker identity. Univariate analyses showed a task-specific activation increase in the right superior temporal gyrus/sulcus (STG/STS) during speaker categorization and in the right posterior temporal cortex during vowel categorization. Beyond regional differences in activation levels, multivariate classification of single-trial responses demonstrated that the success with which single speakers and vowels can be decoded from auditory cortical activation patterns depends on task demands and subjects' behavioral performance. Speaker/vowel classification relied on distinct but overlapping regions across the (right) mid-anterior STG/STS (speakers) and bilateral mid-posterior STG/STS (vowels), as well as the superior temporal plane including Heschl's gyrus/sulcus. The task dependency of speaker/vowel classification demonstrates that the informative fMRI response patterns reflect the top-down enhancement of behaviorally relevant sound representations. Furthermore, our findings suggest that successful selection, processing, and retention of task-relevant sound properties rely on the joint encoding of information across early and higher-order regions of the auditory cortex.
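
A sketch of the task-dependent decoding contrast, assuming simulated response patterns: the same trials carry two label sets (vowel and speaker), and a task-dependent gain on the relevant dimension stands in for top-down enhancement. The gain model and all parameters are assumptions for illustration, not the authors' model.

```python
# Compare vowel vs. speaker decoding under two simulated task conditions.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_voxels = 90, 400
vowel = rng.integers(0, 3, size=n_trials)       # /a/, /i/, /u/
speaker = rng.integers(0, 3, size=n_trials)     # boy, girl, male

def simulate(gain_vowel, gain_speaker):
    X = rng.normal(size=(n_trials, n_voxels))
    for k in range(3):                          # category-specific patterns
        X[vowel == k, k*20:(k+1)*20] += 0.3 * gain_vowel
        X[speaker == k, 200+k*20:200+(k+1)*20] += 0.3 * gain_speaker
    return X

clf = LinearSVC(max_iter=10000)
for task, (gv, gs) in {"vowel task": (2.0, 1.0),
                       "speaker task": (1.0, 2.0)}.items():
    X = simulate(gv, gs)
    acc_v = cross_val_score(clf, X, vowel, cv=5).mean()
    acc_s = cross_val_score(clf, X, speaker, cv=5).mean()
    print(f"{task}: vowel acc = {acc_v:.2f}, speaker acc = {acc_s:.2f}")
```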


Human Brain Mapping | 2006

Functional Source Separation from Magnetoencephalographic Signals

Roberto Sigismondi; Filippo Zappasodi; Camillo Porcaro; Sara Graziadio; Giancarlo Valente; Marco Balsi; Paolo Maria Rossini; Franca Tecchio

We propose a novel cerebral source extraction method (functional source separation, FSS) starting from extra-cephalic magnetoencephalographic (MEG) signals in humans. It is obtained by adding a functional constraint, defined according to the specific experiment under study, to the cost function of a basic independent component analysis (ICA) model, and by removing the orthogonality constraint (i.e., in a single-unit approach, skipping decorrelation of each new component from the subspace generated by the components already found). Source activity was obtained throughout the processing of simple, separate sensory stimulation of the thumb, little finger, and median nerve. Because the sources are obtained one by one at each stage by applying different criteria, the a posteriori “interesting source selection” step is avoided. The obtained solutions were in agreement with the homuncular organization in all subjects, reacted properly from a neurophysiological standpoint, and showed negligible residual activity. On this basis, the separated sources were interpreted as satisfactorily describing highly superimposed and interconnected neural networks devoted to cortical finger representation. The proposed procedure significantly improves the quality of the extraction with respect to a standard blind source separation (BSS) algorithm. Moreover, it is very flexible in including different functional constraints, providing a promising tool to identify neuronal networks in very general cerebral processing.
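
A toy illustration of the FSS idea, under simplifying assumptions: a single unmixing vector is optimized by maximizing a non-Gaussianity contrast plus a functional "reactivity" term, with no orthogonality/decorrelation constraint. The contrast function, the lambda weight, and the simulated data are assumptions for illustration, not the paper's exact algorithm.

```python
# Single-unit source extraction with a functional constraint added to an
# ICA-style non-Gaussianity cost.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
n_ch, n_t = 16, 2000
stim = np.zeros(n_t, dtype=bool)
stim[1000:] = True                              # second half: stimulation on

# One hidden source reacts to stimulation; it is mixed into all channels.
s = rng.laplace(size=n_t)                       # super-Gaussian source
s[stim] *= 3.0                                  # functional reactivity
X = np.outer(rng.normal(size=n_ch), s) + rng.normal(size=(n_ch, n_t))

def neg_contrast(w, lam=5.0):
    w = w / np.linalg.norm(w)
    y = (w @ X) - (w @ X).mean()
    kurt = np.mean(y**4) - 3 * np.mean(y**2) ** 2    # non-Gaussianity
    react = y[stim].std() - y[~stim].std()           # functional constraint
    return -(abs(kurt) + lam * react)

res = minimize(neg_contrast, rng.normal(size=n_ch), method="Nelder-Mead",
               options={"maxiter": 20000})
w = res.x / np.linalg.norm(res.x)
y = w @ X
print("reactivity of extracted source:", y[stim].std() / y[~stim].std())
```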


The Journal of Neuroscience | 2014

Brain-Based Translation: fMRI Decoding of Spoken Words in Bilinguals Reveals Language-Independent Semantic Representations in Anterior Temporal Lobe

João Mendonça Correia; Elia Formisano; Giancarlo Valente; Lars Hausfeld; Bernadette M. Jansma; Milene Bonte

Bilinguals derive the same semantic concepts from equivalent, but acoustically different, words in their first and second languages. The neural mechanisms underlying the representation of language-independent concepts in the brain remain unclear. Here, we measured fMRI in human bilingual listeners and reveal that response patterns to individual spoken nouns in one language (e.g., “horse” in English) accurately predict the response patterns to equivalent nouns in the other language (e.g., “paard” in Dutch). Stimuli were four monosyllabic words in both languages, all from the category of “animal” nouns. For each word, pronunciations from three different speakers were included, allowing the investigation of speaker-independent representations of individual words. We used multivariate classifiers and a searchlight method to map the informative fMRI response patterns that enable decoding spoken words within languages (within-language discrimination) and across languages (across-language generalization). Response patterns discriminative of spoken words within language were distributed in multiple cortical regions, reflecting the complexity of the neural networks recruited during speech and language processing. Response patterns discriminative of spoken words across language were limited to localized clusters in the left anterior temporal lobe, the left angular gyrus and the posterior bank of the left postcentral gyrus, the right posterior superior temporal sulcus/superior temporal gyrus, the right medial anterior temporal lobe, the right anterior insula, and bilateral occipital cortex. These results corroborate the existence of “hub” regions organizing semantic-conceptual knowledge in abstract form at the fine-grained level of within-semantic-category discriminations.
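
A minimal sketch of the across-language generalization analysis, assuming simulated data: a word classifier is trained on response patterns to English nouns and tested on patterns evoked by their Dutch equivalents, succeeding only through a shared, language-independent component. The injected "semantic" and "acoustic" patterns are assumptions for illustration.

```python
# Train within one language, test generalization to the other.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n_per_word, n_voxels, n_words = 15, 200, 4      # e.g., four animal nouns
words = np.repeat(np.arange(n_words), n_per_word)

def patterns(acoustic_offset):
    X = rng.normal(size=(words.size, n_voxels))
    for w in range(n_words):
        X[words == w, w*10:(w+1)*10] += 0.5     # language-independent code
    X[:, 100+acoustic_offset:110+acoustic_offset] += 0.5  # language acoustics
    return X

X_en, X_nl = patterns(0), patterns(40)          # different acoustic patterns

clf = LinearSVC(max_iter=10000).fit(X_en, words)       # train on English
print("across-language accuracy:", clf.score(X_nl, words))  # test on Dutch
```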


The Journal of Neuroscience | 2009

Dynamic and Task-Dependent Encoding of Speech and Voice by Phase Reorganization of Cortical Oscillations

Milene Bonte; Giancarlo Valente; Elia Formisano

Speech and vocal sounds are at the core of human communication. Cortical processing of these sounds critically depends on behavioral demands. However, the neurocomputational mechanisms enabling this adaptive processing remain elusive. Here we examine the task-dependent reorganization of electroencephalographic responses to natural speech sounds (vowels /a/, /i/, /u/) spoken by three speakers (two female, one male) while listeners perform a one-back task on either vowel or speaker identity. We show that dynamic changes of sound-evoked responses and phase patterns of cortical oscillations in the alpha band (8–12 Hz) closely reflect the abstraction and analysis of the sounds along the task-relevant dimension. Vowel categorization leads to a significant temporal realignment of responses to the same vowel, e.g., /a/, independent of who pronounced this vowel, whereas speaker categorization leads to a significant temporal realignment of responses to the same speaker, e.g., speaker 1, independent of which vowel she/he pronounced. This transient and goal-dependent realignment of neuronal responses to physically different external events provides a robust cortical coding mechanism for forming and processing abstract representations of auditory (speech) input.
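
A sketch of the phase-alignment measure underlying this analysis, assuming simulated EEG trials: band-pass in the alpha band, extract instantaneous phase via the Hilbert transform, and quantify inter-trial phase coherence (values near 1 indicate temporally realigned responses). Filter order, band edges, and sampling rate are illustrative.

```python
# Inter-trial phase coherence of alpha-band responses.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

rng = np.random.default_rng(0)
fs, n_trials, n_samples = 250, 60, 500

t = np.arange(n_samples) / fs
trials = 0.5 * rng.normal(size=(n_trials, n_samples))
trials += np.sin(2 * np.pi * 10 * t)            # phase-aligned 10 Hz component

b, a = butter(4, [8, 12], btype="bandpass", fs=fs)
alpha = filtfilt(b, a, trials, axis=-1)
phase = np.angle(hilbert(alpha, axis=-1))

# Length of the mean phase vector across trials, per time point.
itc = np.abs(np.exp(1j * phase).mean(axis=0))
print("peak alpha phase alignment:", itc.max().round(2))
```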


NeuroImage | 2007

Somatosensory dynamic gamma-band synchrony: a neural code of sensorimotor dexterity.

Franca Tecchio; Sara Graziadio; Roberto Sigismondi; Filippo Zappasodi; Camillo Porcaro; Giancarlo Valente; Marco Balsi; Paolo Maria Rossini

To investigate neural coding characteristics in the human primary somatosensory cortex, two fingers with different levels of functional skill were studied. Their dexterity was scored by the fingertip writing test. Each finger separately received simple passive sensory stimulation, and the responsiveness of each finger's cortical representation was studied by a novel source extraction method applied to magnetoencephalographic signals recorded in a cohort of 14 healthy right-handed subjects. In the two hemispheres, the synchronization of neural oscillatory activity was analysed in the three characteristic alpha, beta, and gamma frequency bands by two dynamic measures, one isolating the phase locking between neural network components, the other reflecting the total number of synchronously recruited neurons. In the dominant hemisphere, gamma-band phase locking was higher for the thumb than for the little finger and correlated with the dexterity of the contralateral finger. No effect was observed in the alpha or beta bands in either the dominant or the non-dominant hemisphere. In the gamma band, the amplitude showed a tendency similar to the phase locking, without reaching statistical significance. These findings suggest dynamic gamma-band phase locking as a code for finger dexterity, in addition to the magnification of somatotopic central maps.
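
A sketch of the two synchrony measures described above, computed between two simulated gamma-band signals: a phase-locking value (phase synchrony irrespective of amplitude) and the mean band-limited amplitude (a rough proxy for the number of synchronously recruited neurons). All parameters are illustrative, not the study's settings.

```python
# Gamma-band phase locking and band-limited amplitude between two signals.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

rng = np.random.default_rng(0)
fs, n_samples = 600, 6000
t = np.arange(n_samples) / fs

common = np.sin(2 * np.pi * 40 * t)             # shared 40 Hz gamma rhythm
x = common + 0.5 * rng.normal(size=n_samples)
y = common + 0.5 * rng.normal(size=n_samples)

b, a = butter(4, [30, 45], btype="bandpass", fs=fs)
ax = hilbert(filtfilt(b, a, x))                 # analytic signals
ay = hilbert(filtfilt(b, a, y))

plv = np.abs(np.mean(np.exp(1j * (np.angle(ax) - np.angle(ay)))))
amplitude = np.mean(np.abs(ax))                 # band-limited envelope size
print(f"gamma PLV = {plv:.2f}, mean gamma amplitude = {amplitude:.2f}")
```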


Magnetic Resonance Imaging | 2010

Multimodal imaging: an evaluation of univariate and multivariate methods for simultaneous EEG/fMRI

Federico De Martino; Giancarlo Valente; Aline W. de Borst; Fabrizio Esposito; Alard Roebroeck; Rainer Goebel; Elia Formisano

The combination of electroencephalography (EEG) and functional magnetic resonance imaging (fMRI) has been proposed as a tool to study brain dynamics with both high temporal and high spatial resolution. Multimodal imaging techniques rely on the assumption of a common neuronal source for the different recorded signals. In order to maximally exploit the combination of these techniques, one needs to understand the coupling (i.e., the relation) between EEG and fMRI blood oxygen level-dependent (BOLD) signals. Recently, simultaneous EEG-fMRI measurements have been used to investigate the relation between the two signals. Previous attempts at the analysis of simultaneous EEG-fMRI data reported significant correlations between regional BOLD activations and modulation of both event-related potential (ERP) and oscillatory EEG power, mostly in the alpha but also in other frequency bands. Beyond the correlation of the two measured brain signals, the relevant issue we address here is the ability to predict the signal in one modality using information from the other modality. Using multivariate machine-learning-based regression, we show how EEG power oscillations can be predicted from simultaneously acquired fMRI data during an eyes-open/eyes-closed task, using either the original channels or the underlying cortically distributed sources as the relevant EEG signal for the analysis of multimodal data.
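
A minimal sketch of the multivariate regression idea, assuming simulated data: an EEG alpha-power time course is predicted from many simultaneously acquired fMRI voxel time courses, with cross-validation. Ridge regression stands in here for the paper's machine-learning regression; shapes and the planted coupling are assumptions.

```python
# Predict an EEG power time course from fMRI voxel time courses.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
n_scans, n_voxels = 200, 1000

alpha_power = rng.normal(size=n_scans)           # EEG target (e.g., 8-12 Hz)
X = rng.normal(size=(n_scans, n_voxels))         # voxel time courses
X[:, :30] += 0.5 * alpha_power[:, None]          # a few coupled voxels

pred = cross_val_predict(Ridge(alpha=100.0), X, alpha_power, cv=5)
r = np.corrcoef(pred, alpha_power)[0, 1]
print(f"predicted vs. actual alpha power, r = {r:.2f}")
```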


NeuroImage | 2016

The effect of spatial resolution on decoding accuracy in fMRI multivariate pattern analysis.

Anna Gardumi; Dimo Ivanov; Lars Hausfeld; Giancarlo Valente; Elia Formisano; Kâmil Uludağ

Multivariate pattern analysis (MVPA) in fMRI has been used to extract information from distributed cortical activation patterns, which may go undetected in conventional univariate analysis. However, little is known about the physical and physiological underpinnings of MVPA in fMRI, as well as about the effect of spatial smoothing on its performance. Several studies have addressed these issues, but their investigation was limited to the visual cortex at 3T, with conflicting results. Here, we used ultra-high field (7T) fMRI to investigate the effect of spatial resolution and smoothing on decoding of speech content (vowels) and speaker identity from auditory cortical responses. To that end, we acquired high-resolution (1.1 mm isotropic) fMRI data and additionally reconstructed them at 2.2 and 3.3 mm in-plane spatial resolutions from the original k-space data. Furthermore, the data at each resolution were spatially smoothed with different 3D Gaussian kernel sizes (i.e., no smoothing or 1.1, 2.2, 3.3, 4.4, or 8.8 mm kernels). For all spatial resolutions and smoothing kernels, we demonstrate the feasibility of decoding speech content (vowel) and speaker identity at 7T using support vector machine (SVM) MVPA. In addition, we found that high spatial frequencies are informative for vowel decoding and that the relative contribution of high and low spatial frequencies differs across the two decoding tasks. Moderate smoothing (up to 2.2 mm) improved the accuracies for both decoding of vowels and speakers, possibly due to reduction of noise (e.g., residual motion artifacts or instrument noise) while still preserving information at high spatial frequency. In summary, our results show that, even with the same stimuli and within the same brain areas, the optimal spatial resolution for MVPA in fMRI depends on the specific decoding task of interest.
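
A sketch of the smoothing manipulation, assuming simulated volumes: 3D Gaussian kernels of increasing FWHM are applied before re-running SVM decoding at each level. Voxel size, kernel list, and the planted fine-grained pattern are illustrative; the study resampled real 7T k-space data rather than simulating volumes.

```python
# Decoding accuracy as a function of 3D Gaussian smoothing kernel size.
import numpy as np
from scipy.ndimage import gaussian_filter
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, shape, voxel_mm = 60, (12, 12, 12), 1.1
y = rng.integers(0, 2, size=n_trials)

vols = rng.normal(size=(n_trials,) + shape)
pattern = rng.normal(size=shape)                 # fine-grained spatial signal
vols += 0.3 * (2 * y[:, None, None, None] - 1) * pattern

clf = LinearSVC(max_iter=10000)
for fwhm_mm in [0.0, 1.1, 2.2, 4.4, 8.8]:
    sigma = fwhm_mm / (2 * np.sqrt(2 * np.log(2))) / voxel_mm  # FWHM -> sigma
    sm = np.stack([gaussian_filter(v, sigma) for v in vols]) if fwhm_mm else vols
    X = sm.reshape(n_trials, -1)
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(f"FWHM {fwhm_mm:.1f} mm: accuracy {acc:.2f}")
```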

Collaboration


Top co-authors of Giancarlo Valente and their affiliations:

Marco Balsi

Sapienza University of Rome


Filippo Zappasodi

University of Chieti-Pescara


Roberto Sigismondi

Sapienza University of Rome


Franca Tecchio

National Research Council
