Publications

Featured research published by Matteo Visconti di Oleggio Castello.


GigaScience | 2016

2015 Brainhack Proceedings

R. Cameron Craddock; Pierre Bellec; Daniel S. Margulies; B. Nolan Nichols; Jörg P. Pfannmöller; AmanPreet Badhwar; David N. Kennedy; Jean-Baptiste Poline; Roberto Toro; Ben Cipollini; Ariel Rokem; Daniel Clark; Krzysztof J. Gorgolewski; Daniel J. Clark; Samir Das; Cécile Madjar; Ayan Sengupta; Zia Mohades; Sebastien Dery; Weiran Deng; Eric Earl; Damion V. Demeter; Kate Mills; Glad Mihai; Luka Ruzic; Nick Ketz; Andrew Reineberg; Marianne C. Reddan; Anne-Lise Goddings; Javier Gonzalez-Castillo

Table of contents:

I1 Introduction to the 2015 Brainhack Proceedings (R. Cameron Craddock, Pierre Bellec, Daniel S. Margulies, B. Nolan Nichols, Jörg P. Pfannmöller)
A1 Distributed collaboration: the case for the enhancement of Brainspell’s interface (AmanPreet Badhwar, David Kennedy, Jean-Baptiste Poline, Roberto Toro)
A2 Advancing open science through NiData (Ben Cipollini, Ariel Rokem)
A3 Integrating the Brain Imaging Data Structure (BIDS) standard into C-PAC (Daniel Clark, Krzysztof J. Gorgolewski, R. Cameron Craddock)
A4 Optimized implementations of voxel-wise degree centrality and local functional connectivity density mapping in AFNI (R. Cameron Craddock, Daniel J. Clark)
A5 LORIS: DICOM anonymizer (Samir Das, Cécile Madjar, Ayan Sengupta, Zia Mohades)
A6 Automatic extraction of academic collaborations in neuroimaging (Sebastien Dery)
A7 NiftyView: a zero-footprint web application for viewing DICOM and NIfTI files (Weiran Deng)
A8 Human Connectome Project Minimal Preprocessing Pipelines to Nipype (Eric Earl, Damion V. Demeter, Kate Mills, Glad Mihai, Luka Ruzic, Nick Ketz, Andrew Reineberg, Marianne C. Reddan, Anne-Lise Goddings, Javier Gonzalez-Castillo, Krzysztof J. Gorgolewski)
A9 Generating music with resting-state fMRI data (Caroline Froehlich, Gil Dekel, Daniel S. Margulies, R. Cameron Craddock)
A10 Highly comparable time-series analysis in Nitime (Ben D. Fulcher)
A11 Nipype interfaces in CBRAIN (Tristan Glatard, Samir Das, Reza Adalat, Natacha Beck, Rémi Bernard, Najmeh Khalili-Mahani, Pierre Rioux, Marc-Étienne Rousseau, Alan C. Evans)
A12 DueCredit: automated collection of citations for software, methods, and data (Yaroslav O. Halchenko, Matteo Visconti di Oleggio Castello)
A13 Open source low-cost device to register dog’s heart rate and tail movement (Raúl Hernández-Pérez, Edgar A. Morales, Laura V. Cuaya)
A14 Calculating the Laterality Index Using FSL for Stroke Neuroimaging Data (Kaori L. Ito, Sook-Lei Liew)
A15 Wrapping FreeSurfer 6 for use in high-performance computing environments (Hans J. Johnson)
A16 Facilitating big data meta-analyses for clinical neuroimaging through ENIGMA wrapper scripts (Erik Kan, Julia Anglin, Michael Borich, Neda Jahanshad, Paul Thompson, Sook-Lei Liew)
A17 A cortical surface-based geodesic distance package for Python (Daniel S. Margulies, Marcel Falkiewicz, Julia M. Huntenburg)
A18 Sharing data in the cloud (David O’Connor, Daniel J. Clark, Michael P. Milham, R. Cameron Craddock)
A19 Detecting task-based fMRI compliance using plan abandonment techniques (Ramon Fraga Pereira, Anibal Sólon Heinsfeld, Alexandre Rosa Franco, Augusto Buchweitz, Felipe Meneguzzi)
A20 Self-organization and brain function (Jörg P. Pfannmöller, Rickson Mesquita, Luis C. T. Herrera, Daniela Dentico)
A21 The Neuroimaging Data Model (NIDM) API (Vanessa Sochat, B. Nolan Nichols)
A22 NeuroView: a customizable browser-based utility (Anibal Sólon Heinsfeld, Alexandre Rosa Franco, Augusto Buchweitz, Felipe Meneguzzi)
A23 DIPY: Brain tissue classification (Julio E. Villalon-Reina, Eleftherios Garyfallidis)


The Journal of Neuroscience | 2016

How the Human Brain Represents Perceived Dangerousness or "Predacity" of Animals.

Andrew C. Connolly; Long Sha; J. Swaroop Guntupalli; Nikolaas N. Oosterhof; Yaroslav O. Halchenko; Samuel A. Nastase; Matteo Visconti di Oleggio Castello; Hervé Abdi; Barbara C. Jobst; M. Ida Gobbini; James V. Haxby

Common or folk knowledge about animals is dominated by three dimensions: (1) level of cognitive complexity or “animacy;” (2) dangerousness or “predacity;” and (3) size. We investigated the neural basis of the perceived dangerousness or aggressiveness of animals, which we refer to more generally as “perception of threat.” Using functional magnetic resonance imaging (fMRI), we analyzed neural activity evoked by viewing images of animal categories that spanned the dissociable semantic dimensions of threat and taxonomic class. The results reveal a distributed network for perception of threat extending along the right superior temporal sulcus. We compared neural representational spaces with target representational spaces based on behavioral judgments and a computational model of early vision and found a processing pathway in which perceived threat emerges as a dominant dimension: whereas visual features predominate in early visual cortex and taxonomy in lateral occipital and ventral temporal cortices, these dimensions fall away progressively from posterior to anterior temporal cortices, leaving threat as the dominant explanatory variable. Our results suggest that the perception of threat in the human brain is associated with neural structures that underlie perception and cognition of social actions and intentions, suggesting a broader role for these regions than has been thought previously, one that includes the perception of potential threat from agents independent of their biological class. SIGNIFICANCE STATEMENT For centuries, philosophers have wondered how the human mind organizes the world into meaningful categories and concepts. Today this question is at the core of cognitive science, but our focus has shifted to understanding how knowledge manifests in dynamic activity of neural systems in the human brain. 
This study advances the young field of empirical neuroepistemology by characterizing the neural systems engaged by an important dimension in our cognitive representation of the animal kingdom ontological subdomain: how the brain represents the perceived threat, dangerousness, or “predacity” of animals. Our findings reveal how activity for domain-specific knowledge of animals overlaps the social perception networks of the brain, suggesting domain-general mechanisms underlying the representation of conspecifics and other animals.


Cerebral Cortex | 2017

Attention Selectively Reshapes the Geometry of Distributed Semantic Representation

Samuel A. Nastase; Andrew C. Connolly; Nikolaas N. Oosterhof; Yaroslav O. Halchenko; J. Swaroop Guntupalli; Matteo Visconti di Oleggio Castello; Jason Gors; M. Ida Gobbini; James V. Haxby

Humans prioritize different semantic qualities of a complex stimulus depending on their behavioral goals. These semantic features are encoded in distributed neural populations, yet it is unclear how attention might operate across these distributed representations. To address this, we presented participants with naturalistic video clips of animals behaving in their natural environments while the participants attended to either behavior or taxonomy. We used models of representational geometry to investigate how attentional allocation affects the distributed neural representation of animal behavior and taxonomy. Attending to animal behavior transiently increased the discriminability of distributed population codes for observed actions in anterior intraparietal, pericentral, and ventral temporal cortices. Attending to animal taxonomy while viewing the same stimuli increased the discriminability of distributed animal category representations in ventral temporal cortex. For both tasks, attention selectively enhanced the discriminability of response patterns along behaviorally relevant dimensions. These findings suggest that behavioral goals alter how the brain extracts semantic features from the visual world. Attention effectively disentangles population responses for downstream read-out by sculpting representational geometry in late-stage perceptual areas.


PLOS ONE | 2015

Familiar Face Detection in 180ms

Matteo Visconti di Oleggio Castello; M. Ida Gobbini

The visual system is tuned for rapid detection of faces, with the fastest choice saccade to a face at 100ms. Familiar faces have a more robust representation than do unfamiliar faces, and are detected faster in the absence of awareness and with reduced attentional resources. Faces of family and close friends become familiar over a protracted period involving learning the unique visual appearance, including a view-invariant representation, as well as person knowledge. We investigated the effect of personal familiarity on the earliest stages of face processing by using a saccadic-choice task to measure how fast familiar face detection can happen. Subjects made correct and reliable saccades to familiar faces when unfamiliar faces were distractors at 180ms—very rapid saccades that are 30 to 70ms earlier than the earliest evoked potential modulated by familiarity. By contrast, accuracy of saccades to unfamiliar faces with familiar faces as distractors did not exceed chance. Saccades to faces with object distractors were even faster (110 to 120 ms) and equivalent for familiar and unfamiliar faces, indicating that familiarity does not affect ultra-rapid saccades. We propose that detectors of diagnostic facial features for familiar faces develop in visual cortices through learning and allow rapid detection that precedes explicit recognition of identity.


Frontiers in Human Neuroscience | 2014

Facilitated detection of social cues conveyed by familiar faces

Matteo Visconti di Oleggio Castello; J. Swaroop Guntupalli; Hua Yang; M. Ida Gobbini

Recognition of the identity of familiar faces in conditions with poor visibility or over large changes in head angle, lighting and partial occlusion is far more accurate than recognition of unfamiliar faces in similar conditions. Here we used a visual search paradigm to test if one class of social cues transmitted by faces—direction of another’s attention as conveyed by gaze direction and head orientation—is perceived more rapidly in personally familiar faces than in unfamiliar faces. We found a strong effect of familiarity on the detection of these social cues, suggesting that the times to process these signals in familiar faces are markedly faster than the corresponding processing times for unfamiliar faces. In the light of these new data, hypotheses on the organization of the visual system for processing faces are formulated and discussed.


Scientific Reports | 2017

The neural representation of personally familiar and unfamiliar faces in the distributed system for face perception

Matteo Visconti di Oleggio Castello; Yaroslav O. Halchenko; J. Swaroop Guntupalli; Jason Gors; M. Ida Gobbini

Personally familiar faces are processed more robustly and efficiently than unfamiliar faces. The human face processing system comprises a core system that analyzes the visual appearance of faces and an extended system for the retrieval of person-knowledge and other nonvisual information. We applied multivariate pattern analysis to fMRI data to investigate aspects of familiarity that are shared by all familiar identities and information that distinguishes specific face identities from each other. Both identity-independent familiarity information and face identity could be decoded in an overlapping set of areas in the core and extended systems. Representational similarity analysis revealed a clear distinction between the two systems and a subdivision of the core system into ventral, dorsal and anterior components. This study provides evidence that activity in the extended system carries information about both individual identities and personal familiarity, while clarifying and extending the organization of the core system for face perception.


PLOS ONE | 2017

Familiarity facilitates feature-based face processing

Matteo Visconti di Oleggio Castello; Kelsey G. Wheeler; Carlo Cipolli; M. Ida Gobbini

Recognition of personally familiar faces is remarkably efficient, effortless and robust. We asked if feature-based face processing facilitates detection of familiar faces by testing the effect of face inversion on a visual search task for familiar and unfamiliar faces. Because face inversion disrupts configural and holistic face processing, we hypothesized that inversion would diminish the familiarity advantage to the extent that it is mediated by such processing. Subjects detected personally familiar and stranger target faces in arrays of two, four, or six face images. Subjects showed significant facilitation of personally familiar face detection for both upright and inverted faces. The effect of familiarity on target absent trials, which involved only rejection of unfamiliar face distractors, suggests that familiarity facilitates rejection of unfamiliar distractors as well as detection of familiar targets. The preserved familiarity effect for inverted faces suggests that facilitation of face detection afforded by familiarity reflects mostly feature-based processes.


Frontiers in Psychology | 2017

Social Saliency of the Cue Slows Attention Shifts

Vassiki Chauhan; Matteo Visconti di Oleggio Castello; Alireza Soltani; Maria Ida Gobbini

Eye gaze is a powerful cue that indicates where another person’s attention is directed in the environment. Seeing another person’s eye gaze shift spontaneously and reflexively elicits a shift of one’s own attention to the same region in space. Here, we investigated whether reallocation of attention in the direction of eye gaze is modulated by personal familiarity with faces. On the one hand, the eye gaze of a close friend should be more effective in redirecting our attention as compared to the eye gaze of a stranger. On the other hand, the social relevance of a familiar face might itself hold attention and, thereby, slow lateral shifts of attention. To distinguish between these possibilities, we measured the efficacy of the eye gaze of personally familiar and unfamiliar faces as directional attention cues using adapted versions of the Posner paradigm with saccadic and manual responses. We found that attention shifts were slower when elicited by a perceived change in the eye gaze of a familiar individual as compared to attention shifts elicited by unfamiliar faces at short latencies (100 ms). We also measured simple detection of change in direction of gaze in personally familiar and unfamiliar faces to test whether slower attention shifts were due to slower detection. Participants detected changes in eye gaze faster for familiar faces than for unfamiliar faces. Our results suggest that personally familiar faces briefly hold attention due to their social relevance, thereby slowing shifts of attention, even though changes in gaze direction are detected faster in familiar faces.


PLOS ONE | 2017

Concurrent development of facial identity and expression discrimination

Kirsten A. Dalrymple; Matteo Visconti di Oleggio Castello; Jed T. Elison; M. Ida Gobbini

Facial identity and facial expression processing both appear to follow a protracted developmental trajectory, yet these trajectories have been studied independently and have not been directly compared. Here we investigated whether these processes develop at the same or different rates using matched identity and expression discrimination tasks. The Identity task begins with a target face that is a morph between two identities (Identity A/Identity B). After a brief delay, the target face is replaced by two choice faces: 100% Identity A and 100% Identity B. Children 5 to 12 years old were asked to pick the choice face that is most similar to the target identity. The Expression task is matched in format and difficulty to the Identity task, except the targets are morphs between two expressions (Angry/Happy, or Disgust/Surprise). The same children were asked to pick the choice face with the expression that is most similar to the target expression. There were significant effects of age, with performance improving (becoming more accurate and faster) on both tasks with increasing age. Accuracy and reaction times were not significantly different across tasks and there was no significant Age × Task interaction. Thus, facial identity and facial expression discrimination appear to develop at a similar rate, with comparable improvement on both tasks from age five to twelve. Because our tasks are so closely matched in format and difficulty, they may prove useful for testing face identity and face expression processing in special populations, such as autism or prosopagnosia, where one of these abilities might be impaired.


bioRxiv | 2018

Idiosyncratic, retinotopic bias in face identification modulated by familiarity

Matteo Visconti di Oleggio Castello; Morgan Taylor; Patrick Cavanagh; M. Ida Gobbini

Collaboration

Top Co-Authors

Hervé Abdi (University of Texas at Dallas)