
Publication


Featured research published by Leonardo Fernandino.


Cerebral Cortex | 2010

Common and Dissociable Prefrontal Loci Associated with Component Mechanisms of Analogical Reasoning

Soohyun Cho; Teena D. Moody; Leonardo Fernandino; Jeanette A. Mumford; Russell A. Poldrack; Tyrone D. Cannon; Barbara J. Knowlton; Keith J. Holyoak

The ability to draw analogies requires 2 key cognitive processes: relational integration and resolution of interference. The present study aimed to identify the neural correlates of both component processes of analogical reasoning within a single, nonverbal analogy task using event-related functional magnetic resonance imaging. Participants verified whether a visual analogy was true by considering either 1 or 3 relational dimensions. On half of the trials, there was an additional need to resolve interference in order to make a correct judgment. An increase in the number of dimensions to integrate was associated with increased activation in the lateral prefrontal cortex as well as the lateral frontal pole in both hemispheres. When there was a need to resolve interference during reasoning, activation increased in the lateral prefrontal cortex but not in the frontal pole. We identified regions in the middle and inferior frontal gyri that were exclusively sensitive to demands on each component process, in addition to a partial overlap between the neural correlates of the two processes. These results indicate that analogical reasoning is mediated by the coordination of multiple regions of the prefrontal cortex, some of which are sensitive to demands on only one of these 2 component processes, whereas others are sensitive to both.


Cognitive Neuropsychology | 2016

Toward a brain-based componential semantic representation

Jeffrey R. Binder; Lisa L. Conant; Colin Humphries; Leonardo Fernandino; Stephen B. Simons; Mario Aguilar; Rutvik H. Desai

Componential theories of lexical semantics assume that concepts can be represented by sets of features or attributes that are in some sense primitive or basic components of meaning. The binary features used in classical category and prototype theories are problematic in that these features are themselves complex concepts, leaving open the question of what constitutes a primitive feature. The present availability of brain imaging tools has enhanced interest in how concepts are represented in brains, and accumulating evidence supports the claim that these representations are at least partly "embodied" in the perception, action, and other modal neural systems through which concepts are experienced. In this study we explore the possibility of devising a componential model of semantic representation based entirely on such functional divisions in the human brain. We propose a basic set of approximately 65 experiential attributes based on neurobiological considerations, comprising sensory, motor, spatial, temporal, affective, social, and cognitive experiences. We provide normative data on the salience of each attribute for a large set of English nouns, verbs, and adjectives, and show how these attribute vectors distinguish a priori conceptual categories and capture semantic similarity. Robust quantitative differences between concrete object categories were observed across a large number of attribute dimensions. A within- versus between-category similarity metric showed much greater separation between categories than representations derived from distributional (latent semantic) analysis of text. Cluster analyses were used to explore the similarity structure in the data independent of a priori labels, revealing several novel category distinctions. We discuss how such a representation might deal with various longstanding problems in semantic theory, such as feature selection and weighting, representation of abstract concepts, effects of context on semantic retrieval, and conceptual combination. In contrast to componential models based on verbal features, the proposed representation systematically relates semantic content to large-scale brain networks and biologically plausible accounts of concept acquisition.
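The within- versus between-category similarity metric described in this abstract can be sketched in a few lines. The following is a minimal, illustrative Python example; the words, the four-attribute subset, and all rating values are invented (the actual norms cover roughly 65 attributes and a large vocabulary), but the computation shape matches the idea: represent each concept as an attribute vector, then compare mean within-category cosine similarity against mean between-category similarity.

```python
import math

# Hypothetical salience ratings on a small, made-up subset of attributes.
# Columns: vision, audition, manipulation, emotion (values are invented).
ratings = {
    "hammer": [4.0, 2.5, 5.5, 0.5],
    "wrench": [3.8, 1.5, 5.8, 0.3],
    "melody": [0.2, 5.9, 0.1, 3.5],
    "song":   [0.5, 6.0, 0.2, 4.0],
}
category = {"hammer": "tool", "wrench": "tool",
            "melody": "music", "song": "music"}

def cosine(u, v):
    """Cosine similarity between two attribute vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def category_separation(ratings, category):
    """Mean within-category minus mean between-category cosine similarity."""
    words = list(ratings)
    within, between = [], []
    for i, w1 in enumerate(words):
        for w2 in words[i + 1:]:
            sim = cosine(ratings[w1], ratings[w2])
            (within if category[w1] == category[w2] else between).append(sim)
    return sum(within) / len(within) - sum(between) / len(between)

sep = category_separation(ratings, category)
```

A positive separation score indicates that same-category concepts cluster together in attribute space, which is the pattern the abstract reports for concrete object categories.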


Neuropsychologia | 2013

Where is the action? Action sentence processing in Parkinson's disease

Leonardo Fernandino; Lisa L. Conant; Jeffrey R. Binder; Karen Blindauer; Bradley Hiner; Katie Spangler; Rutvik H. Desai

According to an influential view of conceptual representation, action concepts are understood through motoric simulations involving motor networks of the brain. A stronger version of this embodied account suggests that even figurative uses of action words (e.g., grasping the concept) are understood through motoric simulations. We investigated these claims by assessing whether Parkinson's disease (PD), a disorder affecting the motor system, is associated with selective deficits in comprehending action-related sentences. Twenty PD patients and 21 age-matched controls performed a sentence comprehension task in which sentences belonged to one of four conditions: literal action, non-idiomatic metaphoric action, idiomatic action, and abstract. The same verbs (referring to hand/arm actions) were used in the three action-related conditions. Patients, but not controls, were slower to respond to literal and idiomatic action sentences than to abstract sentences. These results indicate that sensory-motor systems play a functional role in semantic processing, including the processing of figurative action language.


Cerebral Cortex | 2016

Predicting Neural Activity Patterns Associated with Sentences Using a Neurobiologically Motivated Model of Semantic Representation

Andrew J. Anderson; Jeffrey R. Binder; Leonardo Fernandino; Colin Humphries; Lisa L. Conant; Mario Aguilar; Xixi Wang; Donias Doko; Rajeev D. S. Raizada

We introduce an approach that predicts neural representations of word meanings contained in sentences and then superposes these to predict neural representations of new sentences. A neurobiological semantic model based on sensory, motor, social, emotional, and cognitive attributes was used as a foundation to define semantic content. Previous studies have predominantly predicted neural patterns for isolated words, using models that lack neurobiological interpretation. Fourteen participants read 240 sentences describing everyday situations while undergoing fMRI. To connect sentence-level fMRI activation patterns to the word-level semantic model, we devised methods to decompose the fMRI data into individual words. Activation patterns associated with each attribute in the model were then estimated using multiple regression. This enabled synthesis of activation patterns for trained and new words, which were subsequently averaged to predict new sentences. Region-of-interest analyses revealed that prediction accuracy was highest using voxels in the left temporal and inferior parietal cortex, although a broad range of regions returned statistically significant results, showing that semantic information is widely distributed across the brain. The results show how a neurobiologically motivated semantic model can decompose sentence-level fMRI data into activation features for component words, which can be recombined to predict activation patterns for new sentences.
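The superposition step in this abstract, averaging word-level activation patterns to predict a sentence-level pattern, reduces to simple vector arithmetic. Here is a toy sketch; the words and the three-element "activation patterns" are invented stand-ins (in the study, word patterns are estimated from the attribute model via regression over thousands of voxels):

```python
# Made-up word-level activation patterns (3 "voxels" each, for illustration).
word_patterns = {
    "dog":    [1.0, 0.2, 0.1],
    "chased": [0.3, 1.0, 0.2],
    "ball":   [0.2, 0.1, 1.0],
}

def predict_sentence(words, patterns):
    """Predict a sentence's activation pattern by averaging its word patterns."""
    vecs = [patterns[w] for w in words]
    n = len(vecs)
    return [sum(v[i] for v in vecs) / n for i in range(len(vecs[0]))]

pred = predict_sentence(["dog", "chased", "ball"], word_patterns)
```

The predicted sentence pattern can then be compared against the observed fMRI pattern for that sentence, e.g., by correlation, to score prediction accuracy.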


Neuropsychologia | 2015

Predicting brain activation patterns associated with individual lexical concepts based on five sensory-motor attributes

Leonardo Fernandino; Colin Humphries; Mark S. Seidenberg; William L. Gross; Lisa L. Conant; Jeffrey R. Binder

While major advances have been made in uncovering the neural processes underlying perceptual representations, our grasp of how the brain gives rise to conceptual knowledge remains relatively poor. Recent work has provided strong evidence that concepts rely, at least in part, on the same sensory and motor neural systems through which they were acquired, but it is still unclear whether the neural code for concept representation uses information about sensory-motor features to discriminate between concepts. In the present study, we investigate this question by asking whether an encoding model based on five semantic attributes directly related to sensory-motor experience - sound, color, visual motion, shape, and manipulation - can successfully predict patterns of brain activation elicited by individual lexical concepts. We collected ratings on the relevance of these five attributes to the meaning of 820 words, and used these ratings as predictors in a multiple regression model of the fMRI signal associated with the words in a separate group of participants. The five resulting activation maps were then combined by linear summation to predict the distributed activation pattern elicited by a novel set of 80 test words. The encoding model predicted the activation patterns elicited by the test words significantly better than chance. As expected, prediction was successful for concrete but not for abstract concepts. Comparisons between encoding models based on different combinations of attributes indicate that all five attributes contribute to the representation of concrete concepts. Consistent with embodied theories of semantics, these results show, for the first time, that the distributed activation pattern associated with a concept combines information about different sensory-motor attributes according to their respective relevance. Future research should investigate how additional features of phenomenal experience contribute to the neural representation of conceptual knowledge.
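The "combined by linear summation" step in this abstract can be sketched directly: each attribute has an activation map (one value per voxel, estimated by regression in the study), and a word's predicted pattern is the sum of those maps weighted by the word's attribute ratings. Everything below is illustrative; the 4-voxel maps and the ratings for "bell" are invented, and only the five attribute names come from the abstract.

```python
# Hypothetical per-attribute activation maps over 4 "voxels".
# In the study these come from regressing fMRI signal on attribute ratings.
attribute_maps = {
    "sound":         [0.9, 0.1, 0.0, 0.2],
    "color":         [0.1, 0.8, 0.1, 0.0],
    "visual_motion": [0.0, 0.2, 0.7, 0.1],
    "shape":         [0.2, 0.3, 0.1, 0.6],
    "manipulation":  [0.1, 0.0, 0.2, 0.8],
}

def predict_pattern(ratings, maps):
    """Linear summation: weight each attribute map by the word's rating."""
    n_vox = len(next(iter(maps.values())))
    pattern = [0.0] * n_vox
    for attr, weight in ratings.items():
        for i, v in enumerate(maps[attr]):
            pattern[i] += weight * v
    return pattern

# Invented ratings for a hypothetical test word.
bell_ratings = {"sound": 5.0, "color": 1.0, "visual_motion": 0.5,
                "shape": 3.0, "manipulation": 2.0}
bell_pred = predict_pattern(bell_ratings, attribute_maps)
```

Prediction quality for held-out words is then assessed by comparing such synthesized patterns to the observed activation patterns, which is how the model is tested against chance.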


The Journal of Neuroscience | 2016

Heteromodal Cortical Areas Encode Sensory-Motor Features of Word Meaning

Leonardo Fernandino; Colin Humphries; Lisa L. Conant; Mark S. Seidenberg; Jeffrey R. Binder

The capacity to process information in conceptual form is a fundamental aspect of human cognition, yet little is known about how this type of information is encoded in the brain. Although the role of sensory and motor cortical areas has been a focus of recent debate, neuroimaging studies of concept representation consistently implicate a network of heteromodal areas that seem to support concept retrieval in general rather than knowledge related to any particular sensory-motor content. We used predictive machine learning on fMRI data to investigate the hypothesis that cortical areas in this "general semantic network" (GSN) encode multimodal information derived from basic sensory-motor processes, possibly functioning as convergence-divergence zones for distributed concept representation. An encoding model based on five conceptual attributes directly related to sensory-motor experience (sound, color, shape, manipulability, and visual motion) was used to predict brain activation patterns associated with individual lexical concepts in a semantic decision task. When the analysis was restricted to voxels in the GSN, the model was able to identify the activation patterns corresponding to individual concrete concepts significantly above chance. In contrast, a model based on five perceptual attributes of the word form performed at chance level. This pattern was reversed when the analysis was restricted to areas involved in the perceptual analysis of written word forms. These results indicate that heteromodal areas involved in semantic processing encode information about the relative importance of different sensory-motor attributes of concepts, possibly by storing particular combinations of sensory and motor features.

SIGNIFICANCE STATEMENT: The present study used a predictive encoding model of word semantics to decode conceptual information from neural activity in heteromodal cortical areas. The model is based on five sensory-motor attributes of word meaning (color, shape, sound, visual motion, and manipulability) and encodes the relative importance of each attribute to the meaning of a word. This is the first demonstration that heteromodal areas involved in semantic processing can discriminate between different concepts based on sensory-motor information alone. This finding indicates that the brain represents concepts as multimodal combinations of sensory and motor representations.
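The "identify the activation patterns corresponding to individual concepts" step is typically a correlation-matching procedure: the model's predicted pattern for a concept is matched against the observed patterns, and identification succeeds when the correct concept's observed pattern is the best match. A minimal sketch, with entirely made-up observed patterns and a made-up prediction (only the matching logic is the point):

```python
import math

def pearson(u, v):
    """Pearson correlation between two equal-length activation vectors."""
    n = len(u)
    mu, mv = sum(u) / n, sum(v) / n
    du = [x - mu for x in u]
    dv = [x - mv for x in v]
    num = sum(a * b for a, b in zip(du, dv))
    den = math.sqrt(sum(a * a for a in du)) * math.sqrt(sum(b * b for b in dv))
    return num / den

def identify(predicted, observed):
    """Label of the observed pattern most correlated with the prediction."""
    return max(observed, key=lambda label: pearson(predicted, observed[label]))

# Invented observed patterns for two concepts, and an invented model
# prediction for "cup"; identification picks the best-correlated match.
observed = {"cup": [0.9, 0.1, 0.4], "dog": [0.1, 0.8, 0.2]}
predicted_cup = [0.8, 0.2, 0.5]
```

Running this identification over many held-out concepts, and comparing the hit rate to chance, is the kind of test that distinguishes the semantic model from the word-form model in the abstract.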


Brain and Cognition | 2008

Dynamic modularity in interhemispheric interaction

Eran Zaidel; Leonardo Fernandino

The study presented lateralized cartoon faces (happy, angry) with color backgrounds (yellow, red). The pairs "happy-yellow" and "angry-red" were considered congruent and the pairs "happy-red" and "angry-yellow" were considered incongruent. We asked whether the faster task ("speed account") or more automatic task ("automaticity account") affects the slower or less automatic one. In Experiment 1, participants identified the emotion or the color of the stimulus. Color identification was faster than Emotion identification, and there was a larger Stroop effect in the Emotion task. There was also a significant correlation between the difference in the speeds of the two tasks and the Stroop effect in the Emotion task. This supports the "speed account." In Experiment 2, participants repeated Experiment 1 and also rated their familiarity with the associations "happy-yellow" and "angry-red." Familiarity did not correlate with the Stroop effect. This argues against the "automaticity account." In Experiment 3, participants repeated Experiment 2 and also rated the degree to which the faces engaged the relevant emotions. Emotional engagement did not correlate with the Stroop effect. In Experiment 4, we replaced the cartoon faces with real faces. The resulting Stroop effect was significant but smaller than with the cartoon faces. In Experiment 5, using the same stimuli as Experiment 4, participants identified the sex of the poser. The goal was to determine whether the Stroop effect can be generated automatically and grab attentional resources away from the primary task. We found no Stroop effect, suggesting that in order for a Stroop effect to occur, the task decision must apply to the stimulus dimensions that generate the effect. In Experiment 6, we used the same faces as in Experiment 5, but replaced the color by the name of the emotion, printed next to it. There was a larger Stroop effect than in Experiment 5 and, again, the speed account was supported. We conclude that the color-emotion Stroop task showed a significant but modest Stroop effect (about 20 ms), consistent with the "speed" but not with the "automaticity" account. There were no hemispheric differences in any of the Stroop effects.
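The Stroop effect quantified throughout this abstract is simply the mean reaction-time difference between incongruent and congruent trials. A tiny sketch with invented reaction times (chosen only to land near the ~20 ms effect size the abstract reports):

```python
from statistics import mean

# Hypothetical reaction times in milliseconds, per trial.
congruent_rts = [512, 498, 530, 505]
incongruent_rts = [540, 515, 548, 529]

def stroop_effect(congruent, incongruent):
    """Stroop effect: mean incongruent RT minus mean congruent RT."""
    return mean(incongruent) - mean(congruent)

effect = stroop_effect(congruent_rts, incongruent_rts)
```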


Cerebral Cortex | 2016

Concept Representation Reflects Multimodal Abstraction: A Framework for Embodied Semantics

Leonardo Fernandino; Jeffrey R. Binder; Rutvik H. Desai; Suzanne L. Pendl; Colin Humphries; William L. Gross; Lisa L. Conant; Mark S. Seidenberg


Brain and Language | 2013

Parkinson's disease disrupts both automatic and controlled processing of action verbs

Leonardo Fernandino; Lisa L. Conant; Jeffrey R. Binder; Karen Blindauer; Bradley Hiner; Katie Spangler; Rutvik H. Desai


Brain and Language | 2010

Are cortical motor maps based on body parts or coordinated actions? Implications for embodied semantics.

Leonardo Fernandino; Marco Iacoboni

Collaboration

Top Co-Authors

Jeffrey R. Binder (Medical College of Wisconsin)
Lisa L. Conant (Medical College of Wisconsin)
Colin Humphries (Medical College of Wisconsin)
Rutvik H. Desai (University of South Carolina)
Mark S. Seidenberg (University of Wisconsin-Madison)
Bradley Hiner (Medical College of Wisconsin)
Eran Zaidel (University of California)
Karen Blindauer (Medical College of Wisconsin)
Katie Spangler (Medical College of Wisconsin)
Marco Iacoboni (University of California)