Barry Devereux
University of Cambridge
Publications
Featured research published by Barry Devereux.
Brain | 2011
Lorraine K. Tyler; William D. Marslen-Wilson; Billi Randall; Paul Wright; Barry Devereux; Jie Zhuang; Marina Papoutsi; Emmanuel A. Stamatakis
For the past 150 years, neurobiological models of language have debated the role of key brain regions in language function. One consistently debated set of issues concerns the role of the left inferior frontal gyrus in syntactic processing. Here we combine measures of functional activity, grey matter integrity and performance in patients with left hemisphere damage and healthy participants to ask whether the left inferior frontal gyrus is essential for syntactic processing. In a functional neuroimaging study, participants listened to spoken sentences that contained either a syntactically ambiguous phrase or a matched unambiguous phrase. Behavioural data on three tests of syntactic processing were subsequently collected. In controls, syntactic processing co-activated left hemisphere Brodmann areas 45/47 and posterior middle temporal gyrus. Activity in a left parietal cluster was sensitive to working memory demands in both patients and controls. Exploiting the variability in lesion location and performance in the patients, voxel-based correlational analyses showed that tissue integrity and neural activity—primarily in left Brodmann area 45 and posterior middle temporal gyrus—were correlated with preserved syntactic performance, but unlike the controls, patients were insensitive to syntactic preferences, reflecting their syntactic deficit. These results argue for the essential contribution of the left inferior frontal gyrus in syntactic analysis and highlight the functional relationship between left Brodmann area 45 and the left posterior middle temporal gyrus, suggesting that when this relationship breaks down, through damage to either region or to the connections between them, syntactic processing is impaired. On this view, the left inferior frontal gyrus may not itself be specialized for syntactic processing, but plays an essential role in the neural network that carries out syntactic computations.
Journal of Cognitive Neuroscience | 2013
Lorraine K. Tyler; Shannon Chiu; Jie Zhuang; Billi Randall; Barry Devereux; Paul Wright; Alex Clarke; Kirsten I. Taylor
Recognizing an object involves more than just visual analyses; its meaning must also be decoded. Extensive research has shown that processing the visual properties of objects relies on a hierarchically organized stream in ventral occipitotemporal cortex, with increasingly more complex visual features being coded from posterior to anterior sites culminating in the perirhinal cortex (PRC) in the anteromedial temporal lobe (aMTL). The neurobiological principles of the conceptual analysis of objects remain more controversial. Much research has focused on two neural regions—the fusiform gyrus and aMTL, both of which show semantic category differences, but of different types. fMRI studies show category differentiation in the fusiform gyrus, based on clusters of semantically similar objects, whereas category-specific deficits, specifically for living things, are associated with damage to the aMTL. These category-specific deficits for living things have been attributed to problems in differentiating between highly similar objects, a process that involves the PRC. To determine whether the PRC and the fusiform gyri contribute to different aspects of an object's meaning, with differentiation between confusable objects in the PRC and categorization based on object similarity in the fusiform, we carried out an fMRI study of object processing based on a feature-based model that characterizes the degree of semantic similarity and difference between objects and object categories. Participants saw 388 objects for which feature statistic information was available and named the objects at the basic level while undergoing fMRI scanning.
After controlling for the effects of visual information, we found that feature statistics that capture similarity between objects formed category clusters in fusiform gyri, such that objects with many shared features (typical of living things) were associated with activity in the lateral fusiform gyri whereas objects with fewer shared features (typical of nonliving things) were associated with activity in the medial fusiform gyri. Significantly, a feature statistic reflecting differentiation between highly similar objects, enabling object-specific representations, was associated with bilateral PRC activity. These results confirm that the statistical characteristics of conceptual object features are coded in the ventral stream, supporting a conceptual feature-based hierarchy, and integrating disparate findings of category responses in fusiform gyri and category deficits in aMTL into a unifying neurocognitive framework.
Cerebral Cortex | 2013
Alex Clarke; Kirsten I. Taylor; Barry Devereux; Billi Randall; Lorraine K. Tyler
To recognize visual objects, our sensory perceptions are transformed through dynamic neural interactions into meaningful representations of the world, but exactly how visual inputs invoke object meaning remains unclear. To address this issue, we apply a regression approach to magnetoencephalography data, modeling perceptual and conceptual variables. Key conceptual measures were derived from semantic feature-based models claiming shared features (e.g., has eyes) provide broad category information, while distinctive features (e.g., has a hump) are additionally required for more specific object identification. Our results show initial perceptual effects in visual cortex that are rapidly followed by semantic feature effects throughout ventral temporal cortex within the first 120 ms. Moreover, these early semantic effects reflect shared semantic feature information supporting coarse category-type distinctions. Post-200 ms, we observed effects along the extent of ventral temporal cortex for both shared and distinctive features, which together allow for conceptual differentiation and object identification. By relating spatiotemporal neural activity to statistical feature-based measures of semantic knowledge, we demonstrate that qualitatively different kinds of perceptual and semantic information are extracted from visual objects over time, with rapid activation of shared object features followed by concomitant activation of distinctive features that together enable meaningful visual object recognition.
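The regression approach described in this abstract can be sketched as a mass-univariate fit at each time point, with per-trial stimulus measures (e.g. a perceptual statistic plus shared- and distinctive-feature counts) as predictors of the MEG signal. This is a minimal NumPy illustration under assumed data shapes, not the paper's actual analysis pipeline; the function name is hypothetical.

```python
import numpy as np

def timepoint_regression(meg, predictors):
    """Fit an ordinary least-squares model at each time point.

    meg:        (n_trials, n_times) signal amplitude per trial
    predictors: (n_trials, n_predictors) stimulus measures, e.g. a
                perceptual statistic and semantic-feature statistics
    Returns a (n_times, n_predictors) array of beta weights, so each
    predictor's influence can be traced over time.
    """
    n_trials, _ = meg.shape
    # Add an intercept column so betas reflect effects beyond the mean.
    X = np.column_stack([np.ones(n_trials), predictors])
    betas = np.empty((meg.shape[1], predictors.shape[1]))
    for t in range(meg.shape[1]):
        coef, *_ = np.linalg.lstsq(X, meg[:, t], rcond=None)
        betas[t] = coef[1:]  # drop the intercept term
    return betas
```

Time courses of the resulting betas (e.g. a shared-feature effect emerging before a distinctive-feature effect) are the kind of pattern the study reports.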
The Journal of Neuroscience | 2013
Barry Devereux; Alex Clarke; Andreas Marouchos; Lorraine K. Tyler
Understanding the meanings of words and objects requires the activation of underlying conceptual representations. Semantic representations are often assumed to be coded such that meaning is evoked regardless of the input modality. However, the extent to which meaning is coded in modality-independent or amodal systems remains controversial. We address this issue in a human fMRI study investigating the neural processing of concepts, presented separately as written words and pictures. Activation maps for each individual word and picture were used as input for searchlight-based multivoxel pattern analyses. Representational similarity analysis was used to identify regions correlating with low-level visual models of the words and objects and the semantic category structure common to both. Common semantic category effects for both modalities were found in a left-lateralized network, including left posterior middle temporal gyrus (LpMTG), left angular gyrus, and left intraparietal sulcus (LIPS), in addition to object- and word-specific semantic processing in ventral temporal cortex and more anterior MTG, respectively. To explore differences in representational content across regions and modalities, we developed novel data-driven analyses, based on k-means clustering of searchlight dissimilarity matrices and seeded correlation analysis. These revealed subtle differences in the representations in semantic-sensitive regions, with representations in LIPS being relatively invariant to stimulus modality and representations in LpMTG being uncorrelated across modality. These results suggest that, although both LpMTG and LIPS are involved in semantic processing, only the functional role of LIPS is the same regardless of the visual input, whereas the functional role of LpMTG differs for words and objects.
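The representational similarity analysis used here compares second-order structure: a neural dissimilarity matrix computed from activation patterns (e.g. within one searchlight sphere) is rank-correlated with a model dissimilarity matrix, such as one derived from semantic category structure. A minimal NumPy-only sketch under those assumptions; the function names are hypothetical and the study's searchlight procedure and visual models are not reproduced.

```python
import numpy as np

def neural_rdm(patterns):
    """Condensed correlation-distance RDM (1 - Pearson r) over stimuli.

    patterns: (n_stimuli, n_voxels) activation patterns.
    Returns the upper triangle of the stimulus-by-stimulus RDM.
    """
    r = np.corrcoef(patterns)                 # stimulus x stimulus correlations
    iu = np.triu_indices_from(r, k=1)
    return 1.0 - r[iu]

def spearman(a, b):
    """Spearman rank correlation (assumes no tied values)."""
    def rank(x):
        ranks = np.empty(len(x))
        ranks[np.argsort(x)] = np.arange(len(x))
        return ranks
    return np.corrcoef(rank(a), rank(b))[0, 1]

def rsa_fit(patterns, model_rdm):
    """How well a model RDM explains the neural dissimilarity structure."""
    return spearman(neural_rdm(patterns), model_rdm)
```

Running `rsa_fit` with different model RDMs (low-level visual vs. semantic category) in each brain region is the basic move that distinguishes modality-specific from common semantic representations.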
Cerebral Cortex | 2015
Alex Clarke; Barry Devereux; Billi Randall; Lorraine K. Tyler
To respond appropriately to objects, we must process visual inputs rapidly and assign them meaning. This involves highly dynamic, interactive neural processes through which information accumulates and cognitive operations are resolved across multiple time scales. However, there is currently no model of object recognition which provides an integrated account of how visual and semantic information emerge over time; therefore, it remains unknown how and when semantic representations are evoked from visual inputs. Here, we test whether a model of individual objects—based on combining the HMax computational model of vision with semantic-feature information—can account for and predict time-varying neural activity recorded with magnetoencephalography. We show that combining HMax and semantic properties provides a better account of neural object representations compared with HMax alone, both through model fit and classification performance. Our results show that modeling and classifying individual objects is significantly improved by adding semantic-feature information beyond ∼200 ms. These results provide important insights into the functional properties of visual processing across time.
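The core model-comparison logic, asking whether adding semantic-feature predictors improves fit over a vision-only model, can be sketched as a nested regression comparison of R² values. This is an illustrative stand-in under assumed shapes, not the HMax model or the paper's fitting procedure; both function names are hypothetical.

```python
import numpy as np

def r_squared(y, X):
    """R^2 of an ordinary least-squares fit of y on X (intercept added)."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1.0 - resid.var() / y.var()

def model_improvement(neural, visual_preds, semantic_preds):
    """Gain in fit from adding semantic predictors to a vision-only model.

    A positive return value means semantic-feature information explains
    neural variance beyond what the visual predictors capture.
    """
    base = r_squared(neural, visual_preds)
    full = r_squared(neural, np.column_stack([visual_preds, semantic_preds]))
    return full - base
```

Computed at each time point, this difference would be expected to become reliably positive beyond ~200 ms on the abstract's account.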
Cognition | 2012
Kirsten I. Taylor; Barry Devereux; K. Acres; Billi Randall; Lorraine K. Tyler
Conceptual representations are at the heart of our mental lives, involved in every aspect of cognitive functioning. Despite their centrality, a long-standing debate persists as to how the meanings of concepts are represented and processed. Many accounts agree that the meanings of concrete concepts are represented by their individual features, but disagree about the importance of different feature-based variables: some views stress the importance of the information carried by distinctive features in conceptual processing, others the features which are shared over many concepts, and still others the extent to which features co-occur. We suggest that previously disparate theoretical positions and experimental findings can be unified by an account which claims that task demands determine how concepts are processed in addition to the effects of feature distinctiveness and co-occurrence. We tested these predictions in a basic-level naming task which relies on distinctive feature information (Experiment 1) and a domain decision task which relies on shared feature information (Experiment 2). Both used large-scale regression designs with the same visual objects, and mixed-effects models incorporating participant, session, stimulus-related and feature statistic variables to model performance. We found that concepts with relatively more distinctive and more highly correlated distinctive relative to shared features facilitated basic-level naming latencies, while concepts with relatively more shared and more highly correlated shared relative to distinctive features speeded domain decisions. These findings demonstrate that the feature statistics of distinctiveness (shared vs. distinctive) and correlational strength, as well as the task demands, determine how concept meaning is processed in the conceptual system.
Language and Cognitive Processes | 2011
Kirsten I. Taylor; Barry Devereux; Lorraine K. Tyler
How are the meanings of concepts represented and processed? We present a cognitive model of conceptual representations and processing—the Conceptual Structure Account (CSA; Tyler & Moss, 2001)—as an example of a distributed, feature-based approach. In the first section, we describe the CSA and evaluate relevant neuropsychological and experimental behavioural data. We discuss studies using linguistic and nonlinguistic stimuli, which are both presumed to access the same conceptual system. We then take the CSA as a framework for hypothesising how conceptual knowledge is represented and processed in the brain. This neurocognitive approach attempts to integrate the distributed feature-based characteristics of the CSA with a distributed and feature-based model of sensory object processing. Based on a review of relevant functional imaging and neuropsychological data, we argue that distributed accounts of feature-based representations have considerable explanatory power, and that a cognitive model of conceptual representations is needed to understand their neural bases.
Behavior Research Methods | 2014
Barry Devereux; Lorraine K. Tyler; Jeroen Geertzen; Billi Randall
Theories of the representation and processing of concepts have been greatly enhanced by models based on information available in semantic property norms. This information relates both to the identity of the features produced in the norms and to their statistical properties. In this article, we introduce a new and large set of property norms that are designed to be a more flexible tool to meet the demands of many different disciplines interested in conceptual knowledge representation, from cognitive psychology to computational linguistics. As well as providing all features listed by two or more participants, we also show the considerable linguistic variation that underlies each normalized feature label and the number of participants who generated each variant. Our norms are highly comparable with the largest extant set (McRae, Cree, Seidenberg, & McNorgan, 2005) in terms of the number and distribution of features. In addition, we show how the norms give rise to a coherent category structure. We provide these norms in the hope that the greater detail available in the Centre for Speech, Language and the Brain norms will further promote the development of models of conceptual knowledge. The norms can be downloaded at www.csl.psychol.cam.ac.uk/propertynorms.
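Feature statistics of the kind these norms support are straightforward to derive from a binary concept × feature matrix. A minimal sketch using one common definition from the norming literature, distinctiveness as the reciprocal of the number of concepts a feature occurs in; the function name and matrix layout are illustrative assumptions.

```python
import numpy as np

def feature_statistics(matrix):
    """Per-concept statistics from a binary concept x feature matrix.

    matrix[i, j] = 1 if feature j was listed for concept i.
    Returns (n_features_per_concept, mean_distinctiveness), where a
    feature's distinctiveness is 1 / (number of concepts it occurs in):
    widely shared features score low, concept-specific features score 1.
    """
    matrix = np.asarray(matrix, dtype=float)
    concepts_per_feature = matrix.sum(axis=0)      # how widely each feature is shared
    distinctiveness = 1.0 / concepts_per_feature   # rarer => more distinctive
    n_feats = matrix.sum(axis=1)
    mean_dist = (matrix * distinctiveness).sum(axis=1) / n_feats
    return n_feats, mean_dist
```

Statistics like these are the raw material for the regression and neuroimaging designs in the other papers listed here.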
Frontiers in Psychology | 2013
Lorraine K. Tyler; Teresa Pl Cheung; Barry Devereux; Alex Clarke
The core human capacity of syntactic analysis involves a left hemisphere network comprising the left inferior frontal gyrus (LIFG), the left posterior middle temporal gyrus (LpMTG), and the anatomical connections between them. Here we use magnetoencephalography (MEG) to determine the spatio-temporal properties of syntactic computations in this network. Listeners heard spoken sentences containing a local syntactic ambiguity (e.g., “… landing planes …”), at the offset of which they heard a disambiguating verb and decided whether it was an acceptable/unacceptable continuation of the sentence. We charted the time-course of processing and resolving syntactic ambiguity by measuring MEG responses from the onset of each word in the ambiguous phrase and the disambiguating word. We used representational similarity analysis (RSA) to characterize syntactic information represented in the LIFG and the LpMTG over time and to investigate their relationship to each other. Testing a variety of lexico-syntactic and ambiguity models against the MEG data, our results suggest early lexico-syntactic responses in the LpMTG and later effects of ambiguity in the LIFG, pointing to a clear differentiation in the functional roles of these two regions. Our results suggest the LpMTG represents and transmits lexical information to the LIFG, which responds to and resolves the ambiguity.
Artificial Intelligence Review | 2005
Barry Devereux; Fintan Costello
How do people understand noun–noun compounds such as volcano science and pear bowl? In this paper, we present evidence against one approach to noun–noun compounds, namely that of arranging the meanings of compounds into a small, finite taxonomy of general semantic relations. Using a typical relation taxonomy, we conducted an experiment examining how people classify compounds into the taxonomy’s relation categories. We found that people often select not one but several relations for each compound; for example, people classify coffee stain as coffee MAKES stain, stain MADE OF coffee, coffee CAUSES stain and stain DERIVED FROM coffee. A natural metric for relational similarity follows from our experimental data; we found that using cluster analysis to group compounds’ interpretations with respect to this metric produced groupings that were different from the original taxonomic categories, suggesting that there is more than one way to classify the meanings of compounds. We also found that compounds which had similar constituent concepts tended to be interpreted with similar relations, indicating that the intrinsic properties of a compound’s constituent concepts help determine how that compound is interpreted. Such findings are problematic for taxonomic theories of conceptual combination.
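A relational-similarity metric of this general kind can be illustrated by treating each compound as a distribution over relation categories (how often participants selected each relation) and comparing those distributions. A hypothetical sketch: the relation inventory and the cosine measure are illustrative assumptions, not the paper's exact taxonomy or metric.

```python
import numpy as np

# Illustrative relation inventory (hypothetical subset, not the study's taxonomy).
RELATIONS = ["MAKES", "MADE OF", "CAUSES", "DERIVED FROM", "FOR", "LOCATED"]

def relation_vector(counts):
    """Normalize per-relation selection counts into a distribution."""
    v = np.asarray(counts, dtype=float)
    return v / v.sum()

def relational_similarity(counts_a, counts_b):
    """Cosine similarity between two compounds' relation distributions.

    Compounds whose participants chose similar mixtures of relations
    score near 1; compounds with disjoint relation choices score near 0.
    """
    a, b = relation_vector(counts_a), relation_vector(counts_b)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
```

Feeding the resulting pairwise similarities into a standard cluster analysis would yield groupings of compound interpretations, the step at which the paper finds structure diverging from the original taxonomic categories.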