Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Leila Wehbe is active.

Publications


Featured research published by Leila Wehbe.


PLOS ONE | 2014

Simultaneously uncovering the patterns of brain regions involved in different story reading subprocesses

Leila Wehbe; Brian Murphy; Partha Pratim Talukdar; Alona Fyshe; Aaditya Ramdas; Tom M. Mitchell

Story understanding involves many perceptual and cognitive subprocesses, from perceiving individual words, to parsing sentences, to understanding the relationships among the story characters. We present an integrated computational model of reading that incorporates these and additional subprocesses, simultaneously discovering their fMRI signatures. Our model predicts the fMRI activity associated with reading arbitrary text passages, well enough to distinguish which of two story segments is being read with 74% accuracy. This approach is the first to simultaneously track diverse reading subprocesses during complex story processing and predict the detailed neural representation of diverse story features, ranging from visual word properties to the mention of different story characters and the different actions they perform. We construct brain representation maps that replicate many results from a wide range of classical studies, each of which focuses on one aspect of language processing, and offer new insights into which types of information are processed by the different areas involved in language processing. Additionally, this approach is promising for studying individual differences: it can be used to create single-subject maps that may potentially be used to measure reading comprehension and diagnose reading disorders.
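
A minimal sketch of the kind of encoding model the abstract describes: a regularized linear map from story features to voxel activity, scored by checking whether predictions match the correct story segment. The shapes, alpha, and data below are hypothetical placeholders, not the authors' implementation.

```python
# Toy encoding model: predict fMRI activity from story features, then check
# which of two held-out story segments the predictions match (as in the
# paper's segment-classification test). All data here are random stand-ins.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
X = rng.standard_normal((300, 50))    # story features per timepoint (toy)
Y = rng.standard_normal((300, 1000))  # activity of 1000 voxels (toy)

# Fit one ridge model per voxel; sklearn's Ridge handles the multi-output case.
train, test = slice(0, 250), slice(250, 300)
model = Ridge(alpha=10.0).fit(X[train], Y[train])
Y_pred = model.predict(X[test])

# Segment classification: predictions should be closer to the observed
# activity of the segment actually being read than to the other segment's.
seg_a, seg_b = slice(0, 25), slice(25, 50)
def dist(a, b):
    return np.linalg.norm(a - b)
matched = (dist(Y_pred[seg_a], Y[test][seg_a]) + dist(Y_pred[seg_b], Y[test][seg_b])
           < dist(Y_pred[seg_a], Y[test][seg_b]) + dist(Y_pred[seg_b], Y[test][seg_a]))
print("segments matched correctly:", matched)
```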


NeuroImage | 2012

Tracking neural coding of perceptual and semantic features of concrete nouns

Gustavo Sudre; Dean A. Pomerleau; Mark Palatucci; Leila Wehbe; Alona Fyshe; Riitta Salmelin; Tom M. Mitchell

We present a methodological approach employing magnetoencephalography (MEG) and machine learning techniques to investigate the flow of perceptual and semantic information decodable from neural activity in the half second during which the brain comprehends the meaning of a concrete noun. Important information about the cortical location of neural activity related to the representation of nouns in the human brain has been revealed by past studies using fMRI. However, the temporal sequence of processing from sensory input to concept comprehension remains unclear, in part because of the poor time resolution provided by fMRI. In this study, subjects answered 20 questions (e.g. is it alive?) about the properties of 60 different nouns prompted by simultaneous presentation of a pictured item and its written name. Our results show that the neural activity observed with MEG encodes a variety of perceptual and semantic features of stimuli at different times relative to stimulus onset, and in different cortical locations. By decoding these features, our MEG-based classifier was able to reliably distinguish between two different concrete nouns that it had never seen before. The results demonstrate that there are clear differences between the time course of the magnitude of MEG activity and that of decodable semantic information. Perceptual features were decoded from MEG activity earlier in time than semantic features, and features related to animacy, size, and manipulability were decoded consistently across subjects. We also observed that regions commonly associated with semantic processing in the fMRI literature may not show high decoding results in MEG. We believe that this type of approach and the accompanying machine learning methods can form the basis for further modeling of the flow of neural information during language processing and a variety of other cognitive processes.
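
One way to picture the time-resolved decoding the abstract describes is a classifier trained independently at each time point, tracing when a stimulus property becomes readable from the signal. This is a minimal sketch under assumed shapes; meg and is_alive are toy placeholders, not the study's data.

```python
# Toy time-resolved MEG decoding: decode a binary property ("is it alive?")
# at every time sample and track the cross-validated accuracy over time.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
meg = rng.standard_normal((120, 102, 60))   # trials x sensors x time samples
is_alive = rng.integers(0, 2, size=120)     # toy stimulus property labels

# Decode the feature separately at every time point; the resulting accuracy
# curve shows *when* the feature is present in the neural signal.
accuracy = []
for t in range(meg.shape[2]):
    clf = LogisticRegression(max_iter=1000)
    scores = cross_val_score(clf, meg[:, :, t], is_alive, cv=5)
    accuracy.append(scores.mean())
print("peak decoding accuracy:", max(accuracy))
```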


North American Chapter of the Association for Computational Linguistics | 2015

A Compositional and Interpretable Semantic Space

Alona Fyshe; Leila Wehbe; Partha Pratim Talukdar; Brian Murphy; Tom M. Mitchell

Vector space models (VSMs) of semantics are useful tools for exploring the semantics of single words and the composition of words into phrasal meaning. While many methods can estimate the meaning (i.e., vector) of a phrase, few do so in an interpretable way. We introduce a new method (CNNSE) that allows word and phrase vectors to adapt to the notion of composition. Our method learns a VSM that is tailored to support a chosen semantic composition operation and whose resulting features have an intuitive interpretation. Interpretability allows for the exploration of phrasal semantics, which we leverage to analyze performance on a behavioral task.
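
CNNSE itself is not reproduced here, but the flavor of an interpretable, non-negative VSM with a composition operation can be sketched with off-the-shelf NMF. The co-occurrence matrix, vocabulary, and additive composition rule below are all illustrative assumptions, simplified stand-ins for the paper's method.

```python
# Simplified stand-in for an interpretable sparse VSM (not CNNSE itself):
# factorize a non-negative word-by-context co-occurrence matrix, interpret
# each latent dimension by its top-loading words, and compose phrase vectors
# additively. All data here are toy placeholders.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
cooc = rng.poisson(1.0, size=(500, 2000)).astype(float)  # toy co-occurrence counts
vocab = [f"word{i}" for i in range(500)]

nmf = NMF(n_components=50, init="nndsvd", max_iter=300)
W = nmf.fit_transform(cooc)   # word vectors: (500 words, 50 latent dimensions)

# Interpret a dimension by the words that load most heavily on it.
top = np.argsort(W[:, 0])[::-1][:5]
print("dimension 0 is characterized by:", [vocab[i] for i in top])

# A crude composition operation: a phrase vector as the sum of its word vectors.
phrase_vec = W[vocab.index("word3")] + W[vocab.index("word7")]
```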


Empirical Methods in Natural Language Processing | 2014

Aligning context-based statistical models of language with brain activity during reading

Leila Wehbe; Ashish Vaswani; Kevin Knight; Tom M. Mitchell

Many statistical models for natural language processing exist, including context-based neural networks that (1) model the previously seen context as a latent feature vector, (2) integrate successive words into the context using some learned representation (embedding), and (3) compute output probabilities for incoming words given the context. On the other hand, brain imaging studies have suggested that during reading the brain (a) continuously builds a context from the successive words, and every time it encounters a word it (b) fetches its properties from memory and (c) integrates it with the previous context with a degree of effort that is inversely proportional to how probable the word is. This hints at a parallelism between the neural networks and the brain in modeling context (1 and a), representing the incoming words (2 and b), and integrating them (3 and c). We explore this parallelism to better understand both the brain's processes and the neural networks' representations. We study the alignment between the latent vectors used by neural networks and the brain activity observed via magnetoencephalography (MEG) when subjects read a story. For that purpose, we apply the neural network to the same text the subjects are reading and explore the ability of these three vector representations to predict the observed word-by-word brain activity. Our results show that, first, before a new word i is read, brain activity is well predicted by the neural network's latent representation of context, and this predictability decreases as the brain integrates the word and updates its own representation of context. Second, the neural network's embedding of word i predicts the MEG activity when word i is presented to the subject, revealing that it is correlated with the brain's own representation of word i. Moreover, the activity is predicted in different regions of the brain with varying delays, and these delays are consistent with each region's place on the processing pathway that starts in the visual cortex and moves to higher-level regions. Finally, we show that the output probability computed by the neural network agrees with the brain's own assessment of the probability of word i: it predicts the brain activity after word i's properties have been fetched from memory, while the brain is integrating the word into the context.
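
The core analysis can be pictured as a lagged regression from language-model vectors to brain activity. This is a minimal sketch under assumed shapes: context_vecs would come from running a language model over the story text, and both arrays here are toy placeholders rather than the paper's data or model.

```python
# Toy alignment analysis: regress MEG activity on a language model's per-word
# context vectors at several lags, with cross-validated R^2 per lag.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_words = 400
context_vecs = rng.standard_normal((n_words, 100))  # LM context before each word
meg = rng.standard_normal((n_words, 306))           # sensor activity at each word

def lagged_r2(features, brain, lag):
    """Cross-validated R^2 for predicting brain activity `lag` words later."""
    if lag > 0:
        features, brain = features[:-lag], brain[lag:]
    return cross_val_score(Ridge(alpha=100.0), features, brain,
                           cv=5, scoring="r2").mean()

# A drop in fit with lag would mirror the finding that the context vector
# predicts activity best just before the new word is integrated.
for lag in range(4):
    print(f"lag {lag} words: R^2 = {lagged_r2(context_vecs, meg, lag):.3f}")
```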


Archive | 2012

Machine Learning and Interpretation in Neuroimaging

Irina Rish; Georg Langs; Leila Wehbe; Guillermo A. Cecchi; Kai-min Kevin Chang; Brian Murphy

Improving the interpretability of multivariate models is of primary interest for many neuroimaging studies. In this study, we present an application of multi-task learning (MTL) to enhance the interpretability of linear classifiers applied to neuroimaging data. To attain our goal, we propose to divide the data into spatial fractions and to define the temporal data of each spatial unit as a task in the MTL paradigm. Our results on magnetoencephalography (MEG) data provide preliminary evidence that (1) dividing the brain recordings into spatial fractions based on spatial units of the data and (2) treating each spatial fraction as a task together yield more stable, and consequently more interpretable, brain decoding models.
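
One way to realize "one task per spatial unit" is a group-sparse multi-task model in which all tasks share which time points they use. The sketch below, with an L2,1 penalty solved by proximal gradient descent, is an illustrative assumption about the setup, not the paper's exact formulation; shapes and hyperparameters are toy placeholders.

```python
# Toy group-sparse MTL: one task per spatial unit (e.g., sensor group), a
# shared label vector, and an L2,1 penalty so all tasks select the same
# time points. Solved with proximal gradient descent.
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_times, n_tasks = 200, 50, 10
X = rng.standard_normal((n_tasks, n_trials, n_times))  # per-task time courses
y = rng.integers(0, 2, size=n_trials) * 2.0 - 1.0      # shared labels in {-1, +1}

W = np.zeros((n_times, n_tasks))    # one weight column per task
lam, step = 0.05, 0.1
for _ in range(500):
    # Gradient of the summed least-squares losses, one column per task.
    grad = np.stack([X[t].T @ (X[t] @ W[:, t] - y) / n_trials
                     for t in range(n_tasks)], axis=1)
    W -= step * grad
    # Proximal step for the L2,1 norm: shrink each time point's row across
    # tasks jointly, zeroing out time points that no task finds useful.
    norms = np.maximum(np.linalg.norm(W, axis=1, keepdims=True), 1e-12)
    W *= np.maximum(0.0, 1.0 - step * lam / norms)

kept = np.flatnonzero(np.linalg.norm(W, axis=1) > 1e-8)
print("time points selected jointly by all tasks:", kept)
```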


Archive | 2017

Decoding Language from the Brain

Brian Murphy; Leila Wehbe; Alona Fyshe

In this chapter we review recent computational approaches to the study of language with neuroimaging data. Recordings of brain activity have long played a central role in furthering our understanding of how human language works, with researchers usually choosing to focus tightly on one aspect of the language system. This choice is driven both by the complexity of that system and by the noise and complexity of neuroimaging data itself. State-of-the-art computational methods can help in two respects: by teasing more information from recordings of brain activity and by allowing us to test broader and more articulated theories and detailed representations of language tasks. We first set the scene with a succinct review of neuroimaging techniques and what they have taught us about language processing in the brain. We then describe how recent work has used machine learning methods with brain data and computational models of language to investigate how words and phrases are processed. We finish by introducing emerging naturalistic paradigms that combine authentic language tasks (e.g., reading or listening to a story) with rich models of lexical, sentential, and suprasentential representations to enable an all-round view of language processing.

The study of language, like other cognitive sciences, requires us to engage in a kind of mind reading: we use a variety of methods in an attempt to access the hidden representations and processes that allow humans to converse. In formal linguistics, the theorist's intuitive judgments are used as primary evidence, an approach that brings well-understood dangers of bias (Gibson and Fedorenko, 2010) but in practice can work well (Sprouse et al., 2013). Aggregating judgments over groups of informants is widely used in cognitive and computational linguistics, through both experts in controlled environments and crowdsourcing of naive annotators (Snow et al., 2008). Experimental psycholinguists have used a range of methods that do not rely on intuition, judgment, or subjective reflection, such as the speed of self-paced reading or the order and timing of gaze events recorded with eye-tracking technologies (Rayner, 1998). Brain-recording technologies offer a different kind of evidence, as they are the closest we can get empirically to the object of interest: human cognition. Despite the technical challenges involved, especially the complexity of the recorded signals and the extraneous noise they contain, brain imaging has a decades-long history in psycholinguistics.


The Annals of Applied Statistics | 2015

Regularized brain reading with shrinkage and smoothing

Leila Wehbe; Aaditya Ramdas; Rebecca C. Steorts; Cosma Shalizi

Functional neuroimaging measures how the brain responds to complex stimuli. However, sample sizes are modest, noise is substantial, and stimuli are high dimensional. Hence, direct estimates are inherently imprecise and call for regularization. We compare a suite of approaches that regularize via shrinkage: ridge regression, the elastic net (a generalization of ridge regression and the lasso), and a hierarchical Bayesian model based on small area estimation (SAE). We contrast regularization with spatial smoothing and with combinations of smoothing and shrinkage. All methods are tested on functional magnetic resonance imaging (fMRI) data from multiple subjects participating in two different experiments related to reading, both for predicting neural response to stimuli and for decoding stimuli from responses. Interestingly, when the regularization parameters are chosen by cross-validation independently for every voxel, low regularization is chosen in voxels where classification accuracy is high, and high regularization where it is low, indicating that regularization intensity is a good tool for identifying the voxels relevant to the cognitive task. Surprisingly, all the regularization methods work about equally well, suggesting that beating basic smoothing and shrinkage will take not only clever methods but also careful modeling.
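
A minimal sketch of the per-voxel cross-validated shrinkage the abstract describes, using off-the-shelf ridge regression; X, Y, and the alpha grid are toy placeholders, and sklearn's ElasticNetCV could be swapped in for the elastic net arm of the comparison.

```python
# Toy per-voxel shrinkage: choose the ridge regularization strength for each
# voxel by cross-validation. Per the paper, the chosen alpha itself can serve
# as a signal of how informative the voxel is.
import numpy as np
from sklearn.linear_model import RidgeCV

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 40))    # stimulus features (toy)
Y = rng.standard_normal((200, 100))   # responses of 100 voxels (toy)

alphas = np.logspace(-2, 4, 13)
chosen = []
for v in range(Y.shape[1]):
    # Fit one RidgeCV per voxel and record the cross-validated alpha.
    chosen.append(RidgeCV(alphas=alphas).fit(X, Y[:, v]).alpha_)
print("median cross-validated alpha:", np.median(chosen))
```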


bioRxiv | 2016

The Semantics of Adjective Noun Phrases in the Human Brain

Alona Fyshe; Gustavo Sudre; Leila Wehbe; Nicole S. Rafidi; Tom M. Mitchell

As a person reads, the brain performs complex operations to create higher-order semantic representations from individual words. While these steps are effortless for competent readers, we are only beginning to understand how the brain performs them. Here, we explore semantic composition using magnetoencephalography (MEG) recordings of people reading adjective-noun phrases presented one word at a time. We track the neural representation of semantic information over time, through different brain regions. Our results reveal two novel findings: (1) a neural representation of the adjective is present during noun presentation, but it differs from the representation observed during adjective presentation; and (2) the neural representation of adjective semantics observed during adjective reading is reactivated after phrase reading, with remarkable consistency. We also note that while the semantic representation of the adjective during the reading of the adjective is widely distributed, the later representations are largely concentrated in temporal and frontal areas previously associated with composition. Taken together, these results paint a picture of information flow in the brain as phrases are read and understood.
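
Temporal-generalization decoding is one standard way to probe such reactivation: train a classifier on adjective identity at one time point and test it at every other, so off-diagonal accuracy reveals a representation returning later. The sketch below uses toy placeholder data and assumed shapes, not the study's recordings.

```python
# Toy temporal-generalization analysis: a train-time x test-time accuracy
# matrix for decoding which adjective was shown in each trial.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
meg = rng.standard_normal((160, 102, 40))   # trials x sensors x time samples
adjective_label = rng.integers(0, 2, 160)   # which of two adjectives was shown

train, test = slice(0, 120), slice(120, 160)
n_times = meg.shape[2]
gen = np.zeros((n_times, n_times))          # train-time x test-time accuracies
for t_train in range(n_times):
    clf = LogisticRegression(max_iter=1000)
    clf.fit(meg[train, :, t_train], adjective_label[train])
    for t_test in range(n_times):
        gen[t_train, t_test] = clf.score(meg[test, :, t_test],
                                         adjective_label[test])

# Off-diagonal hot spots (trained during the adjective, accurate after the
# phrase) would indicate reactivation of the earlier representation.
print("best cross-time accuracy:", gen.max())
```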


International Conference on Artificial Intelligence | 2015

Nonparametric independence testing for small sample sizes

Aaditya Ramdas; Leila Wehbe


Archive | 2014

Stein Shrinkage for Cross-Covariance Operators and Kernel Independence Testing

Aaditya Ramdas; Leila Wehbe

Collaboration


Dive into Leila Wehbe's collaborations.

Top Co-Authors

Alona Fyshe (Carnegie Mellon University)
Tom M. Mitchell (Carnegie Mellon University)
Brian Murphy (Queen's University Belfast)
Aaditya Ramdas (Carnegie Mellon University)
Gustavo Sudre (National Institutes of Health)
Georg Langs (Medical University of Vienna)