Publication


Featured research published by Alona Fyshe.


BMC Public Health | 2013

FRED (A Framework for Reconstructing Epidemic Dynamics): an open-source software system for modeling infectious diseases and control strategies using census-based populations

John J. Grefenstette; Shawn T. Brown; Roni Rosenfeld; Jay V. DePasse; Nathan Stone; Phillip Cooley; William D. Wheaton; Alona Fyshe; David Galloway; Anuroop Sriram; Hasan Guclu; Thomas Abraham; Donald S. Burke

Background: Mathematical and computational models provide valuable tools that help public health planners to evaluate competing health interventions, especially for novel circumstances that cannot be examined through observational or controlled studies, such as pandemic influenza. The spread of diseases like influenza depends on the mixing patterns within the population, and these mixing patterns depend in part on local factors including the spatial distribution and age structure of the population, the distribution of size and composition of households, employment status and commuting patterns of adults, and the size and age structure of schools. Finally, public health planners must take into account the health behavior patterns of the population, patterns that often vary according to socioeconomic factors such as race, household income, and education levels.

Results: FRED (a Framework for Reconstructing Epidemic Dynamics) is a freely available open-source agent-based modeling system based closely on models used in previously published studies of pandemic influenza. This version of FRED uses open-access census-based synthetic populations that capture the demographic and geographic heterogeneities of the population, including realistic household, school, and workplace social networks. FRED epidemic models are currently available for every state and county in the United States, and for selected international locations.

Conclusions: State and county public health planners can use FRED to explore the effects of possible influenza epidemics in specific geographic regions of interest and to help evaluate the effect of interventions such as vaccination programs and school closure policies. FRED is available under a free open source license in order to contribute to the development of better modeling tools and to encourage open discussion of modeling tools being used to evaluate public health policies. We also welcome participation by other researchers in the further development of FRED.
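
To make the agent-based idea concrete, here is a minimal toy sketch of within-group disease spread in the spirit of FRED, not drawn from its code: agents mix in households and schools, and all population sizes and transmission parameters below are invented for illustration.

    import random

    random.seed(0)

    N_HOUSEHOLDS, N_SCHOOLS = 300, 5     # toy population, not census-based
    TRANSMISSION_PROB = 0.05             # per-contact, per-day (assumed value)
    INFECTIOUS_DAYS = 4

    class Agent:
        def __init__(self, household, school):
            self.household, self.school = household, school
            self.state, self.days_infected = "S", 0

    # Three agents per household, each also assigned to a school/workplace.
    agents = [Agent(h, random.randrange(N_SCHOOLS))
              for h in range(N_HOUSEHOLDS) for _ in range(3)]
    agents[0].state = "I"                # seed one infection

    def spread(groups):
        # One day of within-group mixing: infectious members expose the rest.
        for members in groups.values():
            infectious = [a for a in members if a.state == "I"]
            for a in members:
                if a.state == "S" and any(random.random() < TRANSMISSION_PROB
                                          for _ in infectious):
                    a.state = "E"        # exposed; infectious from tomorrow

    for day in range(60):
        households, schools = {}, {}
        for a in agents:
            households.setdefault(a.household, []).append(a)
            schools.setdefault(a.school, []).append(a)
        spread(households)
        spread(schools)
        for a in agents:
            if a.state == "E":
                a.state = "I"
            elif a.state == "I":
                a.days_infected += 1
                if a.days_infected >= INFECTIOUS_DAYS:
                    a.state = "R"
        if day % 10 == 0:
            print(day, sum(a.state == "I" for a in agents))

Swapping the invented constants for census-derived households, schools, and workplaces is what turns a toy like this into a planning tool of FRED's kind.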


PLOS ONE | 2014

Simultaneously uncovering the patterns of brain regions involved in different story reading subprocesses

Leila Wehbe; Brian Murphy; Partha Pratim Talukdar; Alona Fyshe; Aaditya Ramdas; Tom M. Mitchell

Story understanding involves many perceptual and cognitive subprocesses, from perceiving individual words, to parsing sentences, to understanding the relationships among the story characters. We present an integrated computational model of reading that incorporates these and additional subprocesses, simultaneously discovering their fMRI signatures. Our model predicts the fMRI activity associated with reading arbitrary text passages, well enough to distinguish which of two story segments is being read with 74% accuracy. This approach is the first to simultaneously track diverse reading subprocesses during complex story processing and predict the detailed neural representation of diverse story features, ranging from visual word properties to the mention of different story characters and different actions they perform. We construct brain representation maps that replicate many results from a wide range of classical studies, each focusing on one aspect of language processing, and offer new insights into which types of information are processed by the different areas involved in language processing. Additionally, this approach is promising for studying individual differences: it can be used to create single-subject maps that may potentially be used to measure reading comprehension and diagnose reading disorders.
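
The evaluation described above can be pictured with a small synthetic sketch (this is not the authors' pipeline; the ridge regularizer and all dimensions are arbitrary): fit a linear map from story features to voxel activity, then pick whichever of two candidate segments yields predicted activity closer to the observed scan.

    import numpy as np

    rng = np.random.default_rng(0)
    n_train, n_feat, n_vox = 200, 50, 500
    X_train = rng.standard_normal((n_train, n_feat))   # story features per time point
    W_true = rng.standard_normal((n_feat, n_vox))
    Y_train = X_train @ W_true + 0.5 * rng.standard_normal((n_train, n_vox))

    # Ridge regression in closed form: one weight vector per voxel.
    lam = 1.0
    W_hat = np.linalg.solve(X_train.T @ X_train + lam * np.eye(n_feat),
                            X_train.T @ Y_train)

    def correlate(a, b):
        a, b = a - a.mean(), b - b.mean()
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

    # Which of two story segments produced the observed scan? (It was seg1.)
    seg1, seg2 = rng.standard_normal(n_feat), rng.standard_normal(n_feat)
    observed = seg1 @ W_true + 0.5 * rng.standard_normal(n_vox)
    guess = 1 if correlate(seg1 @ W_hat, observed) > correlate(seg2 @ W_hat, observed) else 2
    print("classifier picks segment", guess)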


NeuroImage | 2012

Tracking neural coding of perceptual and semantic features of concrete nouns

Gustavo Sudre; Dean A. Pomerleau; Mark Palatucci; Leila Wehbe; Alona Fyshe; Riitta Salmelin; Tom M. Mitchell

We present a methodological approach employing magnetoencephalography (MEG) and machine learning techniques to investigate the flow of perceptual and semantic information decodable from neural activity in the half second during which the brain comprehends the meaning of a concrete noun. Important information about the cortical location of neural activity related to the representation of nouns in the human brain has been revealed by past studies using fMRI. However, the temporal sequence of processing from sensory input to concept comprehension remains unclear, in part because of the poor time resolution provided by fMRI. In this study, subjects answered 20 questions (e.g. is it alive?) about the properties of 60 different nouns prompted by simultaneous presentation of a pictured item and its written name. Our results show that the neural activity observed with MEG encodes a variety of perceptual and semantic features of stimuli at different times relative to stimulus onset, and in different cortical locations. By decoding these features, our MEG-based classifier was able to reliably distinguish between two different concrete nouns that it had never seen before. The results demonstrate that there are clear differences between the time course of the magnitude of MEG activity and that of decodable semantic information. Perceptual features were decoded from MEG activity earlier in time than semantic features, and features related to animacy, size, and manipulability were decoded consistently across subjects. We also observed that regions commonly associated with semantic processing in the fMRI literature may not show high decoding results in MEG. We believe that this type of approach and the accompanying machine learning methods can form the basis for further modeling of the flow of neural information during language processing and a variety of other cognitive processes.
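
The zero-shot flavor of this result can be sketched as follows (synthetic numbers, not the study's MEG pipeline): learn a regression from sensor features to the 20 question-answers, then label an unseen noun's scan by whichever of the two candidate nouns' known answer vectors lies closer to the prediction.

    import numpy as np

    rng = np.random.default_rng(1)
    n_nouns, n_meg, n_q = 60, 300, 20
    Q = rng.uniform(0, 1, (n_nouns, n_q))   # answers to "is it alive?"-style questions
    B = rng.standard_normal((n_q, n_meg))
    MEG = Q @ B + 0.3 * rng.standard_normal((n_nouns, n_meg))

    held_out = [0, 1]                        # two nouns never seen in training
    train = list(range(2, n_nouns))

    # Least-squares map from MEG sensor space to semantic-question space.
    W, *_ = np.linalg.lstsq(MEG[train], Q[train], rcond=None)

    # Classify each held-out scan by its nearest candidate feature vector.
    for i, pred in enumerate(MEG[held_out] @ W):
        dists = [np.linalg.norm(pred - Q[c]) for c in held_out]
        print(f"scan {i} decoded as noun {held_out[int(np.argmin(dists))]}")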


Bioinformatics | 2008

Improving subcellular localization prediction using text classification and the gene ontology

Alona Fyshe; Yifeng Liu; Duane Szafron; Russell Greiner; Paul Lu

Motivation: Each protein performs its functions within some specific locations in a cell. This subcellular location is important for understanding protein function and for facilitating its purification. There are now many computational techniques for predicting location based on sequence analysis and database information from homologs. A few recent techniques use text from biological abstracts: our goal is to improve the prediction accuracy of such text-based techniques. We identify three techniques for improving text-based prediction: a rule for ambiguous abstract removal, a mechanism for using synonyms from the Gene Ontology (GO), and a mechanism for using the GO hierarchy to generalize terms. We show that these three techniques can significantly improve the accuracy of protein subcellular location predictors that use text extracted from PubMed abstracts whose references are recorded in Swiss-Prot.
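
The synonym and generalization mechanisms can be illustrated with a toy sketch (the mini-ontology, token format, and class labels below are invented; the real system uses the full Gene Ontology and Swiss-Prot annotations): canonicalize synonyms to GO identifiers, add ancestor terms as extra features, and train an ordinary bag-of-words classifier.

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB

    # Hypothetical mini-ontology: synonym -> canonical GO id, child -> parent.
    SYNONYMS = {"cell nucleus": "GO_0005634", "nucleus": "GO_0005634",
                "mitochondrion": "GO_0005739", "mitochondria": "GO_0005739"}
    PARENTS = {"GO_0005634": "GO_0043226", "GO_0005739": "GO_0043226"}  # organelle

    def expand(abstract):
        # Append canonical GO terms (and their ancestors) as extra tokens.
        text = abstract.lower()
        for phrase, go_id in SYNONYMS.items():
            if phrase in text:
                text += " " + go_id + " " + PARENTS.get(go_id, "")
        return text

    abstracts = ["Protein X localizes to the cell nucleus.",
                 "We observe Y in the nucleus after treatment.",
                 "Z is imported into mitochondria.",
                 "A mitochondrion-targeting sequence was found in W."]
    labels = ["nuclear", "nuclear", "mitochondrial", "mitochondrial"]

    vec = CountVectorizer()
    X = vec.fit_transform([expand(a) for a in abstracts])
    clf = MultinomialNB().fit(X, labels)
    print(clf.predict(vec.transform([expand("Q accumulates in the nucleus.")])))

Because both "nucleus" and "cell nucleus" map to GO_0005634, and both organelle terms share the ancestor GO_0043226, abstracts with different surface wording end up sharing features.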


Nucleic Acids Research | 2004

PA-GOSUB: a searchable database of model organism protein sequences with their predicted Gene Ontology molecular function and subcellular localization

Paul Lu; Duane Szafron; Russell Greiner; David S. Wishart; Alona Fyshe; Brandon Pearcy; Brett Poulin; Roman Eisner; Danny Ngo; Nicholas Lamb

PA-GOSUB (Proteome Analyst: Gene Ontology Molecular Function and Subcellular Localization) is a publicly available, web-based, searchable and downloadable database that contains the sequences, predicted GO molecular functions and predicted subcellular localizations of more than 107 000 proteins from 10 model organisms (and growing), covering the major kingdoms and phyla for which annotated proteomes exist (http://www.cs.ualberta.ca/~bioinfo/PA/GOSUB). The PA-GOSUB database effectively expands the coverage of subcellular localization and GO function annotations by a significant factor (already over five for subcellular localization, compared with Swiss-Prot v42.7), and more model organisms are being added to PA-GOSUB as their sequenced proteomes become available. PA-GOSUB can be used in three main ways. First, a researcher can browse the pre-computed PA-GOSUB annotations on a per-organism and per-protein basis using annotation-based and text-based filters. Second, a user can perform BLAST searches against the PA-GOSUB database and use the annotations from the homologs as simple predictors for the new sequences. Third, the whole of PA-GOSUB can be downloaded in either FASTA or comma-separated values (CSV) formats.
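
The second usage pattern, BLAST followed by annotation transfer, might look roughly like this (file names, CSV column names, and the identity cutoff are all assumptions; only the blastp flags are standard NCBI BLAST+ options):

    import csv
    import subprocess

    # 1) BLAST a new sequence against a local PA-GOSUB database
    #    (previously built from the FASTA download with makeblastdb).
    #    Tabular output (-outfmt 6): query id, subject id, percent identity, ...
    subprocess.run(["blastp", "-query", "new_protein.fasta", "-db", "pagosub",
                    "-outfmt", "6", "-out", "hits.tsv"], check=True)

    # 2) Load the PA-GOSUB CSV download (column names are hypothetical).
    annotations = {}
    with open("pagosub.csv", newline="") as f:
        for row in csv.DictReader(f):
            annotations[row["protein_id"]] = (row["go_function"], row["localization"])

    # 3) Transfer annotations from the best (first-listed) hit of each query.
    seen = set()
    with open("hits.tsv") as f:
        for line in f:
            query, subject, ident, *_ = line.rstrip("\n").split("\t")
            if query not in seen and float(ident) > 50.0:  # crude identity cutoff
                seen.add(query)
                print(query, "->", annotations.get(subject, ("unknown", "unknown")))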


North American Chapter of the Association for Computational Linguistics (NAACL) | 2015

A Compositional and Interpretable Semantic Space

Alona Fyshe; Leila Wehbe; Partha Pratim Talukdar; Brian Murphy; Tom M. Mitchell

Vector Space Models (VSMs) of Semantics are useful tools for exploring the semantics of single words, and the composition of words to make phrasal meaning. While many methods can estimate the meaning (i.e. vector) of a phrase, few do so in an interpretable way. We introduce a new method (CNNSE) that allows word and phrase vectors to adapt to the notion of composition. Our method learns a VSM that is both tailored to support a chosen semantic composition operation, and whose resulting features have an intuitive interpretation. Interpretability allows for the exploration of phrasal semantics, which we leverage to analyze performance on a behavioral task.
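
A heavily simplified sketch of the underlying idea, interpretable non-negative embeddings checked against an additive notion of composition, is below; it is not the CNNSE algorithm itself, and the co-occurrence data is synthetic.

    import numpy as np
    from sklearn.decomposition import NMF

    rng = np.random.default_rng(2)
    # Rows: words and phrases; columns: context features (synthetic counts).
    X = rng.poisson(2.0, size=(12, 40)).astype(float)
    X[10] = X[0] + X[1]          # pretend row 10 is the phrase "word0 word1"

    model = NMF(n_components=5, init="nndsvda", max_iter=500)
    V = model.fit_transform(X)   # non-negative, roughly sparse embeddings

    composed = V[0] + V[1]       # additive composition of the two word vectors
    phrase = V[10]               # the directly learned phrase vector
    cos = composed @ phrase / (np.linalg.norm(composed) * np.linalg.norm(phrase))
    print(f"cosine(word0 + word1, phrase) = {cos:.3f}")

CNNSE goes further by building the composition constraint into the factorization objective itself, rather than checking it after the fact as this sketch does.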


Annual Meeting of the Association for Computational Linguistics (ACL) | 2014

Interpretable Semantic Vectors from a Joint Model of Brain- and Text-Based Meaning

Alona Fyshe; Partha Pratim Talukdar; Brian Murphy; Tom M. Mitchell

Vector space models (VSMs) represent word meanings as points in a high-dimensional space. VSMs are typically created using large text corpora, and so represent word semantics as observed in text. We present a new algorithm (JNNSE) that can incorporate a measure of semantics not previously used to create VSMs: brain activation data recorded while people read words. The resulting model takes advantage of the complementary strengths and weaknesses of corpus and brain activation data to give a more complete representation of semantics. Evaluations show that the model 1) matches a behavioral measure of semantics more closely, 2) can be used to predict corpus data for unseen words and 3) has predictive power that generalizes across brain imaging technologies and across subjects. We believe that the model is thus a more faithful representation of mental vocabularies.
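
A crude stand-in for the joint-model idea (not the JNNSE algorithm): give words observed in both modalities a shared latent vector by factorizing the concatenated corpus-and-brain matrix, then use the corpus-side loadings to predict corpus features for a word seen only in the brain data. All matrices below are synthetic.

    import numpy as np

    rng = np.random.default_rng(3)
    n_words, n_corpus, n_brain, k = 40, 60, 30, 5
    Z = rng.standard_normal((n_words, k))           # true latent semantics
    C = Z @ rng.standard_normal((k, n_corpus))      # corpus-derived features
    B = Z @ rng.standard_normal((k, n_brain))       # brain activation features

    both = slice(0, 30)                             # words with both views
    # Shared latent space via SVD of the concatenated (corpus | brain) matrix.
    _, _, Vt = np.linalg.svd(np.hstack([C[both], B[both]]), full_matrices=False)
    D_corpus, D_brain = Vt[:k, :n_corpus], Vt[:k, n_corpus:]

    # Held-out word: infer its latent vector from brain data alone, then
    # predict its corpus features through the shared loadings.
    w = 35
    z_hat, *_ = np.linalg.lstsq(D_brain.T, B[w], rcond=None)
    r = np.corrcoef(z_hat @ D_corpus, C[w])[0, 1]
    print(f"correlation with true corpus features: {r:.2f}")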


Archive | 2017

Decoding Language from the Brain

Brian Murphy; Leila Wehbe; Alona Fyshe

In this paper we review recent computational approaches to the study of language with neuroimaging data. Recordings of brain activity have long played a central role in furthering our understanding of how human language works, with researchers usually choosing to focus tightly on one aspect of the language system. This choice is driven both by the complexity of that system, and by the noise and complexity in neuroimaging data itself. State-of-the-art computational methods can help in two respects: in teasing more information from recordings of brain activity, and by allowing us to test broader and more articulated theories and detailed representations of language tasks. In this chapter, we first set the scene with a succinct review of neuroimaging techniques and what they have taught us about language processing in the brain. We then describe how recent work has used machine learning methods with brain data and computational models of language to investigate how words and phrases are processed. We finish by introducing emerging naturalistic paradigms that combine authentic language tasks (e.g., reading or listening to a story) with rich models of lexical, sentential, and suprasentential representations to enable an all-round view of language processing.

Introduction: The study of language, like other cognitive sciences, requires us to engage in a kind of mind reading. We use a variety of methods in an attempt to access the hidden representations and processes that allow humans to converse. In formal linguistics, intuitive judgments by the theorist are used as primary evidence, an approach that brings well-understood dangers of bias (Gibson and Fedorenko, 2010) but in practice can work well (Sprouse et al., 2013). Aggregating judgments over groups of informants is widely used in cognitive and computational linguistics, through both experts in controlled environments and crowdsourcing of naive annotators (Snow et al., 2008). Experimental psycholinguists have used a range of methods that do not rely on intuition, judgments, or subjective reflection, such as the speed of self-paced reading, or the order and timing of gaze events as recorded with eye-tracking technologies (Rayner, 1998). Brain-recording technologies offer a different kind of evidence, as they are the closest we can get empirically to the object of interest: human cognition. Despite the technical challenges involved, especially the complexity of the recorded signals and the extraneous noise that they contain, brain imaging has a decades-long history in psycholinguistics.


Empirical Methods in Natural Language Processing (EMNLP) | 2016

BrainBench: A Brain-Image Test Suite for Distributional Semantic Models

Haoyan Xu; Brian Murphy; Alona Fyshe

The brain is the locus of our language ability, and so brain images can be used to ground linguistic theories. Here we introduce BrainBench, a lightweight system for testing distributional models of word semantics. We compare the performance of several models, and show that the performance on brain-image tasks differs from the performance on behavioral tasks. We release our benchmark test as part of a web service.
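
A "2 vs 2"-style test of the kind such benchmarks use can be sketched as follows (synthetic vectors, not the released BrainBench data, and the real tasks differ in detail): for each pair of words, the correct pairing of model-similarity profiles with brain-similarity profiles should correlate better than the swapped pairing.

    import numpy as np

    rng = np.random.default_rng(4)
    n_words, d_model, d_brain = 20, 10, 50
    M = rng.standard_normal((n_words, d_model))              # model word vectors
    brain = (M @ rng.standard_normal((d_model, d_brain))
             + 0.5 * rng.standard_normal((n_words, d_brain)))  # mock brain images

    def corr(a, b):
        return np.corrcoef(a, b)[0, 1]

    def profile(vecs, x, rest):
        # Similarity of word x to every other word, within one space.
        return [corr(vecs[x], vecs[k]) for k in rest]

    correct = 0
    pairs = [(i, j) for i in range(n_words) for j in range(i + 1, n_words)]
    for i, j in pairs:
        rest = [k for k in range(n_words) if k not in (i, j)]
        right = (corr(profile(M, i, rest), profile(brain, i, rest))
                 + corr(profile(M, j, rest), profile(brain, j, rest)))
        wrong = (corr(profile(M, i, rest), profile(brain, j, rest))
                 + corr(profile(M, j, rest), profile(brain, i, rest)))
        correct += right > wrong
    print(f"2 vs 2 accuracy: {correct / len(pairs):.2f}")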


North American Chapter of the Association for Computational Linguistics (NAACL) | 2006

Term Generalization and Synonym Resolution for Biological Abstracts: Using the Gene Ontology for Subcellular Localization Prediction

Alona Fyshe; Duane Szafron

The field of molecular biology is growing at an astounding rate, and research findings are being deposited into public databases such as Swiss-Prot. Many of the over 200,000 protein entries in Swiss-Prot 49.1 lack annotations such as subcellular localization or function, but the vast majority have references to journal abstracts describing related research. These abstracts represent a huge amount of information that could be used to generate annotations for proteins automatically. Training classifiers to perform text categorization on abstracts is one way to accomplish this task. We present a method for improving text classification for biological journal abstracts by generating additional text features using the knowledge represented in a biological concept hierarchy (the Gene Ontology). The structure of the ontology, as well as the synonyms recorded in it, is leveraged by our simple technique to significantly improve the F-measure of subcellular localization text classifiers by as much as 0.078, and we achieve F-measures as high as 0.935.

Collaboration


Dive into Alona Fyshe's collaborations.

Top Co-Authors

Tom M. Mitchell, Carnegie Mellon University
Brian Murphy, Queen's University Belfast
Paul Lu, University of Alberta
Leila Wehbe, Carnegie Mellon University