Javid Ebrahimi
University of Oregon
Publications
Featured research published by Javid Ebrahimi.
North American Chapter of the Association for Computational Linguistics | 2015
Javid Ebrahimi; Dejing Dou
We present a novel approach for relation classification, using a recursive neural network (RNN), based on the shortest path between two entities in a dependency graph. Previous work on RNNs has relied on constituency-based parsing, because phrasal nodes in a parse tree can capture compositionality in a sentence. Compared with constituency-based parse trees, dependency graphs can represent relations more compactly. This is particularly important in sentences with distant entities, where the parse tree spans words that are not relevant to the relation. In such cases, the RNN cannot be trained effectively in a timely manner. However, due to the lack of phrasal nodes in dependency graphs, applying an RNN is not straightforward. To tackle this problem, we utilize dependency constituent units called chains. Our experiments on two relation classification datasets show that the chain-based RNN yields a shallower network, which runs considerably faster and achieves better classification results.
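The core idea in the abstract, composing word representations along the chain of words on the shortest dependency path between the two entities, can be sketched as follows. This is a minimal illustrative sketch, not the paper's exact parameterization: the toy path words, the embedding size, and the single shared composition matrix are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4  # toy embedding size

# Toy embeddings for the words on the shortest dependency path
# between entity 1 and entity 2 (hypothetical example sentence).
path_words = ["acquired", "by", "company"]
embed = {w: rng.standard_normal(d) for w in path_words}

W = rng.standard_normal((d, 2 * d)) * 0.1  # shared composition matrix
b = np.zeros(d)

def compose(h, x):
    """Merge the running chain representation h with the next word x."""
    return np.tanh(W @ np.concatenate([h, x]) + b)

# Fold the chain left to right into one fixed-size vector.
h = embed[path_words[0]]
for w in path_words[1:]:
    h = compose(h, embed[w])

# h would be fed to a classifier; the chain is as deep as the path
# is long, hence shallower than a full constituency parse tree.
print(h.shape)  # (4,)
```

Because the network depth equals the path length rather than the parse-tree depth, a chain composition of this shape is cheaper to train on sentences with distant entities.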
Empirical Methods in Natural Language Processing | 2016
Javid Ebrahimi; Dejing Dou; Daniel Lowd
Supervised stance classification, in such domains as Congressional debates and online forums, has been a topic of interest in the past decade. Approaches have evolved from text classification to structured output prediction, including collective classification and sequence labeling. In this work, we investigate collective classification of stances on Twitter, using hinge-loss Markov random fields (HLMRFs). Given the graph of all posts, users, and their relationships, we constrain the predicted post labels and latent user labels to correspond with the network structure. We focus on a weakly supervised setting, in which only a small set of hashtags or phrases is labeled. Using our relational approach, we are able to go beyond the stance-indicative patterns and harvest more stance-indicative tweets, which can also be used to train any linear text classifier when the network structure is not available or is costly.
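The relational idea, letting a few hashtag-labeled posts inform the labels of connected posts and users, can be illustrated with a much simpler stand-in. The paper uses hinge-loss Markov random fields; the sketch below substitutes plain iterative label propagation on a toy post–user graph purely to show weak supervision flowing over network structure. All nodes and edges are invented toy data.

```python
import numpy as np

# Nodes 0-2 are posts, nodes 3-4 are users; edges link posts to
# their authors (toy data, not from the paper).
edges = [(0, 3), (1, 3), (2, 4), (1, 4)]
n = 5
A = np.zeros((n, n))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0

labels = np.zeros(n)   # 0 = unknown stance
labels[0] = 1.0        # post 0 carries a stance-indicative hashtag
seed = labels.copy()

deg = A.sum(axis=1, keepdims=True).clip(min=1)
for _ in range(50):
    # Each node takes the average label of its neighbors...
    labels = (A @ labels.reshape(-1, 1) / deg).ravel()
    # ...but the weakly labeled seed posts stay clamped.
    labels[seed != 0] = seed[seed != 0]

print(np.round(labels, 2))
```

Posts 1 and 2 receive nonzero stance scores only through shared authorship, which mirrors how the relational model harvests additional stance-indicative tweets beyond the seed patterns.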
ACM Transactions on Intelligent Systems and Technology | 2016
NhatHai Phan; Javid Ebrahimi; David Kil; Brigitte Piniewski; Dejing Dou
Modeling physical activity propagation, such as activity level and intensity, is key to preventing obesity from cascading through communities, and to helping spread wellness and healthy behavior in a social network. However, there have not been enough scientific and quantitative studies to elucidate how social communication may deliver physical activity interventions. In this work, we introduce a novel model named Topic-aware Community-level Physical Activity Propagation with Temporal Dynamics (TCPT) to analyze physical activity propagation and social influence at different granularities (i.e., individual level and community level). Given a social network, the TCPT model first integrates the correlations between the content of social communication, social influences, and temporal dynamics. Then, a hierarchical approach is utilized to detect a set of communities and the reciprocal influence strength of their physical activities. The experimental evaluation shows not only the effectiveness of our approach but also the correlation of the detected communities with various health outcome measures. Our promising results pave the way for knowledge discovery in health social networks.
Conference on Information and Knowledge Management | 2016
Javid Ebrahimi; Dejing Dou
Distributed word representations are able to capture syntactic and semantic regularities in text. In this paper, we present a word representation scheme that incorporates authorship information. While maintaining similarity among related words in the induced distributed space, our word vectors can also be used effectively for some text classification tasks. We build on a log-bilinear document model (lbDm), which extracts document features and word vectors based on word co-occurrence counts. First, we propose a log-bilinear author model (lbAm), which contains an additional author matrix. We show that by directly learning author feature vectors, as opposed to document vectors, we can learn better word representations for the authorship attribution task. Furthermore, authorship information has been found to be useful for sentiment classification. We enrich the author model with a sentiment tensor, and demonstrate the effectiveness of this hybrid model (lbHm) through our experiments on a movie-review classification dataset.
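The bilinear scoring idea behind the author model, an author matrix mapping author features into word space, where each vocabulary word is scored by a dot product, can be sketched as below. Shapes, names, and the absence of training are all illustrative assumptions; this shows only the form of the score, not the paper's exact model.

```python
import numpy as np

rng = np.random.default_rng(1)
d_author, d_word, vocab = 3, 4, 5   # toy dimensions

author_feats = rng.standard_normal(d_author)       # learned per author
A = rng.standard_normal((d_word, d_author))        # author matrix
word_vecs = rng.standard_normal((vocab, d_word))   # word representations

# Map the author into word space, then score every vocabulary word
# by its dot product with that predicted vector.
ctx = A @ author_feats
scores = word_vecs @ ctx

# Softmax turns the log-bilinear scores into a distribution over words.
probs = np.exp(scores) / np.exp(scores).sum()
print(probs.shape)  # (5,)
```

Learning `author_feats` directly, rather than a vector per document, is what lets the model share statistics across all documents by the same author.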
IEEE Intelligent Systems | 2016
NhatHai Phan; Javid Ebrahimi; David Kil; Brigitte Piniewski; Dejing Dou
arXiv: Computation and Language | 2017
Javid Ebrahimi; Anyi Rao; Daniel Lowd; Dejing Dou
International Conference on Computational Linguistics | 2016
Javid Ebrahimi; Dejing Dou; Daniel Lowd
International Conference on Digital Health | 2016
Javid Ebrahimi; NhatHai Phan; Dejing Dou; Brigitte Piniewski; David Kil
Meeting of the Association for Computational Linguistics | 2018
Javid Ebrahimi; Anyi Rao; Daniel Lowd; Dejing Dou
International Conference on Computational Linguistics | 2018
Javid Ebrahimi; Daniel Lowd; Dejing Dou