Samira Shaikh
State University of New York System
Publications
Featured research published by Samira Shaikh.
Natural Language Engineering | 2013
George Aaron Broadwell; Jennifer Stromer-Galley; Tomek Strzalkowski; Samira Shaikh; Sarah M. Taylor; Ting Liu; Umit Boz; Alana Elia; Laura Jiao; Nick Webb
In this paper, we describe a novel approach to computational modeling and understanding of social and cultural phenomena in multi-party dialogues. We developed a two-tier approach in which we first detect and classify certain sociolinguistic behaviors, including topic control, disagreement, and involvement; these serve as first-order models from which the presence of higher-level social roles, such as leadership, may be inferred.
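The abstract does not give the model details, but the two-tier idea can be illustrated with a minimal sketch: a first tier produces per-speaker scores for topic control, disagreement, and involvement, and a second tier combines them into a leadership estimate. The score fields and weights below are placeholders, not the paper's classifiers.

```python
from dataclasses import dataclass

@dataclass
class BehaviorScores:
    """First-tier outputs for one speaker (placeholder values in [0, 1])."""
    topic_control: float
    disagreement: float
    involvement: float

def leadership_score(b: BehaviorScores) -> float:
    """Second tier: combine first-order behavior scores into a leadership
    estimate. The weights are illustrative, not the paper's model."""
    return 0.5 * b.topic_control + 0.3 * b.involvement + 0.2 * b.disagreement

speakers = {
    "A": BehaviorScores(topic_control=0.8, disagreement=0.4, involvement=0.9),
    "B": BehaviorScores(topic_control=0.2, disagreement=0.1, involvement=0.5),
}
ranked = sorted(speakers, key=lambda s: leadership_score(speakers[s]), reverse=True)
print("Inferred leader candidate:", ranked[0])
```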
International Conference on Social Computing | 2013
George Aaron Broadwell; Umit Boz; Ignacio Cases; Tomek Strzalkowski; Sarah M. Taylor; Samira Shaikh; Ting Liu; Kit W. Cho; Nick Webb
The reliable automated identification of metaphors remains a challenge in metaphor research due to ambiguity between the semantic and contextual interpretation of individual lexical items. In this article, we describe a novel approach to metaphor identification based on three intersecting methods: imageability, topic chaining, and semantic clustering. Our hypothesis is that metaphors are likely to use highly imageable words that do not generally have a topical or semantic association with the surrounding context. Our method is thus the following: (1) identify the highly imageable portions of a paragraph, using psycholinguistic measures of imageability; (2) exclude imageability peaks that are part of a topic chain; and (3) exclude imageability peaks that show a semantic relationship to the main topics. We are currently working towards fully automating this method for a number of languages.
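A toy version of the three-step filter, assuming pre-computed imageability norms and topic information; the lexicons here are invented stand-ins for the psycholinguistic resources the paper relies on.

```python
# Placeholder resources for a single toy paragraph about the economy.
IMAGEABILITY = {"road": 0.9, "economy": 0.3, "collapsed": 0.5, "bridge": 0.9,
                "budget": 0.4, "burning": 0.8}
TOPIC_CHAIN = {"economy", "budget"}          # words tied to the main topic
RELATED_TO_TOPIC = {"collapsed"}             # semantically linked to the topic

def candidate_metaphors(tokens, threshold=0.7):
    peaks = [t for t in tokens if IMAGEABILITY.get(t, 0.0) >= threshold]  # step 1
    peaks = [t for t in peaks if t not in TOPIC_CHAIN]                    # step 2
    peaks = [t for t in peaks if t not in RELATED_TO_TOPIC]               # step 3
    return peaks

print(candidate_metaphors("the economy is a burning bridge".split()))
# -> ['burning', 'bridge']: imageable words with no topical tie to 'economy'
```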
Simulation & Gaming | 2014
Rosa Mikeal Martey; Kate Kenski; James E. Folkestad; Elana B. Gordis; Adrienne Shaw; Jennifer Stromer-Galley; Ben Clegg; Hui Zhang; Nissim Kaufman; Ari N. Rabkin; Samira Shaikh; Tomek Strzalkowski
Background. Engagement has been identified as a crucial component of learning in games research. However, the conceptualization and operationalization of engagement vary widely in the literature. Many valuable approaches illuminate ways in which presence, flow, arousal, participation, and other concepts constitute or contribute to engagement. However, few studies examine multiple conceptualizations of engagement in the same project. Method. This article discusses the results of two experiments that measure engagement in five different ways: survey self-report, content analyses of player videos, electro-dermal activity, mouse movements, and game click logs. We examine the relationships among these measures and assess how they are affected by the technical characteristics of a 30-minute, custom-built, educational game: use of a customized character, level of narrative complexity, and level of art complexity. Results. We found that the five measures of engagement correlated in limited ways, and that they revealed substantially different relationships with game characteristics. We conclude that engagement as a construct is more complex than is captured in any of these measures individually and that using multiple methods to assess engagement can illuminate aspects of engagement not detectable by a single method of measurement.
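As a rough illustration of how the relationships among several engagement measures could be examined, the sketch below computes pairwise Pearson correlations over made-up per-participant values; it is not the paper's analysis pipeline, and the measure names and numbers are invented.

```python
from itertools import combinations
from math import sqrt

def pearson(x, y):
    """Pearson correlation coefficient for two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# One row per participant; the values are made up for illustration.
measures = {
    "self_report":  [4.1, 3.2, 5.0, 2.8, 4.4],
    "video_coding": [3.8, 3.5, 4.6, 3.0, 4.1],
    "eda":          [0.2, 0.7, 0.3, 0.9, 0.4],
    "mouse":        [120, 340, 150, 410, 200],
    "click_log":    [55, 30, 70, 25, 60],
}
for a, b in combinations(measures, 2):
    print(f"{a} vs {b}: r = {pearson(measures[a], measures[b]):+.2f}")
```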
Archive | 2017
Samira Shaikh; Eliza Barach; Yousri Marzouki
We describe the emergence of an online community from naturally occurring social media data. Our method uses patterns of word choice on an online social platform to characterize how a community forms in response to adverse events such as a terrorist attack. Our focus is English Twitter messages posted after the Charlie Hebdo terrorist attack in Paris in January 2015. We examine the text to find lexical variation associated with measures of valence, arousal, and concreteness. We also examine the patterns of language use of the most prolific Twitter users (top 2% by number of tweets) and the most frequent tweets in our collection (top 2% by number of retweets). Differences between users and tweets based on frequency reveal how lexical variation in tweeting behavior reflects the evolution of a community reacting to crisis events on an international scale.
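A minimal sketch, assuming hypothetical word norms and tweet metadata, of two ingredients the abstract mentions: selecting the top 2% of tweets by frequency (here, retweet count) and averaging a lexical norm such as valence over the resulting subset.

```python
# Hypothetical per-word norms; real work would use published valence, arousal,
# and concreteness norm lists rather than this toy dictionary.
NORMS = {"attack":   {"valence": 0.1, "arousal": 0.9, "concreteness": 0.6},
         "together": {"valence": 0.8, "arousal": 0.4, "concreteness": 0.3},
         "pray":     {"valence": 0.7, "arousal": 0.5, "concreteness": 0.4}}

def mean_norm(tweets, dimension):
    """Average a norm dimension over all lexicon words in a set of tweets."""
    scores = [NORMS[w][dimension]
              for t in tweets for w in t["text"].lower().split() if w in NORMS]
    return sum(scores) / len(scores) if scores else float("nan")

def top_fraction(tweets, key, fraction=0.02):
    """Keep the top `fraction` of tweets by `key` (e.g., retweet count)."""
    ranked = sorted(tweets, key=lambda t: t[key], reverse=True)
    return ranked[:max(1, int(len(ranked) * fraction))]

tweets = [{"user": "u1", "text": "Pray for Paris, together we stand", "retweets": 900},
          {"user": "u2", "text": "Horrible attack tonight", "retweets": 12}]
most_retweeted = top_fraction(tweets, "retweets")
print("valence of top tweets:", mean_norm(most_retweeted, "valence"))
```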
Proceedings of the Second Workshop on Metaphor in NLP | 2014
Tomek Strzalkowski; Samira Shaikh; Kit W. Cho; George Aaron Broadwell; Sarah M. Taylor; Boris Yamrom; Ting Liu; Ignacio Cases; Yuliya Peshkova; Kyle Elliot
This article describes a novel approach to automated determination of affect associated with metaphorical language. Affect in language is understood to mean the attitude toward a topic that a writer attempts to convey to the reader by using a particular metaphor. This affect, which we will classify as positive, negative or neutral with various degrees of intensity, may arise from the target of the metaphor, from the choice of words used to describe it, or from other elements in its immediate linguistic context. We attempt to capture all these contributing elements in an Affect Calculus and demonstrate experimentally that the resulting method can accurately approximate human judgment. The work reported here is part of a larger effort to develop a highly accurate system for identifying, classifying, and comparing metaphors occurring in large volumes of text across four different languages: English, Spanish, Russian, and Farsi.
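The Affect Calculus itself is not spelled out in the abstract; the sketch below only illustrates the general idea of combining signed affect contributions from the metaphor target, the describing words, and the surrounding context into a polarity label with an intensity. The weights and threshold are placeholders, not the paper's formulation.

```python
def combined_affect(target_affect, word_affect, context_affect,
                    weights=(0.5, 0.3, 0.2)):
    """Combine signed affect contributions (-1..+1) from the metaphor target,
    the describing words, and the surrounding context into one score.
    The weighting scheme is a placeholder, not the paper's Affect Calculus."""
    score = sum(w * a for w, a in
                zip(weights, (target_affect, word_affect, context_affect)))
    if score > 0.1:
        label = "positive"
    elif score < -0.1:
        label = "negative"
    else:
        label = "neutral"
    return label, abs(score)   # polarity plus a rough intensity

# "The tax burden is crushing small businesses": negative target and wording.
print(combined_affect(target_affect=-0.4, word_affect=-0.8, context_affect=-0.2))
```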
Joint Conference on Lexical and Computational Semantics | 2015
Vinodkumar Prabhakaran; Tomas By; Julia Hirschberg; Owen Rambow; Samira Shaikh; Tomek Strzalkowski; Jennifer Tracey; Michael Arrigo; Rupayan Basu; Micah Clark; Adam Dalton; Mona T. Diab; Louise Guthrie; Anna Prokofieva; Stephanie M. Strassel; Gregory Werner; Yorick Wilks; Janyce Wiebe
The terms “belief” and “factuality” both refer to the intention of the writer to present the propositional content of an utterance as firmly believed by the writer, not firmly believed, or having some other status. This paper presents an ongoing annotation effort and an associated evaluation.
IEEE International Conference on Semantic Computing | 2012
Sarah M. Taylor; Ting Liu; Samira Shaikh; Tomek Strzalkowski; Aaron Broadwell; Jennifer Stromer-Galley; Umit Boz; Xiaoai Ren; Jingsi Wu; Feifei Zhang
Recent advances in the automated analysis of online chat data allow us to draw conclusions about social behavior in small groups, such as leadership, that were previously possible only through manual observation and analysis. We have applied such methods to comparable English- and Chinese-language data, defined a new language use called Tension Focus, and demonstrated its different effects in these two languages.
International Conference on Applied Human Factors and Ergonomics | 2018
Eliza Barach; Samira Shaikh; Vidhushini Srinivasan
We analyze naturally occurring social media data derived from Twitter messages posted over a 24-hour period in immediate reaction to the Paris terrorist attacks in November 2015. We separately examine patterns for tweets with first-person singular pronouns (I) and first-person plural pronouns (WE): the corresponding variations in valence, arousal, the proportion of words in various LIWC categories, and the diversity of word choices within those categories. Negatively valenced word choices revealed greater mean differences between I and WE than did positively valenced words. A novel finding was that tweets with I exhibited a more uniform distribution of word choices and greater linguistic alignment than tweets with WE, for most of the LIWC categories and for both positively and negatively valenced word choices. The greater diversity differences associated with pronoun choice when valence is negative than when it is positive suggest less self-disclosure when tweeting with first-person singular than with first-person plural pronouns.
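A minimal sketch of the I/WE contrast, assuming simple pronoun matching and a type-token ratio as the diversity measure; the actual study uses LIWC categories and valence/arousal norms that are not reproduced here.

```python
import re

I_PRONOUNS = {"i", "me", "my", "mine"}        # first-person singular
WE_PRONOUNS = {"we", "us", "our", "ours"}     # first-person plural

def pronoun_group(tweet):
    """Assign a tweet to the I or WE group, or neither if mixed/absent."""
    words = set(re.findall(r"[a-z']+", tweet.lower()))
    if words & I_PRONOUNS and not words & WE_PRONOUNS:
        return "I"
    if words & WE_PRONOUNS and not words & I_PRONOUNS:
        return "WE"
    return None

def type_token_ratio(tweets):
    """Crude lexical diversity: distinct word types over total tokens."""
    tokens = [w for t in tweets for w in re.findall(r"[a-z']+", t.lower())]
    return len(set(tokens)) / len(tokens) if tokens else float("nan")

tweets = ["I am scared and I am angry", "We stand together, we will not be afraid"]
groups = {"I": [], "WE": []}
for t in tweets:
    g = pronoun_group(t)
    if g:
        groups[g].append(t)
for g, ts in groups.items():
    print(g, "diversity:", round(type_token_ratio(ts), 2))
```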
International Conference on Applied Human Factors and Ergonomics | 2018
Sara M. Levens; Omar ElTayeby; Bradley Aleshire; Sagar Nandu; Ryan Wesslen; Tiffany Gallicano; Samira Shaikh
This study presents the Social Media Cognitive Processing model, which explains and predicts the depth of processing on social media based on three classic concepts from the offline literature on cognitive processing: self-generation, psychological distance, and self-reference. Together, these three dimensions have considerable explanatory power in predicting the depth of processing a receiver will engage in when responding to a sender's message. Moreover, the model can be used to explain and predict the direction and degree of information proliferation, and it can be applied in a variety of contexts (e.g., identifying influencers who can persuade others about the merits of vaccination, dispel fake news, or spread political messages). We developed the model in the context of Brexit tweets.
International Conference on Applied Human Factors and Ergonomics | 2017
Samira Shaikh; Prasanna Lalingkar; Eliza Barach
We analyze language and emoticon use on a social media platform to describe large-scale human reaction to a crisis event. We focus on a targeted corpus of 2 million tweets, defined by a set of hashtags, posted on the social media platform Twitter. We analyze these data within the framework of Construal Level Theory and compute the lexical diversity and concreteness values of words across subsets of the data that differ in geographical distance from the event. We find that word count and lexical variation among concrete words (but not average concreteness) decreased with increasing geographical distance. In addition, we investigate non-verbal signals in the form of emoticons and emojis in these subsets. Overall, our findings contribute to quantifying and contrasting cross-cultural communication with respect to large-scale human response to a crisis event, specifically a terrorist attack. The results presented here are novel in that they demonstrate what can be learned from large-scale nonverbal as well as verbal communication analyzed in the framework of Construal Level Theory.
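A small sketch of the distance contrast, assuming made-up concreteness norms and distance bins: tweets are grouped by distance from the event and the number of distinct concrete word types is compared across bins. The cutoff, norms, and example tweets are illustrative only.

```python
# Toy concreteness norms on a 1-5 scale; real work would use published norms.
CONCRETENESS = {"blood": 4.8, "police": 4.5, "street": 4.9,
                "fear": 2.1, "prayers": 2.9}
CONCRETE_CUTOFF = 4.0   # treat words at or above this norm value as concrete

def concrete_types(tweets):
    """Count distinct concrete word types used across a set of tweets."""
    return len({w for t in tweets for w in t.lower().split()
                if CONCRETENESS.get(w, 0.0) >= CONCRETE_CUTOFF})

bins = {
    "near (< 100 km)": ["police on every street", "blood on the street tonight"],
    "far (> 5000 km)": ["prayers and fear tonight", "watching the police response"],
}
for label, tweets in bins.items():
    print(label, "distinct concrete words:", concrete_types(tweets))
```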