Featured Research

Social And Information Networks

Blind Inference of Eigenvector Centrality Rankings

We consider the problem of estimating a network's eigenvector centrality only from data on the nodes, with no information about the network topology. Leveraging the versatility of graph filters to model network processes, we model the data supported on the nodes as graph signals obtained as the output of a graph filter applied to white noise. We seek to simplify the downstream task of centrality ranking by bypassing network topology inference methods and, instead, inferring the centrality structure of the graph directly from the graph signals. To this end, we propose two simple algorithms for ranking a set of nodes connected by an unobserved set of edges. We derive asymptotic and non-asymptotic guarantees for these algorithms, revealing key features that determine the complexity of the task at hand. Finally, we illustrate the behavior of the proposed algorithms on synthetic and real-world datasets.
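
The generative model in the abstract suggests one natural estimator: if the observed signals are filtered white noise, their covariance is a polynomial in the graph shift operator and shares its eigenvectors, so the leading eigenvector of the sample covariance can stand in for eigenvector centrality. The sketch below illustrates that idea only; it is not claimed to be either of the paper's two algorithms, and the filter used in the toy example is an arbitrary assumption.

```python
import numpy as np

def rank_by_blind_centrality(Y):
    """Rank nodes by a blind eigenvector-centrality proxy.

    Y has shape (n_nodes, n_samples); each column is assumed to be the
    output of a graph filter applied to white noise. The leading
    eigenvector of the sample covariance is used as a centrality surrogate.
    """
    n, m = Y.shape
    Yc = Y - Y.mean(axis=1, keepdims=True)
    C_hat = Yc @ Yc.T / m                      # sample covariance
    eigvals, eigvecs = np.linalg.eigh(C_hat)   # eigenvalues in ascending order
    scores = np.abs(eigvecs[:, -1])            # sign-invariant node scores
    return np.argsort(-scores), scores         # nodes from most to least central

# Toy usage: signals generated by a simple polynomial filter on a random graph.
rng = np.random.default_rng(0)
A = (rng.random((50, 50)) < 0.1).astype(float)
A = np.triu(A, 1); A = A + A.T                 # symmetric adjacency matrix
H = np.eye(50) + 0.05 * A + 0.05**2 * (A @ A)  # assumed low-order graph filter
Y = H @ rng.standard_normal((50, 2000))        # filtered white noise
ranking, scores = rank_by_blind_centrality(Y)
```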

Read more
Social And Information Networks

Bot Development for Social Engineering Attacks on Twitter

A series of bots performing simulated social engineering attacks using phishing on the Twitter platform was developed to identify potentially unsafe user behavior. In this work, different bot versions were developed to collect feedback data after sending stimuli to 1,287 Twitter accounts over 38 consecutive days. The results were not conclusive about the existence of predictors of unsafe behavior, but we conclude that, despite Twitter's security measures, this kind of attack is still feasible.

Read more
Social And Information Networks

Bot-Match: Social Bot Detection with Recursive Nearest Neighbors Search

Social bots have emerged over the last decade, initially creating a nuisance and more recently being used to intimidate journalists, sway electoral events, and aggravate existing social fissures. This social threat has spawned a bot-detection arms race in which detection algorithms evolve in an attempt to keep up with increasingly sophisticated bot accounts. This cat-and-mouse cycle has illuminated the limitations of supervised machine learning algorithms, where researchers attempt to use yesterday's data to predict tomorrow's bots. This gap means that researchers, journalists, and analysts daily identify malicious bot accounts that go undetected by state-of-the-art supervised bot detection algorithms. These analysts often want to find similar bot accounts without labeling data and training a new model, where similarity can be defined by content, network position, or both. A similarity-based algorithm could complement existing supervised and unsupervised methods and fill this gap. To this end, we present the Bot-Match methodology, in which we evaluate social media embeddings that enable a semi-supervised recursive nearest-neighbors search to map an emerging social cybersecurity threat given one or more seed accounts.
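
The core mechanic described, recursive nearest-neighbor expansion from seed accounts in an embedding space, can be sketched as follows. This is a minimal illustration assuming precomputed account embeddings and cosine similarity; the choice of embedding, neighborhood size, and stopping rule in Bot-Match itself may differ.

```python
import numpy as np

def recursive_nearest_neighbors(embeddings, seed_ids, k=5, max_rounds=3):
    """Expand a set of seed accounts by recursive nearest-neighbor search.

    embeddings: dict mapping account id -> embedding vector (seed ids included).
    seed_ids:   account ids flagged by the analyst as suspected bots.
    Returns the set of newly surfaced, similar accounts.
    """
    ids = list(embeddings)
    X = np.stack([embeddings[i] for i in ids])
    X = X / np.linalg.norm(X, axis=1, keepdims=True)   # unit rows -> cosine similarity
    found = set(seed_ids)
    frontier = set(seed_ids)
    for _ in range(max_rounds):
        new = set()
        for acct in frontier:
            q = embeddings[acct] / np.linalg.norm(embeddings[acct])
            sims = X @ q
            nearest = [ids[j] for j in np.argsort(-sims)[: k + 1]]  # +1 skips the self-match
            new.update(a for a in nearest if a not in found and a != acct)
        if not new:
            break
        found |= new
        frontier = new              # recurse on the newly found accounts
    return found - set(seed_ids)
```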

Read more
Social And Information Networks

CD-SEIZ: Cognition-Driven SEIZ Compartmental Model for the Prediction of Information Cascades on Twitter

Information spreading on social media platforms has become ubiquitous in our lives, as information propagates virally regardless of its veracity. Some information cascades turn viral because they circulate rapidly on the Internet. The uncontrollable virality of manipulated or distorted information (fake news) can be quite harmful, while the spread of true news is beneficial, especially in emergencies. We tackle the problem of predicting information cascades by presenting a novel variant of the SEIZ (Susceptible/Exposed/Infected/Skeptics) model that outperforms the original version by taking into account the cognitive processing depth of users. We define an information cascade as the set of social media users' reactions to the original content that require at least minimal physical and cognitive effort; therefore, we consider retweet/reply/quote (mention) activities and test our framework on the Syrian White Helmets Twitter data set from April 1st, 2018 to April 30th, 2019. When predicting cascade patterns with traditional compartmental models, all activities are grouped and their sum is taken into account; however, transition rates between compartments should vary according to the activity type, since the physical and cognitive effort each requires is not the same. Based on this assumption, we design a cognition-driven SEIZ (CD-SEIZ) model for the prediction of information cascades on Twitter. We tested the SIS, SEIZ, and CD-SEIZ models on 1,000 Twitter cascades and found that CD-SEIZ has a significantly lower fitting error and provides a statistically more accurate estimation.
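
For readers unfamiliar with the baseline, the sketch below forward-simulates the standard SEIZ compartmental model (Susceptible/Exposed/Infected/Skeptics) with a single set of transition rates. CD-SEIZ, as described above, refines this by letting the rates depend on the activity type (retweet/reply/quote); the parameter values here are purely illustrative, not fitted to the paper's data.

```python
import numpy as np
from scipy.integrate import solve_ivp

def seiz_rhs(t, y, N, beta, b, rho, eps, p, l):
    """Standard SEIZ rumor-spread dynamics (one set of rates, for illustration)."""
    S, E, I, Z = y
    dS = -beta * S * I / N - b * S * Z / N
    dE = ((1 - p) * beta * S * I / N + (1 - l) * b * S * Z / N
          - rho * E * I / N - eps * E)
    dI = p * beta * S * I / N + rho * E * I / N + eps * E
    dZ = l * b * S * Z / N
    return [dS, dE, dI, dZ]

# Toy forward simulation with assumed parameters (beta, b, rho, eps, p, l).
N = 10_000
y0 = [N - 10, 0, 10, 0]                      # almost everyone starts susceptible
params = (N, 0.6, 0.3, 0.4, 0.1, 0.2, 0.5)
sol = solve_ivp(seiz_rhs, (0, 60), y0, args=params, dense_output=True)
I_t = sol.sol(np.linspace(0, 60, 61))[2]     # adopter (infected) trajectory over 60 days
```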

Read more
Social And Information Networks

CML-COVID: A Large-Scale COVID-19 Twitter Dataset with Latent Topics, Sentiment and Location Information

As a platform, Twitter has been a significant public space for discussion related to the COVID-19 pandemic. Public social media platforms such as Twitter represent important sites of engagement regarding the pandemic, and these data can be used by research teams for social, health, and other research. Understanding public opinion about COVID-19 and how information diffuses on social media is important for governments and research institutions. Twitter is a ubiquitous public platform and, as such, has tremendous utility for understanding public perceptions, behavior, and attitudes related to COVID-19. In this research, we present CML-COVID, a COVID-19 Twitter data set of 19,298,967 tweets from 5,977,653 unique individuals, and summarize some of the attributes of these data. The tweets were collected between March 2020 and July 2020 using the COVID-19-related query terms coronavirus, covid, and mask. We use topic modeling, sentiment analysis, and descriptive statistics to characterize the collected tweets and, where available, their geographical location. We provide information on how to access the tweet dataset (archived using twarc).
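
As a rough illustration of the kind of latent-topic analysis mentioned above, the snippet below fits an LDA topic model over a bag-of-words representation of tweet text. It is a generic sketch, not the authors' pipeline; their preprocessing, model choices, and number of topics are not specified here.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

def latent_topics(tweet_texts, n_topics=10, n_top_words=8):
    """Fit LDA on raw tweet strings and return the top words per topic."""
    vectorizer = CountVectorizer(stop_words="english", max_features=20_000)
    X = vectorizer.fit_transform(tweet_texts)
    lda = LatentDirichletAllocation(n_components=n_topics, random_state=0)
    lda.fit(X)
    vocab = vectorizer.get_feature_names_out()
    topics = []
    for component in lda.components_:
        top = component.argsort()[::-1][:n_top_words]
        topics.append([vocab[i] for i in top])
    return topics
```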

Read more
Social And Information Networks

COVID, BLM, and the polarization of US politicians on Twitter

We mapped the tweets of 520 US Congress members, focusing on their engagement with two broad topics: the COVID-19 pandemic and the recent wave of anti-racist protests. We find that, in discussing COVID-19, Democrats frame the issue in terms of public health, while Republicans are more likely to focus on small businesses and the economy. When looking at the discourse around anti-Black violence, we find that Democrats are far more likely to name police brutality as a specific concern. Republicans, in contrast, not only discuss the issue far less but also keep their terms more general and criticize perceived protest violence.

Read more
Social And Information Networks

COVID-19's (mis)information ecosystem on Twitter: How partisanship boosts the spread of conspiracy narratives on German speaking Twitter

In late 2019, the gravest pandemic in a century began spreading across the world. A state of uncertainty related to what has become known as SARS-CoV-2 has since fueled conspiracy narratives on social media about the origin, transmission, and medical treatment of, and vaccination against, the resulting disease, COVID-19. Using social media intelligence to monitor and understand the proliferation of conspiracy narratives is one way to analyze the distribution of misinformation about the pandemic. We analyzed more than 9.5M German-language tweets about COVID-19. The results show that only about 0.6% of these tweets deal with conspiracy-theory narratives. We also found that the political orientation of users correlates with the volume of content they contribute to the dissemination of conspiracy narratives, implying that partisan communicators have a higher motivation to take part in conspiratorial discussions on Twitter. Finally, we show that, contrary to other studies, automated accounts do not significantly influence the spread of misinformation in the German-speaking Twitter sphere; they account for only about 1.31% of all conspiracy-related activity in our database.

Read more
Social And Information Networks

CRISP: A Probabilistic Model for Individual-Level COVID-19 Infection Risk Estimation Based on Contact Data

We present CRISP (COVID-19 Risk Score Prediction), a probabilistic graphical model for COVID-19 infection spread through a population based on the SEIR model, where we assume access to (1) mutual contacts between pairs of individuals across time and across various channels (e.g., Bluetooth contact traces), as well as (2) test outcomes at given times for infection, exposure, and immunity tests. Our micro-level model keeps track of the infection state of each individual at every point in time, ranging from susceptible and exposed to infectious and recovered. We develop a Monte Carlo EM algorithm to infer contact-channel-specific infection transmission probabilities. Our algorithm uses Gibbs sampling to draw samples of the latent infection status of each individual over the entire time period of analysis, given the latent infection status of all contacts and the test outcome data. Experimental results with simulated data demonstrate that our CRISP model can be parametrized by the reproduction factor R0 and exhibits population-level infectiousness and recovery time series similar to those of the classical SEIR model. However, thanks to the individual contact data, the model allows fine-grained control and inference for a wide range of COVID-19 mitigation and suppression policy measures. Moreover, the algorithm is able to support efficient testing in a test-trace-isolate approach to contain the spread of COVID-19 infections. To the best of our knowledge, this is the first model with efficient inference for COVID-19 infection spread based on individual-level contact data; most epidemic models are macro-level models that reason over entire populations. The implementation of CRISP is available in Python and C++ at this https URL.
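
To make the micro-level state tracking concrete, the sketch below forward-simulates individual SEIR states driven by contact data. It illustrates only the generative side under assumed transition probabilities; CRISP's actual contribution, inferring latent states and transmission rates via Gibbs sampling within Monte Carlo EM, is not shown here.

```python
import numpy as np

S, E, I, R = 0, 1, 2, 3   # per-individual infection states

def simulate_individual_seir(n, contacts, T, p_transmit=0.05,
                             p_incubate=0.2, p_recover=0.1, seed_infected=(0,)):
    """Forward-simulate individual-level SEIR dynamics over contact data.

    contacts: dict mapping time step t -> list of (u, v) contact pairs.
    Returns an array of shape (T + 1, n) with each individual's state over time.
    """
    rng = np.random.default_rng(0)
    state = np.full(n, S)
    state[list(seed_infected)] = I
    history = [state.copy()]
    for t in range(T):
        new_state = state.copy()
        # Exposure through contact with an infectious individual.
        for u, v in contacts.get(t, []):
            for a, b in ((u, v), (v, u)):
                if state[a] == I and state[b] == S and rng.random() < p_transmit:
                    new_state[b] = E
        # Spontaneous E -> I and I -> R transitions.
        for i in range(n):
            if state[i] == E and rng.random() < p_incubate:
                new_state[i] = I
            elif state[i] == I and rng.random() < p_recover:
                new_state[i] = R
        state = new_state
        history.append(state.copy())
    return np.array(history)
```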

Read more
Social And Information Networks

Calibration of Google Trends Time Series

Google Trends is a tool that allows researchers to analyze the popularity of Google search queries across time and space. In a single request, users can obtain time series for up to 5 queries on a common scale, normalized to the range from 0 to 100 and rounded to integer precision. Despite the overall value of Google Trends, rounding causes major problems, to the extent that entirely uninformative, all-zero time series may be returned for unpopular queries when requested together with more popular queries. We address this issue by proposing Google Trends Anchor Bank (G-TAB), an efficient solution for the calibration of Google Trends data. Our method expresses the popularity of an arbitrary number of queries on a common scale without being compromised by rounding errors. The method proceeds in two phases. In the offline preprocessing phase, an "anchor bank" is constructed, a set of queries spanning the full spectrum of popularity, all calibrated against a common reference query by carefully chaining together multiple Google Trends requests. In the online deployment phase, any given search query is calibrated by performing an efficient binary search in the anchor bank. Each search step requires one Google Trends request, but few steps suffice, as we demonstrate in an empirical evaluation. We make our code publicly available as an easy-to-use library at this https URL.
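
The online phase can be pictured as a binary search over anchors sorted by popularity, each carrying a calibrated value on the common scale. The sketch below assumes a hypothetical helper, trends_ratio(query, anchor), standing in for one joint Google Trends request that returns the ratio of the two peak values; the real G-TAB library handles rounding and request details far more carefully.

```python
def calibrate_query(query, anchor_bank, trends_ratio):
    """Place `query` on the anchor bank's common popularity scale.

    anchor_bank:  list of (anchor_query, calibrated_popularity), sorted from
                  least to most popular.
    trends_ratio: hypothetical helper; trends_ratio(q, a) issues one joint
                  Google Trends request and returns peak(q) / peak(a).
    """
    lo, hi = 0, len(anchor_bank) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        anchor, anchor_popularity = anchor_bank[mid]
        r = trends_ratio(query, anchor)        # one Google Trends request
        if 0.1 <= r <= 10:                     # comparable scales: ratio is informative
            return r * anchor_popularity
        if r > 10:                             # query far more popular: try larger anchors
            lo = mid + 1
        else:                                  # query far less popular: try smaller anchors
            hi = mid - 1
    anchor, anchor_popularity = anchor_bank[max(lo, 0)]
    return trends_ratio(query, anchor) * anchor_popularity
```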

Read more
Social And Information Networks

Can Predominant Credible Information Suppress Misinformation in Crises? Empirical Studies of Tweets Related to Prevention Measures during COVID-19

During COVID-19, misinformation on social media has affected the adoption of appropriate prevention behaviors. It is urgent to suppress this misinformation to prevent negative public health consequences. Although an array of studies has proposed misinformation suppression strategies, few have investigated the role of predominant credible information during crises, and none has examined its effect quantitatively using longitudinal social media data. Therefore, this research investigates the temporal correlations between credible information and misinformation, and whether predominant credible information can suppress misinformation, for two prevention measures (topics), wearing masks and social distancing, using tweets collected from February 15 to June 30, 2020. We trained Support Vector Machine classifiers to retrieve relevant tweets and to classify tweets containing credible information and misinformation for each topic. Based on cross-correlation analyses of the credible-information and misinformation time series for both topics, we find that previously predominant credible information can lead to a decrease in misinformation (i.e., suppression) with a time lag. These findings provide empirical evidence for suppressing misinformation with credible information in complex online environments and suggest practical strategies for future information management during crises and emergencies.
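
A minimal sketch of the lagged cross-correlation step is shown below, assuming two numpy arrays of daily tweet counts; the variable names and 14-day lag window are illustrative, not taken from the paper. A strong negative correlation at a positive lag k would be consistent with credible information on day t being followed by less misinformation on day t + k.

```python
import numpy as np

def lagged_cross_correlation(credible, misinfo, max_lag=14):
    """Correlate standardized daily counts at lags 0..max_lag (credible leads)."""
    c = (credible - credible.mean()) / credible.std()
    m = (misinfo - misinfo.mean()) / misinfo.std()
    results = {}
    for lag in range(max_lag + 1):
        if lag == 0:
            r = np.corrcoef(c, m)[0, 1]
        else:
            r = np.corrcoef(c[:-lag], m[lag:])[0, 1]   # credible leads by `lag` days
        results[lag] = r
    return results
```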

Read more
