Publication


Featured research published by Margaret E. Roberts.


American Political Science Review | 2013

How Censorship in China Allows Government Criticism But Silences Collective Expression

Gary King; Jennifer Pan; Margaret E. Roberts

We offer the first large-scale, multiple-source analysis of the outcome of what may be the most extensive effort to selectively censor human expression ever implemented. To do this, we have devised a system to locate, download, and analyze the content of millions of social media posts originating from nearly 1,400 different social media services all over China before the Chinese government is able to find, evaluate, and censor (i.e., remove from the Internet) the subset they deem objectionable. Using modern computer-assisted text analytic methods that we adapt to and validate in the Chinese language, we compare the substantive content of posts censored to those not censored over time in each of 85 topic areas. Contrary to previous understandings, posts with negative, even vitriolic, criticism of the state, its leaders, and its policies are not more likely to be censored. Instead, we show that the censorship program is aimed at curtailing collective action by silencing comments that represent, reinforce, or spur social mobilization, regardless of content. Censorship is oriented toward attempting to forestall collective activities that are occurring now or may occur in the future—and, as such, seems to clearly expose government intent.


Science | 2014

Reverse-engineering censorship in China: Randomized experimentation and participant observation

Gary King; Jennifer Pan; Margaret E. Roberts

Introduction

Censorship has a long history in China, extending from the efforts of Emperor Qin to burn Confucian texts in the third century BCE to the control of traditional broadcast media under Communist Party rule. However, with the rise of the Internet and new media platforms, more than 1.3 billion people can now broadcast their individual views, making information far more diffuse and considerably harder to control. In response, the government has built a massive social media censorship organization, the result of which constitutes the largest selective suppression of human communication in the recorded history of any country. We show that this large system, designed to suppress information, paradoxically leaves large footprints and so reveals a great deal about itself and the intentions of the government.

[Figure: The Chinese censorship decision tree. The pictures shown are examples of real (and typical) websites, along with our translations.]

Rationale

Chinese censorship of individual social media posts occurs at two levels: (i) Many tens of thousands of censors, working inside Chinese social media firms and government at several levels, read individual social media posts and decide which ones to take down. (ii) They also read social media submissions that are prevented from being posted by automated keyword filters, and decide which ones to publish. To study the first level, we devised an observational study to download published Chinese social media posts before the government could censor them, and to revisit each from a worldwide network of computers to see which was censored. To study the second level, we conducted the first large-scale experimental study of censorship by creating accounts on numerous social media sites throughout China, submitting texts with different randomly assigned content to each, and detecting from a worldwide network of computers which ones were censored.

To find out the details of how the system works, we supplemented the typical current approach (conducting uncertain and potentially unsafe confidential interviews with insiders) with a participant observation study, in which we set up our own social media site in China. While attempting not to alter the system we were studying, we purchased a URL, rented server space, contracted with Chinese firms to acquire the same software as used by existing social media sites, and—with direct access to their software, documentation, and even customer service help desk support—reverse-engineered how it all works.

Results

Criticisms of the state, its leaders, and their policies are routinely published, whereas posts with collective action potential are much more likely to be censored—regardless of whether they are for or against the state (two concepts not previously distinguished in the literature). Chinese people can write the most vitriolic blog posts about even the top Chinese leaders without fear of censorship, but if they write in support of or opposition to an ongoing protest—or even about a rally in favor of a popular policy or leader—they will be censored. We clarify the internal mechanisms of the Chinese censorship apparatus and show how changes in censorship behavior reveal government intent by presaging action on the ground. That is, it appears that criticism on the web, which was thought to be censored, is used by Chinese leaders to determine which officials are not doing their job of mollifying the people and need to be replaced.

Conclusion

Censorship in China is used to muzzle those outside government who attempt to spur the creation of crowds for any reason—in opposition to, in support of, or unrelated to the government. The government allows the Chinese people to say whatever they like about the state, its leaders, or their policies, because talk about any subject unconnected to collective action is not censored. The value that Chinese leaders find in allowing and then measuring criticism by hundreds of millions of Chinese people creates actionable information for them and, as a result, also for academic scholars and public policy analysts.

Editor's summary: Censorship of social media in China

Figuring out how many and which social media comments are censored by governments is difficult because those comments, by definition, cannot be read. King et al. have posted comments to social media sites in China and then waited to see which of these never appeared, which appeared and were then removed, and which appeared and survived. About 40% of their submissions were reviewed by an army of censors, and more than half of these never appeared. By varying the content of posts across topics, they conclude that any mention of collective action is selectively suppressed. Science, this issue 10.1126/science.1251722

Abstract

China censors online posts that advocate collective action. Existing research on the extensive Chinese censorship organization uses observational methods with well-known limitations. We conducted the first large-scale experimental study of censorship by creating accounts on numerous social media sites, randomly submitting different texts, and observing from a worldwide network of computers which texts were censored and which were not. We also supplemented interviews with confidential sources by creating our own social media site, contracting with Chinese firms to install the same censoring technologies as existing sites, and—with their software, documentation, and even customer support—reverse-engineering how it all works. Our results offer rigorous support for the recent hypothesis that criticisms of the state, its leaders, and their policies are published, whereas posts about real-world events with collective action potential are censored.


Journal of the American Statistical Association | 2016

A Model of Text for Experimentation in the Social Sciences

Margaret E. Roberts; Brandon M. Stewart; Edoardo M. Airoldi

Statistical models of text have become increasingly popular in statistics and computer science as a method of exploring large document collections. Social scientists often want to move beyond exploration, to measurement and experimentation, and make inference about social and political processes that drive discourse and content. In this article, we develop a model of text data that supports this type of substantive research. Our approach is to posit a hierarchical mixed membership model for analyzing topical content of documents, in which mixing weights are parameterized by observed covariates. In this model, topical prevalence and topical content are specified as a simple generalized linear model on an arbitrary number of document-level covariates, such as news source and time of release, enabling researchers to introduce elements of the experimental design that informed document collection into the model, within a generally applicable framework. We demonstrate the proposed methodology by analyzing a collection of news reports about China, where we allow the prevalence of topics to evolve over time and vary across newswire services. Our methods quantify the effect of newswire source on both the frequency and nature of topic coverage. Supplementary materials for this article are available online.
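The covariate-dependent topical prevalence described in this abstract can be illustrated with a minimal simulation. Everything below (the covariate values, coefficients, topic count, and noise scale) is hypothetical; the sketch only shows the core idea that document-topic proportions are drawn from a logistic-normal whose mean is a linear function of document-level covariates such as source and time.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 3 topics, 5 documents from 2 news sources over time.
K = 3
n_docs = 5
# Document-level covariates: intercept, source indicator, scaled time.
X = np.column_stack([
    np.ones(n_docs),
    rng.integers(0, 2, n_docs),    # source: 0 or 1
    np.linspace(0.0, 1.0, n_docs)  # time of release
])
# Prevalence coefficients: one column per topic minus a reference topic,
# mirroring the logistic-normal parameterization.
Gamma = rng.normal(0.0, 1.0, size=(X.shape[1], K - 1))

def topic_proportions(X, Gamma, sigma=0.5, rng=rng):
    """Draw document-topic proportions theta from a logistic-normal
    whose mean is a linear function of the covariates (the 'simple
    generalized linear model' on topical prevalence)."""
    eta = X @ Gamma + rng.normal(0.0, sigma, size=(X.shape[0], Gamma.shape[1]))
    # Append a zero column for the reference topic, then apply softmax.
    eta = np.column_stack([eta, np.zeros(X.shape[0])])
    e = np.exp(eta - eta.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

theta = topic_proportions(X, Gamma)
assert theta.shape == (n_docs, K)
assert np.allclose(theta.sum(axis=1), 1.0)  # valid topic proportions
```

Changing a covariate (e.g., the source indicator) shifts the mean of `eta` and therefore the expected topic mix, which is how the model lets experimental design inform inference about discourse.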


North American Chapter of the Association for Computational Linguistics | 2015

TopicCheck: Interactive Alignment for Assessing Topic Model Stability

Jason Chuang; Margaret E. Roberts; Brandon M. Stewart; Rebecca Weiss; Dustin Tingley; Justin Grimmer; Jeffrey Heer

Content analysis, a widely-applied social science research method, is increasingly being supplemented by topic modeling. However, while the discourse on content analysis centers heavily on reproducibility, computer scientists often focus more on scalability and less on coding reliability, leading to growing skepticism on the usefulness of topic models for automated content analysis. In response, we introduce TopicCheck, an interactive tool for assessing topic model stability. Our contributions are threefold. First, from established guidelines on reproducible content analysis, we distill a set of design requirements on how to computationally assess the stability of an automated coding process. Second, we devise an interactive alignment algorithm for matching latent topics from multiple models, and enable sensitivity evaluation across a large number of models. Finally, we demonstrate that our tool enables social scientists to gain novel insights into three active research questions.
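The alignment idea in this abstract, matching latent topics across repeated model runs to see which topics are stable, can be approximated with a simple greedy cosine-similarity matching. This is not the paper's interactive algorithm, only a rough stand-in under stated assumptions; the topic-word distributions below are hypothetical.

```python
import numpy as np

def greedy_align(topics_a, topics_b):
    """Greedily pair topics from two model runs by cosine similarity of
    their topic-word distributions. Stable topics find close matches;
    unstable ones are left with low-similarity pairings."""
    a = topics_a / np.linalg.norm(topics_a, axis=1, keepdims=True)
    b = topics_b / np.linalg.norm(topics_b, axis=1, keepdims=True)
    sim = a @ b.T
    pairs = []
    free_a, free_b = set(range(len(a))), set(range(len(b)))
    while free_a and free_b:
        # Take the most similar remaining (run A topic, run B topic) pair.
        i, j = max(((i, j) for i in free_a for j in free_b),
                   key=lambda ij: sim[ij])
        pairs.append((i, j, float(sim[i, j])))
        free_a.discard(i)
        free_b.discard(j)
    return pairs

# Two hypothetical runs over a 4-word vocabulary; run_b permutes run_a's topics.
run_a = np.array([[0.7, 0.1, 0.1, 0.1],
                  [0.1, 0.7, 0.1, 0.1]])
run_b = run_a[[1, 0]]
pairs = greedy_align(run_a, run_b)
assert sorted(p[:2] for p in pairs) == [(0, 1), (1, 0)]  # permutation recovered
```

Running this across many model restarts and inspecting the similarity of matched pairs gives a crude sensitivity check in the spirit of the tool's stability assessment.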


Social Informatics | 2016

On the Influence of Social Bots in Online Protests

Pablo Suárez-Serrato; Margaret E. Roberts; Clayton A. Davis; Filippo Menczer

Social bots can affect online communication among humans. We study this phenomenon by focusing on #YaMeCanse, the most active protest hashtag in the history of Twitter in Mexico. Accounts using the hashtag are classified using the BotOrNot bot detection tool. Our preliminary analysis suggests that bots played a critical role in disrupting online communication about the protest movement.


Political Analysis | 2015

How Robust Standard Errors Expose Methodological Problems They Do Not Fix, and What to Do About It

Gary King; Margaret E. Roberts


Political Analysis | 2015

Computer-Assisted Text Analysis for Comparative Politics

Christopher Lucas; Richard A. Nielsen; Margaret E. Roberts; Brandon M. Stewart; Alex Storer; Dustin Tingley


Archive | 2014

stm: R Package for Structural Topic Models

Margaret E. Roberts; Brandon M. Stewart; Dustin Tingley


International Conference on Neural Information Processing | 2013

The structural topic model and applied social science

Margaret E. Roberts; Brandon M. Stewart; Dustin Tingley; Edoardo M. Airoldi


Computational Social Science | 2016

Navigating the Local Modes of Big Data: The Case of Topic Models

Margaret E. Roberts; Brandon M. Stewart; Dustin Tingley
