R. Stuart Geiger
University of California, Berkeley
Publications
Featured research published by R. Stuart Geiger.
American Behavioral Scientist | 2013
Aaron Halfaker; R. Stuart Geiger; Jonathan T. Morgan; John Riedl
Open collaboration systems, such as Wikipedia, need to maintain a pool of volunteer contributors to remain relevant. Wikipedia was created through a tremendous number of contributions by millions of contributors. However, recent research has shown that the number of active contributors in Wikipedia has been declining steadily for years and suggests that a sharp decline in the retention of newcomers is the cause. This article presents data that show how several changes the Wikipedia community made to manage quality and consistency in the face of a massive growth in participation have ironically crippled the very growth they were designed to manage. Specifically, the restrictiveness of the encyclopedia’s primary quality control mechanism and the algorithmic tools used to reject contributions are implicated as key causes of decreased newcomer retention. Furthermore, the community’s formal mechanisms for norm articulation are shown to have calcified against changes—especially changes proposed by newer editors.
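To make the "algorithmic tools used to reject contributions" concrete, here is a purely illustrative sketch of a rule-based damage heuristic of the kind such counter-vandalism tools apply. This is not the logic of any actual Wikipedia tool; every feature name, weight, and threshold below is a hypothetical stand-in:

```python
# Hypothetical damage heuristic, loosely in the spirit of the
# counter-vandalism tools the article implicates. Features, weights,
# and thresholds are illustrative, not taken from any real tool.

BLACKLISTED_TERMS = {"lol", "hahaha", "wuz here"}  # toy word list

def damage_score(edit):
    """Return a heuristic score in [0, 1]; higher means more suspect."""
    score = 0.0
    if edit["editor_is_anonymous"]:
        score += 0.4                 # IP edits treated as riskier
    if edit["bytes_changed"] < -500:
        score += 0.3                 # large removals look like blanking
    added = edit["added_text"].lower()
    if any(term in added for term in BLACKLISTED_TERMS):
        score += 0.3                 # noise words suggest vandalism
    return min(score, 1.0)

def should_flag(edit, threshold=0.6):
    """Edits above the threshold get queued for rapid review or revert."""
    return damage_score(edit) >= threshold

if __name__ == "__main__":
    edit = {"editor_is_anonymous": True, "bytes_changed": -1200,
            "added_text": ""}
    print(should_flag(edit))  # True: anonymous editor plus large removal
```

The article's point is precisely that heuristics like these, however sensible individually, sweep up good-faith newcomers along with vandals.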
Information, Communication & Society | 2014
R. Stuart Geiger
This article introduces and discusses the role of bespoke code in Wikipedia, which is code that runs alongside a platform or system, rather than being integrated into server-side codebases by individuals with privileged access to the server. Bespoke code complicates the common metaphors of platforms and sovereignty that we typically use to discuss the governance and regulation of software systems through code. Specifically, the work of automated software agents (bots) in the operation and administration of Wikipedia is examined, with a focus on the materiality of code. As bots extend and modify the functionality of sites like Wikipedia, but must be continuously operated on computers that are independent from the servers hosting the site, they involve alternative relations of power and code. Instead of taking for granted the pre-existing stability of Wikipedia as a platform, bots and other bespoke code require that we examine not only the software code itself, but also the concrete, historically contingent material conditions under which this code is run. To this end, this article weaves a series of autobiographical vignettes about the author's experiences as a bot developer alongside more traditional academic discourse.
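To make the notion of bespoke code concrete: a minimal bot of this kind is just a long-running process on someone's own machine that polls Wikipedia's public API. The sketch below uses the standard MediaWiki recentchanges query; the polling loop and its timing are illustrative assumptions, not any particular bot's design:

```python
# Minimal sketch of a "bespoke code" bot: a process running on its own
# hardware that polls Wikipedia's public API rather than living in the
# server-side codebase. The API endpoint and parameters are standard
# MediaWiki; the loop itself is a hypothetical illustration.
import time
import requests

API = "https://en.wikipedia.org/w/api.php"

def fetch_recent_changes(limit=25):
    params = {
        "action": "query",
        "list": "recentchanges",
        "rcprop": "title|ids|timestamp|user",
        "rclimit": limit,
        "format": "json",
    }
    resp = requests.get(API, params=params, timeout=30)
    resp.raise_for_status()
    return resp.json()["query"]["recentchanges"]

def main():
    # The bot's availability depends entirely on this loop staying up on
    # a machine the Wikipedia servers know nothing about -- the article's
    # point about the material conditions under which bespoke code runs.
    while True:
        for change in fetch_recent_changes():
            print(change["timestamp"], change["title"], change["user"])
        time.sleep(60)  # poll once a minute; real bots also rate-limit writes

if __name__ == "__main__":
    main()
```

If the machine running this loop loses power or network, the bot's work simply stops, with no trace in Wikipedia's own infrastructure.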
International Symposium on Wikis and Open Collaboration | 2013
R. Stuart Geiger; Aaron Halfaker
In the first half of 2011, ClueBot NG -- one of the most prolific counter-vandalism bots in the English-language Wikipedia -- went down for four distinct periods, each period of downtime lasting from days to weeks. In this paper, we use these periods of breakdown as naturalistic experiments to study Wikipedia's heterogeneous quality control network, which we analyze as a multi-tiered system in which distinct classes of reviewers use various reviewing technologies to patrol for different kinds of damage at staggered time periods. Our analysis showed that the overall time-to-revert for edits almost doubled when this software agent was down. Yet while a significantly smaller proportion of edits made during the bot's downtime were reverted at the time, we found that those edits were eventually reverted later. This suggests that other agents in Wikipedia took over this quality control work, but performed it at a far slower rate.
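A minimal sketch of the time-to-revert comparison this analysis describes, assuming a hypothetical dataset of damaging edits with revert timestamps and a flag for whether the bot was down; all field names and example records are invented for illustration:

```python
# Sketch of the downtime comparison: compute time-to-revert for damaging
# edits and compare periods when the bot was up versus down. Field names
# and the sample records below are hypothetical.
from datetime import datetime
from statistics import median

def hours_to_revert(edit):
    made = datetime.fromisoformat(edit["made_at"])
    reverted = datetime.fromisoformat(edit["reverted_at"])
    return (reverted - made).total_seconds() / 3600

def median_ttr(edits, bot_was_down):
    """Median time-to-revert (hours) for edits in the given condition."""
    sample = [hours_to_revert(e) for e in edits
              if e["bot_down"] == bot_was_down and e["reverted_at"]]
    return median(sample) if sample else None

edits = [
    {"made_at": "2011-02-01T10:00", "reverted_at": "2011-02-01T10:02",
     "bot_down": False},
    {"made_at": "2011-03-05T10:00", "reverted_at": "2011-03-05T13:30",
     "bot_down": True},
]
print(median_ttr(edits, bot_was_down=False))  # minutes when the bot is up
print(median_ttr(edits, bot_was_down=True))   # hours when it is down
```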
Information, Communication & Society | 2016
R. Stuart Geiger
This article introduces and discusses bot-based collective blocklists (or blockbots) in Twitter, which have been developed by volunteers to combat harassment in the social networking site. Blockbots support the curation of a shared blocklist of accounts, where subscribers to a blockbot will not receive any notifications or messages from accounts on the blocklist. Blockbots support counterpublic communities, helping people moderate their own experiences of a site. This article provides an introduction and overview of blockbots and the issues that they raise about networked publics and platform governance, extending an intersecting literature on online harassment, platform governance, and the politics of algorithms. Such projects involve a more reflective, intentional, transparent, collaborative, and decentralized way of using algorithmic systems to respond to issues of platform governance like harassment. I argue that blockbots are not just technical solutions but social ones as well, a notable exception to common technologically determinist solutions that often push responsibility for issues like harassment to the individual user. Beyond the case of Twitter, blockbots call our attention to collective, bottom-up modes of computationally assisted moderation that can be deployed by counterpublic groups who want to participate in networked publics where hegemonic and exclusionary practices are increasingly prevalent.
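The blockbot mechanism can be sketched in a few lines: a curator-maintained shared list that subscribers' accounts apply automatically. This is a hypothetical illustration of the pattern, not Twitter's API; the blocklist URL and the apply_block callback are stand-ins for whatever platform calls a real blockbot would make:

```python
# Hypothetical sketch of the blockbot pattern: a community-curated shared
# blocklist applied automatically on behalf of each subscriber. Neither
# the URL nor apply_block corresponds to a real Twitter endpoint.
import requests

BLOCKLIST_URL = "https://example.org/shared-blocklist.json"  # hypothetical

def fetch_shared_blocklist():
    """Download the community-curated set of account IDs to block."""
    resp = requests.get(BLOCKLIST_URL, timeout=30)
    resp.raise_for_status()
    return set(resp.json()["blocked_account_ids"])

def sync_subscriber(already_blocked, apply_block):
    """Block every listed account this subscriber hasn't blocked yet."""
    for account_id in fetch_shared_blocklist() - already_blocked:
        apply_block(account_id)  # platform-specific call, supplied by caller
```

The design point the article emphasizes is that the curation of the list is collective and contestable, while the enforcement is delegated to automation on each subscriber's behalf.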
International Symposium on Wikis and Open Collaboration | 2009
R. Stuart Geiger
This paper investigates software programs as non-human social actors in Wikipedia, arguing that their influence must not be overlooked in social scientific research on the online encyclopedia project. Using statistical and archival methods, the roles of assisted editing programs and bots are examined. The proportion of edits made by these non-human actors is shown to be significantly higher than previously described in earlier research.
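The measurement in question can be sketched as classifying each edit by actor type and computing the resulting shares. The classification rules below (a username suffix for bots, tags for assisted-editing tools such as Huggle, Twinkle, or AWB) are simplified stand-ins for the paper's statistical and archival methods:

```python
# Sketch of the edit-share measurement: classify each edit by actor type
# and report the proportion made by bots and assisted-editing tools.
# The rules here are simplified illustrations, not the paper's method.
ASSISTED_TOOL_TAGS = {"huggle", "twinkle", "awb"}  # illustrative tag set

def classify(edit):
    if edit["username"].lower().endswith("bot"):
        return "bot"
    if edit.get("tool_tag") in ASSISTED_TOOL_TAGS:
        return "assisted"
    return "unassisted"

def edit_shares(edits):
    """Return the fraction of edits in each actor class."""
    counts = {"bot": 0, "assisted": 0, "unassisted": 0}
    for edit in edits:
        counts[classify(edit)] += 1
    total = sum(counts.values())
    return {kind: n / total for kind, n in counts.items()}

edits = [{"username": "ClueBot", "tool_tag": None},
         {"username": "Alice", "tool_tag": "huggle"},
         {"username": "Bob", "tool_tag": None}]
print(edit_shares(edits))  # {'bot': 0.33..., 'assisted': 0.33..., ...}
```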
Ecology and Society | 2013
Michele Romolini; Sydne Record; Rebecca Garvoille; Yevgeniy Marusenko; R. Stuart Geiger
By integrating the research and resources of hundreds of scientists from dozens of institutions, network-level science is fast becoming a scientific model of choice for addressing complex problems. In the effort to confront pressing environmental issues such as climate change, many scientists, practitioners, policy makers, and institutions are promoting network-level research that integrates the social and ecological sciences. To understand how this scientific trend is unfolding among rising scientists, we examined how graduate students experienced one such emergent social-ecological research initiative, Integrated Science for Society and Environment, within the large-scale, geographically distributed Long Term Ecological Research (LTER) Network. Through workshops, surveys, and interviews, we found that graduate students faced challenges in how they conceptualized and practiced social-ecological research within the LTER Network. We have presented these conceptual challenges at three scales: the individual/project, the LTER site, and the LTER Network. The level of student engagement with and knowledge of the LTER Network was varied, and students faced different institutional, cultural, and logistical barriers to practicing social-ecological research. These types of challenges are unlikely to be unique to LTER graduate students; thus, our findings are relevant to other scientific networks implementing new social-ecological research initiatives.
Conference on Computer Supported Cooperative Work | 2017
Anna Filippova; Brad Chapman; R. Stuart Geiger; James D. Herbsleb; Arun Kalyanasundaram; Erik H. Trainer; Aurelia Moser; Arlin Stoltzfus
Time-bounded collaborative events in which teams work together under intense time pressure are becoming increasingly popular. While hackathons, that is, competitive overnight coding events, are one of the more prevalent examples of this phenomenon, there are many more distinct event design variations for different audiences and with divergent aims, such as sprints, codefests, hack-days, edit-a-thons and so on. Taken together, these events offer new opportunities and challenges for cooperative work by affording explicit, predictable, time-bounded spaces for interdependent work and access to new audiences of collaborators. This one-day workshop brings together researchers interested in the phenomenon, experienced event organizers, and participants interested in running their own events to consolidate research to-date, share practical experiences, and understand what benefits different event variations may offer, how they may be applied in other contexts, and how insights from studying these events may contribute to CSCW knowledge.
Conference on Computer Supported Cooperative Work | 2013
Aaron Halfaker; R. Stuart Geiger; Cliff Lampe; Loren G. Terveen; Amy Bruckman; Brian Keegan; Aniket Kittur; Geraldine Fitzpatrick
We (the authors of CSCW's program) have finite time and energy that can be invested into our publications and the research communities we value. While we want our work to have the most impact possible, we also want to grow and support productive research communities within which to have this impact. This panel discussion explores the costs and benefits of submitting papers to various tiers of conferences and journals surrounding CSCW and reflects on the value of investing hours into building up a research community.
Big Data & Society | 2017
R. Stuart Geiger
Scholars and practitioners across domains are increasingly concerned with algorithmic transparency and opacity, interrogating the values and assumptions embedded in automated, black-boxed systems, particularly in user-generated content platforms. I report from an ethnography of infrastructure in Wikipedia to discuss an often understudied aspect of this topic: the local, contextual, learned expertise involved in participating in a highly automated socio-technical environment. Today, the organizational culture of Wikipedia is deeply intertwined with various data-driven algorithmic systems, which Wikipedians rely on to help manage and govern the “anyone can edit” encyclopedia at a massive scale. These bots, scripts, tools, plugins, and dashboards make Wikipedia more efficient for those who know how to work with them, but like all organizational culture, newcomers must learn them if they want to fully participate. I illustrate how cultural and organizational expertise is enacted around algorithmic agents by discussing two autoethnographic vignettes, which relate my personal experience as a veteran in Wikipedia. I present thick descriptions of how governance and gatekeeping practices are articulated through and in alignment with these automated infrastructures. Over the past 15 years, Wikipedian veterans and administrators have made specific decisions to support administrative and editorial workflows with automation in particular ways and not others. I use these cases of Wikipedia’s bot-supported bureaucracy to discuss several issues in the fields of critical algorithms studies; critical data studies; and fairness, accountability, and transparency in machine learning—most principally arguing that scholarship and practice must go beyond trying to “open up the black box” of such systems and also examine sociocultural processes like newcomer socialization.
Journal of Broadcasting & Electronic Media | 2014
R. Stuart Geiger; Airi Lampinen
“Broadcasting” is often cast as an outdated term—we are constantly told that we are in the midst of a digital/social media revolution that will make the unidirectional, mass communication model obsolete. In response, we argue that to consider the continued relevance of terms like “broadcasting” in an era of electronic media is to neither hastily disregard the legacy of these terms, nor cling to them too rigidly. In this special issue of the Journal of Broadcasting and Electronic Media written and edited by graduate students, we begin a new thread in the longstanding conversation about what it means for media to be “old” and “new.” While this distinction is not one we should take for granted, the articles in this issue all show how we can strategically approach the intricate intersections and interconnections of different media, old and new. As such, this issue collectively calls our attention not to the familiar trope of “old against new,” but rather to the tensions that arise around a “coming of age.” Presenting a wide range of international scholarship from graduate students across many different disciplinary backgrounds, topical literatures, methodological approaches, and theoretical frameworks, this special issue represents an emerging approach to what it means to study broadcasting in an era of electronic media.