Publications


Featured research published by Aniket Kittur.


Human Factors in Computing Systems | 2008

Crowdsourcing user studies with Mechanical Turk

Aniket Kittur; Ed H. Chi; Bongwon Suh

User studies are important for many aspects of the design process and involve techniques ranging from informal surveys to rigorous laboratory studies. However, the costs involved in engaging users often require practitioners to trade off between sample size, time requirements, and monetary costs. Micro-task markets, such as Amazon's Mechanical Turk, offer a potential paradigm for engaging a large number of users at low time and monetary cost. Here we investigate the utility of a micro-task market for collecting user measurements, and discuss design considerations for developing remote micro user evaluation tasks. Although micro-task markets have great potential for rapidly collecting user measurements at low cost, we found that special care is needed in formulating tasks in order to harness the capabilities of the approach.
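To make the paradigm concrete, a user-measurement task of the kind studied here can be posted to the market programmatically. Below is a minimal sketch using the modern boto3 MTurk client (which postdates the paper); the title, reward, and question wording are illustrative assumptions, not values from the study.

```python
# Minimal sketch: posting a micro user-evaluation task to Mechanical Turk.
# Uses the modern boto3 MTurk API; all task parameters are illustrative
# assumptions, not values from the paper.
import boto3

mturk = boto3.client("mturk", region_name="us-east-1")

# A simple rating question, wrapped in the QuestionForm XML that MTurk expects.
question_xml = """<?xml version="1.0" encoding="UTF-8"?>
<QuestionForm xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2005-10-01/QuestionForm.xsd">
  <Question>
    <QuestionIdentifier>quality</QuestionIdentifier>
    <QuestionContent><Text>Rate the quality of this article from 1 (poor) to 7 (excellent).</Text></QuestionContent>
    <AnswerSpecification><FreeTextAnswer/></AnswerSpecification>
  </Question>
</QuestionForm>"""

hit = mturk.create_hit(
    Title="Rate the quality of a short article",
    Description="Read a short article and rate its quality on a 7-point scale.",
    Reward="0.05",                      # low monetary cost is the point of the paradigm
    MaxAssignments=30,                  # many independent raters per item
    LifetimeInSeconds=24 * 60 * 60,
    AssignmentDurationInSeconds=600,
    Question=question_xml,
)
print("Posted HIT:", hit["HIT"]["HITId"])
```

In keeping with the paper's caution about task formulation, one common safeguard is to pair the subjective rating with objectively verifiable sub-questions so that low-effort or gamed responses can be screened out.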


Conference on Computer Supported Cooperative Work | 2013

The future of crowd work

Aniket Kittur; Jeffrey V. Nickerson; Michael S. Bernstein; Elizabeth M. Gerber; Aaron D. Shaw; John Zimmerman; Matthew Lease; John J. Horton

Paid crowd work offers remarkable opportunities for improving productivity, social mobility, and the global economy by engaging a geographically distributed workforce to complete complex tasks on demand and at scale. But it is also possible that crowd work will fail to achieve its potential, focusing on assembly-line piecework. Can we foresee a future crowd workplace in which we would want our children to participate? This paper frames the major challenges that stand in the way of this goal. Drawing on theory from organizational behavior and distributed computing, as well as direct feedback from workers, we outline a framework that will enable crowd work that is complex, collaborative, and sustainable. The framework lays out research challenges in twelve major areas: workflow, task assignment, hierarchy, real-time response, synchronous collaboration, quality control, crowds guiding AIs, AIs guiding crowds, platforms, job design, reputation, and motivation.


Human Factors in Computing Systems | 2007

He says, she says: conflict and coordination in Wikipedia

Aniket Kittur; Bongwon Suh; Bryan A. Pendleton; Ed H. Chi

Wikipedia, a wiki-based encyclopedia, has become one of the most successful experiments in collaborative knowledge building on the Internet. As Wikipedia continues to grow, the potential for conflict and the need for coordination increase as well. This article examines the growth of such non-direct work and describes the development of tools to characterize conflict and coordination costs in Wikipedia. The results may inform the design of new collaborative knowledge systems.
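One concrete way to operationalize such non-direct work is to measure the share of edits landing outside the main article namespace (talk, user, and policy pages), which reflects coordination overhead rather than direct content production. Below is a minimal sketch of that measurement, assuming a simple (timestamp, namespace) edit log as input.

```python
# Sketch: share of edits that are "non-direct work", i.e., edits outside the
# main article namespace. Namespace 0 is the article namespace by MediaWiki
# convention; the (timestamp, namespace) edit-log format is an assumption.
ARTICLE_NAMESPACE = 0

def indirect_work_share(edits):
    """edits: iterable of (timestamp, namespace) tuples for one time period."""
    edits = list(edits)
    if not edits:
        return 0.0
    indirect = sum(1 for _, ns in edits if ns != ARTICLE_NAMESPACE)
    return indirect / len(edits)

# Example: 3 of 5 edits went to talk/user-talk pages in this period.
print(indirect_work_share([(1, 0), (2, 1), (3, 3), (4, 0), (5, 1)]))  # 0.6
```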


Human Factors in Computing Systems | 2011

CrowdForge: crowdsourcing complex work

Aniket Kittur; Boris Smus; Robert E. Kraut

Micro-task markets such as Amazon's Mechanical Turk represent a new paradigm for accomplishing work, in which employers can tap into a large population of workers around the globe to accomplish tasks in a fraction of the time and at a fraction of the cost of more traditional methods. However, such markets typically support only simple, independent tasks, such as labeling an image or judging the relevance of a search result. Here we present a general-purpose framework for micro-task markets that provides scaffolding for more complex human computation tasks that require coordination among many individuals, such as writing an article.
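As a rough sketch of the decomposition pattern behind the framework, the article-writing example can be organized as a partition step (outline the article), a map step (draft each section independently), and a reduce step (merge the drafts), with a plain function standing in for each posted micro-task. The prompts and flow below are illustrative assumptions, not CrowdForge's actual task templates.

```python
# Sketch of a partition/map/reduce decomposition for complex crowd work,
# with crowd() standing in for posting a micro-task and collecting an answer.
from typing import List

def crowd(prompt: str) -> str:
    """Stand-in for a posted micro-task; here it just asks a local user."""
    return input(f"[worker task] {prompt}\n> ")

def partition(topic: str) -> List[str]:
    # One worker proposes an outline; each heading becomes an independent subtask.
    outline = crowd(f"List comma-separated section headings for an article on: {topic}")
    return [h.strip() for h in outline.split(",") if h.strip()]

def map_step(topic: str, headings: List[str]) -> List[str]:
    # Different workers draft each section in parallel, without coordinating.
    return [crowd(f"Write one paragraph for the section '{h}' of an article on {topic}")
            for h in headings]

def reduce_step(topic: str, paragraphs: List[str]) -> str:
    # A final worker merges the independently written drafts into a coherent whole.
    joined = "\n\n".join(paragraphs)
    return crowd(f"Edit these paragraphs into a coherent article on {topic}:\n{joined}")

def write_article(topic: str) -> str:
    headings = partition(topic)
    return reduce_step(topic, map_step(topic, headings))
```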


International Conference on Weblogs and Social Media | 2011

An Assessment of Intrinsic and Extrinsic Motivation on Task Performance in Crowdsourcing Markets

Jakob Rogstadius; Vassilis Kostakos; Aniket Kittur; Boris Smus; Jim Laredo; Maja Vukovic

Crowdsourced labor markets represent a powerful new paradigm for accomplishing work. Understanding the motivating factors that lead to high quality work could have significant benefits. However, researchers have so far found that motivating factors such as increased monetary reward generally increase workers' willingness to accept a task or the speed at which a task is completed, but do not improve the quality of the work. We hypothesize that factors that increase the intrinsic motivation of a task, such as framing it as helping others, may succeed in improving output quality where extrinsic motivators such as increased pay do not. In this paper we present an experiment testing this hypothesis, along with a novel experimental design that enables controlled experimentation with intrinsic and extrinsic motivators in Amazon's Mechanical Turk, a popular crowdsourcing task market. Results suggest that intrinsic motivation can indeed improve the quality of workers' output, confirming our hypothesis. Furthermore, we find a synergistic interaction between intrinsic and extrinsic motivators that runs contrary to previous literature suggesting "crowding out" effects. Our results have significant practical and theoretical implications for crowd work.


Human Factors in Computing Systems | 2011

Apolo: making sense of large network data by combining rich user interaction and machine learning

Duen Horng Chau; Aniket Kittur; Jason I. Hong; Christos Faloutsos

Extracting useful knowledge from large network datasets has become a fundamental challenge in many domains, from scientific literature to social networks and the web. We introduce Apolo, a system that uses a mixed-initiative approach, combining visualization, rich user interaction, and machine learning, to guide the user in incrementally and interactively exploring large network data and making sense of it. Apolo engages the user in bottom-up sensemaking, gradually building up an understanding by starting small rather than starting big and drilling down. Apolo also helps users find relevant information: users specify exemplars, and a machine learning method called Belief Propagation infers which other nodes may be of interest. We evaluated Apolo in a between-subjects study with twelve participants, whose task was to find relevant new papers to update an existing survey paper. As assessed by expert judges, participants using Apolo found significantly more relevant papers. Subjective feedback on Apolo was also very positive.
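A simplified picture of the inference step: relevance spreads outward from the user's exemplar nodes to their neighbors in the graph. The sketch below uses a basic guilt-by-association iteration rather than Apolo's actual Belief Propagation implementation, and the toy graph stands in for a paper-citation network.

```python
# Sketch: score graph nodes by relevance to user-chosen exemplars via
# iterative neighbor averaging (guilt-by-association). This is a simplified
# stand-in for Apolo's Belief Propagation step, not its actual algorithm.
import networkx as nx

def propagate_relevance(g, exemplars, damping=0.85, iters=50):
    """Iteratively average neighbor scores, pinning exemplars at 1.0."""
    score = {n: (1.0 if n in exemplars else 0.0) for n in g.nodes}
    for _ in range(iters):
        new = {}
        for n in g.nodes:
            if n in exemplars:
                new[n] = 1.0                      # user-labeled exemplars stay fixed
            else:
                nbrs = list(g.neighbors(n))
                new[n] = damping * sum(score[m] for m in nbrs) / len(nbrs) if nbrs else 0.0
        score = new
    return score

g = nx.karate_club_graph()                        # toy stand-in for a citation network
scores = propagate_relevance(g, exemplars={0, 33})
print(sorted(scores, key=scores.get, reverse=True)[:5])   # five most relevant nodes
```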


Conference on Computer Supported Cooperative Work | 2010

Beyond Wikipedia: coordination and conflict in online production groups

Aniket Kittur; Robert E. Kraut

Online production groups have the potential to transform the way that knowledge is produced and disseminated. One of the most widely used forms of online production is the wiki, which has been used in domains ranging from science to education to enterprise. We examined the development of and interactions between coordination and conflict in a sample of 6811 wiki production groups. We investigated the influence of four coordination mechanisms: intra-article communication, inter-user communication, concentration of workgroup structure, and policy and procedures. We also examined the growth of conflict, finding the density of users in an information space to be a significant predictor. Finally, we analyzed the effectiveness of the four coordination mechanisms on managing conflict, finding differences in how each scaled to large numbers of contributors. Our results suggest that coordination mechanisms effective for managing conflict are not always the same as those effective for managing task quality, and that designers must take into account the social benefits of coordination mechanisms in addition to their production benefits.


Human Factors in Computing Systems | 2008

Lifting the veil: improving accountability and social transparency in Wikipedia with wikidashboard

Bongwon Suh; Ed H. Chi; Aniket Kittur; Bryan A. Pendleton

Wikis are collaborative systems in which virtually anyone can edit anything. Although wikis have become highly popular in many domains, their mutable nature often leads them to be distrusted as a reliable source of information. Here we describe a social dynamic analysis tool called WikiDashboard which aims to improve social transparency and accountability on Wikipedia articles. Early reactions from users suggest that the increased transparency afforded by the tool can improve the interpretation, communication, and trustworthiness of Wikipedia articles.


Frontiers in Neuroinformatics | 2011

The Cognitive Atlas: Toward a Knowledge Foundation for Cognitive Neuroscience

Russell A. Poldrack; Aniket Kittur; Donald J. Kalar; Eric N. Miller; Christian Seppa; Yolanda Gil; D. Stott Parker; Fred W. Sabb; Robert M. Bilder

Cognitive neuroscience aims to map mental processes onto brain function, which begs the question of what “mental processes” exist and how they relate to the tasks that are used to manipulate and measure them. This topic has been addressed informally in prior work, but we propose that cumulative progress in cognitive neuroscience requires a more systematic approach to representing the mental entities that are being mapped to brain function and the tasks used to manipulate and measure mental processes. We describe a new open collaborative project that aims to provide a knowledge base for cognitive neuroscience, called the Cognitive Atlas (accessible online at http://www.cognitiveatlas.org), and outline how this project has the potential to drive novel discoveries about both mind and brain.


International Symposium on Wikis and Open Collaboration | 2011

Don't bite the newbies: how reverts affect the quantity and quality of Wikipedia work

Aaron Halfaker; Aniket Kittur; John Riedl

Reverts are important to maintaining the quality of Wikipedia. They fix mistakes, repair vandalism, and help enforce policy. However, reverts can also be damaging, especially to the aspiring editor whose work they destroy. In this research we analyze 400,000 Wikipedia revisions to understand the effect that reverts have on editors. We seek to understand the extent to which they demotivate users, reducing the workforce of contributors, versus the extent to which they help users improve as encyclopedia editors. Overall we find that reverts are powerfully demotivating, but that their net influence is positive: more quality work is done in Wikipedia as a result of reverts than is lost by chasing editors away. However, we identify key conditions, most notably new editors being reverted by much more experienced editors, under which reverts are particularly damaging. We propose that reducing the damage from reverts may be one effective path for Wikipedia to solve the newcomer retention problem.
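Analyses like this typically detect "identity reverts": revisions that restore the exact text of an earlier revision, found by comparing content hashes within a bounded history window. Below is a minimal sketch of that detection, assuming a chronological stream of (revision_id, text) pairs; the input format and window size are illustrative assumptions rather than the paper's exact method.

```python
# Sketch: detect identity reverts in a page's revision history by hashing
# revision text and looking for an exact match among recent revisions.
import hashlib

def find_reverts(revisions, window=15):
    """revisions: (revision_id, full_text) pairs in chronological order.
    Returns (reverting_id, reverted_to_id) pairs."""
    seen = []                                     # (sha1, revision_id), chronological
    reverts = []
    for rev_id, text in revisions:
        digest = hashlib.sha1(text.encode("utf-8")).hexdigest()
        # Skip the immediate predecessor: matching it is a null edit, not a revert.
        for old_digest, old_id in reversed(seen[-window:-1]):
            if old_digest == digest:
                reverts.append((rev_id, old_id))
                break
        seen.append((digest, rev_id))
    return reverts

history = [("r1", "stub"), ("r2", "stub + vandalism"), ("r3", "stub")]
print(find_reverts(history))                      # [('r3', 'r1')]
```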

Collaboration


Dive into Aniket Kittur's collaborations.

Top Co-Authors

Robert E. Kraut, Carnegie Mellon University
Haiyi Zhu, University of Minnesota
Bongwon Suh, Seoul National University
Duen Horng Chau, Georgia Institute of Technology
Jason I. Hong, Carnegie Mellon University
Lixiu Yu, Carnegie Mellon University
Nathan Hahn, Carnegie Mellon University