Publication


Featured research published by John J. Horton.


Conference on Computer Supported Cooperative Work | 2013

The future of crowd work

Aniket Kittur; Jeffrey V. Nickerson; Michael S. Bernstein; Elizabeth M. Gerber; Aaron D. Shaw; John Zimmerman; Matthew Lease; John J. Horton

Paid crowd work offers remarkable opportunities for improving productivity, social mobility, and the global economy by engaging a geographically distributed workforce to complete complex tasks on demand and at scale. But it is also possible that crowd work will fail to achieve its potential, focusing on assembly-line piecework. Can we foresee a future crowd workplace in which we would want our children to participate? This paper frames the major challenges that stand in the way of this goal. Drawing on theory from organizational behavior and distributed computing, as well as direct feedback from workers, we outline a framework that will enable crowd work that is complex, collaborative, and sustainable. The framework lays out research challenges in twelve major areas: workflow, task assignment, hierarchy, real-time response, synchronous collaboration, quality control, crowds guiding AIs, AIs guiding crowds, platforms, job design, reputation, and motivation.


Electronic Commerce | 2010

The labor economics of paid crowdsourcing

John J. Horton; Lydia B. Chilton

We present a model of workers supplying labor to paid crowdsourcing projects. We also introduce a novel method for estimating a worker's reservation wage - the key parameter in our labor supply model. We tested our model by presenting experimental subjects with real-effort work scenarios that varied in the offered payment and difficulty. As predicted, subjects worked less when the pay was lower. However, they did not work less when the task was more time-consuming. Interestingly, at least some subjects appear to be target earners, contrary to the assumptions of the rational model. The strongest evidence for target earning is an observed preference for earning total amounts evenly divisible by 5, presumably because these amounts make good targets. Despite its predictive failures, we calibrate our model with data pooled from both experiments. We find that the reservation wages of our sample are approximately log-normally distributed, with a median wage of $1.38/hour. We discuss how to use our calibrated model in applications.
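
As an illustration of the model's mechanics, the minimal sketch below (not the authors' code) takes the reported log-normal reservation-wage distribution with a median of $1.38/hour, assumes a placeholder dispersion, and computes the share of workers who would accept a given offered wage - in the model, a worker supplies labor when the offer meets or exceeds her reservation wage.

    import math

    # Illustrative sketch of the labor-supply model described above.
    # The median reservation wage is $1.38/hour, so the log-scale
    # location is ln(1.38). SIGMA is an assumed placeholder, not an
    # estimate reported in the paper.
    MU = math.log(1.38)
    SIGMA = 1.0  # hypothetical log-scale spread

    def share_willing_to_work(offered_wage):
        # Fraction of workers whose reservation wage is <= the offered
        # wage, i.e. the log-normal CDF evaluated at the offer.
        z = (math.log(offered_wage) - MU) / SIGMA
        return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

    for wage in (0.50, 1.38, 3.00):
        print(f"${wage:.2f}/hour -> {share_willing_to_work(wage):.0%} of workers accept")

At the median offer of $1.38/hour the predicted acceptance share is exactly 50%, which is what a median reservation wage means in this setup.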


Conference on Computer Supported Cooperative Work | 2011

Designing incentives for inexpert human raters

Aaron Shaw; John J. Horton; Daniel L. Chen

The emergence of online labor markets makes it far easier to use individual human raters to evaluate materials for data collection and analysis in the social sciences. In this paper, we report the results of an experiment - conducted in an online labor market - that measured the effectiveness of a collection of social and financial incentive schemes for motivating workers to conduct a qualitative, content analysis task. Overall, workers performed better than chance, but results varied considerably depending on task difficulty. We find that treatment conditions which asked workers to prospectively think about the responses of their peers - when combined with financial incentives - produced more accurate performance. Other treatments generally had weak effects on quality. Workers in India performed significantly worse than US workers, regardless of treatment group.


Knowledge Discovery and Data Mining | 2010

Task search in a human computation market

Lydia B. Chilton; John J. Horton; Robert C. Miller; Shiri Azenkot

In order to understand how a labor market for human computation functions, it is important to know how workers search for tasks. This paper uses two complementary methods to gain insight into how workers search for tasks on Mechanical Turk. First, we perform a high-frequency scrape of 36 pages of search results and analyze it by looking at the rate of disappearance of tasks across the key ways Mechanical Turk allows workers to sort tasks. Second, we present the results of a survey in which we paid workers for self-reported information about how they search for tasks. Our main findings are that, in the aggregate, workers sort by which tasks are most recently posted and which have the largest number of tasks available. Furthermore, workers look mostly at the first page of the most recently posted tasks and the first two pages of the tasks with the most available instances, but in both categories the position on the results page is unimportant to workers. We observe that at least some employers try to manipulate the position of their task in the search results to exploit the tendency to search for recently posted tasks. At the individual level, we observed workers searching by almost all the possible categories and looking more than 10 pages deep. For a task we posted to Mechanical Turk, we confirmed that a favorable position in the search results does matter: our task with favorable positioning was completed 30 times faster and for less money than when its position was unfavorable.
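
The scrape-based measurement lends itself to a simple worked example. The sketch below is hypothetical (illustrative data and names, not the authors' pipeline): given the task IDs visible in two consecutive scrapes of the same sorted results, the share of IDs that vanished between scrapes approximates how quickly work is taken under that sort order.

    # Illustrative disappearance-rate computation (not the authors' scraper).
    def disappearance_rate(first_scrape, second_scrape):
        # Fraction of task IDs from the first scrape that are absent
        # from the second scrape.
        first, second = set(first_scrape), set(second_scrape)
        if not first:
            return 0.0
        return len(first - second) / len(first)

    # Hypothetical snapshots of the "most recently posted" sort,
    # taken a few minutes apart.
    scrape_t0 = ["hit_101", "hit_102", "hit_103", "hit_104"]
    scrape_t1 = ["hit_103", "hit_104", "hit_105", "hit_106"]
    print(f"disappearance rate: {disappearance_rate(scrape_t0, scrape_t1):.0%}")  # 50%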


Workshop on Internet and Network Economics | 2010

Online labor markets

John J. Horton

In recent years, a number of online labor markets have emerged that allow workers from around the world to sell their labor to an equally global pool of buyers. The creators of these markets play the role of labor market intermediary by providing institutional support and remedying informational asymmetries. In this paper, I explore market creators' choices of price structure, price level, and investment in platforms. I also discuss competition among markets and the business strategies employed by market creators. The paper concludes with a discussion of the productivity and welfare effects of online labor.


ACM Crossroads Student Magazine | 2010

Heads in the cloud

Robert C. Miller; Greg Little; Michael S. Bernstein; Jeffrey P. Bigham; Lydia B. Chilton; Max Goldman; John J. Horton; Rajeev Nayak

A professor and several PhD students at MIT examine the challenges and opportunities in human computation.


National Bureau of Economic Research | 2010

The Online Laboratory: Conducting Experiments in a Real Labor Market

John J. Horton; David G. Rand; Richard J. Zeckhauser



Economics Letters | 2011

The condition of the Turking class: Are online employers fair and honest?

John J. Horton


Archive | 2009

The Wages of Pay Cuts: Evidence from a Field Experiment

Daniel L. Chen; John J. Horton


Archive | 2010

Heads in the Cloud

Robert C. Miller; Greg Little; Michael S. Bernstein; Jeffrey P. Bigham; Lydia B. Chilton; Max Goldman; John J. Horton; Rajeev Nayak

Collaboration


Dive into John J. Horton's collaborations.

Top Co-Authors

Robert C. Miller, Massachusetts Institute of Technology
Greg Little, Massachusetts Institute of Technology
Jeffrey P. Bigham, Carnegie Mellon University
Max Goldman, Massachusetts Institute of Technology
Rajeev Nayak, Massachusetts Institute of Technology
Aaron Shaw, University of California