Jonathan Bragg
University of Washington
Publication
Featured research published by Jonathan Bragg.
North American Chapter of the Association for Computational Linguistics | 2016
Angli Liu; Stephen Soderland; Jonathan Bragg; Christopher H. Lin; Xiao Ling; Daniel S. Weld
Can crowdsourced annotation of training data boost performance for relation extraction over methods based solely on distant supervision? While crowdsourcing has been shown effective for many NLP tasks, previous researchers found only minimal improvement when applying the method to relation extraction. This paper demonstrates that a much larger boost is possible, e.g., raising F1 from 0.40 to 0.60. Furthermore, the gains are due to a simple, generalizable technique, Gated Instruction, which combines an interactive tutorial, feedback to correct errors during training, and improved screening.
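For reference, the F1 figures cited in this abstract are the harmonic mean of precision and recall. The short sketch below shows the computation; the precision and recall values used are hypothetical, chosen only to illustrate an F1 near 0.60, and do not come from the paper.

    def f1_score(precision, recall):
        """Harmonic mean of precision and recall."""
        if precision + recall == 0:
            return 0.0
        return 2 * precision * recall / (precision + recall)

    # Hypothetical values chosen to illustrate an F1 near 0.60.
    print(round(f1_score(0.65, 0.56), 2))  # 0.6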
Human Factors in Computing Systems | 2017
Meredith Ringel Morris; Jeffrey P. Bigham; Robin Brewer; Jonathan Bragg; Anand Kulkarni; Jessie Li; Saiph Savage
Mainstream crowdwork platforms treat microtasks as indivisible units; however, in this article, we propose that there is value in re-examining this assumption. We argue that crowdwork platforms can improve their value proposition for all stakeholders by supporting subcontracting within microtasks. After describing the value proposition of subcontracting, we then define three models for microtask subcontracting: real-time assistance, task management, and task improvement, and reflect on potential use cases and implementation considerations associated with each. Finally, we describe the outcome of two tasks on Mechanical Turk meant to simulate aspects of subcontracting. We reflect on the implications of these findings for the design of future crowd work platforms that effectively harness the potential of subcontracting workflows.
User Interface Software and Technology | 2018
Jonathan Bragg; Daniel S. Weld
While crowdsourcing enables data collection at scale, ensuring high-quality data remains a challenge. In particular, effective task design underlies nearly every reported crowdsourcing success, yet remains difficult to accomplish. Task design is hard because it involves a costly iterative process: identifying the kind of work output one wants, conveying this information to workers, observing worker performance, understanding what remains ambiguous, revising the instructions, and repeating the process until the resulting output is satisfactory. To facilitate this process, we propose a novel meta-workflow that helps requesters optimize crowdsourcing task designs and Sprout, our open-source tool, which implements this workflow. Sprout improves task designs by (1) eliciting points of confusion from crowd workers, (2) enabling requesters to quickly understand these misconceptions and the overall space of questions, and (3) guiding requesters to improve the task design in response. We report the results of a user study with two labeling tasks demonstrating that requesters strongly prefer Sprout and produce higher-rated instructions compared to current best practices for creating gated instructions (instructions plus a workflow for training and testing workers). We also offer a set of design recommendations for future tools that support crowdsourcing task design.
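As a rough sketch of the iterative design loop described above, the snippet below mirrors the elicit-review-revise cycle; every function here is a hypothetical stand-in, not part of Sprout's actual interface.

    def post_task(instructions):
        """Stub: send the task and its instructions to crowd workers and collect responses."""
        return []

    def collect_confusions(responses):
        """Stub: gather the points of confusion that workers reported."""
        return []

    def revise_instructions(instructions, confusions):
        """Stub: the requester edits the instructions to address the reported confusions."""
        return instructions + " (revised)"

    def design_task(instructions, max_rounds=5):
        """Iterate: post the task, elicit confusion, revise, until workers report none."""
        for _ in range(max_rounds):
            responses = post_task(instructions)
            confusions = collect_confusions(responses)
            if not confusions:  # no remaining confusion: the design is satisfactory
                break
            instructions = revise_instructions(instructions, confusions)
        return instructions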
Conference on Computer Supported Cooperative Work | 2017
Anand Sriraman; Jonathan Bragg; Anand Kulkarni
Artificial intelligence (AI) is widely expected to reduce the need for human labor in a variety of sectors. Workers on virtual labor marketplaces accelerate this process by generating training data for AI systems. We propose a new model where workers earn ownership of trained AI systems, allowing them to draw a long-term royalty from a tool that replaces their labor. This concept offers benefits for workers and requesters alike, reducing the upfront costs of model training while increasing longer-term rewards to workers. We identify design and technical problems associated with this new concept, including finding market opportunities for trained models, financing model training, and compensating workers fairly for training contributions. A survey of workers on Amazon Mechanical Turk about this idea finds that workers are willing to give up 25% of their earnings in exchange for an investment in the future performance of a machine learning system.
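To make the 25% figure concrete, here is a toy calculation of the proposed trade-off between upfront pay and a stake in a trained model; the dollar amounts are hypothetical and not taken from the survey.

    # Hypothetical numbers illustrating the proposed earnings split.
    upfront_pay = 100.00                             # task earnings in dollars (hypothetical)
    stake_fraction = 0.25                            # share workers said they would give up
    paid_now = upfront_pay * (1 - stake_fraction)    # 75.00 received immediately
    invested = upfront_pay * stake_fraction          # 25.00 converted into a stake in the model
    print(paid_now, invested)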
National Conference on Artificial Intelligence | 2013
Jonathan Bragg; Daniel S. Weld
National Conference on Artificial Intelligence | 2014
Jonathan Bragg; Andrey Kolobov; Mausam; Daniel S. Weld
Adaptive Agents and Multi-Agent Systems | 2016
Jonathan Bragg; Daniel S. Weld
National Conference on Artificial Intelligence | 2016
Ryan Drapeau; Lydia B. Chilton; Jonathan Bragg; Daniel S. Weld
Archive | 2014
Daniel S. Weld; Christopher H. Lin; Jonathan Bragg
Conference on Computer Supported Cooperative Work | 2016
Shih Wen Huang; Jonathan Bragg; Isaac Cowhey; Oren Etzioni; Daniel S. Weld