Lydia B. Chilton
University of Washington
Publications
Featured research published by Lydia B. Chilton.
user interface software and technology | 2010
Greg Little; Lydia B. Chilton; Max Goldman; Robert C. Miller
Mechanical Turk (MTurk) provides an on-demand source of human computation. This creates a tremendous opportunity to explore algorithms that incorporate human computation as a function call. However, various systems challenges make this difficult in practice, and most uses of MTurk post large numbers of independent tasks. TurKit is a toolkit for prototyping and exploring algorithmic human computation, while maintaining a straightforward imperative programming style. We present the crash-and-rerun programming model that makes TurKit possible, along with a variety of applications for human computation algorithms. We also present case studies of TurKit used for real experiments across different fields.
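The crash-and-rerun model can be hard to picture from the abstract alone, so here is a minimal sketch of the underlying idea in Python (TurKit itself is JavaScript): expensive, nondeterministic steps such as posting a task are memoized in a persistent store, so the whole script can simply be rerun from the top after a crash or a long wait and will skip everything it has already completed. The `once` and `post_hit_and_wait` helpers and the memo-file layout are illustrative stand-ins, not TurKit's actual API.

```python
import json
import os

DB_PATH = "memo.json"  # persistent memo store that survives crashes and reruns

def _load():
    return json.load(open(DB_PATH)) if os.path.exists(DB_PATH) else {}

def _save(db):
    json.dump(db, open(DB_PATH, "w"))

def once(key, expensive_fn):
    """Run expensive_fn at most once across reruns; replay its stored result afterwards."""
    db = _load()
    if key in db:
        return db[key]        # completed on an earlier run: replay the result
    result = expensive_fn()   # expensive / nondeterministic step (e.g. waiting on workers)
    db[key] = result
    _save(db)
    return result

def post_hit_and_wait(prompt):
    # Placeholder: a real version would post a task to MTurk and, if no worker
    # has answered yet, deliberately crash so the script can be rerun later.
    return f"worker response to: {prompt}"

def main():
    # The script reads as plain imperative code; rerunning it from the top is
    # cheap because every finished step short-circuits through `once`.
    draft = once("draft", lambda: post_hit_and_wait("Write a paragraph about topic X"))
    edit = once("edit", lambda: post_hit_and_wait(f"Improve this paragraph: {draft}"))
    return edit

print(main())
```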
knowledge discovery and data mining | 2009
Greg Little; Lydia B. Chilton; Max Goldman; Robert C. Miller
Mechanical Turk (MTurk) is an increasingly popular web service for paying people small rewards to do human computation tasks. Current uses of MTurk typically post independent parallel tasks. I am exploring an alternative iterative paradigm, in which workers build on or evaluate each other's work. Part of my proposal is a toolkit called TurKit, which facilitates deployment of iterative tasks on MTurk. I want to explore using this technology as a new form of end-user programming, where end users write “programs” that are really instructions executed by humans on MTurk.
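The iterative paradigm described above is easiest to see as a loop in which one worker builds on the current draft and other workers vote on whether the change is an improvement. The sketch below is an illustration of that pattern, not TurKit's API; the two `ask_*` helpers are hypothetical stand-ins for MTurk tasks.

```python
import random

def ask_worker_to_improve(prompt, current):
    # Placeholder: would post an "improve this text" task and return the worker's edit.
    return current + " [edit]"

def ask_workers_to_vote(old, new):
    # Placeholder: would post a comparison task and return True if `new` wins the vote.
    return random.random() < 0.7

def iterative_text_task(prompt, rounds=6):
    best = ""                                            # start from an empty draft
    for _ in range(rounds):
        candidate = ask_worker_to_improve(prompt, best)  # one worker builds on prior work
        if ask_workers_to_vote(best, candidate):         # peers evaluate the change
            best = candidate
    return best
```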
human factors in computing systems | 2013
Lydia B. Chilton; Greg Little; Darren Edge; Daniel S. Weld; James A. Landay
Taxonomies are a useful and ubiquitous way of organizing information. However, creating organizational hierarchies is difficult because the process requires a global understanding of the objects to be categorized. Usually one is created by an individual or a small group of people working together for hours or even days. Unfortunately, this centralized approach does not work well for the large, quickly changing datasets found on the web. Cascade is an automated workflow that allows crowd workers to spend as little as 20 seconds each while collectively making a taxonomy. We evaluate Cascade on three datasets and show that its quality is 80-90% of that of experts. Cascade's cost is competitive with that of expert information architects, despite requiring six times more human labor. Fortunately, this labor can be parallelized so that Cascade can run in as little as four minutes instead of hours or days.
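To make the 20-second claim concrete, here is a loose sketch of the kind of microtask decomposition the abstract describes: workers suggest category labels, vote on the best ones, and then judge item-category pairs. The `crowd_*` helpers are hypothetical placeholders, and the real Cascade workflow has additional structure (for example, it recurses to build deeper hierarchies).

```python
def crowd_suggest_category(item):
    # Placeholder: a worker reads one item and names a category for it.
    return f"category-of-{item}"

def crowd_vote_on_categories(suggestions):
    # Placeholder: workers vote; keep the distinct labels that win enough votes.
    return sorted(set(suggestions))

def crowd_item_in_category(item, category):
    # Placeholder: a worker answers yes/no for a single (item, category) pair.
    return category.endswith(item)

def build_taxonomy(items):
    suggestions = [crowd_suggest_category(i) for i in items]       # generate labels
    categories = crowd_vote_on_categories(suggestions)             # select the best
    return {c: [i for i in items if crowd_item_in_category(i, c)]  # assign items
            for c in categories}

print(build_taxonomy(["apples", "oranges"]))
```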
knowledge discovery and data mining | 2010
Greg Little; Lydia B. Chilton; Max Goldman; Robert C. Miller
Services like Amazon's Mechanical Turk have opened the door for exploration of processes that outsource computation to humans. These human computation processes hold tremendous potential to solve a variety of problems in novel and interesting ways. However, we are only just beginning to understand how to design such processes. This paper explores two basic approaches: one where workers work alone in parallel and one where workers iteratively build on each other's work. We present a series of experiments exploring the tradeoffs between the approaches in several problem domains: writing, brainstorming, and transcription. In each of our experiments, iteration increases the average quality of responses. The increase is statistically significant in writing and brainstorming. However, in brainstorming and transcription, it is not clear that iteration is the best overall approach, in part because both of these tasks benefit from a high variability of responses, which is more prevalent in the parallel process. Also, poor guesses in the transcription task can lead subsequent workers astray.
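The two process shapes being compared can be summarized in a few lines. The sketch below only illustrates the structural difference; `do_task` and `rate_quality` are hypothetical stand-ins for an MTurk work task and a rating task, not the paper's experimental setup.

```python
import random

def do_task(prompt, context=None):
    # Placeholder: a worker responds, optionally building on a previous response.
    return (context or "") + f" idea-{random.randint(0, 99)}"

def rate_quality(response):
    # Placeholder: other workers (or the requester) rate the response.
    return len(response)

def parallel_process(prompt, n=6):
    """n workers respond independently; keep the highest-rated response."""
    return max((do_task(prompt) for _ in range(n)), key=rate_quality)

def iterative_process(prompt, n=6):
    """n workers in sequence, each shown the previous worker's response."""
    response = None
    for _ in range(n):
        response = do_task(prompt, context=response)
    return response
```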
knowledge discovery and data mining | 2010
Lydia B. Chilton; John J. Horton; Robert C. Miller; Shiri Azenkot
In order to understand how a labor market for human computation functions, it is important to know how workers search for tasks. This paper uses two complementary methods to gain insight into how workers search for tasks on Mechanical Turk. First, we perform a high-frequency scrape of 36 pages of search results and analyze it by looking at the rate of disappearance of tasks across the key ways Mechanical Turk allows workers to sort tasks. Second, we present the results of a survey in which we paid workers for self-reported information about how they search for tasks. Our main findings are that, on a large scale, workers sort by which tasks are most recently posted and which have the largest number of tasks available. Furthermore, we find that workers look mostly at the first page of the most recently posted tasks and the first two pages of the tasks with the most available instances, but in both categories the position on the result page is unimportant to workers. We observe that at least some employers try to manipulate the position of their task in the search results to exploit the tendency to search for recently posted tasks. On an individual level, we observed workers searching by almost all the possible categories and looking more than 10 pages deep. For a task we posted to Mechanical Turk, we confirmed that a favorable position in the search results does matter: our task with favorable positioning was completed 30 times faster and for less money than when its position was unfavorable.
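The "rate of disappearance" analysis amounts to comparing which task groups remain visible under a given sort order between two closely spaced scrapes. A back-of-the-envelope sketch is shown below, assuming scrapes are simply lists of HIT-group ids; it is not the paper's actual pipeline.

```python
def disappearance_rate(scrape_t0, scrape_t1):
    """Each scrape is a list of HIT-group ids visible under one sort order."""
    before, after = set(scrape_t0), set(scrape_t1)
    gone = before - after                       # groups no longer on the first pages
    return len(gone) / len(before) if before else 0.0

# Example: tasks visible when sorted by "most recently posted"
rate = disappearance_rate(["h1", "h2", "h3", "h4"], ["h2", "h4", "h5"])
print(f"{rate:.0%} of task groups left the first pages between scrapes")  # 50%
```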
international world wide web conferences | 2011
Lydia B. Chilton; Jaime Teevan
Web search engines have historically focused on connecting people with information resources. For example, if a person wanted to know when their flight to Hyderabad was leaving, a search engine might connect them with the airline, where they could find flight status information. However, search engines have recently begun to try to meet people's search needs directly, providing, for example, flight status information in response to queries that include an airline and a flight number. In this paper, we use large-scale query log analysis to explore the challenges a search engine faces when trying to meet an information need directly in the search result page. We look at how people's interaction behavior changes when inline content is returned, finding that such content can cannibalize clicks from the algorithmic results. We see that in the absence of interaction behavior, an individual's repeat search behavior can be useful in understanding the content's value. We also discuss some of the ways user behavior can be used to provide insight into when inline answers should trigger and what types of additional information might be included in the results.
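One way to picture this kind of log analysis is to compare how often the algorithmic results are clicked when an inline answer is shown versus when it is not. The sketch below assumes a simplified log record format and is illustrative only; it is not the study's methodology.

```python
def algorithmic_click_rate(log, with_answer):
    """Fraction of result pages whose algorithmic results were clicked."""
    rows = [r for r in log if r["answer_shown"] == with_answer]
    clicked = sum(1 for r in rows if r["clicked_algorithmic_result"])
    return clicked / len(rows) if rows else 0.0

log = [
    {"answer_shown": True,  "clicked_algorithmic_result": False},
    {"answer_shown": True,  "clicked_algorithmic_result": True},
    {"answer_shown": False, "clicked_algorithmic_result": True},
    {"answer_shown": False, "clicked_algorithmic_result": True},
]
# A lower rate when the answer is shown would suggest the inline content
# cannibalizes clicks from the algorithmic results.
print(algorithmic_click_rate(log, True), algorithmic_click_rate(log, False))
```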
knowledge discovery and data mining | 2009
Lydia B. Chilton; Clayton T. Sims; Max Goldman; Greg Little; Robert C. Miller
Seaweed is a web application that lets experimental economists with no programming background design two-player symmetric games in a visually oriented interface. Games are automatically published to the web, where players can play against each other remotely, and game play is logged so that the game's designer can analyze the data. The design and implementation challenge in Seaweed is to provide an end-user programming environment that creates games responsive to events and controlled by logic without the designer understanding programming concepts such as events and synchronization, or being burdened by specifying low-level programming detail. Seaweed achieves this by providing high-level visual representations for variables, control flow, and logic, and by automating behaviors for event handling, synchronization, and function evaluation. Seaweed's evaluation demonstrates that Amazon's Mechanical Turk (MTurk) is a viable platform for forming partnerships between people and paying them to perform cooperative tasks in real time, cheaply and with high throughput.
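For readers unfamiliar with the term, a two-player symmetric game reduces to a shared action set and a single payoff table applied to both players, which is roughly what a Seaweed-style design ultimately specifies. The sketch below uses prisoner's dilemma payoffs purely as an example; it is not code from the system.

```python
ACTIONS = ["cooperate", "defect"]

PAYOFF = {   # PAYOFF[my_action][their_action] -> my reward
    "cooperate": {"cooperate": 3, "defect": 0},
    "defect":    {"cooperate": 5, "defect": 1},
}

def play_round(action_a, action_b):
    """Symmetric: both players are scored with the same payoff table."""
    return PAYOFF[action_a][action_b], PAYOFF[action_b][action_a]

print(play_round("cooperate", "defect"))   # (0, 5)
```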
human factors in computing systems | 2014
Lydia B. Chilton; Juho Kim; Paul André; Felicia Cordeiro; James A. Landay; Daniel S. Weld; Steven P. Dow; Robert C. Miller; Haoqi Zhang
Organizing conference sessions around themes improves the experience for attendees. However, the session creation process can be difficult and time-consuming due to the amount of expertise and effort required to consider alternative paper groupings. We present a collaborative web application called Frenzy to draw on the efforts and knowledge of an entire program committee. Frenzy comprises (a) interfaces to support large numbers of experts working collectively to create sessions, and (b) a two-stage process that decomposes the session-creation problem into meta-data elicitation and global constraint satisfaction. Meta-data elicitation involves a large group of experts working simultaneously, while global constraint satisfaction involves a smaller group that uses the meta-data to form sessions. We evaluated Frenzy with 48 people during a deployment at the CSCW 2014 program committee meeting. The session-making process was much faster than the traditional process, taking 88 minutes instead of a full day. We found that meta-data elicitation was useful for session creation. Moreover, the sessions created by Frenzy were the basis of the CSCW 2014 schedule.
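Frenzy's two-stage split can be illustrated with a toy version: stage one elicits a theme label per paper from many experts, and stage two packs papers into fixed-size sessions using those labels. The greedy packing below is only a sketch under that assumption, not Frenzy's actual constraint-satisfaction step.

```python
from collections import defaultdict

def form_sessions(paper_themes, session_size=4):
    """paper_themes: dict mapping paper id -> theme label elicited from experts."""
    by_theme = defaultdict(list)
    for paper, theme in paper_themes.items():   # stage 1 output: labels per paper
        by_theme[theme].append(paper)
    sessions = []
    for theme, papers in by_theme.items():      # stage 2: enforce the size constraint
        for i in range(0, len(papers), session_size):
            sessions.append((theme, papers[i:i + session_size]))
    return sessions

print(form_sessions({"p1": "crowdsourcing", "p2": "crowdsourcing", "p3": "social media"},
                    session_size=2))
```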
human factors in computing systems | 2011
Michael S. Bernstein; Ed H. Chi; Lydia B. Chilton; Björn Hartmann; Aniket Kittur; Robert C. Miller
Crowdsourcing and human computation are transforming human-computer interaction, and CHI has led the way. The seminal publication in human computation was initially published in CHI in 2004 [1], and the first paper investigating Mechanical Turk as a user study platform has amassed over one hundred citations in two years [5]. However, we are just beginning to stake out a coherent research agenda for the field. This workshop will bring together researchers in the young field of crowdsourcing and human computation and produce three artifacts: a research agenda for the field, a vision for ideal crowdsourcing platforms, and a group-edited bibliography. These resources will be publicly disseminated on the web and evolved and maintained by the community.
ACM Crossroads Student Magazine | 2010
Robert C. Miller; Greg Little; Michael S. Bernstein; Jeffrey P. Bigham; Lydia B. Chilton; Max Goldman; John J. Horton; Rajeev Nayak
A professor and several PhD students at MIT examine the challenges and opportunities in human computation.