
Publication


Featured research published by Greg Little.


user interface software and technology | 2010

VizWiz: nearly real-time answers to visual questions

Jeffrey P. Bigham; Chandrika Jayant; Hanjie Ji; Greg Little; Andrew Miller; Robert C. Miller; Robin Miller; Aubrey Tatarowicz; Brandyn White; Samual White; Tom Yeh

The lack of access to visual information like text labels, icons, and colors can cause frustration and decrease independence for blind people. Current access technology uses automatic approaches to address some problems in this space, but the technology is error-prone, limited in scope, and quite expensive. In this paper, we introduce VizWiz, a talking application for mobile phones that offers a new alternative to answering visual questions in nearly real time: asking multiple people on the web. To support answering questions quickly, we introduce quikTurkit, a general approach for intelligently recruiting human workers in advance so that workers are available when new questions arrive. A field deployment with 11 blind participants illustrates that blind people can effectively use VizWiz to cheaply answer questions in their everyday lives, highlighting issues that automatic approaches will need to address to be useful. Finally, we illustrate the potential of using VizWiz as part of the participatory design of advanced tools by using it to build and evaluate VizWiz::LocateIt, an interactive mobile tool that helps blind people solve general visual search problems.
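The pre-recruitment idea behind quikTurkit is only named in the abstract; the sketch below is a hedged illustration of the general pattern, in which a few inexpensive tasks keep workers engaged so a real question can be answered quickly. The WorkerPool class, placeholder tasks, and queue names are assumptions for illustration, not quikTurkit's actual interface.

```python
# Minimal sketch of pre-recruiting workers so answers arrive quickly
# (hypothetical names; not the actual quikTurkit implementation).
import queue
import threading

class WorkerPool:
    def __init__(self, target_pool_size=3):
        self.target_pool_size = target_pool_size
        self.pending_questions = queue.Queue()
        self.answers = queue.Queue()

    def post_placeholder_tasks(self):
        # In a real deployment this would post cheap "warm-up" tasks to
        # Mechanical Turk to keep workers around; here it is only a stub.
        for _ in range(self.target_pool_size):
            print("posted placeholder task")

    def ask(self, question):
        # A new visual question goes straight to the recruited pool.
        self.pending_questions.put(question)

    def worker_loop(self):
        # A recruited worker keeps polling for real questions to answer.
        while True:
            try:
                q = self.pending_questions.get(timeout=1)
            except queue.Empty:
                continue
            self.answers.put((q, f"simulated answer to: {q}"))

pool = WorkerPool()
pool.post_placeholder_tasks()
threading.Thread(target=pool.worker_loop, daemon=True).start()
pool.ask("What denomination is this bill?")
print(pool.answers.get(timeout=5))
```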


user interface software and technology | 2010

TurKit: human computation algorithms on mechanical turk

Greg Little; Lydia B. Chilton; Max Goldman; Robert C. Miller

Mechanical Turk (MTurk) provides an on-demand source of human computation. This provides a tremendous opportunity to explore algorithms that incorporate human computation as a function call. However, various systems challenges make this difficult in practice, and most uses of MTurk post large numbers of independent tasks. TurKit is a toolkit for prototyping and exploring algorithmic human computation while maintaining a straightforward imperative programming style. We present the crash-and-rerun programming model that makes TurKit possible, along with a variety of applications for human computation algorithms. We also present case studies of TurKit used for real experiments across different fields.
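The crash-and-rerun model itself is not spelled out in the abstract, so the following is a minimal sketch of the idea in Python (TurKit itself is JavaScript, and the once helper and trace file below are illustrative assumptions): the whole script is re-executed from the top whenever it stops, and the results of expensive or nondeterministic steps are recorded so that reruns replay them instead of repeating the work.

```python
# Minimal crash-and-rerun sketch (illustrative; not TurKit's actual API).
import json
import os

TRACE_FILE = "trace.json"  # record of completed steps, survives crashes

_trace = json.load(open(TRACE_FILE)) if os.path.exists(TRACE_FILE) else []
_step = 0

def once(fn):
    """Execute fn the first time this step is reached; on later reruns of
    the whole script, return the recorded result instead of re-executing."""
    global _step
    if _step < len(_trace):
        result = _trace[_step]              # replay a recorded step
    else:
        result = fn()                       # run the step for the first time
        _trace.append(result)
        with open(TRACE_FILE, "w") as f:
            json.dump(_trace, f)            # persist before moving on
    _step += 1
    return result

# The script reads as straightforward imperative code; posting a task to
# MTurk is stubbed out with input() for illustration.
draft = once(lambda: input("Worker 1, write a sentence: "))
improved = once(lambda: input(f"Worker 2, improve: {draft!r}\n> "))
print("final:", improved)
```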


knowledge discovery and data mining | 2009

TurKit: tools for iterative tasks on mechanical Turk

Greg Little; Lydia B. Chilton; Max Goldman; Robert C. Miller

Mechanical Turk (MTurk) is an increasingly popular web service for paying people small rewards to do human computation tasks. Current uses of MTurk typically post independent parallel tasks. I am exploring an alternative iterative paradigm, in which workers build on or evaluate each other's work. Part of my proposal is a toolkit called TurKit, which facilitates deployment of iterative tasks on MTurk. I want to explore using this technology as a new form of end-user programming, where end-users write “programs” that are really instructions executed by humans on MTurk.


symposium on usable privacy and security | 2006

Web wallet: preventing phishing attacks by revealing user intentions

Min Wu; Robert C. Miller; Greg Little

We introduce a new anti-phishing solution, the Web Wallet. The Web Wallet is a browser sidebar through which users submit their sensitive information online. It detects phishing attacks by determining where users intend to submit their information and suggests an alternative safe path to their intended site if the current site does not match it. It integrates security questions into the user's workflow so that its protection cannot be ignored. We conducted a user study on the Web Wallet prototype and found that the Web Wallet is a promising approach. In the study, it significantly decreased the spoof rate of typical phishing attacks from 63% to 7%, and it effectively prevented all phishing attacks as long as it was used. A majority of the subjects successfully learned to depend on the Web Wallet to submit their login information. However, the study also found that spoofing the Web Wallet interface itself was an effective attack. Moreover, it was not easy to completely stop all subjects from typing sensitive information directly into web forms.
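As a rough sketch of the intent check described above (the data structures and names are assumptions for illustration, not the Web Wallet's actual code): the sidebar remembers which site a credential belongs to and warns when the current site does not match.

```python
# Sketch of the intent-matching check (illustrative names only).
from urllib.parse import urlparse

# Hypothetical store mapping each saved credential to its legitimate host.
KNOWN_SITES = {
    "mybank-login": "www.mybank.example",
    "webmail-login": "mail.example.com",
}

def check_submission(credential_id: str, current_url: str) -> str:
    intended = KNOWN_SITES.get(credential_id)
    actual = urlparse(current_url).hostname
    if intended is None:
        return "Unknown credential: ask the user where they intend to log in."
    if actual == intended:
        return "OK: the current site matches the intended site."
    # A mismatch suggests a possible phishing page; offer the safe path.
    return (f"Warning: this page is {actual}, but the credential belongs to "
            f"{intended}. Continue to the intended site instead?")

print(check_submission("mybank-login",
                       "http://www.mybank.example.phish.example/login"))
```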


human factors in computing systems | 2013

Cascade: crowdsourcing taxonomy creation

Lydia B. Chilton; Greg Little; Darren Edge; Daniel S. Weld; James A. Landay

Taxonomies are a useful and ubiquitous way of organizing information. However, creating organizational hierarchies is difficult because the process requires a global understanding of the objects to be categorized. Usually a taxonomy is created by an individual or a small group of people working together for hours or even days. Unfortunately, this centralized approach does not work well for the large, quickly changing datasets found on the web. Cascade is an automated workflow that allows crowd workers to spend as little as 20 seconds each while collectively making a taxonomy. We evaluate Cascade and show that on three datasets its quality is 80-90% of that of experts. Cascade's cost is competitive with that of expert information architects, despite requiring six times more human labor. Fortunately, this labor can be parallelized so that Cascade can run in as little as four minutes instead of hours or days.
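The abstract does not describe the workflow's internals; the sketch below is a simplified assumption of how many tiny judgments ("does this item fit this label?") can be combined into a hierarchy by nesting labels whose member sets are subsets of one another. It illustrates the general idea only, not the actual Cascade algorithm.

```python
# Simplified illustration: tiny crowd judgments -> a nested taxonomy.
from itertools import combinations

items = ["penguin", "sparrow", "salmon", "trout"]
labels = ["animal", "bird", "fish"]

# In a real deployment each (label, item) pair would be a ~20-second
# crowd micro-task; here the votes are hard-coded for illustration.
votes = {("animal", i): True for i in items}
votes.update({
    ("bird", "penguin"): True, ("bird", "sparrow"): True,
    ("fish", "salmon"): True, ("fish", "trout"): True,
})

members = {lab: {i for i in items if votes.get((lab, i))} for lab in labels}

# A label whose member set is strictly contained in another's nests under it.
for a, b in combinations(labels, 2):
    if members[a] < members[b]:
        print(f"{a!r} nests under {b!r}")
    elif members[b] < members[a]:
        print(f"{b!r} nests under {a!r}")
```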


knowledge discovery and data mining | 2010

Exploring iterative and parallel human computation processes

Greg Little; Lydia B. Chilton; Max Goldman; Robert C. Miller

Services like Amazon's Mechanical Turk have opened the door for exploration of processes that outsource computation to humans. These human computation processes hold tremendous potential to solve a variety of problems in novel and interesting ways. However, we are only just beginning to understand how to design such processes. This paper explores two basic approaches: one where workers work alone in parallel and one where workers iteratively build on each other's work. We present a series of experiments exploring tradeoffs between the two approaches in several problem domains: writing, brainstorming, and transcription. In each of our experiments, iteration increases the average quality of responses. The increase is statistically significant in writing and brainstorming. However, in brainstorming and transcription, it is not clear that iteration is the best overall approach, in part because both of these tasks benefit from a high variability of responses, which is more prevalent in the parallel process. Also, poor guesses in the transcription task can lead subsequent workers astray.
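A schematic of the two process shapes compared in the paper, with the crowd task stubbed out; the prompt, quality scoring, and numbers below are placeholders, not the paper's experimental setup.

```python
# Schematic comparison of parallel vs. iterative crowd processes
# (placeholder task and scoring; not the paper's actual experiments).
import random

def ask_worker(prompt, rng):
    # Stand-in for posting one task to MTurk; returns a fake response with
    # a random quality score, nudged upward when earlier work was shown.
    bonus = 0.2 if "previous best" in prompt else 0.0
    return {"text": f"response {rng.random():.3f}", "quality": rng.random() + bonus}

def parallel_process(n, rng):
    # n workers respond independently to the same prompt.
    return [ask_worker("describe the image", rng) for _ in range(n)]

def iterative_process(n, rng):
    # Each worker sees the best response so far and tries to improve it.
    best = None
    for _ in range(n):
        prompt = "describe the image"
        if best is not None:
            prompt += f" (previous best: {best['text']})"
        candidate = ask_worker(prompt, rng)
        if best is None or candidate["quality"] > best["quality"]:
            best = candidate
    return best

rng = random.Random(0)
print("best of parallel: ", max(r["quality"] for r in parallel_process(6, rng)))
print("best of iterative:", iterative_process(6, rng)["quality"])
```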


automated software engineering | 2007

Keyword programming in java

Greg Little; Robert C. Miller

Keyword programming is a novel technique for reducing the need to remember details of programming language syntax and APIs by translating a small number of keywords provided by the user into a valid expression. Prior work has demonstrated the feasibility and merit of this approach in limited domains. This paper presents a new algorithm that scales to the much larger domain of general-purpose Java programming. We tested the algorithm by extracting keywords from method calls in open source projects and found that it could accurately reconstruct over 90% of the original expressions. We also conducted a study using keywords generated by users, whose results suggest that users can obtain correct Java code using keyword queries as accurately as they can write the correct Java code themselves.
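A toy illustration of the core translation step follows; a handful of hand-written candidate expressions stand in for the full Java API model that the paper's algorithm searches.

```python
# Toy keyword-to-expression translation (illustrative; the real algorithm
# searches a model of the entire Java API rather than a fixed table).
CANDIDATES = {
    "message.getText()": {"message", "get", "text"},
    "list.add(element)": {"list", "add", "element"},
    "buffer.append(line)": {"buffer", "append", "line"},
}

def keyword_program(query: str) -> str:
    keywords = set(query.lower().split())
    # Score each candidate expression by how many query keywords it covers.
    return max(CANDIDATES, key=lambda expr: len(keywords & CANDIDATES[expr]))

print(keyword_program("add element list"))  # -> list.add(element)
```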


user interface software and technology | 2011

Real-time collaborative coding in a web IDE

Max Goldman; Greg Little; Robert C. Miller

This paper describes Collabode, a web-based Java integrated development environment designed to support close, synchronous collaboration between programmers. We examine the problem of collaborative coding in the face of program compilation errors introduced by other users, which make collaboration more difficult, and describe an algorithm for error-mediated integration of program code. Concurrent editors see the text of changes made by collaborators, but the errors reported in their view are based only on their own changes. Editors may run the program at any time, using only error-free edits supplied so far and ignoring incomplete or otherwise error-generating changes. We evaluate this algorithm and interface on recorded data from previous pilot experiments with Collabode and via a user study with student and professional programmers. We conclude that it offers appreciable benefits over naive continuous synchronization without regard to errors and over manual version control.
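A greatly simplified sketch of the error-mediated idea (the data model below is an assumption for illustration, not Collabode's implementation): everyone sees all edits, but a run uses only the edits that do not introduce errors.

```python
# Simplified sketch of error-mediated integration (illustrative data model).
edits = [
    {"author": "alice", "text": "int x = 1;",             "compiles": True},
    {"author": "bob",   "text": "int y = x +",            "compiles": False},  # in progress
    {"author": "alice", "text": "System.out.println(x);", "compiles": True},
]

def shared_view():
    # Every editor sees all collaborators' text, including unfinished edits.
    return "\n".join(e["text"] for e in edits)

def program_for_run():
    # A run ignores edits that would introduce compile errors.
    return "\n".join(e["text"] for e in edits if e["compiles"])

print(shared_view())
print("--- code used for a run ---")
print(program_for_run())
```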


Communications of The ACM | 2015

Soylent: a word processor with a crowd inside

Michael S. Bernstein; Greg Little; Robert C. Miller; Björn Hartmann; Mark S. Ackerman; David R. Karger; David Crowell; Katrina Panovich

This paper introduces architectural and interaction patterns for integrating crowdsourced human contributions directly into user interfaces. We focus on writing and editing, complex endeavors that span many levels of conceptual and pragmatic activity. Authoring tools offer help with pragmatics, but for higher-level help, writers commonly turn to other people. We thus present Soylent, a word processing interface that enables writers to call on Mechanical Turk workers to shorten, proofread, and otherwise edit parts of their documents on demand. To improve worker quality, we introduce the Find-Fix-Verify crowd programming pattern, which splits tasks into a series of generation and review stages. Evaluation studies demonstrate the feasibility of crowdsourced editing and investigate questions of reliability, cost, wait time, and work time for edits.
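A minimal sketch of the Find-Fix-Verify pattern named above, with the crowd stages stubbed out; the stub responses and voting rule are placeholders, not Soylent's implementation.

```python
# Minimal Find-Fix-Verify sketch (crowd stages stubbed with fixed data).
from collections import Counter

def find_stage(paragraph):
    # Find workers mark spans that need attention; spans several workers
    # agree on move forward. Stubbed: one agreed-upon span.
    return ["which is kind of a run-on sentence that goes on and on"]

def fix_stage(span):
    # Fix workers independently propose rewrites of the flagged span.
    return ["which runs on", "that is too long", "which rambles"]

def verify_stage(candidates):
    # Verify workers vote on the rewrites; the most-approved one is applied.
    simulated_votes = [candidates[0], candidates[0], candidates[1]]
    return Counter(simulated_votes).most_common(1)[0][0]

paragraph = ("The introduction has a part which is kind of a run-on sentence "
             "that goes on and on.")
for span in find_stage(paragraph):
    paragraph = paragraph.replace(span, verify_stage(fix_stage(span)))
print(paragraph)
```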


international conference on software engineering | 2011

Collabode: collaborative coding in the browser

Max Goldman; Greg Little; Robert C. Miller

Collaborating programmers should use a development environment designed specifically for collaboration, not the same one designed for solo programmers with a few collaborative processes and tools tacked on. This paper describes Collabode, a web-based Java integrated development environment built to support close, synchronous collaboration between programmers. We discuss three collaboration models in which participants take on distinct roles: micro-outsourcing to combine small contributions from many assistants; test-driven pair programming for effective pairwise development; and a mobile instructor connected to the work of many students. In particular, we report very promising preliminary results using Collabode to support micro-outsourcing.

Collaboration


Dive into Greg Little's collaborations.

Top Co-Authors

Robert C. Miller
Massachusetts Institute of Technology

Max Goldman
Massachusetts Institute of Technology

Jeffrey P. Bigham
Carnegie Mellon University

Chen-Hsiang Yu
Massachusetts Institute of Technology

David R. Karger
Massachusetts Institute of Technology

Goran Konjevod
Arizona State University