Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Steven P. Dow is active.

Publications


Featured research published by Steven P. Dow.


Conference on Computer Supported Cooperative Work | 2012

Shepherding the crowd yields better work

Steven P. Dow; Anand Kulkarni; Scott R. Klemmer; Björn Hartmann

Micro-task platforms provide massively parallel, on-demand labor. However, it can be difficult to reliably achieve high-quality work because online workers may behave irresponsibly, misunderstand the task, or lack necessary skills. This paper investigates whether timely, task-specific feedback helps crowd workers learn, persevere, and produce better results. We investigate this question through Shepherd, a feedback system for crowdsourced work. In a between-subjects study with three conditions, crowd workers wrote consumer reviews for six products they own. Participants in the None condition received no immediate feedback, consistent with most current crowdsourcing practices. Participants in the Self-assessment condition judged their own work. Participants in the External assessment condition received expert feedback. Self-assessment alone yielded better overall work than the None condition and helped workers improve over time. External assessment also yielded these benefits. Participants who received external assessment also revised their work more. We conclude by discussing interaction and infrastructure approaches for integrating real-time assessment into online work.
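
As a sketch of how such a feedback loop could be wired up, the following routes each submission by the worker's study condition. The condition names mirror the study; the Submission record, the rubric, and the canned expert critique are hypothetical stand-ins, not part of Shepherd.

```python
import random
from dataclasses import dataclass

CONDITIONS = ("none", "self_assessment", "external_assessment")

@dataclass
class Submission:          # hypothetical record, not from Shepherd
    worker_id: str
    text: str
    feedback: str | None = None

def assign_condition(worker_id: str, assignments: dict) -> str:
    """Between-subjects design: each worker lands in exactly one condition."""
    return assignments.setdefault(worker_id, random.choice(CONDITIONS))

def route_feedback(sub: Submission, assignments: dict, rubric: list) -> Submission:
    condition = assign_condition(sub.worker_id, assignments)
    if condition == "self_assessment":
        # Worker judges their own work against the task rubric.
        sub.feedback = "Rate your review on: " + ", ".join(rubric)
    elif condition == "external_assessment":
        # Stand-in for a human expert's timely, task-specific critique.
        sub.feedback = "Expert: describe how you actually use the product."
    # "none": no immediate feedback, as on most current platforms.
    return sub
```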


ACM Transactions on Computer-Human Interaction | 2010

Parallel prototyping leads to better design results, more divergence, and increased self-efficacy

Steven P. Dow; Alana Glassco; Jonathan Kass; Melissa Schwarz; Daniel L. Schwartz; Scott R. Klemmer

Iteration can help people improve ideas. It can also give rise to fixation, continuously refining one option without considering others. Does creating and receiving feedback on multiple prototypes in parallel, as opposed to serially, affect learning, self-efficacy, and design exploration? An experiment manipulated whether independent novice designers created graphic Web advertisements in parallel or in series. Serial participants received descriptive critique directly after each prototype. Parallel participants created multiple prototypes before receiving feedback. As measured by click-through data and expert ratings, ads created in the Parallel condition significantly outperformed those from the Serial condition. Moreover, independent raters found Parallel prototypes to be more diverse. Parallel participants also reported a larger increase in task-specific self-confidence. This article outlines a theoretical foundation for why parallel prototyping produces better design results and discusses the implications for design education.


IEEE Pervasive Computing | 2005

Wizard of Oz support throughout an iterative design process

Steven P. Dow; Blair MacIntyre; Jaemin Lee; Christopher Oezbek; Jay David Bolter; Maribeth Gandy

The Wizard of Oz prototyping approach, widely used in human-computer interaction research, is particularly useful in exploring user interfaces for pervasive, ubiquitous, or mixed-reality systems that combine complex sensing and intelligent control logic. The vast design space for such nontraditional interfaces provides many possibilities for user interaction through one or more modalities and often requires challenging hardware and software implementations. The WOz method helps designers avoid getting locked into a particular design or working under an incorrect set of assumptions about user preferences, because it lets them explore and evaluate designs before investing the considerable development time needed to build a complete prototype.


Human Factors in Computing Systems | 2011

Prototyping dynamics: sharing multiple designs improves exploration, group rapport, and results

Steven P. Dow; Julie Fortuna; Dan Schwartz; Beth Altringer; Daniel L. Schwartz; Scott R. Klemmer

Prototypes ground group communication and facilitate decision making. However, overly investing in a single design idea can lead to fixation and impede the collaborative process. Does sharing multiple designs improve collaboration? In a study, participants created advertisements individually and then met with a partner. In the Share Multiple condition, participants designed and shared three ads. In the Share Best condition, participants designed three ads and selected one to share. In the Share One condition, participants designed and shared one ad. Sharing multiple designs improved outcomes, exploration, sharing, and group rapport. These participants integrated more of their partners' ideas into their own subsequent designs, explored a more divergent set of ideas, and provided more productive critiques of their partners' designs. Furthermore, their ads were rated more highly and garnered a higher click-through rate when hosted online.


Conference on Computer Supported Cooperative Work | 2015

Structuring, Aggregating, and Evaluating Crowdsourced Design Critique

Kurt Luther; Jari Lee Tolentino; Wei Wu; Amy Pavel; Brian P. Bailey; Maneesh Agrawala; Björn Hartmann; Steven P. Dow

Feedback is an important component of the design process, but gaining access to high-quality critique outside a classroom or firm is challenging. We present CrowdCrit, a web-based system that allows designers to receive design critiques from non-expert crowd workers. We evaluated CrowdCrit in three studies focusing on the designers' experience and the benefits of the critiques. In the first study, we compared crowd and expert critiques and found evidence that aggregated crowd critique approaches expert critique. In the second study, we found that designers who received crowd feedback perceived that it improved their design process. The third study showed that designers were enthusiastic about crowd critiques and used them to change their designs. We conclude with implications for the design of crowd feedback services.
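
The aggregation step, many non-expert judgments approximating one expert's, can be illustrated with a small sketch. It assumes, hypothetically, that each worker ticks applicable statements from a fixed critique rubric and that statements endorsed by enough workers are surfaced to the designer; the threshold and rubric IDs are illustrative, not CrowdCrit's.

```python
from collections import Counter

def aggregate_critiques(worker_selections: list[set[str]],
                        min_agreement: float = 0.3) -> list[tuple[str, float]]:
    """Return (rubric statement, support) pairs endorsed by at least
    `min_agreement` of workers, strongest first."""
    n = len(worker_selections)
    counts = Counter(s for sel in worker_selections for s in sel)
    supported = [(stmt, c / n) for stmt, c in counts.items() if c / n >= min_agreement]
    return sorted(supported, key=lambda p: p[1], reverse=True)

# Five workers critique one poster; "low_contrast" reaches 0.8 support.
workers = [
    {"low_contrast", "crowded_layout"},
    {"low_contrast"},
    {"low_contrast", "font_mismatch"},
    {"crowded_layout", "low_contrast"},
    {"font_mismatch"},
]
print(aggregate_critiques(workers))
```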


Conference on Computer Supported Cooperative Work | 2014

Crowd synthesis: extracting categories and clusters from complex data

Paul André; Aniket Kittur; Steven P. Dow

Analysts synthesize complex, qualitative data to uncover themes and concepts, but the process is time-consuming and cognitively taxing, and automated techniques show mixed success. Crowdsourcing could help this process through on-demand harnessing of flexible and powerful human cognition, but it incurs other challenges, including limited attention and expertise. Further, text data can be complex, high-dimensional, and ill-structured. We address two major challenges unsolved in prior crowd clustering work: scaffolding expertise for novice crowd workers, and creating consistent and accurate categories when each worker sees only a small portion of the data. To address these challenges, we present an empirical study of a two-stage approach that enables crowds to create an accurate and useful overview of a dataset: A) we draw on cognitive theory to assess how re-representing data can shorten and focus it on salient dimensions; and B) we introduce an iterative clustering approach that gives workers a global overview of the data. We demonstrate that a classification-plus-context approach elicits the most accurate categories at the most useful level of abstraction.
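
A minimal sketch of the iterative idea, assuming each worker labels one small batch while seeing the global category list built so far; the string-similarity merge is a cheap stand-in for how a real system might reconcile near-duplicate category labels, not the paper's method.

```python
from difflib import SequenceMatcher

def similar(a: str, b: str, threshold: float = 0.8) -> bool:
    """Cheap stand-in for judging that two category labels mean the same thing."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio() >= threshold

def iterative_crowd_clustering(batches, label_batch):
    """batches: list of small item batches, one per worker.
    label_batch: callable(batch, categories) -> {item: proposed_category};
    in a real system this is a crowd task, here any function will do."""
    categories: list[str] = []   # the global overview shown to every worker
    assignments: dict = {}
    for batch in batches:
        for item, proposed in label_batch(batch, categories).items():
            # Reuse an existing category when the proposal is a near-duplicate.
            match = next((c for c in categories if similar(c, proposed)), None)
            if match is None:
                categories.append(proposed)
                match = proposed
            assignments[item] = match
    return categories, assignments
```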


User Interface Software and Technology | 2014

Glance: rapidly coding behavioral video with the crowd

Walter S. Lasecki; Mitchell Gordon; Danai Koutra; Malte F. Jung; Steven P. Dow; Jeffrey P. Bigham

Behavioral researchers spend a considerable amount of time coding video data to systematically extract meaning from subtle human actions and emotions. In this paper, we present Glance, a tool that allows researchers to rapidly query, sample, and analyze large video datasets for behavioral events that are hard to detect automatically. Glance takes advantage of the parallelism available in paid online crowds to interpret natural language queries and then aggregates responses in a summary view of the video data. Glance provides analysts with rapid responses when initially exploring a dataset, and reliable codings when refining an analysis. Our experiments show that Glance can code nearly 50 minutes of video in 5 minutes by recruiting over 60 workers simultaneously, and can get initial feedback to analysts in under 10 seconds for most clips. We present and compare new methods for accurately aggregating the input of multiple workers marking the spans of events in video data, and for measuring the quality of their coding in real time, before a baseline is established, by measuring the variance between workers. Glance's rapid responses to natural language queries, feedback regarding question ambiguity and anomalies in the data, and ability to build on prior context in follow-up queries allow users to have a conversation-like interaction with their data, opening up new possibilities for naturally exploring video data.
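
The two aggregation ideas in the abstract, merging worker-marked spans and using inter-worker variance as an early quality signal, can be sketched as below. The (start, end) second-offset representation, the voting step size, and the agreement threshold are assumptions for illustration, not Glance's actual method.

```python
from statistics import pvariance

def aggregate_spans(worker_spans, min_workers=3, step=0.5):
    """Merge per-worker (start_sec, end_sec) event marks into consensus spans:
    a moment qualifies if at least `min_workers` workers marked it, and
    consecutive qualifying moments are merged."""
    horizon = max((e for spans in worker_spans for _, e in spans), default=0.0)
    consensus, open_start, t = [], None, 0.0
    while t <= horizon:
        votes = sum(any(s <= t < e for s, e in spans) for spans in worker_spans)
        if votes >= min_workers and open_start is None:
            open_start = t
        elif votes < min_workers and open_start is not None:
            consensus.append((open_start, t))
            open_start = None
        t += step
    if open_start is not None:
        consensus.append((open_start, t))
    return consensus

def coder_disagreement(worker_spans):
    """Variance of total marked time across workers: a rough quality signal
    usable before any gold-standard baseline exists."""
    return pvariance([sum(e - s for s, e in spans) for spans in worker_spans])
```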


Human Factors in Computing Systems | 2013

A pilot study of using crowds in the classroom

Steven P. Dow; Elizabeth M. Gerber; Audris Wong

Industry relies on higher education to prepare students for careers in innovation. Fulfilling this obligation is especially difficult in classroom settings, which often lack authentic interaction with the outside world. Online crowdsourcing has the potential to change this. Our research explores whether and how online crowds can support student learning in the classroom. We explore how scalable, diverse, immediate (and often ambiguous and conflicting) input from online crowds affects student learning and motivation for project-based innovation work. In a pilot study with three classrooms, we explore interactions with the crowd at four key stages of the innovation process: needfinding, ideating, testing, and pitching. Students reported that online crowds helped them quickly and inexpensively identify needs and uncover issues with early-stage prototypes, although they favored face-to-face interactions for more contextual feedback. We share early evidence and discuss implications for creating a socio-technical infrastructure to more effectively use crowdsourcing in education.


Advances in Computer Entertainment Technology | 2006

Initial lessons from AR Façade, an interactive augmented reality drama

Steven P. Dow; Manish Mehta; Annie Lausier; Blair MacIntyre; Michael Mateas

In this paper, we describe an augmented reality version of the acclaimed desktop-based interactive drama, Façade [18]. Few entertainment experiences combine interactive virtual characters, non-linear narrative, and unconstrained embodied interaction. In AR Façade, players move through a physical apartment and use gestures and speech to interact with two autonomous characters, Trip and Grace. Our experience converting a desktop-based game to augmented reality sheds light on the design challenges of developing mixed physical/virtual AI-based drama. We share our initial observations of players from a live demonstration and discuss our ongoing work.


Cognitive Science | 2014

Early and Repeated Exposure to Examples Improves Creative Work

Chinmay Kulkarni; Steven P. Dow; Scott R. Klemmer

This article presents the results of an online creativity experiment (N = 81) that examines the effect of example timing on creative output. In the between-subjects experiment, participants drew animals to inhabit an alien Earth-like planet while being exposed to examples early, late, or repeatedly during the experiment. We find that exposure to examples increases conformity. Early exposure to examples improves creativity (measured by the number of common and novel features in drawings, and by subjective ratings from independent raters). Repeated exposure to examples interspersed with prototyping leads to even better results. Late exposure to examples increases conformity but does not improve creativity.
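
The feature-counting measure mentioned above is easy to make concrete. In this sketch a feature counts as "common" when more than a chosen fraction of all drawings contain it; the 0.5 threshold and the feature names are illustrative assumptions, not values from the article.

```python
from collections import Counter

def score_drawings(drawings: dict[str, set[str]], common_frac: float = 0.5):
    """drawings: drawing_id -> set of annotated features.
    Returns per-drawing counts of common vs. novel features; higher common
    counts indicate conformity, higher novel counts indicate originality."""
    n = len(drawings)
    freq = Counter(f for feats in drawings.values() for f in feats)
    common = {f for f, c in freq.items() if c / n > common_frac}
    return {did: {"common": len(feats & common), "novel": len(feats - common)}
            for did, feats in drawings.items()}

# Example: "eyes" and "wings" are common; "antenna" and "gills" are novel.
print(score_drawings({
    "a": {"wings", "eyes", "antenna"},
    "b": {"wings", "eyes"},
    "c": {"eyes", "gills"},
}))
```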

Collaboration


Dive into Steven P. Dow's collaborations.

Top Co-Authors

Blair MacIntyre (Georgia Institute of Technology)
Joel Chan (University of Pittsburgh)
Haoqi Zhang (Northwestern University)
Jeffrey P. Bigham (Carnegie Mellon University)
Maribeth Gandy (Georgia Institute of Technology)
Paul André (Carnegie Mellon University)
Robert C. Miller (Massachusetts Institute of Technology)