Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Scott R. Klemmer is active.

Publication


Featured research published by Scott R. Klemmer.


designing interactive systems | 2006

How bodies matter: five themes for interaction design

Scott R. Klemmer; Björn Hartmann; Leila Takayama

Our physical bodies play a central role in shaping human experience in the world, understanding of the world, and interactions in the world. This paper draws on theories of embodiment - from psychology, sociology, and philosophy - synthesizing five themes we believe are particularly salient for interaction design: thinking through doing, performance, visibility, risk, and thick practice. We introduce aspects of human embodied engagement in the world with the goal of inspiring new interaction design approaches and evaluations that better integrate the physical and computational worlds.


user interface software and technology | 2006

Reflective physical prototyping through integrated design, test, and analysis

Björn Hartmann; Scott R. Klemmer; Michael S. Bernstein; Leith Abdulla; Brandon Burr; Avi Robinson-Mosher; Jennifer Gee

Prototyping is the pivotal activity that structures innovation, collaboration, and creativity in design. Prototypes embody design hypotheses and enable designers to test them. Framing design as a thinking-by-doing activity foregrounds iteration as a central concern. This paper presents d.tools, a toolkit that embodies an iterative-design-centered approach to prototyping information appliances. This work offers contributions in three areas. First, d.tools introduces a statechart-based visual design tool that provides a low threshold for early-stage prototyping, extensible through code for higher-fidelity prototypes. Second, our research introduces three important types of hardware extensibility - at the hardware-to-PC interface, the intra-hardware communication level, and the circuit level. Third, d.tools integrates design, test, and analysis of information appliances. We have evaluated d.tools through three studies: a laboratory study with thirteen participants; rebuilding prototypes of existing and emerging devices; and by observing seven student teams who built prototypes with d.tools.
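The statechart idea at the core of d.tools can be sketched as a small state machine whose transitions fire on hardware input events. This is an illustration of the concept only, not d.tools' actual file format or API:

```python
# A minimal sketch of a statechart-style appliance prototype: states
# connected by transitions keyed on hardware input events.
class Statechart:
    def __init__(self, initial):
        self.state = initial
        self.transitions = {}  # (state, event) -> next state

    def on(self, state, event, next_state):
        """Register a transition out of `state` when `event` arrives."""
        self.transitions[(state, event)] = next_state

    def handle(self, event):
        """Follow a transition if one is defined; otherwise stay put."""
        self.state = self.transitions.get((self.state, event), self.state)
        return self.state

# A two-button music player prototype: one button toggles play/pause.
ui = Statechart("paused")
ui.on("paused", "button_play", "playing")
ui.on("playing", "button_play", "paused")
print(ui.handle("button_play"))  # "playing"
print(ui.handle("button_play"))  # "paused"
```

The "low threshold" the abstract mentions corresponds to authoring such state/transition diagrams visually; "extensible through code" corresponds to attaching scripts to states for higher-fidelity behavior.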


human factors in computing systems | 2006

ButterflyNet: a mobile capture and access system for field biology research

Ron B. Yeh; Chunyuan Liao; Scott R. Klemmer; François Guimbretière; Brian J. Lee; Boyko Kakaradov; Jeannie A. Stamberger; Andreas Paepcke

Through a study of field biology practices, we observed that biology fieldwork generates a wealth of heterogeneous information, requiring substantial labor to coordinate and distill. To manage this data, biologists leverage a diverse set of tools, organizing their effort in paper notebooks. These observations motivated ButterflyNet, a mobile capture and access system that integrates paper notes with digital photographs captured during field research. Through ButterflyNet, the activity of leafing through a notebook expands to browsing all associated digital photos. ButterflyNet also facilitates the transfer of captured content to spreadsheets, enabling biologists to share their work. A first-use study with 14 biologists found this system to offer rich data capture and transformation, in a manner felicitous with current practice.
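The core association behind "leafing through a notebook expands to browsing all associated digital photos" can be sketched as timestamp correlation between page use and photo capture. The time-window heuristic below is an assumption for illustration, not the paper's exact algorithm:

```python
# Hypothetical sketch: link a notebook page to the photos captured
# while that page was in active use, by comparing capture timestamps.
def photos_for_page(page_start, page_end, photos):
    """Return names of photos whose timestamps fall in the page's interval."""
    return [name for name, t in photos if page_start <= t <= page_end]

# Timestamps in minutes since the start of the field session.
photos = [("frog.jpg", 12), ("creek.jpg", 47), ("nest.jpg", 95)]
print(photos_for_page(10, 50, photos))  # photos taken while this page was open
```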


human factors in computing systems | 2010

Example-centric programming: integrating web search into the development environment

Joel Brandt; Mira Dontcheva; Marcos Weskamp; Scott R. Klemmer

The ready availability of online source-code examples has fundamentally changed programming practices. However, current search tools are not designed to assist with programming tasks and are wholly separate from editing tools. This paper proposes that embedding a task-specific search engine in the development environment can significantly reduce the cost of finding information and thus enable programmers to write better code more easily. This paper describes the design, implementation, and evaluation of Blueprint, a Web search interface integrated into the Adobe Flex Builder development environment that helps users locate example code. Blueprint automatically augments queries with code context, presents a code-centric view of search results, embeds the search experience into the editor, and retains a link between copied code and its source. A comparative laboratory study found that Blueprint enables participants to write significantly better code and find example code significantly faster than with a standard Web browser. Analysis of three months of usage logs with 2,024 users suggests that task-specific search interfaces can significantly change how and when people search the Web.


conference on computer supported cooperative work | 2012

Shepherding the crowd yields better work

Steven P. Dow; Anand Kulkarni; Scott R. Klemmer; Björn Hartmann

Micro-task platforms provide massively parallel, on-demand labor. However, it can be difficult to reliably achieve high-quality work because online workers may behave irresponsibly, misunderstand the task, or lack necessary skills. This paper investigates whether timely, task-specific feedback helps crowd workers learn, persevere, and produce better results. We investigate this question through Shepherd, a feedback system for crowdsourced work. In a between-subjects study with three conditions, crowd workers wrote consumer reviews for six products they own. Participants in the None condition received no immediate feedback, consistent with most current crowdsourcing practices. Participants in the Self-assessment condition judged their own work. Participants in the External assessment condition received expert feedback. Self-assessment alone yielded better overall work than the None condition and helped workers improve over time. External assessment also yielded these benefits. Participants who received external assessment also revised their work more. We conclude by discussing interaction and infrastructure approaches for integrating real-time assessment into online work.


ACM Transactions on Computer-Human Interaction | 2013

Peer and self assessment in massive online classes

Chinmay Kulkarni; Koh Pang Wei; Huy Le; Daniel Chia; Kathryn Papadopoulos; Justin Cheng; Daphne Koller; Scott R. Klemmer

Peer and self-assessment offer an opportunity to scale both assessment and learning to global classrooms. This article reports our experiences with two iterations of the first large online class to use peer and self-assessment. In this class, peer grades correlated highly with staff-assigned grades. The second iteration had 42.9% of students’ grades within 5% of the staff grade, and 65.5% within 10%. On average, students assessed their work 7% higher than staff did. Students also rated peers’ work from their own country 3.6% higher than those from elsewhere. We performed three experiments to improve grading accuracy. We found that giving students feedback about their grading bias increased subsequent accuracy. We introduce short, customizable feedback snippets that cover common issues with assignments, providing students more qualitative peer feedback. Finally, we introduce a data-driven approach that highlights high-variance items for improvement. We find that rubrics that use a parallel sentence structure, unambiguous wording, and well-specified dimensions have lower variance. After revising rubrics, median grading error decreased from 12.4% to 9.9%.
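The accuracy measures the abstract reports (fraction of peer grades within a tolerance of the staff grade, median grading error, per-grader bias) can be sketched in a few lines. The data below is illustrative, not from the study:

```python
# Hypothetical sketch of the grading-accuracy measures described above.
def within_tolerance(peer, staff, tol):
    """Fraction of peer grades within `tol` points of the staff grade."""
    pairs = list(zip(peer, staff))
    hits = sum(1 for p, s in pairs if abs(p - s) <= tol)
    return hits / len(pairs)

def median_error(peer, staff):
    """Median absolute difference between peer and staff grades."""
    errors = sorted(abs(p - s) for p, s in zip(peer, staff))
    n = len(errors)
    mid = n // 2
    return errors[mid] if n % 2 else (errors[mid - 1] + errors[mid]) / 2

def debias(grades, bias):
    """Shift a grader's scores by their measured average bias."""
    return [g - bias for g in grades]

# Illustrative grades on a 0-100 scale.
peer  = [78, 85, 92, 60, 71]
staff = [75, 84, 85, 62, 70]
print(within_tolerance(peer, staff, 5))  # fraction within 5 points
print(median_error(peer, staff))
```

Feeding a grader's measured bias back through something like `debias` corresponds to the feedback intervention that the authors found increased subsequent grading accuracy.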


user interface software and technology | 2000

Suede: a Wizard of Oz prototyping tool for speech user interfaces

Scott R. Klemmer; Anoop K. Sinha; Jack Chen; James A. Landay; Nadeem Aboobaker; Annie Wang

Speech-based user interfaces are growing in popularity. Unfortunately, the technology expertise required to build speech UIs precludes many individuals from participating in the speech interface design process. Furthermore, the time and knowledge costs of building even simple speech systems make it difficult for designers to iteratively design speech UIs. SUEDE, the speech interface prototyping tool we describe in this paper, allows designers to rapidly create prompt/response speech interfaces. It offers an electronically supported Wizard of Oz (WOz) technique that captures test data, allowing designers to analyze the interface after testing. This informal tool enables speech user interface designers, even non-experts, to quickly create, test, and analyze speech user interface prototypes.
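The prompt/response structure a tool like SUEDE lets designers author can be sketched as a small dialogue graph; in a Wizard of Oz test, the wizard, not a recognizer, selects which response was heard. The structure below is illustrative only:

```python
# A toy prompt/response dialogue graph for a Wizard of Oz speech test.
dialogue = {
    "greeting": {"prompt": "Welcome. Say 'weather' or 'news'.",
                 "responses": {"weather": "weather_report", "news": "headlines"}},
    "weather_report": {"prompt": "Today is sunny.", "responses": {}},
    "headlines": {"prompt": "Here are today's headlines.", "responses": {}},
}

def next_node(current, heard):
    """Advance the dialogue; `heard` is the response the wizard selects.
    Unrecognized input keeps the dialogue at the current prompt."""
    return dialogue[current]["responses"].get(heard, current)

node = next_node("greeting", "weather")
print(dialogue[node]["prompt"])  # "Today is sunny."
```

Logging which node was active and what the wizard selected at each step corresponds to the test-data capture that SUEDE uses for post-test analysis.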


ACM Transactions on Computer-Human Interaction | 2010

Parallel prototyping leads to better design results, more divergence, and increased self-efficacy

Steven P. Dow; Alana Glassco; Jonathan Kass; Melissa Schwarz; Daniel L. Schwartz; Scott R. Klemmer

Iteration can help people improve ideas. It can also give rise to fixation, continuously refining one option without considering others. Does creating and receiving feedback on multiple prototypes in parallel, as opposed to serially, affect learning, self-efficacy, and design exploration? An experiment manipulated whether independent novice designers created graphic Web advertisements in parallel or in series. Serial participants received descriptive critique directly after each prototype. Parallel participants created multiple prototypes before receiving feedback. As measured by click-through data and expert ratings, ads created in the Parallel condition significantly outperformed those from the Serial condition. Moreover, independent raters found Parallel prototypes to be more diverse. Parallel participants also reported a larger increase in task-specific self-confidence. This article outlines a theoretical foundation for why parallel prototyping produces better design results and discusses the implications for design education.


human factors in computing systems | 2010

What would other programmers do: suggesting solutions to error messages

Björn Hartmann; Daniel MacDougall; Joel Brandt; Scott R. Klemmer

Interpreting compiler errors and exception messages is challenging for novice programmers. Presenting examples of how other programmers have corrected similar errors may help novices understand and correct such errors. This paper introduces HelpMeOut, a social recommender system that aids the debugging of error messages by suggesting solutions that peers have applied in the past. HelpMeOut comprises IDE instrumentation to collect examples of code changes that fix errors; a central database that stores fix reports from many users; and a suggestion interface that, given an error, queries the database for a list of relevant fixes and presents these to the programmer. We report on implementations of this architecture for two programming languages. An evaluation with novice programmers found that the technique can suggest useful fixes for 47% of errors after 39 person-hours of programming in an instrumented environment.
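The suggestion step of the architecture described above, querying a database of peers' fix reports for a given error, can be sketched as follows. The normalization heuristics and class names are assumptions for illustration, not HelpMeOut's implementation:

```python
# A minimal sketch of a social fix-recommendation step: normalize an
# error message so similar errors collate, then rank the fixes peers
# applied for that error by frequency.
import re
from collections import Counter

def normalize(error_message):
    """Strip identifiers and line numbers so similar errors match."""
    msg = re.sub(r"line \d+", "line N", error_message)
    msg = re.sub(r"'[^']*'", "'ID'", msg)
    return msg

class FixDatabase:
    def __init__(self):
        self.reports = {}  # normalized error -> Counter of fixes

    def record_fix(self, error_message, fix):
        """Store a fix report collected by IDE instrumentation."""
        key = normalize(error_message)
        self.reports.setdefault(key, Counter())[fix] += 1

    def suggest(self, error_message, k=3):
        """Return up to k fixes, most frequently applied first."""
        key = normalize(error_message)
        return [fix for fix, _ in self.reports.get(key, Counter()).most_common(k)]

db = FixDatabase()
db.record_fix("cannot find symbol 'foo' at line 12", "declare the variable before use")
db.record_fix("cannot find symbol 'bar' at line 40", "declare the variable before use")
db.record_fix("cannot find symbol 'baz' at line 7", "check the spelling of the name")
print(db.suggest("cannot find symbol 'qux' at line 99"))
```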


human factors in computing systems | 2007

Authoring sensor-based interactions by demonstration with direct manipulation and pattern recognition

Björn Hartmann; Leith Abdulla; Manas Mittal; Scott R. Klemmer

Sensors are becoming increasingly important in interaction design. Authoring a sensor-based interaction comprises three steps: choosing and connecting the appropriate hardware, creating application logic, and specifying the relationship between sensor values and application logic. Recent research has successfully addressed the first two issues. However, linking sensor input data to application logic remains an exercise in patience and trial-and-error testing for most designers. This paper introduces techniques for authoring sensor-based interactions by demonstration. A combination of direct manipulation and pattern recognition techniques enables designers to control how demonstrated examples are generalized to interaction rules. This approach emphasizes design exploration by enabling very rapid iterative demonstrate-edit-review cycles. This paper describes the manifestation of these techniques in a design tool, Exemplar, and presents evaluations through a first-use lab study and a theoretical analysis using the Cognitive Dimensions of Notation framework.
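One way a demonstrated example can be generalized into an interaction rule, the step the abstract highlights, is a threshold band derived from the demonstration. The margin heuristic and function names below are illustrative, not Exemplar's API:

```python
# A rough sketch of demonstration-to-rule generalization: derive a
# [lo, hi] band from a demonstrated sensor gesture, then test live
# readings against it.
def learn_threshold(demo_samples, margin=0.1):
    """Generalize a demonstration into a value band with some slack."""
    lo, hi = min(demo_samples), max(demo_samples)
    slack = (hi - lo) * margin
    return lo - slack, hi + slack

def matches(rule, sample):
    """True when a live reading falls inside the learned band."""
    lo, hi = rule
    return lo <= sample <= hi

# Designer demonstrates a "tilt" gesture on one accelerometer axis.
demo = [0.62, 0.70, 0.68, 0.74]
rule = learn_threshold(demo)
print([matches(rule, s) for s in [0.10, 0.65, 0.80]])
```

Direct manipulation in Exemplar's sense corresponds to letting the designer drag the learned band's edges; re-demonstrating and re-testing is the rapid demonstrate-edit-review cycle the abstract describes.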

Collaboration


Dive into Scott R. Klemmer's collaborations.

Top Co-Authors

Steven P. Dow

University of California
