Publications


Featured research published by Jeffrey M. Rzeszotarski.


User Interface Software and Technology | 2011

Instrumenting the crowd: using implicit behavioral measures to predict task performance

Jeffrey M. Rzeszotarski; Aniket Kittur

Detecting and correcting low-quality submissions in crowdsourcing tasks is an important challenge. Prior work has primarily focused on worker outcomes or reputation, using approaches such as agreement across workers or with a gold standard to evaluate quality. We propose an alternative and complementary technique that focuses on the way workers work rather than the products they produce. Our technique captures behavioral traces from online crowd workers and uses them to predict outcome measures such as quality, errors, and the likelihood of cheating. We evaluate the effectiveness of the approach across three contexts including classification, generation, and comprehension tasks. The results indicate that we can build predictive models of task performance based on behavioral traces alone, and that these models generalize to related tasks. Finally, we discuss limitations and extensions of the approach.
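
The core idea, predicting quality from how workers work rather than what they produce, can be approximated with standard tooling: log low-level interaction events per submission, aggregate them into summary features, and fit a classifier against known labels. A minimal sketch follows; the event types, features, toy data, and random-forest classifier are illustrative assumptions, not the authors' actual instrumentation.

```python
# Sketch: predicting submission quality from behavioral traces.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def featurize(events):
    """Aggregate a raw event log, a list of (timestamp_ms, event_type)
    tuples, into a summary feature vector for one submission."""
    times = [t for t, _ in events]
    duration = (max(times) - min(times)) / 1000.0 if len(times) > 1 else 0.0
    counts = {"keypress": 0, "scroll": 0, "focus_change": 0}
    for _, kind in events:
        if kind in counts:
            counts[kind] += 1
    return [duration, counts["keypress"], counts["scroll"], counts["focus_change"]]

# Toy training data: two rushed, low-effort logs (label 0) and two engaged
# logs (label 1). Real labels would come from gold standards or review.
logs = [
    [(0, "focus_change"), (900, "keypress")],
    [(0, "focus_change"), (700, "keypress")],
    [(0, "focus_change"), (3000, "scroll"), (9000, "keypress"), (15000, "keypress")],
    [(0, "focus_change"), (4000, "scroll"), (12000, "keypress"), (20000, "keypress")],
]
labels = [0, 0, 1, 1]

X = np.array([featurize(log) for log in logs])
model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, labels)
print(model.predict(X))  # sanity check against the training labels
```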


User Interface Software and Technology | 2012

CrowdScape: interactively visualizing user behavior and output

Jeffrey M. Rzeszotarski; Aniket Kittur

Crowdsourcing has become a powerful paradigm for accomplishing work quickly and at scale, but involves significant challenges in quality control. Researchers have developed algorithmic quality control approaches based on either worker outputs (such as gold standards or worker agreement) or worker behavior (such as task fingerprinting), but each approach has serious limitations, especially for complex or creative work. Human evaluation addresses these limitations but does not scale well with increasing numbers of workers. We present CrowdScape, a system that supports the human evaluation of complex crowd work through interactive visualization and mixed-initiative machine learning. The system combines information about worker behavior with worker outputs, helping users to better understand and harness the crowd. We describe the system and discuss its utility through grounded case studies. We explore other contexts where CrowdScape's visualizations might be useful, such as in user studies.
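
One way to picture the mixed-initiative loop: the requester brushes a few workers whose output looks good in the visualization, and the system suggests workers with similar behavioral and output features. A minimal sketch; the features, toy data, and nearest-neighbor suggestion step are assumptions for illustration, not CrowdScape's actual model.

```python
# Sketch of a CrowdScape-style mixed-initiative loop (assumed features).
import numpy as np
from sklearn.neighbors import NearestNeighbors

# One row per worker: [task_time_s, n_events, answer_agreement] (toy data).
features = np.array([
    [90.0, 40, 0.9],   # worker 0: brushed as "good" by the requester
    [12.0,  3, 0.1],   # worker 1
    [85.0, 37, 0.8],   # worker 2
    [11.0,  4, 0.2],   # worker 3
])
liked = [0]  # indices the requester selected in the visualization

# Suggest behaviorally similar workers to expand the selection.
nn = NearestNeighbors(n_neighbors=2).fit(features)
_, idx = nn.kneighbors(features[liked])
print("suggested:", [int(i) for i in idx[0] if i not in liked])  # -> [2]
```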


Human Factors in Computing Systems | 2014

Kinetica: naturalistic multi-touch data visualization

Jeffrey M. Rzeszotarski; Aniket Kittur

Over the last several years there has been an explosion of powerful, affordable multi-touch devices. This presents an outstanding opportunity for novel data visualization techniques that leverage new interaction methods and minimize barriers to entry. In this paper we describe an approach to multivariate data visualization that uses physics-based affordances that are easy to intuit, constraints that are easy to apply and visualize, and a consistent view as data is manipulated, in order to promote data exploration and interrogation. We provide a framework for exploring this problem space and an example proof-of-concept system called Kinetica. The results of a user study suggest that users of Kinetica were able to explore multiple dimensions of data at once, identify outliers, and discover trends with minimal training.
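
The physics-based affordance can be pictured as a particle simulation: each data point is a particle, and applying a tool exerts a spring force pulling it toward a screen position mapped from one of its dimensions. A minimal sketch; the constants and the semi-implicit Euler integration are assumptions, not Kinetica's actual engine.

```python
# Sketch of a physics-based mapping: points spring toward x-positions
# proportional to one data dimension. All constants are assumptions.
import numpy as np

rng = np.random.default_rng(0)
values = rng.uniform(0.0, 1.0, size=8)   # the dimension being mapped
pos = rng.uniform(0.0, 1.0, size=8)      # current on-screen x-positions
vel = np.zeros(8)

K, DAMP, DT = 4.0, 0.8, 0.05             # spring constant, damping, timestep
for _ in range(200):                     # semi-implicit Euler integration
    force = K * (values - pos)           # pull toward the mapped position
    vel = DAMP * (vel + force * DT)
    pos += vel * DT

print(np.round(pos - values, 3))         # residuals: points have settled
```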


Human Factors in Computing Systems | 2015

The Effects of Sequence and Delay on Crowd Work

Walter S. Lasecki; Jeffrey M. Rzeszotarski; Adam Marcus; Jeffrey P. Bigham

A common approach in crowdsourcing is to break large tasks into small microtasks so that they can be parallelized across many crowd workers and so that redundant work can be more easily compared for quality control. In practice, this can result in microtasks being presented out of their natural order, and it often introduces delays between individual microtasks. In this paper, we demonstrate in a study of 338 crowd workers that non-sequential microtasks and the introduction of delays significantly decrease worker performance. We show that interruptions in which a large delay occurs between two related tasks can cause up to a 102% slowdown in completion time, and interruptions in which workers are asked to perform different tasks in sequence can slow completion time by 57%. We conclude with a set of design guidelines to improve both worker performance and realized pay, along with instructions for implementing these changes in existing interfaces for crowd work.
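
The reported slowdowns are relative to uninterrupted completion time, so a 102% slowdown roughly doubles it. A worked example under an assumed 10-second baseline; only the percentages come from the study.

```python
# Translating the reported slowdowns into completion times.
baseline = 10.0                        # assumed uninterrupted time, seconds
with_delay = baseline * (1 + 1.02)     # long delay between related tasks
with_switch = baseline * (1 + 0.57)    # different tasks in sequence
print(round(with_delay, 2), round(with_switch, 2))  # 20.2 15.7
```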


Conference on Computer Supported Cooperative Work | 2012

Learning from history: predicting reverted work at the word level in wikipedia

Jeffrey M. Rzeszotarski; Aniket Kittur

Wikipedia's remarkable success in aggregating millions of contributions can pose a challenge for current editors, whose hard work may be reverted unless they understand and follow established norms, policies, and decisions and avoid contentious or proscribed terms. We present a machine learning model for predicting whether a contribution will be reverted based on word-level features. Unlike previous models relying on editor-level characteristics, our model can make accurate predictions based only on the words a contribution changes. A key advantage of the model is that it can provide feedback on not only whether a contribution is likely to be rejected, but also the particular words that are likely to be controversial, enabling new forms of intelligent interfaces and visualizations. We examine the performance of the model across a variety of Wikipedia articles.
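
The word-level framing maps naturally onto a standard text classifier: represent each contribution by the words it changes, fit against revert labels, and read per-word weights as controversy signals. A minimal sketch on toy data; the examples, vectorizer, and logistic regression stand in for the paper's actual model and feature set.

```python
# Sketch: predicting reverts from the words a contribution changes.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

changed_words = [                        # toy, invented examples
    "citation needed reliable source",   # kept
    "greatest best ever amazing",        # reverted (peacock terms)
    "born 1974 graduated university",    # kept
    "idiot stupid obviously wrong",      # reverted (contentious terms)
]
reverted = [0, 1, 0, 1]

vec = CountVectorizer()
X = vec.fit_transform(changed_words)
clf = LogisticRegression().fit(X, reverted)

# Positive weights flag words likely to draw a revert, which is what
# enables word-level feedback in an editing interface.
weights = dict(zip(vec.get_feature_names_out(), clf.coef_[0]))
print(sorted(weights, key=weights.get, reverse=True)[:3])
```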


Human Factors in Computing Systems | 2013

TouchViz: (multi)touching multivariate data

Jeffrey M. Rzeszotarski; Aniket Kittur

In this paper we describe TouchViz, an information visualization system for tablets that encourages rich interaction, exploration, and play through references to physical models. TouchViz turns data into physical objects that experience forces and respond to the user. We describe the design of the system and conduct a user study to explore its use, finding that it supports many different models of data exploration and encourages users to have fun exploring data.


Conference on Computer Supported Cooperative Work | 2015

And Now for Something Completely Different: Improving Crowdsourcing Workflows with Micro-Diversions

Peng Dai; Jeffrey M. Rzeszotarski; Praveen Paritosh; Ed H. Chi


Human Factors in Computing Systems | 2014

Estimating the social costs of friendsourcing

Jeffrey M. Rzeszotarski; Meredith Ringel Morris


National Conference on Artificial Intelligence | 2013

Inserting Micro-Breaks into Crowdsourcing Workflows

Jeffrey M. Rzeszotarski; Ed H. Chi; Praveen Paritosh; Peng Dai


Human Factors in Computing Systems | 2014

Is anyone out there?: unpacking Q&A hashtags on Twitter

Jeffrey M. Rzeszotarski; Emma S. Spiro; Jorge Nathan Matias; Andrés Monroy-Hernández; Meredith Ringel Morris

Collaboration


Dive into Jeffrey M. Rzeszotarski's collaborations.

Top Co-Authors

Aniket Kittur (Carnegie Mellon University)
Adam Marcus (Massachusetts Institute of Technology)
Emma S. Spiro (University of Washington)
Jeffrey P. Bigham (Carnegie Mellon University)
Jorge Nathan Matias (Massachusetts Institute of Technology)