Mitchell Gordon
University of Rochester
Publications
Featured research published by Mitchell Gordon.
user interface software and technology | 2014
Walter S. Lasecki; Mitchell Gordon; Danai Koutra; Malte F. Jung; Steven P. Dow; Jeffrey P. Bigham
Behavioral researchers spend a considerable amount of time coding video data to systematically extract meaning from subtle human actions and emotions. In this paper, we present Glance, a tool that allows researchers to rapidly query, sample, and analyze large video datasets for behavioral events that are hard to detect automatically. Glance takes advantage of the parallelism available in paid online crowds to interpret natural language queries and then aggregates responses in a summary view of the video data. Glance provides analysts with rapid responses when initially exploring a dataset, and reliable codings when refining an analysis. Our experiments show that Glance can code nearly 50 minutes of video in 5 minutes by recruiting over 60 workers simultaneously, and can get initial feedback to analysts in under 10 seconds for most clips. We present and compare new methods for accurately aggregating the input of multiple workers marking the spans of events in video data, and for measuring the quality of their coding in real-time, before a baseline is established, by measuring the variance between workers. Glance's rapid responses to natural language queries, feedback regarding question ambiguity and anomalies in the data, and ability to build on prior context in follow-up queries allow users to have a conversation-like interaction with their data - opening up new possibilities for naturally exploring video data.
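One way to aggregate span markings from multiple workers is to count, for each moment of video, how many workers marked it and keep the stretches where agreement is high. The sketch below illustrates that idea; the function name, per-second resolution, and 0.5 agreement threshold are assumptions for illustration, not Glance's actual aggregation method.

```python
# Minimal sketch: merge event spans marked by several crowd workers by
# majority agreement over one-second bins. Illustrative only.

def aggregate_spans(worker_spans, video_length, threshold=0.5):
    """worker_spans: one list of (start, end) second-tuples per worker.
    Returns consensus (start, end) spans marked by >= threshold of workers."""
    n_workers = len(worker_spans)
    # Count how many workers marked each second of the video.
    votes = [0] * video_length
    for spans in worker_spans:
        for start, end in spans:
            for t in range(max(0, start), min(video_length, end)):
                votes[t] += 1

    # Keep contiguous runs of seconds whose agreement meets the threshold.
    consensus, run_start = [], None
    for t in range(video_length):
        agreed = votes[t] / n_workers >= threshold
        if agreed and run_start is None:
            run_start = t
        elif not agreed and run_start is not None:
            consensus.append((run_start, t))
            run_start = None
    if run_start is not None:
        consensus.append((run_start, video_length))
    return consensus

# Three workers mark roughly the same event around 10-20s.
print(aggregate_spans([[(10, 20)], [(12, 21)], [(9, 19)]], video_length=30))
# -> [(10, 20)]
```

The same per-second vote counts could also feed the variance-based quality measure the abstract mentions: low variance across workers signals a reliable coding even before a gold-standard baseline exists.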
user interface software and technology | 2015
Mitchell Gordon; Jeffrey P. Bigham; Walter S. Lasecki
We introduce LegionTools, a toolkit and interface for managing large, synchronous crowds of online workers for experiments. This poster contributes the design and implementation of a state-of-the-art crowd management tool, along with a publicly-available, open-source toolkit that future system builders can use to coordinate synchronous crowds of online workers for their systems and studies. We describe the toolkit itself, along with the underlying design rationale, in order to make it clear to the community of system builders at UIST when and how this tool may be beneficial to their project. We also describe initial deployments of the system in which workers were synchronously recruited to support real-time crowdsourcing systems, including the largest synchronous recruitment and routing of workers from Mechanical Turk that we are aware of. While the version of LegionTools discussed here focuses on Amazon's Mechanical Turk platform, it can be easily extended to other platforms as APIs become available.
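Recruiting a synchronous crowd typically means posting a HIT with many assignments and routing arriving workers to a shared task page. The sketch below shows that pattern using the modern boto3 MTurk client; LegionTools predates boto3, so this is not its actual implementation, and the task URL and all parameter values are illustrative assumptions.

```python
# Minimal sketch: post a multi-assignment HIT that routes workers to a
# (hypothetical) waiting page for a synchronous real-time task.
import boto3

mturk = boto3.client(
    "mturk",
    # Sandbox endpoint; drop endpoint_url to post to the live marketplace.
    endpoint_url="https://mturk-requester-sandbox.us-east-1.amazonaws.com",
)

# ExternalQuestion pointing workers at a task page that can hold them in
# a pool until the synchronous task begins. URL is a placeholder.
external_question = """
<ExternalQuestion xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2006-07-14/ExternalQuestion.xsd">
  <ExternalURL>https://example.org/realtime-task</ExternalURL>
  <FrameHeight>600</FrameHeight>
</ExternalQuestion>"""

hit = mturk.create_hit(
    Title="Join a short real-time task",
    Description="You will be routed to a live task as soon as it starts.",
    Keywords="realtime, study",
    Reward="0.25",
    MaxAssignments=60,                 # recruit many workers at once
    LifetimeInSeconds=600,             # keep the recruiting window short
    AssignmentDurationInSeconds=1800,
    Question=external_question,
)
print("Posted HIT:", hit["HIT"]["HITId"])
```

Keeping the HIT lifetime short concentrates arrivals in time, which is the key requirement when workers must all be present simultaneously.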
symposium on visual languages and human-centric computing | 2015
Joyce Zhu; Jeremy Warner; Mitchell Gordon; Jeffery White; Renan Zanelatto; Philip J. Guo
Online discussion forums are one of the most ubiquitous kinds of resources for people who are learning computer programming. However, their user interface - a hierarchy of textual threads - has not changed much in the past four decades. We argue that generic forum interfaces are cumbersome for learning programming and that there is a need for a domain-specific visual discussion forum for programming. We support this argument with an empirical study of all 5,377 forum threads in Introduction to Computer Science and Programming Using Python, a popular edX MOOC. Specifically, we investigated how forum participants were hampered by its text-based format. Most notably, people often wanted to discuss questions about dynamic execution state - what happens “under the hood” as the computer runs code. We propose that a better forum for learning programming should be visual and domain-specific, integrating automatically-generated visualizations of execution state and enabling inline annotations of source code and output.
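The automatically-generated visualizations of execution state that the paper proposes require tracing a program as it runs. Below is a minimal sketch of that idea using Python's sys.settrace hook; Python Tutor's real tracer is far more complete, and the function name and output format here are illustrative assumptions.

```python
# Minimal sketch: record (line number, local variables) before each line
# of a student's program executes, the raw material for a step-by-step
# execution visualization embedded in a forum post.
import sys

def trace_execution(code_string):
    """Run code_string, recording (line_number, locals) at each step."""
    steps = []

    def tracer(frame, event, arg):
        if event == "line" and frame.f_code.co_filename == "<student>":
            state = {k: v for k, v in frame.f_locals.items()
                     if not k.startswith("__")}
            steps.append((frame.f_lineno, state))
        return tracer

    sys.settrace(tracer)
    try:
        exec(compile(code_string, "<student>", "exec"), {})
    finally:
        sys.settrace(None)
    return steps

for line, state in trace_execution("total = 0\nfor i in range(3):\n    total += i"):
    print(line, state)
```

A forum that stores this trace alongside a question lets readers see exactly what happens "under the hood" instead of reconstructing it from prose.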
symposium on visual languages and human-centric computing | 2015
Mitchell Gordon; Philip J. Guo
A common way to learn is by studying written step-by-step tutorials such as worked examples. However, tutorials for computer programming can be tedious to create since a static text-based format cannot convey what happens as code executes. We created a system called Codepourri that enables people to easily create visual coding tutorials by annotating steps in an automatically-generated program visualization. Using Codepourri, we developed a novel crowdsourcing workflow where learners who are visiting an educational Web site (www.pythontutor.com) collectively create a tutorial by annotating execution steps in a piece of code and then voting on the best annotations. Since there are far more learners than experts, using learners as a crowd is a potentially more scalable way of creating tutorials. Our experiments with 4 expert judges and 101 learners adding 145 raw annotations to two pieces of textbook Python code show the learner crowd's annotations to be accurate and informative, and to contain some insights that even experts missed.
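The annotate-then-vote workflow reduces, at selection time, to picking the highest-voted annotation for each execution step. The sketch below shows that selection step; the data layout and the tie-breaking rule (first submission wins) are assumptions for illustration, not Codepourri's actual logic.

```python
# Minimal sketch: choose the best learner annotation per execution step
# by vote count, in the spirit of a Codepourri-style workflow.
from collections import defaultdict

def best_annotations(annotations, votes):
    """annotations: list of (step, annotation_id, text) from learners.
    votes: dict mapping annotation_id -> number of upvotes.
    Returns {step: text of the highest-voted annotation for that step}."""
    by_step = defaultdict(list)
    for step, ann_id, text in annotations:
        by_step[step].append((votes.get(ann_id, 0), text))
    # Highest vote count wins; max() keeps the earliest entry on ties.
    return {step: max(cands, key=lambda c: c[0])[1]
            for step, cands in by_step.items()}

anns = [
    (1, "a1", "x is initialized to 0"),
    (1, "a2", "sets up the accumulator for the loop"),
    (2, "a3", "each loop iteration adds one element to x"),
]
print(best_annotations(anns, {"a1": 2, "a2": 5, "a3": 3}))
# -> {1: 'sets up the accumulator for the loop', 2: 'each loop iteration adds one element to x'}
```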
conference on computers and accessibility | 2014
Mitchell Gordon
Evaluating the results of user accessibility testing on the web can take a significant amount of time, training, and effort. Some of this work can be offloaded to others by coding video data from user tests to systematically extract meaning from subtle human actions and emotions. However, traditional video coding methods can take a considerable amount of time. We have created Glance, a tool that uses the crowd to allow researchers to rapidly query, sample, and analyze large video datasets for behavioral events that are hard to detect automatically. In this abstract, we discuss how Glance can be used to quickly code video of users with special needs interacting with a website, coding for whether the website conforms to accessibility guidelines, in order to evaluate how accessible it is and where potential problems lie.
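Once workers have coded the video, per-guideline judgments can be combined into a simple conformance report. The sketch below assumes each worker answers yes/no per guideline and uses a majority vote; the guideline names and the voting rule are illustrative assumptions, not the paper's method.

```python
# Minimal sketch: turn per-worker yes/no codings into a per-guideline
# accessibility conformance report via majority vote. Illustrative only.

def conformance_report(codings):
    """codings: dict mapping guideline -> list of booleans (one per worker).
    Returns guideline -> True if a majority of workers judged it met."""
    return {g: sum(answers) > len(answers) / 2
            for g, answers in codings.items()}

print(conformance_report({
    "images have alt text": [True, True, False],
    "form fields are labeled": [False, False, True],
}))
# -> {'images have alt text': True, 'form fields are labeled': False}
```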
human factors in computing systems | 2015
Walter S. Lasecki; Mitchell Gordon; Winnie Leung; Ellen Lim; Jeffrey P. Bigham; Steven P. Dow
human factors in computing systems | 2014
Walter S. Lasecki; Mitchell Gordon; Steven P. Dow; Jeffrey P. Bigham
HCOMP | 2017
Harmanpreet Kaur; Mitchell Gordon; Yi Wei Yang; Jeffrey P. Bigham; Jaime Teevan; Ece Kamar; Walter S. Lasecki
Archive | 2016
Walter S. Lasecki; Mitchell Gordon; Jaime Teevan; Ece Kamar; Jeff P. Bigham
national conference on artificial intelligence | 2014
Mitchell Gordon; Walter S. Lasecki; Winnie Leung; Ellen Lim; Steven P. Dow; Jeffrey P. Bigham