Publication


Featured research published by Justin Cheng.


ACM Transactions on Computer-Human Interaction | 2013

Peer and self assessment in massive online classes

Chinmay Kulkarni; Koh Pang Wei; Huy Le; Daniel Chia; Kathryn Papadopoulos; Justin Cheng; Daphne Koller; Scott R. Klemmer

Peer and self-assessment offer an opportunity to scale both assessment and learning to global classrooms. This article reports our experiences with two iterations of the first large online class to use peer and self-assessment. In this class, peer grades correlated highly with staff-assigned grades. The second iteration had 42.9% of students’ grades within 5% of the staff grade, and 65.5% within 10%. On average, students assessed their work 7% higher than staff did. Students also rated peers’ work from their own country 3.6% higher than those from elsewhere. We performed three experiments to improve grading accuracy. We found that giving students feedback about their grading bias increased subsequent accuracy. We introduce short, customizable feedback snippets that cover common issues with assignments, providing students more qualitative peer feedback. Finally, we introduce a data-driven approach that highlights high-variance items for improvement. We find that rubrics that use a parallel sentence structure, unambiguous wording, and well-specified dimensions have lower variance. After revising rubrics, median grading error decreased from 12.4% to 9.9%.
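
To make the agreement statistics above concrete, here is a minimal Python sketch (with invented grades rather than data from the study) of how one might compute the share of peer grades within 5% and 10% of the staff grade, and the median grading error, on a 0-100 scale:

    import statistics

    def grading_agreement(peer_grades, staff_grades):
        # Absolute gap between each peer grade and the staff grade (0-100 scale).
        errors = [abs(p - s) for p, s in zip(peer_grades, staff_grades)]
        n = len(errors)
        return {
            "within_5": sum(e <= 5 for e in errors) / n,
            "within_10": sum(e <= 10 for e in errors) / n,
            "median_error": statistics.median(errors),
        }

    # Toy grades, not from the study:
    print(grading_agreement([88, 72, 95, 60, 81], [85, 70, 90, 68, 80]))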


Privacy, Security, Risk and Trust | 2011

Predicting Reciprocity in Social Networks

Justin Cheng; Daniel M. Romero; Brendan Meeder; Jon M. Kleinberg

In social media settings where users send messages to one another, the issue of reciprocity naturally arises: does the communication between two users take place only in one direction, or is it reciprocated? In this paper we study the problem of reciprocity prediction: given the characteristics of two users, we wish to determine whether the communication between them is reciprocated or not. We approach this problem using decision trees and regression models to determine good indicators of reciprocity. We extract a network based on directed @-messages sent between users on Twitter, and identify measures based on the attributes of nodes and their network neighborhoods that can be used to construct good predictors of reciprocity. Moreover, we find that reciprocity prediction forms interesting contrasts with earlier network prediction tasks, including link prediction, as well as the inference of strengths and signs of network links.
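
As an illustration of the setup, a hedged sketch of reciprocity prediction with a decision tree, assuming scikit-learn; the feature columns here are invented stand-ins for the node and neighborhood attributes the paper derives from Twitter @-message data:

    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier

    # One row per directed pair (u, v); columns are illustrative:
    # [out_degree_u, in_degree_v, shared_neighbors, msgs_u_to_v]
    X = [[12, 40, 3, 5], [2, 300, 0, 1], [25, 18, 7, 9],
         [1, 5, 0, 1], [30, 22, 10, 14], [4, 90, 1, 2]]
    y = [1, 0, 1, 0, 1, 0]  # 1 = v messaged u back (reciprocated)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.33, random_state=0)
    clf = DecisionTreeClassifier(max_depth=3).fit(X_tr, y_tr)
    print(clf.predict(X_te))  # predicted reciprocity for held-out pairs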


Conference on Computer Supported Cooperative Work | 2014

Ensemble: exploring complementary strengths of leaders and crowds in creative collaboration

Joy Kim; Justin Cheng; Michael S. Bernstein

In story writing, the diverse perspectives of the crowd could support an author's search for the perfect character, setting, or plot. However, structuring crowd collaboration is challenging. Too little structure leads to unfocused, sprawling narratives, and too much structure stifles creativity. Motivated by the idea that individual creative leaders and the crowd have complementary creative strengths, we present an approach where a leader directs the high-level vision for a story and articulates creative constraints for the crowd. This approach is embodied in Ensemble, a novel collaborative story-writing platform. In a month-long short story competition, over one hundred volunteer users on the web started over fifty short stories using Ensemble. Leaders used the platform to direct collaborator work by establishing creative goals, and collaborators contributed meaningful, high-level ideas to stories through specific suggestions. This work suggests that asymmetric creative contributions may support a broad new class of creative collaborations.


Human Factors in Computing Systems | 2015

Break It Down: A Comparison of Macro- and Microtasks

Justin Cheng; Jaime Teevan; Shamsi T. Iqbal; Michael S. Bernstein

A large, seemingly overwhelming task can sometimes be transformed into a set of smaller, more manageable microtasks that can each be accomplished independently. For example, it may be hard to subjectively rank a large set of photographs, but easy to sort them in spare moments by making many pairwise comparisons. In crowdsourcing systems, microtasking enables unskilled workers with limited commitment to work together to complete tasks they would not be able to do individually. We explore the costs and benefits of decomposing macrotasks into microtasks for three task categories: arithmetic, sorting, and transcription. We find that breaking these tasks into microtasks results in longer overall task completion times, but higher quality outcomes and a better experience that may be more resilient to interruptions. These results suggest that microtasks can help people complete high quality work in interruption-driven environments.
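
The photo-ranking example lends itself to a small sketch: a standard comparison sort can drive the decomposition, with each pairwise comparison standing in for one microtask. The quality scores below are invented for illustration:

    import functools

    def ask_worker(a, b):
        # Stand-in for a pairwise-comparison microtask; a real system would
        # ask a crowd worker which of the two photos is better.
        return -1 if a["quality"] > b["quality"] else 1

    photos = [{"id": "p1", "quality": 0.4},
              {"id": "p2", "quality": 0.9},
              {"id": "p3", "quality": 0.7}]

    ranked = sorted(photos, key=functools.cmp_to_key(ask_worker))
    print([p["id"] for p in ranked])  # best-first ranking from pairwise microtasks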


Conference on Computer Supported Cooperative Work | 2017

Anyone Can Become a Troll: Causes of Trolling Behavior in Online Discussions

Justin Cheng; Michael S. Bernstein; Cristian Danescu-Niculescu-Mizil; Jure Leskovec

In online communities, antisocial behavior such as trolling disrupts constructive discussion. While prior work suggests that trolling behavior is confined to a vocal and antisocial minority, we demonstrate that ordinary people can engage in such behavior as well. We propose two primary trigger mechanisms: the individual's mood, and the surrounding context of a discussion (e.g., exposure to prior trolling behavior). Through an experiment simulating an online discussion, we find that both negative mood and seeing troll posts by others significantly increase the probability of a user trolling, and together double this probability. To support and extend these results, we study how these same mechanisms play out in the wild via a data-driven, longitudinal analysis of a large online news discussion community. This analysis exposes temporal mood effects, and explores long-range patterns of repeated exposure to trolling. A predictive model of trolling behavior reveals that mood and discussion context together can explain trolling behavior better than an individual's history of trolling. These results combine to suggest that ordinary people can, under the right circumstances, behave like trolls.
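
A loose sketch of the kind of predictive model described, using logistic regression over mood and discussion-context features; the feature columns and data are invented for illustration, and the paper's actual model is richer:

    from sklearn.linear_model import LogisticRegression

    # Columns: [negative_mood, troll_posts_already_in_thread, prior_troll_history]
    X = [[1, 2, 0], [0, 0, 0], [1, 0, 1], [0, 3, 0], [1, 3, 1], [0, 0, 1]]
    y = [1, 0, 1, 1, 1, 0]  # 1 = user's next post was flagged as trolling

    model = LogisticRegression().fit(X, y)
    # Estimated trolling probability for a user in a bad mood who has just
    # seen two troll posts:
    print(model.predict_proba([[1, 2, 0]])[0][1])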


International World Wide Web Conference | 2016

Do Cascades Recur?

Justin Cheng; Lada A. Adamic; Jon M. Kleinberg; Jure Leskovec

Cascades of information-sharing are a primary mechanism by which content reaches its audience on social media, and an active line of research has studied how such cascades, which form as content is reshared from person to person, develop and subside. In this paper, we perform a large-scale analysis of cascades on Facebook over significantly longer time scales, and find that a more complex picture emerges, in which many large cascades recur, exhibiting multiple bursts of popularity with periods of quiescence in between. We characterize recurrence by measuring the time elapsed between bursts, their overlap and proximity in the social network, and the diversity in the demographics of individuals participating in each peak. We discover that content virality, as revealed by its initial popularity, is a main driver of recurrence, with the availability of multiple copies of that content helping to spark new bursts. Still, beyond a certain popularity of content, the rate of recurrence drops as cascades start exhausting the population of interested individuals. We reproduce these observed patterns in a simple model of content recurrence simulated on a real social network. Using only characteristics of a cascade's initial burst, we demonstrate strong performance in predicting whether it will recur in the future.
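
To illustrate one of the measurements above, a minimal sketch of detecting bursts in a cascade's reshare time series and the quiescent gaps between them; the thresholding rule and daily counts are invented:

    def find_bursts(daily_shares, threshold=100):
        # Return (start, end) day-index pairs of runs above the threshold.
        bursts, start = [], None
        for t, count in enumerate(daily_shares):
            if count >= threshold and start is None:
                start = t
            elif count < threshold and start is not None:
                bursts.append((start, t - 1))
                start = None
        if start is not None:
            bursts.append((start, len(daily_shares) - 1))
        return bursts

    series = [5, 120, 400, 90, 3, 2, 1, 150, 310, 80, 4]
    bursts = find_bursts(series)
    gaps = [b2[0] - b1[1] for b1, b2 in zip(bursts, bursts[1:])]
    print(bursts, gaps)  # two bursts with a multi-day lull between them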


Conference on Computer Supported Cooperative Work | 2015

Flock: Hybrid Crowd-Machine Learning Classifiers

Justin Cheng; Michael S. Bernstein

We present hybrid crowd-machine learning classifiers: classification models that start with a written description of a learning goal, use the crowd to suggest predictive features and label data, and then weigh these features using machine learning to produce models that are accurate and use human-understandable features. These hybrid classifiers enable fast prototyping of machine learning models that can improve on both algorithm performance and human judgment, and accomplish tasks where automated feature extraction is not yet feasible. Flock, an interactive machine learning platform, instantiates this approach. To generate informative features, Flock asks the crowd to compare paired examples, an approach inspired by analogical encoding. The crowd's efforts can be focused on specific subsets of the input space where machine-extracted features are not predictive, or instead used to partition the input space and improve algorithm performance in subregions of the space. An evaluation on six prediction tasks, ranging from detecting deception to differentiating impressionist artists, demonstrated that aggregating crowd features improves upon both asking the crowd for a direct prediction and off-the-shelf machine learning features by over 10%. Further, hybrid systems that use both crowd-nominated and machine-extracted features can outperform those that use either in isolation.
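
The core mechanics can be sketched briefly: crowd answers to nominated questions become binary features, concatenated with machine-extracted features, and a standard learner weighs both. Everything below is invented for illustration:

    from sklearn.linear_model import LogisticRegression

    # Crowd answers to nominated questions, e.g. "Does the speaker avert their gaze?"
    crowd_features = [[1, 0], [0, 1], [1, 1], [0, 0]]
    # Machine-extracted features for the same examples:
    machine_features = [[0.2, 3.1], [0.8, 1.0], [0.4, 2.7], [0.9, 0.5]]

    X = [c + m for c, m in zip(crowd_features, machine_features)]
    y = [1, 0, 1, 0]  # e.g., 1 = deceptive, 0 = truthful

    model = LogisticRegression().fit(X, y)
    print(model.coef_)  # learned weights over crowd and machine features together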


Human Factors in Computing Systems | 2015

Measuring Crowdsourcing Effort with Error-Time Curves

Justin Cheng; Jaime Teevan; Michael S. Bernstein

Crowdsourcing systems lack effective measures of the effort required to complete each task. Without knowing how much time workers need to execute a task well, requesters struggle to accurately structure and price their work. Objective measures of effort could better help workers identify tasks that are worth their time. We propose a data-driven effort metric, ETA (error-time area), that can be used to determine a task's fair price. It empirically models the relationship between time and error rate by manipulating the time that workers have to complete a task. ETA reports the area under the error-time curve as a continuous metric of worker effort. The curve's 10th percentile is also interpretable as the minimum time most workers require to complete the task without error, which can be used to price the task. We validate the ETA metric on ten common crowdsourcing tasks, including tagging, transcription, and search, and find that ETA closely tracks how workers would rank these tasks by effort. We also demonstrate how ETA allows requesters to rapidly iterate on task designs and measure whether the changes improve worker efficiency. Our findings can facilitate the process of designing, pricing, and allocating crowdsourcing tasks.
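
Since ETA is defined as an area under an error-time curve, a minimal sketch of the computation, with invented sample points and a plain trapezoid rule standing in for the paper's exact estimation procedure:

    def trapezoid(xs, ys):
        # Area under a piecewise-linear curve through the sample points.
        return sum((x2 - x1) * (y1 + y2) / 2
                   for (x1, y1), (x2, y2) in zip(zip(xs, ys), zip(xs[1:], ys[1:])))

    time_limits = [5, 10, 20, 40, 80]          # seconds allowed per task
    error_rates = [0.9, 0.6, 0.25, 0.05, 0.0]  # observed error at each limit

    print(f"ETA (area under error-time curve) = {trapezoid(time_limits, error_rates):.1f}")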


Conference on Computer Supported Cooperative Work | 2013

Tools for predicting drop-off in large online classes

Justin Cheng; Chinmay Kulkarni; Scott R. Klemmer

This paper describes two diagnostic tools for predicting which students are at risk of dropping out of an online class. While thousands of students have been attracted to large online classes, keeping them motivated has been challenging. Experiments on a large, online HCI class suggest that the tools this paper introduces can help identify students who will not complete assignments, with F1 scores of 0.46 and 0.73 three days before the assignment due date.
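
For reference, the F1 score cited above is the harmonic mean of precision and recall; a small self-contained computation on invented predictions (1 = student will miss the assignment):

    def f1_score(y_true, y_pred):
        tp = sum(t == p == 1 for t, p in zip(y_true, y_pred))
        fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
        fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

    print(f1_score([1, 0, 1, 1, 0, 0], [1, 0, 0, 1, 1, 0]))  # ≈ 0.67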


Human Factors in Computing Systems | 2011

GoSlow: designing for slowness, reflection and solitude

Justin Cheng; Akshay Bapat; Gregory Thomas; Kevin Tse; Nikhil Nawathe; Jeremy Crockett; Gilly Leshed

We are surrounded by technologies that fuel a fast-paced, at-the-moment, connected life. In contrast, GoSlow is a mobile application designed to help users slow down, contemplate, and be alone. Through serendipitous moments of pause and reflection, GoSlow offers simple ways for users to cut back and relax, provides an outlet for contemplation and reminiscence, and helps them disconnect and get away. Our user study reveals that GoSlow encourages introspective reflection, slowing down, and can help reduce stress with minimal intervention.
