
Publication


Featured research published by Kurt Luther.


conference on computer supported cooperative work | 2012

Who gives a tweet?: evaluating microblog content value

Paul André; Michael S. Bernstein; Kurt Luther

While microblog readers have a wide variety of reactions to the content they see, studies have tended to focus on extremes such as retweeting and unfollowing. To understand the broad continuum of reactions in-between, which are typically not shared publicly, we designed a website that collected the first large corpus of follower ratings on Twitter updates. Using our dataset of over 43,000 voluntary ratings, we find that nearly 36% of the rated tweets are worth reading, 25% are not, and 39% are middling. These results suggest that users tolerate a large amount of less-desired content in their feeds. We find that users value information sharing and random thoughts above me-oriented or presence updates. We also offer insight into evolving social norms, such as lack of context and misuse of @mentions and hashtags. We discuss implications for emerging practice and tool design.


designing interactive systems | 2008

Games for virtual team building

Jason B. Ellis; Kurt Luther; Katherine Bessière; Wendy A. Kellogg

Distributed teams are increasingly common in today's workplace. For these teams, face-to-face meetings where members can most easily build trust are rare and often cost-prohibitive. 3D virtual worlds and games may provide an alternate means for encouraging team development due to their affordances for facile communication, emotional engagement, and social interaction among participants. Using principles derived from social psychological theory, we have designed and built a collection of team-building games within the popular virtual world Second Life. We detail here the design decisions made in the creation of these games and discuss how they evolved based on early participant observations.


international conference on supporting group work | 2010

Why it works (when it works): success factors in online creative collaboration

Kurt Luther; Kelly Caine; Kevin Ziegler; Amy Bruckman

Online creative collaboration (peer production) has enabled the creation of Wikipedia and open source software (OSS), and is rapidly expanding to encompass new domains, such as video, music, and animation. But what are the underlying principles allowing online creative collaboration to succeed, and how well do they transfer from one domain to another? In this paper, we address these questions by comparing and contrasting online, collaborative animated movies, called collabs, with OSS projects. First, we use qualitative methods to solicit potential success factors from collab participants. Then, we test these predictions by quantitatively analyzing a data set of nearly 900 collabs. Finally, we compare and contrast our results with the literature on OSS development and propose broader theoretical implications. Our findings offer a starting point for a systematic research agenda seeking to unlock the potential of online creative collaboration.


conference on computer supported cooperative work | 2015

Structuring, Aggregating, and Evaluating Crowdsourced Design Critique

Kurt Luther; Jari-Lee Tolentino; Wei Wu; Amy Pavel; Brian P. Bailey; Maneesh Agrawala; Björn Hartmann; Steven P. Dow

Feedback is an important component of the design process, but gaining access to high-quality critique outside a classroom or firm is challenging. We present CrowdCrit, a web-based system that allows designers to receive design critiques from non-expert crowd workers. We evaluated CrowdCrit in three studies focusing on the designers' experience and the benefits of the critiques. In the first study, we compared crowd and expert critiques and found evidence that aggregated crowd critique approaches expert critique. In a second study, we found that designers who got crowd feedback perceived that it improved their design process. The third study showed that designers were enthusiastic about crowd critiques and used them to change their designs. We conclude with implications for the design of crowd feedback services.


human factors in computing systems | 2009

Pathfinder: an online collaboration environment for citizen scientists

Kurt Luther; Scott Counts; Kristin Brooke Stecher; Aaron Hoff; Paul Johns

For over a century, citizen scientists have volunteered to collect huge quantities of data for professional scientists to analyze. We designed Pathfinder, an online environment that challenges this traditional division of labor by providing tools for citizen scientists to collaboratively discuss and analyze the data they collect. We evaluated Pathfinder in a sustainability and commuting context using a mixed methods approach in both naturalistic and experimental settings. Our results showed that citizen scientists preferred Pathfinder to a standard wiki and were able to go beyond data collection and engage in deeper discussion and analyses. We also found that citizen scientists require special types of technological support because they generate original research. This paper offers an early example of the mutually beneficial relationship between HCI and citizen science.


conference on computer supported cooperative work | 2016

Almost an Expert: The Effects of Rubrics and Expertise on Perceived Value of Crowdsourced Design Critiques

Alvin Yuan; Kurt Luther; Markus Krause; Sophie Isabel Vennix; Steven P. Dow; Björn Hartmann

Expert feedback is valuable but hard to obtain for many designers. Online crowds can provide fast and affordable feedback, but workers may lack relevant domain knowledge and experience. Can expert rubrics address this issue and help novices provide expert-level feedback? To evaluate this, we conducted an experiment with a 2x2 factorial design. Student designers received feedback on a visual design from both experts and novices, who produced feedback using either an expert rubric or no rubric. We found that rubrics helped novice workers provide feedback that was rated nearly as valuable as expert feedback. A follow-up analysis on writing style showed that student designers found feedback most helpful when it was emotionally positive and specific, and that a rubric increased the occurrence of these characteristics in feedback. The analysis also found that expertise correlated with longer critiques, but not the other favorable characteristics. An informal evaluation indicates that experts may instead have produced value by providing clearer justifications.


conference on computer supported cooperative work | 2014

CrowdCrit: crowdsourcing and aggregating visual design critique

Kurt Luther; Amy Pavel; Wei Wu; Jari-Lee Tolentino; Maneesh Agrawala; Björn Hartmann; Steven P. Dow

People who create visual designs often struggle to find high-quality critique outside a firm or classroom, and current online feedback solutions are limited. We created a system called CrowdCrit which leverages paid crowdsourcing to generate and visualize high-quality visual design critique. Our work extends prior crowd feedback research by focusing on scaffolding the process and language of studio critique for crowds.


human factors in computing systems | 2014

Curated city: capturing individual city guides through social curation

Justin Cranshaw; Kurt Luther; Patrick Gage Kelley; Norman M. Sadeh

We report on our design of Curated City, a website that lets people build their own personal guide to the city's neighborhoods by chronicling their favorite experiences. Although users make their own personal guides, they are immersed in a social curatorial experience where they are influenced directly and indirectly by the guides of others. We use a 2-week field trial involving 20 residents of Pittsburgh as a technological probe to explore the initial design decisions, and we further refine the design landscape through subject interviews. Based on this study, we identify a set of design recommendations for building scalable social platforms for curating the experiences of the city.


acm multimedia | 2008

Audio Puzzler: piecing together time-stamped speech transcripts with a puzzle game

Nicholas Diakopoulos; Kurt Luther; Irfan A. Essa

We have developed an audio-based casual puzzle game which produces a time-stamped transcription of spoken audio as a by-product of play. Our evaluation of the game indicates that it is both fun and challenging. The transcripts generated using the game are more accurate than those produced using a standard automatic transcription system and the time-stamps of words are within several hundred milliseconds of ground truth.


information and communication technologies in tourism | 2008

RevisiTour: Enriching the Tourism Experience With User-Generated Content

Youn ah Kang; John T. Stasko; Kurt Luther; Avinash Ravi; Yan Xu

We have explored design opportunities to enrich the tourism experience of people at the Georgia Aquarium by providing a context of photos and by motivating people to be active creators of content to share their experiences with others. We designed a system named RevisiTour to enable visitors to reorganize photos taken from tour sites and share the photos with others. A visitor’s path and timestamp are recorded on a badge with a sensor throughout a trip. After the trip, the visitor can access a website where s/he uploads photos, synchronizes them with the path, and shares the photos with others. We report on how the system was designed, developed, and refined. After developing a prototype, we evaluated a mock-up of the system with actual visitors in the Georgia Aquarium. The analysis and design implications show the possibility of user-generated content systems for tour sites.

Collaboration


Dive into Kurt Luther's collaborations.

Top Co-Authors

Amy Bruckman, Georgia Institute of Technology

Steven P. Dow, University of California

Irfan A. Essa, Georgia Institute of Technology

Paul André, Carnegie Mellon University