
Publication


Featured research published by Rajan Vaish.


User Interface Software and Technology | 2015

Daemo: A Self-Governed Crowdsourcing Marketplace

Snehal (Neil) Gaikwad; Durim Morina; Rohit Nistala; Megha Agarwal; Alison Cossette; Radhika Bhanu; Saiph Savage; Vishwajeet Narwal; Karan Rajpal; Jeff Regino; Aditi Mithal; Adam Ginzberg; Aditi Nath; Karolina R. Ziulkoski; Trygve Cossette; Dilrukshi Gamage; Angela Richmond-Fuller; Ryo Suzuki; Jeerel Herrejón; Kevin Le; Claudia Flores-Saviaga; Haritha Thilakarathne; Kajal Gupta; William Dai; Ankita Sastry; Shirish Goyal; Thejan Rajapakshe; Niki Abolhassani; Angela Xie; Abigail Reyes

Crowdsourcing marketplaces provide opportunities for autonomous and collaborative professional work as well as social engagement. However, in these marketplaces, workers feel disrespected due to unreasonable rejections and low payments, whereas requesters do not trust the results they receive. The lack of trust and the uneven distribution of power between workers and requesters have raised serious concerns about the sustainability of these marketplaces. To address the challenges of trust and power, this paper introduces Daemo, a self-governed crowdsourcing marketplace. We propose prototype tasks to improve work quality and an open-governance model to achieve equitable representation. We envisage that Daemo will enable workers to build sustainable careers and provide requesters with timely, quality labor for their businesses.


User Interface Software and Technology | 2016

Boomerang: Rebounding the Consequences of Reputation Feedback on Crowdsourcing Platforms

Snehalkumar (Neil) S. Gaikwad; Durim Morina; Adam Ginzberg; Catherine A. Mullings; Shirish Goyal; Dilrukshi Gamage; Christopher Diemert; Mathias Burton; Sharon Zhou; Mark E. Whiting; Karolina R. Ziulkoski; Alipta Ballav; Aaron Gilbee; Senadhipathige S. Niranga; Vibhor Sehgal; Jasmine Lin; Leonardy Kristianto; Angela Richmond-Fuller; Jeff Regino; Nalin Chhibber; Dinesh Majeti; Sachin Sharma; Kamila Mananova; Dinesh Dhakal; William Dai; Victoria Purynova; Samarth Sandeep; Varshine Chandrakanthan; Tejas Sarma; Sekandar Matin

Paid crowdsourcing platforms suffer from low-quality work and unfair rejections, but paradoxically, most workers and requesters have high reputation scores. These inflated scores, which make high-quality work and workers difficult to find, stem from social pressure to avoid giving negative feedback. We introduce Boomerang, a reputation system for crowdsourcing platforms that elicits more accurate feedback by rebounding the consequences of feedback directly back onto the person who gave it. With Boomerang, requesters find that their highly-rated workers gain earliest access to their future tasks, and workers find tasks from their highly-rated requesters at the top of their task feed. Field experiments verify that Boomerang causes both workers and requesters to provide feedback that is more closely aligned with their private opinions. Inspired by a game-theoretic notion of incentive-compatibility, Boomerang opens opportunities for interaction design to incentivize honest reporting over strategic dishonesty.
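The rebounding mechanism described in the abstract can be sketched as a simple ranking rule (a hypothetical illustration, not Daemo's actual implementation): the ratings a requester gives workers determine which workers gain earliest access to that requester's future tasks, and the ratings a worker gives requesters determine where those requesters' tasks appear in the worker's feed.

```python
# Hypothetical sketch of Boomerang-style feedback rebounding:
# the ratings you give directly re-rank your own future interactions.

def worker_access_order(ratings_by_requester: dict) -> list:
    """Workers the requester rated highest get earliest access
    to that requester's future tasks."""
    return sorted(ratings_by_requester,
                  key=ratings_by_requester.get, reverse=True)

def task_feed_order(ratings_by_worker: dict, tasks: list) -> list:
    """Tasks from requesters the worker rated highest float to
    the top of the worker's task feed (unrated requesters sink)."""
    return sorted(tasks,
                  key=lambda t: ratings_by_worker.get(t["requester"], 0),
                  reverse=True)

# Example: a requester's past ratings of three workers
ratings = {"w1": 3, "w2": 5, "w3": 4}
print(worker_access_order(ratings))  # w2 gets earliest access

# Example: a worker's feed, re-ranked by the worker's own ratings
feed = task_feed_order({"rA": 5, "rB": 2},
                       [{"id": 1, "requester": "rB"},
                        {"id": 2, "requester": "rA"}])
print([t["id"] for t in feed])
```

Because dishonest ratings directly degrade the rater's own future matches, honest reporting becomes the self-interested choice, which is the incentive-compatibility idea the abstract refers to.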


Conference on Computer Supported Cooperative Work | 2017

Crowd Guilds: Worker-led Reputation and Feedback on Crowdsourcing Platforms

Mark E. Whiting; Dilrukshi Gamage; Snehalkumar (Neil) S. Gaikwad; Aaron Gilbee; Shirish Goyal; Alipta Ballav; Dinesh Majeti; Nalin Chhibber; Angela Richmond-Fuller; Freddie Vargus; Tejas Sarma; Varshine Chandrakanthan; Teogenes Moura; Mohamed Hashim Salih; Gabriel B. T. Kalejaiye; Adam Ginzberg; Catherine A. Mullings; Yoni Dayan; Kristy Milland; Henrique R. Orefice; Jeff Regino; Sayna Parsi; Kunz Mainali; Vibhor Sehgal; Sekandar Matin; Akshansh Sinha; Rajan Vaish; Michael S. Bernstein

Crowd workers are distributed and decentralized. While decentralization is designed to utilize independent judgment to promote high-quality results, it paradoxically undercuts behaviors and institutions that are critical to high-quality work. Reputation is one central example: crowdsourcing systems depend on reputation scores from decentralized workers and requesters, but these scores are notoriously inflated and uninformative. In this paper, we draw inspiration from historical worker guilds (e.g., in the silk trade) to design and implement crowd guilds: centralized groups of crowd workers who collectively certify each other's quality through double-blind peer assessment. A two-week field experiment compared crowd guilds to a traditional decentralized crowd work model. Crowd guilds produced reputation signals more strongly correlated with ground-truth worker quality than signals available on current crowd working platforms, and more accurate than in the traditional model.


User Interface Software and Technology | 2017

Crowd Research: Open and Scalable University Laboratories

Rajan Vaish; Snehalkumar (Neil) S. Gaikwad; Geza Kovacs; Andreas Veit; Ranjay Krishna; Imanol Arrieta Ibarra; Camelia Simoiu; Michael J. Wilber; Serge J. Belongie; Sharad Goel; James Davis; Michael S. Bernstein

Research experiences today are limited to a privileged few at select universities. Providing open access to research experiences would enable global upward mobility and increased diversity in the scientific workforce. How can we coordinate a crowd of diverse volunteers on open-ended research? How could a PI have enough visibility into each person's contributions to recommend them for further study? We present Crowd Research, a crowdsourcing technique that coordinates open-ended research through an iterative cycle of open contribution, synchronous collaboration, and peer assessment. To aid upward mobility and recognize contributions in publications, we introduce a decentralized credit system: participants allocate credits to each other, which a graph centrality algorithm translates into a collectively-created author order. Over 1,500 people from 62 countries have participated, 74% from institutions with low access to research. Over two years and three projects, this crowd has produced articles at top-tier Computer Science venues, and participants have gone on to leading graduate programs.
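The credit system above translates peer-allocated credits into an author order via graph centrality. A minimal sketch of the idea is a PageRank-style power iteration over the credit graph; the paper's actual algorithm, damping, and tie-breaking may differ, and the parameter values here are illustrative assumptions.

```python
# PageRank-style centrality on a credit-allocation graph (sketch).
# credits[giver][receiver] = credits the giver allocated to the receiver.

def author_order(credits: dict, damping: float = 0.85, iters: int = 100) -> list:
    """Return participants sorted by centrality (a proposed author order)."""
    people = sorted({p for giver, out in credits.items() for p in [giver, *out]})
    n = len(people)
    rank = {p: 1.0 / n for p in people}
    for _ in range(iters):
        new = {p: (1 - damping) / n for p in people}
        for giver, out in credits.items():
            total = sum(out.values())
            if total == 0:
                continue  # giver allocated no credits this round
            for receiver, c in out.items():
                # A giver passes rank in proportion to credits allocated.
                new[receiver] += damping * rank[giver] * (c / total)
        rank = new
    return sorted(people, key=lambda p: rank[p], reverse=True)

# Example: three participants allocating credits to one another
order = author_order({
    "alice": {"bob": 3, "carol": 1},
    "bob":   {"alice": 2},
    "carol": {"alice": 1, "bob": 1},
})
print(order)
```

Credit received from highly-ranked participants counts for more than credit from peripheral ones, which is what distinguishes a centrality-based ordering from a simple credit tally.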


Conference on Computer Supported Cooperative Work | 2017

The Daemo Crowdsourcing Marketplace

Snehalkumar (Neil) S. Gaikwad; Mark E. Whiting; Dilrukshi Gamage; Catherine A. Mullings; Dinesh Majeti; Shirish Goyal; Aaron Gilbee; Nalin Chhibber; Adam Ginzberg; Angela Richmond-Fuller; Sekandar Matin; Vibhor Sehgal; Tejas Sarma; Ahmed Nasser; Alipta Ballav; Jeff Regino; Sharon Zhou; Kamila Mananova; Preethi Srinivas; Karolina R. Ziulkoski; Dinesh Dhakal; Alexander Stolzoff; Senadhipathige S. Niranga; Mohamed Hashim Salih; Akshansh Sinha; Rajan Vaish; Michael S. Bernstein

The success of crowdsourcing markets is dependent on a strong foundation of trust between workers and requesters. In current marketplaces, workers and requesters are often unable to trust each other's quality, and their mental models of tasks are misaligned due to ambiguous instructions or confusing edge cases. This breakdown of trust typically arises from (1) flawed reputation systems which do not accurately reflect worker and requester quality, and from (2) poorly designed tasks. In this demo, we present how Boomerang and Prototype Tasks, the fundamental building blocks of the Daemo crowdsourcing marketplace, help restore trust between workers and requesters. Daemo's Boomerang reputation system incentivizes alignment between opinion and ratings by determining the likelihood that workers and requesters will work together in the future based on how they rate each other. Daemo's Prototype Tasks require that new tasks go through a feedback iteration phase with a small number of workers so that requesters can revise their instructions and task designs before launch.


International World Wide Web Conference | 2018

Creating Crowdsourced Research Talks at Scale

Rajan Vaish; Shirish Goyal; Amin Saberi; Sharad Goel

There has been a marked shift towards learning and consuming information through video. Most academic research, however, is still distributed only in text form, as researchers often have limited time, resources, and incentives to create video versions of their work. To address this gap, we propose, deploy, and evaluate a scalable, end-to-end system for crowdsourcing the creation of short, 5-minute research videos based on academic papers. Doing so requires solving complex coordination and collaborative video production problems. To assist coordination, we designed a structured workflow that enables efficient delegation of tasks, while also motivating the crowd through a collaborative learning environment. To facilitate video production, we developed an online tool with which groups can make micro-audio recordings that are automatically stitched together to create a complete talk. We tested this approach with a group of volunteers recruited from 52 countries through an open call. This distributed crowd produced over 100 video talks in 12 languages based on papers from top-tier computer science conferences. The produced talks consistently received high ratings from a diverse group of non-experts and experts, including the authors of the original papers. These results indicate that our crowdsourcing approach is a promising method for producing high-quality research talks at scale, increasing the distribution and accessibility of scientific knowledge.


International Journal of Human-Computer Studies / International Journal of Man-Machine Studies | 2018

What’s in it for me? Self-serving versus other-oriented framing in messages advocating use of prosocial peer-to-peer services

Rajan Vaish; Q. Vera Liao; Victoria Bellotti

We present a study that investigates the effectiveness of self-serving versus other-oriented motivational framing of messages designed to persuade people to sign up for a prosocial peer-to-peer (P2P) service. As part of the study, volunteer message senders were incentivized to recruit people to sign up for one of three types of prosocial P2P services. Senders were given the option of choosing one of four pre-designed invitation messages to send to their contacts, two framed for self-serving motivations and two framed for other-oriented motivations. We found that recipients were more inclined to click on messages emphasizing self-serving benefits. This may not match the expectations of senders, who generally prioritized other-oriented motives for participating in prosocial P2P services. However, after recipients clicked the messages to investigate further, the effects of self- versus other-framed messages depended on the nature of the service. Our findings suggest that, even for prosocial services, messages offering self-serving motivations are more effective than altruistic ones at inspiring interest. But the overall persuasive effect on conversion may be more nuanced, with the persuasion context (service type) appearing to be a critical moderator.


arXiv: Human-Computer Interaction | 2015

On Optimizing Human-Machine Task Assignments

Andreas Veit; Michael J. Wilber; Rajan Vaish; Serge J. Belongie; James Davis; Vishal Anand; Anshu Aviral; Prithvijit Chakrabarty; Yash Chandak; Sidharth Chaturvedi; Chinmaya Devaraj; Ankit Dhall; Utkarsh Dwivedi; Sanket Gupte; Sharath N. Sridhar; Karthik Paga; Anuj Pahuja; Aditya Raisinghani; Ayush Sharma; Shweta Sharma; Darpana Sinha; Nisarg Thakkar; K. Bala Vignesh; Utkarsh Verma; Kanniganti Abhishek; Amod Agrawal; Arya Aishwarya; Aurgho Bhattacharjee; Sarveshwaran Dhanasekar; Venkata Karthik Gullapalli


arXiv: Human-Computer Interaction | 2017

CrowdTone: Crowd-powered tone feedback and improvement system for emails

Rajan Vaish; Andrés Monroy-Hernández


Learning at Scale | 2017

Mobilizing the Crowd to Create an Open Repository of Research Talks

Rajan Vaish; Sharad Goel; Amin Saberi

Collaboration


Dive into Rajan Vaish's collaborations.

Top Co-Authors


Snehalkumar (Neil) S. Gaikwad

Massachusetts Institute of Technology


Mark E. Whiting

Carnegie Mellon University
