Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Dilrukshi Gamage is active.

Publication


Featured researches published by Dilrukshi Gamage.


User Interface Software and Technology | 2015

Daemo: A Self-Governed Crowdsourcing Marketplace

Snehal (Neil) Gaikwad; Durim Morina; Rohit Nistala; Megha Agarwal; Alison Cossette; Radhika Bhanu; Saiph Savage; Vishwajeet Narwal; Karan Rajpal; Jeff Regino; Aditi Mithal; Adam Ginzberg; Aditi Nath; Karolina R. Ziulkoski; Trygve Cossette; Dilrukshi Gamage; Angela Richmond-Fuller; Ryo Suzuki; Jeerel Herrejón; Kevin Le; Claudia Flores-Saviaga; Haritha Thilakarathne; Kajal Gupta; William Dai; Ankita Sastry; Shirish Goyal; Thejan Rajapakshe; Niki Abolhassani; Angela Xie; Abigail Reyes

Crowdsourcing marketplaces provide opportunities for autonomous and collaborative professional work as well as social engagement. However, in these marketplaces, workers feel disrespected due to unreasonable rejections and low payments, whereas requesters do not trust the results they receive. The lack of trust and uneven distribution of power among workers and requesters have raised serious concerns about the sustainability of these marketplaces. To address the challenges of trust and power, this paper introduces Daemo, a self-governed crowdsourcing marketplace. We propose prototype tasks to improve work quality and an open-governance model to achieve equitable representation. We envisage that Daemo will enable workers to build sustainable careers and provide requesters with timely, quality labor for their businesses.


User Interface Software and Technology | 2016

Boomerang: Rebounding the Consequences of Reputation Feedback on Crowdsourcing Platforms

Snehalkumar (Neil) S. Gaikwad; Durim Morina; Adam Ginzberg; Catherine A. Mullings; Shirish Goyal; Dilrukshi Gamage; Christopher Diemert; Mathias Burton; Sharon Zhou; Mark E. Whiting; Karolina R. Ziulkoski; Alipta Ballav; Aaron Gilbee; Senadhipathige S. Niranga; Vibhor Sehgal; Jasmine Lin; Leonardy Kristianto; Angela Richmond-Fuller; Jeff Regino; Nalin Chhibber; Dinesh Majeti; Sachin Sharma; Kamila Mananova; Dinesh Dhakal; William Dai; Victoria Purynova; Samarth Sandeep; Varshine Chandrakanthan; Tejas Sarma; Sekandar Matin

Paid crowdsourcing platforms suffer from low-quality work and unfair rejections, but paradoxically, most workers and requesters have high reputation scores. These inflated scores, which make high-quality work and workers difficult to find, stem from social pressure to avoid giving negative feedback. We introduce Boomerang, a reputation system for crowdsourcing platforms that elicits more accurate feedback by rebounding the consequences of feedback directly back onto the person who gave it. With Boomerang, requesters find that their highly-rated workers gain earliest access to their future tasks, and workers find tasks from their highly-rated requesters at the top of their task feed. Field experiments verify that Boomerang causes both workers and requesters to provide feedback that is more closely aligned with their private opinions. Inspired by a game-theoretic notion of incentive-compatibility, Boomerang opens opportunities for interaction design to incentivize honest reporting over strategic dishonesty.
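The rebounding mechanism can be sketched in a few lines of Python: under Boomerang, the rating a requester gives a worker feeds directly back into how early that worker sees the requester's future tasks. The function and data below are hypothetical illustrations of the idea, not Daemo's actual implementation:

```python
def boomerang_order(requester_ratings, workers):
    """Order workers for earliest access to a requester's new task:
    the workers this requester rated highest come first, so an
    inflated rating costs the requester better-targeted labor."""
    return sorted(workers, key=lambda w: requester_ratings.get(w, 0), reverse=True)

# Hypothetical ratings this requester previously gave each worker (0-5).
ratings = {"ana": 5, "bo": 2, "cy": 4}
queue = boomerang_order(ratings, ["bo", "cy", "ana", "dee"])  # dee is unrated
# queue == ["ana", "cy", "bo", "dee"]
```

The symmetric case, ranking requesters in a worker's task feed by the worker's own past ratings of them, follows the same pattern.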


Conference on Computer Supported Cooperative Work | 2017

Crowd Guilds: Worker-led Reputation and Feedback on Crowdsourcing Platforms

Mark E. Whiting; Dilrukshi Gamage; Snehalkumar (Neil) S. Gaikwad; Aaron Gilbee; Shirish Goyal; Alipta Ballav; Dinesh Majeti; Nalin Chhibber; Angela Richmond-Fuller; Freddie Vargus; Tejas Sarma; Varshine Chandrakanthan; Teogenes Moura; Mohamed Hashim Salih; Gabriel B. T. Kalejaiye; Adam Ginzberg; Catherine A. Mullings; Yoni Dayan; Kristy Milland; Henrique R. Orefice; Jeff Regino; Sayna Parsi; Kunz Mainali; Vibhor Sehgal; Sekandar Matin; Akshansh Sinha; Rajan Vaish; Michael S. Bernstein

Crowd workers are distributed and decentralized. While decentralization is designed to utilize independent judgment to promote high-quality results, it paradoxically undercuts behaviors and institutions that are critical to high-quality work. Reputation is one central example: crowdsourcing systems depend on reputation scores from decentralized workers and requesters, but these scores are notoriously inflated and uninformative. In this paper, we draw inspiration from historical worker guilds (e.g., in the silk trade) to design and implement crowd guilds: centralized groups of crowd workers who collectively certify each other's quality through double-blind peer assessment. A two-week field experiment compared crowd guilds to a traditional decentralized crowd work model. Crowd guilds produced reputation signals more strongly correlated with ground-truth worker quality than signals available on current crowd working platforms, and more accurate than in the traditional model.
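The aggregation at the heart of a crowd guild can be illustrated with a short sketch: each worker's submissions receive several double-blind peer scores, and the mean score becomes that worker's reputation signal. The data and 0-5 scale below are hypothetical, not the paper's actual instrument:

```python
from statistics import mean

def guild_reputation(peer_scores):
    """peer_scores maps each worker to a list of anonymous peer
    assessments (0-5); the mean of the double-blind scores becomes
    the guild's reputation signal for that worker."""
    return {worker: round(mean(scores), 2) for worker, scores in peer_scores.items()}

# Hypothetical double-blind assessments from three guild peers per worker.
signals = guild_reputation({"w1": [4, 5, 4], "w2": [2, 3, 2]})
# signals == {"w1": 4.33, "w2": 2.33}
```

Because assessors are guild members rather than requesters, the signal is decoupled from the social pressure that inflates platform ratings.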


2015 8th International Conference on Ubi-Media Computing (UMEDIA) | 2015

Quality of MOOCs: A review of literature on effectiveness and quality aspects

Dilrukshi Gamage; Shantha Fernando; Indika Perera

Massive Open Online Courses (MOOCs) are a trending phenomenon in online education. The number of participants in a MOOC, and the number of MOOCs offered across platforms, appear to be increasing tremendously. Although MOOC has become a buzzword, recent reports claim the hype is fading. One reason is that many MOOCs are created and offered without evaluating their effectiveness; the quality of MOOCs is therefore under criticism. It is essential to seek solutions that balance learner goals while offering a quality service. Working in this direction, this literature review focuses on past research identifying the success factors, best practices, and effectiveness of MOOCs. We examined literature published between 2012 and 2015 and found significantly little empirical evidence on MOOC quality factors. Of the 4,745 peer-reviewed publications matching the search terms, only 26 were highly relevant to deciding the quality of a MOOC. Of those 26, only 3 provided quality dimensions with empirical evidence, and 7 proposed frameworks based on past literature. We discuss the concerns arising from the review and identify issues including the lack of evidence for critical success factors and the absence of social interaction, networking, and anthropological and ethnographic views in determining a quality MOOC.


Conference on Computer Supported Cooperative Work | 2017

The Daemo Crowdsourcing Marketplace

Snehalkumar (Neil) S. Gaikwad; Mark E. Whiting; Dilrukshi Gamage; Catherine A. Mullings; Dinesh Majeti; Shirish Goyal; Aaron Gilbee; Nalin Chhibber; Adam Ginzberg; Angela Richmond-Fuller; Sekandar Matin; Vibhor Sehgal; Tejas Sarma; Ahmed Nasser; Alipta Ballav; Jeff Regino; Sharon Zhou; Kamila Mananova; Preethi Srinivas; Karolina R. Ziulkoski; Dinesh Dhakal; Alexander Stolzoff; Senadhipathige S. Niranga; Mohamed Hashim Salih; Akshansh Sinha; Rajan Vaish; Michael S. Bernstein

The success of crowdsourcing markets is dependent on a strong foundation of trust between workers and requesters. In current marketplaces, workers and requesters are often unable to trust each other's quality, and their mental models of tasks are misaligned due to ambiguous instructions or confusing edge cases. This breakdown of trust typically arises from (1) flawed reputation systems which do not accurately reflect worker and requester quality, and from (2) poorly designed tasks. In this demo, we present how Boomerang and Prototype Tasks, the fundamental building blocks of the Daemo crowdsourcing marketplace, help restore trust between workers and requesters. Daemo's Boomerang reputation system incentivizes alignment between opinion and ratings by determining the likelihood that workers and requesters will work together in the future based on how they rate each other. Daemo's Prototype Tasks require that new tasks go through a feedback iteration phase with a small number of workers so that requesters can revise their instructions and task designs before launch.
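The feedback-iteration gate described for Prototype Tasks reduces to a simple launch condition: a task reaches the full marketplace only after a small pilot batch of workers has responded and the requester has made a revision pass. The field names and pilot size below are hypothetical illustrations:

```python
def may_launch(task, pilot_size=3):
    """A new task launches at scale only after enough pilot feedback
    has been collected and the requester has revised the design."""
    return len(task["pilot_feedback"]) >= pilot_size and task["revised"]

draft = {"pilot_feedback": ["step 2 is ambiguous"], "revised": False}
ready = {"pilot_feedback": ["ok", "typo in step 2", "clear now"], "revised": True}
# may_launch(draft) is False; may_launch(ready) is True
```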


Learning at Scale | 2017

Improving Assessment on MOOCs Through Peer Identification and Aligned Incentives

Dilrukshi Gamage; Mark E. Whiting; Thejan Rajapakshe; Haritha Thilakarathne; Indika Perera; Shantha Fernando

Massive Open Online Courses (MOOCs) use peer assessment to grade open-ended questions at scale, allowing students to provide feedback. Relative to teacher-based grading, peer assessment on MOOCs traditionally delivers lower-quality feedback and fewer learner interactions. We present the identified peer review (IPR) framework, which provides non-blind peer assessment and incentives driving high-quality feedback. We show that, compared to traditional peer assessment methods, IPR leads to significantly longer and more useful feedback as well as more discussion between peers.


2015 8th International Conference on Ubi-Media Computing (UMEDIA) | 2015

Factors leading to an effective MOOC from participants' perspective

Dilrukshi Gamage; Shantha Fernando; Indika Perera

Massive Open Online Courses (MOOCs) are dominating the eLearning field due to their sound pedagogical features and openness to any interested participant. Due to their popularity and demand, the number of MOOCs is increasing at a high rate. However, not all MOOCs meet users' goals; in other words, not all MOOCs are effective. It is vital to identify the factors that contribute to an effective MOOC. Since the MOOC concept is new, students' behaviors and requirements differ from those in a typical eLearning course. We therefore used Grounded Theory (GT) methodology to identify these factors. We found 10 dimensions that affect an effective MOOC: interactivity, collaboration, pedagogy, motivation, network of opportunities/future directions, assessment, learner support, technology, usability, and content. This research explains the GT process, and the resulting dimensions will assist in designing and implementing an effective MOOC.


International Conference on Advances in ICT for Emerging Regions | 2014

Improving eLearning to meet challenges in the 21st century

Dilrukshi Gamage; Indika Perera; Shantha Fernando

eLearning has been practiced for more than a decade; it is evolving and changing rapidly to meet the needs of users. Many countries are moving from an industrial-based to an information-based economy, and education systems must respond to this change. The skills needed in the modern world are critical thinking, creativity, collaboration, metacognition, and motivation. Over the last couple of decades, eLearning has focused more on individual development, largely assessing students' lower-order thinking. The pedagogy supports achieving learning outcomes based on individual performance, leaving the learner in an isolated learning environment. The latest disruption in online learning is the Massive Open Online Course (MOOC). Many researchers assert that it provides a sound pedagogical change leading to many advantages. Our research aims to address the gap between the outcomes produced by current and past eLearning and those of the next few decades. We used the qualitative Grounded Theory method to identify the factors affecting effective eLearning and ranked them by priority using the quantitative method Principal Component Analysis (PCA) in SPSS (version 10.0 for Windows).
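The ranking step can be illustrated with a stdlib-only sketch: build the covariance matrix of (hypothetical) survey responses, then use power iteration to approximate the first principal component, whose loadings indicate each factor's weight. This mirrors what PCA does, but it is an illustration under assumed data, not the paper's actual SPSS analysis:

```python
def col_mean(rows, j):
    """Mean of column j across respondent rows."""
    return sum(r[j] for r in rows) / len(rows)

def covariance_matrix(rows):
    """rows: list of respondent vectors; returns the sample covariance matrix."""
    n, d = len(rows), len(rows[0])
    mu = [col_mean(rows, j) for j in range(d)]
    return [[sum((r[i] - mu[i]) * (r[j] - mu[j]) for r in rows) / (n - 1)
             for j in range(d)] for i in range(d)]

def first_component(cov, iters=200):
    """Power iteration: the dominant eigenvector of the covariance
    matrix is the first principal component."""
    v = [1.0] * len(cov)
    for _ in range(iters):
        w = [sum(cov[i][j] * v[j] for j in range(len(v))) for i in range(len(v))]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

# Hypothetical ratings of three factors (e.g. interactivity, pedagogy, usability).
responses = [[5, 4, 2], [4, 4, 1], [5, 5, 2], [3, 3, 1]]
loadings = first_component(covariance_matrix(responses))
ranked = sorted(range(3), key=lambda j: abs(loadings[j]), reverse=True)
```

Factors with larger absolute loadings on the first component contribute more to the dominant axis of variation, which is one simple way to read off a priority ordering.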


Archive | 2017

Factors affecting effective eLearning: Learners' Perspective

Dilrukshi Gamage; M.S.D. Fernando; Indika Perera


2015 8th International Conference on Ubi-Media Computing (UMEDIA) | 2015

A framework to analyze effectiveness of eLearning in MOOC: Learners' perspective

Dilrukshi Gamage; Indika Perera; Shantha Fernando

Collaboration


Dive into Dilrukshi Gamage's collaborations.

Top Co-Authors

Mark E. Whiting

Carnegie Mellon University

Snehalkumar (Neil) S. Gaikwad

Massachusetts Institute of Technology