Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Anand Kulkarni is active.

Publication


Featured research published by Anand Kulkarni.


Conference on Computer Supported Cooperative Work (CSCW) | 2012

Collaboratively crowdsourcing workflows with Turkomatic

Anand Kulkarni; Matthew Can; Björn Hartmann

Preparing complex jobs for crowdsourcing marketplaces requires careful attention to workflow design, the process of decomposing jobs into multiple tasks, which are solved by multiple workers. Can the crowd help design such workflows? This paper presents Turkomatic, a tool that recruits crowd workers to aid requesters in planning and solving complex jobs. While workers decompose and solve tasks, requesters can view the status of worker-designed workflows in real time; intervene to change tasks and solutions; and request new solutions to subtasks from the crowd. These features lower the threshold for crowd employers to request complex work. During two evaluations, we found that allowing the crowd to plan without requester supervision is partially successful, but that requester intervention during workflow planning and execution improves quality substantially. We argue that Turkomatic's collaborative approach can be more successful than the conventional workflow design process and discuss implications for the design of collaborative crowd planning systems.


Conference on Computer Supported Cooperative Work (CSCW) | 2012

Shepherding the crowd yields better work

Steven P. Dow; Anand Kulkarni; Scott R. Klemmer; Björn Hartmann

Micro-task platforms provide massively parallel, on-demand labor. However, it can be difficult to reliably achieve high-quality work because online workers may behave irresponsibly, misunderstand the task, or lack necessary skills. This paper investigates whether timely, task-specific feedback helps crowd workers learn, persevere, and produce better results. We investigate this question through Shepherd, a feedback system for crowdsourced work. In a between-subjects study with three conditions, crowd workers wrote consumer reviews for six products they own. Participants in the None condition received no immediate feedback, consistent with most current crowdsourcing practices. Participants in the Self-assessment condition judged their own work. Participants in the External assessment condition received expert feedback. Self-assessment alone yielded better overall work than the None condition and helped workers improve over time. External assessment also yielded these benefits. Participants who received external assessment also revised their work more. We conclude by discussing interaction and infrastructure approaches for integrating real-time assessment into online work.


Symposium on User Interface Software and Technology (UIST) | 2013

Chorus: a crowd-powered conversational assistant

Walter S. Lasecki; Rachel Wesley; Jeffrey Nichols; Anand Kulkarni; James F. Allen; Jeffrey P. Bigham

Despite decades of research attempting to establish conversational interaction between humans and computers, the capabilities of automated conversational systems are still limited. In this paper, we introduce Chorus, a crowd-powered conversational assistant. When using Chorus, end users converse continuously with what appears to be a single conversational partner. Behind the scenes, Chorus leverages multiple crowd workers to propose and vote on responses. A shared memory space helps the dynamic crowd workforce maintain consistency, and a game-theoretic incentive mechanism helps to balance their efforts between proposing and voting. Studies with 12 end users and 100 crowd workers demonstrate that Chorus can provide accurate, topical responses, answering nearly 93% of user queries appropriately, and staying on-topic in over 95% of responses. We also observed that Chorus has advantages over pairing an end user with a single crowd worker and end users completing their own tasks in terms of speed, quality, and breadth of assistance. Chorus demonstrates a new future in which conversational assistants are made usable in the real world by combining human and machine intelligence, and may enable a useful new way of interacting with the crowds powering other systems.
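The propose-and-vote mechanism described above can be sketched as a simple agreement filter: workers submit candidate responses, votes accumulate, and a candidate is forwarded to the end user once it clears an agreement threshold. This is a hypothetical illustration, not the Chorus implementation; the data structures and the threshold value are assumptions.

```python
def select_response(proposals, votes, threshold=0.4):
    """Pick the crowd's response once agreement passes a threshold.

    proposals: list of candidate response strings from workers
    votes: dict mapping a proposal to the number of worker votes it received
    threshold: fraction of all votes a proposal needs to be accepted
    Returns the accepted response, or None if no proposal has enough support.
    """
    total = sum(votes.get(p, 0) for p in proposals)
    if total == 0:
        return None  # nothing has been voted on yet
    best = max(proposals, key=lambda p: votes.get(p, 0))
    if votes.get(best, 0) / total >= threshold:
        return best
    return None
```

For example, with proposals `["It opens at 9am", "Try their website"]` and votes `{"It opens at 9am": 3, "Try their website": 1}`, the first proposal holds 75% of the votes and would be forwarded to the user.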


Conference on Human Factors in Computing Systems (CHI) | 2011

Turkomatic: automatic recursive task and workflow design for Mechanical Turk

Anand Kulkarni; Matthew Can; Björn Hartmann

Completing complex tasks on crowdsourcing platforms like Mechanical Turk currently requires significant up-front investment into task decomposition and workflow design. We present a new method for automating task and workflow design for high-level, complex tasks. Unlike previous approaches, our strategy is recursive, recruiting workers from the crowd to help plan out how problems can be solved most effectively. Our initial experiments suggest that this strategy can successfully create workflows to solve tasks considered difficult from an AI perspective, although it is highly sensitive to the design choices made by workers.
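The recursive strategy described above can be sketched as a decompose-solve-merge loop: if the crowd judges a task simple enough, a worker solves it directly; otherwise workers split it into subtasks, which are solved recursively, and the partial results are merged. The crowd interface below (`is_simple`, `solve`, `decompose`, `merge`) is a set of illustrative placeholders, not Turkomatic's actual API.

```python
def solve_task(task, crowd, depth=0, max_depth=5):
    """Recursively decompose and solve a task with crowd assistance.

    `crowd` is any object exposing is_simple/solve/decompose/merge;
    these names are hypothetical, not Turkomatic's interface.
    max_depth caps recursion so a runaway decomposition terminates.
    """
    if depth >= max_depth or crowd.is_simple(task):
        return crowd.solve(task)            # a worker solves it directly
    subtasks = crowd.decompose(task)        # workers split the task
    partials = [solve_task(s, crowd, depth + 1, max_depth) for s in subtasks]
    return crowd.merge(task, partials)      # workers combine partial results
```

The abstract's observation that results are "highly sensitive to the design choices made by workers" maps directly onto this sketch: the quality of `decompose` and `merge` decisions at each level compounds through the recursion.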


Conference on Human Factors in Computing Systems (CHI) | 2011

Shepherding the crowd: managing and providing feedback to crowd workers

Steven P. Dow; Anand Kulkarni; Brie Bunge; Truc Nguyen; Scott R. Klemmer; Björn Hartmann

Micro-task platforms provide a marketplace for hiring people to do short-term work for small payments. Requesters often struggle to obtain high-quality results, especially on content-creation tasks, because work cannot be easily verified and workers can move to other tasks without consequence. Such platforms provide little opportunity for workers to reflect and improve their task performance. Timely and task-specific feedback can help crowd workers learn, persist, and produce better results. We analyze the design space for crowd feedback and introduce Shepherd, a prototype system for visualizing crowd work, providing feedback, and promoting workers into shepherding roles. This paper describes our current progress and our plans for system development and evaluation.


Conference on Computer Supported Cooperative Work (CSCW) | 2013

Micro-volunteering: helping the helpers in development

Michael S. Bernstein; Mike Bright; Edward Cutrell; Steven P. Dow; Elizabeth M. Gerber; Anupam Jain; Anand Kulkarni

Finding and retaining volunteers is a challenge for most NGOs (non-governmental organizations) and non-profit organizations worldwide. Quite often, volunteers have a desire to help but are hesitant to make time commitments due to busy lives or demanding schedules. Micro-volunteering, or crowdsourced volunteering, has taken off in the last few years: a task is divided into fragments and accomplished collectively by the crowd, so individuals need only work on small chunks of a task during short pockets of free time in their day. This panel brings together an interesting mix of researchers from the crowdsourcing and development space and social entrepreneurs to discuss the pros and cons of micro-volunteering for non-profits and to identify the missing building blocks needed to replicate this concept in developing regions worldwide.


Conference on Human Factors in Computing Systems (CHI) | 2017

Subcontracting Microwork

Meredith Ringel Morris; Jeffrey P. Bigham; Robin Brewer; Jonathan Bragg; Anand Kulkarni; Jessie Li; Saiph Savage

Mainstream crowdwork platforms treat microtasks as indivisible units; however, in this article, we propose that there is value in re-examining this assumption. We argue that crowdwork platforms can improve their value proposition for all stakeholders by supporting subcontracting within microtasks. After describing the value proposition of subcontracting, we define three models for microtask subcontracting: real-time assistance, task management, and task improvement, and reflect on potential use cases and implementation considerations associated with each. Finally, we describe the outcome of two tasks on Mechanical Turk meant to simulate aspects of subcontracting. We reflect on the implications of these findings for the design of future crowd work platforms that effectively harness the potential of subcontracting workflows.


Conference on Automation Science and Engineering (CASE) | 2008

Actuator networks for navigating an unmonitored mobile robot

Jeremy Schiff; Anand Kulkarni; Danny Bazo; Vincent Duindam; Ron Alterovitz; Dezhen Song; Ken Goldberg

Building on recent work in sensor-actuator networks and distributed manipulation, we consider the use of pure actuator networks for localization-free robotic navigation. We show how an actuator network can be used to guide an unobserved robot to a desired location in space and introduce an algorithm to calculate optimal actuation patterns for such a network. Sets of actuators are sequentially activated to induce a series of static potential fields that robustly drive the robot from a start to an end location under movement uncertainty. Our algorithm constructs a roadmap with probability-weighted edges based on motion uncertainty models and identifies an actuation pattern that maximizes the probability of successfully guiding the robot to its goal. Simulations of the algorithm show that an actuator network can robustly guide robots with various uncertainty models through a two-dimensional space. We experiment with additive Gaussian Cartesian motion uncertainty models and additive Gaussian polar models. Motion to randomly chosen destinations within the convex hull of a 10-actuator network succeeds with up to 93.4% probability. For n actuators and m samples per transition edge in our roadmap, the runtime is O(mn^6).
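The roadmap search the abstract describes, maximizing a product of edge success probabilities, reduces to a shortest-path problem after taking negative logs, since maximizing prod(p_i) is equivalent to minimizing sum(-log p_i). The sketch below runs Dijkstra's algorithm on those transformed weights; the graph structure and values are illustrative, not taken from the paper.

```python
import heapq
import math

def max_probability_path(graph, start, goal):
    """Find the path maximizing the product of edge success probabilities.

    graph: dict node -> list of (neighbor, probability) pairs, 0 < p <= 1.
    Runs Dijkstra on -log(p) edge weights, so the minimum-weight path
    is the maximum-probability path.
    Returns (best_probability, path) or (0.0, []) if goal is unreachable.
    """
    dist = {start: 0.0}
    prev = {}
    heap = [(0.0, start)]
    visited = set()
    while heap:
        d, node = heapq.heappop(heap)
        if node in visited:
            continue
        visited.add(node)
        if node == goal:
            path = [node]
            while path[-1] != start:
                path.append(prev[path[-1]])
            return math.exp(-d), path[::-1]
        for nbr, p in graph.get(node, []):
            nd = d - math.log(p)
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                prev[nbr] = node
                heapq.heappush(heap, (nd, nbr))
    return 0.0, []
```

For instance, with edges s→a (0.9), s→b (0.5), a→g (0.9), b→g (1.0), the route through a succeeds with probability 0.81 and beats the 0.5 route through b, even though the latter's final hop is certain.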


Symposium on User Interface Software and Technology (UIST) | 2012

Speaking with the crowd

Walter S. Lasecki; Rachel Wesley; Anand Kulkarni; Jeffrey P. Bigham

Automated systems are not yet able to engage in a robust dialogue with users due to the complexity and ambiguity of natural language. However, humans can easily converse with one another and maintain a shared history of past interactions. In this paper, we introduce Chorus, a system that enables real-time, two-way natural language conversation between an end user and a crowd acting as a single agent. Chorus is capable of maintaining a consistent, on-topic conversation with end users across multiple sessions, despite constituent individuals perpetually joining and leaving the crowd. This is enabled by using a curated shared dialogue history. Even though crowd members are constantly providing input, we present users with a stream of dialogue that appears to be from a single conversational partner. Experiments demonstrate that dialogue with Chorus displays elements of conversational memory and interaction consistency. Workers were able to answer 84.6% of user queries correctly, demonstrating that crowd-powered communication interfaces can serve as a robust means of interacting with software systems.


Conference on Computer Supported Cooperative Work (CSCW) | 2017

Worker-Owned Cooperative Models for Training Artificial Intelligence

Anand Sriraman; Jonathan Bragg; Anand Kulkarni

Artificial intelligence (AI) is widely expected to reduce the need for human labor in a variety of sectors. Workers on virtual labor marketplaces accelerate this process by generating training data for AI systems. We propose a new model where workers earn ownership of trained AI systems, allowing them to draw a long-term royalty from a tool that replaces their labor. This concept offers benefits for workers and requesters alike, reducing the upfront costs of model training while increasing longer-term rewards to workers. We identify design and technical problems associated with this new concept, including finding market opportunities for trained models, financing model training, and compensating workers fairly for training contributions. A survey of workers on Amazon Mechanical Turk about this idea finds that workers are willing to give up 25% of their earnings in exchange for an investment in the future performance of a machine learning system.
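The trade the survey probes, forgoing a fraction of current wages in exchange for a future royalty stream, can be framed as a simple break-even calculation: the forgone wages are an investment that pays off once cumulative royalties exceed them. The function below is a hypothetical illustration of that arithmetic; the 25% figure comes from the abstract, everything else is assumed.

```python
def break_even_periods(task_earnings, stake_fraction, royalty_per_period):
    """Number of royalty periods until forgone wages are repaid.

    task_earnings: total wages earned while training the model
    stake_fraction: fraction of wages forgone as investment (e.g. 0.25)
    royalty_per_period: royalty paid to the worker each period
    """
    if royalty_per_period <= 0:
        raise ValueError("royalty_per_period must be positive")
    invested = task_earnings * stake_fraction
    periods = 0
    recovered = 0.0
    while recovered < invested:
        recovered += royalty_per_period
        periods += 1
    return periods
```

Under these assumptions, a worker who earned $100 and staked 25% of it ($25) would break even after five periods of $5 royalties; anything the model earns beyond that is upside the worker would not have seen under a pure wage arrangement.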

Collaboration


Dive into Anand Kulkarni's collaborations.

Top Co-Authors

Prayag Narula, University of California

Jeffrey P. Bigham, Carnegie Mellon University

Matthew Can, University of California

Steven P. Dow, University of California