Network


Latest external collaborations at the country level. Dive into details by clicking on the dots.

Hotspot


Dive into the research topics where Ece Kamar is active.

Publication


Featured research published by Ece Kamar.


Spoken Language Technology Workshop | 2012

Crowdsourcing the acquisition of natural language corpora: Methods and observations

William Yang Wang; Dan Bohus; Ece Kamar; Eric Horvitz

We study the opportunity for using crowdsourcing methods to acquire language corpora for use in natural language processing systems. Specifically, we empirically investigate three methods for eliciting natural language sentences that correspond to a given semantic form. The methods convey frame semantics to crowd workers by means of sentences, scenarios, and list-based descriptions. We discuss various performance measures of the crowdsourcing process, and analyze the semantic correctness, naturalness, and biases of the collected language. We highlight research challenges and directions in applying these methods to acquire corpora for natural language processing applications.


Human Factors in Computing Systems | 2017

Revolt: Collaborative Crowdsourcing for Labeling Machine Learning Datasets

Joseph Chee Chang; Saleema Amershi; Ece Kamar

Crowdsourcing provides a scalable and efficient way to construct labeled datasets for training machine learning systems. However, creating comprehensive label guidelines for crowdworkers is often prohibitive, even for seemingly simple concepts. Incomplete or ambiguous label guidelines can then result in differing interpretations of concepts and inconsistent labels. Existing approaches for improving label quality, such as worker screening or detection of poor work, are ineffective for this problem and can lead to rejection of honest work and a missed opportunity to capture rich interpretations of the data. We introduce Revolt, a collaborative approach that brings ideas from expert annotation workflows to crowd-based labeling. Revolt eliminates the burden of creating detailed label guidelines by harnessing crowd disagreements to identify ambiguous concepts and create rich structures (groups of semantically related items) for post-hoc label decisions. Experiments comparing Revolt to traditional crowdsourced labeling show that Revolt produces high-quality labels without requiring label guidelines, in exchange for an increase in monetary cost. This up-front cost, however, is mitigated by Revolt's ability to produce reusable structures that can accommodate a variety of label boundaries without requiring new data to be collected. Further comparisons of Revolt's collaborative and non-collaborative variants show that collaboration reaches higher label accuracy at lower monetary cost.
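The mechanism at the heart of Revolt, harnessing crowd disagreement to flag ambiguous items for post-hoc label decisions, can be illustrated with a minimal sketch. The function name, agreement threshold, and example data below are hypothetical illustrations, not taken from the paper:

```python
from collections import Counter

def split_by_agreement(item_labels, threshold=1.0):
    """Partition items into confidently labeled ones and ambiguous ones
    that need post-hoc label decisions, based on crowd agreement.

    item_labels: dict mapping item id -> list of labels from workers.
    threshold: minimum fraction of workers that must agree on a label.
    """
    confident, ambiguous = {}, []
    for item, labels in item_labels.items():
        top_label, count = Counter(labels).most_common(1)[0]
        if count / len(labels) >= threshold:
            confident[item] = top_label
        else:
            ambiguous.append(item)  # route to a collaborative stage
    return confident, ambiguous

# Hypothetical example: three workers label tweets as "news" vs "opinion".
labels = {
    "t1": ["news", "news", "news"],        # unanimous -> confident
    "t2": ["news", "opinion", "opinion"],  # disagreement -> ambiguous
}
confident, ambiguous = split_by_agreement(labels)
```

In Revolt itself the ambiguous items feed a richer collaborative workflow; this sketch only shows the disagreement-based split that makes detailed guidelines unnecessary.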


Human Factors in Computing Systems | 2016

Toward a Learning Science for Complex Crowdsourcing Tasks

Shayan Doroudi; Ece Kamar; Emma Brunskill; Eric Horvitz

We explore how crowdworkers can be trained to tackle complex crowdsourcing tasks. We are particularly interested in training novice workers to perform well on solving tasks in situations where the space of strategies is large and workers need to discover and try different strategies to be successful. In a first experiment, we perform a comparison of five different training strategies. For complex web search challenges, we show that providing expert examples is an effective form of training, surpassing other forms of training in nearly all measures of interest. However, such training relies on access to domain expertise, which may be expensive or lacking. Therefore, in a second experiment we study the feasibility of training workers in the absence of domain expertise. We show that having workers validate the work of their peer workers can be even more effective than having them review expert examples if we only present solutions filtered by a threshold length. The results suggest that crowdsourced solutions of peer workers may be harnessed in an automated training pipeline.


Computational Science and Engineering | 2009

Modeling User Perception of Interaction Opportunities for Effective Teamwork

Ece Kamar; Ya'akov Gal; Barbara J. Grosz

This paper presents a model of collaborative decision-making for groups that involve people and computer agents. The model distinguishes between actions relating to participants' commitment to the group and actions relating to their individual tasks, and uses this distinction to decompose group decision making into smaller problems that can be solved efficiently. It allows computer agents to reason about the benefits of their actions for a collaboration and about the ways in which human participants perceive these benefits. The model was tested in a setting in which computer agents need to decide whether to interrupt people to obtain potentially valuable information. Results show that the magnitude of the benefit of an interruption to the collaboration is a major factor influencing the likelihood that people will accept interruption requests. They further establish that people's perceived type of their partners (whether human or computer) significantly affected their perception of the usefulness of interruptions when the benefit of an interruption was not clear-cut. These results imply that system designers need to consider not only the possible benefits of interruptions to collaborative human-computer teams but also the way such benefits are perceived by people.


Annual Meeting of the Special Interest Group on Discourse and Dialogue | 2014

Crowdsourcing Language Generation Templates for Dialogue Systems

Margaret Mitchell; Dan Bohus; Ece Kamar

We explore the use of crowdsourcing to generate natural language in spoken dialogue systems. We introduce a methodology to elicit novel templates from the crowd based on a dialogue seed corpus, and investigate the effect that the amount of surrounding dialogue context has on the generation task. Evaluation is performed both with a crowd and with a system developer to assess the naturalness and suitability of the elicited phrases. Results indicate that the crowd is able to provide reasonable and diverse templates within this methodology. More work is necessary before elicited templates can be automatically plugged into the system.


Conference on Computer Supported Cooperative Work | 2017

Communicating Context to the Crowd for Complex Writing Tasks

Niloufar Salehi; Jaime Teevan; Shamsi T. Iqbal; Ece Kamar

Crowd work is typically limited to simple, context-free tasks because they are easy to describe and understand. In contrast, complex tasks require communication between the requester and workers to achieve mutual understanding, which can be more work than it is worth. This paper explores the notion of structured communication: using structured microtasks to support communication in the domain of complex writing. Our studies compare a variety of communication mechanisms with respect to the costs to the requester in providing information and the value of that information to workers while performing the task. We find that different mechanisms are effective at different stages of writing. For early drafts, asking the requester to state the biggest problem in the current write-up is valuable and low cost, while later it is more useful for the worker if the requester highlights the text that needs to be improved. These findings can be used to enable richer, more interactive crowd work than what currently seems possible. We incorporate the findings in a workflow for crowdsourcing written content using appropriately timed mechanisms for communicating with the crowd.


International Conference on Social Robotics | 2017

What Went Wrong and Why? Diagnosing Situated Interaction Failures in the Wild

Sean Andrist; Dan Bohus; Ece Kamar; Eric Horvitz

Effective situated interaction hinges on the well-coordinated operation of a set of competencies, including computer vision, speech recognition, and natural language, as well as higher-level inferences about turn taking and engagement. Systems often rely on a set of hand-coded and machine-learned components organized into several sensing and decision-making pipelines. Given their complexity and inter-dependencies, developing and debugging such systems can be challenging. “In-the-wild” deployments outside of controlled lab conditions bring further challenges due to unanticipated phenomena, including unexpected interactions such as playful engagements. We present a methodology for assessing performance, identifying problems, and diagnosing the root causes and influences of different types of failures on the overall performance of a situated interaction system functioning in the wild. We apply the methodology to a dataset of interactions collected with a robot deployed in a public space inside an office building. The analyses identify and characterize multiple types of failures, their causes, and their relationship to overall performance. We employ models that predict overall interaction quality from various combinations of failures. Finally, we discuss lessons learned with such a diagnostic methodology for improving situated systems deployed in the wild.


Adaptive Agents and Multi-Agent Systems | 2016

POMDPs for Assisting Homeless Shelters – Computational and Deployment Challenges

Amulya Yadav; Hau Chan; Albert Xin Jiang; Eric Rice; Ece Kamar; Barbara J. Grosz; Milind Tambe

This paper looks at challenges faced during the ongoing deployment of HEALER, a POMDP-based software agent that recommends sequential intervention plans for use by homeless shelters, which organize these interventions to raise awareness about HIV among homeless youth. HEALER's sequential plans (built using knowledge of social networks of homeless youth) choose intervention participants strategically to maximize influence spread while reasoning about uncertainties in the network. In order to compute its plans, HEALER (i) casts this influence maximization problem as a POMDP and solves it using a novel planner that scales up to previously unsolvable real-world sizes; and (ii) constructs social networks of homeless youth at low cost, using a Facebook application. HEALER is currently being deployed in the real world in collaboration with a homeless shelter. Initial feedback from the shelter officials has been positive, but they were surprised by the solutions generated by HEALER, as these solutions are very counter-intuitive. Therefore, there is a need to justify HEALER's solutions in a way that mirrors the officials' intuition. In this paper, we report on progress made towards HEALER's deployment and detail the first steps taken to tackle the issue of explaining HEALER's solutions.


International Joint Conference on Artificial Intelligence | 2018

Evaluating and Complementing Vision-to-Language Technology for People who are Blind with Conversational Crowdsourcing

Elliot Salisbury; Ece Kamar; Meredith Ringel Morris

We study how real-time crowdsourcing can be used both for evaluating the value provided by existing automated approaches and for enabling workflows that provide scalable and useful alt text to blind users. We show that the shortcomings of existing AI image captioning systems frequently hinder a user's understanding of an image they cannot see, to a degree that even clarifying conversations with sighted assistants cannot correct. Based on analysis of clarifying conversations collected from our studies, we design experiences that can effectively assist users in a scalable way without the need for real-time interaction. Our results provide lessons and guidelines that the designers of future AI captioning systems can use to improve the labeling of social media imagery for blind users.


International Conference on Intelligent Transportation Systems | 2016

Optimizing the diamond lane: A more tractable carpool problem and algorithms

Cathy Wu; Kalyanaraman Shankari; Ece Kamar; Randy H. Katz; David E. Culler; Christos H. Papadimitriou; Eric Horvitz; Alexandre M. Bayen

Carpooling has long been deemed a promising approach to better utilizing existing transportation infrastructure. However, there are several reasons carpooling is still not the preferred mode of commute in the United States: first, complex human factors, including time constraints and the lack of the right incentive structures, discourage the sharing of rides; second, algorithmic and technical barriers inhibit the development of online services for matching riders. In this work, we study algorithms for 3+ high-occupancy vehicle (HOV) lanes, which permit vehicles that carry three or more people. We focus on the technical barriers but also address the aforementioned human factors. We formulate the HOV3 Carpool problem and show that it is NP-complete. We thus pose the relaxed HOV3- Carpool problem, which allows groups of up to size three, and propose several methods for finding globally optimal carpool groups that may utilize these HOV lanes. Our methods include local search, integer programming, and dynamic programming. Our local search methods include sampling-based methods (hill climbing and simulated annealing), classical neighborhood search, and a hybrid random neighborhood search. We assess the methods numerically in terms of objective value and scalability. Our findings show that our sampling-based local search methods scale up to 100K agents, improving upon related previous work (which studies up to 1000 agents). The hill-climbing local search method converges significantly closer and faster to a naive lower bound on cumulative carpooling cost.
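The hill-climbing local search for the relaxed problem (groups of up to three) can be sketched as follows. The cost model here, a fixed per-vehicle cost plus a maximum-pairwise-distance detour proxy, and all function names are illustrative assumptions, not the paper's formulation:

```python
import random
from itertools import combinations

def group_cost(coords):
    """Detour proxy: max pairwise distance among members' home locations."""
    if len(coords) < 2:
        return 0.0
    return max(((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5
               for a, b in combinations(coords, 2))

def hill_climb_carpools(homes, max_size=3, iters=5000, seed=0):
    """Hill-climbing local search for carpool grouping with groups of
    up to max_size. Start from singleton groups; repeatedly try moving
    one agent into another group and keep only non-worsening moves.
    Total cost = one unit per occupied vehicle plus each group's detour proxy.
    """
    rng = random.Random(seed)
    groups = [[i] for i in range(len(homes))]

    def total(gs):
        return sum(1.0 + group_cost([homes[i] for i in g]) for g in gs if g)

    best = total(groups)
    for _ in range(iters):
        src, dst = rng.randrange(len(groups)), rng.randrange(len(groups))
        if src == dst or not groups[src] or len(groups[dst]) >= max_size:
            continue
        agent = groups[src].pop(rng.randrange(len(groups[src])))
        groups[dst].append(agent)
        cand = total(groups)
        if cand <= best:   # accept non-worsening moves
            best = cand
        else:              # revert worsening moves
            groups[dst].pop()
            groups[src].append(agent)
    return [g for g in groups if g], best

# Hypothetical example: four commuters forming two nearby pairs.
homes = [(0, 0), (0.1, 0), (5, 5), (5.1, 5)]
groups, cost = hill_climb_carpools(homes)
```

Simulated annealing differs only in occasionally accepting worsening moves with a temperature-dependent probability; the paper's scalability results concern far larger instances than this toy sketch.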

Collaboration


Dive into Ece Kamar's collaborations.

Top Co-Authors

Amulya Yadav

University of Southern California


Andrey Kolobov

University of Washington
