
Publications


Featured research published by Shayan Doroudi.


Human Factors in Computing Systems | 2016

Toward a Learning Science for Complex Crowdsourcing Tasks

Shayan Doroudi; Ece Kamar; Emma Brunskill; Eric Horvitz

We explore how crowdworkers can be trained to tackle complex crowdsourcing tasks. We are particularly interested in training novice workers to perform well in situations where the space of strategies is large and workers need to discover and try different strategies to be successful. In a first experiment, we compare five different training strategies. For complex web search challenges, we show that providing expert examples is an effective form of training, surpassing other forms of training on nearly all measures of interest. However, such training relies on access to domain expertise, which may be expensive or lacking. Therefore, in a second experiment we study the feasibility of training workers in the absence of domain expertise. We show that having workers validate the work of their peers can be even more effective than having them review expert examples, provided we present only solutions filtered by a threshold length. The results suggest that crowdsourced solutions of peer workers may be harnessed in an automated training pipeline.
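The peer-validation pipeline hinges on filtering candidate solutions by length before showing them to new workers. A minimal sketch of that filtering step (illustrative only; the function name, word-count proxy, and default threshold are assumptions, not the paper's implementation):

```python
def select_training_solutions(solutions, min_words=30):
    """Keep only peer solutions long enough to plausibly reflect real effort.

    solutions: list of free-text solutions submitted by prior crowdworkers.
    min_words: word-count threshold below which a solution is discarded.
    """
    return [s for s in solutions if len(s.split()) >= min_words]
```

Under this sketch, surviving solutions would be handed to novice workers as validation exercises in place of expert examples.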


International Joint Conference on Artificial Intelligence | 2018

Importance Sampling for Fair Policy Selection

Shayan Doroudi; Philip S. Thomas; Emma Brunskill

We consider the problem of off-policy policy selection in reinforcement learning: using historical data generated from running one policy to compare two or more policies. We show that approaches based on importance sampling can be unfair—they can select the worse of two policies more often than not. We give two examples where the unfairness of importance sampling could be practically concerning. We then present sufficient conditions to theoretically guarantee fairness and a related notion of safety. Finally, we provide a practical importance sampling-based estimator to help mitigate one of the systematic sources of unfairness resulting from using importance sampling for policy selection.
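For readers unfamiliar with the estimator under discussion, the basic trajectory-wise importance sampling estimate of a policy's value can be sketched as follows (a generic textbook sketch under toy assumptions, not code from the paper; `pi_e` and `pi_b` are hypothetical callables returning action probabilities):

```python
def is_estimate(trajectories, pi_e, pi_b):
    """Unweighted importance sampling estimate of pi_e's expected return.

    trajectories: list of trajectories gathered by running pi_b, each a
    list of (state, action, reward) tuples.
    pi_e, pi_b: functions (state, action) -> probability of that action.
    """
    total = 0.0
    for traj in trajectories:
        rho = 1.0  # product of per-step likelihood ratios pi_e / pi_b
        ret = 0.0  # undiscounted return of this trajectory
        for state, action, reward in traj:
            rho *= pi_e(state, action) / pi_b(state, action)
            ret += reward
        total += rho * ret
    return total / len(trajectories)
```

The unfairness the paper analyzes stems from the heavy-tailed distribution of the per-trajectory weights `rho`: with few samples, an estimator of this form can systematically rank the worse of two policies higher.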


Learning at Scale | 2017

Robust Evaluation Matrix: Towards a More Principled Offline Exploration of Instructional Policies

Shayan Doroudi; Vincent Aleven; Emma Brunskill

The gold standard for identifying more effective pedagogical approaches is to run an experiment. Unfortunately, a hypothesized alternative way of teaching frequently does not yield an improved effect. Given the expense and logistics of each experiment, and the enormous space of potential ways to improve teaching, it would be highly preferable to estimate, in advance of running a study, whether an alternative teaching strategy would improve learning. This holds even in learning-at-scale settings: even if it is logistically easier to recruit a large number of subjects, the environment remains high stakes because the experiment affects many real students. For certain classes of alternative teaching approaches, such as new ways to sequence existing material, it is possible to build student models that serve as simulators to estimate the performance of learners under newly proposed teaching methods. However, existing methods for doing so can overestimate the performance of new teaching methods. We instead propose the Robust Evaluation Matrix (REM) method, which explicitly considers model mismatch between the student model used to derive the teaching strategy and the one used as a simulator to evaluate that strategy's effectiveness. We then present two case studies, one from a fractions intelligent tutoring system and one from a concept learning task in prior work, that show how REM could be used both to detect when a new instructional policy may not be effective on actual students and to detect when it may improve student learning.
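The core idea, evaluating each candidate policy under several plausible student models rather than a single one, can be pictured with a toy sketch (illustrative only; the dictionaries-of-callables interface and the "beats the baseline under every model" decision rule are assumptions, not the paper's exact formulation):

```python
def robust_evaluation_matrix(policies, simulators):
    """Estimated outcome for every (policy, student model) pair.

    policies: dict mapping policy name -> policy object.
    simulators: dict mapping model name -> callable that simulates
    students learning under a policy and returns a mean outcome.
    """
    return {p_name: {m_name: simulate(policy)
                     for m_name, simulate in simulators.items()}
            for p_name, policy in policies.items()}


def robustly_better(matrix, candidate, baseline):
    """True if the candidate beats the baseline under EVERY student model."""
    return all(matrix[candidate][m] > matrix[baseline][m]
               for m in matrix[candidate])
```

A policy that wins only under the same model that was used to derive it is exactly the overestimation failure mode the abstract warns about; requiring a win across all plausible simulators guards against that model mismatch.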


International Conference on Artificial Intelligence and Statistics | 2016

A PAC RL Algorithm for Episodic POMDPs

Zhaohan Daniel Guo; Shayan Doroudi; Emma Brunskill


Grantee Submission | 2016

Sequence Matters but How Exactly? A Method for Evaluating Activity Sequences from Data

Shayan Doroudi; Kenneth Holstein; Vincent Aleven; Emma Brunskill


Educational Data Mining | 2015

Towards Understanding How to Leverage Sense-Making, Induction and Refinement, and Fluency to Improve Robust Learning

Shayan Doroudi; Kenneth Holstein; Vincent Aleven; Emma Brunskill


Educational Data Mining | 2017

The Misidentified Identifiability Problem of Bayesian Knowledge Tracing

Shayan Doroudi; Emma Brunskill




EDM (Workshops) | 2016

Sequence Matters, But How Do I Discover How? Towards a Workflow for Evaluating Activity Sequences from Data

Shayan Doroudi; Kenneth Holstein; Vincent Aleven; Emma Brunskill

Collaboration


Dive into Shayan Doroudi's collaborations.

Top Co-Authors

Emma Brunskill, Carnegie Mellon University

Vincent Aleven, Carnegie Mellon University

Kenneth Holstein, Carnegie Mellon University

Philip S. Thomas, University of Massachusetts Amherst