Publications


Featured research published by Luke Miratrix.


Human Factors in Computing Systems | 2013

Predicting users' first impressions of website aesthetics with a quantification of perceived visual complexity and colorfulness

Katharina Reinecke; Tom Yeh; Luke Miratrix; Rahmatri Mardiko; Yuechen Zhao; Jenny Jiaqi Liu; Krzysztof Z. Gajos

Users make lasting judgments about a website's appeal within a split second of seeing it for the first time. This first impression is influential enough to later affect their opinions of a site's usability and trustworthiness. In this paper, we demonstrate a means to predict the initial impression of aesthetics based on perceptual models of a website's colorfulness and visual complexity. In an online study, we collected ratings of colorfulness, visual complexity, and visual appeal of a set of 450 websites from 548 volunteers. Based on these data, we developed computational models that accurately measure the perceived visual complexity and colorfulness of website screenshots. In combination with demographic variables such as a user's education level and age, these models explain approximately half of the variance in the ratings of aesthetic appeal given after viewing a website for only 500 ms.
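As a rough illustration of the modeling idea (this is not the authors' code; the data, the quadratic complexity term, and all coefficients below are assumptions made for the sketch), one can regress mean appeal ratings on image-derived complexity and colorfulness scores plus a demographic covariate:

```python
# A hypothetical sketch of the modeling idea, not the authors' pipeline.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 450  # one row per website, matching the size of the stimulus set

complexity = rng.uniform(0, 1, n)   # model-estimated visual complexity
colorful = rng.uniform(0, 1, n)     # model-estimated colorfulness
age = rng.normal(35, 10, n)         # a demographic covariate (assumed)

# Assumed relationship for this sketch: appeal peaks at moderate complexity.
appeal = (5 - 4 * (complexity - 0.5) ** 2 + 1.5 * colorful
          - 0.02 * age + rng.normal(0, 1, n))

# Linear model with a quadratic complexity term.
X = np.column_stack([complexity, (complexity - 0.5) ** 2, colorful, age])
model = LinearRegression().fit(X, appeal)
print("variance explained (R^2):", round(model.score(X, appeal), 2))
```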


The Annals of Applied Statistics | 2016

Compared to what? Variation in the impacts of early childhood education by alternative care type

Avi Feller; Todd Grindal; Luke Miratrix; Lindsay C. Page

Early childhood education research often compares a group of children who receive the intervention of interest to a group of children who receive care in a range of different care settings. In this paper, we estimate differential impacts of an early childhood intervention by alternative care setting, using data from the Head Start Impact Study, a large-scale randomized evaluation. To do so, we utilize a Bayesian principal stratification framework to estimate separate impacts for two types of Compliers: those children who would otherwise be in other center-based care when assigned to control and those who would otherwise be in home-based care. We find strong, positive short-term effects of Head Start on receptive vocabulary for those Compliers who would otherwise be in home-based care. By contrast, we find no meaningful impact of Head Start on vocabulary for those Compliers who would otherwise be in other center-based care. Our findings suggest that alternative care type is a potentially important source of variation in early childhood education interventions.


Conference on Electronic Voting Technology / Workshop on Trustworthy Elections | 2009

Implementing risk-limiting post-election audits in California

Joseph Lorenzo Hall; Luke Miratrix; Philip B. Stark; Melvin Briones; Elaine Ginnold; Freddie Oakley; Martin Peaden; Gail Pellerin; Tom Stanionis; Tricia Webber

Risk-limiting postelection audits limit the chance of certifying an electoral outcome if the outcome is not what a full hand count would show. Building on previous work [18, 17, 20, 21, 11], we report pilot risk-limiting audits in four elections during 2008 in three California counties: one during the February 2008 Primary Election in Marin County and three during the November 2008 General Elections in Marin, Santa Cruz and Yolo Counties. We explain what makes an audit risk-limiting and how existing and proposed laws fall short. We discuss the differences among our four pilot audits. We identify challenges to practical, efficient risk-limiting audits and conclude that current approaches are too complex to be used routinely on a large scale. One important logistical bottleneck is the difficulty of exporting data from commercial election management systems in a format amenable to audit calculations. Finally, we propose a barebones risk-limiting audit that is less efficient than these pilot audits, but avoids many practical problems.


arXiv: Statistics Theory | 2015

To Adjust or Not to Adjust? Sensitivity Analysis of M-Bias and Butterfly-Bias

Peng Ding; Luke Miratrix

“M-Bias,” as it is called in the epidemiologic literature, is the bias introduced by conditioning on a pretreatment covariate due to a particular “M-Structure” between two latent factors, an observed treatment, an outcome, and a “collider.” This potential source of bias, which can occur even when the treatment and the outcome are not confounded, has been a source of considerable controversy. Here we present formulae identifying the circumstances under which such biases are inflated or reduced. In particular, we show that the magnitude of M-Bias in linear structural equation models tends to be relatively small compared to confounding bias, suggesting that it is generally not a serious concern in many applied settings. These theoretical results are consistent with recent empirical findings from simulation studies. We also generalize the M-Bias setting (1) to allow for the correlation between the latent factors to be nonzero and (2) to allow for the collider to be a confounder between the treatment and the outcome. These results demonstrate that mild deviations from the M-Structure tend to increase confounding bias more rapidly than M-Bias, suggesting that choosing to condition on any given covariate is generally the superior choice. As an application, we re-examine a controversial example between Professors Donald Rubin and Judea Pearl.
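A small simulation makes the M-Structure concrete (a sketch with assumed coefficients, not taken from the paper): latent U1 causes the treatment T and the collider M, latent U2 causes M and the outcome Y, so T and Y are unconfounded and adjusting for M introduces bias rather than removing it.

```python
# Minimal M-Structure simulation with assumed coefficients.
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
U1, U2 = rng.normal(size=n), rng.normal(size=n)
T = 0.8 * U1 + rng.normal(size=n)              # treatment, driven by U1 only
M = 0.8 * U1 + 0.8 * U2 + rng.normal(size=n)   # collider between U1 and U2
Y = 0.0 * T + 0.8 * U2 + rng.normal(size=n)    # true treatment effect is zero

def ols_coef(y, cols):
    """Least-squares coefficient on the first regressor in `cols`."""
    X = np.column_stack([np.ones(len(y))] + list(cols))
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

print("unadjusted estimate:  ", ols_coef(Y, [T]))     # approximately 0
print("adjusted for collider:", ols_coef(Y, [T, M]))  # nonzero M-Bias
```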


American Journal of Evaluation | 2015

Principal Stratification: A Tool for Understanding Variation in Program Effects Across Endogenous Subgroups

Lindsay C. Page; Avi Feller; Todd Grindal; Luke Miratrix; Marie-Andrée Somers

Increasingly, researchers are interested in questions regarding treatment-effect variation across partially or fully latent subgroups defined not by pretreatment characteristics but by post-randomization actions. One promising approach to address such questions is principal stratification. Under this framework, a researcher defines endogenous subgroups, or principal strata, based on post-randomization behaviors under both the observed and the counterfactual experimental conditions. These principal strata give structure to such research questions and provide a framework for determining estimation strategies to obtain desired effect estimates. This article provides a nontechnical primer to principal stratification. We review selected applications to highlight the breadth of substantive questions and methodological issues that this method can inform. We then discuss its relationship to instrumental variables analysis to address binary noncompliance in an experimental context and highlight how the framework can be generalized to handle more complex posttreatment patterns. We emphasize the counterfactual logic fundamental to principal stratification and the key assumptions that render analytic challenges more tractable. We briefly discuss technical aspects of estimation procedures, providing a short guide for interested readers.
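For the binary-noncompliance case the abstract mentions, the strata logic can be written down directly (a minimal illustrative sketch using the standard complier/never-taker/always-taker/defier taxonomy, not code from the article):

```python
# Each unit has potential treatment receipts D(0) and D(1); the pair defines
# its (latent) principal stratum under the standard taxonomy.
STRATA = {
    (0, 1): "complier",      # takes treatment only if assigned to it
    (0, 0): "never-taker",   # never takes treatment
    (1, 1): "always-taker",  # always takes treatment
    (1, 0): "defier",        # does the opposite of assignment
}

def stratum(d_if_control: int, d_if_treated: int) -> str:
    """Map a unit's pair of potential behaviors to its principal stratum."""
    return STRATA[(d_if_control, d_if_treated)]

# Only one of D(0), D(1) is ever observed per unit, which is why stratum
# membership is latent and must be inferred or bounded.
print(stratum(0, 1))  # "complier"
```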


IEEE Transactions on Information Forensics and Security | 2009

Election Audits Using a Trinomial Bound

Luke Miratrix; Philip B. Stark

In November 2008, we audited contests in Santa Cruz and Marin counties, California. The audits were risk-limiting: they had a prespecified minimum chance of requiring a full hand count if the outcomes were wrong. We developed a new technique for these audits, the trinomial bound. Batches of ballots are selected for audit using probabilities proportional to the amount of error each batch can conceal. Votes in the sample batches are counted by hand. Totals for each batch are compared to the semiofficial results. The “taint” in each sample batch is computed by dividing the largest relative overstatement of any margin by the largest possible relative overstatement of any margin. The observed taints are binned into three groups: less than or equal to zero, between zero and a threshold d, and larger than d. The numbers of batches in the three bins have a joint trinomial distribution. An upper confidence bound for the overstatement of the margin in the election as a whole is constructed by inverting tests for trinomial category probabilities and projecting the resulting set. If that confidence bound is sufficiently small, the hypothesis that the outcome is wrong is rejected, and the audit stops. If not, there is a full hand count. We conducted the audits with a risk limit of 25%, ensuring at least a 75% chance of a full manual count if the outcomes were wrong. The trinomial confidence bound confirmed the results without a full count, even though the Santa Cruz audit found some errors. The trinomial bound gave better results than the Stringer bound, which is commonly used to analyze financial audit samples drawn with probability proportional to error bounds.
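The taint-and-bin step described above lends itself to a short sketch (hypothetical batch numbers, not data from these audits):

```python
# Compute each audited batch's taint and bin the taints at 0 and a
# threshold d, yielding the trinomial counts used by the bound.
import numpy as np

d = 0.05  # binning threshold, chosen by the auditor (assumed value)
observed_overstatement = np.array([0.000, 0.010, -0.002, 0.003, 0.000])
max_possible_overstatement = np.array([0.20, 0.25, 0.18, 0.22, 0.19])

taint = observed_overstatement / max_possible_overstatement
counts = (
    int(np.sum(taint <= 0)),                  # bin 1: no overstatement
    int(np.sum((taint > 0) & (taint <= d))),  # bin 2: small positive taint
    int(np.sum(taint > d)),                   # bin 3: taint above threshold
)
print("trinomial counts:", counts)
# The audit then inverts tests on the trinomial cell probabilities to get an
# upper confidence bound on the total overstatement of the margin.
```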


arXiv: Methodology | 2016

A Conditional Randomization Test to Account for Covariate Imbalance in Randomized Experiments

Jonathan Philip Hennessy; Tirthankar Dasgupta; Luke Miratrix; Cassandra Wolos Pattanayak; Pradipta Sarkar

We consider the conditional randomization test as a way to account for covariate imbalance in randomized experiments. The test accounts for covariate imbalance by comparing the observed test statistic to the null distribution of the test statistic conditional on the observed covariate imbalance. We prove that the conditional randomization test has the correct significance level and introduce original notation to describe covariate balance more formally. Through simulation, we verify that conditional randomization tests behave like more traditional forms of covariate adjustment but have the added benefit of having the correct conditional significance level. Finally, we apply the approach to a randomized product marketing experiment where covariate information was collected after randomization.
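A minimal simulation sketch of the idea (assumed data-generating process and a simple approximate conditioning rule, not the paper's exact procedure): the null distribution is built only from re-randomizations whose covariate imbalance is close to the observed imbalance.

```python
# Conditional randomization test: permute assignments under the sharp null,
# but keep only permutations matching the observed covariate imbalance.
import numpy as np

rng = np.random.default_rng(2)
n = 100
x = rng.normal(size=n)                       # a pretreatment covariate
z = rng.binomial(1, 0.5, size=n)             # observed assignment
y = 0.3 * z + 0.5 * x + rng.normal(size=n)   # outcome with a real effect

def imbalance(assign):
    return abs(x[assign == 1].mean() - x[assign == 0].mean())

def effect(assign):
    return y[assign == 1].mean() - y[assign == 0].mean()

obs_stat, obs_bal = effect(z), imbalance(z)
null = []
while len(null) < 2000:
    z_star = rng.permutation(z)
    # Accept only re-randomizations with (approximately) the observed imbalance.
    if abs(imbalance(z_star) - obs_bal) < 0.02:
        null.append(effect(z_star))

p = np.mean(np.abs(null) >= abs(obs_stat))
print("conditional randomization p-value:", p)
```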


Journal of the American Statistical Association | 2018

Decomposing Treatment Effect Variation

Peng Ding; Avi Feller; Luke Miratrix

Understanding and characterizing treatment effect variation in randomized experiments has become essential for going beyond the “black box” of the average treatment effect. Nonetheless, traditional statistical approaches often ignore or assume away such variation. In the context of randomized experiments, this article proposes a framework for decomposing overall treatment effect variation into a systematic component explained by observed covariates and a remaining idiosyncratic component. Our framework is fully randomization-based, with estimates of treatment effect variation that are entirely justified by the randomization itself. Our framework can also account for noncompliance, which is an important practical complication. We make several contributions. First, we show that randomization-based estimates of systematic variation are very similar in form to estimates from fully interacted linear regression and two-stage least squares. Second, we use these estimators to develop an omnibus test for systematic treatment effect variation, both with and without noncompliance. Third, we propose an R²-like measure of treatment effect variation explained by covariates and, when applicable, noncompliance. Finally, we assess these methods via simulation studies and apply them to the Head Start Impact Study, a large-scale randomized experiment.
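The connection to fully interacted regression can be sketched in a few lines (simulated data with an assumed effect model; this mirrors, rather than reproduces, the paper's randomization-based estimators):

```python
# Estimate the systematic component of treatment effect variation with a
# fully interacted regression: y ~ 1 + z + x + z:x.
import numpy as np

rng = np.random.default_rng(3)
n = 5000
x = rng.normal(size=n)                  # covariate driving effect variation
z = rng.binomial(1, 0.5, size=n)
tau = 1.0 + 0.7 * x                     # unit-level treatment effects (assumed)
y = 0.5 * x + z * tau + rng.normal(size=n)

X = np.column_stack([np.ones(n), z, x, z * x])
beta = np.linalg.lstsq(X, y, rcond=None)[0]
tau_hat = beta[1] + beta[3] * x         # systematic effect given x

# The variance of tau_hat estimates the covariate-explained ("systematic")
# share of effect variation, the numerator of an R²-like measure.
print("systematic variance of tau-hat:", round(tau_hat.var(), 3))
print("true effect variance:          ", round(tau.var(), 3))
```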


Journal of Educational and Behavioral Statistics | 2017

Principal Score Methods: Assumptions, Extensions, and Practical Considerations

Avi Feller; Fabrizia Mealli; Luke Miratrix

Researchers addressing posttreatment complications in randomized trials often turn to principal stratification to define relevant assumptions and quantities of interest. One approach for the subsequent estimation of causal effects in this framework is to use methods based on the “principal score,” the conditional probability of belonging to a certain principal stratum given covariates. These methods typically assume that stratum membership is as good as randomly assigned, given these covariates. We clarify the key assumption in this context, known as principal ignorability, and argue that versions of this assumption are quite strong in practice. We describe these concepts in terms of both one- and two-sided noncompliance and propose a novel approach for researchers to “mix and match” principal ignorability assumptions with alternative assumptions, such as the exclusion restriction. Finally, we apply these ideas to randomized evaluations of a job training program and an early childhood education program. Overall, applied researchers should acknowledge that principal score methods, while useful tools, rely on assumptions that are typically hard to justify in practice.
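Under one-sided noncompliance, the principal score can be estimated directly because treated units reveal their stratum. A minimal sketch (simulated data; the logistic form is an assumption of the sketch, not prescribed by the article):

```python
# Principal-score estimation under one-sided noncompliance: controls cannot
# access treatment, so D among the treated reveals complier status, and a
# model for P(complier | x) can be fit in the treatment arm.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
n = 4000
x = rng.normal(size=(n, 2))                      # pretreatment covariates
p_complier = 1 / (1 + np.exp(-(x[:, 0] - 0.5)))  # true stratum probabilities
complier = rng.binomial(1, p_complier)           # latent stratum membership
z = rng.binomial(1, 0.5, size=n)                 # random assignment
d = z * complier                                 # one-sided: D = Z * C

treated = z == 1
score_model = LogisticRegression().fit(x[treated], d[treated])
e_hat = score_model.predict_proba(x)[:, 1]       # principal scores, all units
print("mean estimated principal score:", round(e_hat.mean(), 3))
```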


Journal of Research on Educational Effectiveness | 2018

Bounding, an Accessible Method for Estimating Principal Causal Effects, Examined and Explained

Luke Miratrix; Jane Furey; Avi Feller; Todd Grindal; Lindsay C. Page

Estimating treatment effects for subgroups defined by posttreatment behavior (i.e., estimating causal effects in a principal stratification framework) can be technically challenging and heavily reliant on strong assumptions. We investigate an alternative path: using bounds to identify ranges of possible effects that are consistent with the data. This simple approach relies on fewer assumptions and yet can result in policy-relevant findings. As we show, even moderately predictive covariates can be used to substantially tighten bounds in a straightforward manner. Via simulation, we demonstrate which types of covariates are maximally beneficial. We conclude with an analysis of a multisite experimental study of Early College High Schools (ECHSs). When examining the program's impact on students completing the ninth grade “on-track” for college, we find little impact for ECHS students who would otherwise attend a high-quality high school, but substantial effects for those who would not. This suggests a potential benefit in expanding these programs in areas primarily served by lower quality schools.
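The bounding idea can be illustrated with the classic trimming construction (a sketch with simulated data, not the study's analysis): if a latent stratum is a known fraction p of the control arm, its mean control outcome is bracketed by the means of the lowest and highest p-fractions of control outcomes; covariates tighten the bounds by applying the same construction within covariate cells.

```python
# Worst-case trimming bounds on a latent stratum's mean control outcome.
import numpy as np

rng = np.random.default_rng(5)
y_control = rng.normal(size=1000)  # observed control outcomes (strata mixed)
p = 0.4                            # stratum proportion, identified elsewhere

k = int(round(p * len(y_control)))
y_sorted = np.sort(y_control)
lower = y_sorted[:k].mean()   # worst case: stratum holds the k lowest outcomes
upper = y_sorted[-k:].mean()  # best case: stratum holds the k highest outcomes
print(f"bounds on the stratum's control mean: [{lower:.2f}, {upper:.2f}]")
```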

Collaboration


Dive into Luke Miratrix's collaborations.

Top Co-Authors

Avi Feller, University of California
Peng Ding, University of California
Bin Yu, University of California
Brian Gawalt, University of California
Jinzhu Jia, University of California