Kevin A. Clarke
University of Rochester
Publication
Featured research published by Kevin A. Clarke.
Conflict Management and Peace Science | 2005
Kevin A. Clarke
Quantitative political science is awash in control variables. The justification for these bloated specifications is usually the fear of omitted variable bias. A key underlying assumption is that the danger posed by omitted variable bias can be ameliorated by the inclusion of relevant control variables. Unfortunately, as this article demonstrates, there is nothing in the mathematics of regression analysis that supports this conclusion. The inclusion of additional control variables may increase or decrease the bias, and we cannot know for sure which is the case in any particular situation. A brief discussion of alternative strategies for achieving experimental control follows the main result.
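The core claim, that including a relevant control variable can increase rather than decrease bias, is easy to illustrate by simulation. The data-generating process below is hypothetical (it is not from the article): the omitted variable x3 is uncorrelated with x1, so regressing y on x1 alone carries only a small bias, yet conditioning on the relevant control x2 inflates the bias on x1's coefficient tenfold.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
b1, b2, b3 = 1.0, 0.1, 1.0              # true coefficients

x1 = rng.standard_normal(n)
x2 = x1 + rng.standard_normal(n)        # relevant control, correlated with x1
x3 = x1 - x2                            # omitted variable; cov(x1, x3) = 0
y = b1 * x1 + b2 * x2 + b3 * x3 + rng.standard_normal(n)

def first_coef(y, *cols):
    """OLS slope on the first regressor (all variables are mean zero)."""
    X = np.column_stack(cols)
    return np.linalg.lstsq(X, y, rcond=None)[0][0]

short = first_coef(y, x1)        # y on x1 alone: converges to b1 + 0.1
long_ = first_coef(y, x1, x2)    # adding the control: converges to b1 + 1.0
print(short, long_)
```

Here the short regression is only mildly biased (x3 is orthogonal to x1), but adding x2 turns x1 into a proxy for the omitted x3, which is exactly the kind of bias amplification the article warns about.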
Journal of Conflict Resolution | 2003
Kevin A. Clarke
This study introduces a simple nonparametric test for the relative discrimination of models in international relations research. The common parametric approach, the Vuong test, does not perform well under the small-n, high canonical correlation conditions that are sometimes encountered in world politics research. The nonparametric approach outperforms the Vuong test in Monte Carlo experiments and is trivial to implement even for the most complicated models. The method is applied to two empirical examples: the debate over long cycles and the effect of domestic politics on foreign policy decision making.
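The abstract does not spell out the procedure, but a distribution-free comparison of two models' fit is commonly implemented as a paired sign test on per-observation log-likelihoods: under the null of equal fit, the number of observations favoring model A is Binomial(n, 0.5). The sketch below takes that form; function and variable names are illustrative, not the article's.

```python
import math

def sign_test(ll_a, ll_b):
    """Distribution-free model comparison. Counts observations where
    model A has the higher log-likelihood; under the null of equal fit
    the count is Binomial(n, 0.5). Returns the count and an exact
    two-sided binomial p-value."""
    b = sum(1 for a, c in zip(ll_a, ll_b) if a > c)
    n = len(ll_a)
    tail = sum(math.comb(n, k) for k in range(max(b, n - b), n + 1)) / 2 ** n
    return b, min(1.0, 2 * tail)

# Illustrative per-observation log-likelihoods: model A fits better
# on 9 of 10 observations.
b, p = sign_test([1] * 9 + [0], [0] * 9 + [1])
print(b, p)
```

Because the test only needs the sign of each per-observation log-likelihood difference, it is trivial to apply to any pair of fitted models, which matches the abstract's claim about ease of implementation.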
American Political Science Review | 2008
Kevin A. Clarke; Randall W. Stone
Although democracy is a key concept in political science, debate continues over definitions and mechanisms. Bueno de Mesquita, Smith, Siverson, and Morrow (2003) make the important claim that most of democracy's effects are in fact due to something conceptually simpler and empirically easier to measure than democracy: the size of the minimum winning coalition that selects the leader. The argument is intuitively appealing and supported by extensive data analysis. Unfortunately, the statistical technique they use introduces omitted variable bias into their results. They argue that they need to control for democracy, but their estimation procedure is equivalent to omitting democracy from their analysis. When we reestimate their regressions controlling for democracy, most of their important findings do not survive.
Perspectives on Politics | 2007
Kevin A. Clarke; David M. Primo
Although the use of models has come to dominate much of the scientific study of politics, the discipline’s understanding of the role or function that models play in the scientific enterprise has not kept pace. We argue that models should be assessed for their usefulness for a particular purpose, not solely for the accuracy of their predictions. We provide a typology of the uses to which models may be put, and show how these uses are obscured by the field’s emphasis on model testing. Our approach highlights the centrality of models in scientific reasoning, avoids the logical inconsistencies of current practice, and offers political scientists a new way of thinking about the relationship between the natural world and the models with which we are so familiar.
Conflict Management and Peace Science | 2009
Kevin A. Clarke
Scholars often assume that the danger posed by omitted variable bias can be ameliorated by the inclusion of large numbers of relevant control variables. However, there is nothing in the mathematics of regression analysis that supports this conclusion. This paper goes beyond textbook treatments of omitted variable bias and shows, both for OLS and for generalized linear models, that the inclusion of additional control variables may increase or decrease the bias, and we cannot know for sure which is the case in any particular situation. The last section of the paper shows how formal sensitivity analysis can be used to determine whether omitted variables are a problem. A substantive example demonstrates the method.
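As a rough illustration of what a formal sensitivity analysis involves (the numbers and grid below are hypothetical, not the article's example), one can tabulate how a naive estimate would move over a grid of assumed confounder strengths, using the classical omitted-variable-bias decomposition bias = delta * gamma, where delta is the confounder's association with the treatment and gamma its effect on the outcome.

```python
import numpy as np

naive_estimate = 0.50                       # illustrative coefficient

deltas = np.linspace(-0.5, 0.5, 5)          # treatment-confounder association
gammas = np.linspace(-1.0, 1.0, 5)          # confounder-outcome effect

# Classical omitted-variable-bias formula: bias = delta * gamma, so the
# bias-corrected estimate over the grid is:
adjusted = naive_estimate - np.outer(deltas, gammas)
print("adjusted estimates range from", adjusted.min(), "to", adjusted.max())
```

Reading the grid tells a researcher how strong an omitted variable would have to be before the substantive conclusion changes, which is the question formal sensitivity analysis is designed to answer.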
Political Studies | 2010
Kevin A. Clarke; Curtis S. Signorino
We consider the problem of choosing between rival statistical models that are non-nested in terms of their functional forms. We assess the ability of two tests, one parametric and one distribution-free, to discriminate between such models. Our Monte Carlo simulations demonstrate that both tests are, to varying degrees, able to discriminate between strategic and non-strategic discrete choice models. The distribution-free test appears to have greater relative power in small samples.
Comparative Political Studies | 2007
Kevin A. Clarke
The aim of this article is to demonstrate that comparative theory testing is necessary if political scientists wish to make positive statements regarding the confirmation of their theories. Using the tools of formal logic, the author first establishes that theory confirmation is not possible when a theory is tested in isolation, regardless of the statistical approach—falsificationism, confirmationism, or Bayesian confirmationism—employed by the researcher. The author then establishes a necessary and sufficient condition for positive theory confirmation and shows that this condition is met only when two rival theories are tested against one another. Finally, the author discusses two methods of comparative theory testing demonstrating that being comparative, besides being necessary, is also straightforward and practical.
Conflict Management and Peace Science | 2012
Kevin A. Clarke
In this article, I address whether obtaining an unbiased estimate of the residual variance is a good reason to include control variables. I conclude that an additional control variable is unlikely to decrease the estimated standard errors and that the tradeoffs involved in including a new control variable are simply too large.
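One side of that tradeoff is easy to demonstrate by simulation: adding a control that is correlated with the variable of interest but irrelevant to the outcome inflates the estimated standard error by the usual variance-inflation factor, roughly 1/sqrt(1 - r^2). The data-generating process below is an assumed illustration, not the article's example.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50_000
r = 0.9                                    # corr(x1, x2)

x1 = rng.standard_normal(n)
x2 = r * x1 + np.sqrt(1 - r ** 2) * rng.standard_normal(n)
y = 1.0 * x1 + rng.standard_normal(n)      # x2 is irrelevant to y

def coef_se(y, X):
    """OLS standard error of the first coefficient."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = resid @ resid / (X.shape[0] - X.shape[1])
    cov = sigma2 * np.linalg.inv(X.T @ X)
    return np.sqrt(cov[0, 0])

se_short = coef_se(y, x1[:, None])
se_long = coef_se(y, np.column_stack([x1, x2]))
print(se_long / se_short)    # roughly 1 / sqrt(1 - r**2), about 2.29
```

With r = 0.9 the standard error on x1 more than doubles for no gain in bias reduction, a concrete instance of the costs the article weighs against the supposed benefits.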
Journal of Nonparametric Statistics | 2012
Mark Fey; Kevin A. Clarke
In this paper, we are interested in the inconsistencies that can arise in the context of rank-based multiple comparisons. It is well known that these inconsistencies exist, but we prove that every possible distribution-free rank-based multiple comparison procedure with certain reasonable properties is susceptible to these phenomena. The proof is based on a generalisation of Arrow's theorem, a fundamental result in social choice theory which states that when faced with three or more alternatives, it is impossible to rationally aggregate preference rankings subject to certain desirable properties. Applying this theorem to treatment rankings, we generalise a number of existing results in the literature and demonstrate that procedures that use rank sums cannot be improved. Finally, we show that the best possible procedures are based on the Friedman rank statistic and the k-sample sign statistic, in that these statistics minimise the potential for paradoxical results.
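For reference, the Friedman rank statistic mentioned above can be computed directly from an n-block, k-treatment layout: rank the treatments within each block, sum the ranks per treatment, and plug the sums into the standard formula. The sketch below assumes no ties within a block, and the data are made up for illustration.

```python
import numpy as np

def friedman_stat(data):
    """Friedman rank statistic for an n-block, k-treatment layout
    (assumes no ties within a block). Compare to chi-squared, k-1 df."""
    data = np.asarray(data, dtype=float)
    n, k = data.shape
    ranks = data.argsort(axis=1).argsort(axis=1) + 1   # within-block ranks
    rank_sums = ranks.sum(axis=0)
    return 12.0 / (n * k * (k + 1)) * (rank_sums ** 2).sum() - 3 * n * (k + 1)

stat = friedman_stat([[10, 20, 30],
                      [12, 25, 31],
                      [22, 18, 33]])
print(stat)    # 14/3, approximately 4.667
```

Because the statistic depends on the data only through within-block ranks, it is exactly the kind of rank-sum procedure the paper shows to be optimal among distribution-free alternatives.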
Political Analysis | 2007
Kevin A. Clarke