Katie Steele
London School of Economics and Political Science
Publications
Featured research published by Katie Steele.
Social Epistemology | 2007
Katie Steele; Helen M. Regan; Mark Colyvan; Mark A. Burgman
Group decisions raise a number of substantial philosophical and methodological issues. We focus on the goal of the group decision exercise itself. We ask: What should be counted as a good group decision‐making result? The right decision might not be accessible to, or please, any of the group members. Conversely, a popular decision can fail to be the correct decision. In this paper we discuss what it means for a decision to be “right” and what components are required in a decision process to produce happy decision‐makers. Importantly, we discuss how “right” decisions can produce happy decision‐makers, or rather, the conditions under which happy decision‐makers and right decisions coincide. In a large range of contexts, we argue for the adoption of formal consensus models to assist in the group decision‐making process. In particular, we advocate the formal consensus convergence model of Lehrer and Wagner (1981), because a strong case can be made as to why the underlying algorithm produces a result that should make each of the experts in a group happy. Arguably, this model facilitates true consensus, where the group choice is effectively each person’s individual choice. We analyse Lehrer and Wagner’s algorithm for reaching consensus on group probabilities/utilities in the context of complex decision‐making for conservation biology. While many conservation decisions are driven by a search for objective utility/probability distributions (regarding extinction risks of species and the like), other components of conservation management primarily concern the interests of stakeholders. We conclude with cautionary notes on mandating consensus in decision scenarios for which no fact of the matter exists. For such decision settings alternative types of social choice methods are more appropriate.
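The core of Lehrer and Wagner's consensus convergence model is an iterated weighted-averaging algorithm. The following is a minimal sketch of that idea only, with an invented "respect" matrix and invented initial estimates; it is not drawn from the paper itself.

```python
import numpy as np

# Hypothetical 'respect' matrix: W[i, j] is the weight expert i assigns to
# expert j's opinion. Each row sums to 1 (row-stochastic).
W = np.array([
    [0.5, 0.3, 0.2],
    [0.2, 0.6, 0.2],
    [0.3, 0.3, 0.4],
])

# Hypothetical initial probability estimates, e.g. for a species' extinction risk.
p = np.array([0.10, 0.30, 0.50])

# Repeatedly replace each expert's estimate with their weighted average of
# everyone's estimates; with all weights positive this converges to a consensus.
for _ in range(500):
    p_next = W @ p
    if np.max(np.abs(p_next - p)) < 1e-12:
        break
    p = p_next

print(p)  # all three entries agree on a single consensus probability
```

The convergence of the iterates to a single shared value is what underwrites the claim that the group choice is effectively each person's individual choice.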
Philosophy of Science | 2012
Katie Steele
Richard Rudner famously argues that the communication of scientific advice to policy makers involves ethical value judgments. His argument has, however, been rightly criticized. This article revives Rudner’s conclusion, by strengthening both his lines of argument: we generalize his initial assumption regarding the form in which scientists must communicate their results and complete his ‘backup’ argument by appealing to the difference between private and public decisions. Our conclusion that science advisors must, for deep-seated pragmatic reasons, make value judgments is further bolstered by reflections on how the scientific contribution to policy is far less straightforward than the Rudner-style model suggests.
The British Journal for the Philosophy of Science | 2013
Katie Steele; Charlotte Werndl
We argue that concerns about double-counting—using the same evidence both to calibrate or tune climate models and also to confirm or verify that the models are adequate—deserve more careful scrutiny in climate modelling circles. It is widely held that double-counting is bad and that separate data must be used for calibration and confirmation. We show that this is far from obviously true, and that climate scientists may be confusing their targets. Our analysis turns on a Bayesian/relative-likelihood approach to incremental confirmation. According to this approach, double-counting is entirely proper. We go on to discuss plausible difficulties with calibrating climate models, and we distinguish more and less ambitious notions of confirmation. Strong claims of confirmation may not, in many cases, be warranted, but it would be a mistake to regard double-counting as the culprit.
Contents: 1 Introduction; 2 Remarks about Models and Adequacy-for-Purpose; 3 Evidence for Calibration Can Also Yield Comparative Confirmation (3.1 Double-counting I; 3.2 Double-counting II); 4 Climate Science Examples: Comparative Confirmation in Practice (4.1 Confirmation due to better and worse best fits; 4.2 Confirmation due to more and less plausible forcings values); 5 Old Evidence; 6 Doubts about the Relevance of Past Data; 7 Non-comparative Confirmation and Catch-Alls; 8 Climate Science Example: Non-comparative Confirmation and Catch-Alls in Practice; 9 Concluding Remarks
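As a toy illustration of the Bayesian/relative-likelihood point (our own construction, with invented models and data, not the article's case study): the same data used to tune each model's free parameter can also discriminate between the models, because their marginal likelihoods for that data differ.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical observations (stand-ins for, say, temperature anomalies).
data = rng.normal(loc=0.8, scale=1.0, size=20)

# Two hypothetical model structures, each with one free parameter theta:
#   M1 predicts observations ~ Normal(theta, 1)
#   M2 predicts observations ~ Normal(2 * theta, 1)
theta_grid = np.linspace(-3, 3, 601)

def log_likelihood(means):
    # log of prod_i Normal(data_i | mean, sd=1), for each candidate mean
    return np.array([-0.5 * np.sum((data - m) ** 2)
                     - 0.5 * data.size * np.log(2 * np.pi) for m in means])

for name, means in [("M1", theta_grid), ("M2", 2 * theta_grid)]:
    loglik = log_likelihood(means)
    tuned_theta = theta_grid[np.argmax(loglik)]   # calibration / tuning step
    # Marginal likelihood under a flat prior on theta over the grid
    # (approximated as the average likelihood): the quantity relevant to
    # incremental, comparative confirmation.
    marginal = np.exp(loglik).mean()
    print(f"{name}: tuned theta = {tuned_theta:+.2f}, marginal likelihood = {marginal:.2e}")
```

Both models are tuned with the same data, yet the ratio of their marginal likelihoods still registers comparative confirmation of one over the other.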
Synthese | 2007
Katie Steele
I focus my discussion on the well-known Ellsberg paradox. I find good normative reasons for incorporating non-precise belief, as represented by sets of probabilities, in an Ellsberg decision model. This amounts to forgoing the completeness axiom of expected utility theory. Provided that probability sets are interpreted as genuinely indeterminate belief (as opposed to “imprecise” belief), such a model can moreover make the “Ellsberg choices” rationally permissible. Without some further element to the story, however, the model does not explain how an agent may come to have unique preferences for each of the Ellsberg options. Levi (1986, Hard choices: Decision making under unresolved conflict. Cambridge, New York: Cambridge University Press) holds that the extra element amounts to innocuous secondary “risk” or security considerations that are used to break ties when more than one option is rationally permissible. While I think a lexical choice rule of this kind is very plausible, I argue that it involves a greater break with expected utility theory than mere violation of the ordering axiom.
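A small numerical sketch of this kind of model (our own stakes and choice rules, for illustration only): belief is a set of probability functions over the urn's composition, an option is permissible if it maximizes expected value under some member of the set, and a security (worst-case) rule then breaks ties among permissible options.

```python
import numpy as np

# Ellsberg urn: 30 red balls, 60 black-or-yellow balls in unknown proportion.
# Belief is a SET of probability functions, one for each possible number b
# of black balls.
N = 90
belief_set = [{"red": 30 / N, "black": b / N, "yellow": (60 - b) / N}
              for b in range(61)]

# Bets pay 1 on the listed colours and 0 otherwise (stakes are hypothetical).
bets = {
    "I (red)": {"red"},
    "II (black)": {"black"},
    "III (red or yellow)": {"red", "yellow"},
    "IV (black or yellow)": {"black", "yellow"},
}

def expected_values(colours):
    return np.array([sum(p[c] for c in colours) for p in belief_set])

for pair in (("I (red)", "II (black)"),
             ("III (red or yellow)", "IV (black or yellow)")):
    evs = {name: expected_values(bets[name]) for name in pair}
    # Permissibility: optimal under at least one probability function in the set.
    permissible = [n for n in pair
                   if any(evs[n][i] >= max(evs[m][i] for m in pair)
                          for i in range(len(belief_set)))]
    # Security tie-break: highest worst-case expected value over the set.
    pick = max(permissible, key=lambda n: evs[n].min())
    print(f"{pair}: permissible = {permissible}, security pick = {pick}")
```

In each pair both bets come out permissible, and the security tie-break selects bets I and IV, matching the typical Ellsberg choices.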
Journal of Philosophical Logic | 2012
Katie Steele
This paper considers a special case of belief updating—when an agent learns testimonial data, or in other words, the beliefs of others on some issue. The interest in this case is twofold: (1) the linear averaging method for updating on testimony is somewhat popular in epistemology circles, and it is important to assess its normative acceptability, and (2) this facilitates a more general investigation of what it means/requires for an updating method to have a suitable Bayesian representation (taken here as the normative standard). The paper initially defends linear averaging against Bayesian-compatibility concerns raised by Bradley (Soc Choice Welf 29:609–632, 2007), as well as problems associated with multiple testimony updates. The resolution of these issues, however, requires an extremely nuanced interpretation of the parameters of the linear averaging model—the so-called weights of respect. We go on to propose a role that the parameters of any ‘shortcut’ updating function should play, by way of minimal interpretation of these parameters. The class of updating functions that is consistent with this role, however, excludes linear averaging, at least in its standard form.
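A minimal sketch of the linear averaging rule at issue (the numbers and the weight are invented for illustration): the updated probability is a weighted average of the agent's prior probability and the probability reported by the testifier, with the "weight of respect" as the model's free parameter.

```python
def linear_average_update(own_prob: float, reported_prob: float, respect: float) -> float:
    """Update on testimony by linear averaging; `respect` in [0, 1] is the
    weight given to the other agent's reported probability."""
    if not 0.0 <= respect <= 1.0:
        raise ValueError("respect must lie in [0, 1]")
    return (1 - respect) * own_prob + respect * reported_prob

# Hypothetical single update: my probability 0.2, your report 0.8, respect 0.4.
print(linear_average_update(0.2, 0.8, respect=0.4))  # 0.44

# With a fixed weight, two successive testimony updates are order-sensitive,
# one way of seeing why multiple updates put pressure on how the weights are
# interpreted.
a = linear_average_update(linear_average_update(0.2, 0.8, 0.4), 0.5, 0.4)
b = linear_average_update(linear_average_update(0.2, 0.5, 0.4), 0.8, 0.4)
print(a, b)  # 0.464 vs 0.512
```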
Philosophy of Science | 2016
Seamus Bradley; Katie Steele
This article considers a puzzling conflict between two positions that are each compelling: (a) it is irrational for an agent to pay to avoid ‘free’ evidence, and (b) rational agents may have imprecise beliefs. An important aspect of responding to this conflict is resolving the question of how rational (imprecise) agents ought to make sequences of decisions—we make explicit what the key alternatives are and defend our own approach. We endorse a resolution of the aforementioned puzzle—we privilege decision theories that merely permit avoiding free evidence over decision theories that make avoiding free information obligatory.
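To make position (a) vivid, here is a toy calculation (our own, with invented payoffs) of the standard result for a precise Bayesian expected-utility maximizer: deciding after a free, informative signal never has lower expected value than deciding without it.

```python
# Hypothetical decision: two states, two acts, payoffs in arbitrary units.
prior_H = 0.5
payoff = {"act_A": {"H": 10, "notH": 0},
          "act_B": {"H": 0, "notH": 8}}

# A free, imperfect signal that reports the true state with probability 0.8.
accuracy = 0.8

def expected_utility(act, p_H):
    return p_H * payoff[act]["H"] + (1 - p_H) * payoff[act]["notH"]

# Expected value of deciding now, without the signal.
eu_now = max(expected_utility(a, prior_H) for a in payoff)

# Expected value of deciding after seeing the signal: update by Bayes' rule
# on each possible report, choose the best act, and weight by the report's
# probability.
eu_with_signal = 0.0
for report_is_H in (True, False):
    like_H = accuracy if report_is_H else 1 - accuracy
    like_notH = 1 - accuracy if report_is_H else accuracy
    p_report = prior_H * like_H + (1 - prior_H) * like_notH
    p_H_post = prior_H * like_H / p_report
    eu_with_signal += p_report * max(expected_utility(a, p_H_post) for a in payoff)

print(eu_now, eu_with_signal)  # 5.0 and 7.2: the free signal cannot hurt in expectation
```

The puzzle arises because it is not obvious that an analogue of this guarantee holds once beliefs are imprecise and decisions come in sequences, which is the territory the article maps out.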
Journal of Strain Analysis for Engineering Design | 2016
Katie Steele; Charlotte Werndl
In engineering, as in other scientific fields, researchers seek to confirm their models with real-world data. It is common practice to assess models in terms of the distance between the model outputs and the corresponding experimental observations. An important question that arises is whether the model should then be ‘tuned’, in the sense of estimating the values of free parameters to get a better fit with the data, and furthermore whether the tuned model can be confirmed with the same data used to tune it. This dual use of data is often disparagingly referred to as ‘double-counting’. Here, we analyse these issues, with reference to selected research articles in engineering (one mechanical and the other civil). Our example studies illustrate more and less controversial practices of model tuning and double-counting, both of which, we argue, can be shown to be legitimate within a Bayesian framework. The question nonetheless remains as to whether the implied scientific assumptions in each case are apt from the engineering point of view.
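A small, entirely hypothetical example of the practice under discussion: a model's free parameter is "tuned" by minimizing the distance between model outputs and observations, and the question is then whether the resulting close fit to those same observations also confirms the model.

```python
import numpy as np

# Hypothetical load-deflection observations for a structural element.
load = np.array([1.0, 2.0, 3.0, 4.0, 5.0])             # kN
deflection = np.array([0.52, 0.98, 1.55, 2.05, 2.46])  # mm

# Model with one free parameter: deflection = load / stiffness.
# Tuning step: estimate the stiffness by least squares against the data.
stiffness = np.sum(load ** 2) / np.sum(load * deflection)  # kN/mm
predicted = load / stiffness

rmse = np.sqrt(np.mean((predicted - deflection) ** 2))
print(f"tuned stiffness = {stiffness:.2f} kN/mm, RMSE on the same data = {rmse:.3f} mm")
# Whether the small RMSE also counts as confirmation, given that these very
# data fixed the stiffness value, is the double-counting question.
```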
Synthese | 2017
Nicolas Wüthrich; Katie Steele
In the biomedical context, policy makers face a large amount of potentially discordant evidence from different sources. This prompts the question of how this evidence should be aggregated in the interests of best-informed policy recommendations. The starting point of our discussion is Hunter and Williams’ recent work on an automated aggregation method for medical evidence. Our negative claim is that it is far from clear what the relevant criteria for evaluating an evidence aggregator of this sort are. What is the appropriate balance between explicitly coded algorithms and implicit reasoning involved, for instance, in the packaging of input evidence? In short: What is the optimal degree of ‘automation’? On the positive side: We propose the ability to perform an adequate robustness analysis (which depends on the nature of the input variables and parameters of the aggregator) as the focal criterion, primarily because it directs efforts to what is most important, namely, the structure of the algorithm and the appropriate extent of automation. Moreover, where there are resource constraints on the aggregation process, one must also consider what balance between volume of evidence and accuracy in the treatment of individual evidence best facilitates inference. There is no prerogative to aggregate the total evidence available if this would in fact reduce overall accuracy.
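A stylized sketch (ours, not Hunter and Williams' aggregator) of the sort of robustness analysis proposed as the focal criterion: sweep the aggregator's free parameters, here invented reliability weights on individual studies, and check whether the overall recommendation ever flips.

```python
import itertools

# Hypothetical evidence items: each study reports an estimated treatment
# effect; positive values favour recommending the treatment.
study_effects = [0.30, 0.10, -0.05, 0.25]

def aggregate(effects, weights):
    """Weighted-average aggregator (a stand-in for a more elaborate method)."""
    return sum(w * e for w, e in zip(weights, effects)) / sum(weights)

def recommend(effects, weights, threshold=0.0):
    return "recommend" if aggregate(effects, weights) > threshold else "do not recommend"

# Robustness check: sweep each study's reliability weight over a plausible
# range and record every recommendation the aggregator produces.
weight_grid = (0.5, 1.0, 2.0)
outcomes = {recommend(study_effects, w)
            for w in itertools.product(weight_grid, repeat=len(study_effects))}
print(outcomes)  # a single element means the recommendation is robust to the weights
```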
The British Journal for the Philosophy of Science | 2016
Katie Steele; Charlotte Werndl
This article argues that common intuitions regarding (a) the specialness of ‘use-novel’ data for confirmation and (b) the claim that this specialness implies the ‘no-double-counting rule’, which says that data used in ‘constructing’ (calibrating) a model cannot also play a role in confirming the model’s predictions, are too crude. The intuitions in question are pertinent in all the sciences, but we appeal to a climate science case study to illustrate what is at stake. Our strategy is to analyse the intuitive claims in light of prominent accounts of confirmation of model predictions. We show that on the Bayesian account of confirmation, and also on the standard classical hypothesis-testing account, claims (a) and (b) are not generally true; but for some select cases, it is possible to distinguish data used for calibration from use-novel data, where only the latter confirm. The more specialized classical model-selection methods, on the other hand, uphold a nuanced version of claim (a), but this comes apart from (b), which must be rejected in favour of a more refined account of the relationship between calibration and confirmation. Thus, depending on the framework of confirmation, either the scope or the simplicity of the intuitive position must be revised.
Contents: 1 Introduction; 2 A Climate Case Study; 3 The Bayesian Method vis-à-vis Intuitions; 4 Classical Tests vis-à-vis Intuitions; 5 Classical Model-Selection Methods vis-à-vis Intuitions (5.1 Introducing classical model-selection methods; 5.2 Two cases); 6 Re-examining Our Case Study; 7 Conclusion
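For concreteness, here is a generic classical model-selection comparison (our own toy example, not the article's climate case study): the same data are used both to fit each candidate model's free parameters and to score the models, with the fit then penalized for the number of fitted parameters, which is one refined way in which calibration and confirmation interact.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical data: a quadratic trend plus noise.
x = np.linspace(0.0, 1.0, 40)
y = 1.0 + 0.5 * x + 2.0 * x ** 2 + rng.normal(scale=0.3, size=x.size)

def aic_for_polynomial(degree):
    """Fit a polynomial of the given degree by least squares (the calibration
    step), then score it with AIC using the same data."""
    coeffs = np.polyfit(x, y, degree)
    residuals = y - np.polyval(coeffs, x)
    sigma2 = np.mean(residuals ** 2)          # maximum-likelihood noise variance
    n, k = x.size, degree + 2                 # fitted coefficients + noise variance
    max_log_likelihood = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1)
    return 2 * k - 2 * max_log_likelihood

for degree in (1, 2, 5):
    print(f"degree {degree}: AIC = {aic_for_polynomial(degree):.1f}")
# Lower AIC is better; the complexity penalty (2k) is what separates this
# score from simply rewarding the best in-sample fit.
```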
The British Journal for the Philosophy of Science | 2016
Camilla Colombo; Katie Steele
The precautionary principle (PP) roughly recommends that we make careful decisions about how to act, especially where long-term human well-being is at stake, and where we face uncertainty that is more or less severe about the decision problem at hand. Not surprisingly, there has been much controversy about how this maxim for decision-making should be cashed out, and whether it provides any new, concrete guidance at all. Some argue that the PP is merely a ragbag of wisdom about appropriate values, beliefs, and decision rules governing the choice of how to act in any given circumstance. Moreover, when it comes to decision rules for negotiating uncertainty, there seems to be internal division as to whether the PP merely promotes the orthodox (‘expected value’) rule, or rather sides with a more controversial rule that gives extra emphasis, in evaluating acts, to the worst-case scenarios. Detractors argue that, either way, there are problems: the former ‘weak’ interpretation is vacuous, while the latter ‘strong’ interpretation is inconsistent. Against this criticism, Steel (in the book under review) seeks an account of the PP that is unified and coherent, yet still recognizably broad in content. Steel’s strategy is to recast the ‘weak’ versus ‘strong’ divide rather as different, complementary levels of advice for negotiating uncertainty (Chapters 1–2). Moreover, he fleshes out …
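To illustrate the contrast at stake between the 'weak' and 'strong' readings, here is a bare-bones comparison (our own invented numbers) of the orthodox expected-value rule with a rule that evaluates acts by their worst-case outcomes.

```python
# Hypothetical policy choice under uncertainty: payoffs for two acts across
# three scenarios, with rough scenario probabilities (all numbers invented).
probs = [0.6, 0.3, 0.1]
payoffs = {
    "deploy technology": [10, 4, -50],
    "precautionary ban": [2, 2, 0],
}

for act, outcomes in payoffs.items():
    expected = sum(p * o for p, o in zip(probs, outcomes))  # orthodox rule
    worst = min(outcomes)                                   # worst-case rule
    print(f"{act}: expected value = {expected:.1f}, worst case = {worst}")
# With these numbers the expected-value rule favours deployment (2.2 vs 1.8),
# while the worst-case rule favours the ban (0 vs -50): the kind of divergence
# the weak/strong debate turns on.
```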