Jan Sprenger
Tilburg University
Publications
Featured research published by Jan Sprenger.
Philosophy of Science | 2011
Jonah N. Schupbach; Jan Sprenger
This article introduces and defends a probabilistic measure of the explanatory power that a particular explanans has over its explanandum. To this end, we propose several intuitive, formal conditions of adequacy for an account of explanatory power. Then, we show that these conditions are uniquely satisfied by one particular probabilistic function. We proceed to strengthen the case for this measure of explanatory power by proving several theorems, all of which show that this measure neatly corresponds to our explanatory intuitions. Finally, we briefly describe some promising future projects inspired by our account.
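The measure singled out in this paper is standardly given as ε(e, h) = (P(h|e) − P(h|¬e)) / (P(h|e) + P(h|¬e)), ranging from −1 to 1. A minimal sketch of that function follows; the input probabilities in the example are invented for illustration, not taken from the paper.

```python
def explanatory_power(p_h_given_e, p_h_given_not_e):
    """Schupbach-Sprenger measure of explanatory power:
    eps = (P(h|e) - P(h|~e)) / (P(h|e) + P(h|~e)), in [-1, 1].
    Positive values: the hypothesis makes the evidence more expected."""
    return (p_h_given_e - p_h_given_not_e) / (p_h_given_e + p_h_given_not_e)

# Illustrative values: h raises the probability of e substantially.
print(explanatory_power(0.8, 0.2))  # 0.6
print(explanatory_power(0.5, 0.5))  # 0.0 (h is explanatorily irrelevant to e)
```

The measure is maximal (ε = 1) when the explanans makes the explanandum certain, and minimal (ε = −1) when it rules it out.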
The British Journal for the Philosophy of Science | 2015
Richard Dawid; Stephan Hartmann; Jan Sprenger
Scientific theories are hard to find, and once scientists have found a theory, H, they often believe that there are not many distinct alternatives to H. But is this belief justified? What should scientists believe about the number of alternatives to H, and how should they change these beliefs in the light of new evidence? These are some of the questions that we will address in this article. We also ask under which conditions failure to find an alternative to H confirms the theory in question. This kind of reasoning (which we call the ‘no alternatives argument’) is frequently used in science and therefore deserves a careful philosophical analysis. 1 Introduction 2 The Conceptual Framework 3 The No Alternatives Argument 4 Discussion I: A Quantitative Analysis of the No Alternatives Argument 5 Discussion II: The Number of Alternatives and the Problem of Underdetermination 6 Conclusions Appendix A Appendix B
Journal of Logic and Computation | 2010
Stephan Hartmann; Gabriella Pigozzi; Jan Sprenger
The aggregation of consistent individual judgements on logically interconnected propositions into a collective judgement on the same propositions has recently drawn much attention. Seemingly reasonable aggregation procedures, such as propositionwise majority voting, cannot ensure an equally consistent collective conclusion. The literature on judgement aggregation refers to such a problem as the discursive dilemma. In this article we assume that the decision which the group is trying to reach is factually right or wrong. Hence, we address the question of how good various approaches are at selecting the right conclusion. We focus on two approaches: distance-based procedures and a Bayesian analysis. They correspond to group-internal and group-external decision making, respectively. We compare those methods in a probabilistic model whose assumptions are subsequently relaxed. Our findings have two general implications for judgement aggregation problems: first, in a voting procedure, reasons should carry higher weight than the conclusion, and second, considering members of an advisory board to be highly competent is a better strategy than discounting their advice.
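The discursive dilemma mentioned in the abstract can be reproduced with a standard three-member toy profile (the individual judgment sets below are illustrative): majority voting on the premises p and q yields a different verdict on the conclusion p ∧ q than majority voting on the conclusion directly, even though every individual is consistent.

```python
# Three judges each hold a consistent judgment set on premises p, q
# and the conclusion c = p and q.
judges = [
    {"p": True,  "q": True},   # individually concludes c = True
    {"p": True,  "q": False},  # individually concludes c = False
    {"p": False, "q": True},   # individually concludes c = False
]

def majority(votes):
    return sum(votes) > len(votes) / 2

# Premise-based procedure: aggregate p and q first, then derive c.
p_maj = majority([j["p"] for j in judges])  # True (2 of 3)
q_maj = majority([j["q"] for j in judges])  # True (2 of 3)
premise_based = p_maj and q_maj             # True

# Conclusion-based procedure: aggregate the individual conclusions.
conclusion_based = majority([j["p"] and j["q"] for j in judges])  # False (1 of 3)

print(premise_based, conclusion_based)  # True False: the procedures disagree
```

Propositionwise majority voting over all three propositions would endorse p, q, and ¬(p ∧ q) at once, which is the inconsistency the dilemma names.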
Synthese | 2014
Cyrille Imbert; Ryan Muldoon; Jan Sprenger; Kevin J. S. Zollman
Scientists are not isolated agents: they collaborate in laboratories, research networks and large-scale international projects. Apart from direct collaboration, scientists interact with each other in various ways: they follow entrenched research programs, trust their peers, embed their work into an existing paradigm, exchange concepts, methods and results, compete for grants or prestige, etc. The collective dimension of science has been discussed by philosophers of science in various ways, but until recently, the use of formal methods has been restricted to some particular areas, such as the treatment of the division of scientific labor, the study of reward schemes or the effects of network structures on the production of scientific knowledge. Given the great promise of these methods for modeling and understanding of the dynamics of scientific research, this blind spot struck us as surprising. At the same time, social aspects of the production and diffusion of knowledge have been
Synthese | 2012
Stephan Hartmann; Jan Sprenger
The aggregation of consistent individual judgments on logically interconnected propositions into a collective judgment on those propositions has recently drawn much attention. Seemingly reasonable aggregation procedures, such as propositionwise majority voting, cannot ensure an equally consistent collective conclusion. In this paper, we argue that quite often, we want not only to make a factually right decision, but also to correctly evaluate the reasons for that decision. In other words, we address the problem of tracking the truth. We set up a probabilistic model that generalizes the analysis of Bovens and Rabinowicz (Synthese 150: 131–153, 2006) and use it to compare several aggregation procedures. Demanding some reasonable adequacy constraints, we demonstrate that a reasons- or premise-based aggregation procedure tracks the truth better than any other procedure. However, we also show that such a procedure is not always easy to implement, leaving actual decision-makers with a tradeoff problem.
Philosophy of Science | 2013
Jan Sprenger
Testing a point null hypothesis is a classical but controversial issue in statistical methodology. A prominent illustration is Lindley’s Paradox, which emerges in hypothesis tests with large sample size and exposes a salient divergence between Bayesian and frequentist inference. A close analysis of the paradox reveals that both Bayesians and frequentists fail to satisfactorily resolve it. As an alternative, I suggest Bernardo’s Bayesian Reference Criterion: (i) it targets the predictive performance of the null hypothesis in future experiments; (ii) it provides a proper decision-theoretic model for testing a point null hypothesis; (iii) it convincingly addresses Lindley’s Paradox.
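Lindley's Paradox is easy to reproduce numerically: hold the standardized test statistic fixed at z = 1.96, so the frequentist p-value stays just under 0.05 at every sample size, and watch the Bayes factor for the point null swing ever more strongly toward H0 as n grows. The sketch below uses a unit-normal prior on the alternative (τ = 1); that prior choice is an illustrative assumption, not taken from the paper.

```python
import math

def normal_pdf(x, var):
    return math.exp(-x * x / (2 * var)) / math.sqrt(2 * math.pi * var)

def bayes_factor_01(n, z=1.96, sigma=1.0, tau=1.0):
    """Bayes factor for H0: mu = 0 against H1: mu ~ N(0, tau^2),
    given a sample mean that lies z standard errors away from 0."""
    xbar = z * sigma / math.sqrt(n)
    m0 = normal_pdf(xbar, sigma**2 / n)            # marginal likelihood under H0
    m1 = normal_pdf(xbar, tau**2 + sigma**2 / n)   # marginal likelihood under H1
    return m0 / m1

# Two-sided p-value for z = 1.96, the same at every sample size: ~0.05.
p_value = 2 * (1 - 0.5 * (1 + math.erf(1.96 / math.sqrt(2))))
print(p_value)                      # frequentist: reject H0
print(bayes_factor_01(100))        # small n: Bayes factor near 1
print(bayes_factor_01(1_000_000))  # large n: overwhelming evidence FOR H0
```

The divergence is the paradox: the same data that a significance test counts against the null count ever more strongly for it in the Bayesian analysis as n increases.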
Politics, Philosophy & Economics | 2014
Ryan Muldoon; Chiara Lisciandra; Cristina Bicchieri; Stephan Hartmann; Jan Sprenger
A descriptive norm is a behavioral rule that individuals follow when their empirical expectations of others following the same rule are met. We aim to provide an account of the emergence of descriptive norms by first looking at a simple case, that of the standing ovation. We examine the structure of a standing ovation, and show that it can be generalized to describe the emergence of a wide range of descriptive norms.
Philosophy of Science | 2009
Jan Sprenger
To what extent does the design of statistical experiments, in particular sequential trials, affect their interpretation? Should post-experimental decisions depend on the observed data alone, or should they account for the stopping rule that was used? Bayesians and frequentists are apparently deadlocked in their controversy over these questions. To resolve the deadlock, I suggest a three-part strategy that combines conceptual, methodological, and decision-theoretic arguments. This approach maintains the pre-experimental relevance of experimental design and stopping rules but vindicates their evidential, post-experimental irrelevance.
European Journal for Philosophy of Science | 2016
Jan Sprenger
This paper develops a probabilistic reconstruction of the No Miracles Argument (NMA) in the debate between scientific realists and anti-realists. The goal of the paper is to clarify and to sharpen the NMA by means of a probabilistic formalization. In particular, I demonstrate that the persuasive force of the NMA depends on the particular disciplinary context where it is applied, and the stability of theories in that discipline. Assessments and critiques of “the” NMA, without reference to a particular context, are misleading and should be relinquished. This result has repercussions for recent anti-realist arguments, such as the claim that the NMA commits the base rate fallacy (Howson (2000), Magnus and Callender (Philosophy of Science, 71:320–338, 2004)). It also helps to explain the persistent disagreement between realists and anti-realists.
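The base-rate worry cited above is straightforward to make concrete with Bayes' theorem (all probability values below are invented for illustration): even when successful theories are far more likely to be approximately true than false ones, a low prior proportion of true theories in a discipline keeps the posterior modest, which is why the NMA's force varies with context.

```python
def posterior_true_given_success(prior_true, p_success_if_true, p_success_if_false):
    """Bayes' theorem: P(theory true | theory empirically successful)."""
    num = p_success_if_true * prior_true
    den = num + p_success_if_false * (1 - prior_true)
    return num / den

# Same likelihoods, different base rates of true theories in the field:
low  = posterior_true_given_success(0.01, 0.9, 0.05)  # sparse-truth discipline
high = posterior_true_given_success(0.50, 0.9, 0.05)  # truth-rich discipline
print(low, high)  # ~0.15 vs ~0.95: the same argument, very different force
```

Ignoring the `prior_true` term and reading P(true | success) off the likelihoods alone is exactly the base rate fallacy that Howson and Magnus and Callender charge the NMA with.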
Philosophy of Science | 2015
Jan Sprenger
One of the most troubling and persistent challenges for Bayesian Confirmation Theory is the Problem of Old Evidence (POE). The problem arises for anyone who models scientific reasoning by means of Bayesian Conditionalization. This article addresses the problem as follows: First, I clarify the nature and varieties of the POE and analyze various solution proposals in the literature. Second, I present a novel solution that combines previous attempts while making weaker and more plausible assumptions. Third and last, I summarize my findings and put them into the context of the general debate about POE and Bayesian reasoning.