Malcolm R. Forster
University of Wisconsin-Madison
Publications
Featured research published by Malcolm R. Forster.
The British Journal for the Philosophy of Science | 1994
Malcolm R. Forster; Elliott Sober
Traditional analyses of the curve fitting problem maintain that the data do not indicate what form the fitted curve should take. Rather, this issue is said to be settled by prior probabilities, by simplicity, or by a background theory. In this paper, we describe a result due to Akaike [1973], which shows how the data can underwrite an inference concerning the curve's form based on an estimate of how predictively accurate it will be. We argue that this approach throws light on the theoretical virtues of parsimoniousness, unification, and non-ad hocness, on the dispute about Bayesianism, and on empiricism and scientific realism.
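The criterion discussed in the abstract above is standardly written AIC = 2k − 2 ln L̂, where k is the number of adjustable parameters and L̂ the maximized likelihood; lower scores are better. A minimal sketch of a model comparison in this spirit (the log-likelihood numbers are illustrative, not taken from the paper):

```python
def aic(k, log_likelihood):
    """Akaike Information Criterion: AIC = 2k - 2 ln L-hat,
    where k is the number of adjustable parameters and
    log_likelihood is the maximized log-likelihood."""
    return 2 * k - 2 * log_likelihood

# Hypothetical maximized log-likelihoods for two nested curve families:
models = {
    "line (k=2)": aic(2, -52.0),    # 2*2 - 2*(-52.0)  = 108.0
    "cubic (k=4)": aic(4, -51.5),   # 2*4 - 2*(-51.5)  = 111.0
}

# Lower AIC is better: here the cubic's small gain in fit
# does not offset its penalty for extra adjustable parameters.
best = min(models, key=models.get)
```

This captures the trade-off the abstract describes: improved fit must buy more than the 2k penalty charged for each extra adjustable parameter.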
Nature Human Behaviour | 2018
Daniel J. Benjamin; James O. Berger; Magnus Johannesson; Brian A. Nosek; Eric-Jan Wagenmakers; Richard A. Berk; Kenneth A. Bollen; Björn Brembs; Lawrence D. Brown; Colin F. Camerer; David Cesarini; Christopher D. Chambers; Merlise A. Clyde; Thomas D. Cook; Paul De Boeck; Zoltan Dienes; Anna Dreber; Kenny Easwaran; Charles Efferson; Ernst Fehr; Fiona Fidler; Andy P. Field; Malcolm R. Forster; Edward I. George; Richard Gonzalez; Steven N. Goodman; Edwin J. Green; Donald P. Green; Anthony G. Greenwald; Jarrod D. Hadfield
We propose to change the default P-value threshold for statistical significance from 0.05 to 0.005 for claims of new discoveries.
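For a rough sense of what the proposed change amounts to: under a standard normal null, a two-sided test needs a larger z-statistic to clear the 0.005 threshold than the 0.05 one. A small sketch using only the Python standard library:

```python
from statistics import NormalDist

def two_sided_z_threshold(alpha):
    """z-score a test statistic must exceed for a two-sided
    p-value below alpha, assuming a standard normal null."""
    return NormalDist().inv_cdf(1 - alpha / 2)

z_at_005 = two_sided_z_threshold(0.05)    # about 1.96
z_at_0005 = two_sided_z_threshold(0.005)  # about 2.81
```

So the proposal raises the evidential bar from roughly two standard errors to nearly three, for this simple test setting.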
Philosophy of Science | 2002
Malcolm R. Forster
What has science actually achieved? A theory of achievement should (1) define what has been achieved, (2) describe the means or methods used in science, and (3) explain how such methods lead to such achievements. Predictive accuracy is one truth‐related achievement of science, and there is an explanation of why common scientific practices (of trading off simplicity and fit) tend to increase predictive accuracy. Akaike’s explanation for the success of AIC is limited to interpolative predictive accuracy. But therein lies the strength of the general framework, for it also provides a clear formulation of many open problems of research.
The British Journal for the Philosophy of Science | 1995
Malcolm R. Forster
The central problem with Bayesian philosophy of science is that it cannot take account of the relevance of simplicity and unification to confirmation, induction, and scientific inference. The standard Bayesian folklore about factoring simplicity into the priors, and about convergence theorems as a way of grounding the objectivity of those priors, are among the myths that Earman's book does not address adequately.
Philosophy of Science | 2007
Malcolm R. Forster
The simple question 'What is empirical success?' turns out to have a surprisingly complicated answer. We need to distinguish between meritorious fit and 'fudged fit', which is akin to the distinction between prediction and accommodation. The final proposal is that empirical success emerges in a theory-dependent way from the agreement of independent measurements of theoretically postulated quantities. Implications for realism and Bayesianism are discussed.
Archive | 2000
Malcolm R. Forster
Many philosophers underestimate the general disillusionment in the philosophical outlook on science caused, in part, by Kuhn’s Structure of Scientific Revolutions. The challenge presented by Hume’s problem of induction has always kept the issue of scientific truth at the forefront of philosophical research. Philosophers expended great energy in defending a broad spectrum of replies to Hume’s scepticism, ranging from the view that theories are merely instruments for the control and prediction of nature, to realist views of science (which hold that science aims at the truth about the world, and is rational in the pursuit of this goal).
Philosophy of Statistics | 2011
Malcolm R. Forster; Elliott Sober
Akaike helped to launch the field in statistics now known as model selection theory by describing a goal, proposing a criterion, and proving a theorem. The goal is to figure out how accurately models will predict new data when fitted to old. The criterion came to be called the Akaike Information Criterion (AIC). The theorem that Akaike proved made it natural to understand AIC as a frequentist construct. AIC is a device for estimating the predictive accuracy of models. Bayesians assess an estimator by determining whether the estimates it generates are probably true or probably close to the truth. This chapter shows that AIC is an estimator whose estimates should be taken seriously by Bayesians, its frequentist pedigree notwithstanding. Frequentists often maintain that the question of how an individual estimate should be interpreted is meaningless, and that the only legitimate question concerns the long-run behavior of estimators. For Bayesians, by contrast, the interpretation of individual estimates is pressing, given that any one estimate might be produced by a number of different estimation procedures.
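A sketch of the theorem at issue, in the formulation Forster and Sober use elsewhere (with N the number of data points, k the number of adjustable parameters in family F, and L(F) the best-fitting member of F); this is a standard rendering, not a quotation from the chapter:

```latex
\[
\widehat{A(F)} \;=\; \frac{1}{N}\Bigl[\log P\bigl(\mathrm{Data} \mid L(F)\bigr) \;-\; k\Bigr]
\]
```

Here A(F) is the expected predictive accuracy of the family F, and the theorem says the right-hand side is an unbiased estimate of it.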
Erkenntnis | 1994
Malcolm R. Forster
The paper provides a formal proof that efficient estimates of parameters, those that vary as little as possible when measurements are repeated, may be expected to provide more accurate predictions. The definition of predictive accuracy is motivated by the work of Akaike (1973). Surprisingly, the same explanation provides a novel solution to a well-known problem for standard theories of scientific confirmation: the Ravens Paradox. This is significant in light of the fact that standard Bayesian analyses of the paradox fail to account for the predictive utility of universal laws like “All ravens are black.”
Minds and Machines | 2006
Malcolm R. Forster
The likelihood theory of evidence (LTE) says, roughly, that all the information relevant to the bearing of data on hypotheses (or models) is contained in the likelihoods. There exist counterexamples in which one can tell which of two hypotheses is true from the full data, but not from the likelihoods alone. These examples suggest that some forms of scientific reasoning, such as the consilience of inductions (Whewell, 1858; reprinted in Novum Organon Renovatum, Part II of the 3rd ed. of The Philosophy of the Inductive Sciences, London: Cass, 1967), cannot be represented within Bayesian and Likelihoodist philosophies of science.
The British Journal for the Philosophy of Science | 1995
Malcolm R. Forster
Curve-fitting typically works by trading off goodness-of-fit with simplicity, where simplicity is measured by the number of adjustable parameters. However, such methods cannot be applied in an unrestricted way. I discuss one such correction, and explain why the exception arises. The same kind of probabilistic explanation offers a surprising resolution to a common-sense dilemma.