Prasanta S. Bandyopadhyay
Montana State University
Publications
Featured research published by Prasanta S. Bandyopadhyay.
Philosophy of Science | 1999
Prasanta S. Bandyopadhyay; Robert J. Boik
In the curve-fitting problem, two conflicting desiderata, simplicity and goodness-of-fit, pull in opposite directions. To solve this problem, two proposals are discussed: the first based on the Bayes' theorem criterion (BTC) and the second, advocated by Forster and Sober, based on Akaike's Information Criterion (AIC). We show that AIC, which is frequentist in spirit, is logically equivalent to BTC, provided that a suitable choice of priors is made. We evaluate the charges against Bayesianism and contend that the AIC approach has shortcomings. We also discuss the relationship between Schwarz's Bayesian Information Criterion and BTC.
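The criteria compared in the abstract have standard closed forms: AIC = 2k - 2 ln L-hat and Schwarz's BIC = k ln n - 2 ln L-hat, where k counts free parameters, n is the sample size, and L-hat is the maximized likelihood. The short sketch below is our own illustration, not anything taken from the paper: it scores polynomial curve fits of increasing degree on simulated data under an assumed Gaussian error model, and the data, degree range, and parameter counts are hypothetical choices.

import numpy as np

# Illustrative sketch (not the paper's derivation): compare polynomial models
# of increasing degree by AIC and BIC under a Gaussian error model.
rng = np.random.default_rng(0)
x = np.linspace(0, 1, 50)
y = 1.0 + 2.0 * x + rng.normal(scale=0.3, size=x.size)  # true curve is linear

n = x.size
for degree in range(1, 5):
    coeffs = np.polyfit(x, y, degree)
    resid = y - np.polyval(coeffs, x)
    sigma2 = np.mean(resid ** 2)              # MLE of the error variance
    k = degree + 2                            # polynomial coefficients + variance
    loglik = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1)
    aic = 2 * k - 2 * loglik                  # Akaike's Information Criterion
    bic = k * np.log(n) - 2 * loglik          # Schwarz's Bayesian Information Criterion
    print(f"degree {degree}: AIC = {aic:.1f}  BIC = {bic:.1f}")

As the degree grows, the fit term falls while the complexity penalty rises, which is the simplicity versus goodness-of-fit trade-off the abstract describes.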
International Studies in the Philosophy of Science | 2010
Prasanta S. Bandyopadhyay; Gordon Brittan
We introduce a distinction, unnoticed in the literature, between four varieties of objective Bayesianism. What we call ‘strong objective Bayesianism’ is characterized by two claims: that all scientific inference is ‘logical’, and that, given the same background information, two agents will ascribe a unique probability to their priors. We think that neither of these claims can be sustained; in this sense, they are ‘dogmatic’. The first fails to recognize that some scientific inference, in particular that concerning evidential relations, is not (in the appropriate sense) logical; the second fails to provide a non-question-begging account of ‘same background information’. We urge that a suitably objective Bayesian account of scientific inference does not require either of the claims. Finally, we argue that Bayesianism needs to be fine-grained in the same way that Bayesians fine-grain their beliefs.
Synthese | 2006
Prasanta S. Bandyopadhyay; Gordon Brittan
The notion of a severe test has played an important methodological role in the history of science. But it has not until recently been analyzed in any detail. We develop a generally Bayesian analysis of the notion, compare it with Deborah Mayo’s error-statistical approach by way of sample diagnostic tests in the medical sciences, and consider various objections to both. At the core of our analysis is a distinction between evidence and confirmation or belief. These notions must be kept separate if mistakes are to be avoided; combined in the right way, they provide an adequate understanding of severity.

Those who think that the weight of the evidence always enables you to choose between hypotheses “ignore one of the factors (the prior probability) altogether, and treat the other (the likelihood) as though it ... meant something other than it actually does. This is the same mistake as is made by someone who has scruples about measuring the arms of a balance (having only a tape measure at his disposal ...), but is willing to assert that the heavier load will always tilt the balance (thereby implicitly assuming, although without admitting it, that the arms are of equal length!).” (Bruno de Finetti, Theory of Probability)
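The diagnostic-test comparison can be made concrete with a small numeric sketch. The sensitivity, specificity, and prevalence below are invented for illustration, not figures from the paper; the point is only that a positive result can carry a large likelihood ratio (the evidence) while leaving the posterior probability of disease (the degree of belief, which also depends on the prior) low, which is the separation of evidence from confirmation that the de Finetti passage also urges.

# Hypothetical diagnostic test; all numbers are illustrative, not from the paper.
sensitivity = 0.95   # P(positive test | disease)
specificity = 0.95   # P(negative test | no disease)
prevalence = 0.001   # prior probability of disease

# Likelihoodist measure of evidence: how much more probable a positive result
# is under "disease" than under "no disease".
likelihood_ratio = sensitivity / (1 - specificity)   # = 19 here

# Bayesian measure of confirmation: posterior probability of disease after a
# positive result, which also depends on the prior.
p_positive = sensitivity * prevalence + (1 - specificity) * (1 - prevalence)
posterior = sensitivity * prevalence / p_positive

print(f"likelihood ratio = {likelihood_ratio:.1f}")    # strong evidence
print(f"P(disease | positive) = {posterior:.4f}")      # still a low degree of belief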
Philosophy of Statistics | 2011
Prasanta S. Bandyopadhyay; Malcolm R. Forster
Philosophy has a broader scope than the specific sciences. It is concerned with general principles and issues. “Statistics” is a specific branch of knowledge that, among many other activities, includes addressing reliable ways of gathering data and making inferences based on them. Statisticians are interested in knowing which tools to use and what mechanisms to employ in making and correcting inferences. This chapter presents papers from various fields of expertise, including philosophy, statistics, mathematics, computer science, economics, ecology, electrical engineering, epidemiology, and geoscience. The issues considered include causal inference in observational studies; recent advances in model selection criteria; such foundational questions as “acceptance of the likelihood principle (LP)” and “conditional probability”; the nature of statistical/probabilistic paradoxes; the problems associated with understanding the notion of randomness; the Stein phenomenon; general problems in data mining; and a number of applied and historical issues in probability and statistics. The four statistical paradigms discussed are classical (error) statistics, Bayesian statistics, likelihood-based statistics, and Akaikean Information Criterion-based statistics.
International Studies in the Philosophy of Science | 2016
Prasanta S. Bandyopadhyay; Mark L. Taper; Gordon Brittan
There is a debate in Bayesian confirmation theory between subjective and non-subjective accounts of evidence. Colin Howson has provided a counterexample to our non-subjective account of evidence: the counterexample refers to a case in which there is strong evidence for a hypothesis, but the hypothesis is highly implausible. In this article, we contend that, by supposing that strong evidence for a hypothesis makes the hypothesis more believable, Howson conflates the distinction between confirmation and evidence. We demonstrate that Howson’s counterexample fails for a different pair of hypotheses.
Philosophy of Science | 1997
Prasanta S. Bandyopadhyay
I show that van Fraassen's empiricism leads to mutually incompatible claims with regard to empirical theories. He is committed to the claim that reasons for accepting a theory and reasons for believing it are always identical, insofar as the theory in question is an empirical theory. He also makes the general claim that reasons for accepting a theory are not always reasons for believing it, irrespective of whether the theory is an empirical theory.
Archive | 2016
Prasanta S. Bandyopadhyay; Gordon Brittan; Mark L. Taper
We contend that Bayesian accounts of evidence are inadequate, and that in this sense a complete theory of hypothesis testing must go beyond belief adjustment. Some prominent Bayesians disagree. To make our case, we will discuss and then provide reasons for rejecting the accounts of David Christensen, James Joyce, and Alan Hájek. The main theme and final conclusions are straightforward: first, that no purely subjective account of evidence, in terms of belief alone, is adequate; and second, that evidence is a comparative notion, applicable only when two hypotheses are confronted with the same data, as has been suggested in the literature on “crucial experiments” from Francis Bacon on.
Archive | 2016
Prasanta S. Bandyopadhyay; Gordon Brittan; Mark L. Taper
Very possibly the most famously intractable epistemological conundrum in the history of modern western philosophy is Descartes’ argument from dreaming. It seems to support, in an irrefutable way, a radical scepticism about the existence of a physical world independent of our sense-experience. But this argument, as well as those we discussed in the last chapter and many others of the same kind, rests on a conflation of evidence and confirmation: since the paradoxical or sceptical hypothesis has as much “evidence” going for it as the conventional or commonly accepted hypothesis, it is equally well supported by the data and there is nothing to choose between them. By this time, however, we understand very well that data that fail to discriminate between hypotheses do not constitute “evidence” for any of them, i.e., that “data” and “evidence” are not interchangeable notions; that it does not follow from the fact that there is strong evidence for a hypothesis against one or more of its competitors that it is therefore highly confirmed; and that it does not follow from the fact that a hypothesis is highly confirmed that there is strong evidence for it against its rivals.
Archive | 2016
Prasanta S. Bandyopadhyay; Gordon Brittan; Mark L. Taper
The first step is to distinguish two questions: 1. Given the data, what should we believe, and to what degree? 2. What kind of evidence do the data provide for a hypothesis H1 as against an alternative hypothesis H2, and how much? We call the first the “confirmation”, the second the “evidence” question. Many different answers to each have been given. In order to make the distinction between them as intuitive and precise as possible, we answer the first in a Bayesian way: a hypothesis is confirmed to the extent that the data raise the probability that it is true. We answer the second question in a Likelihoodist way, that is, data constitute evidence for a hypothesis as against any of its rivals to the extent that they are more likely on it than on them. These two simple ideas are very different, but both can be made precise, and each has a great deal of explanatory power. At the same time, they enforce corollary distinctions between “data” and “evidence”, and between different ways in which the concept of “probability” is to be interpreted. An Appendix explains how our likelihoodist account of evidence deals with composite hypotheses.
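A minimal formal rendering of the two answers, assuming the standard incremental-confirmation condition for the Bayesian question and the likelihood ratio as the Likelihoodist measure (the abstract itself fixes no numerical threshold for "strong" evidence), is:

\[ \text{Confirmation:}\quad D \text{ confirms } H \;\Longleftrightarrow\; \Pr(H \mid D) > \Pr(H), \]
\[ \text{Evidence:}\quad D \text{ favours } H_1 \text{ over } H_2 \;\Longleftrightarrow\; \mathrm{LR} = \frac{\Pr(D \mid H_1)}{\Pr(D \mid H_2)} > 1, \]

with the size of Pr(H | D) - Pr(H) gauging degree of confirmation and the size of LR (or its logarithm) gauging strength of evidence. Note that the first depends on the prior Pr(H) while the second compares two hypotheses on the same data without it.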
Archive | 2016
Prasanta S. Bandyopadhyay; Gordon Brittan; Mark L. Taper
Several non-Bayesian and non-Likelihoodist accounts of evidence have been worked out in interesting detail. One such account has been championed by the philosopher Deborah Mayo and the statistician Aris Spanos. Following Popper, it assumes from the outset that to test a hypothesis is to submit it to a severe test. Unlike Popper, it relies on the notion of error frequencies central to Neyman-Pearson statistics. Unlike Popper as well, Mayo and Spanos think that global theories like Newtonian mechanics are tested in a piecemeal way, by submitting their component hypotheses to severe tests. We argue that the error-statistical notion of severity is not adequately “severe,” that the emphasis on piecemeal testing procedures is misplaced, and that the Mayo-Spanos account of evidence is mistakenly committed to a “true model” assumption. In a technical Appendix, we deflect Mayo’s critique of the multiple-model character of our account of evidence.
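For readers who have not seen the error-statistical machinery at issue, severity has a standard textbook computation in the simplest case: a one-sided test of a Normal mean with known variance. The sketch below is our hedged rendition of that standard Mayo-Spanos calculation, not an example taken from the chapter, and every number in it is hypothetical: after observing a sample mean, the severity of the inference "mu > mu1" is the probability of getting a result that accords less well with that claim than the observed one does, were mu only equal to mu1.

from scipy.stats import norm

# Hypothetical one-sided test of H0: mu <= mu0 against H1: mu > mu0,
# with known standard deviation and an observed sample mean.
mu0, sigma, n = 0.0, 2.0, 100   # null value, known sd, sample size
xbar = 0.4                      # observed sample mean
se = sigma / n ** 0.5           # standard error = 0.2

def severity(mu1):
    # Probability of a sample mean no larger than the one observed,
    # computed under mu = mu1 (the boundary of "mu > mu1" being false).
    return norm.cdf((xbar - mu1) / se)

for mu1 in (0.0, 0.2, 0.4):
    print(f"SEV(mu > {mu1}) = {severity(mu1):.3f}")

Whether calculations of this kind are adequately "severe," and whether such piecemeal tests can bear the weight of testing global theories, is precisely what the chapter disputes.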