
Publication


Featured research published by James W. McAllister.


Studies in History and Philosophy of Science | 1996

The Evidential Significance of Thought Experiment in Science

James W. McAllister

The most promising way to regard thought experiment is as a species of experiment, alongside concrete experiment. Of the authors who take this view, many portray thought experiment as possessing evidential significance intrinsically. In contrast, concrete experiment is nowadays most convincingly portrayed as acquiring evidential significance in a particular area of science at a particular time in consequence of the persuasive efforts of scientists. I argue that the claim that thought experiment possesses evidential significance intrinsically is contradicted by the history of science. Thought experiment, like concrete experiment, has evidential significance only where particular assumptions—such as the Galilean doctrine of phenomena—are taken to hold; under alternative premises, in themselves equally defensible, thought experiment is evidentially inert.


Studies in History and Philosophy of Science | 2003

Algorithmic randomness in empirical data

James W. McAllister

According to a traditional view, scientific laws and theories constitute algorithmic compressions of empirical data sets collected from observations and measurements. This article defends the thesis that, to the contrary, empirical data sets are algorithmically incompressible. The reason is that individual data points are determined partly by perturbations, or causal factors that cannot be reduced to any pattern. If empirical data sets are incompressible, then they exhibit maximal algorithmic complexity, maximal entropy and zero redundancy. They are therefore maximally efficient carriers of information about the world. Since, on algorithmic information theory, a string is algorithmically random just if it is incompressible, the thesis entails that empirical data sets consist of algorithmically random strings of digits. Rather than constituting compressions of empirical data, scientific laws and theories pick out patterns that data sets exhibit with a certain noise.
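The compressibility notion at the heart of this abstract can be made concrete with a standard proxy: algorithmic (Kolmogorov) complexity is uncomputable, but a lossless compressor gives an upper bound on it. The following sketch is an illustration of the contrast the abstract draws, not material from the paper; the example strings are invented:

```python
import os
import zlib

# Lossless compression as a practical upper-bound proxy for
# algorithmic (Kolmogorov) complexity, which is uncomputable.
patterned = b"0123456789" * 1000   # highly regular, "law-like" string
noisy = os.urandom(10000)          # stands in for perturbation-dominated data

def ratio(data: bytes) -> float:
    """Compressed size as a fraction of original size."""
    return len(zlib.compress(data, level=9)) / len(data)

# The regular string compresses far below its original length;
# the random string does not compress at all (deflate adds slight overhead).
print(f"patterned: {ratio(patterned):.3f}")
print(f"noisy:     {ratio(noisy):.3f}")
```

On McAllister's thesis, real empirical data resemble the second case: the perturbation component blocks any overall compression, even though a compressor (or a scientific law) can still pick out the regular pattern within the data.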


Philosophy of Science | 2003

Effective Complexity as a Measure of Information Content

James W. McAllister

Murray Gell‐Mann has proposed the concept of effective complexity as a measure of information content. The effective complexity of a string of digits is defined as the algorithmic complexity of the regular component of the string. This paper argues that the effective complexity of a given string is not uniquely determined. The effective complexity of a string admitting a physical interpretation, such as an empirical data set, depends on the cognitive and practical interests of investigators. The effective complexity of a string as a purely formal construct, lacking a physical interpretation, is either close to zero, or equal to the string’s algorithmic complexity, or arbitrary, depending on the auxiliary criterion chosen to pick out the regular component of the string. Because of this flaw, the concept of effective complexity is unsuitable as a measure of information content.


T. A. C. Reydon and L. Hemerik, eds., Current Themes in Theoretical Biology: A Dutch Perspective, 95–127 | 2005

The Composite Species Concept: A Rigorous Basis for Cladistic Practice

D. J. Kornet; James W. McAllister

As previous work has shown, the genealogical network can be partitioned exhaustively into internodons, mutually exclusive and historically continuous entities delimited between two successive permanent splits or between a permanent split and an extinction. Internodons are not suitable candidates for the status of species, because of their short life span and the difficulty of recognizing their boundaries. However, internodons may be suitable building blocks for a viable species concept. We introduce the concept of composite species as a sequence of internodons, by qualifying only some permanent splits in the genealogical network as speciation events. The permanent splits that count as speciation events on our account are those associated with a character state fixation: this proposal ensures the recognizability of composite species. Lastly, we show how actual taxonomic practice is able to recover the phylogenetic tree of composite species from standard morphological data.


Philosophy of Science | 2004

Thought Experiments and the Belief in Phenomena

James W. McAllister

Thought experiment acquires evidential significance only on particular metaphysical assumptions. These include the thesis that science aims at uncovering “phenomena”—universal and stable modes in which the world is articulated—and the thesis that phenomena are revealed imperfectly in actual occurrences. Only on these Platonically inspired assumptions does it make sense to bypass experience of actual occurrences and perform thought experiments. These assumptions are taken to hold in classical physics and other disciplines, but not in sciences that emphasize variety and contingency, such as Aristotelian natural philosophy and some forms of historiography. This explains why thought experiments carry weight in the former but not the latter disciplines.


Philosophy of Science | 2007

Model Selection and the Multiplicity of Patterns in Empirical Data

James W. McAllister

Several quantitative techniques for choosing among data models are available. Among these are techniques based on algorithmic information theory, minimum description length theory, and the Akaike information criterion. All these techniques are designed to identify a single model of a data set as being the closest to the truth. I argue, using examples, that many data sets in science show multiple patterns, providing evidence for multiple phenomena. For any such data set, there is more than one data model that must be considered close to the truth. I conclude that, since the established techniques for choosing among data models are unequipped to handle these cases, they cannot be regarded as adequate.
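The Akaike information criterion mentioned here is, by construction, a single-winner procedure: each candidate model receives a score (under Gaussian residuals, AIC = n ln(RSS/n) + 2k up to an additive constant, where k counts free parameters), and the minimum is selected. The toy example below, with synthetic data and deliberately simple models invented for illustration, shows the criterion delivering exactly one verdict on a data set that in fact superimposes two patterns:

```python
import math
import random

random.seed(0)

# Synthetic data exhibiting two superimposed patterns: a linear trend
# and a periodic oscillation, plus Gaussian noise.
n = 200
xs = [i / 10 for i in range(n)]
ys = [0.5 * x + math.sin(x) + random.gauss(0, 0.3) for x in xs]

def aic(rss: float, k: int) -> float:
    """AIC for Gaussian residuals, up to an additive constant."""
    return n * math.log(rss / n) + 2 * k

# Model 1: linear trend only, fitted by ordinary least squares.
sx = sum(xs); sy = sum(ys)
sxx = sum(x * x for x in xs); sxy = sum(x * y for x, y in zip(xs, ys))
slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
intercept = (sy - slope * sx) / n
rss1 = sum((y - (slope * x + intercept)) ** 2 for x, y in zip(xs, ys))

# Model 2: the same trend plus a fixed unit-amplitude sine term
# (kept simple: the sine is uncorrelated with x over this range).
rss2 = sum((y - (slope * x + intercept + math.sin(x))) ** 2
           for x, y in zip(xs, ys))

# The criterion ranks the candidates and returns a single "best" model —
# the winner-take-all feature that McAllister's argument targets.
print(aic(rss1, k=2), aic(rss2, k=3))
```

Here model 2 wins decisively; the point of the abstract is that when a data set provides evidence for multiple phenomena, a procedure that must crown a single model cannot do justice to the evidence.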


Synthese | 2011

What do patterns in empirical data tell us about the structure of the world?

James W. McAllister

This article discusses the relation between features of empirical data and structures in the world. I defend the following claims. Any empirical data set exhibits all possible patterns, each with a certain noise term. The magnitude and other properties of this noise term are irrelevant to the evidential status of a pattern: all patterns exhibited in empirical data constitute evidence of structures in the world. Furthermore, distinct patterns constitute evidence of distinct structures in the world. It follows that the world must be regarded as containing all possible structures. The remainder of the article is devoted to elucidating the meaning and implications of the latter claim.
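The opening claim — that any empirical data set exhibits all possible patterns, each with a certain noise term — can be made vivid with a trivial numerical sketch. The data and candidate patterns below are invented purely for illustration:

```python
import math

# For ANY candidate pattern, the data decompose exactly as
# pattern + noise, where the noise term is simply the residual
# left over by that pattern.
data = [3.1, 4.1, 5.9, 2.6, 5.3]            # toy "empirical data set"

flat = [4.2] * len(data)                     # a constant pattern
ramp = [2.0 + i for i in range(len(data))]   # a linear pattern

def noise_for(pattern):
    """Noise term that makes the decomposition data = pattern + noise exact."""
    return [d - p for d, p in zip(data, pattern)]

# Both patterns are "exhibited" by the data, each with its own noise term.
for pattern in (flat, ramp):
    noise = noise_for(pattern)
    recon = [p + e for p, e in zip(pattern, noise)]
    assert all(math.isclose(r, d) for r, d in zip(recon, data))
```

The decomposition always succeeds, whatever the pattern; the philosophical work in the article lies in the further claim that the magnitude of the resulting noise term is irrelevant to the pattern's evidential status.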


Philosophy of Science | 2010

The Ontology of Patterns in Empirical Data

James W. McAllister

This article defends the following claims. First, for patterns exhibited in empirical data, there is no criterion on which to demarcate patterns that are physically significant and patterns that are not physically significant. I call a pattern physically significant if it corresponds to a structure in the world. Second, all patterns must be regarded as physically significant. Third, distinct patterns must be regarded as providing evidence for distinct structures in the world. Fourth, in consequence, the world must be conceived as showing all possible structures.


International Studies in the Philosophy of Science | 1997

Laws of nature, natural history, and the description of the world

James W. McAllister

The modern sciences are divided into two groups: law‐formulating and natural historical sciences. Sciences of both groups aim at describing the world, but they do so differently. Whereas the natural historical sciences produce “transcriptions” intended to be literally true of actual occurrences, laws of nature are expressive symbols of aspects of the world. The relationship between laws and the world thus resembles that between the symbols of classical iconography and the objects for which they stand. The natural historical approach was founded by Aristotle and is retained in such present‐day sciences as botany. Modern physics differentiated itself from the natural historical sciences and developed a symbolizing approach at the hands of Galileo and Descartes. Our knowledge of the physical domain is provided by two disciplines: the law‐formulating science of physics and a natural historical science on which we depend in the everyday manipulation of our surroundings.


Minds and Machines | 2007

Malcolm Gladwell, Blink: The Power of Thinking Without Thinking

James W. McAllister

Shortly after the appearance of Malcolm Gladwell’s book, Blink: The Power of Thinking Without Thinking, the journal Nature published a survey of misconduct in scientific research. The authors of the survey, Brian C. Martinson, Melissa S. Anderson, and Raymond de Vries, had asked thousands of scientists whether they had engaged in various forms of behaviour that they labelled as questionable. These included “changing the design, methodology or results of a study in response to pressure from a funding source” and “ignoring major aspects of human-subject requirements”. A further form of misconduct investigated in the survey was “dropping observations or data points from analyses based on a gut feeling that they were inaccurate”. Of the scientists who responded to the survey, 15.3% admitted discarding data on these grounds (Martinson, Anderson, & De Vries, 2005). Two aspects of this survey are noteworthy. First, it is remarkable that the survey designers included evaluating empirical data for plausibility on the basis of gut feeling in the category of misconduct. It is not obvious that acting on misgivings about the outcome of an observation, even if these cannot be expressed or justified explicitly, threatens the integrity of science in the same way as misrepresenting research findings at the behest of a commercial sponsor or violating ethical norms in experiments on humans. Second, it is odd that fewer than one in six respondents admitted relying on gut feeling to assess the plausibility of data. How, one may wonder, do the other respondents handle data? Do they never experience a feeling that a particular observation is inaccurate, or are they willing to ignore such a feeling and trust the data point anyhow?
One suspects that, in reality, virtually all scientists would be prepared to discard data on the gut feeling that they were inaccurate, but that many scientists who responded to the survey—perhaps influenced by the suggestion that such behaviour is illegitimate—failed to report this.

Collaboration


Dive into James W. McAllister's collaboration.

Top Co-Authors

Henry Folse

Loyola University New Orleans
