Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Nancy Cartwright is active.

Publication


Featured research published by Nancy Cartwright.


BioSocieties | 2007

Are RCTs the Gold Standard?

Nancy Cartwright

The claims of randomized controlled trials (RCTs) to be the gold standard rest on the fact that the ideal RCT is a deductive method: if the assumptions of the test are met, a positive result implies the appropriate causal conclusion. This is a feature that RCTs share with a variety of other methods, which thus have equal claim to being a gold standard. This article describes some of these other deductive methods and also some useful non-deductive methods, including the hypothetico-deductive method. It argues that with all deductive methods, the benefit that the conclusions follow deductively in the ideal case comes with a great cost: narrowness of scope. This is an instance of the familiar trade-off between internal and external validity. RCTs have high internal validity but the formal methodology puts severe constraints on the assumptions a target population must meet to justify exporting a conclusion from the test population to the target. The article reviews one such set of assumptions to show the kind of knowledge required. The overall conclusion is that to draw causal inferences about a target population, which method is best depends case-by-case on what background knowledge we have or can come to obtain. There is no gold standard.


Philosophy of Science | 2004

Causation: One Word, Many Things

Nancy Cartwright

We currently have on offer a variety of different theories of causation. Many are strikingly good, providing detailed and plausible treatments of exemplary cases; and all suffer from clear counterexamples. I argue that, contra Hume and Kant, this is because causation is not a single, monolithic concept. There are different kinds of causal relations imbedded in different kinds of systems, readily described using thick causal concepts. Our causal theories pick out important and useful structures that fit some familiar cases—cases we discover and ones we devise to fit.


The British Journal for the Philosophy of Science | 2002

Against Modularity, the Causal Markov Condition, and Any Link Between the Two: Comments on Hausman and Woodward

Nancy Cartwright

In their rich and intricate paper ‘Independence, Invariance, and the Causal Markov Condition’, Daniel Hausman and James Woodward ([1999]) put forward two independent theses, which they label ‘level invariance’ and ‘manipulability’, and they claim that, given a specific set of assumptions, manipulability implies the causal Markov condition. These claims are interesting and important, and this paper is devoted to commenting on them. With respect to level invariance, I argue that Hausman and Woodward’s discussion is confusing because, as I point out, they use different senses of ‘intervention’ and ‘invariance’ without saying so. I shall remark on these various uses and point out that the thesis is true in at least two versions. The second thesis, however, is not true. I argue that in their formulation, the manipulability thesis is patently false and that a modified version does not fare better. Furthermore, I think their proof that manipulability implies the causal Markov condition is not conclusive. In the deterministic case it is valid but vacuous, whereas it is invalid in the probabilistic case.

1 Introduction
2 Intervention, invariance and modularity
3 The causal Markov condition: CM1 and CM2
4 From MOD to the causal Markov condition and back
5 A second argument for CM2
6 The proof of the causal Markov condition for probabilistic causes
7 ‘Cartwright’s objection’ defended
8 Metaphysical defenses of the causal Markov condition
9 Conclusion


Noûs | 1988

Probability and causality: Why Hume and indeterminism don't mix

John Dupré; Nancy Cartwright

A basic assumption of this paper is that things and events have causal capacities: in virtue of the properties they possess, they have the power to bring about other events or states. If you want to bring about a certain outcome, it is a good idea to introduce something with the appropriate capacity. The Humean tradition downplays capacities, and conceives of them as no more than misleading ways of referring to lawlike regularities. We want to reverse this idea: it is better to think of lawlike regularities as misleading ways of referring to the exercise of capacities. If we try to tailor our causal claims to match the regularities we see in nature, we will miss a good deal of the causal structure. Much recent discussion of probabilistic causality may be seen as a neo-Humean attempt to explain probabilistic capacities as, in a way, reducible to probabilistic regularities. Causality is supposed to be a relation between events, a relation which holds in virtue of the empirically distinguishable properties that events have; the relation consists in, or at least is marked by, the regular association of these properties. For Hume the association needed to be universal; nowadays a probabilistic association of the right sort will do. This does not work, we want to argue, and the reason is that the right sort of connections between capacities and properties do not exist. Capacities are carried by properties. That is, you cannot have the capacity without having one of the right properties. But the same property can carry mixed capacities, and so the true complexity of the situation cannot be revealed by the associations of properties. This is what we hope to show here.


Physica A: Statistical Mechanics and its Applications | 1976

A non-negative Wigner-type distribution

Nancy Cartwright

The Wigner function, which is commonly used as a joint distribution for non-commuting observables, is shown to be non-negative in all quantum states when smoothed with a Gaussian whose variances are greater than or equal to those of the minimum uncertainty wave packet.
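
The result is easy to check numerically. The sketch below is not from the paper: it evaluates the Wigner function of the first excited harmonic-oscillator state (chosen only as an illustration because it is negative at the origin) and smooths it with a Gaussian whose variances equal those of the minimum uncertainty wave packet, sigma_x^2 = sigma_p^2 = 1/2 in dimensionless units with hbar = 1. The smoothed distribution comes out non-negative up to numerical error; the grid parameters are arbitrary choices.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Dimensionless phase-space grid (hbar = 1).
L, N = 8.0, 641
x = np.linspace(-L, L, N)
dx = x[1] - x[0]
X, P = np.meshgrid(x, x)

# Wigner function of the n = 1 harmonic-oscillator (Fock) state:
# W_1(x, p) = (1/pi) * (2*(x^2 + p^2) - 1) * exp(-(x^2 + p^2)),
# which is negative near the origin (minimum -1/pi at the origin).
R2 = X**2 + P**2
W = (2.0 * R2 - 1.0) * np.exp(-R2) / np.pi
print("min W before smoothing:", W.min())          # about -0.318, clearly negative

# Smooth with a Gaussian whose variances equal those of the minimum
# uncertainty wave packet: sigma_x^2 = sigma_p^2 = 1/2 in these units.
sigma_grid = np.sqrt(0.5) / dx                      # standard deviation in grid units
W_smoothed = gaussian_filter(W, sigma=sigma_grid, mode="constant")
print("min W after smoothing:", W_smoothed.min())   # >= 0 up to numerical error
```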


Erkenntnis | 2002

In Favor of Laws that are Not Ceteris Paribus After All

Nancy Cartwright

Opponents of ceteris paribus laws are apt to complain that the laws are vague and untestable. Indeed, claims to this effect are made by Earman, Roberts and Smith in this volume. I argue that these kinds of claims rely on too narrow a view about what kinds of concepts we can and do regularly use in successful sciences and on too optimistic a view about the extent of application of even our most successful non-ceteris paribus laws. When it comes to testing, we test ceteris paribus laws in exactly the same way that we test laws without the ceteris paribus antecedent. But at least when the ceteris paribus antecedent is there we have an explicit acknowledgment of important procedures we must take in the design of the experiments — i.e., procedures to control for “all interferences” even those we cannot identify under the concepts of any known theory.


Philosophy of Science | 2006

Well-ordered science: Evidence for use

Nancy Cartwright

This article agrees with Philip Kitcher that we should aim for a well‐ordered science, one that answers the right questions in the right ways. Crucial to this is to address questions of use: Which scientific account is right for which system in which circumstances? This is a difficult question: evidence that may support a scientific claim in one context may not support it in another. Drawing on examples in physics and other sciences, this article argues that work on the warrant of theories in philosophy of science needs to change. Emphasis should move from the warrant of theories in the abstract to questions of evidence for use.


Erkenntnis | 1993

Causality and realism in the EPR experiment

Hasok Chang; Nancy Cartwright

We argue against the common view that it is impossible to give a causal account of the distant correlations that are revealed in EPR-type experiments. We take a realistic attitude about quantum mechanics which implies a willingness to modify our familiar concepts according to its teachings. We object to the argument that the violation of factorizability in EPR rules out causal accounts, since such an argument is at best based on the desire to retain a classical description of nature that consists of processes that are continuous in space and time. We also do not think special relativity prohibits the superluminal propagation of causes in EPR, for the phenomenon of quantum measurement may very well fall outside the domain of application of special relativity. It is possible to give causal accounts of EPR as long as we are willing to take quantum mechanics seriously, and we offer two such accounts.
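
As background for the factorizability claim, the following sketch (an illustration, not material from the paper) computes the CHSH quantity for the singlet-state correlation E(a, b) = -cos(a - b). Any factorizable, local common-cause model satisfies |S| <= 2, while the quantum prediction at the standard angles reaches 2*sqrt(2); the angle choices are assumptions made for the example.

```python
import numpy as np

# Singlet-state spin correlation for measurement angles a and b.
def E(a, b):
    return -np.cos(a - b)

# Standard CHSH angles (illustrative choice giving the maximal quantum violation).
a1, a2 = 0.0, np.pi / 2
b1, b2 = np.pi / 4, 3 * np.pi / 4

# Any factorizable (local common-cause) model satisfies |S| <= 2.
S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
print("CHSH value |S| =", abs(S))   # 2*sqrt(2), about 2.83 > 2
```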


Archive | 1988

How to Tell a Common Cause: Generalizations of the Conjunctive Fork Criterion

Nancy Cartwright

For much of his career Wesley Salmon has defended Reichenbach’s principle of the common cause, and in particular he has endorsed and developed Reichenbach’s statistical characterization of common causes in terms of conjunctive forks: common causes screen off joint effects from each other — that is, given the common cause, the correlation between joint effects that have no direct causal influence on each other will disappear. But in his recent work on causal processes, Salmon has given up this view. He now maintains that where there is a common cause there will be either a conjunctive fork or an interactive fork. This is a considerable weakening of his position, for the implication does not go the other way around. So long as interactive forks are characterized purely statistically, it is not the case that any factor which produces either a conjunctive or an interactive fork will be a common cause. Interaction needs some more robust, non-statistical characterization, and this is just what Salmon tries to provide in his work on causal processes.
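
The screening-off condition that the conjunctive-fork criterion rests on can be seen in a small simulation. The sketch below is not from the chapter: it assumes a hypothetical binary common cause C with two effects A and B that depend only on C, so that A and B are correlated unconditionally but the correlation disappears once C is held fixed.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Binary common cause C; effects A and B depend on C but not on each other.
C = rng.random(n) < 0.5
A = (rng.random(n) < np.where(C, 0.9, 0.1)).astype(float)
B = (rng.random(n) < np.where(C, 0.8, 0.2)).astype(float)

def corr(u, v):
    return float(np.corrcoef(u, v)[0, 1])

print("corr(A, B) unconditionally:", round(corr(A, B), 3))         # clearly positive
print("corr(A, B) given C = 1:    ", round(corr(A[C], B[C]), 3))    # ~ 0: screened off
print("corr(A, B) given C = 0:    ", round(corr(A[~C], B[~C]), 3))  # ~ 0: screened off
```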


Philosophy of Science | 1997

Models: The blueprints for laws

Nancy Cartwright

In this paper the claim that laws of nature are to be understood as claims about what necessarily or reliably happens is disputed. Laws can characterize what happens in a reliable way, but they do not do this easily. We do not have laws for everything occurring in the world, but only for those situations where what happens in nature is represented by a model: models are blueprints for nomological machines, which in turn give rise to laws. An example from economics shows, in particular, how we use, and how we need to use, models to get probabilistic laws.

Collaboration


Dive into Nancy Cartwright's collaborations.

Top Co-Authors

Mauricio Suárez
Complutense University of Madrid

Eileen Munro
London School of Economics and Political Science

Eleonora Montuschi
London School of Economics and Political Science

John Pemberton
London School of Economics and Political Science

Roman Frigg
London School of Economics and Political Science