Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Trisha Greenhalgh is active.

Publication


Featured research published by Trisha Greenhalgh.


BMJ | 2001

The challenge of complexity in health care

Paul E Plsek; Trisha Greenhalgh

This is the first in a series of four articles. Across all disciplines, at all levels, and throughout the world, health care is becoming more complex. Just 30 years ago the typical general practitioner in the United Kingdom practised from privately owned premises with a minimum of support staff, subscribed to a single journal, phoned up a specialist whenever he or she needed advice, and did around an hour's paperwork per week. The specialist worked in a hospital, focused explicitly on a particular system of the body, was undisputed leader of his or her "firm," and generally left administration to the administrators. These individuals often worked long hours, but most of their problems could be described in biomedical terms and tackled using the knowledge and skills they had acquired at medical school.

You used to go to the doctor when you felt ill, to find out what was wrong with you and get some medicine that would make you better. These days you are as likely to be there because the doctor (or the nurse, the care coordinator, or even the computer) has sent for you. Your treatment will now be dictated by the evidence, but this may well be imprecise, equivocal, or conflicting. Your declared values and preferences may be used, formally or informally, in a shared management decision about your illness. The solution to your problem is unlikely to come in a bottle and may well involve a multidisciplinary team.

Not so long ago public health was the science of controlling infectious diseases by identifying the "cause" (an alien organism) and taking steps to remove or contain it. Today's epidemics have fuzzier boundaries (one is even known as "syndrome X"1): they are the result of the interplay of genetic predisposition, environmental context, and lifestyle choices. The experience of …


Journal of Health Services Research & Policy | 2005

Realist review - a new method of systematic review designed for complex policy interventions

Ray Pawson; Trisha Greenhalgh; Gill Harvey; Kieran Walshe

Evidence-based policy is a dominant theme in contemporary public services, but the practical realities and challenges involved in using evidence in policy-making are formidable. Part of the problem is one of complexity. In health services and other public services, we are dealing with complex social interventions which act on complex social systems: things like league tables, performance measures, regulation and inspection, or funding reforms. These are not "magic bullets" which will always hit their target, but programmes whose effects are crucially dependent on context and implementation. Traditional methods of review focus on measuring and reporting on programme effectiveness, often find that the evidence is mixed or conflicting, and provide little or no clue as to why the intervention worked or did not work when applied in different contexts or circumstances, deployed by different stakeholders, or used for different purposes. This paper offers a model of research synthesis which is designed to work with complex social interventions or programmes, and which is based on the emerging "realist" approach to evaluation. It provides an explanatory analysis aimed at discerning what works for whom, in what circumstances, in what respects and how. The first step is to make explicit the programme theory (or theories): the underlying assumptions about how an intervention is meant to work and what impacts it is expected to have. We then look for empirical evidence to populate this theoretical framework, supporting, contradicting or modifying the programme theories as we go. The results of the review combine theoretical understanding and empirical evidence, and focus on explaining the relationship between the context in which the intervention is applied, the mechanisms by which it works and the outcomes which are produced. The aim is to enable decision-makers to reach a deeper understanding of the intervention and how it can be made to work most effectively.
Realist review does not provide simple answers to complex questions. It will not tell policy-makers or managers whether something works or not, but will provide the policy and practice community with the kind of rich, detailed and highly practical understanding of complex social interventions which is likely to be of much more use to them when planning and implementing programmes at a national, regional or local level.


BMJ | 1997

How to read a paper: Papers that go beyond numbers (qualitative research)

Trisha Greenhalgh; Rod S. Taylor

Epidemiologist Nick Black has argued that a finding or a result is more likely to be accepted as a fact if it is quantified (expressed in numbers) than if it is not.1 There is little or no scientific evidence, for example, to support the well known "facts" that one couple in 10 is infertile, or that one man in 10 is homosexual. Yet, observes Black, most of us are happy to accept uncritically such simplified, reductionist, and blatantly incorrect statements so long as they contain at least one number. Researchers who use qualitative methods seek a deeper truth. They aim to "study things in their natural setting, attempting to make sense of, or interpret, phenomena in terms of the meanings people bring to them,"2 and they use "a holistic perspective which preserves the complexities of human behaviour."1

Summary points

Qualitative methods aim to make sense of, or interpret, phenomena in terms of the meanings people bring to them

Qualitative research may define preliminary questions which can then be addressed in quantitative studies

A good qualitative study will address a clinical problem through a clearly formulated question and using more than one research method (triangulation)

Analysis of qualitative data can and should be done using explicit, systematic, and reproducible methods

Questions such as "How many parents would consult their general practitioner when their child has a mild temperature?" or "What proportion of smokers have tried to give up?" clearly need answering through quantitative methods. But questions like "Why do parents worry so much about their children's temperature?" and "What stops people giving up smoking?" cannot and should not be answered by leaping in and measuring the first aspect of the problem that we (the outsiders) think might be important. Rather, we need to listen to what people have to say, …


BMJ | 2014

Evidence based medicine: a movement in crisis?

Trisha Greenhalgh; Jeremy Howick; Neal Maskrey

Trisha Greenhalgh and colleagues argue that, although evidence based medicine has had many benefits, it has also had some negative unintended consequences. They offer a preliminary agenda for the movement’s renaissance, refocusing on providing useable evidence that can be combined with context and professional expertise so that individual patients get optimal treatment


BMJ | 2010

Why did the Lancet take so long?

Trisha Greenhalgh

The retraction of the infamous MMR paper may be overdue, but it is a good thing for science


BMJ | 1997

How to read a paper: Papers that summarise other papers (systematic reviews and meta-analyses)

Trisha Greenhalgh

Remember the essays you used to write as a student? You would browse through the indexes of books and journals until you came across a paragraph that looked relevant, and copied it out. If anything you found did not fit in with the theory you were proposing, you left it out. This, more or less, constitutes the methodology of the journalistic review: an overview of primary studies which have not been identified or analysed in a systematic (standardised and objective) way.

Summary points

A systematic review is an overview of primary studies that used explicit and reproducible methods

A meta-analysis is a mathematical synthesis of the results of two or more primary studies that addressed the same hypothesis in the same way

Although meta-analysis can increase the precision of a result, it is important to ensure that the methods used for the review were valid and reliable

In contrast, a systematic review is an overview of primary studies which contains an explicit statement of objectives, materials, and methods and has been conducted according to explicit and reproducible methodology (fig 1).

Fig 1 Methodology for a systematic review of randomised controlled trials1

Some advantages of the systematic review are given in box 1. When a systematic review is undertaken, not only must the search for relevant articles be thorough and objective, but the criteria used to reject articles as "flawed" must be explicit and independent of the results of those trials. The most enduring and useful systematic reviews, notably those undertaken by the Cochrane Collaboration, are regularly updated to incorporate new evidence.2

Box 1: Advantages of systematic reviews3
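
The "mathematical synthesis" the summary points refer to can be sketched as a fixed effect (inverse variance) pooling. The effect sizes and standard errors below are invented purely for illustration, not data from any real studies:

```python
import math

# Hypothetical treatment effects (log odds ratios) and standard errors from
# three primary studies addressing the same hypothesis in the same way.
# These numbers are made up for illustration only.
effects = [-0.40, -0.25, -0.55]
std_errors = [0.20, 0.15, 0.30]

# Fixed effect (inverse variance) meta-analysis: each study is weighted by
# the reciprocal of its variance, so more precise studies contribute more.
weights = [1 / se ** 2 for se in std_errors]
pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

# 95% confidence interval for the pooled effect.
ci_low, ci_high = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
```

The pooled standard error comes out smaller than that of any single study, which is the sense in which meta-analysis "can increase the precision of a result".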


BMJ | 1999

Narrative based medicine: narrative based medicine in an evidence based world.

Trisha Greenhalgh

This is the last in a series of five articles on narrative based medicine. In a widely quoted riposte to critics who accused them of naive empiricism, Sackett and colleagues claimed that "the practice of evidence based medicine means integrating individual clinical expertise with the best available external clinical evidence. By individual clinical expertise we mean the proficiency and judgment that individual clinicians acquire through clinical experience and clinical practice."1 Sackett and colleagues were anxious to acknowledge that there is an art to medicine as well as an objective empirical science, but they did not attempt to define or categorise the elusive quality of clinical competence. This article explores the dissonance between the "science" of objective measurement2 and the "art" of clinical proficiency and judgment,3-5 and attempts to integrate these different perspectives on clinical method.

Summary points

Even "evidence based" clinicians uphold the importance of clinical expertise and judgment

Clinical method is an interpretive act which draws on narrative skills to integrate the overlapping stories told by patients, clinicians, and test results

The art of selecting the most appropriate medical maxim for a particular clinical decision is acquired largely through the accumulation of "case expertise" (the stories or "illness scripts" of patients and clinical anecdotes)

The dissonance we experience when trying to apply research findings to the clinical encounter often occurs when we abandon the narrative-interpretive paradigm and try to get by on "evidence" alone

Science is concerned with the formulation and attempted falsification of hypotheses using reproducible methods that allow the construction of generalisable statements about how the universe behaves.
Conventional medical training teaches students to view medicine as a science and the doctor as an impartial investigator who builds differential diagnoses as if they were scientific theories and who excludes competing possibilities in a manner …


BMJ | 1997

How to read a paper. Papers that report diagnostic or screening tests.

Trisha Greenhalgh

If you are new to the concept of validating diagnostic tests, the following example may help you. Ten men are awaiting trial for murder. Only three of them actually committed a murder; the seven others are innocent of any crime. A jury hears each case and finds six of the men guilty of murder. Two of the convicted are true murderers. Four men are wrongly imprisoned. One murderer walks free. This information can be expressed in what is known as a two by two table (table 1). Note that the "truth" (whether or not the men really committed a murder) is expressed along the horizontal title row, whereas the jury's verdict (which may or may not reflect the truth) is expressed down the vertical column.

Table 1 Two by two table showing outcome of trial for 10 men accused of murder

                     Murderer   Not murderer
Found guilty             2            4
Found not guilty         1            3

These figures, if they are typical, reflect several features of this particular jury. These five features constitute, respectively, the sensitivity, specificity, positive predictive value, negative predictive value, and accuracy of this jury's performance. The rest of this article considers these five features applied to diagnostic (or screening) tests when compared with a "true" diagnosis or gold standard. A sixth feature, the likelihood ratio, is introduced at the end of the article. Our window cleaner told me that he had been feeling thirsty recently and had …
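
The jury arithmetic above translates directly into the five measures named in the text. A minimal sketch (the counts are those stated in the example; the variable names are my own):

```python
# Two by two counts from the jury example: 3 true murderers, 7 innocent
# men, 6 convictions in total (2 correct, 4 wrongful), 1 murderer acquitted.
tp = 2  # true murderers found guilty
fp = 4  # innocent men found guilty
fn = 1  # murderer found not guilty
tn = 3  # innocent men found not guilty

sensitivity = tp / (tp + fn)                # proportion of murderers convicted
specificity = tn / (tn + fp)                # proportion of innocent men acquitted
ppv = tp / (tp + fp)                        # proportion of convictions that are correct
npv = tn / (tn + fn)                        # proportion of acquittals that are correct
accuracy = (tp + tn) / (tp + fp + fn + tn)  # proportion of all verdicts that are correct
lr_positive = sensitivity / (1 - specificity)  # likelihood ratio of a guilty verdict
```

Here sensitivity is 2/3, specificity 3/7, and accuracy 0.5, matching the counts in the example; the same formulas apply when a diagnostic test's results are compared against a gold standard.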


BMJ | 2001

Complexity science: Complexity and clinical care

Tim Wilson; Tim Holt; Trisha Greenhalgh

This is the second in a series of four articles Biological and social systems are inherently complex, so it is hardly surprising that few if any human illnesses can be said to have a single “cause” or “cure.”1 This article applies the principles introduced in the introductory article in this series2 to three specific clinical areas: the control of blood glucose levels in diabetes, the management of diagnostic uncertainty, and health promotion. A complex adaptive system is a collection of individual agents with freedom to act in ways that are not always totally predictable, and whose actions are interconnected so that the action of one part changes the context for other agents.2 In relation to human health and illness there are several levels of such systems. For all these reasons neither illness nor human behaviour is predictable and neither can safely be “modelled” in a simple cause and effect system.3 The human body is not a machine and its malfunctioning cannot be adequately analysed by breaking the …


BMJ | 2001

Computer assisted learning in undergraduate medical education

Trisha Greenhalgh

It is becoming "a truth universally acknowledged" that the education of undergraduate medical students will be enhanced through the use of computer assisted learning. Access to the wide range of online options illustrated in the figure must surely make learning more exciting, effective, and likely to be retained. This assumption is potentially but by no means inevitably correct.

Box 1: Why fund computer assisted learning?

Computer assisted learning is inevitable: Individual lecturers and departments are already beginning to introduce a wide range of computer based applications, sometimes in a haphazard way. Planned and coordinated development is better than indiscriminate expansion

It is convenient and flexible: Courses supported by computer assisted learning applications may require fewer face to face lectures and seminars and place fewer geographical and temporal constraints on staff and students. Students at peripheral hospitals or primary care centres may benefit in particular

Unique presentational benefits: Computer presentation is particularly suited to subjects that are visually intensive, detail oriented, and difficult to conceptualise, such as complex biochemical processes or microscopic images.1 Furthermore, "virtual" cases may reduce the need to use animal or human tissue in learning

Personalised learning: Each learner can progress at his or her preferred pace. They can repeat, interrupt, and resume at will, which may have particular advantages for weaker students

Economies of scale: Once an application has been set up, the incremental cost of offering it to additional students is relatively small

Competitive advantage: Potential applicants may use the quality of information technology to discriminate between medical schools. A "leading edge" virtual campus is likely to attract good students

Achieves the ultimate goal of higher education: The goal is to link people into learning communities. Computer applications, especially the internet and world wide web, are an extremely efficient way of doing this2

Expands pedagogical horizons: The most controversial argument for …

Collaboration


Dive into Trisha Greenhalgh's collaborations.

Top Co-Authors

Stephanie Jc Taylor, Queen Mary University of London
Chris Griffiths, Queen Mary University of London
Anna Schwappach, Queen Mary University of London
Eleni Epiphaniou, Queen Mary University of London
Hannah L Parke, Queen Mary University of London
Neetha Purushotham, Queen Mary University of London
Sadhana Jacob, Queen Mary University of London
Aziz Sheikh, Health Science University