Joanna Reynolds
University of London
Publications
Featured research published by Joanna Reynolds.
Malaria Journal | 2013
Evelyn K. Ansah; Joanna Reynolds; Samson Akanpigbiam; Christopher J. M. Whitty; Clare Chandler
Background: The debate on rapid diagnostic tests (RDTs) for malaria has begun to shift from whether RDTs should be used, to how and under what circumstances their use can be optimized. This has increased the need for a better understanding of the complexities surrounding the role of RDTs in appropriate treatment of fever. Studies have focused on clinician practices, but few have sought to understand patient perspectives, beyond notions of acceptability.
Methods: This qualitative study aimed to explore patient and caregiver perceptions and experiences of RDTs following a trial to assess the introduction of the tests into routine clinical care at four health facilities in one district in Ghana. Six focus group discussions and one in-depth interview were carried out with those who had received an RDT with a negative test result.
Results: Patients had high expectations of RDTs. They welcomed the tests as aiding clinical diagnoses and as tools that could communicate their problem better than they could verbally. However, respondents also believed the tests could identify any cause of illness, beyond malaria. Experiences of patients suggested that RDTs were adopted into an existing system where patients are both physically and intellectually removed from diagnostic processes and where clinicians retain authority that supersedes tests and their results. In this situation, patients did not feel able to articulate a demand for test-driven diagnosis.
Conclusions: Improvements in communication between the health worker and patient, particularly to explain the capabilities of the test and the management of RDT-negative cases, may both manage patient expectations and promote patient demand for test-driven diagnoses.
Implementation Science | 2014
Joanna Reynolds; Deborah DiLiberto; Lindsay Mangham-Jefferies; Evelyn K. Ansah; Sham Lal; Hilda Mbakilwa; Katia Bruxvoort; Jayne Webster; Lasse S. Vestergaard; Shunmay Yeung; Toby Leslie; Eleanor Hutchinson; Hugh Reyburn; David G. Lalloo; David Schellenberg; Bonnie Cundill; Sarah G. Staedke; Virginia Wiseman; Catherine Goodman; Clare Chandler
Background: There is increasing recognition among trialists of the challenges in understanding how particular ‘real-life’ contexts influence the delivery and receipt of complex health interventions. Evaluations of interventions to change health worker and/or patient behaviours in health service settings exemplify these challenges. When interpreting evaluation data, deviation from intended intervention implementation is accounted for through process evaluations of fidelity, reach, and intensity. However, no such systematic approach has been proposed to account for the way evaluation activities may deviate in practice from assumptions made when data are interpreted.
Methods: A collective case study was conducted to explore experiences of undertaking evaluation activities in the real-life contexts of nine complex intervention trials seeking to improve appropriate diagnosis and treatment of malaria in varied health service settings. Multiple sources of data were used, including in-depth interviews with investigators, participant-observation of studies, and rounds of discussion and reflection.
Results and discussion: From our experiences of the realities of conducting these evaluations, we identified six key ‘lessons learned’ about ways to become aware of and manage aspects of the fabric of trials involving the interface of researchers, fieldworkers, participants and data collection tools that may affect the intended production of data and interpretation of findings. These lessons included: foster a shared understanding across the study team of how individual practices contribute to the study goals; promote and facilitate within-team communication for ongoing reflection on the progress of the evaluation; establish processes for ongoing collaboration and dialogue between sub-study teams; appoint a field research coordinator to bridge everyday project management with scientific oversight; collect and review reflective field notes on the progress of the evaluation to aid interpretation of outcomes; and use these approaches to identify and reflect on possible overlaps between the evaluation and the intervention.
Conclusion: The lessons we have drawn point to the principle of reflexivity that, we argue, needs to become part of standard practice in the conduct of evaluations of complex interventions, to promote more meaningful interpretations of the effects of an intervention and to better inform future implementation and decision-making.
PLOS ONE | 2015
Éimhín M. Ansbro; Michelle M. Gill; Joanna Reynolds; Katharine D. Shelley; Susan Strasser; Tabitha Sripipatana; Alexander Tshaka Ncube; Grace Tembo Mumba; Fern Terris-Prestholt; Rosanna W. Peeling; David Mabey
Syphilis affects 1.4 million pregnant women globally each year. Maternal syphilis causes congenital syphilis in over half of affected pregnancies, leading to early foetal loss, pregnancy complications, stillbirth and neonatal death. Syphilis is under-diagnosed in pregnant women. Point-of-care rapid syphilis tests (RST) allow for same-day treatment and address logistical barriers to testing encountered with standard Rapid Plasma Reagin testing. Recent literature emphasises that successful introduction of new health technologies requires healthcare worker (HCW) acceptance, effective training, quality monitoring and robust health systems. Following a successful pilot, the Zambian Ministry of Health (MoH) adopted RST into policy, integrating them into prevention of mother-to-child transmission of HIV clinics in four underserved Zambian districts. We compare HCW experiences, including challenges encountered in scaling up from a highly supported NGO-led pilot to a large-scale MoH-led national programme. Questionnaires were administered through structured interviews of 16 HCWs in two pilot districts and 24 HCWs in two different rollout districts. Supplementary data were gathered via stakeholder interviews, clinic registers and supervisory visits. Using a conceptual framework adapted from health technology literature, we explored RST acceptance and usability. Quantitative data were analysed using descriptive statistics. Key themes in qualitative data were explored using template analysis. Overall, HCWs accepted RST as learnable, suitable, effective tools to improve antenatal services, which were usable in diverse clinical settings. Changes in training, supervision and quality monitoring models between pilot and rollout may have influenced rollout HCW acceptance and compromised testing quality. While quality monitoring was integrated into national policy and training, implementation was limited during rollout despite financial support and mentorship. We illustrate that new health technology pilot research can rapidly translate into policy change and scale-up. However, training, supervision and quality assurance models should be reviewed and strengthened as rollout of the Zambian RST programme continues.
Qualitative Health Research | 2013
Joanna Reynolds; Molly Wood; Amy Mikhail; Tamanna Ahmad; Karimullah Karimullah; Mohibullah Motahed; Anwar Hazansai; Sayed Habib Baktash; Nadia Anwari; James Kizito; Ismail Mayan; Mark Rowland; Clare Chandler; Toby Leslie
In many malaria-endemic areas, including Afghanistan, overdiagnosis of malaria is common. Even when using parasite-based diagnostic tests prior to treatment, clinicians commonly prescribe antimalarial treatment following negative test results. This practice neglects alternative causes of fever, uses drugs unnecessarily, and might contribute to antimalarial drug resistance. We undertook a qualitative study among health workers using different malaria diagnostic methods in Afghanistan to explore perceptions of malaria diagnosis. Health workers valued diagnostic tests for their ability to confirm clinical suspicions of malaria via a positive result, but a negative result was commonly interpreted as an absence of diagnosis, legitimizing clinical diagnosis of malaria and prescription of antimalarial drugs. Prescribing decisions reflected uncertainty around tests and diagnosis, and were influenced by social- and health-system factors. Study findings emphasize the need for nuanced and context-specific guidance to change prescriber behavior and improve treatment of malarial and nonmalarial febrile illnesses.
Journal of Medical Ethics | 2008
Joanna Reynolds; Nicola Crichton; W. Fisher; Steven H. Sacks
Aims: The aims of the study were to explore expert opinion on the distinction between “research” and “audit”, and to determine the need for review by a National Health Service (NHS) Research Ethics Committee (REC). Background: Under current guidelines only “research” projects within the NHS require REC approval. Concerns have been expressed over difficulties in distinguishing between research and other types of project, and no existing guidelines appear to have been validated. The implications of this confusion include unnecessary REC applications, and crucially, the potential for ethically unsound projects to escape review. Methods: A three-stage Delphi method was chosen to explore expert opinion and develop consensus. Stage 1 comprised ten semi-structured interviews gathering opinion on distinguishing between types of project and how to determine need for ethical review. Stages 2 and 3 were questionnaires, asking 24 “experts” to rate levels of ethical concern and types of project for a series of questions. Anonymised responses from stage 2 were fed back in stage 3. The final responses were analysed for consensus. Results: Of 46 questions, consensus was achieved for 14 (30.4%) for level of ethical concern and for 15 (32.6%) for type of project. Conclusions: Several ideas proved discriminatory for classifying the type of project and assessing level of ethical concern, and they can be used to develop an algorithm to determine need for ethical review. There was little relationship between assessment of the level of ethical concern and classification of the project. There was inconsistency in defining and classifying studies as something other than “research” or “audit”.
Ethnography | 2017
Joanna Reynolds
Contemporary approaches to evaluating ‘complex’ social and health interventions are opening up spaces for methodologies attuned to examining contextual complexities, such as ethnography. Yet the alignment of the two agendas – evaluative and ethnographic – is not necessarily comfortable in practice. I reflect on experiences of conducting ethnographic research alongside a public health evaluation of a community-based initiative in the UK, using the lens of ‘missing out’ to examine intersections between my own ethnographic concerns and those of the communities under study. I examine potential opportunities posed by the discomfort of ‘missing out’, particularly for identifying the processes and spaces of inclusion and exclusion that contributed both to my ethnographic experiences and to the realities of the communities engaging with the initiative. This reveals productive possibilities for a focus on ‘missing out’ as a form of relating for evaluations of the impacts of such initiatives on health and social inequalities.
Trials | 2011
Joanna Reynolds; Peter Mangesho; Lasse S. Vestergaard; Clare Chandler
Objectives: This study aimed to explore the experiences of people participating in a clinical trial in Tanzania. We sought to understand the meaning attached to participation and how experiences of being in the trial related to participants’ original motivations for consenting, in order to explore appropriate strategies for recruitment in a developing country setting. Methods: We designed a qualitative study alongside a clinical observational trial of the efficacy and safety of artemisinin-based combination therapy (ACT) for malaria in patients concomitantly receiving antiretroviral therapy (ART) for HIV in Muheza, Tanzania. Focus-group discussions have been held with HIV-positive and HIV-negative people who have participated in the trial, and with HIV-positive people who were screened but who did not participate. These data have been triangulated with in-depth interviews with staff conducting the trial and delivering HIV care at the hospital where the trial was conducted. Data are being analysed using an iterative, line-by-line approach based on the principles of grounded theory, to identify units of meaning and develop themes and constructs from the data. Results: Analysis to date of eight FGDs and IDIs indicates a disconnect between the information given to trial participants in the recruitment and consent process and their understanding of the trial and its aims, with a few participants stating they did not realise they were part of a research study. This reflects, and may be attributable to, trial participants’ frequent conflation of the clinical encounter - testing for malaria - with the research encounter - the recruitment process - and an inability to distinguish between these as separate events. When describing recruitment, many participants framed their narrative around clinical events such as the malaria test and seeking treatment, the latter being considered the most important reason for joining the study.
The clinical context in which participants were screened and recruited appeared to influence their ability to interpret information about the trial and expectations for what may happen to them, raising questions about the nature of ‘informed consent’ in the recruitment process. Participants reported overwhelmingly positive experiences of participating in the trial, largely based around their access to numerous tests and free treatment, as well as reimbursement for transport and telephone costs. Being part of the trial was frequently conceptualised as receiving a ‘service’, valued chiefly for enabling participants to be observed and to know their health status. The perceived value of this ‘service’ was reflected in many participants’ reports of encouraging friends and family to attend for malaria testing, in order to access the service associated with the trial. Although this indicates again some confusion between clinical and research activities, it appeared that the ‘enactment’ of the trial – giving and receiving the service – offered the social space in which participants were likely to raise any concerns about aspects of the trial. Such concerns included questioning the ‘true’ aim of the study, and fears over blood taking. This suggests that it is within a relationship of interaction with trial staff and activities that comprehension about the meaning and value of participation can begin to emerge, thus highlighting the
Critical Public Health | 2007
Joanna Reynolds; Nicola Crichton
Sherman & Camprone-Piccardo's paper examines the methodological and conceptual differences between surveillance and research projects, addressing an important area relevant to the type of activitie...
Critical Public Health | 2018
Joanna Reynolds
Abstract: Engaging the community in initiatives to improve health and inequalities is a prominent feature of contemporary public health approaches. Yet, how ‘community’ might be differently interpreted and experienced through mechanisms of engagement is little understood, with potential implications for how the pathways of effect of such initiatives, and their impacts on health inequalities, might be evaluated. This study sought to explore how community was enacted through the delivery of an area-based, empowerment initiative underway in disadvantaged areas of England. An ethnographic approach was used to identify enactments of community arising around the core activities and decision-making processes of the resident-led initiative in two sites. Enactments comprised ‘boundary work’: the ongoing assertion and negotiation of boundaries around who or what was, and was not, eligible to contribute to decision-making, and/or benefit from the initiative. Boundary work arose around practices of connecting with and consulting residents, protecting locally defined interests and autonomy, negotiating different sets of interests, and navigating representation. The multiple, shifting enactments of community and its boundaries highlight implications for understanding processes of inclusion and exclusion inherent to community engagement, and for interpreting pathways between collective empowerment and improved health. The study also raises questions for evaluating similar complex, community initiatives, where community cannot be taken as a fixed analytical unit, but something continually in process through the interplay between the initiative and the wider context. This must inform interpretations of how, and for whom, community engagement might – or might not – improve health.
Health Systems and Reform | 2016
Clare Chandler; Helen Burchett; Louise E. Boyle; Olivia Achonduh; Anthony K. Mbonye; Deborah DiLiberto; Hugh Reyburn; Obinna Onwujekwe; Ane Haaland; Arantxa Roca-Feltrer; Frank Baiden; Wilfred F. Mbacham; Richard Ndyomugyenyi; Florence Nankya; Lindsay Mangham-Jefferies; Sîan E. Clarke; Hilda Mbakilwa; Joanna Reynolds; Sham Lal; Toby Leslie; Catherine Maiteki-Sebuguzi; Jayne Webster; Pascal Magnussen; Evelyn K. Ansah; Kristian Schultz Hansen; Eleanor Hutchinson; Bonnie Cundill; Shunmay Yeung; David Schellenberg; Sarah G. Staedke
Abstract—Rigorous evidence of “what works” to improve health care is in demand, but methods for the development of interventions have not been scrutinized in the same ways as methods for evaluation. This article presents and examines intervention development processes of eight malaria health care interventions in East and West Africa. A case study approach was used to draw out experiences and insights from multidisciplinary teams who undertook to design and evaluate these studies. Four steps appeared necessary for intervention design: (1) definition of scope, with reference to evaluation possibilities; (2) research to inform design, including evidence and theory reviews and empirical formative research; (3) intervention design, including consideration and selection of approaches and development of activities and materials; and (4) refining and finalizing the intervention, incorporating piloting and pretesting. Alongside these steps, projects produced theories, explicitly or implicitly, about (1) intended pathways of change and (2) how their intervention would be implemented. The work required to design interventions that meet and contribute to current standards of evidence should not be underestimated. Furthermore, the process should be recognized not only as technical but as the result of micro and macro social, political, and economic contexts, which should be acknowledged and documented in order to infer generalizability. Reporting of interventions should go beyond descriptions of final intervention components or techniques to encompass the development process. The role that evaluation possibilities play in intervention design should be brought to the fore in debates over health care improvement.