
Publications


Featured research published by Abby Haynes.


Social Science & Medicine | 2015

The SPIRIT Action Framework: A structured approach to selecting and testing strategies to increase the use of research in policy

Sally Redman; Tari Turner; Huw Davies; Anna Williamson; Abby Haynes; Sue Brennan; Andrew Milat; Denise O'Connor; Fiona M. Blyth; Louisa Jorm; Sally Green

The recent proliferation of strategies designed to increase the use of research in health policy (knowledge exchange) demands better application of contemporary conceptual understandings of how research shapes policy. Predictive models, or action frameworks, are needed to organise existing knowledge and enable a more systematic approach to the selection and testing of intervention strategies. Useful action frameworks need to meet four criteria: have a clearly articulated purpose; be informed by existing knowledge; provide an organising structure to build new knowledge; and be capable of guiding the development and testing of interventions. This paper describes the development of the SPIRIT Action Framework. A literature search and interviews with policy makers identified modifiable factors likely to influence the use of research in policy. An iterative process was used to combine these factors into a pragmatic tool which meets the four criteria. The SPIRIT Action Framework can guide conceptually-informed practical decisions in the selection and testing of interventions to increase the use of research in policy. The SPIRIT Action Framework hypothesises that a catalyst is required for the use of research, the response to which is determined by the capacity of the organisation to engage with research. Where there is sufficient capacity, a series of research engagement actions might occur that facilitate research use. These hypotheses are being tested in ongoing empirical work.


Health Research Policy and Systems | 2017

Development and validation of SEER (Seeking, Engaging with and Evaluating Research): a measure of policymakers’ capacity to engage with and use research

Sue Brennan; Joanne E. McKenzie; Tari Turner; Sally Redman; Steve R. Makkar; Anna Williamson; Abby Haynes; Sally Green

Background: Capacity building strategies are widely used to increase the use of research in policy development. However, a lack of well-validated measures for policy contexts has hampered efforts to identify priorities for capacity building and to evaluate the impact of strategies. We aimed to address this gap by developing SEER (Seeking, Engaging with and Evaluating Research), a self-report measure of individual policymakers’ capacity to engage with and use research.

Methods: We used the SPIRIT Action Framework to identify pertinent domains and guide development of items for measuring each domain. Scales covered (1) individual capacity to use research (confidence in using research, value placed on research, individual perceptions of the value their organisation places on research, supporting tools and systems), (2) actions taken to engage with research and researchers, and (3) use of research to inform policy (extent and type of research use). A sample of policymakers engaged in health policy development provided data to examine scale reliability (internal consistency, test-retest) and validity (relation to measures of similar concepts, relation to a measure of intention to use research, internal structure of the individual capacity scales).

Results: Response rates were 55% (150/272 people, 12 agencies) for the validity and internal consistency analyses, and 54% (57/105 people, 9 agencies) for test-retest reliability. The individual capacity scales demonstrated adequate internal consistency reliability (alpha coefficients > 0.7 for all four scales) and test-retest reliability (intra-class correlation coefficients > 0.7 for three scales and 0.59 for the fourth scale). Scores on the individual capacity scales converged as predicted with measures of similar concepts (moderate correlations of > 0.4), and confirmatory factor analysis provided evidence that the scales measured related but distinct concepts. Items in each of these four scales related as predicted to concepts in the measurement model derived from the SPIRIT Action Framework. Evidence about the reliability and validity of the research engagement actions and research use scales was equivocal.

Conclusions: Initial testing of SEER suggests that the four individual capacity scales may be used in policy settings to examine current capacity and identify areas for capacity building. The relation between capacity, research engagement actions and research use requires further investigation.


Journal of Health Communication | 2014

Reaching 'an audience that you would never dream of speaking to': influential public health researchers' views on the role of news media in influencing policy and public understanding

Simon Chapman; Abby Haynes; Gemma Derrick; Heidi Sturk; Wayne Hall; Alexis St.George

While governments and academic institutions urge researchers to engage with news media, traditional academic values of public disengagement have inhibited many from giving high priority to media activity. In this interview-based study, the authors report on the views about news media engagement, and the strategies used, of 36 peer-voted leading Australian public health researchers across six fields. The authors consider these researchers' views about the role and importance of media in influencing policy, their reflections on effective and ineffective media communicators, and the strategies they used to retain credibility and influence while engaging with the news media. A willingness and capacity to engage with the mass media was seen as an essential attribute of influential public health researchers.


International Journal of Social Research Methodology | 2015

Developing definitions for a knowledge exchange intervention in health policy and program agencies: reflections on process and value

Abby Haynes; Tari Turner; Sally Redman; Andrew Milat; Gabriel Moore

The development of definitions is an integral part of the research process but is often poorly described. This paper details the iterative development of five definitions: Policy, Health policy-maker, Health policy agency, Policy documents, and Research findings. We describe the challenges of developing definitions in a large multidisciplinary team and the important methodological repercussions. We identify four factors that were most helpful in this process: (1) An emphasis on fit-for-purpose functionality, (2) Consultation with in-context experts, (3) Our willingness to amend terms as well as definitions, and to revisit some methods and goals as a consequence, and (4) Agreement that we would satisfice: accept ‘good enough’ solutions rather than struggle for optimality and consensus.


Australian and New Zealand Journal of Public Health | 2012

A bibliometric analysis of research on Indigenous health in Australia, 1972–2008

Gemma Derrick; Andrew Hayen; Simon Chapman; Abby Haynes; Berenika M. Webster; Ian Anderson

Objective: To determine the growth patterns and citation volume of research publications referring to Indigenous health in Australia from 1972 to 2008 compared to seven selected health fields.


Health Research Policy and Systems | 2015

The development of ORACLe: a measure of an organisation’s capacity to engage in evidence-informed health policy

Steve R. Makkar; Tari Turner; Anna Williamson; Jordan J. Louviere; Sally Redman; Abby Haynes; Sally Green; Sue Brennan

Background: Evidence-informed policymaking is more likely if organisations have cultures that promote research use and invest in resources that facilitate staff engagement with research. Measures of organisations’ research use culture and capacity are needed to assess current capacity, identify opportunities for improvement, and examine the impact of capacity-building interventions. The aim of the current study was to develop a comprehensive system to measure and score organisations’ capacity to engage with and use research in policymaking, which we entitled ORACLe (Organisational Research Access, Culture, and Leadership).

Methods: We used a multifaceted approach to develop ORACLe. Firstly, we reviewed the available literature to identify key domains of organisational tools and systems that may facilitate research use by staff. We interviewed senior health policymakers to verify the relevance and applicability of these domains. This information was used to generate an interview schedule that focused on seven key domains of organisational capacity. The interview was pilot-tested within four Australian policy agencies. A discrete choice experiment (DCE) was then undertaken with an expert sample to establish the relative importance of these domains. These data were used to produce a scoring system for ORACLe.

Results: The ORACLe interview was developed, comprising 23 questions addressing seven domains of organisational capacity and tools that support research use: (1) documented processes for policymaking; (2) leadership training; (3) staff training; (4) research resources (e.g. database access); and systems to (5) generate new research, (6) undertake evaluations, and (7) strengthen relationships with researchers. From the DCE data, a conditional logit model was estimated to calculate total scores that took into account the relative importance of the seven domains. The model indicated that our expert sample placed the greatest importance on domains (2), (3) and (4).

Conclusion: We utilised qualitative and quantitative methods to develop a system to assess and score organisations’ capacity to engage with and apply research to policy. Our measure assesses a broad range of capacity domains and identifies the relative importance of these capacities. ORACLe data can be used by organisations keen to increase their use of evidence to identify areas for further development.


Scientometrics | 2010

A cautionary bibliometric tale of two cities

Gemma Derrick; Heidi Sturk; Abby Haynes; Simon Chapman; Wayne Hall

Reliability of citation searches is a cornerstone of bibliometric research. The authors compare simultaneous search returns at two sites to demonstrate discrepancies that can occur as a result of differences in institutional subscriptions to the Web of Science and Web of Knowledge. Such discrepancies may have significant implications for the reliability of bibliometric research in general, but also for the calculation of individual and group indices used for promotion and funding decisions. The authors caution care when describing the methods used in bibliometric analysis and when evaluating researchers from different institutions. In both situations a description of the specific databases used would enable greater reliability.


Implementation Science | 2015

Figuring out fidelity: A worked example of the methods used to identify, critique and revise the essential elements of a contextualised intervention in health policy agencies

Abby Haynes; Sue Brennan; Sally Redman; Anna Williamson; Gisselle Gallego; Phyllis Butow

Background: In this paper, we identify and respond to the fidelity assessment challenges posed by novel contextualised interventions (i.e. interventions that are informed by composite social and psychological theories and which incorporate standardised and flexible components in order to maximise effectiveness in complex settings). We (a) describe the difficulties of, and propose a method for, identifying the essential elements of a contextualised intervention; (b) provide a worked example of an approach for critiquing the validity of putative essential elements; and (c) demonstrate how essential elements can be refined during a trial without compromising the fidelity assessment.

Methods: We used an exploratory test-and-refine process, drawing on empirical evidence from the process evaluation of Supporting Policy In health with Research: an Intervention Trial (SPIRIT). Mixed methods data were triangulated to identify, critique and revise how the intervention’s essential elements should be articulated and scored.

Results: Over 50 provisional elements were refined to a final list of 20 and the scoring rationalised. Six (often overlapping) challenges to the validity of the essential elements were identified. They were (1) redundant: the element was not essential; (2) poorly articulated: unclear, too specific or not specific enough; (3) infeasible: it was not possible to implement the essential element as intended; (4) ineffective: the element did not effectively deliver the change principles; (5) paradoxical: counteracting vital goals or change principles; or (6) absent or suboptimal: additional or more effective ways of operationalising the theory were identified. We also identified potentially valuable ‘prohibited’ elements that could be used to help reduce threats to validity.

Conclusions: We devised a method for critiquing the construct validity of our intervention’s essential elements and modifying how they were articulated and measured, while simultaneously using them as fidelity indicators. This process could be used or adapted for other contextualised interventions, taking evaluators closer to making theoretically and contextually sensitive decisions upon which to base fidelity assessments.


BMC Medicine | 2015

Research impact: neither quick nor easy

Sally Redman; Abby Haynes; Anna Williamson

Greenhalgh and Fahy’s paper about the 2014 Research Excellence Framework provides insights into the challenges of assessing research impact. Future research assessment exercises should consider how best to include measurement of indirect and non-linear impact and whether efforts in knowledge transfer and co-production should be explicitly recognised. Greenhalgh and Fahy’s findings also demonstrate that the structure of the assessment exercise can privilege certain kinds of research and may therefore miss some research that has a high impact on policy and practice. There are a growing number of courses, tools, and funding models to assist researchers in making an impact, although as yet there is little evidence about whether these approaches work in practice. Please see related article: http://www.biomedcentral.com/1741-7015/13/232.


Evidence & Policy: A Journal of Research, Debate and Practice | 2016

The pivotal position of 'liaison people': facilitating a research utilisation intervention in policy agencies

Abby Haynes; Phyllis Butow; Sue Brennan; Anna Williamson; Sally Redman; Stacy M. Carter; Gisselle Gallego; Sian Rudge

This paper explores the enormous variation in views, championing behaviours and impacts of liaison people: staff nominated to facilitate, tailor and promote SPIRIT (a research utilisation intervention) …

Collaboration


Dive into Abby Haynes's collaborations.

Top Co-Authors

Anna Williamson

University of New South Wales

Wayne Hall

University of Queensland

Heidi Sturk

University of Queensland
