Sue Brennan
Monash University
Publications
Featured research published by Sue Brennan.
Marine Pollution Bulletin | 1994
Douglas A. Holdway; Sue Brennan; Jorma T. Ahokas
Hepatic ethoxycoumarin O-deethylase (ECOD) and ethoxyresorufin O-deethylase (EROD) activities, and serum sorbitol dehydrogenase (s-SDH), were measured over 3 years in sand flathead (Platycephalus bassensis) collected from Port Phillip Bay, Australia. Significant enzyme induction generally occurred in regions closest to highly industrialised and urbanised areas relative to undeveloped reference areas of the bay. High s-SDH levels were associated with lower hepatic microsomal ECOD and EROD levels. There were no sex differences in liver ECOD or s-SDH in any sampling period, and sex differences in EROD activity were only significant in September 1990, when males had significantly higher activities than females (47.0 pmol min−1 mg protein−1 compared with 28.4 pmol min−1 mg protein−1). Liver EROD activity in sand flathead from Hobsons Bay was positively correlated with total freshwater input, mainly from the Yarra River, suggesting PAHs as a possible cause of the observed induction.
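The positive correlation reported above (EROD activity versus freshwater input) is a standard Pearson product-moment correlation. A minimal sketch follows; the paired values below are invented for illustration and are not the study's measurements.

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient of two sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical freshwater input (ML/day) and EROD activity
# (pmol min-1 mg protein-1) -- illustrative values only.
freshwater = [200, 450, 700, 900, 1200]
erod = [18.0, 24.5, 31.0, 35.5, 44.0]
print(pearson_r(freshwater, erod))  # close to +1: strong positive correlation
```

A value near +1 (as here, by construction) would correspond to the strong positive association the abstract describes.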
Implementation Science | 2013
Sue Brennan; Marije Bosch; Heather A Buchan; Sally Green
Background: Measuring team factors in evaluations of Continuous Quality Improvement (CQI) may provide important information for enhancing CQI processes and outcomes; however, the large number of potentially relevant factors and associated measurement instruments makes inclusion of such measures challenging. This review aims to provide guidance on the selection of instruments for measuring team-level factors by systematically collating, categorizing, and reviewing quantitative self-report instruments.
Methods: Data sources: We searched MEDLINE, PsycINFO, and Health and Psychosocial Instruments; reference lists of systematic reviews; and citations and references of the main report of instruments. Study selection: To determine the scope of the review, we developed and used a conceptual framework designed to capture factors relevant to evaluating CQI in primary care (the InQuIRe framework). We included papers reporting development or use of an instrument measuring factors relevant to teamwork. Data extracted included instrument purpose; theoretical basis, constructs measured and definitions; development methods and assessment of measurement properties. Analysis and synthesis: We used qualitative analysis of instrument content and our initial framework to develop a taxonomy for summarizing and comparing instruments. Instrument content was categorized using the taxonomy, illustrating coverage of the InQuIRe framework. Methods of development and evidence of measurement properties were reviewed for instruments with potential for use in primary care.
Results: We identified 192 potentially relevant instruments, 170 of which were analyzed to develop the taxonomy. Eighty-one instruments measured constructs relevant to CQI teams in primary care, with content covering teamwork context (45 instruments measured enabling conditions or attitudes to teamwork), team process (57 instruments measured teamwork behaviors), and team outcomes (59 instruments measured perceptions of the team or its effectiveness). Forty instruments were included for full review, many with a strong theoretical basis. Evidence supporting measurement properties was limited.
Conclusions: Existing instruments cover many of the factors hypothesized to contribute to QI success. With further testing, use of these instruments measuring team factors in evaluations could aid our understanding of the influence of teamwork on CQI outcomes. Greater consistency in the factors measured and choice of measurement instruments is required to enable synthesis of findings for informing policy and practice.
Social Science & Medicine | 2015
Sally Redman; Tari Turner; Huw Davies; Anna Williamson; Abby Haynes; Sue Brennan; Andrew Milat; Denise O'Connor; Fiona M. Blyth; Louisa Jorm; Sally Green
The recent proliferation of strategies designed to increase the use of research in health policy (knowledge exchange) demands better application of contemporary conceptual understandings of how research shapes policy. Predictive models, or action frameworks, are needed to organise existing knowledge and enable a more systematic approach to the selection and testing of intervention strategies. Useful action frameworks need to meet four criteria: have a clearly articulated purpose; be informed by existing knowledge; provide an organising structure to build new knowledge; and be capable of guiding the development and testing of interventions. This paper describes the development of the SPIRIT Action Framework. A literature search and interviews with policy makers identified modifiable factors likely to influence the use of research in policy. An iterative process was used to combine these factors into a pragmatic tool which meets the four criteria. The SPIRIT Action Framework can guide conceptually-informed practical decisions in the selection and testing of interventions to increase the use of research in policy. The SPIRIT Action Framework hypothesises that a catalyst is required for the use of research, the response to which is determined by the capacity of the organisation to engage with research. Where there is sufficient capacity, a series of research engagement actions might occur that facilitate research use. These hypotheses are being tested in ongoing empirical work.
Implementation Science | 2012
Sue Brennan; Marije Bosch; Heather A Buchan; Sally Green
Background: Continuous quality improvement (CQI) methods are widely used in healthcare; however, the effectiveness of the methods is variable, and evidence about the extent to which contextual and other factors modify effects is limited. Investigating the relationship between these factors and CQI outcomes poses challenges for those evaluating CQI, among the most complex of which relate to the measurement of modifying factors. We aimed to provide guidance to support the selection of measurement instruments by systematically collating, categorising, and reviewing quantitative self-report instruments.
Methods: Data sources: We searched MEDLINE, PsycINFO, and Health and Psychosocial Instruments, reference lists of systematic reviews, and citations and references of the main report of instruments. Study selection: The scope of the review was determined by a conceptual framework developed to capture factors relevant to evaluating CQI in primary care (the InQuIRe framework). Papers reporting development or use of an instrument measuring a construct encompassed by the framework were included. Data extracted included instrument purpose; theoretical basis, constructs measured and definitions; development methods and assessment of measurement properties. Analysis and synthesis: We used qualitative analysis of instrument content and our initial framework to develop a taxonomy for summarising and comparing instruments. Instrument content was categorised using the taxonomy, illustrating coverage of the InQuIRe framework. Methods of development and evidence of measurement properties were reviewed for instruments with potential for use in primary care.
Results: We identified 186 potentially relevant instruments, 152 of which were analysed to develop the taxonomy. Eighty-four instruments measured constructs relevant to primary care, with content measuring CQI implementation and use (19 instruments), organizational context (51 instruments), and individual factors (21 instruments). Forty-one instruments were included for full review. Development methods were often pragmatic, rather than systematic and theory-based, and evidence supporting measurement properties was limited.
Conclusions: Many instruments are available for evaluating CQI, but most require further use and testing to establish their measurement properties. Further development and use of these measures in evaluations should increase the contribution made by individual studies to our understanding of CQI and enhance our ability to synthesise evidence for informing policy and practice.
Implementation Science | 2015
Emma Tavender; Marije Bosch; Russell L. Gruen; Sally Green; Susan Michie; Sue Brennan; Jill J Francis; Jennie Ponsford; Jonathan Knott; Sue Meares; Tracy Smyth; Denise O’Connor
Background: Despite the availability of evidence-based guidelines for the management of mild traumatic brain injury in the emergency department (ED), variations in practice exist. Interventions designed to implement recommended behaviours can reduce this variation. Using theory to inform intervention development is advocated; however, there is no consensus on how to select or apply theory. Integrative theoretical frameworks, based on syntheses of theories and theoretical constructs relevant to implementation, have the potential to assist in the intervention development process. This paper describes the process of applying two theoretical frameworks to investigate the factors influencing recommended behaviours and the choice of behaviour change techniques and modes of delivery for an implementation intervention.
Methods: A stepped approach was followed: (i) identification of locally applicable and actionable evidence-based recommendations as targets for change, (ii) selection and use of two theoretical frameworks for identifying barriers to and enablers of change (Theoretical Domains Framework and Model of Diffusion of Innovations in Service Organisations) and (iii) identification and operationalisation of intervention components (behaviour change techniques and modes of delivery) to address the barriers and enhance the enablers, informed by theory, evidence and feasibility/acceptability considerations. We illustrate this process in relation to one recommendation, prospective assessment of post-traumatic amnesia (PTA) by ED staff using a validated tool.
Results: Four recommendations for managing mild traumatic brain injury were targeted with the intervention. The intervention targeting the PTA recommendation consisted of 14 behaviour change techniques and addressed 6 theoretical domains and 5 organisational domains. The mode of delivery was informed by six Cochrane reviews. It was delivered via five intervention components: (i) local stakeholder meetings, (ii) identification of local opinion leader teams, (iii) a train-the-trainer workshop for appointed local opinion leaders, (iv) local training workshops for delivery by trained local opinion leaders and (v) provision of tools and materials to prompt recommended behaviours.
Conclusions: Two theoretical frameworks were used in a complementary manner to inform intervention development in managing mild traumatic brain injury in the ED. The effectiveness and cost-effectiveness of the developed intervention are being evaluated in a cluster randomised trial, part of the Neurotrauma Evidence Translation (NET) program.
Health Research Policy and Systems | 2017
Sue Brennan; Joanne E. McKenzie; Tari Turner; Sally Redman; Steve R. Makkar; Anna Williamson; Abby Haynes; Sally Green
Background: Capacity building strategies are widely used to increase the use of research in policy development. However, a lack of well-validated measures for policy contexts has hampered efforts to identify priorities for capacity building and to evaluate the impact of strategies. We aimed to address this gap by developing SEER (Seeking, Engaging with and Evaluating Research), a self-report measure of individual policymakers’ capacity to engage with and use research.
Methods: We used the SPIRIT Action Framework to identify pertinent domains and guide development of items for measuring each domain. Scales covered (1) individual capacity to use research (confidence in using research, value placed on research, individual perceptions of the value their organisation places on research, supporting tools and systems), (2) actions taken to engage with research and researchers, and (3) use of research to inform policy (extent and type of research use). A sample of policymakers engaged in health policy development provided data to examine scale reliability (internal consistency, test-retest) and validity (relation to measures of similar concepts, relation to a measure of intention to use research, internal structure of the individual capacity scales).
Results: Response rates were 55% (150/272 people, 12 agencies) for the validity and internal consistency analyses, and 54% (57/105 people, 9 agencies) for test-retest reliability. The individual capacity scales demonstrated adequate internal consistency reliability (alpha coefficients > 0.7 for all four scales) and test-retest reliability (intra-class correlation coefficients > 0.7 for three scales and 0.59 for the fourth scale). Scores on the individual capacity scales converged as predicted with measures of similar concepts (moderate correlations of > 0.4), and confirmatory factor analysis provided evidence that the scales measured related but distinct concepts. Items in each of these four scales related as predicted to concepts in the measurement model derived from the SPIRIT Action Framework. Evidence about the reliability and validity of the research engagement actions and research use scales was equivocal.
Conclusions: Initial testing of SEER suggests that the four individual capacity scales may be used in policy settings to examine current capacity and identify areas for capacity building. The relation between capacity, research engagement actions and research use requires further investigation.
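The internal consistency criterion reported above (alpha coefficients > 0.7) is Cronbach's alpha. A minimal sketch of the computation follows; the item ratings below are synthetic, invented for illustration, and are not SEER data.

```python
from statistics import pvariance

def cronbach_alpha(items):
    """Cronbach's alpha for a scale.

    items: list of k lists, each holding one item's scores across all
    respondents (same respondent order in every list). Population
    variances are used throughout for consistency.
    """
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]  # per-respondent totals
    sum_item_var = sum(pvariance(col) for col in items)
    return k / (k - 1) * (1 - sum_item_var / pvariance(totals))

# Synthetic 5-point ratings: 3 items x 4 respondents (each inner list
# is one item's scores). Illustrative values only.
items = [
    [2, 4, 3, 5],  # item 1
    [3, 4, 3, 4],  # item 2
    [2, 5, 3, 4],  # item 3
]
print(round(cronbach_alpha(items), 3))  # → 0.889, above the 0.7 threshold
```

Test-retest reliability (the intra-class correlation coefficients in the abstract) would be computed separately from repeated administrations of the same scale.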
Systematic Reviews | 2016
Carole Lunny; Sue Brennan; Steve McDonald; Joanne E. McKenzie
Background: Overviews of systematic reviews attempt to systematically retrieve and summarise the results of multiple systematic reviews into a single document. Methods for conducting, interpreting and reporting overviews of reviews are in their infancy. To date, there has been no systematic review or evidence map examining the range of methods for overviews, nor of the evidence for using these methods. The objectives of the study are to develop and populate a framework of methods that have been or may be used in conducting, interpreting and reporting overviews of systematic reviews of interventions (stage I); create an evidence map of studies that have evaluated these methods (stage II); and identify and describe unique methodological challenges of overviews.
Methods: The research will be undertaken in two stages. For both stages, we plan to search methods collections (e.g. Cochrane Methodology Register, Meth4ReSyn library, AHRQ Effective Health Care Program) to identify eligible studies. These searches will be supplemented by searching reference lists and citation searching. Stage I: Methods used in overviews will be identified from articles describing methods for overviews, methods studies examining a cross section/cohort of overviews, guidance documents and commentaries. The identified methods will populate a framework of available methods for conducting an overview. Two reviewers will independently code included studies to develop the framework. Thematic analysis of the coded data will be used to categorise and describe methods. Stage II: Evaluations of the performance of methods will be identified from systematic reviews of methods studies and from methods studies themselves. Evaluations will be described and mapped to the framework of methods identified in stage I.
Discussion: The results of this process will be useful for mapping methods for overviews of systematic reviews, informing guidance, and identifying and prioritising methods research in this field.
Marine Environmental Research | 2003
E.T. Georgiades; Douglas A. Holdway; Sue Brennan; J.S. Butty; Ali Temara
The present study examines the impact of exposure to oil-derived products on the behaviour and physiology of the Australian 11-armed asteroid Coscinasterias muricata. Asteroids were exposed to dilutions of water-accommodated fraction (WAF) of Bass Strait stabilised crude oil, dispersed oil or burnt oil (n = 8) for 4 days, after which prey-localisation behaviour was examined immediately and following 2, 7 and 14 days of depuration in clean seawater. The prey-localisation behaviour of asteroids exposed to WAF and dispersed oil was significantly affected, though recovery was apparent following 7 and 14 days depuration, respectively. In contrast, there was no significant change in the prey-localisation behaviour of asteroids exposed to burnt oil. Behavioural impacts were correlated with the total petroleum hydrocarbon concentrations (C6-C36) in each exposure solution: WAF (1.8 mg l−1), dispersed oil (3.5 mg l−1) and burnt oil (1.14 mg l−1). The total microsomal cytochrome P450 content was significantly lower (P(Dunnett test) < 0.01) in asteroids exposed to dispersed oil than in any other asteroids, whilst asteroid alkaline phosphatase activity was not significantly affected (P(ANOVA) = 0.11). This study further documents the deleterious impact of dispersed oil on marine organisms and supports further research into in situ burning as an oil spill response measure less damaging to benthic macro-invertebrates.
Health Research Policy and Systems | 2015
Steve R. Makkar; Tari Turner; Anna Williamson; Jordan J. Louviere; Sally Redman; Abby Haynes; Sally Green; Sue Brennan
Background: Evidence-informed policymaking is more likely if organisations have cultures that promote research use and invest in resources that facilitate staff engagement with research. Measures of organisations’ research use culture and capacity are needed to assess current capacity, identify opportunities for improvement, and examine the impact of capacity-building interventions. The aim of the current study was to develop a comprehensive system to measure and score organisations’ capacity to engage with and use research in policymaking, which we entitled ORACLe (Organisational Research Access, Culture, and Leadership).
Method: We used a multifaceted approach to develop ORACLe. Firstly, we reviewed the available literature to identify key domains of organisational tools and systems that may facilitate research use by staff. We interviewed senior health policymakers to verify the relevance and applicability of these domains. This information was used to generate an interview schedule focused on seven key domains of organisational capacity. The interview was pilot-tested within four Australian policy agencies. A discrete choice experiment (DCE) was then undertaken with an expert sample to establish the relative importance of these domains. These data were used to produce a scoring system for ORACLe.
Results: The ORACLe interview was developed, comprising 23 questions that address seven domains of organisational capacity and tools supporting research use: (1) documented processes for policymaking; (2) leadership training; (3) staff training; (4) research resources (e.g. database access); and systems to (5) generate new research, (6) undertake evaluations, and (7) strengthen relationships with researchers. From the DCE data, a conditional logit model was estimated to calculate total scores that take into account the relative importance of the seven domains. The model indicated that our expert sample placed the greatest importance on domains (2), (3) and (4).
Conclusion: We utilised qualitative and quantitative methods to develop a system to assess and score organisations’ capacity to engage with and apply research to policy. Our measure assesses a broad range of capacity domains and identifies the relative importance of these capacities. ORACLe data can be used by organisations keen to increase their use of evidence to identify areas for further development.
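The scoring idea described above (raw domain scores weighted by importance estimates from the conditional logit model) can be sketched as a weighted sum. The seven domain labels follow the abstract, but the weights and scores below are hypothetical placeholders, not the published ORACLe estimates.

```python
# Hypothetical importance weights for the seven ORACLe domains --
# stand-ins for the conditional logit estimates, with domains
# (2), (3) and (4) weighted highest as the abstract reports.
WEIGHTS = {
    "documented_processes": 0.10,   # (1)
    "leadership_training": 0.20,    # (2)
    "staff_training": 0.18,         # (3)
    "research_resources": 0.17,     # (4)
    "generating_research": 0.12,    # (5)
    "undertaking_evaluations": 0.12,  # (6)
    "researcher_relationships": 0.11,  # (7)
}

def oracle_score(domain_scores, max_score=3):
    """Importance-weighted total, rescaled to 0-1.

    domain_scores: dict mapping each domain to a raw score in
    0..max_score (the 0-3 range here is an assumption for illustration).
    """
    total = sum(WEIGHTS[d] * s for d, s in domain_scores.items())
    return total / (max_score * sum(WEIGHTS.values()))

agency = {d: 2 for d in WEIGHTS}  # a uniformly mid-range agency
print(round(oracle_score(agency), 3))  # → 0.667
```

With uniform raw scores the weights cancel out; the weighting only changes the total when an agency is stronger in some domains than others, which is exactly what the DCE-derived weights are for.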
Implementation Science | 2015
Abby Haynes; Sue Brennan; Sally Redman; Anna Williamson; Gisselle Gallego; Phyllis Butow
Background: In this paper, we identify and respond to the fidelity assessment challenges posed by novel contextualised interventions (i.e. interventions that are informed by composite social and psychological theories and which incorporate standardised and flexible components in order to maximise effectiveness in complex settings). We (a) describe the difficulties of, and propose a method for, identifying the essential elements of a contextualised intervention; (b) provide a worked example of an approach for critiquing the validity of putative essential elements; and (c) demonstrate how essential elements can be refined during a trial without compromising the fidelity assessment.
Methods: We used an exploratory test-and-refine process, drawing on empirical evidence from the process evaluation of Supporting Policy In health with Research: an Intervention Trial (SPIRIT). Mixed methods data were triangulated to identify, critique and revise how the intervention’s essential elements should be articulated and scored.
Results: Over 50 provisional elements were refined to a final list of 20, and the scoring was rationalised. Six (often overlapping) challenges to the validity of the essential elements were identified: (1) redundant: the element was not essential; (2) poorly articulated: unclear, too specific or not specific enough; (3) infeasible: it was not possible to implement the essential element as intended; (4) ineffective: the element did not effectively deliver the change principles; (5) paradoxical: counteracting vital goals or change principles; or (6) absent or suboptimal: additional or more effective ways of operationalising the theory were identified. We also identified potentially valuable ‘prohibited’ elements that could be used to help reduce threats to validity.
Conclusions: We devised a method for critiquing the construct validity of our intervention’s essential elements and modifying how they were articulated and measured, while simultaneously using them as fidelity indicators. This process could be used or adapted for other contextualised interventions, taking evaluators closer to making theoretically and contextually sensitive decisions upon which to base fidelity assessments.