Publication


Featured research published by Ralph MacKinnon.


Advances in Simulation | 2016

Reporting guidelines for health care simulation research: Extensions to the CONSORT and STROBE statements

Adam Cheng; David Kessler; Ralph MacKinnon; Todd P. Chang; Vinay Nadkarni; Elizabeth A. Hunt; Jordan Duval-Arnould; Yiqun Lin; David A. Cook; Martin Pusic; Joshua Hui; David Moher; Matthias Egger; Marc Auerbach

Background Simulation-based research (SBR) is rapidly expanding but the quality of reporting needs improvement. For a reader to critically assess a study, the elements of the study need to be clearly reported. Our objective was to develop reporting guidelines for SBR by creating extensions to the Consolidated Standards of Reporting Trials (CONSORT) and Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) Statements. Methods An iterative multistep consensus-building process was used on the basis of the recommended steps for developing reporting guidelines. The consensus process involved the following: (1) developing a steering committee, (2) defining the scope of the reporting guidelines, (3) identifying a consensus panel, (4) generating a list of items for discussion via online premeeting survey, (5) conducting a consensus meeting, and (6) drafting reporting guidelines with an explanation and elaboration document. Results The following 11 extensions were recommended for CONSORT: item 1 (title/abstract), item 2 (background), item 5 (interventions), item 6 (outcomes), item 11 (blinding), item 12 (statistical methods), item 15 (baseline data), item 17 (outcomes/estimation), item 20 (limitations), item 21 (generalizability), and item 25 (funding). The following 10 extensions were recommended for STROBE: item 1 (title/abstract), item 2 (background/rationale), item 7 (variables), item 8 (data sources/measurement), item 12 (statistical methods), item 14 (descriptive data), item 16 (main results), item 19 (limitations), item 21 (generalizability), and item 22 (funding). An elaboration document was created to provide examples and explanation for each extension. Conclusions We have developed extensions for the CONSORT and STROBE Statements that can help improve the quality of reporting for SBR (Sim Healthcare 00:00-00, 2016).


Simulation in Healthcare | 2016

Reporting Guidelines for Health Care Simulation Research

Adam Cheng; David Kessler; Ralph MacKinnon; Todd P. Chang; Vinay Nadkarni; Elizabeth A. Hunt; Jordan Duval-Arnould; Yiqun Lin; David A. Cook; Martin Pusic; Joshua Hui; David Moher; Matthias Egger; Marc Auerbach

Introduction Simulation-based research (SBR) is rapidly expanding but the quality of reporting needs improvement. For a reader to critically assess a study, the elements of the study need to be clearly reported. Our objective was to develop reporting guidelines for SBR by creating extensions to the Consolidated Standards of Reporting Trials (CONSORT) and Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) Statements. Methods An iterative multistep consensus-building process was used on the basis of the recommended steps for developing reporting guidelines. The consensus process involved the following: (1) developing a steering committee, (2) defining the scope of the reporting guidelines, (3) identifying a consensus panel, (4) generating a list of items for discussion via online premeeting survey, (5) conducting a consensus meeting, and (6) drafting reporting guidelines with an explanation and elaboration document. Results The following 11 extensions were recommended for CONSORT: item 1 (title/abstract), item 2 (background), item 5 (interventions), item 6 (outcomes), item 11 (blinding), item 12 (statistical methods), item 15 (baseline data), item 17 (outcomes/estimation), item 20 (limitations), item 21 (generalizability), and item 25 (funding). The following 10 extensions were recommended for STROBE: item 1 (title/abstract), item 2 (background/rationale), item 7 (variables), item 8 (data sources/measurement), item 12 (statistical methods), item 14 (descriptive data), item 16 (main results), item 19 (limitations), item 21 (generalizability), and item 22 (funding). An elaboration document was created to provide examples and explanation for each extension. Conclusions We have developed extensions for the CONSORT and STROBE Statements that can help improve the quality of reporting for SBR.


Advances in Simulation | 2017

Conducting multicenter research in healthcare simulation: Lessons learned from the INSPIRE network

Adam Cheng; David Kessler; Ralph MacKinnon; Todd P. Chang; Vinay Nadkarni; Elizabeth A. Hunt; Jordan Duval-Arnould; Yiqun Lin; Martin Pusic; Marc Auerbach

Simulation-based research has grown substantially over the past two decades; however, relatively few published simulation studies are multicenter in nature. Multicenter research confers many distinct advantages over single-center studies, including larger sample sizes for more generalizable findings, sharing resources amongst collaborative sites, and promoting networking. Well-executed multicenter studies are more likely to improve provider performance and/or have a positive impact on patient outcomes. In this manuscript, we offer a step-by-step guide to conducting multicenter, simulation-based research based upon our collective experience with the International Network for Simulation-based Pediatric Innovation, Research and Education (INSPIRE). Like multicenter clinical research, simulation-based multicenter research can be divided into four distinct phases. Each phase has specific differences when applied to simulation research: (1) Planning phase, to define the research question, systematically review the literature, identify outcome measures, and conduct pilot studies to ensure feasibility and estimate power; (2) Project Development phase, when the primary investigator identifies collaborators, develops the protocol and research operations manual, prepares grant applications, obtains ethical approval and executes subsite contracts, registers the study in a clinical trial registry, forms a manuscript oversight committee, and conducts feasibility testing and data validation at each site; (3) Study Execution phase, involving recruitment and enrollment of subjects, clear communication and decision-making, quality assurance measures and data abstraction, validation, and analysis; and (4) Dissemination phase, where the research team shares results via conference presentations, publications, traditional media, social media, and implements strategies for translating results to practice. 
With this manuscript, we provide a guide to conducting quantitative multicenter research with a focus on simulation-specific issues.


BMJ Simulation and Technology Enhanced Learning | 2015

Self-motivated learning with gamification improves infant CPR performance, a randomised controlled trial

Ralph MacKinnon; R Stoeter; C Doherty; Catherine Fullwood; Adam Cheng; Vinay Nadkarni; Terese Stenfors-Hayes; Todd P. Chang

Background Effective paediatric basic life support improves survival and outcomes. Current cardiopulmonary resuscitation (CPR) training involves 4-yearly courses plus annual updates. Skills degrade by 3–6 months. No method has been described to motivate frequent and persistent CPR practice. To achieve this, we explored the use of competition and a leaderboard, as a gamification technique, on a CPR training feedback device, to increase CPR usage and performance. Objective To assess whether self-motivated CPR training with integrated CPR feedback improves quality of infant CPR over time, in comparison to no refresher CPR training. Design Randomised controlled trial (RCT) to assess the effect of self-motivated manikin-based learning on infant CPR skills over time. Setting A UK tertiary children's hospital. Participants 171 healthcare professionals randomly assigned to self-motivated CPR training (n=90) or no refresher CPR training (n=81) and followed for 26 weeks. Intervention The intervention comprised 24 h a day access to a CPR training feedback device and an anonymous leaderboard. The CPR training feedback device calculated a compression score based on rate, depth, hand position and release, and a ventilation score derived from rate and volume. Main outcome measure The outcome measure was infant CPR technical skill performance score, defined as the mean of the cardiac compression and ventilation scores provided by the CPR training feedback device software. The primary analysis considered change in score from baseline to 6 months. Results Overall, the control group showed little change in their scores (median 0, IQR −7.00 to 5.00) from baseline to 6 months, while the intervention group had a slight median increase of 0.50 (IQR 0.00–33.50). The changes differed significantly between the two groups (p<0.001).
Conclusions A significant effect on CPR performance was demonstrated by access to self-motivated refresher CPR training, a competitive leaderboard and a CPR training feedback device.
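The scoring described above (an overall score formed by averaging a compression score and a ventilation score for each session) can be sketched as follows. This is a minimal illustration, not the device's actual software: the function name and the assumption of a 0–100 scale for each sub-score are my own.

```python
# Sketch of the overall-score calculation described in the abstract:
# the feedback device reports a compression score and a ventilation score,
# and the session's overall score is their mean. The 0-100 bounds below
# are an illustrative assumption, not a documented property of the device.

def overall_score(compression: float, ventilation: float) -> float:
    """Mean of the two sub-scores reported by the CPR feedback device."""
    for s in (compression, ventilation):
        if not 0.0 <= s <= 100.0:
            raise ValueError("sub-scores are expected on a 0-100 scale")
    return (compression + ventilation) / 2.0

# Example: a session scoring 80 on compressions and 60 on ventilations
print(overall_score(80.0, 60.0))  # 70.0
```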


Pediatric Anesthesia | 2017

Ten years of simulation-based training in pediatric anesthesia: The inception, evolution, and dissemination of the Managing Emergencies in Paediatric Anaesthesia (MEPA) course

Tobias Everett; Ralph MacKinnon; David de Beer; Matthew Taylor; Matthew D. Bould

2016 marked the 10‐year anniversary of the inception of the Managing Emergencies in Paediatric Anaesthesia (MEPA) course. This simulation‐based program was originally created to allow trainees in pediatric anesthesia to experience operating room emergencies which although infrequent, would be considered key competencies for any practicing anesthetist with responsibility for providing care to children. Since its original manifestation, the course has evolved in content, scope, and worldwide availability, such that it is now available at over 60 locations on five continents. The content has been modified for different learner groups and translated into several languages. This article describes the history, evolution, and dissemination of the MEPA course to share lessons learnt with educators considering the launch of similar initiatives in their field.


BMJ Open | 2015

Fitness for purpose study of the Field Assessment Conditioning Tool (FACT): a research protocol

Ralph MacKinnon; C. Kennedy; Catherine Doherty; Michael Shepherd; Joanne Cole; Terese Stenfors-Hayes

Introduction As part of a programme of research aiming to improve the outcomes of traumatically injured children, a multisource healthcare advocacy tool has been developed to allow trauma team members and hospital governance administrators to reflect and to act on complex trauma team-hospital systems interactions. We have termed this tool a Field Assessment Conditioning Tool (FACT). The FACT draws on quantitative data, including clinical care points, in addition to self-reflective qualitative data. The FACT is designed to provide feedback on this assessment data both horizontally, across fellow potential team members, and vertically, to the hospital/organisation governance structure, enabling process gap identification and allowing an agenda of improvements to be realised. The aim of the study described in this paper is to explore the perceived fitness for purpose of the FACT to provide an opportunity for healthcare advocacy by healthcare professionals caring for traumatically injured children. Methods and analysis The FACT will be implemented and studied in three district hospitals, each around a major trauma centre, in the UK, USA and New Zealand. Using a qualitative approach with standardised semi-structured interviews and thematic analysis, we will explore the following question: Is the FACT fit for purpose in terms of providing a framework to evaluate, reflect and act on each individual hospital's own performance (trauma team-hospital interactions) in terms of readiness to receive traumatically injured children? Ethics and dissemination Ethics opinion was sought for each participating research host organisation and deemed not required. The results will be disseminated to participating sites and networks, and published in high-impact journals.


BMJ Simulation and Technology Enhanced Learning | 2017

P99 The definition of quality and measurement of the resuscitation of traumatically injured children – a phenomenographic study

Ralph MacKinnon; Terese Stenfors-Hayes; E Pukk-Härenstam; A Mishra; C Kennedy; A Thiele-Schwarz

Trauma is the leading cause of death of children.1 The quality of resuscitation of a severely injured child is highly valued, both by teams and by individual team members. With no current definition of what a high-quality resuscitation of an injured child constitutes, nor a mechanism to capture this, both quality assurance and future improvement are significantly hampered. The purpose of this study was to describe the perceptions of trauma team members and administrators alike as to what constitutes a high-quality resuscitation of a severely injured child and how to measure this. We describe the perceptions of thirty-six UK trauma team members and governance administrators from three UK district general hospitals. A phenomenographic methodology as described by Marton was employed.2 This approach qualitatively maps the collective different ways in which people experience, conceptualise and understand various aspects of a phenomenon. In this study the phenomenon is the resuscitation of a severely traumatically injured child and how we measure it. This approach also highlights how the variation of perspectives is inter-related and provides an insight into how the architecture of the variation defines the phenomenon.2,3,4 The study followed the Consolidated criteria for reporting qualitative studies.5 Analysis of differing and synergistic perceptions identified six categories which define high-quality resuscitation and how we can measure this quality: system, team, process, individual, data and culture. A hierarchy of perceptions is evident, from a simple process-driven perspective to a complex architecture of quality and measurement that combines the six categories. Combining this with the shared and divergent views of both trauma team members and trauma governance administrators further defines what we currently understand in terms of the quality of resuscitation and the measurement thereof.
Our current inability to define quality hinders both quality assurance at present and quality improvement in the future. Our future ability to capture and disseminate system, team, process, individual, data and culture perspectives of the quality of a resuscitation could be a key advance in patient care. References 1. Krug E, Sharma G, Lozano R. The global burden of injuries. American Journal of Public Health 2000;90:523–6. 2. Marton F. Phenomenography-describing conceptions of the world around us. Instr Sci 1981;10:177–200. 3. Barnard A, McCosker H, Gerber R. Phenomenography: A qualitative research approach for exploring understanding in health care. Qual Health Res 1999;9:212–26. 4. Sjöström B, Dahlgren LO. Applying phenomenography in nursing research. J Adv Nurs 2002;40:339–45. 5. Consolidated criteria for reporting qualitative studies (COREQ): 32-item checklist. https://academic.oup.com/intqhc/article/19/6/349/1791966/Consolidated-criteria-for-reporting-qualitative [Last accessed 06 July 2017].


BMJ Simulation and Technology Enhanced Learning | 2016

65 Frequential, temporal, and spatial analysis of CPR training in healthcare professionals

Deborah Aitken; Todd P. Chang; Terese Stenfors-Hayes; Ralph MacKinnon

The aims are to understand the frequency and timing of CPR training and to explore differences in training between cohorts of healthcare professionals. This should establish how to target CPR training for maximum effectiveness and sustain high-quality CPR. A convenience sample of healthcare professionals was recruited from a UK children's hospital. The inclusion criterion was a current certification in basic life support. Participants were given unlimited access to infant CPR manikins and feedback devices over 6 months in their clinical areas.1 A leaderboard was updated every 4 weeks with scores. A randomly generated number was allocated to each participant to track their progress.2 The date and time of each training session were collected to investigate usage and popular training times. Demographics were recorded so that comparisons could be made between speciality areas and professions. Of the 90 participants, 75% returned to the manikins for training. 1000 uses of the manikins were recorded; the mean was 11 uses per participant (IQR 1.00–12.00), an increase of 2200% over once-yearly training. The median was 5.00 uses, a 10-fold increase. 80% of training sessions were completed between 8 am and 8 pm, and the most popular time was 1–3 pm. However, this did not extend to all participants; therefore, provision of training should be 24/7. Nurses trained the most; doctors and ODPs contributed approximately the same. This training has the advantage of not creating an additional time demand, and could be cheaper than traditional training as it negates the costs of a CPR trainer and backfill of staff. Therefore, this training could improve patient outcomes, for the same time commitment, likely at a lower cost.2,3 This team will continue to expand this alternative CPR training with the aim of improving health outcomes of cardiac arrest patients across the world. References 1. Laerdal Inc. CPR scoring explained 2013 [Online]. Available: http://www.laerdal.com/downloads/f2729/Scoring_CPR_November_v2.pdf [Accessed 10 September 2014]. 2. MacKinnon RJ, Stoeter R, Doherty C, Fullwood C, Cheng A, Nadkarni V, Stenfors-Hayes T, Chang T. Self-motivated learning with gamification improves infant CPR performance, a randomised controlled trial. BMJ Simulation and Technology Enhanced Learning 2015. 3. Wallace S, Abella B, Becker L. Quantifying the effect of cardiopulmonary resuscitation quality on cardiac arrest outcome: a systematic review and meta-analysis. Circ Cardiovasc Qual Outcomes 2013;6:148–156.
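The temporal analysis described above (collecting the date and time of each session and finding the most popular training hours) can be sketched with a simple hour-of-day tally. The timestamps below are invented for illustration; they are not study data.

```python
from collections import Counter
from datetime import datetime

# Hypothetical session timestamps, standing in for the logged date/time
# of each manikin training session in the study.
sessions = [
    datetime(2014, 3, 1, 13, 5),
    datetime(2014, 3, 1, 14, 40),
    datetime(2014, 3, 2, 9, 15),
    datetime(2014, 3, 3, 13, 55),
]

# Tally sessions by hour of day to identify popular training times.
by_hour = Counter(ts.hour for ts in sessions)
peak_hour, peak_count = by_hour.most_common(1)[0]
print(peak_hour, peak_count)  # 13 2
```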


BMJ Simulation and Technology Enhanced Learning | 2015

0162 Development of a field assessment conditioning tool (FACT) – an exploration of the role of healthcare advocacy

Ralph MacKinnon; Chris Kennedy; Rachael Fleming; Terese Stenfors-Hayes

Background We have designed an assessment instrument to empower healthcare advocacy by trauma team members and managers.1 The context of the assessment is the readiness of hospitals to receive traumatically injured children, as trauma is the leading cause of mortality in infants and children.2 The instrument is to be used by the healthcare professionals who constitute or manage trauma teams, and highlights a series of trauma team-hospital interactions and performances. The instrument enables the description, reflection, evaluation and eventual improvement of team-hospital interactions through health advocacy. Methodology We have run unannounced, fully immersive, in-situ/point-of-care paediatric trauma simulations in a major paediatric trauma centre, once a month, for over 24 months to date. We tested the instrument (the Field Assessment Conditioning Tool (FACT)) utilising high-fidelity patient simulators as surrogates for real children presenting to trauma bays. These simulations were followed by semi-structured interviews with both trauma team members and trauma governance board administrators. Results Four themes emerged from the interviews: support for a more holistic approach to evaluating and assessing both the organisation's and the trauma team's readiness to receive paediatric trauma; support for harnessing the internal expertise of all team members to evaluate the quality of trauma care; the FACT provides a language to describe and evaluate quality and potentially invoke changes; and perceived usefulness by all staff (team members and governance boards) will largely determine the extent to which the FACT will be used. Potential impact Assessing all aspects of medical performance is complex and requires a programme of assessment incorporating both psychometric measurement instruments and framework tools. This is especially important to support the role of health advocates.
Preliminary data from the FACT implementation and evaluation contribute to the conceptual validity of this approach to assessment. References 1. MacKinnon RJ, Kennedy C, Doherty C, Shepherd M, Cole J, Stenfors-Hayes T, on behalf of the INSPIRE Trauma Outreach. Fitness for purpose study of the Field Assessment Conditioning Tool (FACT): a research protocol. BMJ Open 2015;5:e006386. doi:10.1136/bmjopen-2014-006386. 2. Centers for Disease Control and Prevention. National Center for Health Statistics. VitalStats. http://www.cdc.gov/nchs/vitalstats.htm [Accessed April 20, 2013].


BMJ Simulation and Technology Enhanced Learning | 2015

0148 Self-motivated learning with gamification improves and maintains CPR performance, a randomised controlled trial

Ralph MacKinnon; Rachel Stoeter; Catherine Doherty; Catherine Fullwood; Terese Stenfors-Hayes; Adam Cheng; Vinay Nadkarni; Todd P. Chang

Background Effective paediatric basic life support improves survival and neurological outcomes.1 Current CPR training involves 4-yearly courses plus annual updates, yet skills degrade significantly by 3–6 months.2,3 To date, no method has been described to motivate frequent and persistent CPR practice. To achieve this, we explored the use of competition and peer pressure, as a gamification technique, to increase CPR usage and performance. Methodology We performed a prospective, randomised controlled trial to assess the effect of self-motivated gamification-based learning on CPR skills over time. 170 participants of all healthcare grades from theatres and PICU were randomised either to unlimited access to a workplace-based infant CPR manikin providing immediate feedback on CPR performance, or to a control group without such access. The manikin calculated a compression score based on rate, depth, hand position and release, and a ventilation score from rate and volume, developed in collaboration with the American Heart Association.4 Overall scores for each two-minute session were calculated by averaging the compression and ventilation scores. Participant scores were ranked anonymously on monthly updated leaderboards posted close to the manikin. Baseline and final 6-month scores were compared via paired Wilcoxon tests. For participants not motivated to continue for 6 months, their last recorded score was taken as the final score. Results 91 participants (53.5%) were in the intervention group and 79 (46.5%) in the control group, with no notable demographic differences between the two arms. The median (IQR) baseline scores for the control and intervention groups respectively were 47.0 (31.75–63.00) and 47.5 (33.50–63.00). The median 6-month scores for the control and intervention groups respectively were 47.0 (34.50–58.25) and 62.0 (42.00–81.75).
Conclusion CPR performance in the intervention group improved significantly over the 6-month period (p < 0.001) compared with the control group, suggesting that self-motivated, gamification-based CPR training can improve the quality of CPR over time. References 1. Abella BS, Sandbo N, Vassilatos P, et al. Chest compression rates during cardiopulmonary resuscitation are suboptimal: a prospective study during in-hospital cardiac arrest. Circulation 2005;1(111):428–34. 2. Na JU, Sim MS, Jo IJ, Song HG, Song KJ. Basic life support skill retention of medical interns and the effect of clinical experience of cardiopulmonary resuscitation. Emerg Med J 2012;29(10):833–7. 3. Hamilton R. Nurses' knowledge and skill retention following cardiopulmonary resuscitation training: a review of the literature. J Adv Nurs 2005;51(3):288–97. 4. Laerdal Inc. CPR scoring explained 2013. http://cdn.laerdal.com/downloads/f2729/Scoring_CPR_November_v2.pdf
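The primary comparison above summarises within-subject change from baseline as a median with an IQR. A minimal pure-Python sketch of that summary is shown below; the baseline and final scores are invented example data, not the study's results, and the IQR method (inclusive quartiles) is an assumption about one reasonable convention.

```python
import statistics

def change_summary(baseline, final):
    """Median and IQR of within-subject change from baseline to final score."""
    changes = [f - b for b, f in zip(baseline, final)]
    # Inclusive quartiles; other IQR conventions would give slightly different bounds.
    q1, _, q3 = statistics.quantiles(changes, n=4, method="inclusive")
    return statistics.median(changes), (q1, q3)

# Invented example scores for five participants
baseline = [47, 50, 33, 60, 45]
final = [62, 55, 70, 60, 90]
med, (q1, q3) = change_summary(baseline, final)
print(med, q1, q3)  # 15 5.0 37.0
```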

Collaboration


Dive into Ralph MacKinnon's collaborations.

Top Co-Authors

Todd P. Chang
Children's Hospital Los Angeles

Adam Cheng
Alberta Children's Hospital

Vinay Nadkarni
Children's Hospital of Philadelphia

Jordan Duval-Arnould
Johns Hopkins University School of Medicine

Catherine Doherty
Boston Children's Hospital

Elizabeth A. Hunt
Johns Hopkins University School of Medicine