Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Katie Anne Adamson is active.

Publication


Featured research published by Katie Anne Adamson.


Journal of Nursing Education | 2012

Assessing the Reliability, Validity, and Use of the Lasater Clinical Judgment Rubric: Three Approaches

Katie Anne Adamson; Paula Gubrud; Stephanie Sideras; Kathie Lasater

The purpose of this article is to summarize the methods and findings from three different approaches examining the reliability and validity of data from the Lasater Clinical Judgment Rubric (LCJR) using human patient simulation. The first study, by Adamson, assessed the interrater reliability of data produced with the LCJR using intraclass correlation, ICC(2,1); interrater reliability was calculated to be 0.889. The second study, by Gubrud-Howe, used the percent agreement strategy for assessing interrater reliability; results ranged from 92% to 96%. The third study, by Sideras, used level of agreement for reliability analyses; results ranged from 57% to 100%. Findings from each of these studies provided evidence supporting the validity of the LCJR for assessing clinical judgment during simulated patient care scenarios. This article provides extensive information about psychometrics and appropriate use of the LCJR and concludes with recommendations for further psychometric assessment and use of the instrument.
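The two interrater-reliability statistics named in this abstract can be computed directly from a subjects-by-raters score matrix. The sketch below is illustrative only (the function names and toy data are not from the study): `icc2_1` implements the standard two-way random-effects, single-measure ICC(2,1) formula, and `percent_agreement` implements the simpler agreement strategy.

```python
# Illustrative sketch (not from the article): the two interrater-
# reliability statistics mentioned above, computed from raw scores.

def icc2_1(ratings):
    """ICC(2,1): two-way random-effects, single-measure intraclass
    correlation. `ratings` is a list of rows, one per subject, each
    holding one score per rater."""
    n, k = len(ratings), len(ratings[0])
    grand = sum(sum(row) for row in ratings) / (n * k)
    row_means = [sum(row) / k for row in ratings]
    col_means = [sum(col) / n for col in zip(*ratings)]
    ss_total = sum((x - grand) ** 2 for row in ratings for x in row)
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)   # subjects
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)   # raters
    msr = ss_rows / (n - 1)
    msc = ss_cols / (k - 1)
    mse = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

def percent_agreement(rater_a, rater_b):
    """Share of items on which two raters assigned the same score."""
    same = sum(a == b for a, b in zip(rater_a, rater_b))
    return same / len(rater_a)
```

When every rater assigns identical scores, ICC(2,1) reaches its maximum of 1.0; the percent agreement strategy ignores how far disagreeing scores diverge, which is one reason the two approaches can yield different results on the same data.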


Nursing Education Perspectives | 2012

A method and resources for assessing the reliability of simulation evaluation instruments.

Katie Anne Adamson; Suzan Kardong-Edgren

Aim. This article describes a successfully piloted method for facilitating rapid psychometric assessments of three simulation evaluation instruments: the Lasater Clinical Judgment Rubric, the Seattle University Evaluation Tool, and the Creighton Simulation Evaluation Instrument™. Background. To provide valid and reliable evaluations of student performance in simulation activities, it is important to assess the psychometric properties of evaluation instruments. Method. This novel method incorporates the use of a database of validated, video-archived simulations depicting nursing students performing at varying levels of proficiency. A widely dispersed sample of 29 raters viewed and scored multiple scenarios over a six-week period. Analyses are described, including inter- and intrarater reliability, internal consistency, and validity assessments. Results and Conclusion. Descriptive and inferential statistics supported the validity of the leveled scenarios. The inter- and intrarater reliability and internal consistencies of data from the three tools are provided. The article provides information and resources for readers to assess their own simulation evaluation instruments using the described methods.


Nursing Education Perspectives | 2015

A Systematic Review of the Literature Related to the NLN/Jeffries Simulation Framework.

Katie Anne Adamson

AIM The purpose of this manuscript is to disseminate findings from a systematic review of the literature related to the NLN/Jeffries Simulation Framework. BACKGROUND This review was initiated by the National League for Nursing to illuminate what is currently known about best simulation practices, research to support these practices, and priorities for future research. It is part of a larger project aimed at further developing the NLN/Jeffries Simulation Framework. METHOD Searches using CINAHL, the journal Simulation in Healthcare, and reference lists from key documents yielded 1,533 relevant publications for the period January 2000 to September 2014. RESULTS The final review of the literature includes 153 studies. Three themes, along with key issues, gaps, and best practices supported by the literature, were identified. CONCLUSION This systematic review provides empirical support for the major components of the NLN/Jeffries Simulation Framework and contributes to its further development.


Journal of Nursing Education | 2011

Reliability and internal consistency findings from the C-SEI

Katie Anne Adamson; Mary E. Parsons; Kim Hawkins; Julie Manz; Martha Todd; Maribeth Hercinger

Human patient simulation (HPS) is increasingly being used as both a teaching and an evaluation strategy in nursing education. To meaningfully evaluate student performance in HPS activities, nurse educators must be equipped with valid and reliable instruments for measuring student performance. This study used a novel method, including leveled, video-archived simulation scenarios, a virtual classroom, and webinar and e-mail communication, to assess the reliability and internal consistency of data produced using the Creighton Simulation Evaluation Instrument. The interrater reliability, calculated using intraclass correlation (2,1) and a 95% confidence interval, was 0.952 (0.697, 0.993). The intrarater reliability, calculated using intraclass correlation (3,1) and a 95% confidence interval, was 0.883 (-0.001, 0.992), and the internal consistency, calculated using Cronbach's alpha, was α = 0.979. This article includes a sample of the instrument and provides valuable resources and reliability data for nurse educators and researchers interested in measuring student performance in HPS activities.


Nurse Educator | 2015

Effects of an Experiential Learning Simulation Design on Clinical Nursing Judgment Development.

Joyce Victor Chmil; Melanie T. Turk; Katie Anne Adamson; Charles Larew

Simulation design should be theory based and its effect on outcomes evaluated. This study (1) applied a model of experiential learning to design a simulation experience, (2) examined how this design affected clinical nursing judgment development, and (3) described the relationship between clinical nursing judgment development and student performance when using the experiential learning design. Findings suggest that using an experiential learning simulation design results in more highly developed nursing judgment and competency in simulation performance.


Nursing Education Perspectives | 2016

Rater Bias in Simulation Performance Assessment: Examining the Effect of Participant Race/Ethnicity

Katie Anne Adamson

AIM The purpose of this study was to determine whether scores assigned to simulation participants using the Lasater Clinical Judgment Rubric (LCJR) were influenced by participants’ racial/ethnic backgrounds. BACKGROUND Scores on the LCJR demonstrate strong reliability and validity. However, little evidence exists about whether scores are influenced by factors that are not relevant to the demonstration of clinical judgment, such as simulation participants’ racial/ethnic backgrounds. METHOD Using video-recorded simulations portraying male and female nursing students of different racial/ethnic backgrounds, LCJR scores assigned by 68 raters were compared to determine whether there were significant differences among them. RESULTS This study provides validity evidence indicating LCJR scores were not significantly affected by the simulation participants’ racial/ethnic backgrounds. CONCLUSION Findings support the use of the LCJR for providing valid data about student performance in simulation activities and provide a catalyst for further examination of simulation evaluation practices.


Clinical Simulation in Nursing | 2013

An Updated Review of Published Simulation Evaluation Instruments

Katie Anne Adamson; Suzan Kardong-Edgren; Janet Willhaus


Clinical Simulation in Nursing | 2013

Outcome-Based Evaluation Tool to Evaluate Student Performance in High-Fidelity Simulation

Anita Weismantel Mikasa; Terry F. Cicero; Katie Anne Adamson


Clinical Simulation in Nursing | 2013

Reliability: Measuring Internal Consistency Using Cronbach's α

Katie Anne Adamson; Susan Prion


Nursing Education Perspectives | 2015

NLN Jeffries Simulation Theory: Brief Narrative Description

Pamela R. Jeffries; Beth L. Rodgers; Katie Anne Adamson

Collaboration


Dive into Katie Anne Adamson's collaborations.

Top Co-Authors

Susan Prion | University of San Francisco

Janet Willhaus | Washington State University Spokane

Beth L. Rodgers | University of Wisconsin–Milwaukee