Publication


Featured research published by Anna T. Cianciolo.


Journal of Graduate Medical Education | 2013

Behavioral Specification of the Entrustment Process

Anna T. Cianciolo; Jason A. Kegg

Both authors are at Southern Illinois University School of Medicine. Anna T. Cianciolo, PhD, is an Assistant Professor in the Department of Medical Education; and Jason A. Kegg, MD, FAAEM, is an Assistant Professor and Director of Simulation-Based Education in the Division of Emergency Medicine. Funding: The authors report no external funding source for this study.


Medical Teacher | 2016

Competencies, milestones, and EPAs – Are those who ignore the past condemned to repeat it?

Debra L. Klamen; Reed G. Williams; Nicole K. Roberts; Anna T. Cianciolo

Background: The idea of competency-based education sounds great on paper. Who wouldn’t argue for a standardized set of performance-based assessments to assure competency in graduating students and residents? Even so, conceptual concerns have already been raised about this new system and there is yet no evidence to refute their veracity. Aims: We argue that practical concerns deserve equal consideration, and present evidence strongly suggesting these concerns should be taken seriously. Method: Specifically, we share two historical examples that illustrate what happened in two disparate contexts (K-12 education and the Department of Defense [DOD]) when competency (or outcomes-based) assessment frameworks were implemented. We then examine how observation and assessment of clinical performance stands currently in medical schools and residencies, since these methodologies will be challenged to a greater degree by expansive lists of competencies and milestones. Results/Conclusions: We conclude with suggestions as to a way forward, because clearly the assessment of competency and the ability to guarantee that graduates are ready for medical careers is of utmost importance. Hopefully the headlong rush to competencies, milestones, and core entrustable professional activities can be tempered before even more time, effort, frustration and resources are invested in an endeavor which history suggests will collapse under its own weight.


Academic Medicine | 2014

Variations in senior medical student diagnostic justification ability.

Reed G. Williams; Debra L. Klamen; Stephen Markwell; Anna T. Cianciolo; Jerry A. Colliver; Steven J. Verhulst

Purpose: To determine the diagnostic justification proficiency of senior medical students across a broad spectrum of cases with common chief complaints and diagnoses. Method: The authors gathered diagnostic justification exercise data from the Senior Clinical Comprehensive Examination taken by Southern Illinois University School of Medicine’s students from the classes of 2011 (n = 67), 2012 (n = 66), and 2013 (n = 79). After interviewing and examining standardized patients, students listed their key findings and diagnostic possibilities considered, and provided a written explanation of how they used key findings to move from their initial differential diagnoses to their final diagnosis. Two physician judges blindly rated responses. Results: Student diagnostic justification performance was highly variable from case to case and often rated below expectations. Of the students in the classes of 2011, 2012, and 2013, 57% (38/67), 23% (15/66), and 33% (26/79) were judged borderline or poor on diagnostic justification performance for more than 50% of the cases on the examination. Conclusions: Student diagnostic justification performance was inconsistent across the range of cases, common chief complaints, and underlying diagnoses used in this study. More than 20% of students exhibited borderline or poor diagnostic justification performance on more than 50% of the cases. If these results are confirmed in other medical schools, attention needs to be directed to investigating new curricular methods that ensure deliberate practice of these competencies across the spectrum of common chief complaints and diagnoses and do not depend on the available mix of patients.


Teaching and Learning in Medicine | 2013

Theory Development and Application in Medical Education

Anna T. Cianciolo; Kevin W. Eva; Jerry A. Colliver

The role and status of theory is by no means a new topic in medical education. Yet summarizing where we have been and where we are going with respect to theory development and application is difficult because our community has not yet fully elucidated what constitutes medical education theory. In this article, we explore the idea of conceptualizing theory as an effect on scholarly dialogue among medical educators. We describe theory-enabled conversation as argumentation, which frames inquiry, permits the evaluation of evidence, and enables the acquisition of community understanding that has utility beyond investigators’ local circumstances. We present ideas for assessing argumentation quality and suggest approaches to increasing the frequency and quality of argumentation in the exchange among diverse medical education scholars.


Medical Education | 2013

Biomedical knowledge, clinical cognition and diagnostic justification: a structural equation model

Anna T. Cianciolo; Reed G. Williams; Debra L. Klamen; Nicole K. Roberts

Context: The process whereby medical students employ integrated analytic and non‐analytic diagnostic strategies is not fully understood. Analysing academic performance data could provide a perspective complementary to that of laboratory experiments when investigating the nature of diagnostic strategy. This study examined the performance data of medical students in an integrated curriculum to determine the relative contributions of biomedical knowledge and clinical pattern recognition to diagnostic strategy.


Teaching and Learning in Medicine | 2015

Conceptualizing Interprofessional Teams as Multi-Team Systems—Implications for Assessment and Training

Courtney West; Karen Landry; Anna Graham; Lori Graham; Anna T. Cianciolo; Adina Kalet; Michael A. Rosen; Deborah Witt Sherman

SGEA 2015 CONFERENCE ABSTRACT (EDITED): Evaluating Interprofessional Teamwork During a Large-Scale Simulation. Courtney West, Karen Landry, Anna Graham, and Lori Graham.

Construct: This study investigated the multidimensional measurement of interprofessional (IPE) teamwork as part of large-scale simulation training.

Background: Healthcare team function has a direct impact on patient safety and quality of care. However, IPE team training has not been the norm. Recognizing the importance of developing team-based collaborative care, our College of Nursing implemented an IPE simulation activity called Disaster Day and invited other professions to participate. The exercise consists of two sessions: one in the morning and another in the afternoon. The disaster scenario is announced just prior to each session, which consists of team building, a 90-minute simulation, and debriefing. Approximately 300 Nursing, Medicine, Pharmacy, Emergency Medical Technician, and Radiology students and over 500 standardized and volunteer patients participated in the Disaster Day event. To improve student learning outcomes, we created 3 competency-based instruments to evaluate collaborative practice in multidimensional fashion during this exercise.

Approach: A 20-item IPE Team Observation Instrument, designed to assess interprofessional teams' attainment of Interprofessional Education Collaborative (IPEC) competencies, was completed by 20 faculty and staff observing the Disaster Day simulation. One hundred sixty-six standardized patients completed a 10-item Standardized Patient IPE Team Evaluation Instrument developed from the IPEC competencies and adapted items from the 2014 Henry et al. PIVOT Questionnaire. This instrument assessed the standardized or volunteer patients' perception of the teams' collaborative performance. A 29-item IPE Teams' Perception of Collaborative Care Questionnaire, also created from the IPEC competencies and divided into 5 categories of Values/Ethics, Roles and Responsibilities, Communication, Teamwork, and Self-Evaluation, was completed by 188 students, including 99 from Nursing, 43 from Medicine, 6 from Pharmacy, and 40 participants who belonged to more than one component, were students at another institution, or did not indicate their institution. The team instrument was designed to assess each team member's perception of how well they and their team met the competencies. Five of the items on the team perceptions questionnaire mirrored items on the standardized patient evaluation: demonstrated leadership practices that led to effective teamwork, discussed care and decisions about that care with the patient, described roles and responsibilities clearly, worked well together to coordinate care, and good/effective communication.

Results: Internal consistency reliability of the IPE Team Observation Instrument was 0.80. For 18 of the 20 items, more than 50% of observers indicated the item was demonstrated; of those, 6 items were observed by 50% to 75% of the observers, and the remaining 12 were observed by more than 80% of the observers. Internal consistency reliability of the IPE Teams' Perception of Collaborative Care Instrument was 0.95. The mean response score, on a scale of 1 (strongly disagree) to 4 (strongly agree), was calculated for each section of the instrument; the overall mean score was 3.57 (SD = .11). Internal consistency reliability of the Standardized Patient IPE Team Evaluation Instrument was 0.87. The overall mean score was 3.28 (SD = .17). The ratings for the 5 items shared by the standardized patient and team perception instruments were compared using independent-samples t tests. Statistically significant differences (p < .05) were present in each case, with the students rating themselves higher on average than the standardized patients did (mean differences between 0.2 and 0.6 on a scale of 1–4).

Conclusions: Multidimensional, competency-based instruments appear to provide a robust view of IPE teamwork; however, challenges remain. Due to the large scale of the simulation exercise, observation-based assessment did not function as well as self- and standardized patient-based assessment. To promote greater variation in observer assessments during future Disaster Day simulations, we plan to adjust the rating scale from “not observed,” “observed,” and “not applicable” to a 4-point scale and reexamine interrater reliability.
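
The reliability and comparison statistics summarized in this abstract (internal consistency coefficients, section means, and independent-samples t tests on the shared items) are standard computations. The sketch below illustrates them with made-up numbers only; it is not the authors' analysis code, the data are synthetic placeholders, and the helper cronbach_alpha as well as the sample sizes and means are assumptions chosen to mirror the scale of the study.

# Illustrative sketch only: synthetic ratings, standard textbook formulas.
# Not the study's actual data or analysis code.
import numpy as np
from scipy import stats

def cronbach_alpha(item_scores):
    """Internal consistency for a raters-by-items score matrix."""
    item_scores = np.asarray(item_scores, dtype=float)
    n_items = item_scores.shape[1]
    item_vars = item_scores.var(axis=0, ddof=1)
    total_var = item_scores.sum(axis=1).var(ddof=1)
    return (n_items / (n_items - 1)) * (1.0 - item_vars.sum() / total_var)

rng = np.random.default_rng(0)

# 20 hypothetical observers scoring a 10-item instrument on a 1-4 scale;
# a shared rater tendency makes the items correlate, so alpha is meaningful.
tendency = rng.normal(3.0, 0.5, size=(20, 1))
observer_scores = np.clip(np.rint(tendency + rng.normal(0.0, 0.4, size=(20, 10))), 1, 4)
print("Cronbach's alpha:", round(cronbach_alpha(observer_scores), 2))

# Hypothetical ratings on one shared item: standardized patients vs. student teams.
sp_ratings = rng.normal(3.3, 0.4, size=166)    # standardized-patient ratings
team_ratings = rng.normal(3.6, 0.4, size=188)  # students' self-ratings
t_stat, p_value = stats.ttest_ind(team_ratings, sp_ratings)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}, "
      f"mean difference = {team_ratings.mean() - sp_ratings.mean():.2f}")

With real rating data in place of the synthetic arrays, the same two calls would produce the kind of reliability and t-test figures the abstract reports.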


Teaching and Learning in Medicine | 2016

What's in a Transition? An Integrative Perspective on Transitions in Medical Education

Jorie M. Colbert-Getz; Steven Baumann; Kerri Shaffer; Sara Lamb; Janet E. Lindsley; Robert Rainey; Kristin Randall; Danielle Roussel; Adam Stevenson; Anna T. Cianciolo; Tyler Maines; Bridget O'Brien; Michael Westerman

This Conversation Starters article presents a selected research abstract from the 2016 Association of American Medical Colleges Western Region Group on Educational Affairs annual spring meeting. The abstract is paired with the integrative commentary of three experts who shared their thoughts stimulated by the needs assessment study. These thoughts explore how the general theoretical mechanisms of transition may be integrated with cognitive load theory in order to design interventions and environments that foster transition.


Medical Education | 2016

Observational analysis of near‐peer and faculty tutoring in problem‐based learning groups

Anna T. Cianciolo; Bryan Kidd; Sean Murray

Near‐peer and faculty staff tutors may facilitate problem‐based learning (PBL) through different means. Near‐peer tutors are thought to compensate for their lack of subject matter expertise with greater adeptness at group facilitation and a better understanding of their learners. However, theoretical explanations of tutor effectiveness have been developed largely from recollections of tutor practices gathered through student evaluation surveys, focus groups and interviews. A closer look at what happens during PBL sessions tutored by near‐peers and faculty members seems warranted to augment theory from a grounded perspective.


Teaching and Learning in Medicine | 2017

Teachers as Learners: Developing Professionalism Feedback Skills via Observed Structured Teaching Encounters

Constance Tucker; Beth Choby; Andrew Moore; Robert Scott Parker; Benjamin R. Zambetti; Sarah Naids; Jillian Scott; Jennifer Loome; Sierra Gaffney; Anna T. Cianciolo; Leslie A. Hoffman; Jaden R. Kohn; Patricia O'Sullivan; Robert L. Trowbridge

This Conversation Starters article presents a selected research abstract from the 2017 Association of American Medical Colleges Southern Region Group on Educational Affairs annual spring meeting. The abstract is paired with the integrative commentary of four experts who shared their thoughts stimulated by the study. These thoughts explore the value of the Observed Structured Teaching Encounter in providing structured opportunities for medical students to engage with the complexities of providing peer feedback on professionalism.


Teaching and Learning in Medicine | 2015

Developing Professionalism via Multisource Feedback in Team-Based Learning.

Amanda R. Emke; Steven Cheng; Carolyn Dufault; Anna T. Cianciolo; David W. Musick; Boyd F. Richards; Claudio Violato

CGEA 2015 CONFERENCE ABSTRACT (EDITED): A Novel Approach to Assessing Professionalism in Preclinical Medical Students Using Paired Self- and Peer Evaluations. Amanda R. Emke, Steven Cheng, and Carolyn Dufault.

Construct: This study sought to assess the professionalism of 2nd-year medical students in the context of team-based learning.

Background: Professionalism is an important attribute for physicians and a core competency throughout medical education. Preclinical training often focuses on individual knowledge acquisition, with students working only indirectly with faculty assessors. As such, the assessment of professionalism in preclinical training continues to present challenges. We propose a novel approach to preclinical assessment of medical student professionalism to address these challenges.

Approach: Second-year medical students completed self- and peer assessments of professionalism in two courses (Pediatrics and Renal/Genitourinary Diseases) following a series of team-based learning exercises. Assessments were composed of nearly identical 9-point rating scales. Correlational analysis and linear regression were used to examine the associations between self- and peer assessments and the effects of predictor variables. Four subgroups were formed based on deviation from the median ratings, and logistic regression was used to assess stability of subgroup membership over time. A missing data analysis was conducted to examine differences between average peer-assessment scores as a function of selective nonparticipation.

Results: There was a significant positive correlation (r = .62, p < .0001) between self-assessments completed alone and those completed at the time of peer assessment. There was also a significant positive correlation between average peer assessment and self-assessment alone (r = .19, p < .0002) and self-assessment at the time of peer assessment (r = .27, p < .0001). Logistic regression revealed that subgroup membership was stable across measurement at two time points (T1 and T2) for all groups, except for members of the high self-assessment/low peer-assessment group at T1, who were significantly more likely to move to a new group at T2, χ2(3, N = 129) = 7.80, p < .05. Linear regression revealed that self-assessment alone and course were significant predictors of self-assessment at the time of peer assessment (F(self alone) = 144.74, p < .01, and F(course) = 4.70, p < .05), whereas average peer rating, stage (T1, T2), and academic year (13–14, 14–15) were not. Linear regression also revealed that students who completed both self-assessments had significantly higher average peer-assessment ratings (average peer rating in students with both self-assessments = 8.42, no self-assessments = 8.10, self at peer = 8.37, self alone = 8.28) compared to students who completed one or no self-assessments (F = 5.34, p < .01).

Conclusions: When used as a professionalism assessment within team-based learning, stand-alone and simultaneous peer and self-assessments are highly correlated within individuals across different courses. However, although self-assessment alone is a significant predictor of self-assessment made at the time of assessing one's peers, average peer assessment does not predict self-assessment. To explore this lack of predictive power, we classified students into four subgroups based on relative deviation from median peer- and self-assessment scores. Group membership was found to be stable for all groups except for those initially sorted into the high self-assessment/low peer-assessment subgroup. Members of this subgroup tended to move into the low self-assessment/low peer-assessment group at T2, suggesting they became more accurate at self-assessing over time. A small group of individuals remained in the group that consistently rated themselves highly while their peers rated them poorly. Future studies will track these students to see whether similar deviations from accurate professional self-assessment persist into the clinical years. In addition, given that students who failed to perform self-assessments had significantly lower peer-assessment scores than their counterparts who completed self-assessments in this study, these students may also be at risk for similar professionalism concerns in the clinical years; follow-up studies will examine this possibility.
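
As a rough illustration of the correlational and regression approach this abstract describes, the sketch below pairs synthetic 9-point self- and peer ratings and computes Pearson correlations and a simple least-squares regression. It is not the study's data or code; the variable names (self_alone, self_at_peer, peer_avg) and all generated values are hypothetical placeholders.

# Illustrative sketch only: synthetic 9-point professionalism ratings.
# Not the study's data or analysis code.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_students = 129

# Hypothetical stand-alone self-assessments on a 9-point scale.
self_alone = np.clip(rng.normal(8.0, 0.6, n_students), 1, 9)
# Self-assessment made at the time of peer assessment, correlated with the above.
self_at_peer = np.clip(self_alone + rng.normal(0.0, 0.5, n_students), 1, 9)
# Average peer assessment, only weakly related to self-assessment.
peer_avg = np.clip(rng.normal(8.3, 0.3, n_students) + 0.1 * (self_alone - 8.0), 1, 9)

# Correlations analogous in form to those reported in the abstract.
r1, p1 = stats.pearsonr(self_alone, self_at_peer)
r2, p2 = stats.pearsonr(peer_avg, self_alone)
print(f"self_alone vs self_at_peer: r = {r1:.2f}, p = {p1:.4g}")
print(f"peer_avg vs self_alone:     r = {r2:.2f}, p = {p2:.4g}")

# Simple least-squares regression of self_at_peer on self_alone.
fit = stats.linregress(self_alone, self_at_peer)
print(f"self_at_peer = {fit.intercept:.2f} + {fit.slope:.2f} * self_alone "
      f"(p = {fit.pvalue:.4g})")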

Collaboration


Dive into Anna T. Cianciolo's collaborations.

Top Co-Authors


Debra L. Klamen

Southern Illinois University School of Medicine


Jerry A. Colliver

Southern Illinois University School of Medicine


Reed G. Williams

Southern Illinois University School of Medicine


Bryan Kidd

Southern Illinois University School of Medicine


Nicole K. Roberts

Southern Illinois University School of Medicine


Steven J. Verhulst

Southern Illinois University School of Medicine


Karen M. Evans

Rochester Institute of Technology
