Publications


Featured research published by Elaine F. Dannefer.


Medical Education | 2005

Peer assessment of professional competence.

Elaine F. Dannefer; Lindsey C. Henson; S. Beth Bierer; Tana A. Grady-Weliky; Sean Meldrum; Anne C. Nofziger; Craig R. Barclay; Ronald M. Epstein

Background: Current assessment formats for medical students reliably test core knowledge and basic skills. Methods for assessing other important domains of competence, such as interpersonal skills, humanism and teamwork skills, are less well developed. This study describes the development, implementation and results of peer assessment as a measure of professional competence of medical students to be used for formative purposes.


Teaching and Learning in Medicine | 2004

Comprehensive Assessment of Professional Competence: The Rochester Experiment

Ronald M. Epstein; Elaine F. Dannefer; Anne C. Nofziger; John T. Hansen; Stephen Schultz; Nicholas Jospe; Laura W. Connard; Sean Meldrum; Lindsey C. Henson

Background: A required 2-week comprehensive assessment (CA) for 2nd-year medical students that integrates basic science, clinical skills, information management, and professionalism was implemented. Description: The CA links standardized patients (SPs) with computer-based exercises, a teamwork exercise, and peer assessments; and culminates in student-generated learning plans. Evaluation: Scores assigned by SPs showed acceptable interrater reliability. Factor analyses defined meaningful subscales of the peer assessment and communication rating scales. Ratings of communication skills were correlated with information gathering, patient counseling, and peer assessments; these, in turn, were strongly correlated with the written exercises. Students found the CA fair, with some variability in opinion of the peer and written exercises. Useful learning plans and positive curricular changes were undertaken in response to the CA results. Conclusion: A CA that integrates multiple domains of professional competence is feasible, useful to students, and fosters reflection and change. Preliminary data suggest that this format is reliable and valid.


Medical Teacher | 2012

Student perspectives on assessment: Experience in a competency-based portfolio system

Faysal Altahawi; Bryan Sisk; Stacey L. Poloskey; Caitlin W. Hicks; Elaine F. Dannefer

Despite considerable evidence recognizing the importance of learners’ perceptions of the assessment process, there is little literature depicting the participants’ experience. We aim to capture these perceptions in order to gain insights into the strengths and weaknesses of a competency-based assessment system. Cleveland Clinic Lerner College of Medicine has implemented a learner-centered portfolio assessment system built around competency standards and continuous formative feedback. Promotion of students is based upon their feedback-supported portfolio essays, but feedback itself is individualized and formative in nature under the umbrella of the competencies. Importantly, there are no grades or ranking awarded for the competencies or at promotion. Four students share personal reflections of their experience to illuminate themes from the subjective experience of the learner and to understand how to align the learners’ interests with the requirements of an assessment program.


Journal of Palliative Medicine | 2003

An Integrated Biopsychosocial Approach to Palliative Care Training of Medical Students

Timothy E. Quill; Elaine F. Dannefer; Kathryn M. Markakis; Ronald M. Epstein; Jane Greenlaw; Kathy McGrail; Maria Milella

In 1996 the University of Rochester School of Medicine, Rochester, New York, began a major curricular reform called the Double Helix Curriculum, integrating basic science and clinical training over 4 years of medical school. This transition provided a unique opportunity to develop and implement a fully integrated, comprehensive palliative care curriculum. In this three-part paper, we will describe: (1) our process of finding curricular time, setting priorities, and deciding on pedagogical strategies; (2) an overview of how palliative care teaching was integrated into the general curriculum, including examples of different teaching opportunities; and (3) our evaluation process, and some ongoing challenges. Because palliative care is a core element in the care of all seriously ill patients, we chose to integrate our teaching into multiple courses over 4 years of undergraduate medical education, and not isolate it in a particular course. We view this report not as an ideal curriculum to be emulated in its entirety but as a work in progress that may be somewhat unique to our institution. We intend to illustrate a process of incremental curriculum building, and to generate some fresh teaching ideas from which palliative care educators can select depending on their own curricular needs and objectives.


Medical Teacher | 2013

Beyond assessment of learning toward assessment for learning: educating tomorrow's physicians.

Elaine F. Dannefer

Beyond its importance in informing high-stakes decisions, the assessment process can also be designed to foster learning. To be effective, this requires developing a program in which curricular experiences, assessment practices and support activities are aligned to provide an educational culture that encourages self-regulated learning. We describe a program (based at Cleveland Clinic Lerner College of Medicine) in which explicit performance standards align these components and provide a roadmap for students to manage their learning. Information-rich assessment data, structured opportunities for reflection, and facilitated self-assessment using a portfolio approach are designed to support development of habits of reflective practice. Promotion depends on the achievement of competencies rather than grades. Preliminary evidence suggests that the program directs students towards learning rather than towards achieving a grade for its own sake.


Medical Teacher | 2008

Methods to assess students' acquisition, application and integration of basic science knowledge in an innovative competency-based curriculum.

S. Beth Bierer; Elaine F. Dannefer; Christine A. Taylor; Phillip Hall; Alan L. Hull

Background: The Cleveland Clinic Lerner College of Medicine was designed to encourage medical students to pursue careers as physician investigators. Our faculty decided that assessment should enhance learning and adopted only formative assessments to document student performance in relation to nine broad-based competencies. No grades are used to judge student performance throughout the 5-year program. Instead, assessments are competency-based, relate directly to performance standards, and are stored in e-Portfolios to track progress and document student achievement. The class size is limited to 32 students a year. Aims: Schools with competency-based curricula must provide students with formative feedback to identify performance gaps and monitor progress. We describe a systematic approach to assess medical knowledge using essay-type questions (CAPPs) and multiple choice questions (SAQs) to provide medical students with weekly, formative feedback about their abilities to acquire, apply and integrate basic and clinical science concepts. Method: Processes for developing performance standards, creating assessment items, training faculty, reporting student performance and monitoring outcomes are described. A case study of a Year 1 course is presented with specific examples of CAPPs and SAQs to illustrate how formative assessment data are interpreted and reported in students’ e-Portfolios. Results: Preliminary evidence suggests that CAPPs and SAQs have a positive impact on students’ education, a justifiable cost in light of obtained benefits and growing acceptance among stakeholders. Two student cohorts performed significantly above the population mean on USMLE Step 1, which suggests that these assessment methods have not disadvantaged students. More evidence is needed to assess the reliability and validity of these tools for formative purposes. 
Conclusions: Using assessment data for formative purposes may encourage application and integration of knowledge, help students identify performance gaps, foster student development of learning plans and promote student responsibility for learning. The discussion offers applications for institutions with larger classes to consider.


Medical Teacher | 2012

Evidence within a portfolio-based assessment program: What do medical students select to document their performance?

Elaine F. Dannefer; S. Beth Bierer; Sophia P. Gladding

Background: Decisions about performance in programs of assessment that provide an array of assessment evidence require judgments about the quality of different pieces of assessment data to determine which combination of data points best represents a trainee's overall performance. Aim: In this article, we examine the nature of evidence selected by first-year medical students to include in a portfolio used to make promotion decisions. Methods: We reviewed portfolios to examine the number, type, and source of assessments selected by students (n = 32) to document their performance in seven competencies. The quality of assessment data selected for each competency was rated by promotion committee members (n = 14). Results: Findings indicate that students cited multiple types and sources of available assessments. The promotion committee rated evidence quality highest for competencies where the program provided sufficient evidence for students to cite a broad range of assessments. When assessments were not provided by the program, students cited self-generated evidence. Conclusion: We found that when student-constructed portfolios are part of an overall assessment system, students generally select evidence in proportion to the number and types of assessments available.


Academic Medicine | 1998

Communication and the process of educational change.

Elaine F. Dannefer; Mary Anne Johnston; Sharon K. Krackov

In this chapter, the authors describe the role of communication in the process of curricular reform at the eight schools that participated in The Robert Wood Johnson Foundation's "Preparing Physicians for the Future: Program in Medical Education." The collective experience of these eight schools suggests that despite its general neglect in the discourse on educational innovation, good communication is a decisive element of any successful reform initiative. The authors focus this chapter on effective communication patterns for supporting educational reform. First, the authors discuss a four-stage model of change (recognizing the need for change, and planning, implementing, and institutionalizing change) and describe the role of communication in each stage. They outline the communication strategies needed to promote a sense of ownership among all participants; structures and mechanisms for supporting positive communication; and common lessons learned by all schools about successful communication.


Medical Teacher | 2012

Towards a systems approach to assessment

C.P.M. van der Vleuten; Elaine F. Dannefer

In her plenary at the AMEE conference 2009 in Malaga, Professor Janet Grant labeled the assessment of competence field "the crown jewels" of medical education. Indeed, assessment has a very rich history in our field. In a recent analysis of the medical education literature from 1988 to 2010, assessment was found to be the most popular topic, with approximately 26% of the total number of papers dedicated to assessment (Rotgans 2011). Over these years, we see four major developments. One is the plethora of methods that have been proposed and investigated. Within a few decades, we have in essence been able to 'climb' Miller's (1990) pyramid with numerous assessment instruments. One might say that in the arena of standardized assessment technology (the first three layers of the pyramid), we could speak of an established technology which is heavily used in assessment practices in our training programs across the whole spectrum of the training continuum in medical education. Currently, we are in the midst of developing non-standardized methods, assessing performance in the authentic setting, either in the educational environment or in the professional workplace. Associated with this authentic assessment is the prominence of complex competencies, sometimes called domain-independent skills, generic skills, or soft skills, such as professionalism, communication, and collaboration. These skills are increasingly considered essential for optimal professional functioning. Such complex competencies can hardly be assessed with standardized assessment technology. A second development, related to the first, is the well-developed methodology around assessment. In the areas of item and test construction, stimulus and response formats, scoring, item and test analysis, standard setting, and validation strategies, a lot of 'technology' is available.
A lot of these methods and associated technologies have impacted training programs and are even becoming part of accreditation requirements. A third development is the notion of assessment for learning as opposed to assessment of learning. In the latter notion, certification, decision making, and promotion are central concepts, basically to ascertain if a person assessed has acquired (minimum) mastery of a certain domain. In assessment for learning, the learning function of assessment is emphasized. Feedback and learning support for students to promote growth and excellence development become central notions. We think we are still at the start of a whole new area of assessment here. Finally, the fourth development is the move beyond the individual assessment method toward a more system-oriented approach, also called programmatic assessment. This is the view that an assessment program is a deliberate arrangement of a set of assessment activities. Individual methods do not stand alone in a programmatic approach, but may have their own meaning in relation to the total program. In a well-arranged assessment program, one tries to apply as many insights from all previous developments into a whole, reconciling local traditions, regulations, educational philosophies, and resources. We think the assessment field requires urgent progression in the development of the systems approach. Although the assessment literature is filled with information on how to design individual assessment instruments, very little information is found on assessment programs. Except for the work of Baartman et al. (2007) and the initial work of Dijkstra et al. (2010), very little can be found on designing assessment programs and virtually nothing about their implementations and their functioning. With the following four papers, we wish to make some first steps in this direction. The first paper by van der Vleuten and coworkers is a theoretical one. 
It describes a model for an assessment program that tries to optimize both assessment for learning and of learning. The model is described generically in terms of learning activities, assessment activities, and learner support activities and has periodic progression evaluation moments. Strategies are described on how the combination of these activities makes the program as a whole fit for purpose. Fit for purpose in terms of being educationally meaningful, while at the same time allowing robust decision making on learner progress. The model is also generic for the setting, but assumes a learner-centered program. The presented model in the first paper is abstract and rather idealistic. Two questions that probably come to mind when confronted with this model is how might it function in actual practice and how is it then evaluated. These questions are addressed in the following three papers. Dannefer and coworkers provide a description of the assessment program of the Cleveland Clinic Lerner College of Medicine. It very much functions in conformity with the theoretical model as in the first paper. It relies heavily on the gathering of feedback from all kinds of assessment sources. Students use this feedback to direct their learning and select evidence used for progress decisions. The paper also evaluates the program by looking at the strength of evidence that learners provide for their summative performance review. In a second paper,


Perspectives on medical education | 2016

Factors influencing students’ receptivity to formative feedback emerging from different assessment cultures

Christopher Harrison; Karen D. Könings; Elaine F. Dannefer; Valerie Wass; Cees van der Vleuten

Introduction: Feedback after assessment is essential to support the development of optimal performance, but often fails to reach its potential. Although different assessment cultures have been proposed, the impact of these cultures on students' receptivity to feedback is unclear. This study aimed to explore factors which aid or hinder receptivity to feedback. Methods: Using a constructivist grounded theory approach, the authors conducted six focus groups in three medical schools, in three separate countries, with different institutional approaches to assessment, ranging from a traditional summative assessment structure to a fully implemented programmatic assessment system. The authors analyzed data iteratively, then identified and clarified key themes. Results: Helpful and counterproductive elements were identified within each school's assessment system. Four principal themes emerged. Receptivity to feedback was enhanced by assessment cultures which promoted students' agency, by the provision of authentic and relevant assessment, and by appropriate scaffolding to aid the interpretation of feedback. Provision of grades and comparative ranking provided a helpful external reference but appeared to hinder the promotion of excellence. Conclusions: This study has identified important factors emerging from different assessment cultures which, if addressed by programme designers, could enhance the learning potential of feedback following assessments. Students should be enabled to have greater control over assessment and feedback processes, which should be as authentic as possible. Effective long-term mentoring facilitates this process. The trend of curriculum change towards constructivism should now be mirrored in the assessment processes in order to enhance receptivity to feedback.

Collaboration


Elaine F. Dannefer's most frequent collaborators are listed below.

Top Co-Authors

S. Beth Bierer (Cleveland Clinic Lerner College of Medicine)
Lindsey C. Henson (Cleveland Clinic Lerner College of Medicine)
Christine A. Taylor (Cleveland Clinic Lerner College of Medicine)
Alan L. Hull (Cleveland Clinic Lerner College of Medicine)
John E. Tetzlaff (Cleveland Clinic Lerner College of Medicine)