
Publication


Featured research published by Rebecca S. Miller.


Journal of Graduate Medical Education | 2010

Residency Programs' Evaluations of the Competencies: Data Provided to the ACGME About Types of Assessments Used by Programs

Kathleen D. Holt; Rebecca S. Miller; Thomas J. Nasca

BACKGROUND: In 1999, the Accreditation Council for Graduate Medical Education (ACGME) Outcome Project began to focus on resident performance in the 6 competencies of patient care, medical knowledge, professionalism, practice-based learning and improvement, interpersonal and communication skills, and systems-based practice. Beginning in 2007, the ACGME began collecting information on how programs assess these competencies. This report provides information on the nature and extent of those assessments.

METHODS: Using data collected by the ACGME for site visits, we use descriptive statistics and percentages to describe the number and type of methods and assessors that accredited programs (n = 4417) report using to assess the competencies. Observed differences among specialties, methodologies, and assessors are tested with analysis of variance procedures.

RESULTS: Almost all (>97%) of programs report assessing all of the competencies and using multiple methods and multiple assessors. Similar assessment methods and evaluator types were used consistently across the 6 competencies. However, there were some differences in the use of patients and family members as assessors: primary care and ambulatory specialties used these to a greater extent than other specialties.

CONCLUSION: Residency programs are emphasizing the competencies in their evaluation of residents. Understanding the scope of evaluation methodologies that programs use in resident assessment is important for both the profession and the public, so that together we may monitor continuing improvement in US graduate medical education.
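
The specialty comparisons described in the methods rest on one-way analysis of variance. A minimal sketch of that kind of test, using invented per-program counts of assessment methods (not the study's data) and Python's scipy library:

# Minimal ANOVA sketch with invented data; the study's actual
# variables and specialty groupings are more detailed than this.
from scipy import stats

primary_care = [6, 7, 5, 8, 7, 6]    # hypothetical assessment methods per program
surgical = [4, 5, 5, 6, 4, 5]
hospital_based = [5, 6, 5, 5, 6, 4]

f_stat, p_value = stats.f_oneway(primary_care, surgical, hospital_based)
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")  # a small p suggests the group means differ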


Journal of Graduate Medical Education | 2015

Reflections on the First 2 Years of Milestone Implementation.

Eric S. Holmboe; Kenji Yamazaki; Laura Edgar; Lisa N. Conforti; Nicholas Yaghmour; Rebecca S. Miller; Stanley J. Hamstra

The Accreditation Council for Graduate Medical Education (ACGME) and the American Board of Medical Specialties (ABMS) collectively constitute the foundation of professional self-regulation in the United States. In February 1999, the 2 organizations approved 6 general competencies broadly relevant for all medical practice, followed by the official launch of the Outcomes Project in 2001. It was expected that the competencies would be an antidote to overspecification of accreditation standards, and that they would empower programs to create training programs grounded in meaningful outcomes in a developmental approach. As many programs can attest, the implementation of outcomes-based (eg, competency-based) medical education has been challenging. One reason has been the difficulty in implementing the competencies in both curriculum and assessment. Program leaders lacked shared mental models within their own training programs, accompanied by a lack of shared understanding nationally within disciplines.

It is important to remember that 1 of the thorny problems the milestones were intended to address was the sources of unwanted and unwarranted variability in educational and, by extension, clinical outcomes. In addition, the community cannot improve at scale what cannot be measured, and prior frames and approaches to measurement were insufficient and ineffective. A key goal for the milestones is thus to help improve the state and quality of measurement through better assessment in graduate medical education, to facilitate the improved outcomes everyone desires.

Approximately 10 years ago, conversations began on how to more effectively and meaningfully operationalize the competencies to help improve the design of residency and fellowship programs through the use of a developmental framework. In parallel, the ACGME began to explore mechanisms to move the accreditation system to a focus on outcomes using a continuous quality improvement philosophy. Developmental milestones, using narratives to describe the professional trajectories of residents in more descriptive terms, were seen as a way to move the Outcomes Project forward. Starting in 2007, the disciplines of internal medicine, pediatrics, and surgery began to create developmental milestones for the 6 competencies. Surgery would subsequently delay the development of its milestones, focusing first on the SCORE curriculum. The ACGME began to restructure its accreditation processes in 2009, and soon after, milestone groups were constituted for all specialties. Milestone writing groups were cosponsored by the ACGME and the ABMS member certification boards. Early groups had significant latitude in developing their subcompetencies and milestones; specialties that started the process after 2010 used a standard template. Each milestone set was subjected to review by the educational community in the specialty.

BOX 1 provides an overview of the purposes of the milestones across key stakeholders, and FIGURE 1 provides an example of a key driver diagram of milestones as an educational and clinical intervention. As FIGURE 1 highlights, milestones can potentially trigger a number of drivers, or mechanisms, to help enable changes in residency and fellowship education. In 2013, the milestones were officially launched in 7 core specialties (emergency medicine, internal medicine, neurological surgery, orthopaedic surgery, pediatrics, diagnostic radiology, and urology) as a formative, continuous quality improvement component of the new accreditation system. The remaining core disciplines and the majority of subspecialties implemented the milestones starting in July 2014. We have now reached an important "milestone" in the implementation process.

DOI: http://dx.doi.org/10.4300/JGME-07-03-43


Academic Medicine | 2017

Causes of Death of Residents in ACGME-Accredited Programs 2000 Through 2014: Implications for the Learning Environment.

Nicholas Yaghmour; Timothy P. Brigham; Thomas Richter; Rebecca S. Miller; Ingrid Philibert; DeWitt C. Baldwin; Thomas J. Nasca

Purpose: To systematically study the number of U.S. resident deaths from all causes, including suicide.

Method: The more than 9,900 programs accredited by the Accreditation Council for Graduate Medical Education (ACGME) annually report the status of residents. The authors aggregated ACGME data on 381,614 residents in training during the years 2000 through 2014. Names of residents reported as deceased were submitted to the National Death Index to learn causes of death. Person-year calculations were used to establish resident death rates and compare them with those in the general population.

Results: Between 2000 and 2014, 324 individuals (220 men, 104 women) died while in residency. The leading cause of death was neoplastic disease, followed by suicide, accidents, and other diseases. For male residents the leading cause was suicide; for female residents, malignancies. Resident death rates were lower than in the age- and gender-matched general population. Temporal patterns showed higher rates of death early in residency. Deaths by suicide were higher early in training and during the first and third quarters of the academic year. There was no upward or downward trend in resident deaths over the 15 years of this study.

Conclusions: Neoplastic disease and suicide were the leading causes of death in residents. Data for death by suicide suggest added risk early in residency and during certain months of the academic year. Providing trainees with a supportive environment and with medical and mental health services is integral to reducing preventable deaths and fostering a healthy physician workforce.
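
The person-year method mentioned above is straightforward arithmetic: each resident contributes their time in training to the denominator, and the rate is deaths divided by total person-years. A toy calculation in Python (the 324 deaths come from the abstract, but the denominator below is an invented figure, not the study's):

# Toy person-year rate calculation; person_years is a hypothetical total.
deaths = 324                 # deaths reported in the abstract
person_years = 1_650_000     # invented total person-years at risk

rate_per_100k = deaths / person_years * 100_000
print(f"{rate_per_100k:.1f} deaths per 100,000 person-years")
# Comparing this rate against an age- and gender-matched general-population
# rate yields the kind of contrast the study describes.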


Journal of Graduate Medical Education | 2009

Assessing duty hour compliance: practical lessons for programs and institutions.

Ingrid Philibert; Rebecca S. Miller; Jeanne K. Heard; Kathleen D. Holt

In the 6 years since implementation of the common duty hour standards on July 1, 2003, programs and institutions have made changes in education and patient care and have achieved sizable gains in compliance, with the percentage of programs cited for duty hour noncompliance currently hovering around 7 percent. Concurrently, the Accreditation Council for Graduate Medical Education (ACGME) has made enhancements to its processes for monitoring and promoting program and institutional oversight of resident duty hours.

In 2003, it piloted a resident survey, and between 2004 and 2006, it surveyed all accredited specialty programs and subspecialty programs with 4 or more fellows. The survey was repeated in 2007 and 2008, with half of the programs included each year, and in 2009, the ACGME surveyed all specialty programs, as well as subspecialty programs with 4 or more fellows. During each administration, the survey found a small number of programs (around 5 percent of those surveyed) in which a significant percentage of residents reported noncompliance with several standards. Follow-up has included resurveying, requests for information on how the problem is being addressed, and assessment during the next site visit.

In 2008, the ACGME stepped up compliance measures for this cohort, and between October 2008 and March 2009, it conducted site visits of the programs with annually recurring (“continuous”) and multiyear “significant” noncompliance. Program visits entailed review of a full program information form, with added emphasis on duty hour compliance. The concurrent focused institutional reviews for programs in the chronic (successive-year) violator group encompassed a review of program-level citations, monitoring of duty hour compliance, documentation of how the program and institution addressed noncompliance problems, and meetings with residents, with oversampling of residents from programs with duty hour noncompliance and those from procedural and other specialties in which residents are known to work longer hours.


Journal of Graduate Medical Education | 2009

The ACGME Resident Survey Aggregate Reports: An Analysis and Assessment of Overall Program Compliance

Kathleen D. Holt; Rebecca S. Miller

BACKGROUND: The Accreditation Council for Graduate Medical Education (ACGME) uses a 29-question Resident Survey for yearly residency program assessments. This article describes methodology for aggregating Resident Survey data into 5 discrete areas of program performance for use in the accreditation process. It also describes methodology for setting thresholds that may assist Residency Review Committees in identifying programs with potential compliance problems.

METHODS: A team of ACGME staff and Residency Review Committee chairpersons reviewed the survey for content and proposed thresholds (through a modified Angoff procedure) that would indicate problematic program functioning.

RESULTS: Interrater agreement was high for the 5 content areas and for the threshold values (percentage of noncompliant residents), indicating that programs above these thresholds may warrant follow-up by the accrediting organization. Comparison of the Angoff procedure and the actual distribution of the data revealed that the Angoff thresholds were extremely similar to 1 standard deviation above the content area mean.

CONCLUSION: Data from the ACGME Resident Survey may be aggregated into internally consistent and consensually valid areas that may help Residency Review Committees make more targeted and specific judgments about program compliance.
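
The comparison in the results reduces to checking a judged Angoff cut score against the empirical mean plus 1 standard deviation. A sketch in Python with invented noncompliance percentages (neither the survey's actual content areas nor its thresholds):

import statistics

# Invented percent-noncompliant values for one hypothetical content area.
noncompliance = [2.0, 4.5, 3.1, 8.0, 5.2, 2.8, 6.4, 3.9]
angoff_threshold = 9.0  # hypothetical cut score from a modified Angoff panel

empirical = statistics.mean(noncompliance) + statistics.stdev(noncompliance)
print(f"Angoff: {angoff_threshold:.1f}; mean + 1 SD: {empirical:.1f}")
# Programs above either threshold would be flagged for follow-up.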


Journal of Graduate Medical Education | 2017

Program Performance in the Next Accreditation System (NAS): Results of the 2015–2016 Annual Data Review

Lauren M. Byrne; Rebecca S. Miller; Ingrid Philibert; Louis J. Ling; John R. Potts; Mary Lieh-Lai; Thomas J. Nasca

In 2013, the Accreditation Council for Graduate Medical Education (ACGME) implemented a new accreditation model that emphasized annual data-driven reviews and accreditation decisions, targeted feedback, and the freedom for thriving programs to innovate. Review Committees (RCs) are provided annual data in September and October of each year that allow them to identify overall performance, assess trends, and identify specialty and subspecialty programs that may be underperforming. Data reviewed include program characteristics, participating teaching sites, the clinical learning and work environment, changes in faculty and program leadership, resident attrition, scholarly activities, faculty and resident surveys, resident clinical experience, and program-level performance on the certification examinations of the American Board of Medical Specialties member boards.

RCs use this information to give each program an updated accreditation status annually. When the annual data suggest potential problems in a program, the RC may request clarifying information or schedule the program for a site visit. If problems are confirmed, RCs may issue citations for noncompliance or suggest areas for improvement. Programs are required to write a formal response to each citation, which is reviewed by the RC the following year.

The ACGME implemented the Next Accreditation System (NAS) using a phased-in approach. The 2015–2016 annual review cycle is the third review cycle for the 7 Phase I RCs and the second review cycle for the 20 RCs in Phase II. It represents the first year of a stable state for the NAS, as every RC had issued accreditation decisions through the annual review process for at least 1 prior year. Our previous work captured the outcomes associated with the transition of accreditation decisions in the NAS and the presence or absence of citations. In this article, we study the accreditation decisions and citation resolution and/or extension for the 2015–2016 annual review cycle.


Journal of Graduate Medical Education | 2015

Comments: ACGME Response.

Ingrid Philibert; Rebecca S. Miller; Thomas J. Nasca

In their letter, Balon and Stromberg comment that clinical productivity demands are a barrier to scholarly activities for faculty who devote significant time to bedside and clinic teaching.1 We concur with the observation of mounting clinical pressures on faculty, but we are deeply troubled by the assertion that the Accreditation Council for Graduate Medical Education (ACGME) annual review of scholarly activities may promote dishonesty among program directors, and that the appropriate response to such a lack of professionalism is a reexamination of the scholarly activity requirements. We also question whether individuals who teach “numerous hours,” yet lack time or interest for scholarly pursuits, are the best teachers. Their knowledge may have grown outdated, and their teaching might not use current approaches or focus on competencies critical to medical practice in this century.

At the heart of the stated concerns is a process that uses existing standards in the annual review of data for all programs. Of 9000 accredited programs beyond initial accreditation that are reviewed annually, across all accreditation standards only about 1% were identified as needing a site visit for potential problems, and it is extremely unlikely a program would garner Review Committee attention solely for its dearth of scholarly activity.

We agree with the observation that the scholarly activity requirements are in need of review. Yet contrary to statements in the letter, grand rounds and single lectures are already reportable in the “Other Presentations” rubric. The scholarly activity data that have been collected will be used by the ACGME and the Review Committees to gain a better understanding of the environment for scholarship in graduate medical education. The ACGME also encourages the community to deliberate on what constitutes high-impact scholarly activity for faculty engaged in teaching, supervision, and assessment. We believe that being deeply engaged in resident education and assessment, and spending many hours teaching at the bedside or in clinic, generates knowledge about education that deserves dissemination and that can serve as the basis for scholarly activity. The Journal of Graduate Medical Education was developed to ensure a publication venue for this type of scholarly activity. The Review Committees will use the information to ensure that accreditation standards reflect the current, relevant forms of scholarship.

This discussion will undoubtedly need to address the issue of clinical productivity pressures on faculty. Regardless of the outcomes of this deliberation, accepting clinical pressure as justification for lowering scholarly activity expectations, or as an excuse for dishonesty and lack of professionalism by program directors, will not contribute to the high-quality learning environment our residents deserve.


JAMA | 1996

The Initial Employment Status of Physicians Completing Training in 1994

Rebecca S. Miller; Harry S. Jonas; Michael E. Whitcomb


JAMA | 1998

Employment-Seeking Experiences of Resident Physicians Completing Training During 1996

Rebecca S. Miller; Marvin R. Dunn; Thomas Richter; Michael E. Whitcomb


JAMA | 1998

Graduate medical education, 1997-1998.

Marvin R. Dunn; Rebecca S. Miller; Thomas Richter

Collaboration


Dive into Rebecca S. Miller's collaboration.

Top Co-Authors

Thomas J. Nasca, Thomas Jefferson University
Harry S. Jonas, American Medical Association
Timothy P. Brigham, Thomas Jefferson University
DeWitt C. Baldwin, American Medical Association
Furman S. McDonald, American Board of Internal Medicine
Jeanne K. Heard, University of Arkansas for Medical Sciences
Lisa M. Bellini, University of Pennsylvania