Publication


Featured research published by Amy V. Blue.


Academic Medicine | 2010

Changing the Future of Health Professions: Embedding Interprofessional Education Within an Academic Health Center

Amy V. Blue; Maralynne D. Mitcham; Tom Smith; John R. Raymond; Raymond S. Greenberg

Institutions are increasingly considering interprofessional education (IPE) as a means to improve health care and reduce medical errors in the United States. Effective implementation of IPE within health professions education requires a strategic institutional approach to ensure longevity and sustainability. In 2007, the Medical University of South Carolina (MUSC) established Creating Collaborative Care (C3), an IPE initiative that takes a multifaceted approach to weaving interprofessional collaborative experiences throughout MUSC's culture to prepare students to participate in interprofessional, collaborative health care and research settings. In this article, the authors describe C3's guiding conceptual foundation and student learning goals. They present its implementation framework to illustrate how C3 is embedded within the institutional culture. It is housed in the provost's office, and an overarching implementation committee functions as a central coordinating group. Faculty members develop and implement C3 activities across professions by contributing to four collaborating domains (curricular, extracurricular, faculty development, and health care simulation), each of which captures an IPE component. The authors provide examples of IPE activities developed by each domain to illustrate the breadth of IPE at MUSC. The authors believe that MUSC's efforts, including the conceptual foundation and implementation framework, can be generalized to other institutions intent on developing IPE within their organizational cultures.


Academic Medicine | 2011

Core Competencies for Interprofessional Collaborative Practice: Reforming Health Care by Transforming Health Professionals' Education

Madeline H. Schmitt; Amy V. Blue; Carol A. Aschenbrener; Thomas R. Viggiano

Concerns about the quality and safety of health care delivery continue to mount, and the deficiencies cannot be addressed by any health profession alone.1 Despite numerous reports citing the need for team-based education in health professions schools,2 meaningful preparation for collaborative practice …


Academic Medicine | 1998

Comparing fourth-year medical students with faculty in the teaching of physical examination skills to first-year students

Steven A. Haist; John F. Wilson; Nancy L. Brigham; Sue E. Fosson; Amy V. Blue

PURPOSE: To see whether fourth-year medical students can teach the physical examination to first-year students as effectively as can faculty preceptors. METHOD: Ninety-three first-year students studying the physical examination were randomly assigned to one of ten fourth-year student preceptors or one of 15 faculty preceptors. Test results and course evaluations were compared by type of preceptor. Fourth-year student preceptors were surveyed regarding their experience. RESULTS: The mean test scores did not differ between the first-year students with fourth-year student preceptors and those with faculty preceptors. The first-year students rated the fourth-year student preceptors higher than they did the faculty preceptors. The fourth-year students rated their experience favorably. CONCLUSION: A select group of fourth-year medical students provides a successful alternative to faculty in the teaching of the physical examination to first-year students.


Advances in Health Sciences Education | 2000

The Effect of Gender and Age on Medical School Performance: An Important Interaction

Steven A. Haist; John F. Wilson; Carol L. Elam; Amy V. Blue; Sue E. Fosson

Being able to predict medical school performance is essential to help ensure the supply of quality physicians. The purpose of our study was to examine the influence of gender and age on academic performance (AP) and on academic difficulty (AD). The study involved all matriculants of 3 classes at one medical school. Independent variables included gender, age (categorized as younger or older than 23 years), and the gender by age interaction. Dependent variables included an AP scale score, a clinically based performance examination, and AD. The Wilson AP Scale score was developed to assess both excellent and poor performance. The Wilson AP Scale included first-, second-, and third-year medical school grade-point averages, USMLE Step 1 score, and USMLE Step 2 score. Older women as a group had the highest mean Wilson AP Scale score. Women performed better than men on the clinically based performance examinations. Younger men were least likely to have AD and younger women were most likely to have AD. Five of 123 younger men versus 13 of 66 older men had AD. Also, 15 of 63 younger women had AD versus 2 of 27 older women. A significant gender by age interaction was present in predicting the Wilson AP Scale score (p = 0.009) and AD (p = 0.002). Older women performed better than both older men and younger women in 3 classes of medical students at one medical school. A significant gender by age interaction was predictive of AP and AD. These findings may have implications for admission decisions.
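The reported counts make the crossover nature of the interaction concrete; the rates below are computed here from the abstract's own figures:

\[
\frac{5}{123} \approx 4\% \text{ (younger men)}, \quad
\frac{13}{66} \approx 20\% \text{ (older men)}, \quad
\frac{15}{63} \approx 24\% \text{ (younger women)}, \quad
\frac{2}{27} \approx 7\% \text{ (older women)}
\]

Older age is associated with a roughly fivefold higher AD rate among men but a roughly threefold lower rate among women; this reversal of the age effect across genders is what the significant interaction term (p = 0.002) reflects.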


Medical Education | 1999

Students’ attitudes towards computer testing in a basic science course

Robert W. Ogilvie; Thomas C. Trusk; Amy V. Blue

The introduction of computerized testing offers several advantages for test administration; however, little research has examined students’ attitudes toward computerized testing. This paper reports the attitudes of 202 students in a first-year cell biology and histology course toward computerized testing and its influence on their study habits over a three-year period.


Surgery | 1998

Assessing residents' clinical performance: Cumulative results of a four-year study with the Objective Structured Clinical Examination

Richard W. Schwartz; Donald B. Witzke; Michael B. Donnelly; Terry D. Stratton; Amy V. Blue; David A. Sloan

BACKGROUND The Objective Structured Clinical Examination (OSCE) is an objective method for assessing clinical skills and can be used to identify deficits in clinical skill. During the past 5 years, we have administered 4 OSCEs to all general surgery residents and interns. METHODS Two OSCEs (1993 and 1994) were used as broad-based examinations of the core areas of general surgery; subsequent OSCEs (1995 and 1997) were used as needs assessments. For each year, the reliability of the entire examination was calculated with Cronbach's alpha. A reliability-based minimal competence score (MCS) was defined as the mean performance (in percent) minus the standard error of measurement for each group in 1997 (interns, junior residents, and senior residents). RESULTS The reliability of each OSCE was acceptable, ranging from 0.63 to 0.91. The MCS during the 4-year period ranged from 45% to 65%. In 1997, 4 interns, 2 junior residents, and 2 senior residents scored below their group's MCS. MCS for the groups increased across training levels in a developmental fashion (P < .05). CONCLUSIONS Given the relatively stable findings observed, we conclude that (1) the OSCE can be used to reliably identify group and individual differences in clinical skills, and (2) we continue to use this method to develop appropriate curricular remediation for deficits in both individuals and groups.
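A minimal sketch of the cutoff computation, assuming the classical test theory definition of the standard error of measurement (the abstract itself states only "mean performance minus the standard error of measurement"):

\[
SEM = s\sqrt{1 - \alpha}, \qquad MCS = \bar{x} - SEM
\]

where \(\bar{x}\) and \(s\) are a group's mean and standard deviation of percent scores and \(\alpha\) is the examination's Cronbach's alpha. Under this reading, a more reliable examination yields a smaller SEM, placing the minimal competence cutoff closer to the group mean.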


Journal of Interprofessional Care | 2010

Interprofessional education in US medical schools

Amy V. Blue; James S. Zoller; Terry D. Stratton; Carol L. Elam; John Gilbert

Introduction: Interprofessional education (IPE) is called for in United States health professions education (Institute of Medicine, 2003). The Association of American Medical Colleges (AAMC) includes interprofessional health education and practice as a strategic area in which the organization and members should engage (AAMC, 2007). The current status of IPE within United States medical schools has remained largely unexamined. Therefore, we sought to learn the current practice of IPE in US medical schools, including program features, institutional governance and resource contexts, and barriers to implementation. Methods: We surveyed college of medicine education deans or dean designees of 126 US medical schools as identified by the AAMC in late summer, 2008, using an instrument we developed following a literature review. The instrument was composed of three sections: (1) a description of specific IPE offerings at the school, (2) information regarding institutional supports and IPE resources, and (3) perceptions of potential barriers to IPE. With respect to the description of specific IPE offerings, respondents were asked the following: (a) if the offering was required or elective, (b) learner disciplines involved, (c) faculty disciplines involved, (d) type of learning experience, (e) type of learning setting, (f) general content area of offering, and (g) student assessment methods. With respect to institutional supports and resources for IPE, respondents were asked the following: (a) administrative unit with responsibility for coordinating IPE, (b) budget for IPE, (c) governance of IPE, (d) resources (monetary and …


Academic Medicine | 2005

The relationship between the National Board of Medical Examiners' prototype of the Step 2 clinical skills exam and interns' performance.

Marcia L. Taylor; Amy V. Blue; Arch G. Mainous; Mark E. Geesey; William T. Basco

Purpose To examine the relationship of graduates’ performances on a prototype of the National Board of Medical Examiners’ Step 2 CS, along with other undergraduate measures, to their residency directors’ ratings of their performances as interns. Method Data were collected for the 2001 and 2002 graduates from the study institution. Checklist and interpersonal scores from the prototype Step 2 CS, along with United States Medical Licensing Examination (USMLE) Step 1 and 2 scores and undergraduate grade-point average (GPA), were correlated with residency directors’ ratings (average score for six competencies, quartile ranking, and isolated interpersonal communication competency score). Stepwise linear regression was used to identify the best outcome predictors. Results Quartile ranking was more highly correlated with GPA than with the Step 2 CS prototype interpersonal score, USMLE Step 2 score, USMLE Step 1 score, or Step 2 CS prototype checklist score. The average score on the residency directors survey was more highly correlated with GPA than with the USMLE Step 2 score, USMLE Step 1 score, Step 2 CS prototype interpersonal score, or Step 2 CS prototype checklist score. The best predictors for both quartile ranking and average competency score were GPA and the Step 2 CS prototype interpersonal score (R² = 0.26 and 0.28, respectively). Conclusion Both scores from the Step 2 CS prototype significantly correlated with the interns’ quartile ranking and average competency score. GPA and the Step 2 CS prototype interpersonal score alone accounted for most of the variance of performance in the regression model.
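Read as an equation, the final stepwise model implied by the abstract has the following form, fit once per outcome (the symbols are illustrative shorthand here, not the authors' notation):

\[
\widehat{\text{rating}} = \beta_0 + \beta_1 \cdot \text{GPA} + \beta_2 \cdot \text{CS}_{\text{interpersonal}}, \qquad
R^2 = 0.26 \text{ (quartile ranking)}, \; 0.28 \text{ (average competency score)}
\]

The other candidate predictors (USMLE Step 1 and Step 2 scores and the Step 2 CS checklist score) did not emerge as best predictors in the stepwise selection.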


Academic Medicine | 2000

Does institutional selectivity aid in the prediction of medical school performance?

Amy V. Blue; Gregory E. Gilbert; Carol L. Elam; William T. Basco

Various factors are considered in the decision to offer an admission interview to a medical school applicant, including Medical College Admission Test (MCAT) scores, undergraduate grade-point average (GPA), and the selectivity of the degree-granting undergraduate institution. Admission officers view MCAT scores, undergraduate GPA, and institutional selectivity as having high or moderate importance. Research has indicated that these factors, most notably the MCAT scores and the undergraduate GPA, are reliable in helping predict medical school performance. The strongest association has been shown between MCAT scores and performance on the United States Medical Licensing Examination, Step 1. Institutional selectivity data are used to help control for differences in grading stringency across undergraduate institutions. Previous reports have examined the role of institutional selectivity, or a specific undergraduate institution, as a predictor of performance in the first two years of medical school. With the exception of the study of Zelesnik et al., which examined ten specific undergraduate institutions, these reports have used the Higher Education Research Institute (HERI) Index, also called the “Astin Index,” as a measure of institutional selectivity. Other measures of institutional selectivity or categorization that schools of medicine may employ include the Barron’s Profiles of American Colleges Admissions Selector Rating and the Carnegie Classification from the Carnegie Foundation for the Advancement of Teaching. (These measures are explained in the next section.) Institutional validity studies of admission decision-making data help to determine which characteristics should be accorded highest importance in applicant selection. Given the reliance upon institutional selectivity as an important admission characteristic and the different types of selectivity classifications available for medical schools to use, the purpose of this study was to examine how well three measures of institutional selectivity could predict medical students’ performances, specifically their performances on the USMLE Step 1 and Step 2 and their final medical school GPAs.


Medical Teacher | 2009

Assessment of matriculating medical students' knowledge and attitudes towards professionalism

Amy V. Blue; Sonia J. Crandall; George Nowacek; Richard M. Luecht; Sheila W. Chauvin; Herbert Swick

Background: Students’ perceptions of traditional attributes of professionalism are important for understanding their professional development needs, and determining appropriate curricular initiatives and assessment methods. Aim: This study assessed the knowledge and attitudes towards professionalism of three classes of matriculating students at two institutions. Methods: Subjects completed four instruments: a multiple-choice test and a clinical scenario instrument assessed knowledge; a semantic differential scale and a Likert-format statement instrument assessed attitudes. Items reflected traditional professionalism attributes. Factor analysis identified scales, and descriptive statistics were computed for each scale. Results: Six hundred forty-six students (82%) completed the instruments. Correlations among scales were low to moderate. Knowledge scores were highest for the attributes ‘humanism’ and ‘professional responsibility’ and lowest for the attribute ‘professional commitment’. Attitude scores were highest for ‘humanistic values’ and lowest for ‘subordinating self-interests’. Conclusions: Results indicate students’ attitudes are positive about several of the attributes associated with traditional professionalism definitions; however, there were cases where students’ knowledge and attitudes towards professionalism appear incongruent with traditional definitions. Further development of self-assessments of knowledge and attitudes towards professionalism is suggested.

Collaboration


Dive into Amy V. Blue's collaborations.

Top Co-Authors

William T. Basco
Medical University of South Carolina

Gregory E. Gilbert
Medical University of South Carolina

Alexander W. Chessman
Medical University of South Carolina

Arch G. Mainous III
Medical University of South Carolina

Steven A. Haist
National Board of Medical Examiners