
Publications

Featured research published by Patrick C. Alguire.


Annals of Internal Medicine | 2012

Appropriate Use of Screening and Diagnostic Tests to Foster High-Value, Cost-Conscious Care

Amir Qaseem; Patrick C. Alguire; Paul Dallas; Lawrence E. Feinberg; Faith T. Fitzgerald; Carrie Horwitch; Linda Humphrey; Richard F. LeBlond; Darilyn V. Moyer; Jeffrey G. Wiese; Steven E. Weinberger

Unsustainable rising health care costs in the United States have made reducing costs while maintaining high-quality health care a national priority. The overuse of some screening and diagnostic tests is an important component of unnecessary health care costs. More judicious use of such tests will improve quality and reflect responsible awareness of costs. Efforts to control expenditures should focus not only on benefits, harms, and costs but also on the value of diagnostic tests, meaning an assessment of whether a test provides health benefits that are worth its costs or harms. To begin to identify ways that practicing clinicians can contribute to the delivery of high-value, cost-conscious health care, the American College of Physicians convened a workgroup of physicians to identify, using a consensus-based process, common clinical situations in which screening and diagnostic tests are used in ways that do not reflect high-value care. The intent of this exercise is to foster thoughtful discussion about these tests and other health care interventions in order to promote high-value, cost-conscious care.


Annals of Internal Medicine | 2003

Competency in Interpretation of 12-Lead Electrocardiograms: A Summary and Appraisal of Published Evidence

Stephen M. Salerno; Patrick C. Alguire; Herbert S. Waxman

Interpretations of 12-lead resting electrocardiograms (ECGs) are often required in both ambulatory and inpatient settings. Organizations, including the American College of Cardiology (ACC) and the American Heart Association (AHA), have published consensus-based competency standards suggesting the optimal number of ECGs necessary to obtain and maintain competency in ECG interpretation skills. The ACC/AHA guidelines recommend interpreting a minimum of 500 supervised ECGs during initial training, using standardized testing in ECG interpretation to confirm initial competency, and interpreting 100 ECGs yearly to maintain competency (1). The consensus statement asserts that these standards should apply to all practice settings and situations; however, the statement is controversial because of the lack of evidence-based literature on the optimal techniques for learning, maintaining, and testing competency in ECG interpretation. This systematic review attempts to synthesize the literature that may prove useful for physicians, program directors, and organizations seeking alternative evidence-based information on physician ECG interpretation skills.

Methods

We retrieved information through systematic searches and ongoing surveillance of MEDLINE (1996 to 2002), EMBASE (1974 to 2002), and the Cochrane Controlled Trials Register (1975 to 2002). We used the following index terms and text words: electrocardiogram interpretation, electrocardiogram competency, electrocardiogram training, electrocardiogram errors, and computer electrocardiogram interpretation. Our search was limited to English-language articles that studied adult participants. References of the full-length articles were analyzed for additional citations. The search revealed 419 articles of potential interest. After we analyzed the abstracts of these articles, we eliminated 378 because ECG interpretation was not the main study focus. Thirty-nine articles and 2 letters to the editor contained research data directly related to ECG interpretation. We divided the articles into the following broad categories: studies that included clinical outcomes (Table 1), studies that included discussion of the accuracy of computer ECG interpretation (Table 2), and all other studies that compared noncardiologists to a cardiologist reference standard (Table 3). The studies were not graded by quality. Although criteria exist for grading the quality of studies of diagnostic tests, the focus of our study was the users of the test rather than the usefulness of electrocardiography for diagnosing specific diseases. We also reviewed the recommendations of the American Board of Internal Medicine, American College of Physicians, AHA, ACC, and Accreditation Council for Graduate Medical Education Residency Review Committees for Internal Medicine and Cardiovascular Diseases (1, 43-45).

Table 1. Electrocardiogram Interpretation Studies with Clinical Outcomes
Table 2. Electrocardiogram Interpretation Studies Comparing Computer and Physician Interpretation
Table 3. Electrocardiogram Interpretation Studies without Outcomes Related to Interpretation Errors

Reference Standards for Determining ECG Interpretation Accuracy

One common feature of most ECG interpretation studies is the use of an expert electrocardiographer gold standard, typically a consensus panel of cardiologists. This may be problematic because interpretations by several cardiologists reading the same ECG often vary substantially (14, 17, 29, 42).
Even one cardiologist reading the same ECG on separate occasions may have substantially different interpretations (14, 29, 42). Most studies on ECG interpretation by cardiologists report the proportion of abnormal diagnoses that are correctly identified, as determined by a consensus panel. These studies report that the participating cardiologists correctly determined 53% to 96% of the abnormalities identified by the reference standard (4, 17, 21, 23, 24, 28, 30). Two recent studies examined whether cardiologists agreed among themselves and their colleagues by using κ statistics to adjust for interpretations that agreed on the basis of chance alone (14, 29). Holmvang and colleagues reviewed 502 ECGs examined by both local cardiologists and an expert electrocardiography consensus panel (29). Agreement was poor to moderate on identification of ST-segment elevation (κ = 0.05), ST-segment depression (κ = 0.38), and a normal ECG (κ = 0.42). Interrater agreement on detecting T-wave inversion was very good (κ = 0.63). In contrast, intrarater agreement by the expert electrocardiographic consensus panel was good (κ = 0.58 to 0.67). Levels of agreement may be higher for serious abnormalities, such as ST-segment elevation criteria for use of thrombolytic therapy. Massel and colleagues reported substantial agreement (κ = 0.78) among three cardiologists examining whether 75 ECGs met criteria for thrombolytic therapy (14). The study also showed good intrarater agreement (κ = 0.67 to 0.71) when the three cardiologists determined criteria for thrombolytic therapy on two separate occasions. Because cardiologists do not agree on many aspects of ECG interpretation, future studies examining noncardiologist or computer interpretation should include κ statistics and control groups of cardiologists. Readers can then draw more sophisticated conclusions about the importance of disagreements in ECG analysis. More accurate measures of agreement, such as weighted κ statistics, should be included because disagreement may relate to only some aspects of the ECG interpretation.

Frequency of ECG Interpretation Errors by Staff and Resident Physicians

Numerous trials have compared the ECG interpretation skills of cardiologists and noncardiologists. Most studies measured the proportion of ECG diagnoses determined by an expert consensus panel gold standard that noncardiologist physicians could identify. Seven studies measuring comprehensive ECG analysis found that the proportion of ECG diagnoses correctly identified by noncardiologist physicians ranged from 36% to 96% (4-6, 8, 11, 13, 33). In several studies that focused on a particular aspect of ECG interpretation, noncardiologists identified 87% to 100% of ECGs showing acute myocardial ischemia (6, 27), correctly classified 72% to 94% of ECGs as meeting inclusion criteria for thrombolytic therapy (3, 26, 30, 32, 37, 40), diagnosed 57% to 95% of ST-segment abnormalities (2, 9, 26), and correctly measured about 25% of PR and QT intervals (36). Although most studies combined resident and staff physicians in their analyses, some provided data allowing subgroup analysis. Twelve articles included information specifically on resident physician interpretation skills. Resident physicians detected 96% of abnormal ECGs (15), correctly identified inclusion criteria for thrombolytic therapy 73% to 84% of the time (3, 26, 30, 32), demonstrated 36% to 80% of ECG diagnoses as determined by expert electrocardiographers (5, 6, 11, 13, 21, 28, 41), and discovered 38% of technical ECG abnormalities (41).
Only two articles provided sufficient information for subgroup analysis on noncardiologist staff physicians. The articles did not provide information on specialty background or board certification. Staff physicians correctly identified inclusion criteria for thrombolytic therapy 77% of the time (30) and diagnosed abnormal ST-segment and T-wave abnormalities 57% to 97% of the time (16). Four studies provided information on the ECG interpretation skills of nonphysicians. In two studies, nurses correctly identified ECG criteria for thrombolytic therapy 84% to 94% of the time (30, 37). The other studies examined the interpretation skills of medical students (28, 38), who identified 17% to 63% of ECG abnormalities identified by an expert reference standard. Trainees often gain experience in ECG interpretation when they use an ECG to assist in the clinical management of a patient. Not surprisingly, research has shown that clinical history may affect interpretation of ECGs (28, 31, 38). Providers with less training are influenced by the history to a greater extent than are more experienced electrocardiographers. For example, when given a misleading history, diagnostic accuracy was reduced by 5% for cardiologists but up to 25% for residents in one recent study (28). Cardiologists performed better than other interpreters in all settings and were 90% accurate in their diagnoses, even when no history was provided. Another study also demonstrated that interpretations by cardiologists were minimally dependent on the presence or absence of history (46). This information suggests that noncardiologist interpreters may make more accurate interpretations when they know the clinical context of the ECG. False-positive ECG interpretations could lead to unnecessary patient treatment. Six studies examined this aspect of interpretation (9, 15-17, 37, 40). Cardiologists typically had fewer false-positive interpretations than did noncardiologists. Cardiologists demonstrated a specificity of 93% to 100% for diagnosis of substantial ECG abnormalities, such as myocardial ischemia; specificity for noncardiologist physicians was 73% to 100%. In simpler determinations, such as differentiating normal from abnormal ECGs, the specificity of noncardiologists also approached that of cardiologists (15).

Severity and Consequences of Errors in ECG Interpretation

Eleven studies examined whether ECG interpretation errors could affect patient management, and seven measured patient outcomes (Table 1). Thirteen studies assessing the severity of interpretation errors reported that 4% to 33% of interpretations contained errors of major importance (2-13, 21). Expert consensus panels studied the charts of patients with ECG interpretation errors and determined whether a correct ECG interpretation would have changed patient management. This more detailed analysis revealed inappropriate management as a result of interpretation errors in 0% to 11% of cases (Table 1).
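
The κ values quoted in this abstract measure agreement corrected for chance. As a point of reference (this is the standard definition of Cohen's κ, not a detail drawn from the article itself):

    \kappa = \frac{p_o - p_e}{1 - p_e}

where p_o is the observed proportion of readings on which two interpreters agree and p_e is the proportion of agreement expected by chance given each interpreter's marginal diagnosis rates. κ = 1 indicates perfect agreement and κ = 0 indicates agreement no better than chance, so by the commonly used benchmarks the κ = 0.05 reported for ST-segment elevation is near-chance agreement, while the κ = 0.78 for thrombolytic criteria is substantial agreement.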


Journal of General Internal Medicine | 1998

A Review of Journal Clubs in Postgraduate Medical Education

Patrick C. Alguire

OBJECTIVE: To review the goals, organization, and teaching methods of journal clubs; summarize the elements of successful clubs; and evaluate their effect on reading habits and their effectiveness in meeting teaching goals. Examples of clubs that apply principles of adult learning are reviewed.

DATA SOURCES: English-language articles identified through a MEDLINE search (1966–1997) using the MeSH terms "internship" and "residency" and the text words "journal club" and "critical appraisal."

STUDY SELECTION: Articles on learning goals and organization were included if they represented national or regional surveys with a response rate of 65% or greater. Articles that evaluated teaching effectiveness were included if they used a controlled educational design or if they exemplified important adult learning principles.

DATA EXTRACTION: Data were manually extracted from selected studies and reviews.

DATA SYNTHESIS: A major goal of most clubs is to teach critical appraisal skills. Clubs with high attendance and longevity are characterized by mandatory attendance, availability of food, and perceived importance by the program director. Residents who are taught critical appraisal report paying more attention to study methods, are more skeptical of conclusions, and have increased knowledge of clinical epidemiology and biostatistics, but studies have failed to demonstrate that these residents read more or read more critically. Reading guidelines may be useful for teaching critical appraisal skills and may be associated with increased resident satisfaction.

CONCLUSIONS: Journal club formats are educationally diverse, can incorporate adult learning principles, and are an adaptable format for teaching the "new basic sciences."


Annals of Internal Medicine | 2004

A New Model for Accreditation of Residency Programs in Internal Medicine

Allan H. Goroll; Carl Sirio; F. Daniel Duffy; Richard F. LeBlond; Patrick C. Alguire; Thomas A. Blackwell; William E. Rodak; Thomas Nasca

Medical education is experiencing a back-to-basics movement, with increased emphasis on mastery of core clinical competencies (1-3). Debates over curricular time, clinical rotations, and conferences are being replaced by discussions about clinical competence and its assessment (4-8). The change is driven largely by evolving societal mandates for quality, safety, and accountability in health care (9-11) and is resulting in re-examinations of priorities and programs not only by medical schools and residency training programs but also by certifying, licensing, and accrediting bodies (12, 13). As the accrediting body for the nation's medical residency programs, the Accreditation Council for Graduate Medical Education (ACGME) bears responsibility for the quality of graduate medical education (14). Through its accrediting authority, the ACGME has the potential to serve as a constructive force for reform of graduate medical education by better aligning accreditation standards with desired medical education outcomes (5, 6). This paper describes a new outcomes-based model for residency program accreditation in internal medicine initiated by the ACGME's Residency Review Committee for Internal Medicine (RRC-IM). The RRC-IM's Long-Range Planning Committee (LRPC) was asked to perform a "blue sky" examination of residency program accreditation in internal medicine, taking up the ACGME charge to make the system more outcomes based (that is, focused on trainee clinical competence) (5). In response, the LRPC developed a new accreditation model that is designed to serve as a long-range plan and template for advancing graduate medical education in internal medicine through reform of the accreditation process. Related goals included 1) enhancing the validity, reliability, and efficiency of the accreditation system; 2) encouraging continuous program improvement; and 3) stimulating educational innovation.

Current Approach to Accreditation in Internal Medicine and Its Shortcomings

The current approach relies on documentation of compliance with an extensive list of requirements in such areas as facilities, faculty, teaching program, and methods of evaluation. There are nearly 400 specific requirements listed (15), and educational processes account for the vast majority. The only objective outcome measure is the 3-year rolling-average aggregate pass rate of program graduates on the American Board of Internal Medicine (ABIM) certification examination. While substantive improvements have been made to this accreditation system (for example, use of standardized computer-based resident questionnaires and constant updating and revision of program requirements in close consultation with stakeholders), the system remains a largely passive process for the training programs, relying on periodic external audit. Shortcomings include extensive documentation that must be prepared by training program directors, hundreds of hours of review required annually by RRC members, little incentive for program directors to monitor key educational outcomes or to continuously improve educational programs between audits, and absence of comprehensive objective measurements of program effectiveness. These limitations leave some program directors and reviewers questioning the value of the current accreditation process and others concerned about the accountability and societal responsiveness of our training system in internal medicine (12, 13).
Toward a New Accreditation System: Basing Program Accreditation on Aggregate Clinical Competence and Essential Educational and Clinical Infrastructures

The LRPC accreditation model shifts the focus of accreditation from intermittent external audit of educational process to continuous internal monitoring and improvement of trainee clinical competence. In addition, the model specifies essential resident and patient care protections that foster a safe and effective training environment.

Rationale

The main reason for changing to a competency-based accreditation system is to better align the accreditation process with desired educational outcomes. By making clinical competence the principal basis for residency program accreditation (a high-stakes determination), the model redirects attention to and reinforces the primary educational mission. The existing accreditation system's concentration on educational process prompts the question of whether a program's compliance with current requirements ensures the graduation of skilled internists and enables differentiation between good-quality training programs and substandard ones (12, 13). An accreditation system that makes aggregate trainee clinical competence the prime basis for accreditation should enhance program accountability and provide a powerful stimulus to improve training. In addition, by emphasizing outcomes over process, the new model gives program directors considerably more freedom to innovate. As important as aggregate clinical competence is for judging the educational effectiveness of a residency program, it is insufficient to ensure fundamental patient and trainee protections. Consequently, the proposed model specifies the inclusion of a limited set of patient care and educational infrastructure requirements that foster safe, well-functioning systems of care and training. If this reform is to achieve the goal of continuous program improvement, the audit process associated with accreditation needs to become more continuous and internal to the program rather than infrequent and predominantly external. For this reason, the current review process of extensive documentation and periodic external audits for compliance with process requirements is largely replaced by regular ongoing internal (faculty) assessment of trainee clinical performance. Continuous program self-monitoring provides the opportunity for real-time feedback that can be applied both to improving an individual trainee's clinical performance and to strengthening the overall educational program. The alternative, a periodic external evaluation of trainee competence, while appealing for its potential objectivity and uniformity, cannot deliver the necessary information in timely fashion and would be very expensive and difficult to carry out without disrupting training and patient care. Despite the emphasis on local assessment of trainee competence as the principal basis for accreditation, the model proposes a balanced approach to program evaluation that recognizes the need for complementary external oversight and national performance standards. Included in the external oversight would be certification and audit of the local evaluation process. In addition, some measures of clinical competence, such as assessment of medical knowledge and reasoning, will continue to be performed by external testing (for example, through the secured ABIM examination).
Moreover, external peer review is retained in this model, relying on the RRC to examine all data and use its judgment to render the final accreditation decision. An outcomes-based system will require time and much developmental work before it can be universally implemented; however, the LRPC views immediate pilot implementation as a desirable first step. On the basis of the model's potential to stimulate continuous program improvement, the LRPC has recommended that programs with a strong accreditation history be able to opt out of the current accreditation system in return for 1) achieving a higher minimum ABIM pass rate, 2) meeting a strengthened set of infrastructure standards, and 3) implementing a comprehensive (possibly home-grown) competency assessment program that includes critical review of evaluation methods and use of clinical performance data to continuously improve the teaching program. Developing and implementing assessment methods and setting performance standards represent major undertakings that will require years of shared work and pooled resources. Although the RRC-IM has ultimate responsibility for setting aggregate trainee competency standards for program accreditation, its goal of fundamentally changing the accreditation system will need to be a collaborative process that solicits detailed input from all stakeholders (including the public). Working with stakeholders and using its accrediting authority, the RRC-IM can stimulate development and testing of clinical competency measures and help bring about a sound, acceptable, publicly responsive outcomes-based accreditation system.

The Outcome Requirements: Mastery of Core Competencies

The proposed outcome measures are the clinical competencies that all residents should master by the end of their training. A set of such competencies has been identified through an ACGME initiative (5) and refined by stakeholders in internal medicine into a set of working definitions expressed in behavioral terms (Table 1) (16). The ABIM incorporated these 6 competencies into its resident evaluation forms (17), and the ACGME has asked all of its RRCs to begin making reference to the competencies in their program requirements (18). Performance standards for the competencies will need to be specified and might be stated in terms of outcome statements that define safe, patient-centered, efficient, effective, and appropriate care (similar to the Institute of Medicine's Aims [19]).

Table 1. The Core Clinical Competencies for Internal Medicine*

The Infrastructure Requirements: Essential Trainee and Patient Protections

Complementing the competency standards in this model would be requirements for institutional and program infrastructures essential to safe and effective education and patient care. Examples of educational infrastructure requirements might include workload standards, duty-hour limits, faculty qualifications, and procedures ensuring regular evaluation and timely feedback. Requirements to ensure patient safety might encompass electronic medical records, order-entry and tracking systems, and well-organized and adequa


Annals of Internal Medicine | 1996

Resident Research in Internal Medicine Training Programs

Patrick C. Alguire; William A. Anderson; Richard R. Albrecht; Gregory A. Poland

Resident research is strongly supported by many training programs and is solidly endorsed by the Accreditation Council for Graduate Medical Education (ACGME) in its general requirements (1, p. 17). Recently, the Residency Review Committee for Internal Medicine mandated evidence of scholarly activity for each resident before graduation (1, p. 52); scholarly activity was defined as original research, comprehensive case reports, or review of assigned clinical and research topics. We used this broad definition to define resident research in our study. Proponents of resident research see it as a way to improve resident education, promote quality patient care, and provide essential skills for lifelong learning [2-5]. Others have suggested that resident research enhances analytic reading skills and critical thinking and that it can prepare graduates for various research roles in academia and the community [6-11]. Many residents value research training, and the absence of this training has been criticized by graduates of respected medicine training programs [12, 13]. In one university program, no other learning activity was rated as more important than the required research project, and 86% of the graduates and 66% of the senior residents agreed that all physicians should have research experience [14]. Despite the widespread appeal of resident research, there are many perceived barriers to it, including lack of mentors, lack of training opportunities, and lack of infrastructure [2, 15-19]. Other reported barriers include lack of resident interest, lack of curricular time, lack of background instruction, lack of financial support, and the pressures of clinical duties [20-22]. Most of the literature on resident research [2, 8, 10, 16, 17, 23-30] comes from disciplines without a strong research base, such as family medicine, psychiatry, and rehabilitation medicine. In these disciplines, resident research has important links to the viability of the disciplines themselves and to the promotion of academic careers, and it is a justification for reimbursement for services. Similar arguments have been echoed by leaders in subspecialty and general internal medicine [31, 32], and it has been proposed that internal medicine programs become more flexible to allow interested residents to obtain research experience [33]. To this end, the American Board of Internal Medicine established the Clinical Investigator Pathway, which permits board eligibility after 2 years of clinical training and 2 years of research training (34, pp. 8-10). Some programs, most notably that at Brigham and Women's Hospital and the Subspecialty Training and Research program at the University of California, Los Angeles, have developed parallel clinical and research tracks to launch young physicians on research careers [35, 36]. Research training initiatives have been proposed for junior faculty and general internal medicine fellowships [26, 37, 38], and yet the question of research experience for most residents has been largely ignored. In the past decade, two studies pertinent to internal medicine have attempted to quantify research activity during residency. A 1991 survey of all graduate medical education programs [39] reported that more than 75% of these programs had research rotations and that 66% required these rotations. Almost half of the reporting programs required a research project, but subspecialty programs were more likely than programs accepting postgraduate year (PGY)-1 residents to require a project.
A survey of internal medicine program directors in 1983 [40] found that 53% of programs offered a research elective, but no information was given about how many residents took advantage of this, what was offered, or what was accomplished. Resident research is considered to be an important pedagogical skill vital to the growth and development of subspecialty and general internal medicine. Yet, basic questions remain unanswered, such as what constitutes resident research; what the requisite knowledge, skills, and attitudes are; and what resources are needed. In an attempt to answer these questions and to determine current readiness to meet the mandated research requirements, we describe 1) the current level of resident research activity; 2) the research environment and available resources; 3) educational outcomes and skills considered important; and 4) important barriers to resident research.

Methods

A 33-question survey was mailed to all ACGME-accredited internal medicine training programs in September 1993. The surveys were addressed to the program directors listed in the 1993 ACGME Directory of Graduate Medical Education Programs. Subsequent mailings were sent to nonresponders in November 1993 and February 1994. The questions were organized into four sections: research activities of categorical internal medicine residents; opinions about resident research activities; research activities of faculty; and residency demographic information. The following definitions were provided in the survey:

1. Hypothesis-driven research mandates an a priori establishment of a hypothesis, collection of data, and analysis of data with inferential or descriptive statistics.

2. Descriptive studies are observations not driven by a specific hypothesis and may consist of a single case report, a case series, or a description of a population.

3. Literature reviews do not involve the collection of original data or observations. Literature reviews may be analytical reviews that provide a comprehensive, critical assessment of the available published data on a medical subject and may be subject to meta-analytical techniques. Nonanalytical reviews meet few or none of these criteria but simply report on findings published in past and current literature without a critical, predesigned framework of appraisal or statistical analysis.

In section one of the questionnaire, 21 questions addressed aspects of resident research activities, including mandatory and minimal research expectations for residents; the presence, nature, and efforts of a research director; the presence, components, resources, and format of an organized research program; the presence and nature of protected time for resident research; the current level of resident research activity; and the educational and skill outcomes desired from resident research. Three questions in section two elicited opinions about resident research, asking directors to state the three most important barriers to that research; the three most important reasons that residents engage in research; their level of agreement with the idea of mandatory research activities; and the ability of their programs to implement such activities. Four questions in section three addressed the expectations for and activity of faculty members with regard to research and the availability and suitability of faculty members as resident research mentors.
Five questions in section four collected demographic information on the training program, including total faculty number, faculty number by type (full-time or volunteer), and university affiliation. The survey instrument was piloted in six different residency programs, and the results of the pilot were included in the final results. The surveys were completed anonymously. Survey responses were divided into two mutually exclusive groups: those from university-based programs and those from non-university-based programs. University-based programs were defined as programs administered by a department of medicine at a medical school or as community programs integrated with university programs. Non-university-based programs were all other programs, including those at Veterans hospitals, community hospitals (university-affiliated or independent), military hospitals, health maintenance organizations, and large, multispecialty clinics. We chose to divide programs this way because 1) we believed that university-based programs were more likely to have resources pertinent to research and 2) we wanted our data to be comparable with that in previously published studies. Because we surveyed all ACGME-accredited internal medicine training programs, we made a finite population correction before calculating Student t-test results. Percentages and chi-square analyses were used to compare categorical data, and means ± SDs and Student t-tests were used to compare continuous data. Ranked categorical variables were weighted. First ranking was awarded 3 points, second ranking was awarded 2 points, and third ranking was awarded 1 point. Weighted mean scores were calculated by dividing the total score for an item by the number of responders. For each item, the differences in weighted means for university-based and non-university-based programs were compared using the Student t-test. For all analyses, the α level was set at 0.05. Results are given as means ± SD.

Results

Of the 415 surveys mailed, 271 were completed and returned, yielding a response rate of 65%. A telephone survey of 10% of randomly selected nonresponders was completed, and no significant differences between nonresponders and responders were noted. The data were similar for university-based and non-university-based training programs, but those statistically significant differences that did exist are noted. Otherwise, the data reflect the overall trends of all training programs combined. Between 253 and 271 responders answered any given item, and the data are presented as percentages of responders answering a question rather than as percentages of all responders. For most questions, responders were allowed to choose more than one answer; therefore, the cumulative response rate may exceed 100%. The distribution of training programs in the survey was as follows: 55% were at university-affiliated community hospitals; 35% were at university hospitals; 21% were at Veterans hospitals; 18% were at community hospitals integrated with a university; 6% were at community hospitals and were not
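
As a worked illustration of the ranking scheme described in the Methods above (the numbers here are hypothetical, not survey results): if a barrier is ranked first by n_1 responders, second by n_2, and third by n_3, its weighted mean score over N responders is

    \text{weighted mean score} = \frac{3 n_1 + 2 n_2 + n_3}{N}

For example, a barrier ranked first by 40, second by 30, and third by 20 of N = 100 responders scores (3·40 + 2·30 + 20)/100 = 2.0. The finite population correction mentioned above is the standard adjustment that rescales standard errors by \sqrt{(N_{pop} - n)/(N_{pop} - 1)}; it matters here because the 271 responses cover a large fraction of all 415 accredited programs.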


Journal of General Internal Medicine | 1998

Skin biopsy techniques for the internist.

Patrick C. Alguire; Barbara M. Mathes

OBJECTIVE: To review three commonly performed skin biopsy procedures: shave, punch, and excision.

DATA SOURCES: English-language articles identified through a MEDLINE search (1966–1997) using the MeSH headings skin and biopsy, major dermatology and primary care textbooks, and cross-references.

STUDY SELECTION: Articles that reviewed the indications, contraindications, choice of procedure, surgical technique, specimen handling, and wound care.

DATA EXTRACTION: Information was manually extracted from all selected articles and texts; emphasis was placed on information relevant to internal medicine physicians who want to learn skin biopsy techniques.

DATA SYNTHESIS: Shave biopsies require the least experience and time but are limited to superficial, nonpigmented lesions. Punch biopsies are simple to perform, have few complications, and, if small, can heal without suturing. Closing the wound with unbraided nylon on a C-17 needle will enhance the cosmetic result but requires more expertise and time. Elliptical excisions are ideal for removing large or deep lesions, provide abundant material for many studies, and can be curative for a number of conditions, but they require the greatest amount of time, expertise, and office resources. Elliptical excisions can be closed with unbraided nylon using a CE-3 or FS-3 needle in thick skin or a P-3 needle on the face. All specimens should be submitted in a labeled container with a brief clinical description and working diagnosis.

CONCLUSIONS: Skin biopsies are an essential technique in the management of skin diseases and can enhance the dermatologic care rendered by internists.


Journal of General Internal Medicine | 1991

Efficacy of a one-month training block in psychosocial medicine for residents : a controlled study

Robert C. Smith; Gerald G. Osborn; Ruth B. Hoppe; Judith S. Lyles; Lawrence F. Van Egeren; Rebecca C. Henry; Doug Sego; Patrick C. Alguire; Bertram E. Stoffelmayr

Study objective: To determine the efficacy of a comprehensive, one-month psychosocial training program for first-year medical residents.

Design: Nonrandomized, controlled study with immediate pre/post evaluation. Limited evaluation of some residents was also conducted an average of 15 months after teaching.

Setting: Community-based, primary care-oriented residency program at Michigan State University (MSU).

Subjects: All 28 interns from the single-track MSU residency program during 1986/87–88/89 participated in this required rotation; there was no dropout or instance of noncompliance with the study. In the follow-up study in 1989, all 13 available trainees participated. Of 20 untrained, volunteer controls, ten were second/third-year residents in the same program during 1986/87 and ten were interns from a similar MSU program in Kalamazoo, MI, during 1988/89.

Teaching intervention: An experiential, skill-oriented, and learner-centered rotation with competency-based objectives focused on communication and relationship-building skills and on the diagnosis and management of psychologically disturbed medical patients.

Measurements and main results: The two subsets of the control group were combined because the residents and training programs were similar and because the means and standard deviations for the subsets were similar on all measures. By two-way analyses of variance (group × gender), the trainee group showed significantly greater gains (p < 0.001) on questionnaires addressing knowledge, self-assessment, and attitudes; a mean of 15 months following training, there was no significant deterioration of attitude scores. All trainees were also able to identify previously unrecognized, potentially deleterious personal responses using a systematic rating procedure. Residents' acceptance of the program was high.

Conclusions: Intensive, comprehensive psychosocial training was well accepted by residents. It improved their knowledge, self-awareness, self-assessment, and attitudes, with the improvement in attitudes persisting well beyond training.


Journal of General Internal Medicine | 2010

Procedures Performed by Hospitalist and Non-hospitalist General Internists

Rajiv Thakkar; Scott M. Wright; Patrick C. Alguire; Robert S. Wigton; Romsai T. Boonyasai

BACKGROUND: In caring exclusively for inpatients, hospitalists are expected to perform hospital procedures. The type and frequency of the procedures they perform are not well characterized.

OBJECTIVES: To determine which procedures hospitalists perform; to compare procedures performed by hospitalists and non-hospitalists; and to describe factors associated with hospitalists performing inpatient procedures.

DESIGN: Cross-sectional survey.

PARTICIPANTS: National sample of general internist members of the American College of Physicians.

METHODS: We characterized respondents to a national survey of general internists as hospitalists or non-hospitalists based on time-activity criteria. We compared hospitalists and non-hospitalists on how many of the Society of Hospital Medicine (SHM) core procedures they performed. Analyses explored whether hospitalists' demographic characteristics, practice setting, and income structure influenced the performance of procedures.

RESULTS: Of 1,059 respondents, 175 were classified as "hospitalists". Eleven percent of hospitalists performed all 9 core procedures, compared with 3% of non-hospitalists. Hospitalists also reported higher procedural volumes in the previous year for 7 of the 9 procedures, including lumbar puncture (median of 5 by hospitalists vs. 2 by non-hospitalists), abdominal paracentesis (5 vs. 2), thoracentesis (5 vs. 2), and central line placement (5.5 vs. 3). Performing a greater variety of core procedures was associated with total time in patient care but not with time in hospital care, year of medical school graduation, practice location, or income structure. Multivariate analysis found no independent association between demographic factors and performing all 9 core procedures.

CONCLUSIONS: Hospitalists perform inpatient procedures more often and at higher volumes than non-hospitalists. Yet many do not perform procedures that are designated as hospitalist "core competencies."


Journal of General Internal Medicine | 2004

Outcomes of a National Faculty Development Program in Teaching Skills: Prospective Follow-up of 110 Internal Medicine Faculty Development Teams

Thomas K. Houston; Jeanne M. Clark; Rachel B. Levine; Gary S. Ferenchick; Judith L. Bowen; William T. Branch; Dennis W. Boulware; Patrick C. Alguire; Richard H. Esham; Charles P. Clayton; David E. Kern

BACKGROUND: Awareness of the need for ambulatory care teaching skills training for clinician-educators is increasing. A recent Health Resources and Services Administration (HRSA)-funded national initiative trained 110 teams from U.S. teaching hospitals to implement local faculty development (FD) in teaching skills.

OBJECTIVE: To assess the rate of successful implementation of local FD initiatives by these teams.

METHODS: A prospective observational study followed the 110 teams for up to 24 months. Self-reported implementation, our outcome, was defined as the time from the training conference until the team reported that implementation of their FD project was completely accomplished. Factors associated with success were assessed using Kaplan-Meier analysis.

RESULTS: The median follow-up was 18 months. Fifty-nine of the teams (54%) implemented their local FD project and subsequently trained over 1,400 faculty, of whom over 500 were community based. Teams that implemented their FD projects were more likely than those that did not to have the following attributes: met more frequently (P=.001), had less turnover (P=.01), had protected time (P=.01), rated their likelihood of success high (P=.03), had some project or institutional funding for FD (P=.03), and came from institutions with more than 75 department of medicine faculty (P=.03). The cost to the HRSA was $22,033 per successful team.
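
For context on the Kaplan-Meier analysis named in the Methods (this is the standard estimator, not detail reported by the study): with teams followed for up to 24 months, teams that had not yet implemented when observation ended are censored rather than counted as failures. The estimated probability that a team has still not implemented by time t is

    \hat{S}(t) = \prod_{t_i \le t} \left( 1 - \frac{d_i}{n_i} \right)

where the t_i are the observed implementation times, d_i is the number of teams implementing at t_i, and n_i is the number still under follow-up just before t_i. Censoring is what lets a cohort with a median follow-up of 18 months be analyzed without discarding teams whose outcome was still pending.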


The American Journal of the Medical Sciences | 1991

Pseudoaneurysm of the Abdominal Aorta: A Case Report and Review of the Literature

Richard G. Potts; Patrick C. Alguire


Collaboration

Top Co-Authors

Cynthia D. Smith (American College of Physicians)
Thomas K. Houston (University of Massachusetts Medical School)
David E. Kern (Johns Hopkins University)