Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Brendan M. Reilly is active.

Publication


Featured research published by Brendan M. Reilly.


The Lancet | 2003

Physical examination in the care of medical inpatients: an observational study

Brendan M. Reilly

BACKGROUND: Little is known about the clinical importance of skilled physical examination in the care of patients in hospital.
METHODS: Hospital records of a systematic consecutive sample of patients admitted to a general medical inpatient service were reviewed retrospectively to determine whether physical findings by the attending physician led to important changes in clinical management. Patients with pivotal physical findings were defined by an outcomes adjudication panel as those whose diagnosis and treatment in hospital changed substantially as a result of the attending physician's physical examination. Pivotal findings were classed as validated if the resulting treatment change involved the active collaboration of a consulting specialist. Findings were classed as discoverable if subsequent diagnostic testing (other than physical examination) would probably have led to the correct diagnosis. Class 1 findings were those deemed validated but not discoverable.
FINDINGS: Among 100 patients, 26 had pivotal physical findings (26%; 95% CI 18-36). 15 of these (58%; 95% CI 37-77) were validated (13 with urgent surgical or other invasive procedures) and 14 were discoverable (54%; 95% CI 33-73). Seven patients had class 1 findings (7%; 95% CI 3-14).
INTERPRETATION: Physical examination can have a substantial effect on the care of medical inpatients. If replicated in other settings, these findings might have important implications for medical educators and quality improvement initiatives.
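The proportions and 95% confidence intervals quoted above can be checked with an exact (Clopper-Pearson) binomial interval; the short Python sketch below makes that assumption for illustration and is not a statement of the method the authors actually used.

from scipy.stats import beta

def exact_binomial_ci(successes: int, n: int, alpha: float = 0.05):
    """Clopper-Pearson (exact) confidence interval for a binomial proportion."""
    lower = beta.ppf(alpha / 2, successes, n - successes + 1) if successes > 0 else 0.0
    upper = beta.ppf(1 - alpha / 2, successes + 1, n - successes) if successes < n else 1.0
    return lower, upper

# 26 of 100 patients with pivotal findings -> roughly (0.18, 0.36),
# matching the reported 26% (95% CI 18-36).
print(exact_binomial_ci(26, 100))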


Journal of General Internal Medicine | 2000

Teaching Residents Evidence-based Medicine Skills: A Controlled Trial of Effectiveness and Assessment of Durability

Christopher A. Smith; Pamela Ganschow; Brendan M. Reilly; Arthur T. Evans; Robert McNutt; Albert Osei; Muhammad Saquib; Satish Surabhi; Sunil Yadav

OBJECTIVES: To measure the effectiveness of an educational intervention designed to teach residents four essential evidence-based medicine (EBM) skills: question formulation, literature searching, understanding quantitative outcomes, and critical appraisal.
DESIGN: Firm-based, controlled trial.
SETTING: Urban public hospital.
PARTICIPANTS: Fifty-five first-year internal medicine residents: 18 in the experimental group and 37 in the control group.
INTERVENTION: An EBM course, taught 2 hours per week for 7 consecutive weeks by senior faculty and chief residents, focusing on the four essential EBM skills.
MEASUREMENTS AND MAIN RESULTS: The main outcome measure was performance on an EBM skills test that was administered four times over 11 months: at baseline and at three time points postcourse. Postcourse test 1 assessed the effectiveness of the intervention in the experimental group (primary outcome); postcourse test 2 assessed the control group after it crossed over to receive the intervention; and postcourse test 3 assessed durability. Baseline EBM skills were similar in the two groups. After receiving the EBM course, the experimental group achieved significantly higher postcourse test scores (adjusted mean difference, 21%; 95% confidence interval, 13% to 28%; P<.001). Postcourse improvements were noted in three of the four EBM skill domains (formulating questions, searching, and quantitative understanding [P<.005 for all]), but not in critical appraisal skills (P=.4). After crossing over to receive the educational intervention, the control group achieved similar improvements. Both groups sustained these improvements over 6 to 9 months of follow-up.
CONCLUSIONS: A brief structured educational intervention produced substantial and durable improvements in residents' cognitive and technical EBM skills.


The Lancet | 2007

Inconvenient truths about effective clinical teaching

Brendan M. Reilly

I've been teaching clinical medicine for more than 30 years but it seems to be getting harder, not easier. Conventional wisdom in the USA holds that the problem is time and money (or, more precisely: time is money). Hospitalised patients, discharged before doctors can get to know them, are sicker and quicker today. Outpatient teaching is no less awkward, 10-minute office visits and outdated Medicare reimbursement rules gumming up the works. Long overdue restrictions on resident work hours won't solve these problems.

Too little time and money for clinical teaching betokens lack of respect too. Most academic centres in the USA don't provide adequate support for clinician-educators' salaries despite substantial government subsidies for postgraduate education. This shortfall is not an oversight; it is a calculated budgetary decision. Insult compounds injury when physician-researchers openly disparage the academic gravitas of physician-educators on the same faculty.

This situation raises the obvious question: is clinical teaching today not only more difficult but also less effective? One might assume that our research-proud profession would know the answer. In fact, despite shocking indictments of the quality, safety, and equity of US medical care, we know little about the effect of clinical teaching on learners or patients, nor even how to measure it. Worse, we don't seem very concerned about this situation. In 2006, four major medical journals (BMJ, JAMA, Lancet, and New England Journal of Medicine) and four medical education journals (Academic Medicine, BMC Medical Education, Medical Education, and Medical Teacher) published a total of one original outcomes study of this kind (which found no correlation between measures of teaching effectiveness and patients' clinical outcomes).

Lacking evidence, I do what clinicians do when we don't have the data we need: I go with my gut instinct. My gut tells me that clinical teaching today—my own and others'—is less effective than it used to be and needs to be. Among those who will disagree are many academic leaders and quality gurus who don't even acknowledge the question. They maintain plausible deniability by looking elsewhere: we need better systems, they say, not better doctors. No doubt they are right about the systems.

I propose that the decline of clinical teaching in our training programmes is, like global warming, an inconvenient truth. Even if we saw evidence as eerily convincing as Al Gore's pictures of melting polar ice-caps, many in academic medicine would look the other way. Rather than take remedial action, we will be tempted to do the greenhouse-gas-shuffle: blame it on random variation or transient aberration (anything but ourselves) and hope the hurricanes and heat waves just go away. Doubly inconvenient would be to learn that fixes from the past might not work in the present. For example, due to digital information systems, clinical trainees inevitably review patients' laboratory data and diagnostic images before they do a history or physical examination. This change portends more than the devaluation of bedside skills; it is nothing less than complete inversion of the conventional diagnostic process.

The good news is that innovation in medical education eventually catches up with advances in science and technology. The bad news is that the pace of change is glacial. Worse, we know so little about medicine's informal curriculum (clinical training) that it's hard to know where to start. In this spirit, I describe eight habits of exemplary clinical teachers I have known and try to emulate still.


Journal of General Internal Medicine | 2004

Evaluating the Performance of Inpatient Attending Physicians: A New Instrument for Today's Teaching Hospitals

Christopher A. Smith; Anita Varkey; Arthur T. Evans; Brendan M. Reilly

OBJECTIVE: Instruments available to evaluate attending physicians fail to address their diverse roles and responsibilities in current inpatient practice. We developed a new instrument to evaluate attending physicians on medical inpatient services and tested its reliability and validity.
DESIGN: Analysis of 731 evaluations of 99 attending physicians over a 1-year period.
SETTING: Internal medicine residency program at a university-affiliated public teaching hospital.
PARTICIPANTS: All medical residents (N=145) and internal medicine attending physicians (N=99) on inpatient ward rotations for the study period.
MEASUREMENTS: A 32-item questionnaire assessed attending physician performance in 9 domains: evidence-based medicine, bedside teaching, clinical reasoning, patient-based teaching, teaching sessions, patient care, rounding, professionalism, and feedback. A summary score was calculated by averaging scores on all items.
RESULTS: Eighty-five percent of eligible evaluations were completed and analyzed. Internal consistency among items in the summary score was 0.95 (Cronbach's α). Interrater reliability, using an average of 8 evaluations, was 0.87. The instrument discriminated among attending physicians with statistically significant differences on mean summary score and all 9 domain-specific mean scores (all comparisons, P<.001). The summary score predicted winners of faculty teaching awards (odds ratio [OR], 17; 95% confidence interval [CI], 8 to 36) and was strongly correlated with residents' desire to work with the attending again (r=.79; 95% CI, 0.74 to 0.83). The single item that best predicted the summary score was how frequently the physician made explicit his or her clinical reasoning in making medical decisions (r²=.90).
CONCLUSION: The new instrument provides a reliable and valid method to evaluate the performance of inpatient teaching attending physicians.
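To illustrate the measurement approach described above (per-item ratings averaged into a summary score, with internal consistency reported as Cronbach's α), here is a minimal Python sketch; the score matrix is hypothetical and much smaller than the real 32-item instrument, and is not data from the study.

import numpy as np

def summary_score(scores: np.ndarray) -> np.ndarray:
    """Average each evaluation's item ratings into one summary score.
    scores: (n_evaluations, n_items) matrix of item ratings."""
    return scores.mean(axis=1)

def cronbach_alpha(scores: np.ndarray) -> float:
    """Internal consistency of the items (Cronbach's alpha)."""
    k = scores.shape[1]
    item_variances = scores.var(axis=0, ddof=1)
    total_variance = scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical ratings: 5 evaluations x 4 items on a 1-5 scale.
ratings = np.array([
    [5, 4, 5, 4],
    [3, 3, 4, 3],
    [4, 4, 4, 5],
    [2, 3, 2, 3],
    [5, 5, 4, 5],
])
print(summary_score(ratings))
print(cronbach_alpha(ratings))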


The New England Journal of Medicine | 2014

Don't Learn on Me — Are Teaching Hospitals Patient-Centered?

Brendan M. Reilly

When a patient at a U.S. teaching hospital says she wants no trainees involved in her care, do we tell her to go elsewhere? Transfer her to the nonteaching service? Or somehow convince her that she will receive better care on the teaching service?


The New England Journal of Medicine | 2009

A Question Well Put

Brendan M. Reilly; Peter D. Hart; Susana Mascarell; Hemant Chatrath

A 51-year-old woman with a history of hypertension and depression reported progressively worsening pain in the left thigh over a period of several months, which had made her unable to walk for the past week. She also described generalized weakness and pains in her lower back, arms, and chest. She reported no weight loss, anorexia, trauma, or fever.


Journal of General Internal Medicine | 1996

New method to predict patients’ intravenous heparin dose requirements

Brendan M. Reilly; Robert Raschke

OBJECTIVE: To predict intravenous heparin dose requirements of patients treated for thromboembolic disorders.
DESIGN: A retrospective cohort study in which we used simple linear regression to predict patients' effective maintenance dose (EMD) of heparin (units/kg/hour needed to achieve and maintain the APTT therapeutic range) from patients' "heparin responsiveness" (the APTT increase after the initial 6 hours of heparin treatment per units/kg/hour received).
SETTING/PATIENTS: The model was derived from 46 patients treated at one hospital (Hospital A) and then tested in 42 patients treated at another hospital (Hospital B).
MEASUREMENTS AND MAIN RESULTS: Among Hospital A patients, there was a strong linear correlation (r=−.880; p<.001) between EMD (mean 16.02 units/kg/hour; 95% CI 14.9, 17.15) and "heparin responsiveness" (HR): EMD = 25.651 − (95.118 × HR). This model accurately predicted Hospital B patients' EMD: 97% (37/38) fell within the model's 95% prediction interval; the mean absolute difference between predicted and actual EMD was 1.73 units/kg/hour (95% CI 1.39, 2.08); and only 16% of patients had EMDs more than 3 units/kg/hour different from that predicted by the regression model. The model's accuracy was comparable to that of our gold standard, the weight-based heparin dosing nomogram.
CONCLUSION: The infusion dose of intravenous heparin effective for an individual patient can be predicted accurately from the patient's body weight and APTT response to the initial 6 hours of treatment. Especially in hospitals where validated heparin dosing nomograms are not used, clinicians may find this simple technique useful in achieving timely therapeutic anticoagulation.
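Because the abstract gives the fitted regression explicitly, the dose prediction can be sketched in a few lines of Python; the worked value below is illustrative only and assumes the published coefficients apply unchanged.

def predict_emd(heparin_responsiveness: float) -> float:
    """Predicted effective maintenance dose (units/kg/hour) from the reported
    regression EMD = 25.651 - (95.118 x HR), where HR is the APTT increase
    after the first 6 hours of heparin per unit/kg/hour received."""
    return 25.651 - 95.118 * heparin_responsiveness

# Example (illustrative): a heparin responsiveness of 0.10 gives a predicted
# EMD of about 16.1 units/kg/hour, close to the cohort mean of 16.02 above.
print(predict_emd(0.10))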


Annals of Internal Medicine | 1993

The Weight-Based Heparin Dosing Nomogram Compared with a “Standard Care” Nomogram: A Randomized Controlled Trial

Robert Raschke; Brendan M. Reilly; James R. Guidry; Joseph R. Fontana; Sandhya Srinivas


Annals of Internal Medicine | 2006

Translating Clinical Research into Clinical Practice: Impact of Using Prediction Rules To Make Decisions

Brendan M. Reilly; Arthur T. Evans


JAMA | 2002

Impact of a Clinical Decision Rule on Hospital Triage of Patients With Suspected Acute Cardiac Ischemia in the Emergency Department

Brendan M. Reilly; Arthur T. Evans; Jeffrey Schaider; Krishna Das; James E. Calvin; Lea Anne Moran; Rebecca R. Roberts; Enrique Martinez

Collaboration


Dive into Brendan M. Reilly's collaborations.

Top Co-Authors

Robert Raschke

Good Samaritan Medical Center

Krishna Das

Wayne State University
