Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Malathi Srinivasan is active.

Publication


Featured research published by Malathi Srinivasan.


Academic Medicine | 2007

Comparing problem-based learning with case-based learning: Effects of a major curricular shift at two institutions

Malathi Srinivasan; Michael S. Wilkes; Frazier T. Stevenson; Thuan Nguyen; Stuart J. Slavin

Purpose Problem-based learning (PBL) is now used at many medical schools to promote lifelong learning, open inquiry, teamwork, and critical thinking. PBL has not been compared with other forms of discussion-based small-group learning. Case-based learning (CBL) uses a guided inquiry method and provides more structure during small-group sessions. In this study, we compared faculty and medical students’ perceptions of traditional PBL with CBL after a curricular shift at two institutions. Method Over a three-year period, the medical schools at the University of California, Los Angeles (UCLA) and the University of California, Davis (UCD) changed first-, second-, and third-year Doctoring courses from PBL to CBL formats. Ten months after the shift (2001 at UCLA and 2004 at UCD), students and faculty who had participated in both curricula completed a 24-item questionnaire about their PBL and CBL perceptions and the perceived advantages of each format. Results A total of 286 students (86%–97%) and 31 faculty (92%–100%) completed questionnaires. CBL was preferred by students (255; 89%) and faculty (26; 84%) across schools and learner levels. The few students preferring PBL (11%) felt it encouraged self-directed learning (26%) and valued its greater opportunities for participation (32%). From logistic regression, students preferred CBL because of fewer unfocused tangents (59%, odds ratio [OR] 4.10, P = .01), less busy-work (80%, OR 3.97, P = .01), and more opportunities for clinical skills application (52%, OR 25.6, P = .002). Conclusions Learners and faculty at two major academic medical centers overwhelmingly preferred CBL (guided inquiry) over PBL (open inquiry). Given the dense medical curriculum and need for efficient use of student and faculty time, CBL offers an alternative model to traditional PBL small-group teaching. This study could not assess which method produces better practicing physicians.


Journal of General Internal Medicine | 2002

Medicare Financing of Graduate Medical Education: Intractable Problems, Elusive Solutions

Eugene C. Rich; Mark Liebow; Malathi Srinivasan; David C. Parish; James O. Wolliscroft; Oliver Fein; Robert Blaser

The past decade has seen ongoing debate regarding federal support of graduate medical education, with numerous proposals for reform. Several critical problems with the current mechanism are evident on reviewing graduate medical education (GME) funding issues from the perspectives of key stakeholders. These problems include the following: substantial interinstitutional and interspecialty variations in per-resident payment amounts; teaching costs that have not been recalibrated since 1983; no consistent control by physician educators over direct medical education (DME) funds; and institutional DME payments unrelated to actual expenditures for resident education or to program outcomes. None of the current GME reform proposals adequately address all of these issues. Accordingly, we recommend several fundamental changes in Medicare GME support. We propose a re-analysis of the true direct costs of resident training (with appropriate adjustment for local market factors) to rectify the myriad problems with per-resident payments. We propose that Medicare DME funds go to the physician organization providing resident instruction, keeping DME payments separate from the operating revenues of teaching hospitals. To ensure financial accountability, we propose that institutions must maintain budgets and report expenditures for each GME program. To establish educational accountability, Residency Review Committees should establish objective, annually measurable standards for GME program performance; programs that consistently fail to meet these minimum standards should lose discretion over GME funds. These reforms will solve several long-standing, vexing problems in Medicare GME funding, but will also uncover the extent of undersupport of GME by most other health care payers. Ultimately, successful reform of GME financing will require “all-payer” support.


Journal of General Internal Medicine | 2002

Early introduction of an evidence-based medicine course to preclinical medical students

Malathi Srinivasan; Michael W. Weiner; Philip P. Breitfeld; Fran Brahmi; Keith L. Dickerson; Gary Weiner

Evidence-based medicine (EBM) has been increasingly integrated into medical education curricula. Using an observational research design, we evaluated the feasibility of introducing a 1-month problem-based EBM course for 139 first-year medical students at a large university center. We assessed program performance through the use of a web-based curricular component and practice exam, final examination scores, student satisfaction surveys, and a faculty questionnaire. Students demonstrated active involvement in learning EBM and the ability to use EBM principles. Facilitators felt that students performed well and compared favorably with residents whom they had supervised in the past year. Both faculty and students were satisfied with the EBM course. To our knowledge, this is the first report to demonstrate that early introduction of EBM principles as a short course to preclinical medical students is feasible and practical.


Annals of Family Medicine | 2007

Ratings of Physician Communication by Real and Standardized Patients

Kevin Fiscella; Peter Franks; Malathi Srinivasan; Richard L. Kravitz; Ronald M. Epstein

PURPOSE Patient ratings of physicians’ patient-centered communication are used by various specialty credentialing organizations and managed care organizations as a measure of physician communication skills. We wanted to compare ratings of physician communication by real patients with ratings by standardized patients. METHODS We assessed physician communication using a modified version of the Health Care Climate Questionnaire (HCCQ) among a sample of 100 community physicians. The HCCQ measures physician autonomy support, a key dimension in patient-centered communication. For each physician, the questionnaire was completed by roughly 49 real patients and 2 unannounced standardized patients. Standardized patients portrayed 2 roles: gastroesophageal reflux disease symptoms and poorly characterized chest pain with multiple unexplained symptoms. We compared the distribution, reliability, and physician rank derived from using real and standardized patients after adjusting for patient, physician, and standardized patient effects. RESULTS There were real and standardized patient ratings for 96 of the 100 physicians. Compared with standardized patient scores, real-patient–derived HCCQ scores were higher (mean 22.0 vs 17.2), standard deviations were lower (3.1 vs 4.9), and ranges were similar (both 5–25). Calculated real patient reliability, given 49 ratings per physician, was 0.78 (95% confidence interval [CI], 0.71–0.84) compared with the standardized patient reliability of 0.57 (95% CI, 0.39–0.73), given 2 ratings per physician. The Spearman rank correlation between mean real patient and standardized patient scores was positive but small to moderate in magnitude, 0.28. CONCLUSION Real patient and standardized patient ratings of physician communication style differ substantially and appear to provide different information about physicians’ communication style.


Academic Medicine | 2005

Factors affecting resident performance: Development of a theoretical model and a focused literature review

Maya Mitchell; Malathi Srinivasan; Daniel C. West; Peter Franks; Craig R. Keenan; Mark C. Henderson; Michael S. Wilkes

Purpose The clinical performances of physicians have come under scrutiny as greater public attention is paid to the quality of health care. However, determinants of physician performance have not been well elucidated. The authors sought to develop a theoretical model of physician performance, and explored the literature about factors affecting resident performance. Method Using an expert consensus panel, in 2002–03 the authors developed a hypothesis-generating model of resident performance. The developed model had three input factors (individual resident factors, health care infrastructure, and medical education infrastructure), intermediate process measures (knowledge, skills, attitudes, habits), and final health outcomes (affecting patient, community, and population). The authors used factors from the model to focus a PubMed search (1967–2002) for all original articles related to the factors of individual resident performance. Results The authors found 52 original studies that examined factors of an individual resident’s performance. They describe each study’s measurement instrument, study design, major findings, and limitations. Studies were categorized into five domains: learning styles/personality, social/financial factors, practice preferences, personal health, and response to job environment. Few studies examined intermediate or final performance outcomes. Most were single-institution, cross-sectional, survey-based studies. Conclusions Attempting to understand resident performance without understanding the factors that influence performance is analogous to examining patient adherence to medication regimens without understanding the individual patient and his or her environment. Based on a systematic review of the literature, the authors found few discrete associations between individual resident factors and the resident’s actual job performance. Additionally, they identify and discuss major gaps in the educational literature.


PLOS ONE | 2011

The Validity of Peer Review in a General Medicine Journal

Jeffrey L. Jackson; Malathi Srinivasan; Joanna Rea; Kathlyn E. Fletcher; Richard L. Kravitz

All the opinions in this article are those of the authors and should not be construed to reflect, in any way, those of the Department of Veterans Affairs. Background Our study purpose was to assess the predictive validity of reviewer quality ratings and editorial decisions in a general medicine journal. Methods Submissions to the Journal of General Internal Medicine (JGIM) between July 2004 and June 2005 were included. We abstracted JGIM peer review quality ratings, verified the publication status of all articles, and calculated an impact factor for published articles (Rw) by dividing the 3-year citation rate by the average for this group of papers; an Rw>1 indicates a greater than average impact. Results Of 507 submissions, 128 (25%) were published in JGIM, 331 were rejected (128 with review), and 48 were either not resubmitted after revision was requested or were withdrawn by the author. Of the 331 rejections, 243 were published elsewhere. Articles published in JGIM had a higher citation rate than those published elsewhere (Rw: 1.6 vs. 1.1, p = 0.002). Reviewer ratings of article quality had good internal consistency, and reviewer recommendations markedly influenced publication decisions. There was no quality rating cutpoint that accurately distinguished high from low impact articles. There was a stepwise increase in Rw for articles rejected without review, rejected after review, or accepted by JGIM (Rw 0.60 vs. 0.87 vs. 1.56, p<0.0005). However, there was low agreement between reviewers on quality ratings and publication recommendations. The editorial publication decision accurately discriminated high and low impact articles in 68% of submissions. We found evidence of better accuracy with a greater number of reviewers. Conclusions The peer review process largely succeeds in selecting high impact articles and dispatching lower impact ones, but the process is far from perfect. While the inter-rater reliability between individual reviewers is low, the accuracy of sorting is improved with a greater number of reviewers.
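The relative impact measure (Rw) defined in the abstract above is simple arithmetic: each article's 3-year citation count divided by the mean citation count of the whole group. A minimal sketch, with an illustrative function name and made-up citation counts (not data from the study):

```python
def relative_impact(citations_3yr):
    """Rw for each article: its 3-year citation count divided by the
    group's mean 3-year citation count. Rw > 1 means above-average
    impact within this group of papers."""
    mean_citations = sum(citations_3yr) / len(citations_3yr)
    return [c / mean_citations for c in citations_3yr]

# Illustrative counts only: mean is 7.5, so Rw is approximately
# [1.6, 0.4, 1.2, 0.8]
rw = relative_impact([12, 3, 9, 6])
```

Because Rw is normalized within the submission cohort, it compares articles against their peers rather than against a journal-wide baseline.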


Academic Medicine | 2008

Measuring knowledge structure: Reliability of concept mapping assessment in medical education

Malathi Srinivasan; Matthew McElvany; Jane M. Shay; Richard J. Shavelson; Daniel C. West

Purpose To test the reliability of concept map assessment, which can be used to assess an individual’s “knowledge structure,” in a medical education setting. Method In 2004, 52 senior residents (pediatrics and internal medicine) and fourth-year medical students at the University of California–Davis School of Medicine created separate concept maps about two different subject domains (asthma and diabetes) on two separate occasions each (four total maps). Maps were rated using four different scoring systems: structural (S; counting propositions), quality (Q; rating the quality of propositions), importance/quality (I/Q; rating importance and quality of propositions), and a hybrid system (H; combining elements of S with I/Q). The authors used generalizability theory to determine reliability. Results Learners (universe score) contributed 40% to 44% to total score variation for the Q, I/Q, and H scoring systems, but only 10% for the S scoring system. There was a large learner–occasion–domain interaction effect (19%–23%). Subsequent analysis of each subject domain separately demonstrated a large learner–occasion interaction effect (31%–37%) and determined that administration on four to five occasions was necessary to achieve adequate reliability. Rater variation was uniformly low. Conclusions The Q, I/Q, and H scoring systems demonstrated similar reliability and were all more reliable than the S system. The findings suggest that training and practice are required to perform the assessment task, and, as administered in this study, four to five testing occasions are required to achieve adequate reliability. Further research should focus on whether alterations in the concept mapping task could allow it to be administered over fewer occasions while maintaining adequate reliability.


Medical Care | 2006

Connoisseurs of care? Unannounced standardized patients' ratings of physicians

Malathi Srinivasan; Peter Franks; Lisa S. Meredith; Kevin Fiscella; Ronald M. Epstein; Richard L. Kravitz

Background: Patient satisfaction surveys can be informative, but bias and poor response rates may limit their utility as stable measures of physician performance. Using unannounced standardized patients (SPs) may overcome some of these limitations because their experience and training make them able judges of physician behavior. Objectives: We sought to understand the reliability of unannounced SPs in rating primary care physicians when covertly presenting as real patients. Study Design: Data from 2 studies (Patient Centered Communication [PCC]; Social Influences in Practice [SIP]) were included. For the PCC study, 5 SPs made 192 visits to 96 physicians; for the SIP study, 18 SPs made 292 visits to 146 physicians. SPs’ visits to physicians were randomized, thus avoiding mutual selection bias. Each SP rated 16 to 38 physicians on interpersonal skills (autonomy support: PCC, SIP), technical skills (information gathering: SIP only), and overall satisfaction (SIP only). We evaluated SP evaluation consistency (physician vs. total variance ρ) and SPs’ overall satisfaction with specific dimensions of physician performance. Results: Scale reliability varied from 0.71 to 0.92. Physician rhos (95% confidence intervals) for autonomy support were 0.40 (0.22–0.58; PCC) and 0.30 (0.14–0.45; SIP); the information gathering rho was 0.46 (0.33–0.59; SIP). The overall SP satisfaction rho was 0.47 (0.34–0.60; SIP). SPs varied significantly in adjusted overall satisfaction levels, but not in other dimensions. Conclusions: These analyses provide some evidence that medical connoisseurship can be learned. When adequately sampled by trained SPs, some physician skills can be reliably measured in community practice settings.


Annals of Family Medicine | 2013

Physician Communication Regarding Prostate Cancer Screening: Analysis of Unannounced Standardized Patient Visits

Bo Feng; Malathi Srinivasan; Jerome R. Hoffman; Julie A. Rainwater; Erin Griffin; Marko Dragojevic; Frank C. Day; Michael S. Wilkes

PURPOSE Prostate cancer screening with prostate-specific antigen (PSA) is a controversial issue. The present study aimed to explore physician behaviors during an unannounced standardized patient encounter that was part of a randomized controlled trial to educate physicians using an interactive, Web-based prostate cancer screening module. METHODS Participants included 118 internal medicine and family medicine physicians from 5 health systems in California, in 2007–2008. Control physicians received usual education about prostate cancer screening (brochures from the Centers for Disease Control and Prevention). Intervention physicians participated in the prostate cancer screening module. Within 3 months, all physicians saw unannounced standardized patients who prompted prostate cancer screening discussions in clinic. The encounter was audio-recorded, and the recordings were transcribed. The authors analyzed physician behaviors around screening: (1) engagement after prompting, (2) degree of shared decision making, and (3) final recommendations for prostate cancer screening. RESULTS After prompting, 90% of physicians discussed prostate cancer screening. In comparison with control physicians, intervention physicians showed somewhat more shared decision making behaviors (intervention 14 items vs control 11 items, P < .05), were more likely to mention no screening as an option (intervention 63% vs control 26%, P < .05), to encourage patients to consider different screening options (intervention 62% vs control 39%, P < .05), and to seek input from others (intervention 25% vs control 7%, P < .05). CONCLUSIONS A brief Web-based interactive educational intervention can improve shared decision making and neutrality in recommendations, and reduce PSA test ordering. Engaging patients in discussion of the uses and limitations of tests with uncertain value can decrease utilization of those tests.


Annals of Family Medicine | 2013

Pairing Physician Education With Patient Activation to Improve Shared Decisions in Prostate Cancer Screening: A Cluster Randomized Controlled Trial

Michael S. Wilkes; Frank C. Day; Malathi Srinivasan; Erin Griffin; Daniel J. Tancredi; Julie A. Rainwater; Richard L. Kravitz; Douglas S. Bell; Jerome R. Hoffman

BACKGROUND Most expert groups recommend shared decision making for prostate cancer screening. Most primary care physicians, however, routinely order a prostate-specific antigen (PSA) test with little or no discussion about whether they believe the potential benefits justify the risk of harm. We sought to assess whether educating primary care physicians and activating their patients to ask about prostate cancer screening had a synergistic effect on shared decision making, rates and types of discussions about prostate cancer screening, and the physician’s final recommendations. METHODS Our study was a cluster randomized controlled trial among primary care physicians and their patients, comparing usual education (control) with physician education alone (MD-Ed) and with physician education plus patient activation (MD-Ed+A). Participants included 120 physicians in 5 group practices and 712 male patients aged 50 to 75 years. The interventions comprised a Web-based educational program for all intervention physicians and MD-Ed+A patients, compared with usual education (brochures from the Centers for Disease Control and Prevention). The primary outcome measure was patients’ reported postvisit shared decision making regarding prostate cancer screening; secondary measures included unannounced standardized patients’ reported shared decision making and the physician’s recommendation for prostate cancer screening. RESULTS Patients’ ratings of shared decision making were moderate and did not differ between groups. MD-Ed+A patients reported higher rates of prostate cancer screening discussion (MD-Ed+A = 65%, MD-Ed = 41%, control = 38%; P < .01). Standardized patients reported that physicians seeing MD-Ed+A patients were more neutral during prostate cancer screening recommendations (MD-Ed+A = 50%, MD-Ed = 33%, control = 15%; P < .05). Of the male patients, 80% had had previous PSA tests. CONCLUSIONS Although activating physicians and patients did not lead to significant changes in all aspects of physician attitudes and behaviors that we studied, interventions that involved physicians did have a large effect on their attitudes toward screening and on the discussions they had with patients, including their being more likely than control physicians to engage in prostate cancer screening discussions and to be neutral in their final recommendations.

Collaboration


Dive into Malathi Srinivasan's collaboration.

Top Co-Authors

Erin Griffin
University of California

Frank C. Day
University of California

Su Ting T Li
University of California

Daniel C. West
University of California