Publication


Featured research published by Kulamakan Kulasegaram.


BMC Medical Education | 2016

How to set the bar in competency-based medical education: standard setting after an Objective Structured Clinical Examination (OSCE)

Tim Dwyer; Sarah Wright; Kulamakan Kulasegaram; John Theodoropoulos; Jaskarndip Chahal; David Wasserstein; Charlotte Ringsted; Brian Hodges; Darrell Ogilvie-Harris

Background: The goal of the Objective Structured Clinical Examination (OSCE) in competency-based medical education (CBME) is to establish a minimal level of competence. The purpose of this study was 1) to determine the credibility and acceptability of the modified Angoff method of standard setting in the setting of CBME, using the Borderline Group (BG) method and the Borderline Regression (BLR) method as reference standards; 2) to determine whether it is feasible to set different standards for junior and senior residents; and 3) to determine the desired characteristics of the judges applying the modified Angoff method. Methods: The results of a previous OSCE study (21 junior residents, 18 senior residents, and six fellows) were used. Three groups of judges performed the modified Angoff method for both junior and senior residents: 1) sports medicine surgeons, 2) non-sports medicine orthopedic surgeons, and 3) sports fellows. Judges defined a borderline resident as one performing at a level between competent and novice at each station. For each checklist item, the judges answered yes or no to the question “will the borderline/advanced beginner examinee respond correctly to this item?” The pass mark was calculated by averaging the scores and was compared to pass marks created using both the BG and the BLR methods. Results: A paired t-test showed that all examiner groups expected senior residents to answer a significantly higher percentage of checklist items correctly than junior residents (all stations p < 0.001). There were no significant differences due to judge type. For senior residents, there were no significant differences between the cut scores determined by the modified Angoff method and those determined by the BG/BLR methods. For junior residents, the cut scores determined by the modified Angoff method were lower than those determined by the BG/BLR methods (all p < 0.01). Conclusion: The results of this study show that the modified Angoff method is an acceptable way to set different pass marks for senior and junior residents. Its use enables both senior and junior residents to sit the same OSCE, preferably within the regular assessment environment of CBME.
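
The cut-score arithmetic described above reduces to averaging the judges' item-level yes/no predictions for a borderline examinee. The sketch below is a minimal illustration of that calculation in Python; the judgment matrix and all values are hypothetical, not data from the study.

```python
import numpy as np

# Hypothetical judgments: each row is one judge, each column one OSCE
# checklist item. A 1 means the judge expects a borderline examinee to
# answer the item correctly; a 0 means they do not.
judgments = np.array([
    [1, 0, 1, 1, 0, 1],
    [1, 1, 1, 0, 0, 1],
    [0, 1, 1, 1, 0, 1],
])

# Modified Angoff cut score: average the expected-correct judgments
# across judges and items, expressed as a percentage of the checklist.
item_means = judgments.mean(axis=0)   # per-item proportion of judges saying "yes"
pass_mark = item_means.mean() * 100   # overall cut score (% of checklist items)

print(f"Modified Angoff cut score: {pass_mark:.1f}% of checklist items")
```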


Academic Medicine | 2013

Cognition before curriculum: rethinking the integration of basic science and clinical learning.

Kulamakan Kulasegaram; Maria Athina Martimianakis; Maria Mylopoulos; Cynthia Whitehead; Nicole N. Woods

Purpose: Integrating basic science and clinical concepts in the undergraduate medical curriculum is an important challenge for medical education. The health professions education literature includes a variety of educational strategies for integrating basic science and clinical concepts at multiple levels of the curriculum. To date, assessment of this literature has been limited. Method: In this critical narrative review, the authors analyzed literature published in the last 30 years (1982–2012) using a previously published integration framework. They included studies that documented approaches to integration at the level of programs, courses, or teaching sessions and that aimed to improve learning outcomes. The authors evaluated these studies for evidence of successful integration and to identify factors that contribute to integration. Results: Several strategies at the program and course level are well described but poorly evaluated. Multiple factors contribute to successful learning, so identifying how interventions at these levels result in successful integration is difficult. Evidence from session-level interventions and experimental studies suggests that integration can be achieved if learning interventions attempt to link basic and clinical science in a causal relationship. These interventions attend to how learners connect different domains of knowledge and suggest that successful integration requires learners to build cognitive associations between basic and clinical science. Conclusions: One way of understanding the integration of basic and clinical science is as a cognitive activity occurring within learners. This perspective suggests that learner-centered, content-focused, and session-level-oriented strategies can achieve cognitive integration.


Annals of Emergency Medicine | 2016

Examining Reliability and Validity of an Online Score (ALiEM AIR) for Rating Free Open Access Medical Education Resources

Teresa Man Yee Chan; Andrew Grock; Michael Paddock; Kulamakan Kulasegaram; Lalena M. Yarris; Michelle Lin

STUDY OBJECTIVE: Since 2014, Academic Life in Emergency Medicine (ALiEM) has used the Approved Instructional Resources (AIR) score to critically appraise online content. The primary goals of this study are to determine the interrater reliability (IRR) of the ALiEM AIR rating score and its correlation with expert educator gestalt. We also determine the minimum number of educator-raters needed to achieve acceptable reliability. METHODS: Eight educators each rated 83 online educational posts with the ALiEM AIR scale. Items include accuracy, use of evidence-based medicine, referencing, utility, and the Best Evidence in Emergency Medicine rating score. A generalizability study was conducted to determine the IRR and the rating variance contributed by facets such as rater, blog, post, and topic. A randomized selection of 40 blog posts previously rated through ALiEM AIR was then rated again, according to gestalt, by a blinded group of expert medical educators. Their gestalt impressions were subsequently correlated with the ALiEM AIR scores. RESULTS: The IRR for the ALiEM AIR rating scale was 0.81 during the 6-month pilot period. Decision studies showed that at least 9 raters were required to achieve this reliability. Spearman correlations between the mean AIR score and the mean expert gestalt ratings were 0.40 for recommendation to learners and 0.35 for recommendation to colleagues. CONCLUSION: The ALiEM AIR scale is a moderately to highly reliable, 5-question tool when used by medical educators to rate online resources, and it displays a fair correlation with expert educator gestalt regarding the quality of those resources.
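
The gestalt-correlation step reported above is a rank correlation between two sets of per-post means. A minimal sketch of that computation with SciPy follows; the numbers are hypothetical placeholders, not the study's data.

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical per-post means: the ALiEM AIR score averaged over the
# educator-raters, and the expert gestalt rating averaged over the blinded
# expert group, for the same posts.
mean_air_scores = np.array([22.5, 18.0, 25.3, 19.8, 27.1, 21.0])
mean_gestalt    = np.array([ 5.1,  4.0,  6.2,  4.4,  6.5,  4.8])

rho, p_value = spearmanr(mean_air_scores, mean_gestalt)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
```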


Western Journal of Emergency Medicine | 2016

Derivation of Two Critical Appraisal Scores for Trainees to Evaluate Online Educational Resources: A METRIQ Study.

Teresa M. Chan; Keeth Krishnan; Michelle Lin; Christopher R. Carpenter; Matt Astin; Kulamakan Kulasegaram

Introduction: Online education resources (OERs), like blogs and podcasts, increasingly augment or replace traditional medical education resources such as textbooks and lectures. Trainees’ ability to evaluate these resources is poor, and few quality assessment aids have been developed to assist them. This study aimed to derive a quality evaluation instrument for this purpose. Methods: We used a three-phase methodology. In Phase 1, a previously derived list of 151 OER quality indicators was reduced to 13 items using data from published consensus-building studies (of medical educators, expert podcasters, and expert bloggers) and subsequent evaluation by our team. In Phase 2, these 13 items were converted to seven-point Likert scales used by trainee raters (n=40) to evaluate 39 OERs. The reliability and usability of these 13 rating items were determined using responses from trainee raters, and the top items were used to create two OER quality evaluation instruments. In Phase 3, these instruments were compared with an external certification process (the ALiEM AIR certification) and the gestalt evaluation of the same 39 blog posts by 20 faculty educators. Results: Two quality-evaluation instruments were derived with fair inter-rater reliability: the METRIQ-8 Score (intraclass correlation coefficient [ICC]=0.30, p<0.001) and the METRIQ-5 Score (ICC=0.22, p<0.001). Both scores, when calculated using the derivation data, correlated with educator gestalt (Pearson’s r=0.35, p=0.03 and r=0.41, p<0.01, respectively) and were related to increased odds of receiving an ALiEM AIR certification (odds ratio=1.28, p=0.03; OR=1.5, p=0.004, respectively). Conclusion: Two novel scoring instruments with adequate psychometric properties were derived to assist trainees in evaluating OER quality, and they correlated favourably with gestalt ratings of online educational resources by faculty educators. Further testing is needed to ensure these instruments are accurate when applied by trainees.
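
The inter-rater reliability figures reported above are intraclass correlation coefficients. As an illustration of the general idea (not the exact variance-components model used in the study), the sketch below computes a one-way random-effects ICC(1) from a hypothetical ratings matrix.

```python
import numpy as np

def icc_oneway(ratings: np.ndarray) -> float:
    """One-way random-effects ICC(1).
    Rows are rated objects (e.g. blog posts), columns are raters."""
    n, k = ratings.shape
    grand_mean = ratings.mean()
    row_means = ratings.mean(axis=1)
    # One-way ANOVA mean squares: between objects and within objects
    ms_between = k * ((row_means - grand_mean) ** 2).sum() / (n - 1)
    ms_within = ((ratings - row_means[:, None]) ** 2).sum() / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

# Hypothetical data: 5 blog posts, each scored by 4 trainee raters
ratings = np.array([
    [4.0, 3.5, 4.5, 4.0],
    [2.0, 2.5, 3.0, 2.0],
    [5.0, 4.5, 5.0, 4.5],
    [3.0, 3.5, 2.5, 3.0],
    [4.5, 4.0, 4.5, 5.0],
])
print(f"ICC(1) = {icc_oneway(ratings):.2f}")
```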


Medical Education | 2016

Collaborative learning of clinical skills in health professions education: the why, how, when and for whom.

Martin G. Tolsgaard; Kulamakan Kulasegaram; Charlotte Ringsted

This study is designed to provide an overview of why, how, when and for whom collaborative learning of clinical skills may work in health professions education.


Medical Education | 2018

Back from basics: integration of science and practice in medical education

Glen Bandiera; Ayelet Kuper; Maria Mylopoulos; Cynthia Whitehead; Mariela Ruetalo; Kulamakan Kulasegaram; Nicole N. Woods

In 1988, the Edinburgh Declaration challenged medical teachers, curriculum designers and leaders to make an organised effort to change medical education for the better. Among a series of recommendations was a call to integrate training in science and clinical practice across a breadth of clinical contexts. The aim was to create physicians who could serve the needs of all people and provide care in a multitude of contexts. In the years since, in the numerous efforts towards integration, new models of curricula have been proposed and implemented with varying levels of success.


Medical Education | 2017

Contexts, concepts and cognition: principles for the transfer of basic science knowledge

Kulamakan Kulasegaram; Zarah Chaudhary; Nicole N. Woods; Kelly L. Dore; Alan J. Neville; Geoffrey Norman

Transfer of basic science aids novices in the development of clinical reasoning. The literature suggests that although transfer is often difficult for novices, it can be optimised by two complementary strategies: (i) focusing learners on conceptual knowledge of basic science or (ii) exposing learners to multiple contexts in which the basic science concepts may apply. The relative efficacy of each strategy as well as the mechanisms that facilitate transfer are unknown. In two sequential experiments, we compared both strategies and explored mechanistic changes in how learners address new transfer problems.


Academic Medicine | 2016

Avoiding Common Data Analysis Pitfalls in Health Professions Education Research.

Jimmie Leppink; Kulamakan Kulasegaram



Journal of Evaluation in Clinical Practice | 2018

Developing the experts we need: Fostering adaptive expertise through education

Maria Mylopoulos; Kulamakan Kulasegaram; Nicole N. Woods

In this era of increasing complexity, there is a growing gap between what we need our medical experts to do and the training we provide them. While medical education has a long history of being guided by theories of expertise to inform curriculum design and implementation, the theories that currently underpin our educational programs do not account for the expertise necessary for excellence in the changing health care context. The more comprehensive view of expertise gained from research on both clinical reasoning and adaptive expertise provides a useful framing for re-shaping physician education, placing emphasis on training clinicians who will be adaptive experts: clinicians able both to apply their extensive knowledge base and to create new knowledge as dictated by patient needs and context. Three key educational approaches have been shown to foster the development of adaptive expertise: learning that emphasizes understanding, providing students with opportunities to embrace struggle and discovery in their learning, and maximizing variation in the teaching of clinical concepts. There is solid evidence that a commitment to these educational approaches can help medical educators set trainees on the path towards adaptive expertise.


Advances in Physiology Education | 2018

Beyond “formative”: assessments to enrich student learning

Kulamakan Kulasegaram; P. K. Rangachari

Formative assessments can enhance and enrich student learning. Typically, these have been used to provide feedback against end-of-course standards and prepare students for summative assessments of performance or measurement of competence. Here, we present the case for using assessments for learning to encompass a wider range of important outcomes. We discuss 1) the rationale for using assessment for learning; 2) guiding theories of expertise that inform assessment for learning; 3) theoretical and empirical evidence; 4) approaches to rigor and validation; and 5) approaches to implementation at multiple levels of the curriculum. The literature strongly supports the use of assessments as an opportunity to reinforce and enhance learning. Physiology teachers have a wide range of theories, models, and interventions from which to prepare students for retention, application, transfer, and future learning by using assessments.

Collaboration


Dive into Kulamakan Kulasegaram's collaborations.

Top Co-Authors

David Wasserstein

Sunnybrook Health Sciences Centre
