Marcel D'Eon
University of Saskatchewan
Publications
Featured research published by Marcel D'Eon.
Journal of Interprofessional Care | 2005
Marcel D'Eon
Interprofessional education (IPE) has been promoted as a method to enhance the ability of health professionals to learn to work together. This article examines several approaches to learning that can help IPE fulfill its expectations. The first is aimed at the transfer of learning to novel situations and involves two ideas: students need to be challenged with progressively more complex tasks, and those tasks need to reflect the reality in which they will be working. Second, the learning situation needs to be structured using the five elements of best-practice cooperative learning: positive interdependence, face-to-face promotive interaction, individual accountability, interpersonal and small-group skills, and group processing. Finally, the learning process itself needs to be approached from an experiential learning framework, cycling through the four-stage model of planning, doing, observing, and reflecting. By using increasingly complex and relevant cases in cooperative groups with an experiential learning process, interprofessional education can be successful.
Advances in Health Sciences Education | 2000
Marcel D'Eon; Valerie Overgaard; Sheila Rutledge Harding
What we believe about the nature of teaching has important implications for faculty development. In this article we contrast three different beliefs about the nature of teaching and highlight the implications of each for faculty development. If teaching were merely a technical enterprise where well-trained teachers delivered packaged lessons, a very directive style of faculty development might be appropriate. If teaching were primarily a craft where teachers made personal judgments daily about how and what to teach, then faculty development which encouraged individual reflection and artistry might be more suitable. This article advances the argument that teaching generally (and teaching in medical schools in particular) is best characterized as a type of social practice. Social practices (such as parenting, being polite, and going to university) are purposive, rational, moral, and communal, and are identified by their activities. The communal aspect of teaching means, among other things, that the prevailing social norms of faculty at particular institutions of higher education have a large role to play in shaping the practice of teaching. This being the case, faculty development needs to provide teachers the opportunity to address and reshape these powerful social norms where necessary.
Gerontology & Geriatrics Education | 2012
Jenny Basran; Vanina Dal Bello-Haas; Doreen Walker; Peggy MacLeod; Bev Allen; Marcel D'Eon; Meredith McKague; Nicola S. Chopin; Krista Trinder
The University of Saskatchewan's Longitudinal Elderly Person Shadowing (LEPS) is an interprofessional senior mentors program (SMP) in which teams of undergraduate students (first year of medicine, pharmacy, and physiotherapy; second year of nutrition; third year of nursing; and fourth year of social work) partner with community-dwelling older adults. Existing literature on SMPs provides little information on the sustainability of attitudinal changes toward older adults or of changes in interprofessional attitudes. LEPS students completed Polizzi's Aging Semantic Differential and the Interdisciplinary Education Perception Scale. Perceptions of older men and women improved significantly, and the changes were sustained after one year. However, few changes were seen in interprofessional attitudes.
Journal of Interprofessional Care | 2010
Nora McKee; Donna Goodridge; Fred Remillard; Marcel D'Eon
The demand for high-quality palliative care continues to escalate as life expectancies increase and illness trajectories lengthen (Teno et al., 2004). Integral to providing excellent end-of-life care is a team-based approach that draws upon the expertise of diverse health care providers (Mularski, Bascom, & Osborn, 2001) and hence the need for interprofessional education (IPE), "occasions when two or more professions learn from, with and about each other to improve collaboration and the quality of care" (Oandasan & Reeves, 2005). This report describes the evaluation of a Problem-based Learning (PBL) (Barrows & Wee, 2007) module in palliative care developed for health science students at the University of Saskatchewan. The interprofessional authors collaborated because of involvement with the local EFFPEC project (Educating Future Physicians in Palliative and End-of-Life Care, 2004) and were granted funding from a Health Canada initiative in Saskatchewan called P-CITE (Patient-Centered Interprofessional Team Experiences, 2008). The objective was to create and evaluate an educational tool to accelerate education in palliative care, interprofessional collaboration and the use of PBL methodology.
Medical Teacher | 2005
Marcel D'Eon; Robert Crawford
A major problem for curriculum and course planners is coping simultaneously with an expanding knowledge base and shrinking time to teach. A widely used solution is to include huge amounts of information in the curriculum. A better solution is to identify a manageable core of relevant knowledge. One way is to begin with program goals and systematically identify, with increasing specificity, the content needed to achieve those goals. Another is the empirical determination of content, which has not been widely attempted; such studies would include experiments and practice analyses. There is a need to mount greater and more rigorous efforts to advance this scholarship and to provide useful information to curriculum planners. Large-scale, multi-site studies that compare the results from various methods and from different sources will be more useful to medical education generally. In these days of exploding information and technology and greater understanding of how people learn, more than ever, efforts need to be focused on finding the very specific content that will result in the best learning for our students.
Journal of Continuing Education in The Health Professions | 2001
Marcel D'Eon; Doris AuYeung
Background: The purpose of train‐the‐trainer (TTT) programs within the context of continuing medical education (CME) is to help facilitators acquire and/or enhance their skills at leading CME sessions. The provision of follow‐up is one feature of successful CME workshops over which CME providers have some control. Follow‐up is defined as any encounter between participants and workshop leaders, following an initial workshop or other development session, and is designed to enhance, maintain, reinforce, transfer, extend, or support the learning from the original workshop. In this article, we elaborate on the use of audio teleconferences to provide follow‐up for a TTT workshop in Saskatchewan, a largely rural province in western Canada. Methods: The teleconferences began 6 weeks after the workshop and were held at approximately 6‐week intervals, with five conference calls in total. Each lasted about 45 minutes. Participants were interviewed to determine their view of the value of the teleconferences. Results: Participants reported learning from the teleconferences and feeling more prepared to conduct CME sessions due to their participation in the teleconferences. Participants missed teleconferences only for extenuating circumstances (e.g., emergency deliveries). Findings: We have found that audio teleconferences allow for and encourage professional discussion that is crucial to changing practices. They are an effective way to incorporate follow‐up to TTT workshops when participants travel great distances to attend.
American Journal of Evaluation | 2009
Marcel D'Eon; Kevin W. Eva
There are times in research, as in life, when honest debate leads to common ground. Lam (2009) presents arguments contradicting the conclusions drawn in a 2008 article by D'Eon, Sadownik, Harrison, & Nation, published earlier in this journal. Although we stand by the value of the 2008 study, our own reanalysis of the original data provides additional affirmation of Lam's (2009) conclusion, but for different reasons. Furthermore, we continue to propose that more research might provide insights into when or if self-assessments would be useful in detecting workshop effectiveness. The article by D'Eon et al. (2008) reports a study to test the hypothesis that individual judgments could provide valuable information when considered in aggregate. It is possible to reduce the error associated with individual judgments, a fundamental property of reliability, thus increasing the utility of the measurements being collected. This is the reason that assessment strategies that use sampling (e.g., Objective Structured Clinical Examinations) do indeed provide useful information in the aggregate even though performance on specific stations fails to predict performance on other specific stations. If we average across numerous self-assessments, we might dilute the error to the point of being able to determine whether learning has taken place for a group, on average, thereby allowing cautious inferences to be made about the effectiveness of an intervention. As self-assessment data are simple and inexpensive to gather, it is vital to know whether education researchers and program evaluators should continue to rely on aggregated self-assessments as meaningful sources of information.
The article by D'Eon et al. (2008) reported that self-assessments, aggregated across workshop participants, yielded an effect size from pretest to posttest of a magnitude comparable to aggregated external assessments of performance collected across the same interval, despite the fact that the correlation between self-assessments and external judgments was poor. It was concluded, therefore, that the data provided "reasonably compelling evidence" that aggregated self-assessments may generate useful indicators of workshop effectiveness. Lam (2009) has taken exception to these conclusions, arguing that (a) self-assessments are invalid; (b) the observed change in the external assessments may not be attributable to the training workshop; and (c) the findings may not be generalizable to other contexts. We disagree. To argue against the validity of self-assessments at the individual level is immaterial. D'Eon et al. did not claim that individual self-assessments were valid, and there is ample evidence, including the 2008 study in question, suggestive of the contrary. However, this does not mean that, once aggregated, self-assessments necessarily remain invalid. Second, in any test of construct validity, one examines whether scores on the measure being tested...
Medical Teacher | 2007
Marcel D'Eon; Caroline Kosmas; Jamie MacMillan
The article "Learning syndromes afflicting beginning medical students: identification and treatment – reflections after forty years of teaching" (Burns 2006) seems to resonate with teachers but completely misses the point. In a lighthearted way, the author seems to lay the blame for these syndromes at the feet of the medical students. Drawing on almost 10 years of teaching, research, conducting workshops, and attending and presenting at conferences in medical education (MD), and on our experience as medical students (CK, JM), the point we would like to make is that many of the syndromes and conditions are in fact symptomatic of systemic problems to which the medical students are merely reacting, and that the major responsibility for these learning syndromes ought to fall at the feet of the faculty of the medical school. The author provides an example that supports our thesis in the 'Slip and Slide' syndrome, in which the anatomy department modified its testing program, which in turn changed medical student study behaviour. It would do little good to tell people who are sick from mining asbestos that they should look after themselves better; there would clearly be a moral imperative for the managers and supervisors to improve the working conditions at the mine! Medical students from all years often agonize over questions asked on examinations because the questions are ambiguous, have more than one right answer, include grammatical errors, or were not taught or included in the objectives (if objectives were provided). We have observed that course coordinators during examination reviews did not know the answers to certain questions, that others admitted that there were two good answers but refused to allow marks for both, and that some picked from a question bank without knowing what exactly was taught by a particular lecturer.
Questions that seem clear to an instructor with many years of experience will likely seem complex to a student who is trying to apply the information for the first time. Instead of telling medical students not to agonize over every question, we as medical educators need to write technically correct multiple choice questions and pay more attention to student assessment generally (Entwhistle 1992). One method we are trying out at the University of Saskatchewan is an examination audit, whereby practicing clinicians and former graduates systematically review exam questions for relevance and quality. Medical students, like many of us, may at times postpone studying until what seems like the last moment. This is often a good coping strategy, since most tasks will expand to fill the amount of time available. But students may be overwhelmed, and we think the larger issue is the total amount of content that we expect them to learn in a finite amount of time. One estimate pegs the rate of learning new facts and concepts in medical school (based on a 40-hour work week) at about one every two and a half minutes for the pre-clinical material and about one every four and a half minutes for clinical skills and knowledge, whereas the recommended rate is about one every 12 minutes (Anderson & Graham 1980). Our own estimates for some courses are similar. How can anyone learn at that rate and be able to use and apply the material in a proficient way? The author laments the chorus of medical students who only want to study relevant material, fearing that we will turn out mere technicians and trades people. There is more than enough material that is relevant to clinical practice to fill four years of study without including material that is only marginally relevant (Jamshidi & Cook 2003). There is much wasted effort in teaching irrelevant material to students when they will promptly forget much of it within months, and sometimes up to 50% within a year of the exam (D'Eon 2006).
Instead of teaching them with and about electron microscope images of inflammation in first year, let's stick to what is authentically relevant, such as: On physical examination, what does inflammation look like? How does it behave? What are the consequences? What do we do about it? To train physicians, as opposed to mere technicians, we would explore the science behind inflammation to the extent that it furthered one's ability to identify and manage inflammation in varied circumstances. We should go deeper on some relevant content and avoid other material entirely. The author also introduces the importance of 'pass the exam' relevance, implying that students should compliantly study the material as handed down by the teacher because it is on the exam. This is often no more than an attempt at coercive motivation when no rational and convincing explanation...
Medical Teacher | 2004
Marcel D'Eon
Before beginning this critique I want to congratulate my colleagues for entering the murky world of tacit knowledge. Their attempts to observe and analyse tacit knowledge have moved the discourse forward. They correctly describe tacit knowledge when, for example, they state: "Successful teachers may know how to behave appropriately in the teaching role, but may not be aware of, or be able to articulate, the basic principles and theoretical concepts underlying effective teaching behaviours." They are saying that tacit knowledge is procedural; it is about what to do. Explicit knowledge, on the other hand, is declarative and abstract; it is knowledge about teaching. They are not saying that those with tacit knowledge cannot describe in plain English what they are doing when they teach in the classroom or by the bedside. Teachers possessing tacit knowledge are just unable to explain what they do and why they do it using the educational jargon for basic pedagogic concepts and principles. To identify teachers with tacit knowledge, one cannot ask them explicit verbal questions about pedagogical concepts and principles, because they do not know them. Researchers must instead observe the use of effective pedagogical practices in appropriate situations (on paper, simulated, or real) and determine that the teacher is unable to articulate what he/she did using pedagogical terms and concepts. The knowledge of what to do is implied by the use of effective techniques and strategies; the label 'tacit' is applied afterward, on the basis of the absence, indeed the impossibility, of an articulate description and explanation. One can ascertain tacit knowledge of teaching only indirectly. It is a diagnosis of exclusion. I find their description of tacit knowledge clear, but its use in their paper ambiguous.
Medical Teacher | 2016
Marcel D'Eon
Dear Sir, Let me be clear: "non-cognitive" does not work. For example, in the recent article "Narrative information obtained during student selection predicts problematic study behavior," the authors...