
Publication


Featured research published by Michele Groves.


Medical Teacher | 2007

An analysis of peer, self, and tutor assessment in problem-based learning tutorials

Tracey Papinczak; Louise Young; Michele Groves; Michele Haynes

Objective: The purpose of this study was to explore self-, peer- and tutor assessment of performance in tutorials among first-year medical students in a problem-based learning curriculum. Methods: One hundred and twenty-five students enrolled in the first year of the Bachelor of Medicine and Bachelor of Surgery Program at the University of Queensland were recruited to participate in a study of metacognition and peer- and self-assessment. Both quantitative and qualitative data were collected from the assessment of PBL performance within the tutorial setting, which included elements such as responsibility and respect, communication, and critical analysis through presentation of a case summary. Self-, peer- and tutor assessment took place concurrently. Results: Scores obtained from tutor assessment correlated poorly with self-assessment ratings (r = 0.31–0.41), with students consistently under-marking their own performance to a substantial degree. Students with greater self-efficacy scored their PBL performance more highly. Peer-assessment was a slightly more accurate measure, with peer-averaged scores correlating moderately with tutor ratings initially (r = 0.40) and improving over time (r = 0.60). Students consistently over-marked their peers, particularly those with sceptical attitudes to the peer-assessment process. Peer over-marking led to less divergence from the tutor scoring than under-marking of one's own work. Conclusion: According to the results of this study, first-year medical students in a problem-based learning curriculum were better able to judge the performance of their peers accurately than their own performance. This study has shown that self-assessment of process is not an accurate measure, in line with the majority of research in this domain. Nevertheless, it has an important role to play in supporting the development of skills in reflection and self-awareness.
Practice points: Self-assessment results in substantial under-marking compared to tutor assessment. Scores obtained from peer-assessment are significantly more generous than those arising from tutor assessment. Self-assessment is a less accurate means of assessing student performance than peer-assessment.


Nurse Education Today | 2009

The objective structured clinical examination (OSCE): optimising its value in the undergraduate nursing curriculum.

Marion Mitchell; Amanda Henderson; Michele Groves; Megan Dalton; Duncan David Nulty

This article explores the use of the objective structured clinical examination (OSCE) in undergraduate nursing education. The advantages and limitations of this assessment approach are discussed and various applications of the OSCE are described. Attention is given to the complexities of evaluating some psychosocial competency components. The issues are considered in an endeavour to delineate the competency components, or skill sets, that best lend themselves to assessment by the OSCE. We conclude that OSCEs can be used most effectively in nurse undergraduate curricula to assess safe practice in terms of performance of psychomotor skills, as well as the declarative and schematic knowledge associated with their application. OSCEs should be integrated within a curriculum in conjunction with other relevant student evaluation methods.


Medical Teacher | 2003

Clinical reasoning: the relative contribution of identification, interpretation and hypothesis errors to misdiagnosis.

Michele Groves; Peter O'Rourke; Heather Alexander

The aim of this study was to identify and describe the types of errors in clinical reasoning that contribute to poor diagnostic performance at different levels of medical training and experience. Three cohorts of subjects, second- and fourth- (final) year medical students and a group of general practitioners, completed a set of clinical reasoning problems. The responses of those whose scores fell below the 25th centile were analysed to establish the stage of the clinical reasoning process—identification of relevant information, interpretation or hypothesis generation—at which most errors occurred and whether this was dependent on problem difficulty and level of medical experience. Results indicate that hypothesis errors decrease as expertise increases but that identification and interpretation errors increase. This may be due to inappropriate use of pattern recognition or to failure of the knowledge base. Furthermore, although hypothesis errors increased in line with problem difficulty, identification and interpretation errors decreased. A possible explanation is that as problem difficulty increases, subjects at all levels of expertise are less able to differentiate between relevant and irrelevant clinical features and so give equal consideration to all information contained within a case. It is concluded that the development of clinical reasoning in medical students throughout the course of their pre-clinical and clinical education may be enhanced by both an analysis of the clinical reasoning process and a specific focus on each of the stages at which errors commonly occur.


Medical Education | 2012

Understanding clinical reasoning: The next step in working out how it really works

Michele Groves



Journal of Interprofessional Care | 2014

Australian health reforms: enhancing interprofessional practice and competency within the health workforce.

Janelle Thomas; Lindy McAllister; Michele Groves

Abstract Underpinned by increasing healthcare complexity and ongoing pressures to control the cost of healthcare, governments are increasingly calling for improved health service delivery models. A public policy paradigm of partnership-based, collaborative interprofessional working is central to revised models of health service delivery. Collaborative activity and service re-design do not occur by chance. They are complex and multi-faceted. Increasingly, calls for collaborative style health service re-design activities are being translated to a need to agree on a clear set of interprofessional competencies and develop a culture of interprofessional practice (IPP) across the sector. This report summarizes the requirements for developing a culture of interprofessional practice within the context of Australian healthcare reforms. It also highlights the role of well-developed interprofessional competency frameworks to support envisaged changes in practice. The report expands the discussion in this area by referring to the work of two other nations with prior developments in interprofessional workplace development and reform.


Nurse Education in Practice | 2010

Innovation in learning – An inter-professional approach to improving communication

Marion Mitchell; Michele Groves; Charles Mitchell; Judy Batkin

Inter-professional education (IPE) is recognised as a major way of introducing students in the health professions to the importance of teamwork and communication in the delivery of excellent healthcare. This pilot project evaluated mixed versus single-discipline group tutorials of nursing and medical students as a way to promote IPE and understanding of communication. Four tutorial sessions were video-recorded and analysed using a video analysis coding grid. Additional data were drawn from student evaluations and assessment of group participation, and were subjected to quantitative and qualitative analysis. The case study as portrayed in the DVD was thought to provide an effective learning tool by both sets of students. Medical students rated the need for mixed group tutorials significantly lower than the nursing students, who thought the tutorial activity helped with an appreciation of the importance of communication to effective teamwork. However, medical students in the single-discipline group did not understand the nursing role. The resources fostered reflection on students' own professional roles as well as those of others; however, the importance of communication within the nursing role needs to be recognised by nursing students and curriculum designers.


Nurse Education Today | 2013

An implementation framework for using OSCEs in nursing curricula

Amanda Henderson; Duncan David Nulty; Marion Mitchell; Carol Jeffrey; Michelle Kelly; Michele Groves; Pauline Glover; Sabina Knight

The implementation framework outlined in this paper has been developed from feedback on a trial across three different nursing and midwifery programmes and is designed to assist educators to incorporate OSCEs within their curricula. There is value in flagging the pedagogical principles embodied in the framework and alerting educators to their importance for more meaningful student learning. For each step, practical advice is provided, contributing to the utility of this approach. Considerations are systematic, ensuring that the use of OSCEs in health care curricula assures judicious use of resources to achieve desired student outcomes.


BMC Medical Education | 2013

Analysing clinical reasoning characteristics using a combined methods approach

Michele Groves; Marie-Louise Dick; Geoff McColl; Justin L C Bilszta

Background: Despite a major research focus on clinical reasoning over the last several decades, a method of evaluating the clinical reasoning process that is both objective and comprehensive is yet to be developed. The aim of this study was to test whether a dual approach, using two measures of clinical reasoning, the Clinical Reasoning Problem (CRP) and the Script Concordance Test (SCT), provides a valid, reliable and targeted analysis of clinical reasoning characteristics to facilitate the development of diagnostic thinking in medical students.

Methods: Three groups of participants, general practitioners and third- and fourth (final)-year medical students, completed 20 online clinical scenarios: 10 in CRP and 10 in SCT format. Scores for each format were analysed for reliability, correlation between the two formats and differences between subject groups.

Results: Cronbach's alpha coefficient ranged from 0.36 for SCT 1 to 0.61 for CRP 2. Statistically significant correlations were found between the mean f-score of CRP 2 and the total SCT 2 score (0.69), and between the mean f-score for all CRPs and all mean SCT scores (0.57 and 0.47 respectively). The pass/fail rates of the SCT and the CRP f-score are in keeping with the findings from the correlation analysis (31% of students (11/35) passed both, 26% failed both, and 43% (15/35) passed one test but not the other), and suggest that the two formats measure overlapping but not identical characteristics. One-way ANOVA showed consistent differences in scores between levels of expertise, with these differences being significant or approaching significance for the CRPs.

Conclusion: SCTs and CRPs are overlapping and complementary measures of clinical reasoning. Whilst SCTs are more efficient to administer, the use of both measures provides a more comprehensive appraisal of clinical skills than either measure alone, and as such could potentially facilitate the customised teaching of clinical reasoning for individuals. The modest reliability of SCTs and CRPs in this study suggests the need for an increased number of test items. Further work is needed to determine the suitability of a combined approach for assessment purposes.


Advances in Simulation | 2016

OSCE best practice guidelines—applicability for nursing simulations

Michelle Kelly; Marion Mitchell; Amanda Henderson; Carol Jeffrey; Michele Groves; Duncan D. Nulty; Pauline Glover; Sabina Knight

Background: Objective structured clinical examinations (OSCEs) have been used for many years within healthcare programmes as a measure of students' and clinicians' clinical performance. OSCEs are a form of simulation and are often summative but may be formative. This educational approach requires robust design based on sound pedagogy to assure practice and assessment of holistic nursing care. As part of a project testing seven OSCE best practice guidelines (BPGs) across three sites, the BPGs were applied to an existing simulation activity. The aim of this study was to determine the applicability and value of the OSCE BPGs in an existing formative simulation.

Methods: A mixed methods approach was used to address the research question: in what ways do OSCE BPGs align with simulations? The BPGs were aligned and compared with all aspects of an existing simulation activity offered to first-year nursing students at a large city-based university, prior to their first clinical placement in an Australian healthcare setting. Survey questions, comprising Likert scales and free-text responses, used at other sites were slightly modified to refer to simulation. Students' opinions about the refined simulation activity were collected via an electronic survey immediately following the simulation and from focus groups. Template analysis, using the BPGs as existing (a priori) thematic codes, enabled interpretation and illumination of the data from both sources.

Results: Few changes were made to the existing simulation plan and format. Students' responses from surveys (n = 367) and four focus groups indicated that all seven BPGs were applicable for simulations in guiding their learning, particularly in the affective domain, and in assisting their perceived needs in preparing for upcoming clinical practice.

Discussion: Similarities were found in the intent of simulation and of OSCEs informed by the BPGs to enable feedback to students about holistic practice across affective, cognitive and psychomotor domains. The similarities in this study are consistent with findings from exploring the applicability of the BPGs for OSCEs in other nursing education settings, contexts, universities and jurisdictions. The BPGs also aligned with other frameworks and standards often used to develop and deliver simulations.

Conclusions: Findings from this study provide further evidence of the applicability of the seven OSCE BPGs to inform the development and delivery of, in this context, simulation activities for nurses. The manner in which simulation is offered to large cohorts requires further consideration to meet students' needs in rehearsing the registered nurse role.


Journal of Clinical Nursing | 2015

Critical factors about feedback: ‘They told me what I did wrong; but didn't give me any feedback’

Michele Groves; Marion Mitchell; Amanda Henderson; Carol Jeffrey; Michelle Kelly; Duncan David Nulty

Aim: This study reports nursing and midwifery undergraduate and postgraduate students' perceptions of feedback during their participation in a performance-based learning activity, either an Objective Structured Clinical Examination (OSCE) for patient assessment or a simulation focussed on communication skills.

Background: Providing feedback to students is critical to learning. The definition and process of giving feedback have progressed significantly since the initial concept of simply advising learners whether an answer to a test item was right or wrong (Kulhavy 1977). Feedback is now conceived more broadly and used throughout the learning process. By providing students with a snapshot of their current ability, together with advice, feedback helps to define learning goals more clearly, increases achievement and influences learning style (Sadler 1989). Feedback cultivates reflective practice and develops expertise (Albanese 2006). This is especially so in work-based learning, where the provision of immediate feedback on performance can particularly enhance applied learning. The nature of feedback varies widely and includes formative assessment by teachers and peers, and summative assessment required for academic progression. The most effective feedback is constructive: it should focus on the task being assessed, include strengths as well as weaknesses of performance, and suggest strategies for performance improvement. However, its effectiveness is also dependent on factors such as format, timing and the perceived expertise of the provider (Hattie & Timperley 2007, Murdoch-Eaton & Sargeant 2012). Additionally, receptiveness to, and the type of feedback preferred, vary with the maturity and life experience of the learner; for example, beginning medical students have indicated a preference for positive, reassuring feedback, whereas senior students preferred immediate verbal feedback (Murdoch-Eaton & Sargeant 2012).

Design: Student perceptions of feedback were collected across four educational settings: two undergraduate nursing programmes, one undergraduate midwifery programme and a postgraduate course for rural and remote healthcare nurses, where students' learning was centred on a practice-based activity, either an OSCE or a simulation session. The OSCE consisted of one scenario that required students to undertake an integrated patient assessment, while the simulation session focussed on communication skills, with students alternately playing the roles of patient, carer, nurse, etc. In all settings, the activities were for formative assessment and students received feedback from teaching staff. Additionally, students were encouraged to organise informal practice sessions and obtain peer feedback.

Method: Data were collected via open-ended questions on student surveys (n = 557) and student participation in focus group discussions (n = 91) within one week of participation in the learning activity. Thematic analysis was conducted on text from surveys and transcripts of focus group discussions.

Results: Overall, students found the feedback they received to be beneficial to their learning regardless of their role in the practice-based activity or whether they received individual or group feedback. However, three specific themes emerged from the data analysis, relating to the value of feedback for learning, students' perception of the nature of feedback, and the need for consistency in giving feedback (see Table 1): 1. The value of feedback for learning. Students appreciated receiving detail regarding the positive aspects of their practice and areas in which they could improve. However, there was variable appreciation of peer feedback by students, some of whom felt that their colleagues' lack of expertise limited the opportunity for effective learning. 2. Limited understanding of what constitutes feedback. There was evidence of limited understanding by some students about what actually constitutes feedback. This included the perception that feedback is always positive and different from simply correcting mistakes; another was that quantity was more important than quality. A small minority of the 557 students commented that they could only learn through 'doing' rather than 'observing', and that feedback given to others in a group setting was, by definition, not applicable to them. Students in year one indicated that they were only informed about what they were doing wrong; they valued positive feedback in the form of reassurance rather than negative comments. 3. Issues with consistency in the quality and delivery of feedback. Some student comments indicated that there were differences in how staff gave feedback during teaching sessions. Sometimes this resulted in conflicts or contradictions in the performance of techniques. Students highlighted the need for a uniform approach to teaching and giving feedback.

Conclusion: These findings provide important insights into perceptions of feedback, its effectiveness in promoting learning, student perceptions of what feedback is, and their receptiveness to different types of feedback, specifically in clinical practice situations. In particular, they support other recent work identifying that students at the beginning of their course of study understand feedback as positive affirmation, in contrast to more experienced and postgraduate students, who value detailed statements about how they can improve (Murdoch-Eaton & Sargeant 2012).

Relevance to clinical practice: Both staff and students need to have a common understanding of the nature and various forms of feedback. In addition to the well-recognised features of quality feedback (timely, specific, constructive and the like), this study has highlighted the need to address two underpinning issues before embarking on the feedback process: all teaching staff should be trained to give consistent and effective feedback in a way that will be most useful for students; and information should be provided to students about what constitutes feedback and how it can best be used to improve learning. The negative view of peer feedback held by some students suggests that adequate preparation is important and should carefully consider the purpose of the feedback (e.g. to be used following expert feedback to check technique, or as revision in the lead-up to summative assessment) and where and how it is to be given.

Disclosure: The authors have confirmed that all authors meet the ICMJE criteria for authorship credit (www.icmje.org/ethical_1author.html), as follows: (1) substantial contributions to conception and design, or acquisition of data, or analysis and interpretation of data; (2) drafting the article or revising it critically for important intellectual content; and (3) final approval of the version to be published.

Funding & Ethics: This project was funded by the Australian Government through the Office of Learning and Teaching, Department of Education, Employment and Workplace Relations. Ethical approval was obtained from the ethics review committees of all participating institutions.

Collaboration

Top co-authors of Michele Groves:

Amanda Henderson, Princess Alexandra Hospital
Marion Mitchell, Princess Alexandra Hospital
Louise Young, University of Queensland
Peter O'Rourke, QIMR Berghofer Medical Research Institute