Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Claudia Kiessling is active.

Publication


Featured research published by Claudia Kiessling.


Journal of Interprofessional Care | 2016

Crossing boundaries in interprofessional education: A call for instructional integration of two script concepts

Jan Kiesewetter; Ingo Kollar; Nicolas Fernandez; Stuart Lubarsky; Claudia Kiessling; Martin R. Fischer; Bernard Charlin

Clinical work occurs in a context which is heavily influenced by social interactions. The absence of theoretical frameworks underpinning the design of collaborative learning has become a roadblock for interprofessional education (IPE). This article proposes a script-based framework for the design of IPE. This framework provides suggestions for designing learning environments intended to foster competences we feel are fundamental to successful interprofessional care. The current literature describes two script concepts: “illness scripts” and “internal/external collaboration scripts”. Illness scripts are specific knowledge structures that link general disease categories and specific examples of diseases. “Internal collaboration scripts” refer to an individual’s knowledge about how to interact with others in a social situation. “External collaboration scripts” are instructional scaffolds designed to help groups collaborate. Instructional research relating to illness scripts and internal collaboration scripts supports (a) putting learners in authentic situations in which they need to engage in clinical reasoning, and (b) scaffolding their interaction with others with “external collaboration scripts”. Thus, well-established experiential instructional approaches should be combined with more fine-grained script-based scaffolding approaches. The resulting script-based framework offers instructional designers insights into how students can be supported to develop the necessary skills to master complex interprofessional clinical situations.


Medical Education | 2015

Feedforward interview: enhancing reflection for successful teachers

Anja Görlitz; Ralf Schmidmaier; Claudia Kiessling

What problem was addressed? Summative feedback provided by students at the end of a course did not offer lecturers the opportunity to improve their teaching methods during the course. There was a lack of instant, formative feedback to help individual lecturers identify strengths and weaknesses in their teaching methods. We sought to encourage timely self-reflection among lecturers as a continuous process within an academic year.

What was tried? In alignment with the implementation of a revamped curriculum, 12 groups of students (n = 179) took turns evaluating various types of daily teaching activity (e.g. lectures, seminars, laboratories, problem-solving activities and clinical teaching sessions). Each group was assigned a 2-week period to provide feedback, and the process was repeated throughout the academic year. An example of a question asked (for a 1-hour lecture) is: ‘Was there alignment between the content delivered and the learning objectives?’ Other questions related to delivery of the content, intellectual level and overall rating. Students also wrote about their positive and negative experiences, and recommended possible areas for improvement through online feedback. Respective lecturers then received the feedback reports via e-mail and informed the medical education management of their response to students’ feedback (e.g. Lecturer: ‘I will take note of the recommendations and try to improve further. This method of feedback is very useful.’). At the end of each learning block (system), we met students to present a summary of their feedback and the actions taken by faculty members.

What lessons were learned? We believe that students’ feedback could reinforce lecturers’ commitment to maintaining effective teaching methods (e.g. Student: ‘He encouraged two-way interactions by asking us questions’). Lecturers realised that students valued their efforts (e.g. Lecturer: ‘Happy to be of service to the students and nice to feel appreciated’). By contrast, lecturers were made aware of areas for improvement (e.g. Student: ‘I hope the lecturer can speak slower’). The aforementioned lecturer accepted the recommendation and changed his delivery (e.g. Student: ‘The lecturer spoke slower and continued repeating the facts that he wanted to tell us... he had improved his ways of giving lectures’). The increasing amount of written feedback indicated that students began to buy in to the initiative when their feedback was taken seriously. The implementation was also accompanied by several deliberations. Firstly, we invited all students to evaluate three lectures on a randomly selected Monday in order to address lecturers’ concerns about the reliability of small-group evaluation. A Mann–Whitney test showed that, for all nine questions asked in each lecture evaluation, the responses of a randomly selected group (n = 15) did not differ significantly from those of the rest of the class (n = 130). The response rate was approximately 85%. Secondly, some students’ feedback contained comments that expressed negative emotions and used inappropriate wording. We therefore met with students regularly to help them learn how to provide constructive feedback, which also served to cultivate their personal and professional development. Thirdly, we found that lecturers became more open to accepting criticism from students and demonstrated a commitment to change.
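The subgroup-versus-class comparison described above can be reproduced with a standard Mann–Whitney U test. A minimal sketch in Python, using scipy.stats.mannwhitneyu with made-up illustrative ratings rather than the study's data, might look like this:

# Minimal sketch: does a randomly selected small group (n = 15) rate a
# lecture differently from the rest of the class (n = 130)?
# The ratings below are illustrative placeholders, not data from the study.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(seed=42)

# Hypothetical 5-point Likert ratings for one evaluation question
small_group = rng.integers(1, 6, size=15)     # randomly selected group
rest_of_class = rng.integers(1, 6, size=130)  # remaining students

u_stat, p_value = mannwhitneyu(small_group, rest_of_class,
                               alternative="two-sided")
print(f"U = {u_stat}, p = {p_value:.3f}")

# A non-significant result (p > 0.05) would be consistent with the
# authors' finding that the small group's responses did not differ
# systematically from those of the whole class.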


Medical Education | 2018

When predicting item difficulty, is it better to ask authors or reviewers?

Claudia Kiessling; Andreas Winkelmann; Felicitas-Maria Lahner; Daniel Bauer

What questions do we need to ask now in order to move the OES towards the collective vision? What relevant and credible information is needed to answer these questions? Step 5 pulled together the relevant information, and a ‘learning huddle’ was held with members of the core team. The learning huddle had three objectives: (i) to make meaning of the information collected; (ii) to discuss what the core team is noticing or sensing in the system as a result of the work of the OES (emergent outcomes); and (iii) to identify any new or relevant questions that the core team thought important to ask in order to move the OES towards the collective vision. Step 6 involved ‘refreshing’ the evaluation based on the learning huddle discussion, and Step 7 was the sharing of the evaluation findings to date with relevant stakeholder groups.

What lessons were learned? The advantages of using this process include gaining a comprehensive understanding of the true value of the OES, making informed strategic decisions with confidence, and growing capacity within the department for learning and inquiry. That said, this evaluation approach should only be used in situations where the following hold: the primary focus of the work is programme growth and improvement (rather than solely accountability); there is flexibility to change or adapt the programme; there is a core team of people who can commit to the process, as it requires dedication over a long period of time; and the culture of the organisation is one that is open to making mistakes and learning from them.


Academic Medicine | 2007

What do students actually do during a dissection course? First steps towards understanding a complex learning experience.

Andreas Winkelmann; Sven Hendrix; Claudia Kiessling


Instructional Science | 2015

Fostering professional communication skills of future physicians and teachers: effects of e-learning with video cases and role-play

Martin Gartmeier; Johannes Bauer; Martin R. Fischer; Tobias Hoppe-Seyler; Gudrun Karsten; Claudia Kiessling; Grit E. Möller; Anne Wiesbeck; Manfred Prenzel


Health Research Policy and Systems | 2015

Tools and instruments for needs assessment, monitoring and evaluation of health research capacity development activities at the individual and organizational level: a systematic review

Johanna Huber; Sushil Nepal; Daniel Bauer; Insa Wessels; Martin R. Fischer; Claudia Kiessling


Patient Education and Counseling | 2016

Development and validation of a computer-based situational judgement test to assess medical students’ communication skills in the field of shared decision making

Claudia Kiessling; Johannes Bauer; Martin Gartmeier; Peter Iblher; Gudrun Karsten; Jan Kiesewetter; Grit E. Möller; Anne Wiesbeck; Michaela Zupanic; Martin R. Fischer


Psychotherapie Psychosomatik Medizinische Psychologie | 2013

German translation and construct validation of the “Patient-Provider Orientation Scale” (PPOS-D12)

Claudia Kiessling; Götz Fabry; Martin R. Fischer; Claudia Steiner; Wolf A. Langewitz


Journal for Educational Research Online | 2017

Simulated conversations for assessing professional conversation competence in teacher-parent and physician-patient conversations

Anne Wiesbeck; Johannes Bauer; Martin Gartmeier; Claudia Kiessling; Grit E. Möller; Gudrun Karsten; Martin R. Fischer; Manfred Prenzel


Journal of Evaluation in Clinical Practice | 2014

Evaluation of health research capacity strengthening trainings on individual level: validation of a questionnaire

Johanna Huber; Daniel Bauer; Michael Hoelscher; Jerry Kapungu; Arne Kroidl; Tessa Lennemann; Lucas Maganga; Oliver Opitz; Omari Salehe; Abbie Sigauke; Martin R. Fischer; Claudia Kiessling

Collaboration


Dive into Claudia Kiessling's collaborations.

Top Co-Authors

Marc van Nuland
Katholieke Universiteit Leuven

Zoi Tsimtsiou
Aristotle University of Thessaloniki

Geurt Essers
Radboud University Nijmegen Medical Centre

Götz Fabry
University of Freiburg

Ingo Kollar
University of Augsburg

Insa Wessels
Humboldt University of Berlin