Responding to student feedback: Individualising teamwork scores based on peer assessment
Mahdi Jalili and Homa Babai Shishavan
Abstract
Teamwork is critical in engineering education. However, assessing teamwork effectively is challenging. A particular challenge is how to account for individual contributions in team projects. Engineering team projects often result in a single final product, for which all team members receive a single assessment mark regardless of their contribution to the teamwork. In this work, we propose a peer assessment process to individualise team members' marks in an effective and fair manner. The proposed process was developed in response to students' course evaluation feedback asking for final assessment marks that more accurately reflect their individual contributions to the completion of the teamwork. Our data from engineering design courses of the Electrical and Electronics Engineering programs at RMIT University (Melbourne, Australia) showed that the proposed individualisation significantly improved the students' appreciation of the teamwork aspects of the courses. It also significantly enhanced the students' engagement in the courses, as evidenced by improved teaching scores and the overall satisfaction index in the course evaluation survey.
Keywords:
Peer assessment, teamwork, individualised score, student feedback, student experience.
Introduction
Acquiring efficient teamwork skills is an important part of courses in higher education, as a desirable graduate outcome and an important employability skill. Both educators and employers encourage teamwork as an important skill required to undertake a professional job (Budimac, Putnik, Ivanović, Bothe, & Schuetzler, 2011; Iacob & Faily, 2019; Riebe, Girardi, & Whitsed, 2016). Teamwork has an important role in engineering education, where students are challenged to work in a team to improve their engineering problem solving skills (Bae, Ok, & Noh, 2019; Lingard & Barkataki, 2011; Weinberg, White, Karacal, Engel, & Hu, 2005). Engineering courses in Australian higher education institutes are accredited by Engineers Australia, a member of the Washington Accord and a not-for-profit professional national organisation dedicated to establishing a forum for the advancement and standardisation of engineering disciplines within Australia. Engineers Australia frequently reviews engineering degrees offered by Australian universities and accredits the eligible ones. To be accreditable, universities must show that the courses offered in their engineering programs provide the skillset outlined in "Stage 1 competency standards for professional engineer" (Engineers Australia, 2019).
As part of the elements and indicators of engineering application ability, graduates are required to "Contribute(s) to and/or manage(s) complex engineering project activity, as a member and/or as the leader of an engineering team." One of the six professional and personal attributes is "Effective team membership and team leadership", where graduates are required to: a) understand the fundamentals of team dynamics and leadership; b) function as an effective member or leader of diverse engineering teams, including those with multi-level, multi-disciplinary and multi-cultural dimensions; c) earn the trust and confidence of colleagues through competent and timely completion of tasks; d) recognise the value of alternative and diverse viewpoints, scholarly advice and the importance of professional networking; e) confidently pursue and discern expert assistance and professional advice; and f) take initiative and fulfil the leadership role whilst respecting the agreed roles of others. Including the above skillset in the accreditation process is indicative of the importance of developing teamwork skills for engineering graduates. While courses with a focus on building teamwork skills are critical in engineering programs, there is often a lack of clarity around measuring and assessing teamwork skills, which makes developing them in the curriculum a challenge (Britton, Simper, Leger, & Stephenson, 2017). It has been noted that in some cases students do not get enough clarity on outcomes and what is expected of them during teamwork assessment tasks (Mellor, 2012), and their failure to work together effectively remains a frequently reported issue in collaborative learning. Various strategies can be taken to address this problem. For example, team-building interventions have been shown to effectively increase students' positive perception of team performance (Kapp, 2009).
Employing effective assessment criteria for teamwork that consider team dynamics and evaluate how the team works together (Volkov, 2007) can also provide a solution to this problem. For example, the framework suggested by Lingard (2010) to assess teamwork, which is based on assessing various skills such as completing individual tasks promptly, accomplishing a fair share of the work, introducing new ideas, expressing oneself clearly, sharing knowledge, listening to others' opinions and views, and showing respect to other team members, can be effective in tackling this problem (Lingard, 2010). Another common problem in assessing teamwork skills is that evaluating the development of teamwork and individualising the team marks is not simple, as on many occasions there is likely no objective evidence to study (Fidalgo-Blanco, Sein-Echaluce, García-Peñalvo, & Conde, 2015). While giving a single mark for the team product is the easiest way of assessing teamwork, it is not without limitations. A common drawback of this approach is that everyone in the group gets the same score regardless of their contribution to the team. An equally common view of many students is that they did not receive the marks that truly reflect what they did for their team project (Mellor, 2012). This is especially the case in computer science and engineering subjects (Russell, Haritos, & Combes, 2006), as the outcome of teamwork is usually a single end-product, which receives a single assessment mark from assessors. Therefore, it is necessary to individualise the group marks in an efficient and fair manner. Self and peer assessment can be used to individualise group marks. Self assessment refers to the evaluation by team members of their own performance or contribution to the team (Budimac et al., 2011).
Although research has shown that project success correlates with team members' self assessment of effectiveness (Lingard & Berry, 2000), team members are likely to rate themselves more leniently than others rate them (Gordon et al., 2016). Peer assessment, on the other hand, refers to students formally assessing the performance of their team members in the team activity (Topping, 2009). Research suggests that peer assessment is a reliable, useful and valid exercise for the assessments made by independent assessors (Falchikov, 1995). A recent study suggested using badges representing individual team members' contributions to the team as an effective peer assessment strategy (Kubincová, Šuníková, & Homola, 2019). The authors showed that the proposed approach has a positive impact on students' engagement with peer assessment compared to open-text questions. A separate study showed that, on average, peer assessments are very similar to those made by teachers when a general judgment based on well understood rubrics is used (Falchikov & Goldfinch, 2000). In this work, we propose a peer assessment process to individualise group marks, which we have piloted in engineering design courses, namely Professional Engineering Project Part A & B (postgraduate) and Engineering Design Project Part A & B (undergraduate), offered in the School of Engineering, Royal Melbourne Institute of Technology (RMIT), Melbourne, Australia. This process was developed and implemented in response to the course evaluation feedback that we received from the students in 2017. At the time, all team members received a single mark for the teamwork-based assessment tasks undertaken in these courses. However, their qualitative feedback suggested that they thought their teamwork marks did not reflect their true individual contributions to the completion of the task, and that to improve the course the marks needed individualisation.
The analysis of qualitative and quantitative student feedback both before and after the implementation of the peer assessment process demonstrated that it provides a fair approach to individualising the group marks and improves students' course experience.
Materials and methods
Course overview
Professional Engineering Project Part A & B and Engineering Design Project Part A & B are offered in undergraduate and postgraduate programs in Electrical Engineering, Electrical and Electronics Engineering, Computer and Network Engineering, and Communications Engineering. These courses run over two semesters, and their purpose is to introduce students to engineering projects and refine their analytical and practical design skills through teamwork. The students engage in a team project (each team with 4-6 members) under the supervision of an academic staff member, leading to the design and production of an engineering product. In semester one, the students work in a set team to develop an Electrical, Electronic, Network, or Communication Engineering product idea. The teams are expected to complete their project definition and requirement analysis, and make significant progress in designing and building their product. Teams complete their project in semester two, where they demonstrate their working prototype/product at the School of Engineering Trade Fair, which is usually attended and judged by many industry representatives.
Assessment marking
There are four tasks to be completed in Part A and three tasks in Part B. In Part A, teams are assessed for project definition, requirement analysis, presentation and progress report. The assessments in Part B include completion plan, presentation and final report. Marking of each assessment involves three steps. First, academic staff mark the assessment tasks, and each team receives a single mark for each task. Team members also assess their peers’ performance after each assessment task. The marks received from the academics and the results of the peer assessment are then used to individualise and finalise the marks of each team member.
Peer assessment and mark individualisation procedure
Marks of individual students are determined by taking into account the peer assessment marks provided by their team members. This is achieved by inviting all team members to participate in the peer assessment process; however, participation in peer assessment is not compulsory (see Appendix A for the peer assessment criteria). After the completion of each assessment task, students are given a week to peer-assess all team members, including themselves, on their contribution to the completion of the assignment. Peer assessment is conducted using an online tool, SparkPlus (https://sparkplus.com.au/), which is supported by RMIT University. The average of the peer assessment marks received by a student from everyone in the team is then calculated. The mark the team received from the academics is then individualised proportionally to the average peer assessment mark each student received from the team members. If a student receives an average peer assessment of zero, their individual mark for that assignment will be zero regardless of the mark given to the team by the academic staff; a zero mark from all team members indicates that the student made no individual contribution to the completion of the task. Table 1 illustrates an example of how the team marks are individualised based on peer assessments.
Table 1:
The table shows an example of how a group assessment mark is individualised based on peer assessment marks. The team received 75 (out of 100) from the academic for their assignment. Only four of the five team members participated in the peer assessment; Student 2 did not complete it. The team's mark is individualised proportionally to the average peer assessment mark given to each student. In this example, the average individual peer assessment marks range from 70% to 90%. Accordingly, the individual marks vary from 62.5/100 to 80.36/100. The individual marks are proportional to the peer assessments; for example, the individual mark of Student 1 is 80.36, obtained by multiplying 75 (the team mark received from the academic) by 90 and then dividing by 84 (the average of the peer assessment marks).
                              Peer assessment completed for:
Name of assessor              Student 1   Student 2   Student 3   Student 4   Student 5
Student 1
Student 2                     -           -           -           -           -
Student 3
Student 4                     80%         80%         80%         100%        60%
Student 5                     80%         80%         80%         80%         80%
Mean peer assessment mark     90%         80%         90%         90%         70%
Team mark from the academic   75          75          75          75          75
Individual mark               80.36       71.43       80.36       80.36       62.50
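To make the individualisation rule concrete, it can be expressed as a short script. The following Python sketch is illustrative only: the function name and data layout are our assumptions, and it does not reproduce the actual SparkPlus workflow. It scales the single team mark by each student's mean peer assessment relative to the team's overall mean, with the special case that a zero average peer mark yields a zero individual mark.

```python
def individualise(team_mark, mean_peer_marks):
    """Scale a single team mark into individual marks.

    team_mark: the mark (0-100) the academic gave the whole team.
    mean_peer_marks: the average peer assessment mark (0-100) each
        student received from their team members.
    A student whose average peer mark is zero receives zero,
    regardless of the team mark.
    """
    overall_mean = sum(mean_peer_marks.values()) / len(mean_peer_marks)
    return {
        student: 0.0 if mean == 0 else team_mark * mean / overall_mean
        for student, mean in mean_peer_marks.items()
    }

# Figures from Table 1: team mark 75; mean peer marks 90, 80, 90, 90, 70.
marks = individualise(75, {"Student 1": 90, "Student 2": 80,
                           "Student 3": 90, "Student 4": 90,
                           "Student 5": 70})
# Student 1: 75 * 90 / 84 = 80.36 (rounded); Student 5: 75 * 70 / 84 = 62.5
```

Note that this scheme preserves the team average: the mean of the individual marks equals the team mark whenever all students have a non-zero peer average.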
Results
In this section, we discuss our quantitative and qualitative results before and after the implementation of the peer assessment strategy for the fair individualisation of team marks.
Pre peer assessment
In this case, students were not assessed for their team contribution, and a single mark was given to each team. In the written feedback provided in the Course Evaluation Survey (CES) report (see Appendix B for the CES procedure), 24% of the students who provided feedback in the "best aspects of the course" section appreciated the teamwork aspects of the course (Table 3). When students were asked about the aspects of the course that needed improvement, 26% of them explicitly mentioned the marking process and how individual contributions were captured as an area for improvement (Table 3). The following quotations are examples of the student feedback in this regard: "There's no consequence for not contributing to the group assessments"; "It makes it way too easy for group members to be lazy and let the rest of the group do everything"; "As with any group work, two students allocated to our group completed little to zero work all year but will receive the same mark as the rest of us"; and "Team mates not pulling their weight and not being marked down enough for it". To respond to the students' feedback, we developed and implemented the peer assessment process to individualise the team marks.
Post peer assessment
The students were asked to complete peer assessments after each assignment. Table 2 shows the variation of the peer assessment marks in terms of within-team standard deviation. A standard deviation of 0 indicates that the average peer assessment mark is the same for all team members, while a large standard deviation indicates high variability in the team contributions as assessed by team members. Perfectly equal contributions varied from 9% of the teams (Assignment 1) to 26% (Assignment 4), and about half the teams provided peer assessments with rather low variability (standard deviation < 2). In Assignments 6 and 7 (Final report and Demonstration), we found large variability in the team peer assessments, with 48% of the cases having a standard deviation higher than 5. This resulted in large variation of the final individualised marks for these cases, where the final assignment mark was increased for some team members and reduced for others, based on their peer assessment marks. This result further justifies the use of peer assessments for mark individualisation.
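The within-team spread summarised above can be computed as the standard deviation of each team's mean peer assessment marks. The sketch below assumes the population standard deviation; the paper does not state which variant was used, so this is an illustrative guess rather than the authors' exact calculation.

```python
from statistics import pstdev

def team_spread(mean_peer_marks):
    """Standard deviation (population) of the mean peer assessment
    marks (0-100) within one team; 0 means every member received
    the same average peer assessment mark."""
    return pstdev(mean_peer_marks)

# The Table 1 team: mean peer marks 90, 80, 90, 90, 70.
team_spread([90, 80, 90, 90, 70])  # -> 8.0
```

A spread above 5, as seen for roughly half the teams in the final two assignments, therefore corresponds to noticeably unequal peer ratings within the team.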
Table 2:
This table shows the percentage of teams with a standard deviation (std) of peer assessment marks below a certain value, where the std is computed over the peer assessment marks within a team. The peer assessment marks varied between 0 and 100.

               % of teams with std
Assessment     = 0    < 1    < 2    < 5    Total
1              9      40     53     83     100
2              13     40     53     70     100
3              17     39     48     65     100
4              26     52     57     79     100
5              22     52     65     70     100
6              17     30     39     52     100
7              13     30     35     52     100

Analysis of the qualitative feedback provided in the CES survey (see Appendix B) showed a 42% increase in students' positive feedback indicating appreciation of the teamwork aspect of the assessment tasks, compared to the feedback received prior to the implementation of the mark individualisation process (Table 3). Almost one-third of the students who provided CES feedback explicitly mentioned that they liked the teamwork aspects of the course. The following quotations are instances of student feedback reflecting their appreciation of teamwork skill development in the courses after the implementation of the peer assessment process to individualise their marks: "I learned working in team and problem solving skills", "(I liked) being in groups which helps us in our communication and also providing feedback and tips to do the given module is pretty good", "This course helps in building the Industrial Network and team building", and "Working in a project with different people gives training on how to perform in a group". Teaching quality of courses at RMIT is assessed through the Good Teaching Scale (GTS) in the CES survey (see Appendix B). Teams were asked to complete the CES for their academic supervisors. Our data showed that the post peer assessment mean GTSs (mGTSs) were significantly higher than the pre peer assessment mGTSs (P < 0.018; Wilcoxon's rank-sum test) (Table 4).
The peer assessment process improved the mGTS by 8%, while other courses offered in the school and at RMIT experienced less than 1% improvement over the same period. The Overall Satisfaction Index (OSI) of the course also increased from 4 to 4.12 after implementing the peer assessment. The improvement in the mGTS and OSI is evidence of the positive impact of the proposed peer assessment process on student engagement with the course.
Table 3:
Impact of group mark individualisation based on the peer assessment policy on the feedback provided by students in the CES survey.
Section of the CES                                                    Pre peer assessment   Post peer assessment
% of students who commented on teamwork appreciation in
"What are the best aspects of this course?"                           24%                   34%
% of students who commented on inappropriate team assessment in
"What aspects of this course are in most need of improvement?"        26%                   7%
Table 4:
Impact of group mark individualisation on student engagement with the course, measured by mGTS and OSI (see Appendix B for details). mGTS shows the mean and standard deviation of the mGTSs of the academic staff involved in supervising the student projects. The mGTS post peer assessment is significantly higher than that pre peer assessment (P < 0.018; Wilcoxon's rank-sum test).
                        mGTS         OSI
Pre peer assessment     4.04±0.47    4
Post peer assessment                 4.12
Conclusion
Efficient teamwork skills are essential for many engineering jobs. Many accreditation agencies, such as Engineers Australia, the main accreditation body for engineering degrees in Australian universities, expect engineering graduates to gain proficiency in teamwork skills in their education programs. To become truly proficient in teamwork skills, students must be offered courses with teamwork at their core. However, assessing teamwork is challenging. A critical challenge is how to effectively individualise team marks. Engineering team projects often result in a single end-product, which receives a single mark from assessors. In this manuscript, we investigated the challenge of individualising team marks in engineering design courses offered as part of the undergraduate and postgraduate programs in Electrical and Electronics Engineering at RMIT University (Melbourne, Australia). We proposed an online peer assessment strategy, where after each assignment students were asked to complete a peer assessment deployed in an online platform (SparkPlus) to evaluate their team members' performance in that assignment. The average rating that each student received from their team members was then used to individualise the group mark for that student. This resulted in a fair distribution of individual marks and recognition of individual contributions in the marking process. Our analysis of the data provided in course evaluation survey reports showed that the proposed peer assessment process resulted in more appreciation of teamwork and far fewer complaints about marking fairness for non-contributing team members. The peer assessment was also a main contributor to improved good teaching scores for the academic supervisors involved in supervising the team projects. The data also showed that students' engagement with the course was significantly improved.
References
Bae, S. A., Ok, S.-Y., & Noh, S. R. (2019). Effects of teamwork competence on problem solving in engineering students: mediating effect of creative personality. Journal of Engineering Education Research, 22(3), 32-40.
Britton, E., Simper, N., Leger, A., & Stephenson, J. (2017). Assessing teamwork in undergraduate education: a measurement tool to evaluate individual teamwork skills. Assessment & Evaluation in Higher Education, 42(3), 378-397.
Budimac, Z., Putnik, Z., Ivanović, M., Bothe, K., & Schuetzler, K. (2011). On the assessment and self-assessment in a students teamwork based course on software engineering. Computer Applications in Engineering Education, 19(1), 1-9.
Engineers Australia. (2019). Stage 1 competency standards for professional engineer.
Falchikov, N. (1995). Peer feedback marking: developing peer assessment. Innovations in Education and Training International, 32(2), 175-187.
Falchikov, N., & Goldfinch, J. (2000). Student peer assessment in higher education: A meta-analysis comparing peer and teacher marks. Review of Educational Research, 70(3), 287-322.
Fidalgo-Blanco, Á., Sein-Echaluce, M. L., García-Peñalvo, F. J., & Conde, M. Á. (2015). Using Learning Analytics to improve teamwork assessment. Computers in Human Behavior, 75, 149-156.
Gordon, C. J., Jorm, C., Shulruf, B., Weller, J., Currie, J., Lim, R., & Osomanski, A. (2016). Development of a self-assessment teamwork tool for use by medical and nursing students. BMC Medical Education, 16, 218. doi:10.1186/s12909-016-0743-9
Iacob, C., & Faily, S. (2019). Exploring the gap between the student expectations and the reality of teamwork in undergraduate software engineering group projects. Journal of Systems and Software, 157, 110393.
Kapp, E. (2009). Improving student teamwork in a collaborative project-based course. College Teaching, 57, 139-143.
Kubincová, Z., Šuníková, D., & Homola, M. (2019). Badges for peer assessment of teamwork in organized education. Journal of Universal Computer Science, 25(2), 1589-1607.
Lingard, R., & Barkataki, S. (2011). Teaching teamwork in engineering and computer science. Paper presented at the Frontiers in Education Conference.
Lingard, R., & Berry, E. (2000). Improving team performance in software engineering. In C. Chambers (Ed.), Selected Papers from the 11th International Conference on College Teaching and Learning. Florida Community College at Jacksonville.
Lingard, R. W. (2010). Teaching and assessing teamwork skills in engineering and computer science. Systems, Cybernetics and Informatics, 8, 34-37.
Mellor, T. (2012). Group work assessment: some key considerations in developing good practice. Planet, 25, 16-20.
Riebe, L., Girardi, A., & Whitsed, C. (2016). A systematic literature review of teamwork pedagogy in higher education. Small Group Research, 47(6), 619-664.
Russell, M., Haritos, G., & Combes, A. (2006). Individualising students' scores using blind and holistic peer assessment. Engineering Education, 1(1), 50-60.
SparkPlus. (2020). https://sparkplus.com.au/
Topping, K. J. (2009). Peer assessment. Theory Into Practice, 48(1), 20-27.
Volkov, A. (2007). Teamwork and assessment: A critique. e-Journal of Business Education & Scholarship of Teaching, 1, 59-64.
Weinberg, J. B., White, W. W., Karacal, C., Engel, G., & Hu, A.-P. (2005). Multidisciplinary teamwork in a robotics course. ACM SIGCSE Bulletin, 37(1).
Appendices
Appendix A: Questions asked in the peer assessment enquiry
Each student is assessed by their team members on their contributions to the team activity, overall performance and level of involvement in the project work. The team members are assessed against the following criteria: a) ethical conduct and professional accountability (20%); b) creative, innovative and proactive demeanour (20%); c) efficient use and management of information (20%); d) orderly self-management (20%); and e) effective team membership (20%).
Students provide a rating of 0-10 for each of the criteria above for each of their team members, which is then averaged over all criteria and multiplied by 10 to give a percentage (0%-100%) for each peer assessment. When logged in to the online system (SparkPlus), students find the assignment to which the peer assessment relates. Then, for each criterion above, they find the peers they are required to assess. Students can also provide written feedback to each of their peers in the team. The identity of peer assessors is hidden from the peers. When the peer assessment period concludes, the average peer assessment received by each student, along with the written feedback provided by peers, is published and becomes accessible to students in their portal.
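The aggregation described above (five 0-10 criterion ratings, averaged and scaled to a percentage) can be sketched as follows. This is a hypothetical helper for illustration, not SparkPlus's actual code.

```python
def peer_percentage(criterion_ratings):
    """Convert one assessor's five 0-10 criterion ratings (equally
    weighted at 20% each) into a 0-100% peer assessment mark."""
    assert len(criterion_ratings) == 5, "one rating per criterion"
    assert all(0 <= r <= 10 for r in criterion_ratings)
    return sum(criterion_ratings) / len(criterion_ratings) * 10

peer_percentage([8, 9, 7, 8, 8])  # -> 80.0
```

Because the criteria are equally weighted, averaging the raw ratings and scaling by 10 is equivalent to summing the five 20% weighted contributions.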
Appendix B: Course evaluation survey questionnaire
In the last four weeks of each semester, students are asked to complete a Course Evaluation Survey (CES) for each course in which they are enrolled. They provide written feedback on their experience of the course and also provide a rating of 1-5 for each of the following questions:
Q1. The teaching staff are extremely good at explaining things.
Q2. The teaching staff normally give me helpful feedback on how I am going in this course.
Q3. The teaching staff in this course motivate me to do my best work.
Q4. The teaching staff work hard to make this course interesting.
Q5. The staff make a real effort to understand difficulties I might be having with my work.
Q6.