“Am I Participating Enough?” (“What Do Your Peers Say?”): Small Group Peer Assessment As An Alternative To Class Participation Grades


Reflections on Practice


Stewart L. ARNOLD


College of Business, Nanyang Business School, Nanyang Technological University (NTU)



Correspondence
Name:     Dr Stewart L. ARNOLD
Address:  Nanyang Business School, Nanyang Technological University, 50 Nanyang Avenue, Singapore 639798
Email:      sarnold@ntu.edu.sg
 


Recommended Citation:
Arnold, S. L. (2021). “Am I participating enough?” (“What do your peers say?”): Small group peer assessment as an alternative to class participation grades. Asian Journal of the Scholarship of Teaching and Learning, 11(1), 46-55.



Abstract

In this paper, I question the effectiveness of instructor/observer grading of class participation for the purpose of encouraging meaningful participation. Instead, I propose that we should promote peer learning in small group work, and subsequent peer assessment, as ways to improve the deep learning that can occur in a truly participative classroom, especially in an Asian higher education setting. I explain the peer learning and assessment practices used in a course with culturally diverse participants and reflect on three advantages that peer learning and assessment have over class participation grading for this course.

Keywords: Class participation, peer assessment, summative feedback, deep learning, higher education

Relevant Literature

In student-centred learning environments, participation has long been the key to achieving student engagement and subsequent learning outcomes (Hallinger & Lu, 2013). The benefits of student participation are well-documented (e.g., Cajiao & Burke, 2016; Rocca, 2010; Weaver & Qi, 2005). However, many problems have been reported with the grading of students’ participation (Gillis, 2019; Gilson, 1994; Litz, 2003; Mello, 2010; Paff, 2015).

Firstly, students may mistake quantity of participation for quality, speaking up as often as possible to score points towards their participation grade without adding any real depth to the classroom discussion (Litz, 2003). In addition, giving “class participation grades” often means instructors must try to get students to speak up in the larger class setting, for example by “cold calling” on students (Pepper & Pathak, 2008). Even so, as few as 10% of students in a class regularly participate in class discussions (Rocca, 2010), while as many as 75% of instructors report counting participation towards student grades (Rogers, 2013). Clearly there is a disconnect between an administrative requirement for students to speak up and a pedagogical imperative to encourage meaningful participation.

Secondly, students can perceive the grading of participation as subjective and unfair because they do not understand the requirements for quality participation (Paff, 2015). Furthermore, instructors tend to favour quantitative measurements of participation (i.e., grading students for simply speaking up in class), because it is difficult to objectively assess the quality of participation (Rocca, 2010). In fact, Bean and Peterson (1998) found that most instructors make subjective assessments of participation, based on their impressions of students. Even if instructors or other observers rate participants’ comments in real time (i.e., during a class), the ratings are prone to biases (Gillis, 2019). Little wonder that students may perceive the instructor’s grading of their participation as procedurally unjust (Pepper & Pathak, 2008).

Thirdly, grading participation does not necessarily motivate students to participate more often or in more depth (Crombie et al., 2003; Fritschner, 2000; Paff, 2015). The reasons for low student participation in classroom settings include shyness, nervousness, feelings of intimidation, “fear of being wrong”, and looking foolish in front of their peers and the instructor (Fritschner, 2000; Rocca, 2010; Weaver & Qi, 2005).

For Asian students, the potential problems of grading participation are exacerbated because of the challenges they reportedly face in speaking up in class discussions (Kim, 2006; Murray, 2018; Nakane, 2007; Takahashi, 2019; Wong & Tsai, 2007). However, there is evidence that Asian students in general do appreciate, and benefit from, interactive classes (Hallinger & Lu, 2013; Kember, 2000; Littlewood, 2001; Tan, 2016; Watkins, 2000). Their participation can be encouraged in different ways, such as allowing students to discuss in small groups before whole-class discussions, arranging the classroom to promote student interaction, and providing regular feedback (Hardy & Tolhurst, 2014). Thus, working in small groups, rather than participating in large class discussions, may be especially salient for Asian students. This is the approach I take towards course participation, as described next.

The Course and its Learning Opportunities

I teach an undergraduate elective course on “Leadership in the 21st Century” at the Nanyang Technological University (NTU), Singapore. Students are deliberately allocated into five-member groups that are as diverse as possible with respect to gender, educational background, and race. We ensure that all groups comprise members who have not previously met. As well as providing transparency in team formation, this arrangement should help students meet, learn about, and work with diverse others (Loes et al., 2018). It also reduces the possibility of biases in peer evaluation (Mayfield & Tombaugh, 2019).

Early in the semester, we have activities where team members learn about each other quickly. Greater self-disclosure among teammates fosters mutual trust and reduces any initial biases toward each other (Mayfield & Tombaugh, 2019). In addition, as recommended by O’Neill et al. (2019), formative feedback is used frequently throughout the course to develop specific skills.

Using summative peer assessment

Summative peer assessment is used to indicate students’ progress towards the course’s overarching Assessment of Learning (AoL) goal: “improve teamwork and interpersonal skills”. The team’s performance on various tasks is used to measure other AoL goals (e.g., critical thinking skills), and all team members receive the same team grade for this work. Hence, it is in students’ best interests to contribute fully to their team’s work, because both the team assessments and their individual peer assessment grades depend on that contribution. Students assess each other at mid-semester (Week 7) and end-of-semester (Week 13), when individual grades are worth 5% and 10% respectively. The course is set up such that all teams have five members, so no team is disadvantaged by having fewer team members than any other.

Students assess each other using the same assessment rubric each time. In this rubric, the two criteria—team task skills and team interpersonal skills—are defined by reference to a checklist of such skills. In order to reduce the problem of grade inflation, students have a limited number of marks to allocate across their four team members.1
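To make the allocation constraint concrete, below is a minimal, hypothetical sketch of the capped-allocation rule, using the figures given in Endnote 1 (a 60-mark budget spread across four teammates, with at most 20 marks per person). It is an illustration only, not the course’s actual online system, and all names are invented.

# Hypothetical illustration of the capped mark-allocation rule in Endnote 1
# (60 marks in total across four teammates, at most 20 per person).
# This is NOT the university's actual peer assessment software.

TOTAL_MARKS = 60      # marks each assessor may distribute
MAX_PER_PEER = 20     # ceiling per teammate
TEAM_PEERS = 4        # each student rates the other four members

def validate_allocation(marks: dict[str, int]) -> list[str]:
    """Return a list of problems with an allocation; an empty list means it is valid."""
    problems = []
    if len(marks) != TEAM_PEERS:
        problems.append(f"Expected marks for {TEAM_PEERS} peers, got {len(marks)}.")
    for peer, mark in marks.items():
        if not 0 <= mark <= MAX_PER_PEER:
            problems.append(f"{peer}: {mark} is outside the 0-{MAX_PER_PEER} range.")
    if sum(marks.values()) > TOTAL_MARKS:
        problems.append(f"Total {sum(marks.values())} exceeds the {TOTAL_MARKS}-mark budget.")
    return problems

# Example: giving one teammate full marks leaves only 40 marks for the other three.
allocation = {"Peer A": 20, "Peer B": 14, "Peer C": 13, "Peer D": 13}
print(validate_allocation(allocation))   # [] -> allocation is acceptable

The cap forces differentiation: awarding the full 20 marks to one teammate leaves only 40 to spread among the remaining three, which is precisely the anti-inflation effect described in Endnote 1.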

Students also have to submit written feedback online on each member’s “strengths” and “areas to improve”. Instructors read through all comments and moderate any that are potentially offensive or otherwise inappropriate; such moderation has rarely been necessary across the more than 5,000 qualitative assessments read over 14 semesters of this course.

Students are assured that their ratings and feedback are anonymous. When the results are released online, each student sees only their own average rating on each of the two assessment criteria, a resulting overall peer assessment grade, and a collated list of comments written by their peers. Anonymous feedback is generally considered the most effective for summative assessment when there is some assessor accountability (Panadero & Alqassab, 2019). Therefore, students are clearly informed that the instructor will read all comments and will contact the assessor if the comments do not match the ratings given.

Ashenafi (2017) recommends using automation for peer assessment tasks, because of the efficiency and timeliness of the process. This has certainly been true for the online peer assessment process in this course. The procedure for entering marks and comments is demonstrated in class, and students are advised to make practice submissions, which can be overridden up until the submission deadline. After such practice, some students may approach me individually for further guidance. Thus, the process of explaining the assessment rubrics and giving students practice opportunities is in accordance with best practices outlined in the literature (Reddy & Andrade, 2010).
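To illustrate what such automation involves, here is a minimal, hypothetical sketch that averages each student’s anonymous ratings on the two rubric criteria and collates the written comments for release. It is not the platform used in the course; the names, ratings, and comments below are invented for illustration.

# Hypothetical sketch of the automated tallying step: average each student's
# anonymous ratings on the two rubric criteria and collate peer comments.
# It does not reproduce the actual online system used in the course.
from statistics import mean

# Each submission: (assessor, assessee, ratings per rubric criterion, written comment).
submissions = [
    ("S1", "S2", {"task": 8, "interpersonal": 9}, "Kept us on schedule."),
    ("S3", "S2", {"task": 7, "interpersonal": 9}, "Made everyone feel heard."),
    ("S4", "S2", {"task": 9, "interpersonal": 8}, "Strong analysis of the cases."),
    ("S5", "S2", {"task": 8, "interpersonal": 8}, "Could share airtime more."),
]

def release_report(assessee: str) -> dict:
    """Build the anonymous report a student sees: criterion averages plus collated comments."""
    received = [s for s in submissions if s[1] == assessee]
    averages = {
        criterion: round(mean(r[2][criterion] for r in received), 1)
        for criterion in ("task", "interpersonal")
    }
    comments = [r[3] for r in received]   # assessor identities are withheld
    return {"assessee": assessee, "averages": averages, "comments": comments}

print(release_report("S2"))

Whatever the implementation, the key design choice is that assessor identities are stripped before release to the assessee, while remaining available to the instructor for moderation and accountability.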

In summary, I require all students to provide anonymous summative peer assessment (marks and written feedback) twice during the semester; and I use several processes to ensure the peer assessment will be fair and constructive in helping them to learn effective team skills.

Reflections 

The practices described above are integrated with a team-based learning approach; hence, it is difficult to isolate the effects of peer assessment. Team-based learning is known to be effective for student engagement, deep learning, and teamwork skills (Michaelsen & Sweet, 2008). Because team membership is fixed across a semester, and the team engages in many different activities, students feel a strong sense of belonging to their team (Steen-Utheim & Foldnes, 2018). The various activities within teams and the shared team outcomes (some of which are assessed) signal the importance of each member’s contribution; peer assessment is an additional reminder of students’ accountability to their team. Specifically, the use of peer learning practices and summative peer assessment appears to have three advantages over class participation grading, as discussed next.

Quality of participation and depth of learning

The student teams do perform more collaboratively as semester progresses. I observe more willingness by students to question and challenge each other and generate truly mutual solutions to tasks. The assessable team tasks show improvements in quality, and consequently the team grades increase across semester. For example, later in semester, students examine the context of case studies more thoroughly from different stakeholder perspectives. 

Learning how to give and respond to peer feedback is an important workplace skill (Cranmer, 2006). After the first peer assessment, I see evidence of students developing the capacity to respond to feedback. In particular, “quieter” students not only speak up more, but even dare to challenge others. In addition, the feedback is more insightful at the end of semester, with comments such as “You didn’t take up as much space as others in the group, but you made us all feel very supported” and “you motivated us with your non-verbals”.

To ensure constructive feedback, I used to rate the assessors on their comments. I have stopped doing this because of the heavy workload involved; however, I am happy to report that students still provide constructive feedback to each other. In fact, the comments are so insightful that I usually include them in reference letters that I write for ex-students because employers appreciate knowing what an applicant’s peers think of him or her (Figure 1). 

Figure 1. An example of how a student’s group work and peer assessment feedback is incorporated into a reference letter.

Perceptions of peer assessment

As reported in the literature (Ashenafi, 2017), students initially feel uneasy about rating their peers. However, they accept the rationale for it by the end of semester and even feel empowered that it is they who rate their peers, rather than the instructor alone making subjective assessments of “participation”. The student evaluations of my courses often include comments such as: “Those peer assessments made me think about my team members in a positive light, not just being critical” and “analysing how others participated got me to self-reflect and I looked forward to getting their feedback about me”.

Occasionally, students complain that they want to give each other higher marks because “everyone in the team” has contributed well. They accept my explanation that if all team members have performed very well, then the rewards should be higher team scores for the various team assessments. 

My experience with other undergraduate and postgraduate courses in Singapore is that if students are not forced to discriminate in their marking, most will simply give each other full or near-full marks. This can lead to an extra problem if a student does discriminate strongly between his/her peers: the low-scoring peers in that group are disadvantaged because they would probably not be penalised if they were in another group.

Motivation to participate more

As noted earlier, commitment to their team probably motivates students as much as the peer assessment grades on offer.

In the second half of semester, typically around 15% of students opt for only a “Pass/Fail” grade for the course. Such students still attend class and engage actively; however, it is obvious that they have stopped preparing conscientiously for classes. The final peer assessment grades for these students are predictably low; however, the students themselves continue to benefit from the group work.

In one notable instance, a student who was confined to bed for the last four weeks of class apparently still prepared and presented her ideas on the team activities to her team-mates by WhatsApp messages.2 She was rated higher than average by her peers, with comments about being “willing to pitch in for the group, despite being sick”, “you show great team spirit” and so on. In the end, she opted for “Pass/Fail” for the course, but obviously her commitment to her team-mates did not wane!

Students are not pressured to speak up during the large group discussions. Instead, the usual 10% or so of class members speak up regularly (a figure consistent with the literature on class participation [Rocca, 2010]). However, because the discussion topic has been explored and debated within all small groups, when the natural “talkers” present to the large class, everyone listens intently in order to compare other groups’ output to their own group’s work.
 
This is particularly the case for the local (Singaporean) students in each class. In facilitating the whole-of-class discussions, I note that exchange students from Europe, North America and Australia speak up the most. However, as I observe the small group activities, I notice from their body language and verbal interactions that the local students are fully engaged in those activities. Every semester, I am pleasantly surprised to learn from the feedback that certain “quiet” students (local and overseas) have in fact made constructive contributions to their small groups.

Using peer assessments also takes pressure off me to make instructor-based assessments of participation. Instead, I can focus fully on facilitating the very best possible small group work and class discussions for student learning. 

Conclusion

In my undergraduate classes on Leadership, where there is a diversity of participants, a range of different small and large group activities seems to benefit most students. In all the small group activities, I notice that Asian students are as engaged, vocal, and motivated as the international students. Similar observations have been reported in racially homogeneous classrooms and culturally diverse classrooms alike (Crosthwaite et al., 2015).

Summative peer assessment is an integral part of the team-based learning approach I adopt and I unashamedly promote it! Brutus et al. (2013) argue that if a standardised peer evaluation system is used within a business school, students get the opportunity to practice evaluating their peers on the standard criteria. Consequently, they feel more confident in evaluating their peers and will be better able to provide detailed, specific feedback to each other, which are skills relevant to managerial practice. 

Assessment should not be a necessary evil; it should contribute to learning. Ultimately, to produce valuable learning, our summative assessment should become more formative in nature and thereby encourage deeper learning, beyond simply providing marks (Geertsema, 2017). In my experience, constructive peer assessment is one tool that can achieve this more effectively than the common practice of the instructor grading students’ class participation.

Endnotes

  1. In order to reduce the problem of grade inflation, students have a limited number of marks (60) to allocate across their four team members, so they cannot give full marks (20 each) to everyone. In fact, if they give full marks to anyone, they would have relatively few marks left (40) to spread among the remaining members. It is very rare for students to give full marks to a team member, just as it is rare for any student to score 100% on other forms of assessment. 

    Students can allocate 15 marks to each member (so that all get the same marks, and a subsequent grade of A-), but they are encouraged to discriminate between their team members, where possible. Students sometimes complain about this system, wanting to award higher marks to all team members, but I point out that even if they choose the non-discrimination option, the average grade of A- will not disadvantage most students. In fact, the students perceived as having the highest level of skills in a team usually get an average rating of 8 or 9 on one or both criteria. This translates to an A or A+ for their peer assessment grade and is often aligned with their grades for the other assessment components. 

    Thus, the peer evaluation contributes to a holistic assessment of students’ leadership knowledge and skills.

  2. All assessments except for quizzes are “open-book” so phone use is allowed. 

 

References

Ashenafi, M. (2017). Peer-assessment in higher education–twenty-first century practices, challenges and the way forward. Assessment & Evaluation in Higher Education, 42(2), 226–251. https://doi.org/10.1080/02602938.2015.1100711 

Bean, J. C., & Peterson, D. (1998). Grading classroom participation. New Directions for Teaching and Learning, 74, 33–40. https://doi.org/10.1002/tl.7403 

Brutus, S., Donia, M. B. L., & Ronen, S. (2013). Can business students learn to evaluate better? Evidence from repeated exposure to a peer-evaluation system. Academy of Management Learning & Education, 12, 18–31. https://doi.org/10.5465/amle.2010.0204 

Cajiao, J., & Burke, M. J. (2016). How instructional methods influence skill development in management education. Academy of Management Learning & Education, 15(3), 508–524. https://doi.org/10.5465/amle.2013.0354 

Cranmer, S. (2006). Enhancing graduate employability: Best intentions and mixed outcomes. Studies in Higher Education, 31(2), 169–184. https://doi.org/10.1080/03075070600572041 

Crombie, G., Pyke, S. W., Silverthorn, N., Jones, A., & Piccinin, S. (2003). Students’ perceptions of their classroom participation and instructor as a function of gender and context. Journal of Higher Education, 74(1), 51-76. http://www.jstor.org/stable/3648264 

Crosthwaite, P., Bailey, D., & Meeker, A. (2015). Assessing in-class participation for EFL: considerations of effectiveness and fairness for different learning styles. Language Testing in Asia, 5(1), 1–19. https://doi.org/10.1186/s40468-015-0017-1 

Fritschner, L. M. (2000). Inside the undergraduate college classroom: Faculty and students differ on the meaning of student participation. The Journal of Higher Education, 71(3), 342–362. https://doi.org/10.2307/2649294 

Gillis, A. (2019). Reconceptualizing participation grading as skill building, Teaching Sociology, 47(1), 10–21. https://doi.org/10.1177/0092055x18798006 

Gilson, C. (1994). Of dinosaurs and sacred cows: The grading of class participation. Journal of Management Education, 18(2), 227-236. https://doi.org/10.1177%2F105256299401800207 

Geertsema, J. (2017). Learning-oriented assessment and the scholarship of teaching and learning: A review of Excellence in University Assessment by David Carless. Asian Journal of the Scholarship of Teaching and Learning, 7(1), 83-90. https://nus.edu.sg/cdtl/docs/default-source/engagement-docs/publications/ajsotl/archive-of-past-issues/year-2017/v7n1_may2017/pdf_vol7n1_johangeertsema.pdf?sfvrsn=3561ac40_2 

Hallinger, P. & Lu, J. (2013). Learner-centered higher education in East Asia: Assessing the effects on student engagement. International Journal of Educational Management, 27(6), 594–612. https://doi.org/10.1108/IJEM-06-2012-0072 

Hardy, C. & Tolhurst, D. (2014). Epistemological beliefs and cultural diversity matters in management education and learning: A critical review and future directions. Academy of Management Learning & Education, 13(2), 265–289. https://doi.org/10.5465/amle.2012.0063 

Hartman, N. S., Allen, S. J., & Miguel, R. F. (2015). An exploration of teaching methods used to develop leaders: Leadership educators' perceptions. Leadership & Organization Development Journal, 36(5), 454-472. https://doi.org/10.1108/LODJ-07-2013-0097 

Kember, D. (2000). Misconceptions about the learning approaches, motivation and study practices of Asian students. Higher Education, 40(1), 99-121. http://www.jstor.org/stable/3447953 

Kim, S. (2006). Academic oral communication needs of East Asian international graduate students in non-science and non-engineering fields. English for Specific Purposes, 25, 479–489. https://doi.org/10.1016/j.esp.2005.10.001 

Littlewood, W. (2001). Students’ attitudes to classroom English learning: A cross-cultural study. Language Teaching Research, 5(1), 3–28. https://doi.org/10.1177%2F136216880100500102 

Litz, R. (2003). Red light, green light and other ideas for class participation-intensive courses: Method and implications for business ethics education. Teaching Business Ethics, 7(4), 365-378. https://doi.org/10.1023/B:TEBE.0000005710.76306.1d 

Loes, C. N., Culver, K. C., & Trolian, T. L. (2018). How collaborative learning enhances students’ openness to diversity. The Journal of Higher Education, 89(6), 935-960. https://doi.org/10.1080/00221546.2018.1442638 

Mayfield, C. O., & Tombaugh, J. R. (2019). Why peer evaluations in student teams don’t tell us what we think they do. Journal of Education for Business, 94(2), 125-138. https://doi.org/10.1080/08832323.2018.1503584 

Mello, J. A. (2010). The good, the bad, and the controversial: The practicalities and pitfalls of grading class participation. Academy of Educational Leadership Journal, 14(1), 77–97. https://www.abacademies.org/articles/aeljvol14no12010.pdf 

Michaelsen, L. K., & Sweet, M. (2008). The essential elements of team-based learning. New Directions for Teaching and Learning, 116, 7–27. https://doi.org/10.1002/tl.330 

Murray N. (2018). Understanding student participation in the internationalised university: Some issues, challenges, and strategies. Education Sciences, 8(3), 96-106. https://doi.org/10.3390/educsci8030096 

Nakane, I. (2007). Silence in intercultural communication: Perceptions and performance. Amsterdam: John Benjamins Publishing.

O’Neill, T., Larson, N., Smith, J., Donia, M., Deng, C., Rosehart, W., & Brennan, R. (2019). Introducing a scalable peer feedback system for learning teams. Assessment & Evaluation in Higher Education, 44(6), 848-862. https://doi.org/10.1080/02602938.2018.1526256 

Paff, L. (2015). Does grading encourage participation? Evidence and implications. College Teaching, 63(4), 135–145. https://doi.org/10.1080/87567555.2015.1028021 

Panadero, E., & Alqassab, M. (2019). An empirical review of anonymity effects in peer assessment, peer feedback, peer review, peer evaluation and peer grading. Assessment & Evaluation in Higher Education, 44(8), 1253-1278. https://doi.org/10.1080/02602938.2019.1600186 

Pepper, M., & Pathak, S. (2008). Classroom contribution: What do students perceive as fair assessment? Journal of Education for Business, 83(6), 360–368. https://doi.org/10.3200/JOEB.83.6.360-368 

Reddy, Y. M. & Andrade, H. (2010). A review of rubric use in higher education. Assessment & Evaluation in Higher Education, 35(4), 435-448. https://dx.doi.org/10.1080/02602930902862859   

Rocca, K. A. (2010). Student participation in the college classroom: An extended multidisciplinary literature review. Communication Education, 59, 185–213. https://doi.org/10.1080/03634520903505936 

Rogers, S. (2013). Calling the question: Do college instructors actually grade participation? College Teaching, 61(1), 11–22. https://doi.org/10.1080/87567555.2012.703974 

Steen-Utheim, A. T., & Foldnes, N. (2018). A qualitative investigation of student engagement in a flipped classroom. Teaching in Higher Education, 23(3), 307-324. https://doi.org/10.1080/13562517.2017.1379481 

Takahashi, J. (2019). East Asian and native-English-speaking students’ participation in the graduate-level American classroom. Communication Education, 68(2), 215–234. https://doi.org/10.1080/03634523.2019.1566963 

Tan, C. (2016). Teacher-directed and learner-engaged: Exploring a Confucian conception of education. Ethics and Education, 10(3), 302–312. https://doi.org/10.1080/17449642.2015.1101229 

Watkins, D. (2000). Learning and teaching: a cross-cultural perspective. School Leadership & Management, 20(2), 161-173. https://doi.org/10.1080/13632430050011407 

Weaver, R. R., & Qi, J. (2005). Classroom organization and participation: College students’ perceptions. The Journal of Higher Education, 76(5), 570–601. https://doi.org/10.1080/00221546.2005.11772299 

Wong, Y., & Tsai, J. (2007). Culture models of shame and guilt. In J. L. Tracy, R. W. Robins, & J. P. Tangney (Eds.), The self-conscious emotions: Theory and research (pp. 209–233). New York: The Guilford Press.


About the Author

Stewart L. ARNOLD teaches leadership courses to a range of students from different programs across NTU. He has been a management consultant for over 30 years and brings many diverse experiences into his teaching. This engages students and prepares them for leadership in their daily lives and in the workplace. Stewart is particularly focused on the benefits of collaborative learning.

Stewart can be reached at sarnold@ntu.edu.sg.