The Development of a Scoring Rubric for Studying Graduate Teaching Assistants’ Competence in Collaborative Learning Lesson Planning and Implementation
1Mark GAN Joo Seng and 2Sapthaswaran Veerapathiran
1 Centre for Development of Teaching and Learning, NUS
2 Faculty of Science, NUS

Correspondence
Name: Dr Mark GAN Joo Seng
Address: Centre for Development of Teaching and Learning, National University of Singapore, 15 Kent Ridge Road, Singapore 119225
Gan, M. J. S., & Sapthaswaran, V. (2020). The development of a scoring rubric for studying graduate teaching assistants' competence in collaborative lesson planning and implementation. Asian Journal of the Scholarship of Teaching and Learning, 10(1), 40-52.
Teaching assistants play an important role in teaching and learning in tertiary institutions. In this study, we explore the development and use of a scoring rubric to investigate Graduate Teaching Assistants' (GTAs) competence in adopting collaborative learning (CL) scripts during a two-day Teaching Assistant Programme (TAP). In particular, we evaluate the extent to which the rubric can reliably help us examine GTAs' effectiveness in instructional planning and implementation during the programme's micro-teaching sessions. GTAs from one cohort of the TAP volunteered to participate in this study. Video recordings of the micro-teaching sessions and GTAs' written lesson plans were collected and analysed with a scoring rubric. The scoring rubric was developed through an iterative process by student research assistants and a researcher, grounded in the research literature on CL scripting, and included the following criteria: Prior Knowledge, Evaluation, and the five components of a CL script. Overall, the findings indicated that the rubric provided a detailed evaluation of GTAs' use of CL scripts to structure and teach a CL lesson. GTAs were able to design interesting CL activities, but were less focused on role assignment and distribution, and on monitoring and evaluating the outcomes of CL. It was through the evaluation of micro-teaching using the rubric that we uncovered how GTAs interacted with their 'students' to bring about CL, and gained a sense of how co-construction of knowledge worked or did not work. The implications of this study are discussed in relation to the development and use of a lesson planning and observation scoring rubric.
Keywords: Collaborative learning scripts, graduate teaching assistants, instructional planning, scoring rubric, micro-teaching
Background to the Study
Teaching assistants play an important role in facilitating and supporting teaching and learning in tertiary institutions. Most universities recruit graduate students as teaching assistants to help faculty members with teaching duties and departmental work such as conducting tutorials, laboratory demonstrations, preparing teaching resources, and administering assessments. At the same time, these graduate teaching assistants (GTAs) are also working on their own research projects and striving to complete their graduate education. At the National University of Singapore (NUS), the findings of a recent in-house survey, conducted as part of a larger study on enhancing GTAs' teaching competences (of which this study is also a part), indicated that both supervisors and administrative staff in charge of GTAs ranked involvement in tutorials, laboratory supervision, and consultations with students as the top roles or activities of GTAs in their departments. The GTAs themselves commented that besides the large amount of time spent preparing lessons, they often face the challenge of getting students to participate actively in class discussions and activities. With this challenge in mind, in this paper we report the development, implementation, and initial evaluation of a scoring rubric designed to measure GTAs' enactment of a collaborative learning strategy. The development of the rubric is nested within a larger research study concerned with how GTAs enact an active learning strategy (i.e. student collaborative learning) in their own tutorials after being explicitly taught how to do so.
In order to better prepare GTAs for their roles and responsibilities within their faculties, NUS runs the Teaching Assistants Programme (TAP) through the Centre for Development of Teaching and Learning (CDTL). The TAP is a formal two-day programme for GTAs aimed at enhancing their pedagogical skills and knowledge in fostering collaborative learning (CL) in higher education classroom contexts. The programme aims to enable participants to:
- Articulate the role and responsibilities of GTAs
- Plan and enact a collaborative learning tutorial lesson to engage students in active and deep learning
- Develop a mindset that is open to learning opportunities and growth
Besides the TAP, there are also ad-hoc programmes within Departments to support GTAs, but these are usually targeted at certain groups of GTAs and mostly tailored to the specific needs of the department. The TAP at NUS specifically focuses on instructional activities that GTAs are most likely to be involved in, such as small group teaching in tutorials, developing a positive mindset in their learners, and engaging in collaborative learning sessions. To align itself with current teaching practices, the TAP has also undergone major revisions, from teacher-centred activities such as training on lesson delivery and classroom management skills to more student-centred approaches that focus on fostering active participation of students with their peers through CL activities. CL refers to "any instructional method in which students work together towards a common goal, emphasising interaction and group processes" (Ruys, Van Keer & Aelterman, 2012, p. 350). As such, CL typically involves peers learning together, using the resources that each individual brings to the discussion, and is often characterised by equality of participation and mutuality of influence (Palincsar & Herrenkohl, 2002; O'Donnell, 2006).
A key element of the revised TAP is the introduction of explicit instruction on CL with the use of CL scripts. CL scripts are scaffolds that aim to improve collaboration by structuring the interactive processes between two or more learning partners (Kollar, Fischer, & Hesse, 2006). A script's purpose is to prompt collaborating learners to focus on, remain engaged in, and regulate specific roles and actions which are expected to promote learning. CL scripts may take the form of predefined scenarios (e.g., argumentation, debate, reciprocal peer feedback), procedural guidance (e.g., taking turns, listening, playing specific roles), cognitive prompts (e.g., explaining, questioning, summarising), and metacognitive prompts (e.g., monitoring, regulating, formulating arguments) (Fischer, Kollar, Stegmann, & Wecker, 2013). For example, during collaborative work, a script may require a student in the group to enact a pre-specified role (e.g., the summariser or leader role) which in turn enhances the group discussion.
In the following sections, we first draw on the literature of CL scripting to build a case for using scripts to guide instructional planning and implementation. This is followed by explaining the development of a rubric for evaluating CL planning and implementation, which involves adopting a five-component script to design and formulate the criteria and descriptors. We then describe the research method for evaluating this rubric and in the concluding section, we discuss the findings in relation to GTAs’ training and potential future application, and studies of the use of this rubric.
CL script to guide GTAs’ instructional planning
During CL, students may present their ideas, share information with others, provide explanations to their group mates, and justify their ideas in response to questions, differing viewpoints, or disagreements (King, 2002). While CL is widely recognised as an effective approach to engage students in peer-to-peer learning (e.g. Gillies, 2003; Hmelo-Silver & Chinn, 2016), merely assigning students to groups does not mean they will engage in meaningful discussions (O'Donnell, 2006). For example, students may fail to provide elaborative explanations, suppress participation, and engage in negative socio-emotional processes. Thus, teachers need to counter these detrimental processes by promoting the beneficial ones through proper instructional planning. Studies on CL implementation have emphasised the importance of explicit and detailed instructional planning and preparation (Frudden, 2001; Gillies & Boyle, 2010; Ruys et al., 2012; Veenman, van Benthum, Bootsma, van Dieren, & van der Kemp, 2002). For example, Ruys et al. (2012) argued that elaborate planning by the teacher prior to lessons may facilitate anticipatory reflection (i.e. active pre-lesson reflection on instructional approaches), and proceeded to develop and test a scoring rubric which examined the quality of pre-service teachers' lesson plans for CL implementation. The findings indicated that teachers need to focus their instructional planning on the organisational aspects of CL implementation (e.g. norms of CL, group composition, and timing of the learning activity) as well as the monitoring and evaluation of students' learning during CL. Thus, teachers' approach to the planning and implementation of CL is critical to its effectiveness (Kaendler, Wiedmann, Rummel, & Spada, 2015) and warrants further research.
According to Fischer et al. (2013), the guidance of learners through collaborative scripts can be attributed to the dynamic interactions of external and internal scripts. Learners' internal scripts represent knowledge components about a collaborative practice and facilitate the learners' understanding of, and subsequent actions in, the collaboration. Guidance of learners' collaborative learning is enhanced when their internal scripts are aligned with the external scripts, which serve as a means to guide the collaborative activities. The underlying theoretical perspectives of collaborative scripts are drawn from schema-based cognitive theory and sociocultural theory, in particular Vygotsky's Zone of Proximal Development (Vygotsky, 1978).
In this paper, we suggest that helping GTAs develop CL understanding requires that GTAs play an active role in inhibiting detrimental CL processes and promoting beneficial processes through instructional planning (Webb et al., 2008). In line with this argument, a key part of the TAP is to provide explicit instruction on planning and structuring effective group interactions. This involves going beyond assigning group membership and preparing meaningful tasks, to implementing certain routines, approaches or activities using a collaborative script (i.e. a guiding scenario on specific roles, steps and procedures on how students interact with one another). This collaborative script serves two important purposes for GTAs, namely (a) as an external script to scaffold planning CL lessons, and (b) as an internal script to guide their own implementation of CL.
Rubric Development for CL
In general, the design and development of a scoring rubric for lesson planning and observation depends largely on adopting or adapting an existing rubric and modifying it for the local assessment purpose and context (e.g. Corey et al., 2010; Ruys et al., 2012). This 'a priori' approach, undertaken by an individual or a small group of experts, usually involves the following steps (DeVellis, 2017):
- Establish the purpose of the rubric by investigating the research question
- Define the constructs and their relationships
• Conduct a literature review
• Develop a conceptual framework
- Review potential rubrics for adoption or adaptation
- Construct the rubric
• Identify the dimensions/criteria
• Decide on the level of performance/scoring scale
• Write descriptors
- Carry out a pilot study on the rubric and perform calibration and evaluation
- Document the development process
Inherent in this approach are two important steps—firstly, the rubric should reflect a progressive development of the skill in question (Lane & Stone, 2006), and secondly, the different levels of proficiency in the rubric should be grounded in current theories of learning (Kane, 2013). Accordingly, in this paper, we report on three key stages of developing and validating the scoring rubric: defining the theoretical construct, developing the rubric, and initial validation of the rubric through analyses of GTAs' lesson planning and implementation, rater feedback, and rater reliability.
As explicated through the literature review, the purpose of this paper is to explore the development and use of a scoring rubric to investigate GTAs' competence in adopting CL scripts. In particular, we evaluate the extent to which the rubric can reliably help us to examine GTAs' effectiveness in instructional planning and implementation during micro-teaching sessions. Therefore, the research question guiding this study is as follows:
How do we develop and use a scoring rubric to examine GTAs’ competence in CL lesson planning and implementation?
The study involved 40 GTAs (female = 19; male = 21) from Science, Technology, Engineering and Mathematics disciplines in their first or second year of graduate study. They were nominated by their Departments to attend the 2-day TAP. The GTAs were paired based on their own disciplinary backgrounds for the lesson planning as well as the micro-teaching sessions. This pairing was seen as important to allow for collaborative interactions during the planning stage and the co-teaching during micro-teaching sessions.
The 2-day TAP was a formal programme designed for GTA development in the university and involved four one-and-a-half hour sessions on the first day, conducted by the first researcher and colleagues at CDTL, followed by the second day on micro-teaching. The first and second sessions were focused on students’ understanding of their roles and responsibilities as TAs, the need for a growth mindset in relation to student-centred approaches to teaching and learning (especially CL), and the importance of communicating and general writing of learning outcomes. The third session drew on theoretical and empirical underpinnings of CL to instruct and model scripting CL—the nature of CL and scripting, and how a CL script can be used to structure instructional planning and CL implementation. The lesson plan template was also introduced, with examples, on how to draft a CL lesson. This was followed by session four, whereby GTAs were given the opportunity to work in pairs to design and plan a CL lesson for micro-teaching using the lesson plan template.
The paired micro-teaching on the second day of TAP allowed the GTAs to carry out deliberate practice of planning and teaching a 30-minute CL lesson, with feedback provided by the instructor before and after the lesson. Peers also provided feedback after going through each CL micro-teaching lesson. All lesson plans and micro-teaching video recordings were collected and used for the data analysis.
Micro-teaching is the organised practice of teaching, aimed at learning to teach better by simplifying the complexities of regular teaching-learning processes for novice teachers (Perlberg, 1987; MacLeod, 1987). The purpose of micro-teaching is to give TAs confidence, support, and feedback by letting them explore and try out, among course mates, a short episode of what they plan to do with their students. Micro-teaching engages the TA in building up knowledge, skills, and dispositions, in experiencing a range of tutoring approaches and strategies, and in learning and practising giving and receiving constructive feedback (Wilkinson, 1996).
The teaching sessions conducted by the 20 paired TAP participants were video-recorded for analysis and for the participants' own further self-reflection. The participants were paired to allow for co-teaching, whereby both students worked together to design, plan, and implement the CL lesson. This is in line with our approach of providing CL opportunities and practice throughout the TAP. As many as six to eight participants from the same or similar courses can take part in a single micro-teaching session. While one person takes his or her turn as teacher, everyone else plays the students. The role of these 'students' is to ask and answer questions realistically, while the role of the 'teacher' is to involve his or her 'class' actively in CL. A teaching scenario typically runs for thirty minutes. When the teaching scenario is completed, the participants have fifteen minutes to share their reflections on the micro-teaching process, followed by their peers, who discuss the strengths and weaknesses of the lesson and provide feedback on how to improve. Finally, the facilitator provides comments and feedback on how to make changes to enhance the CL lesson.
Rubric development process and data analysis
We adopted Kollar, Fischer, and Hesse's (2006) conceptual framework to inform our design of the scoring rubric, drawing on the five components of a collaborative script—learning objectives, types of activities, sequencing, role distribution, and types of representation—to articulate and formulate the criteria for CL planning and implementation (see Appendix). The five components are aligned to the nature of a CL script and, as discussed above, scripting CL is underpinned by theories of learning from cognitive science and sociocultural perspectives.
The rubric development for evaluating the lesson plans and micro-teaching sessions was an iterative process involving three student research assistants and the researcher over three months.
We also adapted the rubric developed by Ruys et al. (2012), which comprised three major domains: Instruction, Organisation, and Evaluation. The 'Instruction Domain' included the goals and objectives of the lesson, detailed instructions for students, materials and resources for the class, the strategies for developing CL among the students, and the learning task/assignment. The 'Organisation Domain' included the classroom and group arrangement, and the time allocations for different portions of the class. Finally, the 'Evaluation Domain' comprised the different evaluative components and assessments the teacher performs to monitor the progress of students. The final rubric thus consists of seven criteria:
- Prior Knowledge
- Learning Objectives
- Types of Activities
- Sequencing
- Role Distribution
- Types of Representation
- Evaluation
The levels of performance/scoring scale used were: Unsatisfactory (0-1), Needs Improvement (2), Adequate (3), and Exceeds Expectations (4), giving a maximum total score of 28 points for each lesson plan or micro-teaching video. Descriptors were then written for each criterion. Finally, the constructed scoring rubric was calibrated by three student research assistants and the researcher using a sample of the lesson plans and micro-teaching videos. Interrater agreement between the multiple evaluators was discussed during the pilot study until 100% agreement was reached.
Comparison of inter-rater reliability
The average inter-rater reliabilities were found to be acceptable. The lesson planning scoring intraclass correlation coefficients (ICC) for reliabilities amongst the three raters are 0.95, with 95% confidence interval (CI) (0.87, 0.98); 0.92, with 95% CI (0.82, 0.97); and 0.96, with 95% CI (0.90, 0.98). The average ICC for three raters is 0.94. The micro-teaching video scoring intraclass correlation coefficients for reliabilities amongst the three raters are 0.87, with 95% CI (0.70, 0.95); 0.87, with 95% CI (0.70, 0.94); and 0.94, with 95% CI (0.85, 0.98). The average ICC for three raters is 0.89.
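The paper does not state which ICC form was computed; assuming a two-way random-effects, average-measures model (ICC(2,k) in the Shrout and Fleiss taxonomy), which is a common choice when the same fixed set of raters scores every artefact, the coefficient can be sketched in Python with NumPy as follows. The score matrix shown is illustrative only, not the study's data:

```python
import numpy as np

def icc_2k(scores: np.ndarray) -> float:
    """Two-way random-effects, average-measures ICC (Shrout & Fleiss ICC(2,k)).

    `scores` is an (n_subjects x k_raters) matrix of rubric totals,
    e.g. one row per lesson plan, one column per rater.
    """
    n, k = scores.shape
    grand = scores.mean()
    row_means = scores.mean(axis=1)
    col_means = scores.mean(axis=0)

    # Mean squares from the two-way ANOVA decomposition
    ss_rows = k * np.sum((row_means - grand) ** 2)   # between subjects
    ss_cols = n * np.sum((col_means - grand) ** 2)   # between raters
    ss_err = np.sum((scores - grand) ** 2) - ss_rows - ss_cols
    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))

    return (ms_rows - ms_err) / (ms_rows + (ms_cols - ms_err) / n)

# Illustrative: 5 lesson plans (0-28 rubric totals) scored by 3 raters
ratings = np.array([
    [18, 19, 18],
    [22, 21, 22],
    [15, 16, 15],
    [25, 24, 25],
    [20, 20, 21],
])
print(round(icc_2k(ratings), 2))
```

With close agreement such as this, the coefficient approaches 1; disagreement among raters inflates the error mean square and pulls the value down.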
The overall mean score for lesson planning (M = 18, SD = 3.87) is slightly lower than that of micro-teaching videos (M = 21, SD = 3.60), indicating that on average, GTAs scored higher in their CL implementation than in their lesson planning (see Table 1). The breakdown on individual criteria showed that GTAs performed slightly better in writing about prior knowledge and learning objectives in their lesson plans as compared to their implementation in micro-teaching. In contrast, GTAs performed better at implementing CL than in their lesson planning in relation to CL activities, sequencing, role distribution, types of representation and evaluation, with large mean differences for types of representation and evaluation.
Table 1. Descriptive statistics of lesson planning and micro-teaching video scoring results
Table 2. Frequencies of the scoring results for each rubric criterion (n = 20, paired)
As reflected in Table 2, about 90% of GTAs (scoring 'adequate' or 'exceeds expectations') provided explicit prior knowledge and learning objectives in their lesson planning and in micro-teaching. The other criterion that emerged as a strength for GTAs was devising CL activities, with 85% and 95% for lesson planning and micro-teaching respectively. Table 2 also shows that GTAs were less proficient in lesson planning and micro-teaching (45% or more scored at the 'needs improvement' or 'unsatisfactory' levels) in relation to role distribution (lesson planning–80%, micro-teaching–70%) and evaluation (lesson planning–100%, micro-teaching–70%). Furthermore, GTAs were much weaker in types of representation in their lesson planning (55%) compared to micro-teaching (20%), and in sequencing (lesson planning–75%, micro-teaching–45%).
Correlations between micro-teaching video analysis and the seven criteria in the scoring rubric
To analyse the relationships between the overall score for CL implementation in micro-teaching and the seven criteria for using CL scripts in lesson planning, correlations were calculated (see Table 3). The micro-teaching score was significantly correlated with role distribution and types of representation. While slight positive correlations were found for prior knowledge, learning objectives, and activities, both sequencing and evaluation were slightly negatively correlated. The positive correlations suggest that criteria which were evident in the lesson planning were enacted in micro-teaching. The significantly correlated criteria further indicated that role distribution and types of representation were key criteria in lesson planning and in the implementation of CL. The negative correlations for sequencing and evaluation suggest that although both criteria were poorly represented in the lesson plans, they remained important in CL implementation and continued to be enacted in the micro-teaching sessions.
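The paper does not specify the correlation statistic used; assuming Pearson's r between each lesson-plan criterion score and the micro-teaching total, the computation can be sketched as follows. The scores below are made up for illustration (role distribution on the 0-4 rubric scale, micro-teaching totals out of 28), not the study's data:

```python
import numpy as np

# Illustrative scores for six GTA pairs (hypothetical, not the study's data)
role_distribution = np.array([1, 2, 1, 3, 2, 4])       # lesson-plan criterion, 0-4
micro_teaching_total = np.array([17, 20, 18, 24, 21, 26])  # micro-teaching total, 0-28

# Pearson's r via the sample correlation matrix
r = np.corrcoef(role_distribution, micro_teaching_total)[0, 1]
print(round(r, 2))  # → 0.99
```

A positive r of this kind is what the study reports for role distribution: pairs whose lesson plans scored higher on that criterion also tended to score higher in the implemented micro-teaching lesson.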
Overall, the findings indicated that the scoring rubric allows for a detailed evaluation of GTAs' use of the CL script in lesson planning and implementation. There were strengths and weaknesses in both the lesson planning and micro-teaching, with GTAs able to design interesting CL activities but less explicit in terms of role assignment and distribution, and in monitoring and evaluating the outcomes of CL. This finding is in line with the results of Ruys et al. (2012), who found that beginning teachers do not explicitly monitor and evaluate group processes, and attributed this to a more objectivist focus on students' learning products at the end of the lesson, i.e. a more summative purpose for evaluation. In our study, another possible reason might be that the duration of the micro-teaching was too short for any formal or informal assessment of students' learning. Planning for assessment is an important skill set that will require more targeted training and development.
We also learnt from the experience of developing a scoring rubric to evaluate GTAs' lesson plans and micro-teaching. The reliability of this rubric was established through a rigorous process of development and calibration. Content validity was also demonstrated by drawing on other rubrics from empirical studies and building on a scripted CL framework. During the many iterative discussions to develop the rubric, we realised that more could be done to refine the criteria and descriptors. For example, to evaluate the levels of cognitive demand of the teaching activity more accurately, the descriptors need to be described more precisely in the rubric. Criteria that are more important for a successful CL lesson could be given higher priority by assigning them greater weighting in the rubric. In addition, efficient time management of activities appears to be a key element in CL implementation and should be included in the rubric.
The rubric serves not only to score the lesson plans and micro-teaching lessons, but also to provide us with valuable feedback on the GTAs' ability to plan and implement CL in their own lessons. GTAs were generally receptive to using the CL script in planning their CL lessons; the script served as a heuristic tool (i.e. an external script) for GTAs to engage in learning about and implementing CL (Kollar et al., 2006). It also allowed them to discuss challenges and issues concerning how to prepare themselves for teaching, going beyond thinking about the content to focus on teaching strategies and how best to help their students learn (i.e. an internal script). This was observed during the preparation stage on Day 1 of the TAP, whereby participants worked in pairs to design their micro-teaching lesson. We also realised that while the findings revealed that GTAs were able to use CL scripts in both the planning and implementation of CL, it was the opportunity to enact CL in the micro-teaching sessions that GTAs found most useful and rewarding. It was also through the micro-teaching that we were able to observe and uncover how GTAs interacted with their 'students' to bring about CL, and to gain a sense of how co-construction of knowledge, or shared learning, worked or did not work.
The first implication from the findings of this study is that of the importance of instructional planning in preparing GTAs for CL teaching, what Ruys et al. (2012) described as developing anticipatory reflection. Anticipatory reflection, as contrasted from retrospective reflection, occurs before the actual lesson and helps teachers to plan effectively in order to enhance students’ learning. Given the complexity of implementing CL, teachers need to carefully prepare and think about the learning tasks, the learning environment, and students’ readiness and approaches to learning. More importantly, as demonstrated in this study, the process of instructional planning should, by itself, be a collaborative learning process, involving not just the GTA and the instructor but other peers as well. Instructional planning, as collaborative learning, not only enhances GTAs’ knowledge and skills on CL strategies, it can also foster discussion and collegiality amongst GTAs.
The second implication is the need for lecturers and module coordinators to create opportunities for GTAs' deliberate practice in writing a lesson plan and implementing a CL lesson, with the help of scripts. The development of GTAs' teaching competencies should not be left to a few academic developers or educators but should involve building a wider community of learners, with opportunities for reflection and practice. As alluded to by Nyquist and Wulff (1996), GTAs' development goes beyond concerns about teaching skills and understanding of students' learning to include academic role development. In other words, more is needed to enhance the professional development of GTAs and to rethink the programmes for teaching assistantship, towards a more holistic and concerted effort across the institution.
As GTAs play a pivotal role in supporting and fostering University teaching, there is a need for continuous and targeted support for GTAs. Not only do GTAs need to see the importance of CL for content learning, it is also a strategy for developing engaged, shared, and inclusive learners. The development of CL lesson planning and implementation could go a long way in helping GTAs enhance their teaching competencies and practices. The scoring rubric not only serves as a scoring guide for CL lesson planning and observation, it may also be used for peer and self-reflection or monitoring of teaching progress. More research could be done to investigate how GTAs monitor and evaluate students’ collaborative learning using the rubric, such as during discussions and the quality of the interactions. Future studies could also consider how GTAs navigate and overcome the social, political, and cultural context of University teaching.
This research is funded by the Tertiary Education Research Fund (TRF) from the Ministry of Education, Singapore. We would like to thank the student participants for their support and active participation in this study. We are also grateful for the help and advice from our research assistants and co-investigators.
DeVellis, R. F. (2017). Scale development: Theory and applications (Fourth ed.). Los Angeles: SAGE.
Fischer, F., Kollar, I., Stegmann, K., & Wecker, C. (2013). Toward a script theory of guidance in computer-supported collaborative learning. Educational Psychologist, 48(1), 56-66. https://doi.org/10.1080/00461520.2012.748005
Gillies, R. M., & Boyle, M. (2010). Teachers' reflections on cooperative learning: Issues of implementation. Teaching and Teacher Education, 26(4), 933-940. https://doi.org/10.1016/j.tate.2009.10.034
Gillies, R. M. (2003). Structuring cooperative group work in classrooms. International Journal of Educational Research, 39(1-2), 35-49. https://doi.org/10.1016/S0883-0355(03)00072-7
Hmelo-Silver, C. E., & Chinn, C. A. (2016). Collaborative learning. In E. Anderman & L. Corno (Eds.), Handbook of educational psychology (3rd ed.) (pp. 349-363). New York: Routledge.
Kaendler, C., Wiedmann, M., Rummel, N., & Spada, H. (2015). Teacher competencies for the implementation of collaborative learning in the classroom: A framework and research review. Educational Psychology Review, 27(3), 505-536. https://doi.org/10.1007/s10648-014-9288-9
Kane, M. (2013). Validating the interpretations and uses of test scores. Journal of Educational Measurement, 50(1), 1–73. https://doi.org/10.1111/jedm.12000
King, A. (2002). Structuring peer interaction to promote high-level cognitive processing. Theory into Practice, 41, 33–40. https://doi.org/10.1207/s15430421tip4101_6
Kollar, I., Fischer, F., & Hesse, F.W. (2006). Collaboration scripts – A conceptual analysis. Educational Psychology Review, 18, 159-185. https://doi.org/10.1007/s10648-006-9007-2
Lane, S., & Stone, C. (2006). Performance assessment. In R. Brennan (Ed.), Educational measurement (4th ed., pp. 387–431). Westport, CT: American Council on Education/Praeger.
MacLeod, G. (1987). Microteaching: End of a research era? International Journal of Educational Research, 11(5), 531–541. https://doi.org/10.1016/0883-0355(87)90013-9
Nyquist, J. D., & Wulff, D. H. (1996). Working effectively with graduate assistants. Thousand Oaks, CA: Sage Publications.
O'Donnell, A. M. (2006). The role of peers and group learning. In P. Alexander & P. Winne (Eds.), Handbook of educational psychology (2nd ed.). Mahwah, NJ: Lawrence Erlbaum.
Palincsar, A. S., & Herrenkohl, L. R. (2002). Designing collaborative learning contexts. Theory into Practice, 41(1), 26-32. https://doi.org/10.1207/s15430421tip4101_5
Perlberg, A. (1987). Microteaching: Conceptual and theoretical bases. In M. Dunkin (Ed.), The International Encyclopedia of Teaching and Teacher Education (pp. 715–720). Oxford: Pergamon Press.
Ruys, I., Van Keer, H., & Aelterman, A. (2012). Examining pre-service teacher competence in lesson planning pertaining to collaborative learning. Journal of Curriculum Studies, 44(3), 349-379. https://doi.org/10.1080/00220272.2012.675355
Veenman, S., van Benthum, N., Bootsma, D., van Dieren, J., & van der Kemp, N. (2002). Cooperative learning and teacher education. Teaching and Teacher Education, 18(1), 87-103. https://doi.org/10.1016/S0742-051X(01)00052-X
Vygotsky, L. S. (1978). Mind in society: The development of higher psychological processes. Cambridge, MA: Harvard University Press.
Webb, N. M., Franke, M. L., Ing, M., Chan, A., De, T., Freund, D., & Battey, D. (2008). The role of teacher instructional practices in student collaboration. Contemporary Educational Psychology, 33(3), 360-381. https://doi.org/10.1016/j.cedpsych.2008.05.003
Wilkinson, G. A. (1996). Enhancing microteaching through additional feedback from preservice administrators. Teaching and Teacher Education, 12(2), 211–221. https://doi.org/10.1016/0742-051X(95)00035-I
About the Corresponding Author
Mark GAN is Associate Director at the Centre for Development of Teaching and Learning (CDTL). Mark co-leads the Teaching Assistants Programme (TAP) to develop graduate teaching assistants’ knowledge and skills in planning and implementing collaborative learning. Mark obtained his PhD in Education at The University of Auckland and has worked at the university’s Faculty of Education as a Research Fellow and lecturer in the School of Learning, Development and Professional Practice. Mark’s research interests include assessment quality, using data to support learning and feedback studies.