Assessing Undergraduate Learning in Earth Science Residential Fieldwork
1 Stephen CHUA, 1,2 Adam D. SWITZER, 3 Kevin HARTMAN, 2 Natasha BHATIA, 4 Jaime KOH
1 Earth Observatory of Singapore, Nanyang Technological University, Singapore
2 Asian School of the Environment, Nanyang Technological University, Singapore
3 Institute for Applied Learning Sciences and Educational Technology, National University of Singapore, Singapore
4 Centre for Research and Development in Learning, Nanyang Technological University, Singapore
Correspondence: Stephen CHUA
Address: Earth Observatory of Singapore, Nanyang Technological University, Block N1.3-B1-15, 70 Nanyang Drive, Singapore 637457.
Chua, S. C. W., Switzer, A. D., Hartman, K., Bhatia, N., & Koh, J. (2020). Assessing undergraduate learning in earth science residential fieldwork. Asian Journal of the Scholarship of Teaching and Learning, 10(1), 69-88.
One of the challenges for module and academic programme coordinators is having to simultaneously measure module outcomes, programme outcomes and the development of graduate attributes. Reliable measures of student growth are difficult to obtain because much of what students are asked to do in their modules is not made available to stakeholders in an easily analysable form. In the earth sciences, fieldwork provides both a consistently orchestrated activity that students repeat at multiple time points in their academic careers and an activity that yields artefacts that can be flexibly analysed depending on a programme’s intended outcomes and the university’s desired attributes.
Fieldwork is an essential component of undergraduate earth science education, providing a platform for students to learn in a ‘real-world’ setting that is not readily replicated by more traditional methods (e.g. lectures, tutorials). Students record their field experiences and accomplish field-based tasks in field notebooks, and careful assessment of these notebooks serves to measure student learning and the development of key skills and competencies integral to their progress as future earth scientists. Based on the collected corpus of student field notebooks, we crafted a generalisable rubric to assess student field notebooks for programme outcomes and the presence of the university’s graduate attributes. The rubric was modelled on the Structure of Observed Learning Outcomes (SOLO) taxonomy (Biggs & Collis, 1989; Biggs & Collis, 2014), a structured method of classifying learning outcomes.
A cross-sectional and longitudinal analysis of three student cohorts indicated that the presence of the key skills, competencies, and attributes valued by the programme and university increased over time. We found that earth science undergraduates generally showed greater sophistication in parameters related to their capacity to organise field information, record information via text, and identify field features, but displayed lower levels of sophistication in aspects related to the use of field symbols and critical thinking. A longitudinal comparison indicated that, at the cohort level, students increased their use of field conventions and textual recording but showed a decrease in the presence of critical thinking. These changes could be due to the inherent nature of field-based tasks, which do not create situations that test higher-order thinking. The findings from this study can thus be used both by students to make informed decisions to improve their own learning (formative assessment) and by field instructors when redesigning fieldwork content and pedagogy.
Keywords: Fieldwork, field notebooks, rubrics, SOLO taxonomy, assessment
Undergraduate field experience is an important component of earth science education, as actual fieldwork provides budding scientists with authentic and unique learning experiences (Feig, 2018; Petcovic, Stokes, & Caulkins, 2014). It is critical that earth science undergraduates experience the ‘real world’ through fieldwork, both to increase their understanding of physical processes (Fuller, 2006; Krakowka, 2012) and to increase their enthusiasm for the subject (Scott, Humphries, & Henri, 2019; Van Loon, 2019). Compared to more traditional teaching methods (i.e. lectures, tutorials), research suggests that active and student-centred approaches like fieldwork significantly increase student learning (Kastens et al., 2009; Kortz & Murray, 2009; Thomson, Buchanan, & Schwab, 2006). Beyond helping students assimilate lecture instruction and knowledge, fieldwork is lauded as a platform for authentic real-world problem-solving and the development of higher-order thinking and spatial skills (Assaraf & Orion, 2005; Hannula, 2019; Hill & Woodland, 2002), an experience not replicated in more traditional venues such as lecture halls and earth science laboratories.
The perception of fieldwork amongst instructors and university faculty also appears positive, with 89.5% of survey respondents from the 2010 and 2011 Geological Society of America Annual Meetings indicating that fieldwork should be an essential portion of undergraduate education (Petcovic et al., 2014). Similarly, earth science and geography undergraduates enjoy fieldwork as it is perceived to possess peripheral benefits such as developing confidence and teamwork (Boyle et al., 2007). Indeed, studies have shown a strong affinity between earth science education and fieldwork, and concluded that such experiences have considerable social benefits for both student and staff (Glass, 2015; Streule & Craig, 2016).
While there may be positive findings about fieldwork experiences, the usefulness of fieldwork in addressing students’ development of university graduate attributes remains unexamined.
The Context Surrounding Graduate Attributes
On the surface, the concept of graduate attributes—the qualities a university publicly states it would like to inculcate in its students—seems fairly straightforward. Universities put forth a set of ideals or values that their students should embody and exhibit after participating in the formal and informal learning experiences offered by the university. However, the concept of graduate attributes has been problematic from the beginning when instantiated in university classroom settings. First, by definition, graduate attributes are deliberately broad and loosely defined so as to let individual programmes of study set their own measures and standards (Green, Hammer, & Star, 2009). A university in Southeast Asia specifies five graduate attributes: Character, Creativity, Competence, Communication and Civic-mindedness. While umbrella terms like Communication and Creativity give programmes great flexibility in how they can link their curricula to the graduate attributes, the terms also provide very little guidance about how to attain those attributes and little reassurance that one instructor’s definition of an attribute is congruent with another’s—even within the same programme. For instance, one science faculty member defines Creativity as “thinking like I do”, whereas another defines Creativity as “thinking of something I did not.” Without a common understanding of what the attributes are and how to define them, measuring their development over time and, more importantly, providing actionable feedback to students when they need it becomes impossible.
The second common issue with graduate attributes stems from the structure of university programmes. University curricula are often developed on a per module basis. Topics are linked across modules in service of achieving full coverage of the subject area, but less attention is paid to linking the types of activities that happen within those modules with each other or mapping them to graduate attributes.
Figure 1. Abstract representation showing relationship between programme requirements (modules), module artefacts (designed activities), programme outcomes and the development of graduate attributes.
Figure 1 shows an abstract representation of how programme requirements (modules) are translated into activities designed to demonstrate programme outcomes, which in turn lead to the development of graduate attributes. Figure 1 also shows how a constellation of programme requirements may forgo the explicit measurement and development of some attributes because the programme’s outcomes do not readily map onto them.
An example is the graduate attribute of Communication, which is learnt through field note-taking across various fieldwork modules and is measured by how well students communicate their scientific observations and interpretations. However, even though the assessment of such skills is robust within each module, the activities themselves are not assessed in the same manner across modules. This primarily boils down to the different learning goals of these modules. For example, the Bali field trip module, an introductory field module taken before students select their specialisations, emphasises the identification of geologic structures and mapping skills, while the Sri Lanka field trip module emphasises the learning of field techniques (e.g. identification and counting of selected species). Both require and meet the need for teaching Communication skills, but clearly test and provide feedback in disparate ways. Shavelson et al. (2008) acknowledge that programme structures in which every activity essentially serves as a summative assessment undercut the power of receiving corrective feedback. If activities are never revisited in the same way, then it is again difficult to anchor feedback in a way that is actionable for learners. Growth models of learning require such revisiting to determine how much a learner has developed in the interim, that is, between the first measurement period and subsequent measurement periods. A belief that graduate attributes can be developed means they must somehow be distilled into measurable behaviours.
Translating Graduate Attributes into Desired Learning Outcomes in the Earth Sciences
Like many academic programmes, earth science programmes can relate to a host of potential graduate attributes. Usually, the development of the attributes occurs after the development of the programme. However, in the case of an undergraduate earth science programme at the university, the graduate attributes predated the creation of the academic programme. Due to the sequencing of events, the elements of the programme were designed in response to the university’s stated graduate attributes and contextualised within the field of earth sciences.
Interestingly, the programme looked far beyond the classroom environment for places to measure the development of the university’s graduate attributes. The programme exposes undergraduates to three areas: 1) geosciences, 2) ecology and ecosystems, and 3) society and the Earth system. Regardless of discipline, all undergraduates experience an extensive suite of fieldwork, beginning in their first year of study. The early focus on the nuances of fieldwork stems from the opportunities students have to explore earth systems locally, regionally, and overseas. Each year, students participate in overseas fieldwork experiences. Through the programme, undergraduates complete a multidisciplinary residential fieldwork experience in Bali between Year 1 and Year 2. In the following years, students explore Borneo and Sri Lanka, depending on their chosen specialisations.
Conducting fieldwork abroad has been associated with significant improvement in student learning (deep learning) and interest in the subject matter (Glass, 2015; Hill & Woodland, 2002). Immersive overseas field experiences have been found to improve student motivation (Ballantyne, Anderson, & Packer, 2010), with programme graduates referring to such experiences as the highlight of their undergraduate journey (Scott et al., 2019). Novel fieldwork experiences also provide opportunities for students to develop the systems thinking necessary to solve complex problems. In addition, the complex problems found in the earth sciences often require collaborative efforts comprising teams of geoscientists, ecologists and social scientists (Kastens et al., 2009; King, O'Donnell, & Caylor, 2012).
The Bali fieldwork experience contains geoscience, ecology, as well as earth and society elements, and presents students with a comprehensive assessment of all three discrete but interconnected disciplines. Such multidisciplinary field-based investigations, early in their academic career, offer students a more holistic learning experience than dedicating a fieldwork experience to a single discipline (Schiappa & Smith, 2018). By encountering the similarities and contrasts inherent in the different disciplines, students learn to notice the complex relationships between the different systems while holding the site constant. The Bali fieldwork experience is the first comprehensive field experience and, because it occurs before students choose their programme specialisation at the end of Year 2, provides a reliable baseline for assessing the demonstration of student learning. The Borneo and Sri Lanka fieldwork experiences were selected to assess student learning longitudinally, as they record later and hypothetically more advanced field skills and competences for these same student cohorts.
Despite their differences in locale, foci and occurrence within the programme timeline, each of the fieldwork experiences made use of the same activity—the recording of site observations in a field notebook. This essential activity of the earth sciences provides an opportunity for students to showcase their competence of scientific observation, communication of interpretations, and expression of their creativity.
Field Notebooks as Snapshots for Measuring Student Growth
An indispensable component of fieldwork is the field notebook; the ability to keep a high-quality notebook, coupled with proficient note-taking and sketching, is deemed an essential field skill (Dohaney, Brogt, & Kennedy, 2015; Jolley et al., 2019; King, 2019).
Keeping a field notebook is a common activity among field scientists of all levels. Field notebooks should not be viewed as just “diaries for scientists” (Jolley et al., 2019). In its most basic form, a field notebook is a repository of a field scientist’s observations. It serves as a record of what the scientist experienced while out in the world, allowing a reader to literally sense what the scientist sensed (Maltese, Balliet, & Riggs, 2013). With a field notebook, a scientist should be able to reconstruct where he or she was, what he or she was doing, and what was happening at a given place and time, even years later. Just as important, a different scientist should be able to pick up a field notebook and reconstruct the events and observations made by the original author (Schiappa & Smith, 2018). In a sense, field notebooks are vehicles for scientists to showcase their competence within their field by communicating important observations, from the present to one’s future self and to one’s future peers, in critical and creative ways. Because keeping a field notebook is a recurring practice, it creates an opportunity for measuring student growth with regard to the presence and magnitude of the graduate attributes expressed in the work. Field notebooks capture ‘snapshots’ of student performance along multiple dimensions for each of the different fieldwork experiences. These snapshots can be used to systematically compare the qualities of the artefacts produced in different time periods. For instance, one can analyse the instances and intensities of creativity and competence within the domain, alongside improvements in the quality of communication, to measure growth: positive changes in attribute expression over time, within an individual or throughout a population.
The fact that there is no prescriptive format for a field notebook entry makes it a creative endeavour. Entries require more than checking off a series of observations like time, date, elevation, and ambient temperature at a given site (Switzer & Kennedy, 2013). They allow the field scientist to draw images, wax philosophical, and express confusion. This open-ended structure also allows the field scientist to record important observations that, because of their rarity, would not be part of a standard form (Balliet, Riggs, & Maltese, 2015).
Field notebooks provide a platform for students to record not just observational information, but also evidence of possible “systems thinking”—that is, the complex interrelationships between various sub-disciplines. Such skills are key to solving a myriad of global environmental problems, and fieldwork gives students the opportunity to acquire them in order to address complex issues that often couple human and environmental problems (Assaraf & Orion, 2005; Kastens et al., 2009; King et al., 2012). They are also skills involving an immense amount of nuance and competence within the field.
Prior to this study, we were unable to find a standard method of evaluating field notebooks that could be used across fieldwork experiences and was also compatible with both the programme’s learning outcomes and assessing the development of the university’s graduate attributes. Without a standard set of methods and measures, a collection of instructors cannot systematically evaluate its students’ field notebooks in alignment with each module’s learning objectives while still honouring the intent of the university’s graduate attributes. Without comparable evaluations, programme coordinators and evaluators cannot judge whether students are more capable when conducting scientific observations in the field, or more likely to exhibit the university’s graduate attributes in their work, when compared to their past selves and past cohorts.
This project aimed to measure key programme learning outcomes in earth science students (e.g. observation, spatial, and scientific communication skills), as well as an essential set of competencies (i.e. deeper observational skills and critical thinking), at multiple points in time within the context of field note-taking and research. Measuring these learning outcomes and attributes in a standard and reliable way over time would be an improvement over the current methods of evaluating the attainment of the programme’s learning outcomes, which treat the university’s graduate attributes as aspirational labels rather than developmental milestones that can be measured. In particular, we looked at how students incorporated the university’s graduate attributes of communication, creativity and competence into the artefacts from their fieldwork experiences.
To study student attainment of programme outcomes as well as the development of graduate attributes longitudinally and across cohorts, we identified three modules incorporating overseas fieldwork experiences. We will refer to the modules by the names of their fieldwork locations: Bali, Borneo, and Sri Lanka. The Bali fieldwork experience serves as an introduction to the techniques, strategies, and benefits of conducting field research in the earth systems science discipline. It also provides students with opportunities to apply classroom knowledge and problem-solving skills to authentic tasks found in the field. Field tasks include identifying volcanic and geologic structures, mapping and sketching, and collecting onsite field measurements. Composing field notes comprises 20% of the module’s total grade. Reports and an end-of-term presentation form the bulk of the assessment weightage.
The Borneo fieldwork experience focuses on geological fieldwork related to layers and landforms. Its learning outcomes include being able to observe and compare various sediments according to their structures and depositional environments, as well as applying field techniques and methods to solve earth science-related issues. Finally, the Sri Lanka fieldwork experience concentrates on ecology fieldwork and focuses on learning basic and technology-based field methods to study aquatic and terrestrial organisms and ecosystems. Its learning outcomes include formulating research questions, designing field experiments, and carrying out appropriate statistical analyses.
To assemble the corpus of field notebooks, we recruited students in the earth science programme to voluntarily make their field notebooks available for research purposes. Of the more than 80 students participating in the fieldwork modules, 17 submitted notebooks covering the Bali fieldwork experience. At the time, the Bali experience had been conducted annually for four years and was the most established of the three fieldwork experiences. Four students provided notebooks covering both the Bali and Sri Lanka fieldwork experiences. These notebooks were used as a proof of concept for showing how growth models of learning could be used with the infusion of graduate attributes in field notebooks.
Summative methods for assessing field notebooks include subjectively ranking student work and translating those rankings into grades, simple scoring sheets (by dimension/aspect of fieldwork), and more detailed and comprehensive assessment rubrics. Comprehensive rubrics are often considered the best form of assessment as they assess student work along different dimensions and provide clearly worded descriptions of the different levels of outcome attainment (Farnsworth, Baldwin, & Bezanson, 2014). We chose to create a comprehensive rubric (with elements such as specificity, evaluative criteria, quality levels, and definitions) both to assess over a broad scope and to communicate expectations of fieldwork to the students (Brookhart & Chen, 2014; Dawson, 2017; Jonsson, 2014). Providing clearly worded descriptors is a critical factor in allowing an assessor to determine the quality of a student’s work. Quality descriptions must “provide a detailed explanation of what a student must do to demonstrate a skill, proficiency or criterion in order to attain a particular level of achievement, for example poor, fair, good or excellent” (Reddy & Andrade, 2010).
We began crafting the rubric by reviewing the entirety of the collected corpus of field notebooks. During this review, we identified the key dimensions of a field notebook that related to the development of students’ observation and communication skills. We also assembled a set of field notebook entries that served as concrete examples of the different levels of sophistication of these dimensions. After the review, we narrowed the key dimensions down to a focused subset related to measuring and improving the observation and communication skills of student field scientists. With the key dimensions and levels of sophistication identified, we codified them into an evaluation rubric that could be applied to all fieldwork experiences. Development of the final version of the rubric involved consulting with key stakeholders, including faculty members and course instructors, and reviewing the activities and requirements of each fieldwork module.
Crafting the Rubrics
The key principle for creating rubrics to evaluate the presence of learning outcomes is to translate general notions of outcomes into well-defined, demonstrable, and measurable behaviours (Biggs & Collis, 2014). Focusing on what is observable and measurable allows evaluators to map student learning, and progress toward mastery, back to the qualities found in student work.
However, the structure of a rubric for measuring outcomes within a module may be different from the structure of a rubric for measuring programme outcomes and graduate attributes. While a module-bound rubric would be expected to focus more on the presence of content and skills presented in a particular course, a rubric evaluating outcomes at the programme and university level would focus more on elements that are common across similar modules. When working at any of the three levels, the structure of a rubric should thus be organised logically and directly relate to the knowledge, skills and outcomes of interest to that level, and do so across a spectrum of sophistication (Jonsson, 2014; Steer et al., 2019).
Creating a generalisable rubric that still aligned with the requirements of the three disparate specialisations of geoscience, ecology, and earth and society required abstracting general outcomes out of their field-specific modes of assessment and assignment guidelines. The main difficulty was that each of the specialisations prioritises different aspects of fieldwork. Fortunately, the multidisciplinary Bali fieldwork experience contains aspects from all three specialisations and served as a constructive starting point for rubric crafting.
We began by reviewing the Bali module’s activities and tasks, paired them with a small corpus of student notebooks, and placed the key skills and competencies into a continuum of sophistication inspired by the Structure of Observed Learning Outcomes (SOLO) taxonomy (Biggs & Collis, 1989; Biggs & Collis, 2014). The SOLO taxonomy provides a structure for evaluating the quality of the demonstrations of learning outcomes. Each demonstration can be classified into one of the five levels of sophistication, ranging from Level 0 (Prestructural or not present) to Level 4 (Extended Abstract or transformatively present). Levels 1 through 3 show increasing competence of an individual to perform an assessment task by relating facts and ideas to a concept and eventually relating concepts to each other. Level 4 demonstrations can be viewed as exhibiting creativity as they synthesise existing knowledge and relationships and transform them into novel, yet structured, performances, artefacts and explanations. As the SOLO taxonomy naturally aligns with two of the graduate attributes in the context of a communication activity, we preserved its structure, but not necessarily its level definitions, when developing our own rubrics.
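The five-level structure can be made concrete with a small illustrative sketch. This is not part of the study's rubric: the labels below are the standard SOLO names from Biggs and Collis, whereas each rubric parameter redefines what its levels mean in context.

```python
# Standard SOLO level labels (Biggs & Collis). Our rubric keeps this
# five-level structure but redefines each level per parameter.
SOLO_LEVELS = {
    0: "Prestructural (outcome not present)",
    1: "Unistructural (one relevant aspect)",
    2: "Multistructural (several unrelated aspects)",
    3: "Relational (aspects integrated into a structure)",
    4: "Extended Abstract (knowledge transformed into a novel, structured form)",
}

def describe(level: int) -> str:
    """Return the SOLO label for a rubric score, validating the 0-4 range."""
    if level not in SOLO_LEVELS:
        raise ValueError(f"SOLO level must be 0-4, got {level}")
    return SOLO_LEVELS[level]
```

Encoding the scale this way also makes explicit why Level 4 demonstrations map onto creativity: they sit above the integration achieved at Level 3.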
We selected eight parameters deemed critical for assessing student learning and scientific communication in the field; the actual rubrics are attached in Appendix 1. These rubrics were approved by the representative field coordinators of each specialisation. The individual parameters were then combined into a more general set of graduate attributes that included creativity, communication, and competence. It should be noted that the tight interweaving of the three graduate attributes within the context of field notebook activities meant that we could not isolate particular programme outcomes that could then be discretely mapped onto a subset of the three graduate attributes. Each programme outcome provided students with an opportunity for demonstrating creativity, communication, and competence.
Assessing Student Learning
We asked for voluntary submissions of field notebooks from students entering the programme between 2015 and 2017. We then evaluated the collection of field notebooks according to the new rubric.
A key source of confidence in the measures stems from having the same evaluator (a trained earth science educator) evaluate every notebook within a span of several weeks. This consistency is paramount for judging the quality of student performance (Brookhart & Chen, 2014). Furthermore, assessing the field notebooks within a tight time-frame minimises the tendency of assessments to ‘drift’ over time due to changing conditions.
In Tables 1, 2, and 3, we present the raw rubric scores from the three student cohorts participating in the Bali field trip from 2015 to 2017.
Table 1. Rubric raw scores for the Bali field trip in 2015 (n=4)
Table 2. Rubric raw scores for the Bali field trip in 2016 (n=3)
Table 3. Rubric raw scores for the Bali field trip (ES1006) in 2017 (n=10)
We synthesised the raw rubric data from the three field trips and compared the 3-year trend cross-sectionally, as shown in Figure 2.
Figure 2. Average rubric score of Bali field trips from 2015 to 2017.
Generally, we found stability between the cohorts. Students attained higher levels of outcomes on text-based field inputs (i.e. field notes, observations, lecturer pointers) and lower levels of outcomes, albeit with some variability, in sketching, the use of field conventions, and field symbols. The collected notebooks show little indication of independent, higher-order thinking in the field. Students showed the highest levels of outcomes in recording field information using text (mean level = 3.1), identifying features (mean level = 3.1), and field interpretation (mean level = 3.1). The use of field conventions (symbols and patterns) showed the most limited level of attainment, although there was a marked improvement in the 2017 cohort relative to the other cohorts. The symbols and objects used range from basic ones displaying orientation and scale for site maps and sketches, through structural symbols denoting strike, dip, and azimuth, to more advanced ones requiring some degree of field interpretation, such as rock type, internal structures, and fossils. It is not unexpected that students overlook recording logistical information, as well as the use of symbols and patterns, as these are conventions that require experience in the field to develop and become familiar with. Figure 3 shows an example of a high-quality field illustration with well-labelled features of the landscape. However, even this example lacks basic field conventions such as scale and orientation. A similar pattern of focusing on the aesthetics of the sketch, without including the details necessary to interpret its meaning, is common in the field notebook corpus.
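The cohort-level means reported here reduce to simple per-parameter averages of the rubric scores. A minimal sketch of that computation follows; the scores and parameter names are invented for illustration, not the study's data.

```python
from statistics import mean

# Each notebook is scored 0-4 (SOLO level) per rubric parameter.
# These example scores are hypothetical.
notebooks = [
    {"text_recording": 3, "feature_identification": 3, "field_conventions": 1},
    {"text_recording": 4, "feature_identification": 3, "field_conventions": 2},
    {"text_recording": 3, "feature_identification": 2, "field_conventions": 1},
]

def cohort_means(scores):
    """Average SOLO level per rubric parameter, rounded to one decimal."""
    params = scores[0].keys()
    return {p: round(mean(nb[p] for nb in scores), 1) for p in params}
```

Applied per cohort and per parameter, averages of this form are what a chart like Figure 2 compares across years.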
Figure 3. Example of a site sketch from a student involved in the 2015 Bali field trip.
As alluded to earlier, the Year 1 undergraduates participating in the Bali field experience may lack the sophistication and knowledge needed to identify and interpret features in the world and to translate them into symbols in their notebooks. However, a senior lecturer responsible for a significant percentage of Year 1 undergraduate training is known to be fastidious in ensuring proper instruction during pre-fieldwork note-taking exercises, during which he drills the practice into students using mnemonic devices. He instituted the protocol using the acronyms ‘LEMONS’ (Location/Time, Examination, Measurements, Observations, Names, and Sketches/Photographs) and ‘OASIS’ (Orientation, Annotation, Scale, Interpretation, and Sketch what you see) as reminders of what to do when recording field observations. Although these mnemonics are inherently helpful, the scores seem to indicate that some degree of monitoring and policing is still required for students to add these elements to their field notebooks diligently.
To investigate whether students preferentially record their observations in text vis-à-vis sketches, we quantified the amount of space devoted to each mode of recording information (Figure 4). Data collected from participants of the 2015 and 2016 fieldwork experiences were combined and compared with the 2017 fieldwork experience.
Figure 4. Boxplots of raw counts of notebook pages used, sketch maps, and sample sketches by students by year.
The results suggest that, across cohorts, the earth science students generally gravitated toward recording field information in text rather than in pictorial/sketch form, with the variance between the two modes increasing significantly by the 2017 cohort. The number of sketches (often associated with biological specimens, rock clasts, etc.) increased as well. However, the number of sketch maps remained largely similar. This is unsurprising given the fixed number of localities students are brought to over the years.
Such results suggest that students display a strong bias toward recording field information in textual form even when the integration of visual information (e.g. an annotated field map) is required or preferred. Many of these sketches are also prescribed field-based tasks, so students do not exercise choice in selecting this mode of scientific communication. Nonetheless, a clear strength was the comprehensiveness and detail of the written notes, as well as their clarity and legibility under field conditions.
Figure 5 shows a notebook entry where a student exercised self-questioning techniques and higher-order critical thinking to analyse phenomena and features in the field. Often, such analysis involves leveraging a conceptual understanding of how natural features evolve over time to interpret the surrounding environmental cues. However, such deep interpretation was infrequently detected in the corpus. It was also difficult to differentiate between students making higher-order schematic links to what they learned in their lectures and students merely recording the field instructors’ observations verbatim.
Figure 5. Examples of evidence for higher-order thinking in the field.
These observed trends may indicate that students could make field note entries related to what they learned in the field, but did not necessarily think independently or creatively about the various environmental issues they encountered. Many of the notes related to verbal instructions received while in the field. One of the programme’s senior lecturers said he wanted to see cogent observations, comments, and questions indicating that students were looking critically at their environment and linking those insights to lecture content and other prior knowledge. Similar behaviours have been observed elsewhere, where students seemingly struggled to connect content from lectures and field-based activities (Hill & Woodland, 2002; Thompson, Ngambeki, Troch, Sivapalan, & Evangelou, 2012; Van Loon, 2019).
However, the disconnect between what happens in the classroom and what is observed in the field may lie in the inherent nature of the field-based tasks, i.e. whether they overtly possess traits associated with problem solving or require higher-order thinking to accomplish. For example, the task given to students at the Kopi Luwak Cliff in Bali (Figure 5) was to “describe the outcrop”. The expected outcome was that students would compose “descriptions of ignimbrite, pumice, and volcanic bombs”. With such a narrowly intended outcome, the impetus to analyse the geological features beyond the requirements of the task falls upon the students, and few were observed recording higher-level comments and/or making cogent links and interpretations based on field evidence. Riggs, Lieder, and Balliet (2009) observed that problem solving in the field typically involves the use of navigational, spatial, and other problem-solving skills to operate in data-poor conditions. If field-based tasks are structured to include such elements, field notebooks would then serve as an extension of students’ cognitive processes and hence a suitable proxy for measuring their problem-solving skills (Balliet et al., 2015).
We compared the rubric scores of the same student cohorts over time as they chose their respective specialisations after Year 2. The aim was to assess the evolution of field-based competencies over time and establish the degree of ‘learning progression’ attained (King, 2019). We compared rubric scores between the Bali field cohorts of 2016 and 2017, the Borneo field cohort of 2017 (geoscience), and the Sri Lanka field cohort of 2018 (ecology) (Figure 6).
Figure 6. Comparison of rubric scores between different fieldwork modules taken by the same cohort.
Note: the ecology rubric differs slightly, excluding ‘Symbols and patterns’ and substituting ‘Identification’ with ‘Data collection’.
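The cohort-over-time comparison could be computed as below, assuming each notebook receives a SOLO-style level per rubric criterion. All criterion names, module labels, and scores here are hypothetical, chosen only to illustrate the calculation:

```python
from statistics import mean

# Hypothetical SOLO-level rubric scores (1 = prestructural ... 5 = extended
# abstract) per criterion, for the same cohort at two time points.
scores = {
    "Bali 2016": {
        "Organisation": [3, 3, 4, 2],
        "Text recording": [3, 4, 3, 3],
        "Interpretation": [2, 3, 2, 2],
    },
    "Borneo 2017": {
        "Organisation": [4, 4, 4, 3],
        "Text recording": [4, 4, 3, 4],
        "Interpretation": [2, 2, 1, 2],
    },
}

def progression(scores, earlier, later):
    """Mean change per criterion between two fieldwork modules."""
    return {
        criterion: round(mean(scores[later][criterion]) - mean(scores[earlier][criterion]), 2)
        for criterion in scores[earlier]
    }

print(progression(scores, "Bali 2016", "Borneo 2017"))
```

A positive value indicates learning progression on that criterion, a negative value a decline; the illustrative numbers above mirror the pattern reported here (gains in organisational and textual skills, declines in interpretation).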
There appear to be some similarities between the progression of geoscience students and that of ecology students. Both groups showed increases in the sophistication of their organisational and textual recording skills, but decreases in areas involving interpretation, critical thinking, and analysis.
The geoscience students showed a marked increase in the sophistication of their use of symbols and patterns in their later notebooks. This finding is an encouraging indication of greater cognisance and familiarity with such conventions amongst students. However, this result was not accompanied by a similar increase in higher-order thinking and quality of field sketches.
In the case of the ecology students, there was a strong increase in data collection, which is equivalent to the identification of field features in the geosciences. We may infer that over time, students improved their observational skills and were more motivated to record such information as sketches instead of mere text. Perhaps because the field-based tasks focused more on developing ecological study protocols, these notebooks did not show as much evidence of critical thinking. The steep decrease in the level of outcome attainment for logistical information is due to students recording such information elsewhere.
A notable trend was the greater use of photographs and web printouts to augment drawings and writing. Some students also incorporated knowledge from other modules in their field sketches (e.g. fault symbols for the Borneo field trip). Surprisingly, however, the quality of site maps and geological sketches generally decreased in the later notebooks compared to the earlier ones. Presumably, this decrease was due to a change in instructor expectations and/or assessment requirements.
Interestingly, the data also suggest that male students typically sketched more maps while female students took more pages of notes and sketches of features or specimens. This finding is inconclusive; nonetheless, it could be an interesting avenue for future study, as highlighted by earlier work (e.g. Boyle et al., 2007).
The strong focus on overseas residential fieldwork for earth science undergraduates provides a platform for understanding student learning and the development of critical competencies in earth science disciplines. The Bali fieldwork experience is an excellent starting point for measuring baseline competencies before students progress to their respective disciplines. To measure programme outcomes and growth in the expression of the university’s graduate attributes, we crafted a rubric to assess students both summatively (end of module) and formatively (within their four-year programme). The results generally show students demonstrating strong organisational, recording (textual), and identification skills, but faring more poorly in using field conventions and showing less evidence of higher-order thinking in the field.
The findings from this study will help shape student mindsets regarding fieldwork and note-taking requirements, and provide a standardised measure of fieldwork rigour. The rubric scores will become not just ‘assessment of learning’ (summative) but also ‘assessment for learning’ (formative) for students, who will better understand learning targets, leading to personal judgements on how to revise and improve their performance (Reddy & Andrade, 2010). They will thus gradually internalise what good fieldwork entails (Level 4 of the SOLO taxonomy) across its various aspects, and also be able to peer-assess other students, which would aid their own learning and development. Such studies are invaluable in highlighting the conditions required to optimise student learning in organised fieldwork (Boyle et al., 2007).
To date, one senior lecturer heavily involved in leading and conducting student fieldwork has adopted the newly created rubrics and communicated them to his students. Feedback from his students was positive, and the instructor hopes the rubrics will both guide student learning in the field and serve as a basis for standardisation and improvement across the various modules offered.
Implications and Future Work
Programme staff involved in fieldwork and notebook assessment will be guided by these requirements when conducting fieldwork and communicating the standards to their students. This will align all staff in how they conduct fieldwork, as they will understand the prior experiences their students gained during previous fieldwork and how to build upon those experiences through proper scaffolding and pitching of their own field component.
Exemplars from the notebook corpus will provide a good reference for standardising the various competency levels in the rubrics. These exemplars will be useful in giving students a clear idea of fieldwork expectations as well as the conventions used in the various specialisations (e.g. fault symbols in geological fieldwork).
Going forward, we hope to extend the use of this rubric to other fieldwork modules to test its applicability across courses and disciplines. Further, this project may ultimately contribute to improvements in fieldwork design, as faculty-in-charge can adapt existing field processes to boost student learning and note-taking abilities (Marvell, Simm, Schaaf, & Harper, 2013). This can include consciously posing higher-order questions in the field, providing sufficient time and space for students to make better site maps, and more. This may be further developed into problem-based learning components in the field, which can be appropriate for advanced undergraduates (Riggs et al., 2009), such as those in the earth science programme.
Acknowledgements
This project was funded with support from the Nanyang Technological University’s Teaching, Learning and Pedagogy Division (TLPD) through the Edex Grant. The methods and procedures for this study were approved by the university’s Institutional Review Board (IRB-2017-05-002).
References
Assaraf, O. B. Z., & Orion, N. (2005). Development of system thinking skills in the context of earth system education. Journal of Research in Science Teaching, 42(5), 518-560. https://doi.org/10.1002/tea.20061
Ballantyne, R., Anderson, D., & Packer, J. (2010). Exploring the impact of integrated fieldwork, reflective and metacognitive experiences on student environmental learning outcomes. Australian Journal of Environmental Education, 26, 47-64. http://dx.doi.org/10.1017/s0814062600000823
Balliet, R. N., Riggs, E. M., & Maltese, A. V. (2015). Students' problem solving approaches for developing geologic models in the field. Journal of Research in Science Teaching, 52(8), 1109-1131. http://dx.doi.org/10.1002/tea.21236
Biggs, J. B., & Collis, K. F. (1989). Towards a model of school-based curriculum development and assessment using the SOLO taxonomy. Australian Journal of Education, 33(2), 151-163. https://doi.org/10.1177/168781408903300205
Biggs, J. B., & Collis, K. F. (2014). Evaluating the quality of learning: The SOLO taxonomy (Structure of the Observed Learning Outcome). New York: Academic Press.
Boyle, A., Maguire, S., Martin, A., Milsom, C., Nash, R., Rawlinson, S., . . . Conchie, S. (2007). Fieldwork is good: the student perception and the affective domain. Journal of Geography in Higher Education, 31(2), 299-317. http://dx.doi.org/10.1080/03098260601063628
Brookhart, S. M., & Chen, F. (2014). The quality and effectiveness of descriptive rubrics. Educational Review, 67(3), 343-368. http://dx.doi.org/10.1080/00131911.2014.929565
Dawson, P. (2017). Assessment rubrics: towards clearer and more replicable design, research and practice. Assessment & Evaluation in Higher Education, 42(3), 347-360. http://dx.doi.org/10.1080/02602938.2015.1111294
Dohaney, J., Brogt, E., & Kennedy, B. (2015). Strategies and perceptions of students' field note-taking skills: Insights from a geothermal field lesson. Journal of Geoscience Education, 63(3), 233-249. http://dx.doi.org/10.5408/13-026.1
Farnsworth, J. S., Baldwin, L., & Bezanson, M. (2014). An invitation for engagement: Assigning and assessing field notes to promote deeper levels of observation. Journal of Natural History Education and Experience, 8, 12-20. Retrieved from https://journal.naturalhistoryinstitute.org/journal/articles/an-invitation-for-engagement-assigning-and-assessing-field-notes-to-promote-deeper-levels-of-observation/.
Feig, A. D. (2010). Technology, accuracy and scientific thought in field camp: An ethnographic study. Journal of Geoscience Education, 58(4), 241-251. http://dx.doi.org/10.5408/1.3534863
Fuller, I. C. (2006). What is the value of fieldwork? Answers from New Zealand using two contrasting undergraduate physical geography field trips. New Zealand Geographer, 62(3), 215-220. https://doi.org/10.1111/j.1745-7939.2006.00072.x
Glass, M. R. (2015). International geography field courses: practices and challenges. Journal of Geography in Higher Education, 39(4), 485-490. http://dx.doi.org/10.1080/03098265.2015.1108044
Green, W., Hammer, S., & Star, C. (2009). Facing up to the challenge: why is it so hard to develop graduate attributes? Higher Education Research & Development, 28(1), 17-29. https://doi.org/10.1080/07294360802444339
Hannula, K. A. (2019). Do geology field courses improve penetrative thinking? Journal of Geoscience Education, 67(2), 143-160. http://dx.doi.org/10.1080/10899995.2018.1548004
Hill, J., & Woodland, W. (2002). An evaluation of foreign fieldwork in promoting deep learning: A preliminary investigation. Assessment & Evaluation in Higher Education, 27(6), 539-555. http://dx.doi.org/10.1080/0260293022000020309
Jolley, A., Hampton, S. J., Brogt, E., Kennedy, B. M., Fraser, L., & Knox, A. (2019). Student field experiences: designing for different instructors and variable weather. Journal of Geography in Higher Education, 43(1), 71-95. http://dx.doi.org/10.1080/03098265.2018.1554632
Jonsson, A. (2014). Rubrics as a way of providing transparency in assessment. Assessment & Evaluation in Higher Education, 39(7), 840-852. http://dx.doi.org/10.1080/02602938.2013.875117
Kastens, K. A., Manduca, C. A., Cervato, C., Frodeman, R., Goodwin, C., Liben, L. S., . . . Titus, S. (2009). How geoscientists think and learn. Eos, Transactions American Geophysical Union, 90(31), 265-266. https://doi.org/10.1029/2009EO310001
King, C. J. H. (2019). What pattern of progression in geoscience fieldwork can be recognised by geoscience educators? Geosciences, 9(5), 192. http://dx.doi.org/10.3390/geosciences9050192
King, E., O'Donnell, F., & Caylor, K. (2012). Reframing hydrology education to solve coupled human and environmental problems. Hydrology and Earth System Sciences, 16(11), 4023-4031. Retrieved from https://www.hydrol-earth-syst-sci.net/16/4023/2012/.
Kortz, K. M., & Murray, D. P. (2009). Barriers to college students learning how rocks form. Journal of Geoscience Education, 57(4), 300-315. http://dx.doi.org/10.5408/1.3544282
Krakowka, A. R. (2012). Field trips as valuable learning experiences in geography courses. Journal of Geography, 111(6), 236-244. http://dx.doi.org/10.1080/00221341.2012.707674
Maltese, A. V., Balliet, R. N., & Riggs, E. M. (2013). Through their eyes: Tracking the gaze of students in a geology field course. Journal of Geoscience Education, 61(1), 81-88. http://dx.doi.org/10.5408/11-263.1
Marvell, A., Simm, D., Schaaf, R., & Harper, R. (2013). Students as scholars: evaluating student-led learning and teaching during fieldwork. Journal of Geography in Higher Education, 37(4), 547-566. https://doi.org/10.1080/03098265.2013.811638
Petcovic, H. L., Stokes, A., & Caulkins, J. L. (2014). Geoscientists’ perceptions of the value of undergraduate field education. GSA Today, 24(7), 4-10. http://dx.doi.org/10.1130/GSATG196A.1
Reddy, Y. M., & Andrade, H. (2010). A review of rubric use in higher education. Assessment & Evaluation in Higher Education, 35(4), 435-448. http://dx.doi.org/10.1080/02602930902862859
Riggs, E. M., Lieder, C. C., & Balliet, R. (2009). Geologic problem solving in the field: Analysis of field navigation and mapping by advanced undergraduates. Journal of Geoscience Education, 57(1), 48-63. https://doi.org/10.5408/1.3559525
Schiappa, T. A., & Smith, L. (2018). Field experiences in geosciences: A case study from a multidisciplinary geology and geography course. Journal of Geoscience Education, 67(2), 100-113. http://dx.doi.org/10.1080/10899995.2018.1527618
Scott, G. W., Humphries, S., & Henri, D. C. (2019). Expectation, motivation, engagement and ownership: using student reflections in the conative and affective domains to enhance residential field courses. Journal of Geography in Higher Education, 43(3), 280-298. http://dx.doi.org/10.1080/03098265.2019.1608516
Shavelson, R. J., Young, D. B., Ayala, C. C., Brandon, P. R., Furtak, E. M., Ruiz-Primo, M. A., . . . Yin, Y. (2008). On the impact of curriculum-embedded formative assessment on learning: A collaboration between curriculum and assessment developers. Applied Measurement in Education, 21(4), 295-314. https://doi.org/10.1080/08957340802347647
Steer, D., Iverson, E. R., Egger, A. E., Kastens, K. A., Manduca, C. A., & McConnell, D. (2019). The InTeGrate Materials Development Rubric: A framework and process for developing curricular materials that meet ambitious goals. In Interdisciplinary Teaching About Earth and the Environment for a Sustainable Future (pp. 25-43). Cham: Springer International Publishing.
Streule, M. J., & Craig, L. E. (2016). Social learning theories—An important design consideration for geoscience fieldwork. Journal of Geoscience Education, 64(2), 101-107. http://dx.doi.org/10.5408/15-119.1
Switzer, A., & Kennedy, D. (2013). Techniques and Methods for the Field: An Introduction and Commentary. In J. F. Schroder (Ed.), Treatise on Geomorphology (pp. 105-109). London: Elsevier.
Thompson, S., Ngambeki, I., Troch, P. A., Sivapalan, M., & Evangelou, D. (2012). Incorporating student-centered approaches into catchment hydrology teaching: a review and synthesis. Hydrology and Earth System Sciences, 16(9), 3263-3278. Retrieved from https://www.hydrol-earth-syst-sci.net/16/3263/2012/.
Thomson, J. A., Buchanan, J. P., & Schwab, S. (2006). An integrative summer field course in geology and biology for K-12 instructors and college and continuing education students at Eastern Washington University and beyond. Journal of Geoscience Education, 54(5), 588-595. http://dx.doi.org/10.5408/1089-9995-54.5.588
Van Loon, A. F. (2019). Learning by doing: enhancing hydrology lectures with individual fieldwork projects. Journal of Geography in Higher Education, 43(2), 155-180. http://dx.doi.org/10.1080/03098265.2019.1599330
About the Corresponding Author
Stephen CHUA is a Research Fellow in the Earth Observatory of Singapore at Nanyang Technological University. His research interests include Quaternary stratigraphy, and past sea-level and palaeoenvironmental change for Singapore and the region.