A Study on the Effect of Structured versus Unstructured Company Advisors’ Feedback on Innovation Project Team Performance
Article
1Sarah CHEAH, 2Mark GAN Joo Seng, 1LI Shiyu, 3Bimlesh WADHWA
1 Dept of Management & Organisation, NUS Business School
2 Centre for Development of Teaching & Learning, NUS
3 Dept of Computer Science, School of Computing, NUS
Correspondence
Name: Associate Professor Sarah CHEAH Lai Yin
Address: NUS Business School, 15 Kent Ridge Drive, BIZ1-8-46, Singapore 119245
Email: bizclys@nus.edu.sg
Recommended Citation:
Cheah, S., Gan, M. J. S., Li, S. Y., & Wadhwa, B. (2020). A study on the effect of structured versus unstructured company advisors’ feedback on innovation project team performance. Asian Journal of the Scholarship of Teaching and Learning, 10(1), 27-39.
Abstract
To equip university students with the necessary knowledge and skills for their future workplace, companies provide opportunities for students to participate as “trainee consultants” in innovation-related projects as a form of experiential learning. While we recognise the importance of company advisors’ feedback on students’ projects, there are limited studies on the relationship between advisors’ feedback and student performance. In this quasi-experimental study, we investigate how structured advisors’ feedback could influence students’ learning and project performance. A total of 40 Year Three and Four undergraduates from the NUS Business School volunteered for this study. They were randomly assigned to 11 teams, with 19 students in the treatment groups and 21 students in the control groups. Five company advisors were asked to provide written feedback for the treatment groups using a feedback form that focused students’ attention on three levels (task, process, and self-regulation; Hattie & Timperley, 2007) at three project milestone stages. Students in the control groups received feedback from company advisors without any feedback form. Upon completion of the project, a survey was conducted to measure students’ attitude towards feedback in supporting learning and the perceived usefulness of company advisors’ feedback. The results showed that students who received structured feedback achieved better overall project performance than those who did not. Students who received structured feedback also showed a more positive attitude towards feedback and perceived feedback as more useful than those who did not. The study adds to our understanding of the quality of company advisors’ written feedback and emphasises the need to provide ongoing support for advisors and students in the feedback process.
Keywords: Project-based learning, feedback, innovation, team performance, experiential learning
Introduction
In tertiary education, students are offered company projects as a form of experiential learning. To improve students’ learning in terms of the quality of their project work, educational institutions encourage company advisors to provide feedback to students on their work. Feedback has powerful effects on learning; however, this impact can be either positive or negative. Feedback given by advisors at an early stage of a project, such as upon completion of initial tasks, tends to have a more positive influence on students’ performance than feedback given at a later stage. Unclear evaluative feedback, which fails to specify the criteria for successful performance, could have deleterious effects on students’ learning and achievement. The tendency to avoid negative feedback for fear of hurting learners’ feelings could also significantly limit learners’ opportunities to enhance their performance. Company advisors therefore play an important role in optimising learning outcomes that benefit both parties, particularly through the feedback they give students on their projects. Unfortunately, there are limited studies on the relationship between company advisors’ feedback and student performance.
Project-based learning (POBL) is a constructivist method for creating meaningful learning experiences through hands-on problem-solving activities, often using a real-life problem to trigger inquiry activities in which students ask questions, search for information, brainstorm, design, and test alternative solutions (Thomas, 2000). During this inquiry process, learners create artefacts by applying what they previously learned or what they have searched for and acquired along the way. The created artefacts represent students’ solutions to the problem, and are often shared with and critiqued by peers and company advisors for further improvement. Unfortunately, while feedback is provided throughout the inquiry process, there is currently no way in the project module to track or monitor how students are using the feedback. In addition, the nature and quality of the feedback provided are also unexplored.
Our exploratory study investigated how the nature and quality of the advisors’ feedback influence the learning and project performance of students from the NUS Business School. In particular, this study introduced a form with three feedback levels (task, process, and self-regulation) for company advisors to provide written feedback to students. We believe that our approach of integrating multi-level feedback as part of learning and assessment is original and can better support our students in their learning through the company projects. By integrating the feedback framework into POBL, we develop a better understanding of how students use the feedback in their learning. At the same time, the insights gained from this study have given the company advisors an opportunity to improve the way feedback is generated and provided for students’ learning.
Literature Review
Project-based learning
For the past few decades, higher education has shifted from traditional instructor-centred approaches to more student-centred learning, with growing emphasis on self-direction, collaboration, and practice orientation. POBL is one such innovative learning approach, and may be defined as “a model that organises learning around projects” (Thomas, 2000, p. 1). POBL is also grounded in constructivism, where learners are expected to learn by doing, honing their critical thinking skills and directing their learning process through active participation (Gülbahar & Tinmaz, 2006). POBL typically involves small teams and activities close to professional practice, and fosters the application rather than the acquisition of knowledge; it therefore requires accompanying coursework and demands self-direction as well as the management of roles and resources (Perrenet, Bouhuijs, & Smits, 2000). Mills and Treagust (2003) studied how courses were implemented at Aalborg University (Denmark), Monash University, and Central Queensland University, and their results suggested that POBL is particularly suitable for engineering education. In their investigation of engineering education at Aalborg University, which applied POBL, Lehmann, Christensen, Du, and Thrane (2008) found that this approach fostered creativity and encouraged the development of problem identification and problem-solving skills. Meanwhile, Martínez-Monés et al. (2005) evaluated the POBL approach in a university computer architecture course involving concurrent, inter-related cases undertaken by different student teams. They concluded that through POBL, the students not only acquired broad and deep knowledge, but also developed planning and collaboration skills.
Feedback
There is a substantial body of research on feedback and its relationship with learning outcomes and performance (Hattie, Gan, & Brooks, 2017). It has been established that feedback is one of the most powerful influences on learning and achievement, and that its impact can be negative or positive (Hattie & Timperley, 2007). An important purpose of feedback is to reduce the discrepancies between students’ current levels of understanding of course content and academic performance, and a desired academic goal (Hattie & Timperley, 2007). Teachers can reduce this gap by clarifying goals and by using feedback to enhance students’ commitment to, and effort towards, achieving those goals. In particular, teachers may address three feedback questions:
• Where am I going?
• How am I going?
• Where to next?
The first question pertains to goals, which relate to feedback by informing individuals of their progress and allowing students to go a step further and set more challenging goals, thereby establishing the conditions for learning (Locke & Latham, 1990). However, when feedback is not directed towards the achievement of the goals, such as feedback about basic grammar or spelling in students’ reports rather than the critical dimensions of the goals, it will be ineffective in reducing discrepancies between performance and desired goals (Timperley & Parr, 2005). To answer the second question of “How am I going?”, a teacher has to provide feedback about task performance relative to some expected standard. Feedback is effective when it contains information about the student’s progress and how he or she should proceed. However, attention to this information tends to lead to assessment or testing, which often conveys limited feedback information to help students know how they are progressing. With regard to the third question of “Where to next?”, instruction from teachers tends to lead to more information, more tasks, and more expectations, rather than to information that opens up further opportunities for learning, such as enhancing students’ capacity to exercise self-regulation and refine their strategies for working on tasks.
Hattie and Timperley’s (2007) four levels of feedback provide a useful framework for thinking about feedback at the levels of task, process, self-regulation, and self. They contended that feedback at the task level is most powerful when it addresses faulty interpretations rather than a lack of information, and most effective when it supports the building of cues and information regarding erroneous hypotheses, leading to the development of more effective and efficient strategies for processing and understanding the material (Harackiewicz, 1979). Feedback at the process level is most beneficial when it helps students reject erroneous hypotheses and points them to directions for searching and strategising (Earley, 1988). Feedback that attends to self-regulation is powerful to the extent that it leads to further engagement with the tasks (Butler & Winne, 1995). Feedback at the level of self, on the other hand, is rarely effective as it is seldom directed at addressing the three feedback questions (Wilkinson, 1981). In a recent study of primary and secondary school teachers, Brown, Harris, and Harnett (2012) found that most teachers agreed with task-level feedback, showed moderate agreement with process and self-regulation feedback, and slight to moderate agreement with self-level feedback. In this study, we adopted the task, process, and self-regulation feedback levels to conceptualise and design a feedback form for company advisors to provide written comments on students’ projects. Self-level feedback was not included as it is unlikely to help students in revising their respective projects.
A big part of feedback effectiveness has to do with students’ use of feedback to improve their learning, or what Sutton (2012) considered as feedback literacy. Carless and Boud (2018) described the notion of feedback literacy as “the understandings, capacities and dispositions needed to make sense of information and use it to enhance work or learning strategies” (p. 1316). For students to benefit from feedback, they need to recognise and appreciate different forms of feedback, make sound judgements about their own work, manage their emotional responses to feedback, and act upon the given comments in an informed and meaningful manner (Carless & Boud, 2018). At the same time, the teacher plays an important role in creating an open environment for students to engage actively with the feedback, building a trusting student-teacher relationship, and modelling the productive use of feedback through dialogue and procedural facilitations, such as the use of question prompts (Gan & Hattie, 2014) and exemplars (Handley & Williams, 2011). In this study, to provide opportunities for students to use the company advisors’ feedback, we included three progressive stages for the project report writing and final presentation.
Our review highlights the importance of focussing on feedback in terms of levels, whereby the feedback contains information that directs students’ attention towards task completion, deepens their procedural understanding, and prompts self-monitoring processes (Hattie et al., 2017). In addition, we also pointed out that feedback should provide opportunities for students to develop a deeper understanding of the criteria for achieving the learning outcomes, and to practise using the criteria in future work. In view of the benefits of POBL in enhancing learning, and the power of feedback in reducing the gap between actual and expected performance levels, this study proposes the integration of these approaches to improve the learning outcomes of undergraduates from the NUS Business School in their company projects.
This study aims to address two research questions:
RQ1: To what extent does company advisors’ feedback affect the students’ overall company project performance?
RQ2: To what extent does company advisors’ feedback affect the students’ attitude towards feedback and their perceived usefulness of feedback in their projects?
Methods
Participants
The participants for this study comprised 11 teams of undergraduates from the NUS Business School taking the Field Service project module in the third or fourth year of their degree courses. The themes of the projects generally pertained to innovation, as the School worked closely with public sector innovation agencies and private companies to enhance the undergraduates’ knowledge and understanding of the innovation ecosystem (Cheah & Ho, 2019; Cheah, 2016). In particular, the teams were challenged in their projects to think of innovative ways to solve industry problems, with the view to enhancing their role breadth self-efficacy to prepare them for the future workplace (Cheah, Li, & Ho, 2019). Each team comprised three to four students, with a mix of both males and females, aged 22 to 25 years old. The total number of participants was 40 (Female = 15, Male = 25).
Design and procedure
Our study adopted a quasi-experimental design comprising a treatment group and a control group. The treatment group consisted of four teams of business students, while the control group comprised another seven teams. Participants from both the treatment and control groups went through their respective company project module under the guidance of company advisors. The company project was typically divided into four stages: (a) Scoping the Requirements, (b) Designing the Solution, (c) Implementing and Evaluating the Solution, and (d) Presenting the Report. Students had to submit a group written report for each stage of the project. To ensure fairness, the School appointed professors as module coordinators to screen all project proposals, and to approve and assign to students only those projects that were consistent in nature, scope, degree of difficulty, and company expectations. In our study, we incorporated feedback on the students’ project as part of formative assessment, comprising three cycles of company advisors’ feedback and students’ revisions. This is described in more detail in the following sections for the treatment and control groups.
Treatment group
In the treatment group, a feedback form was designed with the three levels of feedback (task, process, self-regulation) for company advisors to provide written feedback to the students at the end of each of the first three stages of the company project. The fourth or final stage involved a summative grade for the presentation of the whole project. A rubric was developed based on the criteria in Table 1, to help advisors provide structured feedback on the progress made at each level of the students’ performance—below criteria, meeting criteria, and exceeding criteria. Table 1 also shows examples of the feedback for the three levels at each of the first three stages. Prior to the start of the module, the company advisors were briefed on the use of the rubric with examples, as well as on giving verbal discursive feedback and written feedback using the structured feedback forms, which were based on the rubrics. A questionnaire was administered at the end of the project to measure students’ attitude towards feedback and their perceptions of the usefulness of the feedback for improving their project. A template of the structured feedback form used by company advisors is found in the Appendix.
Table 1
Criteria in rubric and feedback examples for each stage of the project
Control group
In the control group, the company advisors used the traditional method of providing feedback, which was ad hoc in terms of its schedule and unstructured in terms of the content of exchanges with students (with no clear distinction among the various feedback levels). The students were also invited to complete a survey questionnaire measuring their attitude towards feedback and the perceived usefulness of company advisors’ feedback.
Measures and data analysis
Table 2 provides the operationalisation of the four main measures of dependent variables (presentation scores, report scores, attitude towards feedback, and perceived usefulness of feedback) and the independent variable (with two levels—with or without structured company advisors’ feedback). Our team analysed the data using multivariate analysis of variance (MANOVA) to test simultaneously for statistically significant differences between the treatment and control groups on the four variables.
Table 2
Description of measures
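As an illustration of the analysis described above, the sketch below shows one way to run a one-way MANOVA in Python with statsmodels. The data file and column names (presentation, report, attitude, usefulness, group) are hypothetical placeholders, not the study’s actual dataset or code.

```python
# A minimal sketch of the MANOVA described above, assuming a CSV with one row
# per student and hypothetical column names: 'presentation', 'report',
# 'attitude', 'usefulness' (dependent variables) and 'group' (coded
# 'treatment' or 'control' for structured vs. unstructured feedback).
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

df = pd.read_csv("feedback_study.csv")  # hypothetical file name

# One-way MANOVA: four dependent variables, one two-level factor.
manova = MANOVA.from_formula(
    "presentation + report + attitude + usefulness ~ C(group)", data=df
)
print(manova.mv_test())  # multivariate test statistics, F values, and p-values
```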
Results
Table 3 presents the descriptive results for the 40 participating students. The results showed that students in the treatment group, who received structured feedback from company advisors, performed slightly better on the project report and presentation than students in the control group. Students in the treatment group also reported a more positive attitude towards feedback and higher perceived usefulness of the company advisors’ feedback, with mean values of 4.40 and 4.26 respectively, while students in the control group reported lower means of 3.98 and 3.37 respectively. Moreover, the standard errors of the control group were generally larger than those of the treatment group, suggesting greater variation in project performance among students who did not receive structured feedback compared to those who did.
Table 3
Descriptive results of the participants and outcome variables
The correlations for the four measured variables are presented in Table 4. The results show that, for the treatment group, students’ attitude towards feedback correlated with their presentation performance, but there was no significant correlation between the perceived usefulness of the company advisors’ feedback and students’ performance. Students with a positive attitude towards feedback tended to perceive the feedback as useful. We also calculated Cronbach’s alpha for the two measured scales: 12 items on students’ attitude towards feedback and 7 items on the usefulness of the company advisors’ feedback. The resulting reliability estimates of 0.80 and 0.94 indicate high homogeneity among the scale items.
Table 4
Correlation coefficients for relations between four measures of structured feedback influence
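For readers who wish to reproduce reliability and correlation estimates of this kind, the sketch below computes Cronbach’s alpha and a Pearson correlation matrix in Python. The DataFrame, item column names, and outcome column names are assumptions for illustration only, not the study’s data.

```python
# A minimal sketch for the reliability and correlation estimates reported above.
# Item column names (att_1 ... att_12, use_1 ... use_7) and outcome column names
# are hypothetical assumptions.
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

df = pd.read_csv("feedback_study.csv")  # hypothetical file name

attitude_items = df[[f"att_{i}" for i in range(1, 13)]]    # 12 attitude items
usefulness_items = df[[f"use_{i}" for i in range(1, 8)]]   # 7 usefulness items
print(cronbach_alpha(attitude_items), cronbach_alpha(usefulness_items))

# Pearson correlations among the four outcome measures
print(df[["presentation", "report", "attitude", "usefulness"]].corr())
```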
To address our research questions, a MANOVA was conducted (see Table 5). The findings showed a significant effect of structured feedback on the combined dependent variables, F(4, 35) = 8.43, p < 0.001, partial η2 = 0.491. Analysis of the dependent variables individually showed no significant effects for the presentation and report variables. However, the attitude and perception variables were statistically significant at a Bonferroni-adjusted alpha level of 0.013, with F(1, 38) = 14.63, partial η2 = 0.278 and F(1, 38) = 22.82, partial η2 = 0.375 respectively. These findings indicate that students who received structured feedback from company advisors demonstrated a significantly more positive attitude towards feedback and higher perceived usefulness of feedback than those who did not.
Table 5
Multivariate and univariate analyses of variance F ratios for structured feedback × (students’ performance, attitude, and perceptions)
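The follow-up univariate tests and effect sizes reported above can be illustrated with a short sketch: the Bonferroni-adjusted alpha is 0.05/4 ≈ 0.013, and partial η2 can be computed as SS_effect / (SS_effect + SS_error). The column names and file below are the same hypothetical assumptions used in the earlier sketches, not the study’s actual code.

```python
# A minimal sketch of follow-up univariate ANOVAs with a Bonferroni-adjusted
# alpha and partial eta-squared, assuming the same hypothetical DataFrame as
# above (columns: 'presentation', 'report', 'attitude', 'usefulness', 'group').
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

df = pd.read_csv("feedback_study.csv")  # hypothetical file name

outcomes = ["presentation", "report", "attitude", "usefulness"]
alpha_bonferroni = 0.05 / len(outcomes)  # 0.0125, reported as 0.013

for outcome in outcomes:
    model = smf.ols(f"{outcome} ~ C(group)", data=df).fit()
    aov = anova_lm(model, typ=2)
    ss_effect = aov.loc["C(group)", "sum_sq"]
    ss_error = aov.loc["Residual", "sum_sq"]
    partial_eta_sq = ss_effect / (ss_effect + ss_error)  # SS_effect / (SS_effect + SS_error)
    p_value = aov.loc["C(group)", "PR(>F)"]
    print(outcome, round(partial_eta_sq, 3), p_value < alpha_bonferroni)
```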
Discussion
Our study has several findings and implications. First, students who received structured feedback appear to have achieved better overall performance in their company projects than those who did not. This finding is in line with other research on the positive impact of targeted feedback in helping students achieve higher levels of understanding relative to where they were before the information was provided (Sadler, 1989; Shute, 2008). We noticed that providing students with task-, process-, and self-regulation-level feedback allowed for richer feedback interactions and helped students focus their attention on what to improve. The feedback form served an important purpose in supporting company advisors in formulating written feedback, and the criteria clearly indicated in the form further enhanced the subsequent discussions and conversations between advisors and students on their projects. In contrast, the control group students experienced feedback in an ad hoc manner, and opportunities for focused discussion of the criteria were very much dependent on the expertise of the advisors and the help-seeking behaviours of the students. This first finding has implications for the careful design of feedback during POBL. For feedback to be effective, the learning task (i.e. the project) needs to allow students to use the feedback to progressively revise their later work. In this study, this was made possible by organising the project into three progressive stages, whereby there were opportunities for students to revisit and revise their draft reports using the company advisors’ written feedback, i.e. closing the feedback loops (O’Donovan, Rust, & Price, 2016). At the same time, the company advisors were able to focus on the key criteria in the feedback form and draw students’ attention to the task, process, and self-regulation levels of their work. Again, this involves the detailed design of the feedback form with specific criteria and feedback levels (Gan & Hattie, 2014).
Second, structured feedback from company advisors helped students to develop a more positive attitude towards the feedback as well as a stronger perception of its usefulness. In other words, students who received more focused feedback (based on the levels) were more receptive to feedback as comments to improve their work, and more likely to put in the effort to use the feedback to revise their project. In the feedback literature, students with mastery or learning goals tend to respond positively to feedback, are keen to use feedback to understand or master something new, and are more willing to exert effort to achieve their goals (Dweck & Leggett, 1988). This is in contrast to students who pursue performance goals: they tend to avoid challenging tasks or obstacles and, in this case, are less inclined to use advisor feedback to make changes to their projects. This finding also aligns with Carless and Boud’s (2018) notion of developing students’ feedback literacy. Students with a positive attitude towards feedback not only recognise the importance of feedback, they also appreciate the different ways in which feedback can be used to enhance learning. They are also disposed to approach feedback in a way that will help inform their work, and are willing to put in the effort to self-evaluate in relation to the criteria and feedback provided. The pedagogical implication is to embed opportunities for students to develop feedback literacy through the design of feedback interactions, with deliberate practice and by leveraging POBL in solving real-life industry problems.
Third, there was no significant difference between the treatment and control groups in the students’ project performance in report writing and the final presentation. We attribute this finding to other factors that might mediate the use of feedback and students’ revision of their projects. Our analysis of the qualitative comments from students revealed that, in both groups, they appreciated the company advisors’ feedback but needed to engage in further clarification and discussion, both within their group and with the advisors. Such interactions can only occur when students and advisors have taken the time to evaluate the current status of the project (using the criteria in the feedback form) and have decided on the next steps to take or further questions to ask. While feedback might have helped students in some aspects of their project, there are other factors that need to be taken into consideration before students can benefit fully from the company advisors’ feedback. This finding could also be explained by the need to further develop students’ feedback literacy.
In conclusion, our study demonstrated that company advisors, with the right support and tools, could provide effective feedback to improve students’ performance and more importantly, to create opportunities for developing their feedback literacy.
Limitations and Future Research
The study is not without its limitations. First, the company advisors provided feedback voluntarily and were self-selected. The findings might therefore not be generalisable to the rest of the company project cohorts in the Business School. Second, a small sample was used in this exploratory study. The improvements in reports and presentations by students who received structured feedback over those who did not appeared marginal in the results presented in Table 3. A larger sample size should be used in future studies to corroborate the findings. Future research should also consider how students actually make use of the company advisors’ feedback and examine what actually happens during the verbal feedback sessions.
Acknowledgements
This research is funded by the Tertiary Education Research Fund (TRF) from the Ministry of Education (MOE), Singapore. We would like to thank the company advisors and student participants for their support and active participation in this study. We are also grateful for the help and advice from our research assistants and co-investigators.
References
Brown, G. T. L., Harris, L. R., & Harnett, J. (2012). Teacher beliefs about feedback within an assessment for learning environment: Endorsement of improved learning over student well-being. Teaching and Teacher Education, 28, 968–978. https://doi.org/10.1016/j.tate.2012.05.003
Butler, D. L., & Winne, P. H. (1995). Feedback and self-regulated learning: A theoretical synthesis. Review of Educational Research, 65(3), 245–274. https://doi.org/10.3102%2F00346543065003245
Carless, D., & Boud, D. (2018). The development of student feedback literacy: enabling uptake of feedback. Assessment & Evaluation in Higher Education, 43(8), 1315-1325. https://doi.org/10.1080/02602938.2018.1463354
Cheah, S., & Ho, Y. P. (2019). Building the ecosystem for social entrepreneurship: University social enterprise cases in Singapore. Science, Technology and Society, 24(3), 507-526. https://doi.org/10.1177%2F0971721819873190
Cheah, S., Li, S. Y., & Ho, Y. P. (2019). Mutual support, role breadth self-efficacy, and sustainable job performance of workers in young firms. Sustainability, 11(12), 3333. https://doi.org/10.3390/su11123333
Cheah, S. (2016). Framework for measuring research and innovation impact. Innovation, 18(2), 212-232. https://doi.org/10.1080/14479338.2016.1219230
Dweck, C.S., & Leggett, E.L. (1988). A social-cognitive approach to motivation and personality. Psychological Review, 95(2), 256-273. https://psycnet.apa.org/doi/10.1037/0033-295X.95.2.256
Earley, P. C. (1988). Computer-generated performance feedback in the magazine subscription industry. Organizational Behavior and Human Decision Processes, 41, 50–64. https://doi.org/10.1016/0749-5978(88)90046-5
Gan, M. J. S., & Hattie, J. (2014). Prompting secondary students’ use of criteria, feedback specificity and feedback levels during an investigative task. Instructional Science, 42(6), 861-878. https://doi.org/10.1007/s11251-014-9319-4
Gülbahar, Y., & Tinmaz, H. (2006). Implementing project-based learning and e-portfolio assessment in an undergraduate course. Journal of Research on Technology in Education, 38(3), 309-327. https://doi.org/10.1080/15391523.2006.10782462
Handley, K., & Williams, L. (2011). From copying to learning: Using exemplars to engage students with assessment criteria and feedback. Assessment & Evaluation in Higher Education, 36(1), 95-108. https://doi.org/10.1080/02602930903201669
Harackiewicz, J. M. (1979). The effects of reward contingency and performance feedback on intrinsic motivation. Journal of Personality & Social Psychology, 37(8), 1352–1363. https://psycnet.apa.org/doi/10.1037/0022-3514.37.8.1352
Hattie, J., & Timperley, H. (2007). The power of feedback. Review of Educational Research, 77(1), 81–112. https://doi.org/10.3102%2F003465430298487
Hattie, J., Gan, M. J. S., & Brooks, C. (2017). Instruction based on feedback. In R.E. Mayer & P.A. Alexander (Eds.), Handbook of Research on Learning and Instruction (2nd ed.). New York: Routledge.
Lehmann, M., Christensen, P., Du, X., & Thrane, M. (2008). Problem-oriented and project-based learning (POPBL) as an innovative learning strategy for sustainable development in engineering education. European Journal of Engineering Education, 33(3), 283-295. https://doi.org/10.1080/03043790802088566
Locke, E. A., & Latham, G. P. (1990). A Theory of Goal Setting and Task Performance. Englewood Cliffs, NJ: Prentice Hall.
Martínez-Monés, A., Gómez-Sánchez, E., Dimitriadis, Y., Jorrín-Abellán, I., Rubia-Avi, B., & Vega-Gorgojo, G. (2005). Multiple case studies to enhance project-based learning in a computer architecture course. IEEE Transactions on Education, 48(3). http://dx.doi.org/10.1109/TE.2005.849754
Mills, J. & Treagust, D. (2003). Engineering education–is problem-based or project-based learning the answer? Australasian Journal of Engineering Education, 1-16. Retrieved from https://web.archive.org.au/awa/20050127221015mp_/http://pandora.nla.gov.au/pan/10589/20050128-0000/www.aaee.com.au/journal/2003/mills_treagust03.pdf.
O’Donovan, B., Rust, C. & Price, M. (2016). A scholarly approach to solving the feedback dilemma in practice. Assessment & Evaluation in Higher Education, 41(6), 938-949. https://doi.org/10.1080/02602938.2015.1052774
Perrenet, J. C., Bouhuijs, P. A. J. & Smits, J. G. M. M. (2000). The suitability of problem-based learning for engineering education: theory and practice. Teaching in Higher Education, 5(3), 345-358. https://doi.org/10.1080/713699144
Sadler, D. R. (1989). Formative assessment and the design of instructional systems. Instructional Science, 18, 145–165. https://doi.org/10.1007/BF00117714
Shute, V. J. (2008). Focus on formative feedback. Review of Educational Research, 78(1), 153-189. https://doi.org/10.3102%2F0034654307313795
Sutton, P. (2012). Conceptualizing feedback literacy: Knowing, being and acting. Innovations in Education and Teaching International, 49(1), 31-40. https://doi.org/10.1080/14703297.2012.647781
Thomas, J. W. (2000). A review of research on project-based learning. Retrieved 18 July 2005 from http://www.autodesk.com/foundation.
Timperley, H., & Parr, J. (2005). Literacy professional development project. Wellington: New Zealand Ministry of Education.
Wilkinson, S. S. (1981). The relationship of teacher praise and student achievement: A meta-analysis of selected research. Dissertation Abstracts International, 41(9–A), 3998. Retrieved from https://ufdc.ufl.edu/AA00048518/00001.
About the Corresponding Author
Sarah CHEAH is Associate Professor at the NUS Business School, where she lectures on corporate entrepreneurship, social entrepreneurship, and strategic foresight. She advises public and private sector organisations in the areas of technology roadmapping. Her research interests include technological innovation and strategic foresight. She has published in international journals such as Creativity and Innovation Management, The Journal of Technology Transfer, as well as Technological Forecasting and Social Change.