Educators’ pre-Covid19 lived experience of the assessment and feedback policy at a UK higher education institution

Mireilla Bikanga Ada1

1University of Glasgow, Glasgow, UK

Abstract

This study was conducted before the Covid19 pandemic. It used an interpretative thematic analysis approach to inquire into educators’ experiences of assessment and feedback policy and their views on assessment feedback. Eighteen teaching staff from a UK-based higher education institution expressed their opinions on various factors that facilitated or hindered the adoption of the assessment and feedback policy, along with their perceptions of assessment and feedback. The policy required that teaching staff provide feedback using technology such as Turnitin, with a two-week turnaround. Three themes were developed from the interpretative thematic analysis: 1) assessment and feedback policy as a personal and professional challenge, 2) mixed perceptions on the effectiveness of feedback, and 3) facilitating conditions for a successful assessment and feedback policy implementation. Workload and time constraints appeared to be the most prominent issues across all the interviews, affecting educators at every level of their professional and personal experiences. Insufficient time for attending meetings, providing timely and quality feedback, conducting research, pursuing professional development, and engaging in innovative strategies created tension between educators’ engagement and commitment to enhancing students’ learning and the pressure imposed by assessment and feedback policy demands. This research has implications for both educators and their professional development. Additionally, it can inform the university’s internal development of assessment practices and possibly broaden the understanding of assessment and feedback, particularly regarding the potential mismatch between the assessment and feedback policy and educators’ experiences and expectations.

Keywords

assessment feedback, assessment and feedback policy, interpretative phenomenological analysis, interpretative thematic analysis, higher education

Introduction

Assessment, considered one of the most conservative features in higher education (Bloxham, 2016), is a crucial yet challenging component of pedagogy that can potentially undermine the development of students’ learning capacity (Black, 2015). For years, assessment practices have been under pressure to change (Ferrell, 2012; Grion et al., 2018), while feedback continues to be a significant source of student dissatisfaction in their higher education experience (Deeley et al., 2019; Ferrell, 2012; Gray et al., 2022; Henderson et al., 2019). This dissatisfaction persists because assessment practices have been resistant to change, emphasising the need to understand stakeholders’ “change-readiness” and identify effective means of engaging them in the change process within their specific contexts (Ferrell, 2012, p. 21). In Scotland, achieving consistent professional and public understanding of assessment issues remains challenging, highlighting the complexity of the alignment of assessment with accountability in the multifaceted landscape of Scottish education (Hayward, 2015; Hutchinson & Young, 2011; Muir, 2022). Gaining such insights serves as a foundation for creating an assessment policy. The primary goal of developing an assessment policy is to diagnose and monitor student learning and improve the quality and quantity of learning (Brown, 2004). However, there are limited studies on actual feedback practices (Li & De Luca, 2014), as studies on assessment reform processes often focus on factors determining policy implementation success or failure rather than considering a broader perspective (Flórez Petour, 2015).

In the context of the recent Covid19 pandemic, educational institutions were forced to transition suddenly to online education, pushing even those resistant to change to adopt online and remote technologies for teaching and learning. Even before the pandemic, however, the adoption of learning technologies in higher education (HE) had been slow for two decades (Casanova & Huet, 2021). The study presented in this paper, conducted in a pre-Covid19 era, aims to explore the ways in which educators understand and conceptualise their experiences with the assessment and feedback policy and their perceptions of assessment feedback. This policy required educators to provide feedback using similarity checking and marking technology, such as Turnitin, within a two-week turnaround time. It was introduced during changes in the UK Electronic Management of Assessment (EMA) landscape (Ferrell, 2014), which encouraged higher education institutions to rethink their assessment and feedback practices by utilising technology, including virtual learning environments and tools like Turnitin, for assessment submission, marking, and feedback (Mann, 2016).

This study was conducted in one UK-based higher education institution (not the author's own) amid rising work pressures and educators’ frustrations with the blanket approach to the assessment and feedback policy implemented by their institution. The study seeks to reveal, from an interpretative viewpoint, how educators understand and conceptualise their experiences with the policy and their views on assessment feedback.

Background

Assessment and feedback have long been sources of students’ dissatisfaction (Williams & Kane, 2009). Recent National Student Survey (NSS) results from 2021 and 2022, which cover a period when education shifted primarily to online and blended learning due to the Covid19 pandemic, indicate that assessment and feedback remain problematic. This issue is exemplified by low scores in categories such as “Feedback on my work has been timely” and “I have received helpful comments on my work”. Factors affecting the effectiveness of feedback include delays, inconsistency, lack of clarity, vagueness, and illegible handwriting, which can lead to misunderstandings. International students may particularly struggle with verbal feedback due to language barriers (McCarthy, 2017). Feedback has been called the “cornerstone of all learning” (Colbran et al., 2016, p. 6) and the ‘Achilles’ heel’ of quality (Knight, 2002, p. 107), and it is an essential element in designing teaching (Cohen, 1985); yet improvements in policy and practice (Nicol, 2009; Nicol & Macfarlane-Dick, 2004) have not enhanced student learning (Nicol et al., 2014). Providing prompt, detailed, structured, relevant, and clear feedback can increase educators’ workloads, leading to further disappointment for staff. To address this issue, technology-enhanced assessment and feedback can be implemented to meet students’ expectations (Jensen et al., 2021, p. 173). Technology can support the assessment lifecycle, offering benefits such as improved clarity, timeliness, privacy, and reusability of feedback (Bikanga Ada, 2023; Ferrell, 2014; Jensen et al., 2021).

Despite efforts to promote technology-enhanced learning, some staff members still resist change (Kehrwald & McCallum, 2015; Laurillard et al., 2009). The use of technology in assessment, learning, and teaching is inconsistent (Henderson et al., 2017), and its adoption varies across disciplines (Brady et al., 2019). This is unsurprising, as academics’ mindsets can either “smooth the progress or hold back educational innovations” (Handal et al., 2013, p. 360). The policy discussed in this paper requires sustained use of technology for summative assessment and changes educators’ assessment marking practices and culture. Rienties (2014) found that the culture within an institution can be a limiting factor in academics’ acceptance of technology. However, other factors, such as a lack of digital skills and a lack of time for training, can also prevent the integration of technologies into teaching (Margaryan et al., 2011).

Numerous studies have explored perspectives on assessment and feedback, but educators’ experiences remain less thoroughly examined than students’, even though they are at the forefront of learners’ education and play a critical role in enacting teaching and learning policies. Gaining insight into educators’ experiences can enhance teaching and learning experiences, promote the adoption of various practices, contribute to more effective educational methods, and improve the quality of education. Li and De Luca’s (2014) review of 37 empirical studies on assessment feedback revealed that the main areas of research focused on undergraduate students’ diverse views on feedback effectiveness and usefulness, lecturers’ varied feedback strategies across disciplines, differing interpretations of assessment criteria, confusion regarding the dual roles of assessment feedback, and discrepancies between teachers’ beliefs and practices.

Carless (2006) collected students’ and tutors’ perceptions of assessment, marking, and feedback through a large-scale survey and semi-structured interviews. The key findings revealed a disparity between tutors’ and students’ perceptions. While tutors perceived themselves as providing more detailed and useful feedback, the students did not share this view to the same extent. Maggs (2014) investigated the satisfaction of college students and staff with assessment feedback in a case study using surveys and discovered that staff held mixed feelings about the feedback policy. The study recommended introducing electronic submission and marking, as electronic feedback allows students to retain copies of their work and feedback for future reference. However, staff expressed concerns about reading large amounts of text on computer screens. The study also suggested that staff should receive training on feedback purposes, while students should be taught to appreciate feedback and learn how to use it to improve future work. The necessity for staff feedback training has been emphasised in other studies, such as Beaumont et al. (2011), where tutors reported insufficient training in providing quality feedback, and Meyer et al. (2010), who observed that teaching staff lacked knowledge about effective assessment strategies and received minimal formal preparation for assessment responsibilities.

New policy introductions have often raised concerns. Ball et al. (2012) found that teachers viewed policy as an obstacle. Black and Wiliam (2010, p. 88) noted that teachers are unlikely to adopt “attractive” sounding ideas based on extensive research if they are presented as general principles that leave the task of translating them into practice solely to the teachers. Additionally, a policy’s ambiguity can contribute to its non-implementation. Meyer et al. (2010) discovered that policy and procedure documents provided limited guidance and seemed more reactive than visionary. Flórez Petour’s (2015) review highlights an abundance of literature that cites insufficient or low-level knowledge about assessment as an obstacle to assessment policy implementation, which policymakers often overlook. Meyer et al. (2010) conducted large-scale research on assessment policies and identified the lack of adequate evidence of a higher-level vision guiding procedures and requirements based on assessment purposes as a problem.

While numerous studies have explored educators’ experiences with assessment and feedback policy, none have examined their perceptions of feedback from an interpretative standpoint using interpretative phenomenological analysis (IPA). IPA is rooted in phenomenology, hermeneutics, and idiography. Phenomenology focuses on exploring and understanding human experience, hermeneutics is the theory of interpretation, and idiography is the in-depth analysis of single cases, examining individual perspectives of study participants in their unique contexts (Pietkiewicz & Smith, 2014). As a qualitative approach predominantly employed in qualitative psychology, especially in the UK (Smith & Osborn, 2007), IPA can help develop a detailed understanding of experiences and allow an in-depth account that quantitative methods cannot readily access. In essence, IPA “seeks to comprehend how a specific lived situation is experienced by a particular individual at a specific time, acknowledging that this experience is inseparably intertwined with the person’s lifeworld” (Eatough & Shaw, 2019, p. 50).

Characterised by a double hermeneutic, IPA involves the researcher attempting to make sense of the participant trying to make sense of what is happening to them (Smith & Osborn, 2007). IPA concentrates on individuals’ meaning-making processes and explores the experience and significance of that experience while paying attention to the language and emotions surrounding the experience for the participant (Clifford et al., 2019). IPA’s success in health psychology stems from its ability to enable healthcare professionals to look beyond quantifiable aspects of their work, such as treatment outcomes, survival rates, and clinical governance, and reach the heart of the patient’s lived experience (Biggerstaff & Thompson, 2008). In recent years, fields like sport and exercise, music, and theatre have adopted IPA (Farr & Nizza, 2019); however, the number of IPA-related studies in the educational domain remains limited (Smith, 2011). A deeper understanding of educators’ experiences with the policy and their views on assessment feedback can contribute to the development of more effective teaching and learning strategies, encourage the adoption of innovative practices, and ultimately improve the quality of education. The growing popularity of IPA in capturing participants’ lived experiences has led to its adaptation to accommodate various participant types. For instance, there is evidence of IPA being utilised in focus groups, albeit not in its ‘pure’ form (Love et al., 2020). IPA has also been combined with Thematic Analysis (TA) to analyse the same data: in Spiers and Riley’s (2019) study, which examines barriers and facilitators to help-seeking for distressed GPs, data collected from semi-structured interviews were suitable for both IPA and TA.

Methodology

Research questions

This study highlights some of the possible ways in which educators experienced their working lives in relation to the assessment and feedback policy, as well as their perspectives on assessment feedback, using an interpretative thematic approach (ITA). The guiding research questions are:

·        How do academics experience the assessment and feedback policy in the university?

·        How do educators perceive assessment feedback?

·        What do educators perceive as facilitating conditions to improve assessment and feedback policy implementation?

Participants

In this study, the interpretative thematic analysis (ITA) approach, modelled after the IPA guidelines by Smith and Osborn (2007), was employed, as in Furtwängler and de Visser (2017). A homogeneous sample was chosen through purposive sampling to ensure participants were educators from a UK-based higher education institution involved in providing assessment feedback. The initial purposive sample consisted of 25 educators with teaching experience ranging from one to 39 years. Staff who volunteered for interviews after completing the self-reported questionnaire on feedback issues (Bikanga Ada et al., 2017) fell within the 30-39 and 60-69 age brackets; no staff under 30 or in their 40s or 50s volunteered. Of these, 18 educators were interviewed, while seven cancelled their interviews due to time constraints. A detailed description of each participant’s unique context is not provided here, as many participants requested that their individual characteristics (age, teaching experience, and other responsibilities besides teaching) be excluded from the publication to avoid being identified.

Data collection and analysis

The university’s School of Computing ethics committee granted ethical approval for this study. Semi-structured interviews were employed for qualitative data collection. Interviews, a common method in qualitative research, facilitate the gathering of rich, in-depth information from participants, allowing researchers to gain a clear understanding of the issues (Turner, 2010). An interview schedule was designed to guide the interview while remaining flexible enough to pursue interesting leads that emerged during the conversation. Open-ended questions were utilised, and interviews were audio-recorded. The interview schedule focused on themes related to the university’s assessment and feedback policy and participants’ perceptions of assessment feedback. Participants’ confidentiality was ensured by replacing their names with unique alphabetical identifiers.

This study employed an interpretative thematic approach (ITA) that adheres to IPA guidelines. Combining the two traditions has precedent. In Spiers and Riley’s (2019) study of distressed GPs, TA was employed to explore all participants’ experiences, particularly barriers and facilitators to help-seeking, yielding practical findings, while IPA was used to “focus on existential, idiographic elements by unpicking individuals’ meaning-making about their experiences (Smith et al., 2009)” (Spiers & Riley, 2019, p. 279), highlighting the internalised identity conflict experienced by some participants in assuming the general practitioner’s role. The authors recommend this combination of pragmatic and existential lenses, which also represents a form of ‘qualitative pluralism’ (Frost et al., 2011), particularly when working with large datasets. Similarly, Furtwängler and de Visser (2017) employed an ITA to investigate university students’ beliefs about unit-based guidelines, following IPA procedures by concentrating on the experience.

Familiarisation with the data was achieved by listening to participants during interviews, listening to recordings during transcription, and reading transcripts to make ‘sense’ of participants’ information. This 'sense-making' process was not just about capturing literal meanings, but also about comprehending the underlying, subjective, and personal experiences of the participants. Verbatim transcripts were then analysed using ITA, following IPA guidelines (Smith & Osborn, 2007). Smith and Osborn’s (2007) approach to IPA analysis includes looking for themes in the first case, connecting the themes, continuing the analysis with other cases, and finally patterning experiential themes across cases. The coding approach was primarily inductive, with codes connected to participants’ experiential statements (Braun & Clarke, 2006).

This study also focuses on the researcher’s own interpretation of what the data ‘mean’ rather than what they ‘are’. It draws on idiographic, divergent themes, which depict participants’ subjective lived experience, as well as on themes that echo collective experience across participants, in order to generate a deeper and broader understanding of the data. Superordinate themes were identified based on their prevalence within transcripts and on individual accounts offering unique or in-depth perspectives. While each superordinate theme is presented individually, each occurred within the context of the wider account; only this wider account can capture the true complexity of the data and the interconnections between themes arising from the convergences and divergences between participants’ accounts. Each superordinate, or overarching, theme had related subthemes.
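For readers who track qualitative coding digitally, the bookkeeping behind this step can be illustrated with a brief sketch. The Python below is purely illustrative: the excerpts, codes, and theme labels are hypothetical stand-ins, and it does not represent the procedure or tooling used in this study. It shows only how coded excerpts might be grouped under candidate superordinate themes and how prevalence across participants could be tallied:

```python
from collections import defaultdict

# Hypothetical coded excerpts: (participant, inductive code, shortened excerpt).
# Real ITA/IPA coding is interpretative work on full transcripts; this sketch
# only illustrates the bookkeeping of theme prevalence across cases.
coded_excerpts = [
    ("Staff A", "time pressure", "why should I waste my time writing detailed comments"),
    ("Staff B", "time pressure", "300 plus scripts... feedback in two weeks"),
    ("Staff B", "perceived unfairness", "I am not in the same boat!"),
    ("Staff C", "feedback as a cycle", "comments support the next stage"),
]

# Hypothetical mapping from inductive codes to candidate superordinate themes.
theme_map = {
    "time pressure": "Policy as a personal and professional challenge",
    "perceived unfairness": "Policy as a personal and professional challenge",
    "feedback as a cycle": "Mixed perceptions on the effectiveness of feedback",
}

# Record which participants contribute to each candidate theme.
theme_participants = defaultdict(set)
for participant, code, _excerpt in coded_excerpts:
    theme_participants[theme_map[code]].add(participant)

# Themes voiced by more participants are stronger candidates for
# superordinate status, alongside the depth of individual accounts.
for theme, participants in sorted(theme_participants.items()):
    print(f"{theme}: {len(participants)} participant(s) {sorted(participants)}")
```

A tally like this can complement, but never replace, the interpretative judgement described above: prevalence was weighed alongside the uniqueness and depth of individual accounts rather than counted mechanically.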

Campbell and Morrison (2007) recommend cross-checking emerging themes with the text to ascertain that the analysis has a firm base in the accounts. They also suggest seeking participant feedback on the findings and refining theme descriptions accordingly. This approach was adopted here, with all participants being sent the findings to validate and verify interpretations.

Results

Three themes were developed from the interpretative thematic analysis of the interview transcripts, as shown in Table 1.


Table 1. Superordinate themes and sub-themes

Superordinate theme 1: Assessment and feedback policy as a personal and professional challenge

·        An increasing workload and its psychological and emotional consequences

·        Accountability to policy knowledge and implementation

Superordinate theme 2: Mixed perceptions on the effectiveness of feedback (we give the feedback, but the student doesn’t recognise it)

·        The use of feedback and feedforward

·        The effectiveness of feedback is questioned

·        The effect of students’ engagement with feedback on educators

Superordinate theme 3: Facilitating conditions for a successful assessment and feedback policy implementation

·        Cultural and structural changes

·        Personal and professional development

Assessment and feedback policy as a personal and professional challenge

This strong theme highlights how educators tried to make sense of the assessment and feedback policy within the university and how they conceptualised their experiences. This is captured in the related sub-themes of ‘an increasing workload and its psychological and emotional consequences’ and ‘accountability to policy knowledge and implementation.’

An increasing workload and its psychological and emotional consequences

The subtheme of an increasing workload and its psychological and emotional effects was extracted from all participants’ transcripts. A deep frustration with the heavy workload and the expectations placed upon educators arose from an inner conflict, as participants struggled to negotiate between their own pedagogic meanings and the regulations imposed by their institution. Indeed, workload and time problems, also identified as a source of concern in the literature (Meyer et al., 2010), dominated the interviews. The policy requires educators to provide feedback using Turnitin, with a turnaround of two weeks. Many educators find using technology time-consuming, and the turnaround time makes it worse. For example, Staff B described the time and workload limitations as obstacles to effective feedback provision, while Staff I felt that providing feedback electronically/digitally not only increases the workload in terms of the “actual feedback process itself” but also affects the quality of the feedback. It is a “lot to ask of staff” (Staff Q) when they already carefully ration the allocation of their time and make prioritisation judgements. Indeed, workload allocations are the ‘silent barrier’ to technology-enhanced learning in higher education (Gregory & Lodge, 2015, p. 210).

Most participants subscribed to the view that the assessment and feedback policy is letting students down. Educators’ time constraints and heavy workloads may inadvertently signal to students that less time and support should be devoted to assessment and feedback, diminishing its perceived importance and, consequently, students’ own investment in such tasks, as students often take cues from educators about what matters in their educational experience. This reflects an aspect of the hidden curriculum, whereby the unstated values and priorities of the educational institution and its educators are implicitly conveyed to students through actions and decisions (Birtill et al., 2022). Student learning is affected because teaching, marking, and feedback quality suffer under time and workload pressures, problems exacerbated in larger classes (Staff B). Teaching quality suffers (Staff R), as “the one area that has always been easy to be cut back is in the teaching […] So, the teaching preparation time gets cut back. That always suffers. And if your preparation time suffers, then the final product isn’t as good as it could be” (Staff P). Students “suffer” a good deal as a consequence of their lecturer’s own suffering due to heavy workload (Staff G). Nonetheless, while it is “sad” to see teaching affected, educators who feel unrewarded prioritise activities that get them noticed, such as research (Staff P), spending less time with students and on teaching, and feeling left with no choice but to focus on their own interests rather than on the students.

According to the interviews, time and workload challenges often lead to psychological and emotional issues for educators, resulting from an internal struggle to navigate broader academic trajectories and achieve their desired personal and professional goals. The language used by participants highlights the wide range of emotions experienced in relation to the assessment and feedback policy, including unfairness, disrespect, isolation, unhappiness, and stress. The pressure educators face due to time and workload constraints evokes a sense of unfairness, as individual circumstances, such as different schools, student age groups, class sizes, and subject types, are not taken into account. When teaching staff do have opportunities to attend open forums to discuss these issues, the events often coincide with their teaching schedules; unable to sacrifice teaching time to have their voices heard, they feel further isolated and disconnected from the policy. Although some educators express unhappiness about not being able to contribute to the policy consultation, there is a prevailing sentiment that their views would not be considered even if they attended. Educators often feel excluded from the discussion and disrespected, leading them to question the value of the policy.

As a consequence, they ignore the policy consultation: “[…] There’s a view that - you know - that we’re asked for our opinions, but then people who make the decision would do what they want anyway. So, a lot of the people just don’t bother to respond.” (Staff A). According to Staff H, this shows a lack of respect from decision-makers for the teaching staff: “Personally, I think a wee bit more respect should be given to the staff.” (Staff H). This lack of ‘respect’ and consideration also makes them feel undervalued, since they are not being listened to, which can colour their view of how the policy is made. The feeling of isolation also appears within the role and responsibility for the policy decision-making process. For example, Staff N thinks that the policy is the responsibility of senior staff and does not usually involve those who are supposed to apply it. Furthermore, staff report being under considerable pressure and stress to use technology for marking and feedback within a short period (two weeks) because they have other tasks, such as administration or duties as personal tutors: “I think the situation I keep running into is the pressure from my school for me to do other things. I am under pressure to do other things all the time.” (Staff E).

Staff I’s and B’s comments summarise the sub-theme of increasing workload and its psychological and emotional consequences. Staff I’s comments encapsulate the degree of staff unhappiness; his language conveys a deeply held emotion that extends beyond himself to his peers.

[…] We are not happy about the feedback electronically because of the amount of time and from the point of view of a practical assessment. We are not happy with the consequences of that because it can only go two ways, that the student gets less feedback, or we don’t do it. The feedback that we do give our students is a commented part of the quality of our course by externals, so not too happy about the changeover to do it all electronically. (Staff I)

During the interview, Staff B made her resentment, frustration, and anger known. As she expressed herself, she kept banging the table with her fists. As I listened to her, she became quite agitated and raised her voice as she explained:

[...] I have something like approaching 300 plus scripts. So, how can I provide feedback in two weeks, comprehensive feedback, detailed feedback, constructive feedback in a class like this? And of course, if you, if you compare me with a, with one of the others or any colleague who’s got, a kind of, 20 students plus and providing very constructive feedback, providing continuous feedback, formative assessment, all of these kinds of things, it is not fair […] I am not in the same boat! That’s all! (Staff B)

Accountability to policy knowledge and implementation

Another significant aspect identified from the interviews is the incomplete knowledge of the assessment and feedback policy and its implementation. Some participants (Staff A, H, I, and P) had only partial knowledge of the policy, which explained their limited implementation. There was also a feeling of being misled or not receiving adequate information about their involvement in policy development, which can impact policy adoption and implementation. Staff Q, for example, mentioned receiving an email requesting opinions on various aspects of feedback and technologies without realising that their responses would contribute to the policy; their feedback might have been different had they known it was part of the consultation process. Additionally, their involvement was sought only once the policy had already been established, rather than during the initial design phase.

Policy consultation meetings often occurred at inconvenient times, leading to low attendance and limited engagement with the policy. Interviews indicate that flexibility, communication, and collaboration are crucial for fostering active involvement in the assessment and feedback policy process. This aligns with a study by Harvey and Kosman (2013), which found that meeting academics at their preferred time and location increased their participation and interest in the assessment policy review process more effectively than sending them an email survey.

In line with the literature (e.g., Ball et al., 2012; Meyer et al., 2010), this study reveals that most educators perceive the policy’s ambiguity and lack of strict regulations as obstacles to implementation. For instance, the policy’s vagueness led to difficulties in interpretation, as its practical application varied across schools, modules, and individual educators (Staff C, K, L, M, N).

The assessment and feedback policy emphasises using technologies, such as Turnitin, for digital submission of essays and electronic feedback. Educators demonstrated resistance to this due to perceived time and workload issues. For example, Staff I’s frustration and resistance towards implementing the policy were evident during the interview as they banged their fists on the table. Referring to themselves and other educators, Staff I stated: “We probably are resisting; we are not happy about the feedback electronically because of the amount of time” (Staff I). The literature also discusses educators’ resistance to introducing technology in education (Kehrwald & McCallum, 2015; Laurillard et al., 2009).

While Staff H attributes their resistance to their age and reluctance to change their “dinosaur approach to feedback,” resistance also stemmed from health concerns. Staff L expressed, “It is quite sore on your eyes, actually, just staring at the screen and working your way through the electronic submissions” (Staff L). Staff R’s experience echoed this sentiment: “I personally find it quite tiring on my eyes. Just for all this marking, for all the time I spend on my wee laptop screen, my eyes are quite sore” (Staff R). Similar concerns about reading large amounts of text on a computer screen have been mentioned in the literature (Maggs, 2014).

Some staff members (F, G, J, K, L, R) acknowledged that the policy has positively influenced their teaching by increasing their awareness of related aspects. However, most academics stated that the policy did not impact their teaching, either because they believed they already possessed good feedback values or because they considered the policy ineffective: it does not address the most critical issues, such as “the quality of feedback, the consistency of feedback, the engagement with students about the understanding of feedback” (Staff M). In alignment with Harvey and Kosman’s (2013) findings, participants’ views in this study suggest that universities should encourage academics to engage in open and critical dialogues regarding their policy and provide sufficient evidence of best practices to clarify any misinterpretations or lack of understanding of the assessment and feedback policy.

The conflict between assessment and feedback policy and practice highlights the tensions present in many higher education institutions. For example, the contradictory nature of policy and practice for assessing learning outcomes is indicated in Meyer et al. (2010, p. 343), where course requirements do not consistently follow the moderation procedures.

Mixed perceptions on the effectiveness of feedback (we give the feedback, but the student doesn’t recognise it)

This superordinate theme is concerned with participants’ perceptions of the effectiveness of assessment feedback and its impact on educators. Related sub-themes are ‘the use of feedback and feedforward’, ‘the effectiveness of feedback is questioned,’ and ‘the effect of student engagement with feedback on educators’.

The use of feedback and feedforward

Regardless of whether they implement the policy, many educators recognise the crucial role feedback and feedforward play in student learning. Feedback is designed to enhance teaching and student learning (Staff A, Q). It is especially valuable for struggling students, as it helps prevent them from dropping out of the course (Staff L). Feedback is also about engaging with students and encouraging learning. However, interviews with staff reveal that not all students receive equal feedback. For example, Staff P emphasises providing more feedforward opportunities to final-year students, while Staff Q focuses more on developing first-year students.

Feedback also varies in quality and quantity, as Staff J believes in striking a balance between the two – not being overly “descriptive” or “prescriptive.” Instead, feedback should encourage students to reflect and foster deep learning. Staff C sees the entire feedback process as a cycle in which comments are continuously provided to support the next stage until success is achieved. This aligns closely with the principles of assessment for learning, where feedback is seen as a constructive tool to guide student progress, foster understanding, and drive learning forward (Carless, 2017; Wiliam, 2011).

Another approach to feedback is to avoid pointing out students’ mistakes directly. For instance, Staff M is concerned about offending students if their work is not understood by the educator. To avoid potential issues, Staff M refrains from highlighting what the students did wrong and instead focuses on guiding them toward the right approach or directing them to resources for further assistance when faced with more significant challenges.

The effectiveness of feedback is questioned

According to the staff interviews, the general perception is that, despite educators’ efforts, most students do not engage with feedback, raising questions about its effectiveness (Carless & Boud, 2018). Many frustrations emerged during the interviews regarding students’ engagement with their feedback. Both passing and failing students tend to focus on grades or marks rather than reading or utilising feedback. Failing students typically only seek feedback when attempting to improve their grades or complain about perceived unfairness (Staff J, M). Students’ relationships with their teachers also influence their engagement with feedback. If students do not value a staff member, they may not value the feedback received from that person, leading to a lack of trust in the feedback’s credibility (Staff D, E; Vattøy et al., 2021). Large classes further hinder students’ experiences with feedback, as educators cannot ensure that every student learns from it (Staff Q). Difficulties in understanding written feedback also affect student engagement.

On the other hand, factors such as higher achievement, age (mature students), and level of study (postgraduate students) tend to foster student engagement with assessment feedback (Staff A, B, H, J, K, I, L, P, S). For instance, mature students, who have often already chosen their career paths, are more likely to collect and use feedback than younger students (Staff B). This observation aligns with Hamilton and O’Dwyer’s (2018) finding that mature students engage more with their feedback. Staff members also believe there is a general assumption that all students fully understand feedback when, in reality, many do not. This misconception is concerning, as educators feel their hard work may be in vain: they “[...] give the feedback, but the student doesn’t recognise it” (Staff G). This idea of significant effort in producing feedback without the desired impact has been highlighted in the literature, as noted by Price et al. (2010). However, as found in this study, the lack of student knowledge about feedback is linked to the absence of formal university training or classes on understanding and utilising feedback.

I don’t know that the university centrally does anything about this in terms of teaching students how to use feedback, or even what feedback is because, sometimes, I suspect that students don’t actually know what is meant by feedback. (Staff C)

Interviews revealed that Staff E and F were unfamiliar with the concept of feedforward, which is feedback focused on “improvement from one task to the next, almost exclusively within the ‘future horizon’ of the module/study unit” or “on improving the amount, nature or quality of the information delivered to learners” (Sadler et al., 2023, p. 305). This lack of knowledge could affect their approach to feedback and influence students’ perceptions of the comments they receive. Similar observations have been made in the literature regarding educators’ limited knowledge of feedback. For instance, Meyer et al. (2010) noted that teaching staff “lacked knowledge about sound assessment strategies” and had “little formal preparation for assessment responsibilities” (pp. 345, 347). In fact, the absence or low level of knowledge about assessment is one of the barriers to assessment policy implementation that policymakers often overlook (Flórez Petour, 2015, p. 4).

The effect of students’ engagement with feedback on educators

For educators, student engagement with feedback is highly challenging to secure yet ultimately rewarding when it occurs. Staff B’s story highlights the emotional response educators experience when students engage with their feedback. She expressed her happiness, sense of achievement, purpose, and fulfilment upon realising that at least one student had read and utilised the assignment feedback comments:

[...] but at least someone has done so. So, they have read it straightforward after I put it on the [virtual learning environment], and they put it into their mind or just print it out and just to keep - you know - learning from it. So, […] I was so happy, to be honest with you, to see some students kind of planning ahead and keeping - you know - some kind of document just to improve their learning curve. (Staff B)

However, this theme also captures the intense frustration educators experience when most students do not collect, read, or use their feedback. This lack of engagement has adverse effects on educators and negatively impacts the student learning experience. Educators feel a range of negative emotions, such as dissatisfaction, disillusionment, discouragement, frustration, unhappiness, and annoyance. They also feel that they are wasting their time (e.g., Staff A, B, D, G). Staff D even feels guilty, as there is not enough time to engage in a dialogue with every single student and provide appropriate feedback, not “enough time to build that culture in enough” (Staff D).

So - you know- that doesn’t encourage me to give any feedback because it just gets ignored [...] So, why should I waste my time writing detailed comments when they are just going to get either not picked up or ignored? (Staff A)

Moreover, Staff D feels that her image as a good teacher would be tarnished if she asked students to “go the extra mile” to engage more with their feedback, as opposed to an educator who barely provides any feedback at all. These comments also suggest that students consider ‘bad’ teachers as those who constantly ask them to make an extra effort and work hard. These two excerpts summarise the demoralising impact of students’ lack of engagement with their feedback on educators.

[...] but one of the problems is that if you’re the person who is asking them to go this extra mile every time, you end up being the one that’s exposed as opposed to other people that have said or haven’t even marked it - you know-, ‘don’t worry about the feedback just get the mark’ type of thing. (Staff D)

Facilitating conditions for a successful assessment and feedback policy implementation

This overarching theme addresses the facilitating conditions that educators, who have encountered numerous challenges with assessment and feedback, perceive as essential for improving assessment and feedback practices and policy implementation. The related sub-themes include ‘cultural and structural changes’ and ‘personal and professional development’.

Cultural and structural changes

Formal feedback and assessment guidance or training to change feedback culture

Although feedback is a crucial issue affecting students and lecturers, little has been done to develop comprehensive strategies to enhance the recognition of the significance of feedback and promote its value in the learning process. Only a few educators stated that feedback was part of their students’ training or skills development, with guidance or training primarily focusing on first-year students or those with direct entry from college (Staff P and Q). The general perception among the participants of this study is that cultural and structural changes are necessary through education around feedback, as most students have not received any formal guidance or training (Staff A, B, C, D, F, G, J, K, L, M, N, R). This training is essential, as students have different ideas about feedback. Indeed, students need to appreciate the value of feedback and move “away from the mentality that the only thing that matters is the marks they get” (Staff A). Staff D suggests that more frequent exams could help develop this culture, allowing educators to build upon it to prepare students. Changing the feedback culture could also involve providing students with coping mechanisms, as students’ difficulties in coping can be concerning. For example, Staff F believes students can be emotionally affected by their feedback, significantly impacting their motivation; he has had multiple instances of students in his “room in tears asking how they cope with it”. Similar emotional responses to feedback have been identified in the literature (Hill et al., 2021; Holmes, 2023; Ryan & Henderson, 2018). Constantly reinforcing the usefulness of feedback could help avoid a credibility gap, and to “keep reinforcing” it (Staff R) will positively impact students’ university experience.

Changing the entire culture of students’ experience also means encouraging students to take more responsibility for their choices and decisions. Students have significant responsibility for their own experience, and not much can be done if they are unwilling to assume this responsibility (Staff H). On the other hand, educators need to foster productive interaction and proactively engage with students. Indeed, they have a greater responsibility to help overcome some of the issues related to student engagement with assessment feedback (Nash & Winstone, 2017). Educators should find ways to motivate students (Staff A), as motivated students are more willing to communicate with their teacher. Promoting understanding of the topics should be the primary focus, in addition to providing constructive, positive feedback.

However, feedback education should not only be provided to students but also to educators, including those with years of experience who may still need help. This observation is not surprising: in Beaumont et al. (2011), tutors reported a lack of training in providing quality feedback, even though some had completed accredited courses. According to Rae and Cochrane (2008), the responsibility for making feedback effective lies with both students and lecturers, which explains why each party is affected emotionally or academically when the other fails to act. Therefore, educators need to ensure feedback literacy (Carless & Boud, 2018), shifting “the culture of feedback to emphasise and support students’ use of it” and fostering a culture of “shared responsibility around feedback”, with feedback interventions used across students’ programmes to build such a culture (Pitt & Quinlan, 2022, p. 77).

Educators should be encouraged to use different methods of providing feedback, as a lack of variation that fails to adapt to the diverse learning needs of the new generation could disadvantage some students. Although there is still much to learn about technology’s effective educational contributions (Kirkwood & Price, 2014), new forms of communication that increase accessibility must be implemented, as technology plays a crucial role in fostering student engagement and dialogue (Bikanga Ada & Stansfield, 2017; Serrano et al., 2019). Staff C summarises the need for feedback education:

I think it would be an improvement if students were actually taught formally about what feedback is, what purpose it serves, and how it fits into the learning circle and how it should be an integral part of their learning, as much as it’s an integral part of my teaching. […] I think some education is required for students - and also some lecturers, I have to say - about what feedback is and is not and what it’s for. And I think that’s the main single way that I can think of improving the effectiveness of feedback. (Staff C)

In summary, the perspectives of interview participants indicate that these cultural changes should not only occur at the individual level, where stakeholders examine their own emotional and professional experiences through reflection, but also at the institutional level to eliminate contradictions and misconceptions. This is crucial because “the cultures of institutions and the patterns of social interaction within them exert a formative effect on the ‘what’ and ‘how’ of learning” (Daniels, 2012, p. 2).

Policy amendment

Educators’ comments emphasise the need for a constructive framework within which the university’s assessment and feedback policy can be reviewed. There is a demand for a policy that is more realistic about time and workload allocation and that fosters flexibility while maintaining a consistent approach to feedback. Educators appreciate flexibility and self-tailored teaching experiences; they dislike their institution’s blanket approach, which highlights the challenge of reconciling policy with practice, as educators find it difficult to translate the assessment and feedback policy into everyday work. As Black and Wiliam (2010, p. 88) observed:

Teachers will not take up ideas that sound attractive, no matter how extensive the research base, if the ideas are presented as general principles that leave the task of translating them into everyday practice entirely up to the teachers.

According to the interviews, assessment strategies should be re-examined (Staff C, D, J, L, N). This study suggests that all stakeholders—policymakers and ‘grassroots’ workers—need to collaborate, necessitating significant changes. For instance, establishing a staff committee forum that includes those at the front-end implementing the assessment and feedback policy, teaching staff, and students from the early stages of policy development is crucial. This approach would foster a better understanding of feedback issues and expectations and could help provide more consistency in policy implementation. Indeed, the apparent disassociation between policymakers and ‘grassroots’ workers in this study aligns with Hayward’s (2015) observation that “researchers, policymakers, and practitioners working together is a phrase that slips easily off the tongue. Yet the worlds inhabited by these groups are very different” (p. 39).

This study concurs with Selwyn and Facer (2014) that engaging all stakeholders in actively constructing educational practices while considering broader socio-technical changes is essential. This ensures that they understand how the assessment and feedback policy guides procedures and requirements and possess sufficient “evidence of higher-level vision about how the purposes of assessment guide procedures and requirements” (Meyer et al., 2010, p. 342). Connecting with ‘grassroots' stakeholders can help reduce the tangible tensions and resentment many educators feel towards decision-makers. This could subsequently help avoid inconsistencies in feedback practice, as identified in the literature (Beaumont et al., 2011).

Reaching out also means bridging the gap between all stakeholders to construct a viable policy that requires adopting both a “bottom-up and top-down approach” (Harvey & Kosman, 2013, p. 95). These approaches differ; for example, some top-down methods impose "recipes" upon the low-level stakeholders, which can lead to educators being "less inclined to cooperate" (Black, 2015, p. 163). In this study, some participants experienced this reluctance to cooperate, as they felt their opinions were not valued.

Personal and professional development

There is a prevailing sentiment of being deprived of a fulfilling teaching experience, with academic and professional freedom being compromised. These frustrations are perceived in terms of personal, systemic, and societal factors that impact numerous educators.

Systemic factors: Rethinking the academic role

Systemic factors relate to the inflexibility of the university. Educators feel that the policy can constrain their broader academic trajectories by limiting their professional, societal, and personal growth. The policy forces most participants to modify their daily academic lives, leading them to question their assumptions about how their academic world functions (Gunn & Larkin, 2020). As front-line workers teaching students, they are so burdened with work that they lack time to perform other roles. The assessment and feedback policy, which mandates the use of technology (Turnitin) for feedback with a two-week turnaround, adds to their workload. This creates tension between educators’ engagement and dedication to enhancing student learning and the pressure of policy demands, as they weigh the forced choices they must make to advance professionally.

There is a need to reconsider the multifaceted roles of educators, as they cannot fulfil all the roles and functions without negatively affecting teaching and student learning. This includes roles that contribute to recognition, reputation, and promotion (Staff J). They cannot manage everything (Staff B), as they lack sufficient time (Staff P) to simultaneously be "great researchers, great teachers, academics, and PAs and administrators" (Staff J), concurring with Henderson et al. (2019) that little has improved in the last two decades. Time continues to be a scarce and influential factor in the educator’s workload. This leads to personal frustrations, which impact not only their professional development, but also their interactions with students, including a reluctance to put forth effort in teaching. The lack of professional growth undermines their personal development, even affecting their private/home life (Staff Q). Staff J's comments convey a sense of inner turmoil stemming from being an unacknowledged "grassroots" worker and having to choose between enhancing student learning and advancing professionally.

[...] it’s about being more realistic and not just talking about investing in teaching-learning and assessment as a tick box, but truly investing in it by acknowledging staff who may not be at the forefront of research, but who are at the grassroots working with students. (Staff J)

Technology-enhanced feedback acceptance issues and conditioning of success

Although technology integration into higher education has gained momentum in the last ten years, academics' mindsets can either facilitate or hinder educational innovations (Handal et al., 2013, p. 360). In this study, despite the pressure, some staff members viewed technology integration positively. One reason is that it can make it easier for students to read their written feedback. According to Staff I, students often struggle to read handwritten comments, which can impact their understanding of the feedback they receive.

However, various reasons cause educators to be hesitant about technology-enhanced feedback, including a lack of confidence, digital skills, health concerns (difficulty reading script on screen), generational issues, and insufficient resources (poor-performing technology used at home). Therefore, changing many educators' perceptions is crucial to increase broader acceptance of technology use. These changes can take time, as they involve educators' introspective examination of their roles as teachers. They attempt to determine how to implement these changes while connecting with their students and considering their own behaviour in the classroom, incorporating their beliefs and experiences without sacrificing professional autonomy and academic freedom (Black, 2015, p. 171).

One significant obstacle to technology-enhanced feedback is the lack of digital skills and the limited time available for training. Studies have identified these issues, along with the institution’s culture, as key factors preventing or limiting the integration of technologies into teaching (Álvarez et al., 2009; Cubeles & Riu, 2018; Margaryan et al., 2011; Rienties, 2014). Indeed, according to the interviews, there is a need to shift the mindset of teaching staff, eliminate their biases, and convince them that technology can support them (Staff Q). The university’s strategy should include evidence that technology enhances student learning, particularly student experience and engagement with feedback. Consistency is essential so that people are not pulled “in as many different directions” (Staff J). Staff D cautions that using technology should not replace various styles of feedback and engagement with students, which are vital. This sentiment is echoed by Staff J, who explains that technology should not be used solely as a substitute for hard copies; otherwise, it becomes pointless.

I think you have to then look at the personnel providing the feedback. How can we persuade certain people to change their ways and to adopt or adapt new technologies that make it easier for them, rather than actually just thinking that technology is a barrier or thinking that it’s a source of support for them. (Staff Q)

In summary, many staff members require evidence that the use of technology improves feedback quality, is time-efficient, visibly reduces workload, and enhances the effectiveness of marking and feedback. They also need assurance that students appreciate the technology and that Turnitin is not merely a replacement for hard copies. However, some educators believe that their feedback methods should remain unchanged. They are committed to their traditional approaches and feel that new strategies could disrupt their working methods. For instance, Staff G argues that using criteria is the best way to mark and provide feedback to students, while Staff H believes that verbal feedback, requiring students to predict a grade and mark before meeting their teacher one-on-one, is most effective.

Conclusion

This study investigated educators' experiences with a UK university's assessment and feedback policy, which stipulated the use of technology (Turnitin) with a two-week turnaround for providing feedback. The research aimed to understand educators' perspectives on assessment feedback. Participants’ individual accounts were brought together to become part of the overall interpretation. The conclusions drawn resulted from educators’ sense-making thoughts and the researcher trying to make sense of their accounts during the analysis. The study identified three themes: a) assessment and feedback policy as a personal and professional challenge, b) mixed perceptions on the effectiveness of feedback, and c) facilitating conditions for successful implementation of assessment and feedback policy. These themes revealed the complex ways educators experience the policy and their beliefs on feedback effectiveness. The findings showed that educators are affected both professionally and personally, which can, in turn, impact student learning.

Although not generalisable, the findings may contribute to developing assessment practices within the university and promote a broader understanding of assessment and feedback policies, particularly regarding potential mismatches between policy and educators' expectations. Additionally, the study could help educators, researchers, and policymakers appreciate the significance of assessment and feedback policies, enabling more effective implementation. A consistent and flexible policy that transcends disciplines and stakeholders is needed, one that goes beyond "tinkering at the edges of practice" (Medland, 2016, p. 85) and promotes transparency and accessibility (Rae & Cochrane, 2008).

This research contributes a psychological angle to the limited qualitative literature on educators' experiences with assessment and feedback policies and their effectiveness. By using Interpretative Thematic Analysis (ITA), the current study provides rich, contextualised accounts of staff experiences (Furtwängler & de Visser, 2017), potentially informing institutional support for assessment and feedback. ITA is relatively new and combines interpretative phenomenological analysis (IPA) and thematic analysis (TA); its results may not resemble those usually obtained when the two approaches are applied separately. Researchers could conduct similar studies applying the two analysis approaches separately to the same data (Spiers & Riley, 2019) and include additional institutions.

Considering the implications of institutional changes for educators' professional and personal development is crucial, as their focus might shift away from providing quality teaching. After all, as attested by some of the comments in this study, it is research that gets teaching staff "noticed". This sentiment is echoed in the literature: McLean et al. (2008) argue that academic staff will prioritise their own benefit if senior faculty administrators pay only lip service to faculty development. The variety of issues raised in this study, including the undervaluing of teaching, feelings of isolation, and numerous professional and personal challenges, highlights aspects of the motivational–psychological, interpersonal–social, and institutional–organisational domains of the 'hidden curriculum' for faculty (Hafler et al., 2011; Lee et al., 2023). These challenges underline the psychological and emotional impacts on educators, who, in response, may alter their self-perception and role within their institution. In essence, these aspects of the hidden curriculum reveal how educators navigate and respond to their environment, attempting to maintain their professional stature amidst a multitude of institutional obstacles.

This study was conducted before the Covid19 pandemic, which compelled institutions to adopt online or blended learning and, with it, innovative technologies that had not been universally embraced beforehand. However, a significant challenge in online education is that learners often receive low-quality, inappropriate, or insufficient information (Jensen et al., 2021), even though digital assessment and feedback have attracted increased interest because of Covid19 (Casanova et al., 2021). The pandemic may have facilitated unlearning and the adoption of new attitudes towards technology-enhanced learning, even among those classified as 'laggards' (Rogers, 2003) who resist change. However, decades after Sarason (1990) highlighted the need for support in making educational changes, institutions still struggle to provide and sustain that support. The question remains: if this study were conducted today, would academics' feelings align with those described in this paper? While the global Covid19 pandemic has altered the acceptance of technology-enhanced learning, it remains uncertain whether these changes will persist.

Declarations

Funding

No funds, grants, or other support was received.

Conflicts of interest/Competing interests

The author has no conflicts of interest to declare.

References

Álvarez, I., Guasch, T., & Espasa, A. (2009). University teacher roles and competencies in online learning environments: A theoretical analysis of teaching and learning practices. European Journal of Teacher Education, 32(3), 321-336. https://doi.org/10.1080/02619760802624104

Ball, S., Maguire, M., & Braun, A. (2012). How schools do policy: policy enactments in secondary schools. Routledge. https://doi.org/10.4324/9780203153185

Beaumont, C., O’Doherty, M., & Shannon, L. (2011). Reconceptualising assessment feedback: a key to improving student learning? Studies in Higher Education, 36(6), 671-687. https://doi.org/10.1080/03075071003731135

Biggerstaff, D., & Thompson, A. R. (2008). Interpretative Phenomenological Analysis (IPA): A Qualitative Methodology of Choice in Healthcare Research. Qualitative Research in Psychology, 5(3), 214-224. https://doi.org/10.1080/14780880802314304

Bikanga Ada, M. (2023). Evaluation of a mobile web application for assessment feedback. Technology, Knowledge and Learning, 28, 23-46. https://doi.org/10.1007/s10758-021-09575-6

Bikanga Ada, M., & Stansfield, M. (2017). The potential of learning analytics in understanding students’ engagement with their assessment feedback. 2017 IEEE 17th International Conference on Advanced Learning Technologies (ICALT), 227-229. https://doi.org/10.1109/ICALT.2017.40

Bikanga Ada, M., Stansfield, M., & Baxter, G. (2017). Using mobile learning and social media to enhance learner feedback: Some empirical evidence. Journal of Applied Research in Higher Education, 9(1), 70-90. https://doi.org/10.1108/JARHE-07-2015-0060

Birtill, P., Harris, R., & Pownall, M. (2022). Unpacking your hidden curriculum: A guide for educators. The Quality Assurance Agency for Higher Education. https://www.qaa.ac.uk/docs/qaa/members/unpacking-your-hidden-curriculum-guide-for-educators

Black, P. (2015). Formative assessment – an optimistic but incomplete vision. Assessment in Education: Principles, Policy & Practice, 22(1), 161-177. https://doi.org/10.1080/0969594X.2014.999643

Black, P., & Wiliam, D. (2010). Inside the black box: Raising standards through classroom assessment. Phi Delta Kappan, 92(1), 81-90. https://doi.org/10.1177/003172171009200119

Bloxham, S. (2016, June). Central challenges in transforming assessment at departmental and institutional level. Keynote address at the Assessment in Higher Education (AHE) seminar day, Manchester. https://adept.qmul.ac.uk/wp-content/uploads/2018/01/QMUL-keynote-slides-Bloxham.pdf

Brady, M., Devitt, A., & Kiersey, R. A. (2019). Academic staff perspectives on technology for assessment (TfA) in higher education: A systematic literature review. British Journal of Educational Technology, 50(6), 3080–3098. https://doi.org/10.1111/bjet.12742

Braun, V., & Clarke, V. (2006). Using thematic analysis in psychology. Qualitative Research in Psychology, 3(2), 77-101. https://doi.org/10.1191/1478088706qp063oa

Brown, G. T. L. (2004). Teachers’ conceptions of assessment: Implications for policy and professional development. Assessment in Education: Policy, Principles and Practice, 11(3), 301-318. https://doi.org/10.1080/0969594042000304609

Campbell, M. L. C., & Morrison, A. P. (2007). The subjective experience of paranoia: Comparing the experiences of patients with psychosis and individuals with no psychiatric history. Clinical Psychology & Psychotherapy, 14(1), 63-77. https://doi.org/10.1002/cpp.510

Carless, D. (2006). Differing perceptions in the feedback process. Studies in Higher Education, 31(2), 219-233. https://doi.org/10.1080/03075070600572132

Carless, D. (2017). Scaling up assessment for learning: Progress and prospects. In D. Carless, S. Bridges, C. Chan, & R. Glofcheski (Eds.), Scaling up assessment for learning in higher education: The enabling power of assessment (Vol. 5, pp. 3-17). Springer. https://doi.org/10.1007/978-981-10-3045-1_1

Carless, D., & Boud, D. (2018). The development of student feedback literacy: enabling uptake of feedback. Assessment & Evaluation in Higher Education, 43(8), 1315–1325. https://doi.org/10.1080/02602938.2018.1463354

Casanova, D., Alsop, G., & Huet, I. (2021). Giving away some of their powers! Towards learner agency in digital assessment and feedback. Research and Practice in Technology Enhanced Learning, 16(20), 1-19. https://doi.org/10.1186/s41039-021-00168-6

Casanova, D., & Huet, I. (2021). Sustainable online and digital assessment practices in higher education: The case of an English university during the COVID19 pandemic. Revista de Educação a Distância e Elearning, 4(2), 5-21. https://revistas.rcaap.pt/lead_read/article/download/25242/18970/104288

Clifford, G., Craig, G., & McCourt, C. (2019). “Am iz kwiin” (I’m his queen): Combining interpretative phenomenological analysis with a feminist approach to work with gems in a resource-constrained setting. Qualitative Research in Psychology, 16(2), 237-252. https://doi.org/10.1080/14780887.2018.1543048

Cohen, V. B. (1985). A reexamination of feedback in computer-based instruction: Implications for instructional design. Educational Technology, 25(1), 33–37. https://www.jstor.org/stable/44424353

Colbran, S., Gilding, A., & Colbran, S. (2016). Animation and multiple-choice questions as a formative feedback tool for legal education. The Law Teacher, 51(3), 249–273. https://doi.org/10.1080/03069400.2016.1162077

Cubeles, A., & Riu, D. (2018). The effective integration of ICTs in universities: The role of knowledge and academic experience of professors. Technology, Pedagogy and Education, 27(3), 339-349. https://doi.org/10.1080/1475939X.2018.1457978

Daniels, H. (2012). Institutional culture, social interaction and learning. Learning, Culture and Social Interaction, 1(1), 2-11. https://doi.org/10.1016/j.lcsi.2012.02.001

Deeley, S. J., Fischbacher-Smith, M., Karadzhov, D., & Koristashevskaya, E. (2019). Exploring the ‘wicked’ problem of student dissatisfaction with assessment and feedback in higher education. Higher Education Pedagogies, 4(1), 385-405. https://doi.org/10.1080/23752696.2019.1644659

Eatough, V., & Shaw, K. (2019). “It’s like having an evil twin”: An interpretative phenomenological analysis of the lifeworld of a person with Parkinson’s disease. Journal of Research in Nursing, 24(1-2), 49-58. https://doi.org/10.1177/1744987118821396

Farr, J., & Nizza, I. E. (2019). Longitudinal Interpretative Phenomenological Analysis (LIPA): A review of studies and methodological considerations. Qualitative Research in Psychology, 16(2), 199-217. https://doi.org/10.1080/14780887.2018.1540677

Ferrell, G. (2012). A view of the assessment and feedback landscape: baseline analysis of policy and practice from the JISC assessment & feedback programme. JISC. http://www.jisc.ac.uk/media/documents/programmes/elearning/Assessment/JISCAFBaselineReportMay2012.pdf

Ferrell, G. (2014). Electronic Management of Assessment (EMA): A landscape review. JISC. https://www.eunis.org/wp-content/uploads/2015/05/EMA_REPORT.pdf

Flórez Petour, M. T. (2015). Systems, ideologies and history: A three-dimensional absence in the study of assessment reform processes. Assessment in Education: Principles, Policy & Practice, 22(1), 3-26. https://doi.org/10.1080/0969594X.2014.943153

Frost, N. A., Holt, A., Shinebourne, P., Esin, C., Nolas, S. M., Mehdizadeh, L., & Brooks-Gordon, B. (2011). Collective findings, individual interpretations: An illustration of a pluralistic approach to qualitative data analysis. Qualitative Research in Psychology, 8(1), 93–113. https://doi.org/10.1080/14780887.2010.500351

Furtwängler, N. A. F., & de Visser, R. O. (2017). University students’ beliefs about unit-based guidelines: A qualitative study. Journal of Health Psychology, 22(13), 1701–1711. https://doi.org/10.1177/1359105316634449

Gray, K., Riegler, R., & Walsh, M. (2022). Students' feedback experiences and expectations pre- and post-university entry. SN Social Sciences, 2(16), 1-16. https://doi.org/10.1007/s43545-022-00313-y

Gregory, M. S-J., & Lodge, J. M. (2015). Academic workload: the silent barrier to the implementation of technology-enhanced learning strategies in higher education. Distance Education, 36(2), 210-230. https://doi.org/10.1080/01587919.2015.1055056

Grion, V., Serbati, A., & Nicol, D. (2018). Editorial. Technology as a support to traditional assessment practices. Italian Journal of Educational Technology, 26(3), 3-5. https://doi.org/10.17471/2499-4324/1082

Gunn, R., & Larkin, M. (2020). Delusion formation as an inevitable consequence of a radical alteration in lived experience. Psychosis, 12(2), 151-161. https://doi.org/10.1080/17522439.2019.1690562  

Hafler, J. P., Ownby, A. R., Thompson, B. M., Fasser, C. E., Grigsby, K., Haidet, P., Kahn, M., & Hafferty, F. (2011). Decoding the learning environment of medical education: a hidden curriculum perspective for faculty development. Academic Medicine, 86(4), 440-444. https://doi.org/10.1097/ACM.0b013e31820df8e2

Hamilton, M., & O'Dwyer, A. (2018). Exploring student learning approaches on an initial teacher education programme: A comparison of mature learners and direct entry third-level students. Teaching and Teacher Education, 71, 251-261. https://doi.org/10.1016/j.tate.2018.01.011

Handal, B., MacNish, J., & Petocz, P. (2013). Adopting mobile learning in tertiary environments: Instructional, curricular and organizational matters. Education Sciences, 3(4), 359-374. https://doi.org/10.3390/educsci3040359

Harvey, M., & Kosman, B. (2013). A model for higher education policy review: the case study of an assessment policy. Journal of Higher Education Policy and Management, 36(1), 88-98. https://doi.org/10.1080/1360080x.2013.861051

Hayward, L. (2015). Assessment is learning: the preposition vanishes. Assessment in Education: Principles, Policy & Practice, 22(1), 27-43. https://doi.org/10.1080/0969594X.2014.984656

Henderson, M., Selwyn, N., & Aston, R. (2017). What works and why? Student perceptions of ‘useful’ digital technology in university teaching and learning. Studies in Higher Education, 42(8), 1567–1579. https://doi.org/10.1080/03075079.2015.1007946

Henderson, M., Ryan, T., & Phillips, M. (2019). The challenges of feedback in higher education. Assessment & Evaluation in Higher Education, 44(8), 1237-1252. https://doi.org/10.1080/02602938.2019.1599815

Hill, J., Berlin, K., Choate, J., Cravens-Brown, L., McKendrick-Calder, L., & Smith, S. (2021). Exploring the emotional responses of undergraduate students to assessment feedback: Implications for instructors. Teaching and Learning Inquiry, 9(1), 294-316. https://doi.org/10.20343/teachlearninqu.9.1.20

Holmes, A. G. D. (2023). ‘I was really upset and it put me off’: The emotional impact of assessment feedback on first-year undergraduate students. Journal of Perspectives in Applied Academic Practice, 11(2), pp-pp.

Hutchinson, C., & Young, M. (2011). Assessment for learning in the accountability era: Empirical evidence from Scotland. Studies in Educational Evaluation, 37(1), 62-70. https://doi.org/10.1016/j.stueduc.2011.03.007

Jensen, L. X., Bearman, M., & Boud, D. (2021). Understanding feedback in online learning – A critical review and metaphor analysis. Computers & Education, 173, 1-12. https://doi.org/10.1016/j.compedu.2021.104271

Kehrwald, B. A., & McCallum, F. (2015). Degrees of change: Understanding academics experiences with a shift to flexible technology-enhanced learning in initial teacher education. Australian Journal of Teacher Education (Online), 40(7), 43–56. https://search.informit.org/doi/10.3316/ielapa.297088224156472  

Kirkwood, K., & Price, L. (2014). Technology-enhanced learning and teaching in higher education: what is ‘enhanced’ and how do we know? A critical literature review. Learning, Media and Technology, 39(1), 6-36. https://doi.org/10.1080/17439884.2013.770404

Knight, P. T. (2002). The Achilles’ Heel of Quality: The assessment of student learning. Quality in Higher Education, 8(1), 107–115. https://doi.org/10.1080/13538320220127506

Laurillard, D., Oliver, M., Wasson, B., & Hoppe, U. (2009). Implementing technology-enhanced learning. In N. Balacheff, S. Ludvigsen, T. de Jong, A. Lazonder, & S. Barnes (Eds.), Technology-enhanced learning (pp. 289–306). Springer. https://doi.org/10.1007/978-1-4020-9827-7_17

Lee, C. A., Wilkinson, T. J., Timmermans, J. A., Ali, A. N., & Anakin, M. G. (2023). Revealing the impact of the hidden curriculum on faculty teaching: A qualitative study. Medical Education, 1-9. https://doi.org/10.1111/medu.15026

Li, J., & De Luca, R. (2014). Review of assessment feedback. Studies in Higher Education, 39(2), 378–393. https://doi.org/10.1080/03075079.2012.709494

Love, B., Vetere, A., & Davis, P. (2020). Should Interpretative Phenomenological Analysis (IPA) be used with focus groups? Navigating the bumpy road of “iterative loops,” idiographic journeys, and “phenomenological bridges.” International Journal of Qualitative Methods, 19, 1-17. https://doi.org/10.1177/1609406920921600

Maggs, L. A. (2014). A case study of staff and student satisfaction with assessment feedback at a small specialised higher education institution. Journal of Further and Higher Education, 38(1), 1-18. https://doi.org/10.1080/0309877X.2012.699512

Mann, J. (2016). Using Turnitin to improve academic writing: an action research inquiry. Research in Teacher Education, 6(2), 16–22. https://doi.org/10.15123/PUB.5642

Margaryan, A., Littlejohn, A., & Vojt, G. (2011). Are digital natives a myth or reality? University students’ use of digital technologies. Computers & Education, 56(2), 429-440. https://doi.org/10.1016/j.compedu.2010.09.004

McCarthy, J. (2017). Enhancing feedback in higher education: Students’ attitudes towards online and in-class formative assessment feedback models. Active Learning in Higher Education, 18(2), 127–141. https://doi.org/10.1177/1469787417707615

McLean, M., Cilliers, F., & Van Wyk, J. M. (2008). Faculty development: yesterday, today and tomorrow. Medical Teacher, 30(6), 555–584. https://doi.org/10.1080/01421590802109834

Medland, E. (2016). Assessment in higher education: Drivers, barriers and directions for change in the UK. Assessment & Evaluation in Higher Education, 41(1), 81-96. https://doi.org/10.1080/02602938.2014.982072

Meyer, L. H., Davidson, S., McKenzie, L., Rees, M., Anderson, H., Fletcher, R., & Johnston, P. M. (2010). An investigation of tertiary assessment policy and practice: Alignment and contradictions. Higher Education Quarterly, 64(3), 331-350. https://doi.org/10.1111/j.1468-2273.2010.00459.x

Muir, K. (2022). Putting learners at the centre: Towards a future vision for Scottish education. Cabinet Secretary for Education and Skills. https://www.gov.scot/publications/putting-learners-centre-towards-future-vision-scottish-education/

Nash, R. A., & Winstone, N. E. (2017). Responsibility-sharing in the giving and receiving of assessment feedback. Frontiers in Psychology, 8(1519), 1-9. https://doi.org/10.3389/fpsyg.2017.01519

Nicol, D. (2009). Transforming assessment and feedback: Enhancing integration and empowerment in the first year. In Quality Enhancement Themes: The First Year Experience (pp. 1-84). The Quality Assurance Agency for Higher Education. http://dera.ioe.ac.uk/11605/1/First_Year_Transforming_Assess.pdf

Nicol, D., & Macfarlane-Dick, D. (2006). Formative assessment and self-regulated learning: A model and seven principles of good feedback practice. Studies in Higher Education, 31(2), 199–218. https://doi.org/10.1080/03075070600572090

Nicol, D., Thomson, A., & Breslin, C. (2014). Rethinking feedback practices in higher education: a peer review perspective. Assessment & Evaluation in Higher Education, 39(1), 102-122. https://doi.org/10.1080/02602938.2013.795518

National Student Survey (2021). National Student Survey 2021 results. https://www.officeforstudents.org.uk/advice-and-guidance/student-information-and-data/national-student-survey-nss/nss-2021-results/ 

National Student Survey (2022). National Student Survey – NSS. https://www.officeforstudents.org.uk/advice-and-guidance/student-information-and-data/national-student-survey-nss/

Price, M., Handley, K., Millar, J., & O'Donovan, B. (2010). Feedback: All that effort, but what is the effect? Assessment & Evaluation in Higher Education, 35(3), 277-289. https://doi.org/10.1080/02602930903541007

Pietkiewicz, I., & Smith, J. A. (2014). A practical guide to using interpretative phenomenological analysis in qualitative research psychology. Psychological Journal, 20, 7-14. https://doi.org/10.14691/CPPJ.20.1.7

Pitt, E., & Quinlan, K. M. (2022). Impacts of higher education assessment and feedback policy and practice on students: A review of the literature 2016-2021. Advance HE. https://kar.kent.ac.uk/95307/

Rae, A. M., & Cochrane, D. K. (2008). Listening to students - How to make written assessment feedback useful. Active Learning in Higher Education, 9(3), 217–230. https://doi.org/10.1177/1469787408095847

Rienties, B. (2014). Understanding academics’ resistance towards (online) student evaluation. Assessment & Evaluation in Higher Education, 39(8), 987-1001. https://doi.org/10.1080/02602938.2014.880777

Rogers, E. M. (2003). Diffusion of innovations (5th ed.). Free Press.

Ryan, T., & Henderson, M. (2018). Feeling feedback: students’ emotional responses to educator feedback. Assessment & Evaluation in Higher Education, 43(6), 880-892. https://doi.org/10.1080/02602938.2017.1416456

Sadler, I., Reimann, N., & Sambell, K. (2023). Feedforward practices: A systematic review of the literature. Assessment & Evaluation in Higher Education, 48(3), 305-320. https://doi.org/10.1080/02602938.2022.2073434

Sarason, S. B. (1990). The predictable failure of educational reform: Can we change course before it’s too late? The Jossey-Bass Education Series. Jossey-Bass Inc.

Selwyn, N., & Facer, K. (2014). The sociology of education and digital technology: Past, present and future. Oxford Review of Education, 40(4), 482-496. https://doi.org/10.1080/03054985.2014.933005  

Serrano, D. R., Dea-Ayuela, M. A., Gonzalez-Burgos, E., Serrano-Gil, A., & Lalatsa, A. (2019). Technology-enhanced learning in higher education: How to enhance student engagement through blended learning. European Journal of Education, 54(2), 273-286. https://doi.org/10.1111/ejed.12330

Smith, J. A. (2011). Evaluating the contribution of interpretative phenomenological analysis. Health Psychology Review, 5(1), 9-27. https://doi.org/10.1080/17437199.2010.510659

Smith, J. A., & Osborn, M. (2007). Interpretative phenomenological analysis. In J. A. Smith (Ed.), Qualitative psychology: A practical guide to research methods (pp. 53-80). SAGE Publications.

Spiers, J., & Riley, R. (2019). Analysing one dataset with two qualitative methods: The distress of general practitioners, a thematic and interpretative phenomenological analysis. Qualitative Research in Psychology, 16(2), 276-290. https://doi.org/10.1080/14780887.2018.1543099

Turner, D. W. (2010). Qualitative interview design: A practical guide for novice investigators. The Qualitative Report, 15(3), 754-760. https://doi.org/10.46743/2160-3715/2010.1178

Vattøy, K.-D., Gamlem, S. M., & Rogne, M. R. (2021). Examining students’ feedback engagement and assessment experiences: A mixed study. Studies in Higher Education, 46(11), 2325-2337. https://doi.org/10.1080/03075079.2020.1723523

Wiliam, D. (2011). What is assessment for learning? Studies in Educational Evaluation, 37(1), 3-14. https://doi.org/10.1016/j.stueduc.2011.03.001

Williams, J., & Kane, D. (2009). Assessment and feedback: Institutional experiences of student feedback, 1996 to 2007. Higher Education Quarterly, 63(3), 264–286. https://doi.org/10.1111/j.1468-2273.2009.00430.x