Changes in engineering students’ surface and deep approaches to learning in first- and second-year university courses

Cristina Mio1 and Erzsebet Dombi2

1 University of Glasgow, Glasgow, UK
2 University of Strathclyde, Glasgow, UK

Abstract

The aim of this study is to investigate how students’ approaches to learning change during the early years of university study in classes where assessments have been designed to increase students’ engagement and ownership. The revised two-factor Study Process Questionnaire (R-SPQ-2F), a self-reporting survey, was used to measure students’ deep and surface approaches. Two first-year cohorts and one second-year cohort of engineering students completed the questionnaire at the beginning and at the end of their modules. The majority of the assessment questions activated the first three levels of Bloom’s revised taxonomy, possibly encouraging a more surface approach to learning. However, the learning environment made use of peer assessment and online quizzes to provide prompt feedback to students and opportunities for self-reflection, thus possibly inducing a deeper level of engagement with the material. Analysis of the R-SPQ-2F questionnaire scores showed that the deep approach score did not change while the surface approach score increased. The findings align with those of previous studies: the relationship between assessment and learning is complex, and surface approaches might have to be expected and accepted in the early years at university, in preparation for deeper approaches to learning, which should be developed in later university years, when the more cognitively demanding levels of Bloom’s taxonomy, such as evaluating and creating, are required.

Keywords

approaches to learning, engineering, higher education, revised study process questionnaire, teaching and learning environment, student engagement

Introduction

In today’s rapidly changing world, higher-order thinking skills, such as problem solving and critical thinking, are much more valued by employers than basic skills, such as fact recall and memorisation (Burbach et al., 2004; Koster & Vermunt, 2020; O’Leary, 2017, 2021; Pithers & Soden, 2000). In 2020, the World Economic Forum (WEF, 2020, p. 5) reported that “The top skills and skill groups which employers see as rising in prominence in the lead up to 2025 include groups such as critical thinking and analysis as well as problem-solving, and skills in self-management such as active learning, resilience, stress tolerance and flexibility”.

Higher Education Institutions have a crucial role in preparing graduates with these skills (Donald et al., 2018; Mayorga, 2019; Nieminen et al., 2021). Universities are considered to be places where higher-order thinking skills are developed and practised, where the ability to go beyond what has been taught and the ability to deal with new situations are fostered (Biggs & Tang, 2003). To engage and support today’s highly diverse student population (Haggis, 2003; Hockings, 2011) in becoming lifelong learners, critical thinkers and problem solvers, universities are moving, to varying degrees, away from traditional transmissive models of education and towards active learning and student-centred teaching practices that support deeper approaches to learning (Børte et al., 2020; Bridgstock, 2017; Felder & Brent, 2005).

Education research on Student Approaches to Learning originates in the work of Marton and Säljö, who, in the 1970s, designed a range of experiments to explore how students went about a specific learning task: reading an academic text (Marton & Säljö, 1976a, 1976b). Their interview-based research identified two qualitatively different ‘levels of processing’: surface-level and deep-level processing. The former is characterised by memorisation, “[skating] along the surface of the text” (Marton & Säljö, 2005, p. 44), without seeking meaning, while the latter entails connecting what has been learnt to previous knowledge and experiences and a desire to understand meaning. Building on Marton and Säljö’s research, Entwistle and Ramsden in the UK and Biggs in Australia further explored students’ approaches to learning at course level, with the aim of capturing both students’ strategy and students’ motivation (Biggs, 1987; Entwistle et al., 1984). Three approaches to learning were identified: surface, deep, and strategic. Students adopting a surface approach (SA) learn by memorisation: they rote-learn with the intention of reproducing content, without seeking connections and meaning. A SA is linked to extrinsic motivation, fulfilling demands raised by others, and fear of failure. Students adopting a deep approach (DA) focus on main ideas, concepts, and principles with the intention of looking for meaning and seeking relationships. A DA is linked to intrinsic motivation, “learning […] out of interest” (Marton & Säljö, 2005, p. 53). The third approach, known as strategic or achieving in the research literature, is characterised by organised study, a systematic attitude, and efficient time management, with the intention of optimising academic achievement. Subsequent research, however, suggested that the strategic approach is conceptually different from DA and SA, as it refers to the ways students organise their studying rather than how they engage with the content of the class, and that students’ approaches to learning are best described and measured using the dichotomy of surface and deep approaches (Biggs et al., 2001; Clack & Dommett, 2021; Kember et al., 1999; Martinelli & Raykov, 2017; Volet & Chalmers, 1992; Zakariya et al., 2020).

Approaches to learning are both student and context dependent (Baeten et al., 2013). A large body of research focused on investigating how different teaching and learning environments influence students’ approaches to learning (Asikainen & Gijbels, 2017; Entwistle et al., 1984; Gijbels & Dochy, 2006; Kember et al., 2008; Rust, 2002; Segers et al., 2006; Tomas & Jessop, 2019). Research suggests that heavy assessment and workloads, anxiety provoking environments, a high number of contact hours, lack of opportunity to pursue the subject in depth, and lack of student choice over the method of study lead to surface approaches to learning (Lizzio et al., 2002; Reid et al., 2005). In contrast, characteristics of environments supporting deep approaches to learning are clear goals, interaction with others, appropriate motivational context, and learner activity (Baeten et al., 2010).

Research in the Student Approaches to Learning domain has also noted that, even when teachers made changes to the teaching environment, for example by using student-activating teaching methods instead of traditional lectures and by adopting different assessment strategies to induce deep approaches to learning, the effect on students’ approaches was opposite to the desired one: DA decreased and SA increased (Asikainen & Gijbels, 2017; Entwistle et al., 1984; Struyven et al., 2006). This study aims to investigate how university engineering students’ approaches to learning change during first- and second-year classes that used either peer-assessed weekly homework or formative multiple-choice quizzes throughout the duration of the courses.

Appendix A gives a list of acronyms used in the paper.

Materials and method

Curriculum design of the modules

The authors were involved in teaching three core modules to first- and second-year engineering students at a Scottish university in 2017-18 and 2018-19:

·        CE1 (first year) - Basic Principles in Chemical Engineering (mass/energy balances)

·        CE2 (second year) - Statistics for Chemical Engineers (descriptive and inferential statistics)

·        ME1 (first year) - Mathematics 1M (algebra, geometry and calculus).

CE1 and CE2 are attended by students enrolled either in the Chemical Engineering degree or in the Applied Chemistry and Chemical Engineering degree. ME1 is the core first-year mathematics module for Mechanical and Aerospace Engineering students. The class sizes were between 100 and 200 students (Table 1).

All three modules were delivered in a ‘traditional manner’ (Mason et al., 2013; Yelamarthi et al., 2016), through weekly or twice-weekly one-hour lectures, and weekly tutorials, either one or two hours long. During the lectures, the lecturer presented a topic to the students and worked through several examples. During the tutorials, students were asked to solve problems related to the topic covered in the lecture to promote the development of independent problem solving. Exercises for each tutorial were set in advance so that students had time to think about them and work on them prior to the tutorial. The problems were usually presented in increasing order of difficulty and required students to develop higher-order thinking skills, connect techniques, apply concepts, and develop metacognitive skills. During the tutorials, tutors were present to provide help if needed, and to give feedback on the students’ solutions.

Table 1 gives details of the three modules.

 

Table 1. Details of the module structure

 

Module | Level | Credits (*) | Delivery period | Students attending | Number of one-hour lectures per week | Duration of the weekly tutorial (hours)
CE1 | 1 | 20 | September – March | 100-120 | 1 | 2
CE2 | 2 | 10 | September – December | 120-150 | 2 | 2
ME1 | 1 | 20 | September – March | 150-180 | 2 | 1

(*) 1 credit = 10 nominal hours

Assessment, feedback, and learning environment

The interdependence of assessment, learning, and teaching is widely recognised and accepted in education (Biggs, 2003). In their seminal work, Black and Wiliam (1998) collected evidence showing that assessment substantially enhances the learning process when it is seen as a moment of learning. In the light of emergent constructivist pedagogies, Elwood and Klenowski (2002) viewed the learners as active participants in the process of making sense of new knowledge and recognised the importance of assessment to make students aware of their own learning approaches and responsible for their own understanding. Assessment is also an opportunity for students to receive feedback on the quality of their work, so that they can develop as independent learners through reflection and evaluation (Evans, 2013).

The lecturers of CE1, CE2, and ME1 wanted to use assessment to induce deep approaches to learning, while still working within the constraints imposed by the overall programme organisation. In fact, barriers to moving towards student-centred curricula and approaches, due to organisational factors or limited resources, are well known in engineering teaching (Danko & Duarte, 2009; Sahonero-Alvarez & Calderon, 2018).

The main difference between the CE1 and CE2 classes and the other Chemical Engineering classes was the peer-marked continuous assessment in the form of weekly homework.

The final mark for CE1 consisted of:

·        coursework/project (10% of the final mark)

·        two class tests in October and December (10% of the final mark)

·        nine weekly peer-marked homework assignments in January-March (10% of the final mark)

·        a three-hour exam in April/May (70% of the final mark).

 

The final mark for CE2 consisted of:

·        a class test in November (10% of the final mark)

·        nine weekly peer-marked homework assignments in October-December (20% of the final mark)

·        a three-hour exam in December (70% of the final mark).

The Workshop peer review tool available in Moodle, the virtual learning environment (VLE) of choice in this university, was used to implement peer-marking of the weekly homework in CE1 and CE2. Students worked on the homework problems during tutorials, submitted their worked solutions online, and anonymously peer-assessed each other’s work. The weekly homework was designed to motivate students to engage with their studies throughout the course and to avoid students cramming right before the final examination. The efficacy of distributed practice has been documented, and it is recognised that distributing learning over time improves long-term retention (Dunlosky et al., 2013). To keep homework marking manageable for the lecturer, students were included in the assessment process through peer-marking of the weekly homework. Peer assessment has received much attention as a pedagogical tool in the past decade: it engages students in the assessment process, and it provides them with valuable feedback. Several benefits are recognised for both participants in peer assessment: the assessor is engaged in cognitively demanding activities, such as reviewing, summarising, giving feedback, and identifying mistakes and misconceptions, which should deepen their understanding, while the assessed peer is provided with prompt and abundant feedback and with clear criteria of what constitutes high-quality work. In addition, peer assessment has positive effects by increasing time on task, motivation, and engagement, and by giving a greater sense of ownership and responsibility to the learner (Bloxham & West, 2004; Elwood & Klenowski, 2002; Topping, 1998). Peer assessment can be rather time consuming, but the learning achieved by both assessors and assessees can be significant, in both the short term (e.g., achievement in that particular class) and the long term (e.g., transferable skills in communication and collaboration) (Nicol & McCallum, 2022; Topping, 2017).

The assessment of ME1 was a three-hour examination during the April/May examination diet, covering both first- and second-semester material. During the academic year, four multiple-choice class tests were also set. To reward good performance and encourage engagement throughout the year, students achieving an overall average of 60% or above in the four tests were exempt from the final examination. In this cohort, 61% of the students gained this exemption.

The main difference between ME1 and other mathematics service-teaching classes was the introduction of regular multiple-choice quizzes with a feedback mechanism requiring student engagement. The quizzes were available online through Moodle. Several research studies have observed that participation in regular online voluntary quizzes has a positive impact on achievement in summative assessment tasks and found that students perceive quizzes as a useful tool to help them study regularly (Angus & Watson, 2009; Förster et al., 2018; Kibble, 2007, 2011). For ME1, four quizzes were set in the first semester and seven in the second semester. Each quiz consisted of four to five questions covering topics that had been discussed during lectures before the quiz opened. The aims of the quizzes were to pace students’ learning, to strengthen conceptual understanding, to provide feedback, and to prepare students for class tests. Students could attempt quizzes as many times as they wanted, in their own time. Participation was optional and results did not contribute to the final mark. Feedback was offered in three different ways. Upon submitting their answers, students received immediate, pre-built feedback in the form of a hint rather than the correct answer. The hint asked students to revise a particular step in their solution process that might have led to an incorrect answer, and to attempt the question again. This mechanism required students to actively engage with feedback and reflect on their own thinking. The lecturer also provided collective feedback, based on VLE data that gave an insight into which questions students found most challenging and what the most common wrong answer was. Collective feedback was normally provided a week before class tests took place. Students were also offered individual feedback on their worked solutions, but very few took up this offer.

The questions used in the CE1, CE2 and ME1 assessments (homework, tests, quizzes, exams) were classified according to the six major categories of the cognitive process dimension of the revised Bloom’s taxonomy (Anderson et al., 2001): remembering, understanding, applying, analysing, evaluating, and creating, listed here in increasing cognitive complexity. The guidelines given by Radmehr and Drake (2018) were followed to map the questions to the revised Bloom’s categories. Each question was assigned to the highest level (‘remembering’, ‘understanding’, ‘applying’, ‘analysing’, ‘evaluating’, ‘creating’) that the question would activate, although it was recognised that each question activates a range of cognitive processes (Radmehr & Drake, 2018). Radmehr and Drake (2018, p. 43) warn that “the cognitive processes activated in a student’s mind while solving a problem depend on the student’s prior knowledge and experience”. Therefore, when classifying the questions, the authors had to use their judgment of what they expected the students’ prior knowledge to be, according to what the students had been taught in lectures and tutorials.

In CE1 and CE2, most of the questions (around 60%) mapped to ‘applying’, around 30% to ‘analysing’, and the remainder were split between ‘understanding’ and ‘evaluating’. No question focused on ‘remembering’: a periodic table (with chemical element symbols, atomic masses, and so on) and a formula sheet (with, for example, Hess’s law and the formula for the equilibrium constant) were provided in CE1 lectures, tutorials and exams, and statistical tables (z-values, t-values, chi-squared values) and a formula sheet (with the main statistical formulas, such as the standard deviation and confidence intervals with known or unknown variance) were provided to students in CE2 lectures, tutorials, and exams. Some level of ‘remembering’ was needed to solve all questions (for example, knowing the names of common chemical compounds or statistical terminology, such as frequency, error, and percentage) and was embedded in all questions, but not tested explicitly. In CE1 and CE2 most of the questions belonged to the ‘applying’ category, as the questions were word problems describing a chemical process for which a material/energy balance is required, or a statistical scenario in which a statistical procedure needs to be applied and carried out. Some questions were constructed in such a way that ‘analysing’ was also required (for example, sorting relevant from irrelevant information, or extracting information from a complex table of data) (Radmehr & Drake, 2018). A minority of questions asked students to make assumptions, approximations, and comparisons between two different ways of solving a problem; those questions were assigned to the ‘evaluating’ category.

 

As ME1 is a service-teaching mathematics class attended by engineering students, its purpose is to give students the mathematical foundations to use mathematical procedures fluently and accurately in their engineering modules in subsequent years. Most of the ME1 questions required students to use the processes in the ‘understanding’ and ‘applying’ categories. A typical question asked students to simplify, find, calculate, differentiate, or integrate. Similarly to CE1 and CE2, no questions activated only the first level, ‘remembering’, as students were not required to memorise formulas. A one-page formula sheet with the key formulas needed in the ME1 class (e.g., formulas for differentiating or integrating common functions) was provided to the students in class and during examinations.

Survey and data collection

Many self-reporting survey instruments are available to measure students’ approaches to learning (Leiva-Brondo et al., 2020, and references therein). Among these, Biggs’ revised two-factor study process questionnaire (R-SPQ-2F) (Biggs et al., 2001), a refined version of Biggs’ Study Process Questionnaire (SPQ) (Biggs, 1987), is widely used because of its concise length, sound psychometric properties (Zakariya et al., 2020), good construct validity, and acceptable internal consistency (Martinelli & Raykov, 2017). The R-SPQ-2F consists of two 10-item scales (deep approach (DA) and surface approach (SA)) scored on a five-point Likert scale: never or only rarely true, sometimes true, true about half the time, frequently true, and always or almost always true. Each scale has two sub-components (motive and strategy) with five items each, producing four categories overall: deep motive (DM), deep strategy (DS), surface motive (SM), and surface strategy (SS). Table 2 shows the questions of the R-SPQ-2F questionnaire.

 

Table 2. Questions of the R-SPQ-2F questionnaire (Biggs et al., 2001)

Item number | Item description | Scale category
1 | I find that at times studying gives me a feeling of deep personal satisfaction. | DM
2 | I find that I have to do enough work on a topic so that I can form my own conclusions before I am satisfied. | DS
3 | My aim is to pass the course while doing as little work as possible. | SM
4 | I only study seriously what’s given out in class or in the course outlines. | SS
5 | I feel that virtually any topic can be highly interesting once I get into it. | DM
6 | I find most new topics interesting and often spend extra time trying to obtain more information about them. | DS
7 | I do not find my course very interesting so I keep my work to the minimum. | SM
8 | I learn some things by rote, going over and over them until I know them by heart even if I do not understand them. | SS
9 | I find that studying academic topics can at times be as exciting as a good novel or movie. | DM
10 | I test myself on important topics until I understand them completely. | DS
11 | I find I can get by in most assessments by memorizing key sections rather than trying to understand them. | SM
12 | I generally restrict my study to what is specifically set as I think it is unnecessary to do anything extra. | SS
13 | I work hard at my studies because I find the material interesting. | DM
14 | I spend a lot of my free time finding out more about interesting topics which have been discussed in different classes. | DS
15 | I find it is not helpful to study topics in depth. It confuses and wastes time, when all you need is a passing acquaintance with topics. | SM
16 | I believe that lecturers shouldn’t expect students to spend significant amounts of time studying material everyone knows won’t be examined. | SS
17 | I come to most classes with questions in mind that I want answering. | DM
18 | I make a point of looking at most of the suggested readings that go with the lectures. | DS
19 | I see no point in learning material which is not likely to be in the examination. | SM
20 | I find the best way to pass examinations is to try to remember answers to likely questions. | SS

 

The DM score measures intrinsic motivation, related to the student’s own curiosity and personal interest in the subject. The SM score measures extrinsic motivation: the student carries out the task because of consequences imposed from the outside. The DS involves higher cognitive levels: the student looks for meaning in what they are studying. The SS is characterised by rote learning: the student focuses on memorising facts without seeking connections between different pieces of information.

The responses were coded from 1 = ‘never or only rarely true’ to 5 = ‘always or almost always true’. The DA scale is the sum of the DS and DM subscales and ranges from 10 to 50; a high score signifies a high DA. The SA scale is the sum of the SS and SM subscales and also ranges from 10 to 50; a high score denotes a high SA.
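As an illustration of this scoring (a minimal sketch in Python, not the authors’ analysis code; the function name and data structure are ours), the DA and SA scale scores for one respondent can be computed from the 20 coded responses using the item-to-subscale mapping in Table 2:

```python
# Illustrative R-SPQ-2F scoring sketch; item-to-subscale mapping follows Table 2.
DM_ITEMS = [1, 5, 9, 13, 17]   # deep motive
DS_ITEMS = [2, 6, 10, 14, 18]  # deep strategy
SM_ITEMS = [3, 7, 11, 15, 19]  # surface motive
SS_ITEMS = [4, 8, 12, 16, 20]  # surface strategy

def score_rspq2f(responses: dict[int, int]) -> dict[str, int]:
    """responses maps item number (1-20) to a Likert value coded 1-5."""
    dm = sum(responses[i] for i in DM_ITEMS)
    ds = sum(responses[i] for i in DS_ITEMS)
    sm = sum(responses[i] for i in SM_ITEMS)
    ss = sum(responses[i] for i in SS_ITEMS)
    # Each scale is the sum of its two 5-item subscales, so it ranges from 10 to 50.
    return {"DA": dm + ds, "SA": sm + ss}

# Example: a respondent answering 'sometimes true' (2) to every item scores DA = SA = 20.
print(score_rspq2f({i: 2 for i in range(1, 21)}))
```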

To study whether students’ approaches to learning change over time, three cohorts of students (A, B, and C in Table 3) were asked to fill in the R-SPQ-2F questionnaire at different times during their university studies. Cohort A took the survey in September 2017 and March 2018 (start and finish of CE1) and in September 2018 (start of CE2); cohort B took the survey in September 2017 and December 2017 (start and finish of CE2); cohort C took the survey in September 2017 and March 2018 (start and finish of ME1). Table 3 shows the questionnaire completion rates for each cohort.

 

Table 3. Cohort details and questionnaire completion rate

Cohort | Number of students | Number (%) of returned questionnaires (module and date of survey)
A | 102 | 64 (63%) (CE1 September 2017); 75 (74%) (CE1 March 2018); 90 (88%) (CE2 September 2018)
B | 132 | 112 (85%) (CE2 September 2017); 94 (71%) (CE2 December 2017)
C | 152 | 88 (58%) (ME1 September 2017); 85 (56%) (ME1 March 2018)

 

The paper questionnaire was handed out at the beginning of a tutorial and collected at the end. Completion of the questionnaire was voluntary. For ME1, the survey was delivered in two out of three tutorial groups. Demographic information, such as respondents’ age, gender, and nationality, was not collected, as it was felt that students might self-report more honestly if such information was not requested.

The study received approval from the Ethics Committee of the Department of Chemical and Process Engineering for the stated research procedures.

Analysis

Cronbach’s alpha was used as a measure of internal consistency (or reliability) of the R-SPQ-2F questionnaire (Cronbach, 1951). Cronbach’s alpha tests how correlated the items measuring each of the subscales/factors are: for a set number of scale items, alpha will approach zero if scale items are uncorrelated, and will approach one if scale items are highly correlated. Highly correlated scale items will probably measure the same underlying concept. Acceptable values of alpha are usually between 0.70 and 0.95 (Tavakol & Dennick, 2011). Cronbach’s alphas of the two main subscales/factors, DA and SA, were calculated and found to be between 0.65 and 0.80 for surface and between 0.68 and 0.82 for deep learning (Appendix B). These values are similar to those in previous studies (Gijbels et al., 2009). Reliability was therefore deemed to be sufficient.
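For readers who want to reproduce the reliability check, the sketch below (a generic illustration, not the code used in this study) computes Cronbach’s alpha for one scale directly from its definition, alpha = k/(k-1) * (1 - sum of the item variances / variance of the total score), where k is the number of items:

```python
import numpy as np

def cronbach_alpha(item_scores: np.ndarray) -> float:
    """item_scores: 2-D array with rows = respondents and columns = scale items."""
    k = item_scores.shape[1]
    item_vars = item_scores.var(axis=0, ddof=1)      # sample variance of each item
    total_var = item_scores.sum(axis=1).var(ddof=1)  # sample variance of the scale totals
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Example with simulated responses for a 10-item scale (Likert values 1-5).
# With uncorrelated random items, alpha is close to zero (it can even be slightly negative).
rng = np.random.default_rng(0)
fake_responses = rng.integers(1, 6, size=(100, 10))
print(round(cronbach_alpha(fake_responses), 2))
```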

The Cronbach’s alphas of the four subscales (DM, DS, SM, SS) were also calculated and found to be much lower, between 0.31 and 0.70. It was therefore decided not to compare scores on the four subscales.

Overall mean scores for DA and SA were calculated for each instance a questionnaire was deployed with each cohort and either Analysis of Variance (ANOVA) or two-sided two-sample t-tests (p-value < 0.05) were performed to test for differences:

·        between DA and SA scores for each cohort

·        in DA scores for each cohort at different points in time

·        in SA scores for each cohort at different points in time

Questionnaire responses were collated for first-year (CE1 and ME1) students, in September and in March, and for second-year students (CE2), in September and in December, and mean scores were calculated. ANOVA was performed to assess whether students changed their approaches to learning as they progressed in their university studies.
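The sketch below (with simulated placeholder scores rather than the study’s data) illustrates how such comparisons can be run with scipy: a two-sided two-sample t-test between DA and SA scores, and a one-way ANOVA on one set of scores across survey points:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
da_scores = rng.normal(32, 5, size=80)  # placeholder DA scale scores for one cohort
sa_scores = rng.normal(27, 5, size=80)  # placeholder SA scale scores for the same cohort

# Two-sided two-sample t-test (difference between DA and SA scores)
t_stat, p_val = stats.ttest_ind(da_scores, sa_scores)
print(f"t = {t_stat:.2f}, p = {p_val:.3f}")

# One-way ANOVA (e.g., one cohort's SA scores at three survey points)
sa_t1 = rng.normal(26, 5, size=64)
sa_t2 = rng.normal(27, 5, size=75)
sa_t3 = rng.normal(28, 5, size=90)
f_stat, p_val = stats.f_oneway(sa_t1, sa_t2, sa_t3)
print(f"F = {f_stat:.2f}, p = {p_val:.3f}")
```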

Results

Figures 1 and 2 show the DA and SA mean scores (respectively) for each cohort and their 95% confidence intervals (the mean scores and standard deviation values are given in Table C1 in Appendix C). Cohort A was surveyed three times, cohorts B and C twice (dates of surveys are given in Table 3).


Figure 1. DA mean scores and 95% confidence intervals for each cohort (cohort A was surveyed three times, cohorts B and C twice; dates of surveys are given in Table 3)

 


Figure 2. SA mean scores and 95% confidence intervals for each cohort (Cohort A was surveyed three times, Cohorts B and C twice; dates of surveys are given in Table 3)

 

The difference between the DA score and the SA score is statistically significant (two-sided t-test, p<0.05) for cohort A in CE1 Sept 2017, CE1 March 2018, and CE2 Sept 2018, for cohort B in CE2 Sept 2017, and for cohort C in ME1 Sept 2017. In all these cases, the DA score is higher than the SA score. Cohorts B and C did not show a statistically significant difference between DA and SA scores the second time the questionnaire was deployed, in December 2017 for cohort B and in March 2018 for cohort C.

ANOVA was performed for cohort A to test whether the DA and SA scores changed over time (September 2017, March 2018, September 2018). There was no statistically significant difference for either DA (F(2,228)=0.47, p=0.628) or SA (F(2,228)=1.33, p=0.267): DA and SA scores for cohort A did not change significantly over time.

A two-sided t-test (p<0.05) was run to test whether the DA and SA scores changed over time for cohorts B and C. For cohort B, DA did not change between September 2017 and December 2017, but SA showed a statistically significant increase over the same period. Similarly, for cohort C, DA did not change between September 2017 and March 2018, but SA showed a statistically significant increase at the end of the module (March 2018) compared with the beginning (September 2017).

For each cohort, DA did not change over time (within the space of one or two semesters), but SA increased in two of the three cohorts (namely, cohorts B and C).

The DA and SA scores were compared (two-sided t-test, p<0.05) between cohorts at the same point of university studies:

·        cohort A vs cohort C in September 2017: no statistically significant difference between either DA or SA scores

·        cohort A vs cohort C in March 2018: statistically significant differences in both DA scores (with the DA of CE1 greater than that of ME1) and SA scores (with the SA of CE1 smaller than that of ME1)

·        cohort A in September 2018 vs cohort B in September 2017: no statistically significant difference between either DA or SA scores.

To obtain an insight into how deep and surface approaches change from first to second year, questionnaire results were combined for cohorts A and C taking the first-year modules CE1 and ME1 at the beginning (September 2017) and at the end (March 2018), and for the second-year module CE2 at the beginning (cohort B in September 2017 and cohort A in September 2018) and at the end (cohort B in December 2017). Table C2 in Appendix C shows the mean score and standard deviation for DA and SA for the first-year and second-year classes combined.

The DA score is higher than the SA score (two-sided t-test, p<0.05) for students in first year, both at the beginning and at the end of the year, and for students in second year at the beginning of the year. However, the students in second year did not show a statistically significant difference between DA and SA scores when they took the survey at the end of the semester.

Figures 3 and 4 show the DA and SA mean scores (respectively) reported in Table C2 and their 95% confidence intervals.

Figure 3 shows the DA scores over time from first to second year. A one-way ANOVA test was carried out (p<0.05) and showed no statistically significant difference between the four DA scores (F(3,606)=0.14, p=0.934).

 

 


Figure 3. DA mean scores and 95% confidence intervals in first-year modules at the beginning (1st B) and end (1st E) and in second-year modules at the beginning (2nd B) and end (2nd E).

 

Figure 4 shows that SA increased over time from first to second year. An ANOVA test of the scores yielded significant variation among groups (F(3,606)=12.36, p<0.001). A Tukey-Kramer post hoc test for multiple comparisons of mean values was performed to determine which individual means were significantly different (Lee & Lee, 2018). The Tukey-Kramer test showed statistically significant differences between 1st B and 2nd B, 1st B and 2nd E, 1st E and 2nd B, and 2nd B and 2nd E (at the 0.05 significance level) (see Table C2 for definitions of these codes).
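As an illustration of this post hoc procedure (with simulated placeholder scores and arbitrary group sizes, not the study’s data), a Tukey-Kramer comparison following a one-way ANOVA can be run with statsmodels’ pairwise_tukeyhsd, which handles unequal group sizes:

```python
import numpy as np
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(2)

# Group labels follow the codes used in Table C2; scores and sizes are placeholders.
groups = ["1st B"] * 150 + ["1st E"] * 160 + ["2nd B"] * 200 + ["2nd E"] * 100
scores = np.concatenate([
    rng.normal(25, 5, 150),
    rng.normal(26, 5, 160),
    rng.normal(28, 5, 200),
    rng.normal(30, 5, 100),
])

# Pairwise Tukey-Kramer comparisons at the 0.05 significance level.
print(pairwise_tukeyhsd(endog=scores, groups=groups, alpha=0.05))
```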

 


Figure 4. SA mean scores and 95 % confidence intervals in first-year modules at the beginning (1st B) and end (1st E) and in second-year modules at the beginning (2nd B) and end (2nd E).

 

Figures 3 and 4 show that the DA scores stayed the same while the SA scores increased. The increase in surface approach was not accompanied by a decrease in the adoption of a deep approach: one approach might not exclude the other.

Discussion

This research investigated how students’ approaches to learning change during the course of first- or second-year modules and explored whether activities designed to enhance student engagement with the feedback and assessment process induce deeper approaches to learning. Overall, our results show that the level of deep approaches to learning did not change during the course of the modules for any of the cohorts. However, contrary to expectations, the surface approach increased for two out of three cohorts. Other researchers have attempted to find ways to increase deep strategies and decrease surface strategies in students: Gijbels and Dochy (2006) used extensive formative assessment in their course; Nijhuis et al. (2005) and Segers et al. (2006) redesigned a course around problem-based learning; Struyven et al. (2006) employed several novel student-activating teaching methods as opposed to traditional lecture-based modules. In all these cases, the direction of the change in Student Approaches to Learning was opposite to the desired one: deep approaches to learning decreased, and surface approaches to learning increased.

The analysis of the assessment questions using Bloom’s taxonomy highlighted the fact that the higher levels (‘evaluating’ and ‘creating’) were rarely reached in these classes: most CE1 and CE2 questions mapped to ‘applying’ and ‘analysing’, and most ME1 questions to ‘understanding’ and ‘applying’. It is possible that students might have adopted deeper learning approaches if assessments had required a higher level of cognitive complexity (Newton & Martin, 2013). However, as highlighted by Swart (2009), it is appropriate for first- and second-year university courses to focus mainly on the lower levels of Bloom’s taxonomy (‘remembering’, ‘understanding’, and ‘applying’), as long as the higher-order levels (‘analysing’, ‘evaluating’, and ‘creating’) are activated in third and fourth year. The engineering students enrolled in CE1, CE2, and ME1 will experience enquiry-based learning in third and fourth year, when, for some classes, they will work in groups to tackle complex engineering problems and to design a chemical plant, drawing on what they have learnt in first and second year. These problem-based and project-based activities will require higher cognitive skills, such as critical thinking and creativity (Ahern et al., 2019).

Donnison and Penn-Edwards (2012) also argue that it is unreasonable to expect first-year students to consistently use a deep approach to learning and that a surface approach should be thought of as a necessary first stage in a cycle of learning at university as it is possible for memorisation (surface approach) to be a precursor to understanding (deep approach) (Darlington, 2019; Kember, 2016; Matic et al., 2013). Lindblom-Ylänne et al. (2019, p. 2192) suggest that an “unreflective approach could better and more neutrally describe these students’ approach to learning” to partially remove the negative connotation that the term ‘surface’ carries. In addition, Darlington (2019) argues that a deep approach might not be the “most appropriate” one in undergraduate mathematics, as “if a student cannot remember specific aspects of a mathematical definition then they will not be able to use it effectively in proofs which require its use and interpretation” (p. 305). The courses described in this paper (CE1, CE2, ME1) have a heavy mathematical content, and they also require some degree of memorisation, although formula sheets were provided to the students in exams so that the students did not have to memorise formulas.

Our results also show that neither the DA nor the SA score differed between cohorts A and C at the beginning of first year, but both differed at the end of first year, with the Chemical Engineering students (cohort A) displaying more DA and less SA than the Mechanical Engineering students (cohort C). The majority of students in the two first-year cohorts would have had similar prior educational experiences in secondary school, so it is not surprising that the DA and SA scores do not differ between the two cohorts at the beginning of university. However, at the end of first year, after completion of CE1 (Basic Principles in Chemical Engineering) for cohort A and ME1 (Mathematics 1M) for cohort C, the DA and SA scores were different for the two cohorts, with cohort A displaying a higher DA score and a lower SA score. The lower DA/SA ratio of cohort C at the end of the course, compared to that of cohort A, correlates with the analysis of the assessment questions through Bloom’s taxonomy: ME1 questions mapped to lower levels of Bloom’s taxonomy (‘understanding’ and ‘applying’) than CE1 questions did (‘applying’ and ‘analysing’). This also confirms that approaches to learning are context specific and can vary across different courses (Asikainen & Gijbels, 2017). It has been found that students in soft disciplines, i.e., fields with less consensus about content and methods of inquiry, such as the behavioural and social sciences, adopt a deeper approach than students in hard disciplines, i.e., fields with greater consensus, such as engineering and science (Baeten et al., 2010; Laird et al., 2008; Parpala et al., 2010). The link between discipline and DA/SA could explain why the present study observed a greater increase in SA over time in the more mathematics-based courses, ME1 and CE2, than in the less mathematical course, CE1.

Case and Gunstone (2003) studied students’ learning approaches in a specific context, chemical engineering, the same context as cohorts A and B in this study. They identified three approaches: conceptual (similar to the deep approach), algorithmic, and information-based (both with characteristics similar to the surface approach). They argue that the algorithmic approach is often reinforced in science and engineering courses, where students solve several numerical examples. They warn that this approach can be an obstacle to developing conceptual (deep) understanding. To strengthen students’ conceptual approach to learning, Case and Gunstone used frequent non-numerical questions requiring conceptual understanding and reflective-journal tasks. However, they recognised that not all students made a shift towards a conceptual approach, possibly because of the perception of an excessive workload.

Another contributing factor to the increase in surface approaches for cohort C compared with cohort A may lie in the difference in assessment structure and in students’ level of engagement with feedback. Cohort A completed a wider range of continuous assessment tasks during the academic year, and students’ results in these contributed to the final mark. This assessment strategy ensured student interaction and active engagement with assessment tasks and, at the same time, prepared students for assessment through marking exercises (Rust, 2002). In contrast, cohort C only encountered multiple-choice assessments during the teaching term. This assessment system might be perceived as threatening or anxiety provoking by some students, as multiple-choice assessments do not leave room for error: an approach to a solution might be correct, but a minor error might lead to an incorrect answer. Students might also have felt more under pressure in the second semester to improve or maintain their performance on class tests in order to qualify for exemption from sitting the three-hour examination. Several researchers have observed that if students perceive the workload to be excessively high and the assessment not to reward deep approaches, SA will be enhanced: students will choose to adopt strategic, time-saving, surface approaches rather than invest their time in deep approaches (Gijbels & Dochy, 2006). Students’ perceptions of the assessment demands will influence the learning strategies that they choose to adopt (Tomas & Jessop, 2019).

Our study confirms that the relationship between assessment and learning is complex and that changing the learning and assessment environment and feedback mechanisms might not have the desired effect of inducing deeper approaches to learning.

However, the literature has reached a consensus that student-centred teaching that requires learners to take an active role is more likely to induce and support deep approaches to learning than traditional teaching based on a transmission model, especially for students who would more readily adopt a surface, or ‘unreflective’, approach (Gozalo et al., 2020; Hailikari et al., 2021). Developing deep, reflective learning is more effective when active learning approaches are introduced in a curriculum-wide, systematic way, over the course of the entire programme of study (Koster & Vermunt, 2020). One course is not enough to change approaches to learning: a sequence of courses with well-designed student-centred instructional practices is needed (Lahdenperä et al., 2021).

It is widely recognised that constructive alignment in assessment design is important for students’ learning (Biggs, 2003) and that assessment has a crucial role in students’ adoption of surface and deep approaches to learning (Hailikari et al., 2021), but “it is ultimately students’ perceptions of assessment that influence their approaches to learning” (Hattingh & Dison, 2021, p. 162). Therefore, when designing assessment, instructors should involve students and take students’ perspectives into consideration to increase their sense of ownership (Hailikari et al., 2021).

Finally, individual student characteristics, such as critical reflection and metacognitive capabilities, play a crucial part in achieving the full potential of a curriculum (Baeten et al., 2010; Koster & Vermunt, 2020). Santangelo et al. (2021) reported how embedding metacognition instruction in student-centred first-year classes could support the adoption of deep approaches. Improving students’ organisation skills might also help students to sustain a deep approach (Parpala et al., 2021). In fact, recent studies have shown that students who adopt a deep approach to learning but have poor organisation skills have lower achievement and more readily switch to an ‘unreflective’ (surface) approach compared to students with different learning profiles (Parpala et al., 2021, 2022). Therefore, to facilitate and sustain deep approaches to learning, universities should actively support students in developing and improving their study skills from the beginning of their studies (Asikainen et al., 2013, 2019).

Limitations

This paper has several limitations:

·        The students were all from one university, therefore caution must be used when generalising the results. However, three cohorts of students, with similar backgrounds, were surveyed and gave similar results.

·        Data were collected at no more than three points in time for each cohort, all within the first two years of their studies. Further studies should survey students in later years to complete the trajectory of the students’ learning approaches.

·        The timing of the administration of the questionnaire might have an effect on the responses, as students might feel differently about learning depending on how close their final examination is (Struyven et al., 2006).

·        The data were collected prior to the COVID-19 pandemic during which education was delivered mainly online for two years. Students might have changed how they learn and engage with resources and activities as a consequence of this. Further empirical research is needed to investigate surface and deep approaches to learning post-pandemic.

·        Finally, the instrument used to measure DA and SA was a self-reported questionnaire. Lindblom-Ylänne et al. (2019) warn that a high score of surface learning measured by a questionnaire might not correspond to a full surface approach, where memorisation is not enriched by deep processes at all. They argue that, to capture the complexity and variation in use of the surface approach, rich research methods are needed. Future work should include students’ interviews to understand more precisely the learning approaches of students and to probe why they do not engage more in deep learning.

Conclusions

Our research aligns with the findings of previous studies: inducing deeper approaches to learning is not straightforward. Even though the lecturers of CE1, CE2, and ME1 implemented changes in assessment to increase students’ engagement throughout the semester and used peer assessment to give students a sense of ownership, students’ deep approach to learning stayed the same while their surface approach increased. It has been argued that a possible reason for this trend is students’ perception of work overload. The mathematical nature of the courses might also play a role in inducing a procedural way of tackling the course material. On the other hand, a surface approach might be a precursor to understanding, and it might have to be expected at the beginning of university studies, as students have to start with a surface approach to learning before they become actively involved in deep learning. Future research should study how approaches to learning used in later years compare with those in first and second year and how developing students’ study and organisation skills might impact deep/surface approaches.

Acknowledgments

The authors would like to thank Dr Esther Ventura-Medina for many helpful discussions and the reviewers for their supportive feedback and suggestions.

References

Ahern, A., Dominguez, C., McNally, C., O’Sullivan, J. J., & Pedrosa, D. (2019). A literature review of critical thinking in engineering education. Studies in Higher Education, 44(5), 816-828. https://doi.org/10.1080/03075079.2019.1586325

Anderson, L. W., Krathwohl, D. R., Airasian, P. W., Cruikshank, K. A., Mayer, R. E., Pintrich, P. R., Raths, J., & Wittrock, M. C. (2001). A taxonomy for learning, teaching, and assessing: A revision of Bloom’s taxonomy of educational objectives. Longman.

Angus, S. D., & Watson, J. (2009). Does regular online testing enhance student learning in the numerical sciences? Robust Evidence from a large data set. British Journal of Educational Technology, 40(2), 255-272. https://doi.org/10.1111/j.1467-8535.2008.00916.x 

Asikainen, H., & Gijbels, D. (2017). Do students develop towards more deep approaches to learning during studies? A systematic review on the development of students’ deep and surface approaches to learning in higher education. Educational Psychology Review, 29(2), 205-234. https://doi.org/10.1007/s10648-017-9406-6 

Asikainen, H., Kaipainen, K., & Katajavuori, N. (2019). Understanding and promoting students’ well-being and performance in university studies. Journal of University Teaching & Learning Practice, 16(5). https://doi.org/10.53761/1.16.5.2 

Asikainen, H., Parpala, A., Virtanen, V., & Lindblom-Ylänne, S. (2013). The relationship between student learning process, study success and the nature of assessment: A qualitative study. Studies in Educational Evaluation, 39(4), 211-217. https://doi.org/10.1016/j.stueduc.2013.10.008

Baeten, M., Kyndt, E., Struyven, K., & Dochy, F. (2010). Using student-centred learning environments to stimulate deep approaches to learning: Factors encouraging or discouraging their effectiveness. Educational Research Review, 5(3), 243-260.  https://doi.org/10.1016/j.edurev.2010.06.001

Baeten, M., Struyven, K., & Dochy, F. (2013). Student-centred teaching methods: Can they optimise students’ approaches to learning in professional higher education? Studies in Educational Evaluation, 39(1), 14-22. https://doi.org/10.1016/j.stueduc.2012.11.001

Biggs, J. B. (1987). Student approaches to learning and studying. Research Monograph: ERIC.

Biggs, J. B. (2003). Aligning teaching for constructing learning. Higher Education Academy, 1(4), 1-4.

Biggs, J. B., Kember, D., & Leung, D. Y. (2001). The revised two-factor study process questionnaire: R-SPQ-2F. British Journal of Educational Psychology, 71(1), 133-149. https://doi.org/10.1348/000709901158433

Biggs, J. B., & Tang, C. (2003). Teaching for quality learning at university. Open University Press.

Bloxham, S., & West, A. (2004). Understanding the rules of the game: Marking peer assessment as a medium for developing students' conceptions of assessment. Assessment & Evaluation in Higher Education, 29(6), 721-733. https://doi.org/10.1080/0260293042000227254

Børte, K., Nesje, K., & Lillejord, S. (2020). Barriers to student active learning in higher education. Teaching in Higher Education, 28(3), 597-615. https://doi.org/10.1080/13562517.2020.1839746

Bridgstock, R. (2017). The university and the knowledge network: A new educational model for twenty-first century learning and employability. In Graduate employability in context (pp. 339-358). Springer. https://doi.org/10.1057/978-1-137-57168-7_16

Burbach, M. E., Matkin, G. S., & Fritz, S. M. (2004). Teaching critical thinking in an introductory leadership course utilizing active learning strategies: A confirmatory study. College Student Journal, 38(3), 482-494.

Clack, A., & Dommett, E. J. (2021). Student learning approaches: Beyond assessment type to feedback and student choice. Education Sciences, 11(9), 468. https://www.mdpi.com/2227-7102/11/9/468

Cronbach, L. J. (1951). Coefficient alpha and the internal structure of tests. Psychometrika, 16(3), 297-334. https://doi.org/10.1007/BF02310555

Danko, C. C., & Duarte, A. A. (2009). The challenge of implementing a student-centred learning approach in large engineering classes. WSEAS Transactions on Advances in Engineering Education, 8(6), 225-236.

Darlington, E. (2019). Shortcomings of the ‘approaches to learning’ framework in the context of undergraduate mathematics. Journal of Research in Mathematics Education, 8(3), 293-311. https://doi.org/10.17583/redimat.2019.2541

Donald, W. E., Ashleigh, M. J., & Baruch, Y. (2018). Students’ perceptions of education and employability: Facilitating career transition from higher education into the labor market. Career Development International, 23(4), 513-540. https://doi.org/10.1108/CDI-09-2017-0171

Donnison, S., & Penn-Edwards, S. (2012). Focusing on first year assessment: Surface or deep approaches to learning? International Journal of the First Year in Higher Education, 3(2), 9-20. https://doi.org/10.5204/intjfyhe.v3i2.127

Dunlosky, J., Rawson, K. A., Marsh, E. J., Nathan, M. J., & Willingham, D. T. (2013). Improving students’ learning with effective learning techniques: Promising directions from cognitive and educational psychology. Psychological Science in the Public Interest, 14(1), 4-58. https://doi.org/10.1177/1529100612453266

Elwood, J., & Klenowski, V. (2002). Creating communities of shared practice: The challenges of assessment use in learning and teaching. Assessment & Evaluation in Higher Education, 27(3), 243-256. https://doi.org/10.1080/02602930220138606

Entwistle, N. J., Hounsell, D., & Marton, F. (1984). The experience of learning. Scottish Academic Press.

Evans, C. (2013). Making sense of assessment feedback in higher education. Review of Educational Research, 83(1), 70-120. https://doi.org/10.3102/0034654312474350

Felder, R. M., & Brent, R. (2005). Understanding student differences. Journal of Engineering Education, 94(1), 57-72. https://doi.org/10.1002/j.2168-9830.2005.tb00829.x

Förster, M., Weiser, C., & Maur, A. (2018). How feedback provided by voluntary electronic quizzes affects learning outcomes of university students in large classes. Computers & Education, 121, 100-114. https://doi.org/10.1016/j.compedu.2018.02.012

World Economic Forum (WEF). (2020). The future of jobs report 2020. https://www.weforum.org/reports/the-future-of-jobs-report-2020

Gijbels, D., Coertjens, L., Vanthournout, G., Struyf, E., & Van Petegem, P. (2009). Changing students' approaches to learning: A two-year study within a university teacher training course. Educational Studies, 35(5), 503-513. https://doi.org/10.1080/03055690902879184

Gijbels, D., & Dochy, F. (2006). Students’ assessment preferences and approaches to learning: Can formative assessment make a difference? Educational Studies, 32(4), 399-409. https://doi.org/10.1080/03055690600850354

Gozalo, M., León-del-Barco, B., & Mendo-Lázaro, S. (2020). Good practices and learning strategies of undergraduate university students. International Journal of Environmental Research and Public Health, 17(6), 1849. https://doi.org/10.3390/ijerph17061849

Haggis, T. (2003). Constructing images of ourselves? A critical investigation into ‘approaches to learning’ research in higher education. British Educational Research Journal, 29(1), 89-104. https://doi.org/10.1080/0141192032000057401

Hailikari, T., Virtanen, V., Vesalainen, M., & Postareff, L. (2021). Student perspectives on how different elements of constructive alignment support active learning. Active Learning in Higher Education, 23(3), 217-231. https://doi.org/10.1177/1469787421989160

Hattingh, T. S., & Dison, L. (2021). How assessment shapes learning: A perspective from engineering students. SAIEE Africa Research Journal, 112(4), 161-170.

Hockings, C. (2011). Hearing voices, creating spaces: the craft of the ‘artisan teacher’ in a mass higher education system. Critical Studies in Education, 52(2), 191-205. https://doi.org/10.1080/17508487.2011.572831

Kember, D. (2016). Why do Chinese students out-perform those from the West? Do approaches to learning contribute to the explanation? Cogent Education, 3(1), 1248187. https://doi.org/10.1080/2331186x.2016.1248187

Kember, D., Leung, D. Y., & McNaught, C. (2008). A workshop activity to demonstrate that approaches to learning are influenced by the teaching and learning environment. Active Learning in Higher Education, 9(1), 43-56. https://doi.org/10.1177/1469787407086745

Kember, D., Wong, A., & Leung, D. Y. (1999). Reconsidering the dimensions of approaches to learning. British Journal of Educational Psychology, 69(3), 323-343. https://doi.org/10.1348/000709999157752

Kibble, J. D. (2007). Use of unsupervised online quizzes as formative assessment in a medical physiology course: Effects of Incentives on student participation and performance. Advances in Physiology Education, 31(3), 253-260. https://doi.org/10.1152/advan.00027.2007

Kibble, J. D. (2011). Voluntary participation in online formative quizzes is a sensitive predictor of student success. Advances in Physiology Education, 35(1), 95-96. https://doi.org/10.1152/advan.00053.2010

Koster, A. S., & Vermunt, J. D. (2020). Longitudinal changes of deep and surface learning in a constructivist pharmacy curriculum. Pharmacy, 8(4), 200. https://doi.org/10.3390/pharmacy8040200

Lahdenperä, J., Rämö, J., & Postareff, L. (2021). Contrasting undergraduate mathematics students’ approaches to learning and their interactions within two student-centred learning environments. International Journal of Mathematical Education in Science and Technology, 54(5), 687-705. https://doi.org/10.1080/0020739X.2021.1962998

Laird, T. F. N., Shoup, R., Kuh, G. D., & Schwarz, M. J. (2008). The effects of discipline on deep approaches to student learning and college outcomes. Research in Higher Education, 49(6), 469-494. https://doi.org/10.1007/s11162-008-9088-5

Lee, S., & Lee, D. K. (2018). What is the proper way to apply the multiple comparison test? Korean Journal of Anesthesiology, 71(5), 353-360. https://doi.org/10.4097/kja.d.18.00242

Leiva-Brondo, M., Cebolla-Cornejo, J., Peiró, R., Andrés-Colás, N., Esteras, C., Ferriol, M., Merle, H., José Díez, M., & Pérez-de-Castro, A. (2020). Study approaches of life science students using the revised two-factor study process questionnaire (R-SPQ-2F). Education Sciences, 10(7), 173. https://doi.org/10.3390/educsci10070173

Lindblom-Ylänne, S., Parpala, A., & Postareff, L. (2019). What constitutes the surface approach to learning in the light of new empirical evidence? Studies in Higher Education, 44(12), 2183-2195. https://doi.org/10.1080/03075079.2018.1482267

Lizzio, A., Wilson, K., & Simons, R. (2002). University students' perceptions of the learning environment and academic outcomes: implications for theory and practice. Studies in Higher Education, 27(1), 27-52. https://doi.org/10.1080/03075070120099359

Martinelli, V., & Raykov, M. (2017). Evaluation of the revised two-factor study process questionnaire (R-SPQ-2F) for student teacher approaches to learning. Journal of Educational and Social Research, 7(2), 9-13. https://doi.org/10.5901/jesr.2017.v7n2p9

Marton, F., & Säljö, R. (1976a). On qualitative differences in learning: II—Outcome as a function of the learner's conception of the task. British Journal of Educational Psychology, 46(2), 115-127. https://doi.org/10.1111/j.2044-8279.1976.tb02304.x

Marton, F., & Säljö, R. (1976b). On qualitative differences in learning: I—Outcome and process. British Journal of Educational Psychology, 46(1), 4-11. https://doi.org/10.1111/j.2044-8279.1976.tb02980.x

Mason, G. S., Shuman, T. R., & Cook, K. E. (2013). Comparing the effectiveness of an inverted classroom to a traditional classroom in an upper-division engineering course. IEEE Transactions on Education, 56(4), 430-435. https://doi.org/10.1109/TE.2013.2249066

Matic, L. J., Matic, I., & Katalenic, A. (2013). Approaches to learning mathematics in engineering study program. In M. Pavlekovic, Z. Kolar-Begovic, & R. Kolar-Super (Eds.), Mathematics teaching for the future (pp. 186-195). Element.

Mayorga, L. K. (2019). HEIs and workforce development: Helping undergraduates acquire career-readiness attributes. Industry and Higher Education, 33(6), 370-380. https://doi.org/10.1177/0950422219875083

Newton, G., & Martin, E. (2013). Blooming, SOLO taxonomy, and phenomenography as assessment strategies in undergraduate science education. Journal of College Science Teaching, 43(2), 78-90. https://doi.org/10.2505/4/jcst13_043_02_78

Nicol, D., & McCallum, S. (2022). Making internal feedback explicit: exploiting the multiple comparisons that occur during peer review. Assessment & Evaluation in Higher Education, 47(3), 424-443. https://doi.org/10.1080/02602938.2021.1924620

Nieminen, J. H., Asikainen, H., & Rämö, J. (2021). Promoting deep approach to learning and self-efficacy by changing the purpose of self-assessment: A comparison of summative and formative models. Studies in Higher Education, 46(7), 1296-1311. https://doi.org/10.1080/03075079.2019.1688282

O’Leary, S. (2017). Graduates’ experiences of, and attitudes towards, the inclusion of employability-related support in undergraduate degree programmes; trends and variations by subject discipline and gender. Journal of Education and Work, 30(1), 84-105. https://doi.org/10.1080/13639080.2015.1122181

O’Leary, S. (2021). Gender and management implications from clearer signposting of employability attributes developed across graduate disciplines. Studies in Higher Education, 46(3), 437-456. https://doi.org/10.1080/03075079.2019.1640669

Parpala, A., Katajavuori, N., Haarala-Muhonen, A., & Asikainen, H. (2021). How did students with different learning profiles experience ‘normal’ and online teaching situation during COVID-19 Spring? Social Sciences, 10(9), 337. https://doi.org/10.3390/socsci10090337

Parpala, A., Lindblom-Ylänne, S., Komulainen, E., Litmanen, T., & Hirsto, L. (2010). Students' approaches to learning and their experiences of the teaching–learning environment in different disciplines. British Journal of Educational Psychology, 80(2), 269-282. https://doi.org/10.1348/000709909x476946

Parpala, A., Mattsson, M., Herrmann, K. J., Bager-Elsborg, A., & Hailikari, T. (2022). Detecting the variability in student learning in different disciplines—a person-oriented approach. Scandinavian Journal of Educational Research, 66(6), 1020-1037. https://doi.org/10.1080/00313831.2021.1958256

Pithers, R. T., & Soden, R. (2000). Critical thinking in education: A review. Educational Research, 42(3), 237-249. https://doi.org/10.1080/001318800440579

Radmehr, F., & Drake, M. (2018). An assessment-based model for exploring the solving of mathematical problems: Utilizing revised bloom’s taxonomy and facets of metacognition. Studies in Educational Evaluation, 59, 41-51. https://doi.org/10.1016/j.stueduc.2018.02.004

Reid, W., Duvall, E., & Evans, P. (2005). Can we influence medical students’ approaches to learning? Medical Teacher, 27(5), 401-407. https://doi.org/10.1080/01421590500136410

Rust, C. (2002). The impact of assessment on student learning: how can the research literature practically help to inform the development of departmental assessment strategies and learner-centred assessment practices? Active Learning in Higher Education, 3(2), 145-158. https://doi.org/10.1177/1469787402003002004

Sahonero-Alvarez, G., & Calderon, H. (2018). Implementation issues of student-centered learning based engineering education in developing countries universities. World Engineering Education Forum-Global Engineering Deans Council (WEEF-GEDC), Albuquerque, NM, USA, 2018 [Paper presentation]. https://doi.org/10.1109/WEEF-GEDC.2018.8629701

Santangelo, J., Cadieux, M., & Zapata, S. (2021). Developing student metacognitive skills using active learning with embedded metacognition instruction. Journal of STEM Education: Innovations and Research, 22(2).

Segers, M., Nijhuis, J., & Gijselaers, W. (2006). Redesigning a learning and assessment environment: The influence on students' perceptions of assessment demands and their learning strategies. Studies in Educational Evaluation, 32(3), 223-242. https://doi.org/10.1016/j.stueduc.2006.08.004

Struyven, K., Dochy, F., Janssens, S., & Gielen, S. (2006). On the dynamics of students' approaches to learning: The effects of the teaching/learning environment. Learning and Instruction, 16(4), 279-294. https://doi.org/10.1016/j.learninstruc.2006.07.001

Swart, A. J. (2009). Evaluation of final examination papers in engineering: A case study using Bloom's Taxonomy. IEEE Transactions on Education, 53(2), 257-264. https://doi.org/10.1109/te.2009.2014221

Tavakol, M., & Dennick, R. (2011). Making sense of Cronbach's alpha. International Journal of Medical Education, 2, 53-55. https://doi.org/10.5116/ijme.4dfb.8dfd

Tomas, C., & Jessop, T. (2019). Struggling and juggling: a comparison of student assessment loads across research and teaching-intensive universities. Assessment & Evaluation in Higher Education, 44(1), 1-10. https://doi.org/10.1080/02602938.2018.1463355 

Topping, K. (1998). Peer assessment between students in colleges and universities. Review of Educational Research, 68(3), 249-276. https://doi.org/10.3102/00346543068003249

Topping, K. J. (2017). Peer assessment: Learning by judging and discussing the work of other learners. Interdisciplinary Education and Psychology, 1(1), 1-17. https://doi.org/10.31532/InterdiscipEducPsychol.1.1.007

Volet, S. E., & Chalmers, D. (1992). Investigation of qualitative differences in university students' learning goals, based on an unfolding model of stage development. British Journal of Educational Psychology, 62(1), 17-34. https://doi.org/10.1111/j.2044-8279.1992.tb00996.x

Yelamarthi, K., Drake, E., & Prewett, M. (2016). An instructional design framework to improve student learning in a first-year engineering class. Journal of Information Technology Education: Innovations in Practice, 15, 195. https://doi.org/10.28945/3617

Zakariya, Y. F., Bjørkestøl, K., Nilsen, H. K., Goodchild, S., & Lorås, M. (2020). University students’ learning approaches: An adaptation of the revised two-factor study process questionnaire to Norwegian. Studies in Educational Evaluation, 64, 100816. https://doi.org/10.1016/j.stueduc.2019.100816

Zakariya, Y. F., Nilsen, H. K., Bjørkestøl, K., & Goodchild, S. (2020). Impact of attitude on approaches to learning mathematics: a structural equation modeling approach. Third conference for the International Network for Didactic Research in University Mathematics (INDRUM), Cyberspace – Virtually from Bizerte, 2020 September 12-19 [Paper presentation]. https://indrum2020.sciencesconf.org/data/pages/INDRUM2020_Proceedings.pdf

Appendices

Appendix A. List of acronyms

ANOVA: Analysis of Variance

DA: Deep Approach

DM: Deep Motive

DS: Deep Strategy

SA: Surface Approach

SM: Surface Motive

SS: Surface Strategy

SPQ: Study Process Questionnaire

StD: Standard Deviation

R-SPQ-2F: Revised two-factor Study Process Questionnaire

VLE: Virtual Learning Environment

 

Appendix B. Cronbach’s alphas of the DA and SA subscales of the R-SPQ-2F questionnaire

Cohort   Module and date of survey   DA alpha   SA alpha
A        CE1 Sept 2017               0.78       0.80
A        CE1 March 2018              0.76       0.65
A        CE2 Sept 2018               0.80       0.77
B        CE2 Sept 2017               0.68       0.76
B        CE2 Dec 2017                0.73       0.75
C        ME1 Sept 2017               0.75       0.78
C        ME1 March 2018              0.82       0.79
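Note on the coefficients above: they are Cronbach's alphas for the ten-item DA and SA subscales (see Tavakol & Dennick, 2011, in the reference list). The following Python sketch is illustrative only; it shows one standard way such a coefficient can be computed from raw Likert responses. The function name, array shape and simulated data are hypothetical and are not taken from the study.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) array of Likert responses."""
    k = items.shape[1]                          # number of items in the subscale
    item_vars = items.var(axis=0, ddof=1)       # sample variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed subscale score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Simulated (not real) responses: 30 students answering the 10 DA items on a 1-5 scale
rng = np.random.default_rng(42)
da_items = rng.integers(1, 6, size=(30, 10))
print(round(cronbach_alpha(da_items), 2))       # random data, so the value itself is not meaningful
```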

 

Appendix C. DA and SA mean scores and standard deviations

Table C1. DA and SA mean score and standard deviation (StD) for each cohort

Cohort   Module and date of survey   DA mean score [StD]   SA mean score [StD]
A        CE1 Sept 2017 *             30.33 [6.32]          24.88 [6.51]
A        CE1 March 2018 *            30.51 [5.17]          24.85 [4.56]
A        CE2 Sept 2018 *             29.62 [5.75]          26.02 [6.05]
B        CE2 Sept 2017 *             29.32 [5.26]          27.49 [6.30]
B        CE2 Dec 2017                29.12 [5.41]          29.62 [6.12]
C        ME1 Sept 2017 *             28.88 [5.91]          24.67 [6.45]
C        ME1 March 2018              28.08 [6.71]          27.11 [7.11]

* indicates that the difference between DA and SA scores is statistically significant (p<0.05).
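The study's own significance tests are not reproduced here (the acronym list refers to ANOVA, and Lee & Lee, 2018, on multiple comparison tests is cited). Purely to illustrate the kind of within-administration DA-versus-SA comparison the asterisks refer to, a paired test on simulated subscale totals might look as follows; the sample size, means and the use of scipy's ttest_rel are assumptions, not the authors' procedure.

```python
import numpy as np
from scipy import stats

# Simulated per-student DA and SA totals; each R-SPQ-2F subscale sums ten items
# scored 1-5 (Biggs et al., 2001), so totals range from 10 to 50.
rng = np.random.default_rng(7)
da_scores = rng.normal(loc=30.3, scale=6.3, size=100).clip(10, 50)
sa_scores = rng.normal(loc=24.9, scale=6.5, size=100).clip(10, 50)

# Paired comparison, since both scores come from the same respondent's questionnaire
t_stat, p_value = stats.ttest_rel(da_scores, sa_scores)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")   # p < 0.05 would correspond to a * flag
```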

 

Table C2. DA and SA mean score and standard deviation (StD) (n= number of questionnaires) for first-year and second-year classes combined

University Year   Module, date of survey, cohort            Code     n     DA mean score [StD]   SA mean score [StD]
1                 CE1 Sept 2017 (A); ME1 Sept 2017 (C)      1st B*   152   29.49 [6.11]          24.76 [6.45]
1                 CE1 March 2018 (A); ME1 March 2018 (C)    1st E*   160   29.22 [6.14]          26.05 [6.14]
2                 CE2 Sept 2017 (B); CE2 Sept 2018 (A)      2nd B*   202   29.46 [5.47]          26.84 [6.21]
2                 CE2 Dec 2017 (B)                          2nd E     94   29.12 [5.41]          29.62 [6.12]

* indicates that the difference between DA and SA scores is statistically significant (p<0.05).