Homework, in-class assignments, and midterm exams: Investigating the predictive utility of formative and summative assessments for academic success

Alice S. N. Kim1, Cassandra R. Stevenson1 and Lillian Park2

1 Teaching and Learning Research In Action, Ontario, Canada
2 Psychology Department, State University of New York at Old Westbury

Abstract

Formative assessments can be used more effectively to support students’ learning when coupled with insights about which types of formative assessments are predictive of students’ subsequent learning achievement. In this study, we investigated the predictive utility of students’ grades on homework and in-class assignments for the midterm and final cumulative exams, which were taken as measures of student learning. The data consisted of the grades of 241 undergraduate students for homework, in-class assignments, and midterm and final cumulative exams in a variety of psychology courses. Using regression analyses, we found that students’ midterm exam grades were predicted by their grades for homework and in-class assignments completed before the midterm exam. Final cumulative exam grades were predicted by students’ homework and midterm exam grades, but not their in-class assignment grades. These findings suggest that the effectiveness of formative assessments as tools to predict student achievement varies. Additionally, although homework was not as strong a predictor as the midterm exam, it was still an adequate predictor of final cumulative exam performance. Since homework feedback is provided earlier and often more frequently than exam feedback, in the context of the courses under investigation it can be a useful tool for informing educators’ and students’ learning plans early in a course. Future research should further investigate the relation between different types of formative and summative assessments across different instructors, disciplines, and institutions.

Keywords

assessments, low-stakes, high-stakes, homework, in-class assignments

Introduction

Assessing students’ knowledge as they learn is a vehicle for engagement and active learning (Bernstein, 2018; Freeman et al., 2007; Freeman et al., 2011), which allows students to learn through practice and participation rather than through passive absorption (Bernstein, 2018; Schneider & Preckel, 2017). Whereas formative assessments provide information about students’ learning with the intention of enhancing it (Weston & McAlpine, 2004), summative assessments evaluate students’ understanding of the material in question (Harlen, 2012). Although past research has shown that both types of assessment are predictive of students’ performance on summative assessments (Kim & Shakory, 2017; Krasne et al., 2006), it is not known whether one type of assessment is a better predictor than the other, nor to what extent. In this study, we investigated (1) whether students’ grades on formative assessments, specifically homework and in-class assignments, were predictive of their grades on the midterm exam, and (2) whether students’ grades on these formative assessments and a summative assessment (the midterm exam) were predictive of their grade on the final cumulative exam.

Formative assessments are used to gather information about students’ learning, which may be accomplished with or without the use of grading (Harlen & James, 1997; Weston & McAlpine, 2004). In Weston and McAlpine’s (2004, p. 98) description of formative assessment, they assert that “It is usually done in an ongoing way during the learning process and commonly (but not always) associated with evaluation techniques that do not involve grading.” Importantly, grading is not a feature that defines an assessment as being formative or summative in nature; rather, it is the purpose of the assessment that characterizes whether it is formative or summative, as described in the following passage by Harlen (2012, p. 97):

What is described as ‘informal summative’ may involve similar practice to ‘formal formative’. However, the essential difference is the use made of the evidence. If the cycle is closed ... and the evidence is used in adapting teaching, then it is formal formative. If there is no feeding back into teaching … then it falls into the category of ‘informal summative’, even though the evidence may be the same classroom test.

Formative feedback benefits both the student and instructor in terms of how they should proceed with the course to enhance student learning (Weston & McAlpine, 2004): whereas the course instructor may choose to adapt the lesson plan if the majority of students in the class demonstrate difficulty with a particular key concept, students may spend extra time reviewing the material or seek out additional help to understand any material that they found challenging. Interestingly, in a study conducted by Carrillo-de-la-Peña et al. (2009), whether students participated in formative assessment was found to be a better predictor of their final outcome in a course than their actual performance on the assessment; this finding highlights the importance of the formative feedback that students received. As mentioned above, in the context of the present study, homework and in-class assignments were both formative assessments; they differed, however, in that assignments were completed in class, whereas homework referred to any work completed by the student for the course outside of class.

Students’ achievement in a course has been shown to be positively related to the amount of time they spend on homework (Cooper, 1989; Paschal et al., 1984). Additionally, the positive effects of homework have been shown to be enhanced when the homework is graded and comments are provided to students, compared to when it is left ungraded. The use of online homework has also been linked to higher grades on final exams, even after accounting for students’ level of preparation coming into the class (Arasasingham et al., 2011). Though in-class assignments can vary widely in nature, they generally require students’ active participation (e.g., reflecting on an assigned topic), which has been shown to benefit students’ performance in a course compared to passive participation (e.g., listening to a class discussion without contributing oneself). In contrast to formative assessments, summative assessments typically take the form of midterm and final exams, as well as major projects and essays that account for a large portion of students’ final grade in a course.

In this study, we used multiple regression analyses to investigate whether students’ grades on formative assessments, specifically in-class assignments and homework, were predictive of their grades on the first summative assessment for the course: the midterm exam. We conducted a second regression analysis to investigate whether, as well as to what extent, students’ grades for in-class assignments, homework, and the midterm exam were predictive of their grade on the final cumulative exam. If students’ grades on any or all of these assessments are predictive of their midterm and/or their final cumulative exam grades, course instructors may wish to strategically schedule the use of these types of assessments throughout the course so that they can identify students who need additional support earlier on in the course, and so that students can make informed decisions about how they proceed in the course. Based on past studies, we hypothesized that homework and in-class assignment grades would be predictive of students’ midterm grades, and that students’ homework, in-class assignment, and midterm grades would be predictive of their final cumulative exam grades.

Method

Participants

All students were enrolled in a psychology course taught at an American Primarily Undergraduate Institution (PUI). Students’ grades from a given course are considered archival data by our institution. The archival dataset that we used in this study consisted of data from 252 students who completed the respective courses. These data were collected over the span of eight years, across two different courses (13 course sections) that were all taught by the same course instructor. On average, there were 19 students enrolled in each class (SD = 2). For the present study, after all outliers were removed, data from 241 participants were used for the regression analyses. Given that these data are de-identified and stripped of any personal information that can be traced back to an individual student, the Institutional Review Board at our institution did not require students to provide consent to have their data included in this study. Consequently, demographic information specifically corresponding to the individuals whose data were analyzed cannot be provided. However, the college conducted a five-year self-study during the 2014-2015 academic year, in which demographic information from students enrolled in psychology courses was collected. The demographic information from the self-study is a suitable approximation of the demographic information of the students in the archival data, since the self-study took place during the eight-year span corresponding to when the courses investigated in this study were offered, and all the courses from our archival data were required in the psychology major. From the self-study, 79% of the students were female and 21% were male. The mean age of the students was 23.6 years (SD = 7.60). In addition, 68% of the students entered this particular institution as transfer students, while 32% entered during their first year as college students.

Materials

Homework and in-class assignments

All in-class assignments and homework were created by the instructor. Across courses there were, on average, 11 homework assignments (SD = 3) and eight in-class assignments (SD = 4). The mean weightings of homework and in-class assignments toward students’ final grade in a course were 49.19% (SD = 4.66%) and 14.65% (SD = 5.61%), respectively. On average, six homework assignments (SD = 2) and three in-class assignments (SD = 1) were completed before the midterm, and students’ grades on these were used to investigate whether performance on these assessments was predictive of grades on the midterm exam and the final cumulative exam. We focused on homework and in-class assignments completed before the midterm to assess whether these formative assessments from earlier on in the course could be used to predict students’ overall performance in the course.

The homework and in-class assignments were designed with the purpose of reinforcing lecture material. Both generally consisted of 10-20 short-answer or short essay questions and on average took about 45 minutes to complete. Items for the in-class assignments and homework asked students to recall information, demonstrate comprehension, and apply knowledge of the course material in new contexts, i.e., the first three levels of Bloom’s (1969) taxonomy. For example, students were asked to write definitions of key terms, identify variables and formulate a hypothesis from a study described in a mass media article, and make inferences about causal relationships as a function of study design. Short essay questions required students to write a paragraph explaining a concept (e.g., What is a true experiment?) or applying lecture material (e.g., writing an APA-style procedure section for a mock study).

Students were allowed to work with classmates on both the homework and in-class assignments if they wished. Based on the instructor’s personal observations, approximately 25%-40% of students in a class would work together on the in-class assignments in pairs or in small groups of three. These students generally answered questions on their own first, then worked with their peers on questions they were unsure about or to check their own answers. It is not known whether students worked together on homework assignments.

Written feedback provided students with the correct answers, an explanation of why each answer was correct, and, if necessary, clarification of any confusion or misunderstanding on the part of the student. If several students missed the same question, the instructor covered the material again in the following lecture, with an explanation of the correct answer and a discussion of the incorrect responses. Whenever the instructor found that a third or more of the students in the course missed a question or concept, a portion of the following lecture was devoted to re-teaching the topic. Furthermore, future homework assignments that focused on reviewing prior material were specifically created by the instructor to test weaknesses in understanding that had been revealed in past homework assignments. Through this method, students had an opportunity to demonstrate mastery of previously weakly learned or poorly understood material before taking exams.

Midterm exam

The midterm consisted of 10-20 multiple-choice questions, 15-20 short-answer questions, and one to two short essay questions. Students completed the exam in 60-90 minutes. All questions were created by the instructor. Much like the questions in the homework and in-class assignments, multiple-choice questions tested key concepts and required students to recall course material, demonstrate comprehension, and apply knowledge of the course material in new contexts. The exam questions were not identical to questions on the homework and in-class assignments, but they were similar in terms of question format, content, and difficulty. Short-answer questions required students to demonstrate comprehension of the course material, apply knowledge, and synthesize ideas by formulating alternative proposals (i.e., levels II-V of Bloom’s (1969) taxonomy). Short essay questions required students to synthesize ideas or to evaluate and assess information (i.e., levels V and VI of Bloom’s (1969) taxonomy). The mean weighting of the midterm exam on students’ final grade was 11.61% (SD = 1.80%). In contrast to the multiple homework and in-class assignments, there was only one midterm exam in each course.

Final exam 

The final exam was cumulative, covering all content from the course. Final exams were composed of multiple-choice questions, short-answer questions, and an essay. In some cases, additional multiple-choice questions replaced short-answer questions on the exam. Consistent with levels I to III of Bloom’s (1969) taxonomy (i.e., knowledge, comprehension, application), the exam format allowed students to both display and apply what they had learned throughout the course. The instructor used the textbooks and lecture content to create the exam questions. Students were given a time limit of 90 minutes to complete the exam, and a percentage grade was calculated for each student using the total points possible as the denominator. The instructor graded exams with an answer key for the multiple-choice and short-answer questions and a rubric for the essay. The mean weighting of the final exam on students’ final grade was 14.36% (SD = 4.46%).

Procedure

For the courses offered from 2010 to 2016 (which account for all but two of the thirteen course sections included in the present study), classes met twice per week for 100 minutes, for a total of three hours and 20 minutes of class time each week. For the remaining course sections, offered during the 2016-2018 academic years, classes met twice per week for 90 minutes, for a total of three hours of class time each week. Each class typically covered the contents of half of a textbook chapter, supplemented by additional examples and world events that corresponded to the content being covered.

Regression analysis

A regression analysis was conducted to assess whether students’ grades on homework and in-class assignments were predictive of their grades on the midterm exam. Only the grades for homework and in-class assignments completed before the midterm were used for this analysis. We then conducted a second analysis to investigate whether students’ grades for homework and in-class assignments completed before the midterm, as well as their grades on the midterm exam, were predictive of their grades on the final exam. The number of in-class assignments and homework assignments varied across courses (please see the Results section for descriptive statistics); however, each course had only one midterm exam and one final exam.
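The analyses were presumably run in a standard statistical package (the “enter method” and Durbin-Watson reporting below are characteristic of SPSS). As a minimal illustrative sketch only, the two models could be reproduced in Python with statsmodels, assuming a hypothetical data file with one row per student and columns named homework, assignment, midterm, and final (grades expressed as proportions); these names and the file are assumptions, not details from the study:

```python
# Minimal sketch of the two regression models (simultaneous/"enter" entry of
# predictors). Column names and grades.csv are hypothetical; the original
# analysis was not necessarily run this way.
import pandas as pd
import statsmodels.formula.api as smf

grades = pd.read_csv("grades.csv")  # one row per student, grades as proportions

# Model 1: pre-midterm homework and in-class assignment grades predicting
# the midterm exam grade
model1 = smf.ols("midterm ~ homework + assignment", data=grades).fit()
print(model1.summary())

# Model 2: the same formative grades plus the midterm grade predicting
# the final cumulative exam grade
model2 = smf.ols("final ~ homework + assignment + midterm", data=grades).fit()
print(model2.summary())
```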

Results

Across all the courses, there were, on average, 19 students enrolled in a given course (SD = 2). The average grades for homework and in-class assignments completed before the midterm were 65% (SD = 23%) and 71% (SD = 22%), respectively. The average grades for the midterm and final exams were 77% (SD = 14%) and 75% (SD = 14%), respectively.

Regression models

The first regression analysis assessed whether students’ grades for homework and in-class assignments completed before the midterm exam were predictive of their grade on the midterm exam. An analysis of standardized residuals revealed that the data did not contain any outliers (Std. Residual Min = -2.960, Std. Residual Max = 2.033). When collinearity was tested, the results demonstrated that multicollinearity was not a concern (Average homework grade, Tolerance = 0.894, VIF = 1.118; Average assignment grade, Tolerance = 0.894, VIF = 1.118). The data also met the assumption of independent errors (Durbin-Watson value = 1.894). The histogram of standardized residuals showed that the data contained approximately normally distributed errors, as did the normal P-P plot of standardized residuals, which showed points that were close to the line. The scatter plot of standardized predicted values indicated that the data met the assumptions of homogeneity of variance and linearity.
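For readers who wish to reproduce such diagnostics, the following sketch continues from the hypothetical model1 fit above; again, the names and decision thresholds are illustrative assumptions rather than the authors’ procedure:

```python
# Sketch of the diagnostic checks reported above, continuing from model1.
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor
from statsmodels.stats.stattools import durbin_watson

# Standardized residuals: values beyond roughly +/-3 would flag outliers
std_resid = model1.get_influence().resid_studentized_internal
print("Std. residual min/max:", std_resid.min(), std_resid.max())

# Tolerance and VIF for each predictor (VIF well below 10, and tolerance
# well above .1, indicate that multicollinearity is not a concern)
X = sm.add_constant(grades[["homework", "assignment"]])
for i, name in enumerate(X.columns):
    if name != "const":
        vif = variance_inflation_factor(X.values, i)
        print(name, "VIF =", round(vif, 3), "Tolerance =", round(1 / vif, 3))

# Durbin-Watson statistic for independence of errors (values near 2 are acceptable)
print("Durbin-Watson =", durbin_watson(model1.resid))
```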

Using the enter method, a significant regression equation was found for our first regression analysis (F(2, 240) = 24.127, p < .001), with an R2 of .169. Participants’ predicted score on the midterm exam was equal to .559 + .180 (homework) + .125 (in-class assignment). Participants’ predicted midterm exam scores increased by .180 percentage points for each percentage point of the homework average (β = .301, t(240) = 4.812, p < .001), and by .125 percentage points for each percentage point of the in-class assignment average (β = .198, t(240) = 3.173, p = .002).
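As an illustrative check only (and assuming, as the intercept implies, that grades were analyzed as proportions; this scale is our inference rather than a reported detail), a hypothetical student averaging 80% on homework and 75% on in-class assignments would have a predicted midterm score of

\[ \hat{y}_{\text{midterm}} = .559 + .180(.80) + .125(.75) = .559 + .144 + .094 \approx .797, \]

that is, roughly 79.7%.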

The second regression analysis investigated whether students’ grades on homework and in-class assignments completed before the midterm, as well as their midterm grades, were predictive of their final cumulative exam scores. An analysis of standardized residuals revealed that the data did not contain any outliers (Std. Residual Min = -2.802, Std. Residual Max = 2.998). When collinearity was tested, the results demonstrated that multicollinearity was not a concern (Average homework grade, Tolerance = 0.815, VIF = 1.227; Average assignment grade, Tolerance = 0.858, VIF = 1.166; Midterm grade, Tolerance = 0.831, VIF = 1.203). The data also met the assumption of independent errors (Durbin-Watson value = 1.767). The normal P-P plot and histogram of standardized residuals showed that the data contained approximately normally distributed errors. The scatter plot of standardized predicted values indicated that the data met the assumptions of homogeneity of variance and linearity.

A significant regression equation was found for our second regression analysis (enter method; F(3, 240) = 46.510, p < .001), with an R2 of .371. However, though students’ grades for homework and the midterm exam were predictive of their grades on the final exam, students’ grades for in-class assignments were not. Participants’ predicted score on the final cumulative exam was equal to .274 + .089 (homework) + .050 (in-class assignment) + .502 (midterm grade). Participants’ predicted cumulative exam scores increased by .089 percentage points for each percentage point of the homework average (β = .150, t(240) = 2.632, p = .009) and by .502 percentage points for each percentage point of the midterm grade (β = .504, t(240) = 8.923, p < .001).
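As a similar illustrative check under the same proportion-scale assumption, plugging the sample means reported above (homework = 65%, in-class assignments = 71%, midterm = 77%) into the fitted equation approximately recovers the observed mean final exam grade:

\[ \hat{y}_{\text{final}} = .274 + .089(.65) + .050(.71) + .502(.77) \approx .274 + .058 + .036 + .387 \approx .754, \]

that is, about 75%, in line with the observed mean of 75% (SD = 14%).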

Discussion

In addition to demonstrating that students’ grades on the midterm exam were predictive of their final cumulative exam grades, our findings also show that students’ grades on formative assessments, specifically homework, were predictive of their grades on the midterm and final cumulative exams. One of the major implications of these findings is that, in the specific context of the courses under investigation, instructors can use students’ performance on homework, which is available far earlier in the semester than midterm exam grades, to engender improved academic performance amongst their students. In addition to providing students with formative feedback, homework can be used to identify students who could benefit most from additional support throughout the course, and thereby effectively enhance those students’ learning trajectories. Moreover, for instructors who do not have enough class time or resources to integrate in-class assignments, as may be the case particularly for courses with large enrollment, our results suggest that homework assignments could be sufficient, if not more effective, for providing students with feedback about their progress.

The finding that grades on homework and in-class assignments were predictive of grades on the midterm exam indicates that these formative assessments were good sources of feedback for students, and aligns with past research demonstrating that performance on formative assessments is predictive of performance on summative assessments (Carrillo-de-la-Peña et al., 2009; Krasne et al., 2006; Siweya & Letsoalo, 2014). In the context of this study, the benefit of in-class assignments to students was that they could receive immediate assistance from the instructor or peers when they encountered difficulty. Our findings also showed that students’ midterm grades were predictive of their final cumulative exam grades, which is consistent with the results of past research (e.g., Azzi et al., 2015). It is worth highlighting that students’ grades on the midterm were a much stronger predictor of their grades on the final exam than were the formative assessments (homework and in-class assignments). The R2 value for the regression models increased from .17 to .37 when the midterm exam was added to the model as a predictor, which is a substantial increase in the amount of variance accounted for by the model.

There are several potential factors that may have contributed to students’ performance on the midterm exam being a stronger predictor of their performance on the final exam compared to that of the formative assessments. For example, the midterm and final exams were both high-stakes assessments and were more similar in format to one another than to the formative assessments; students had to complete both the midterm and final exams within strict time constraints without aids or the help of peers, whereas homework and in-class assignments could be completed with peers, and students were given more time per question to complete these assignments. Additionally, whereas the midterm and final exams were composed of multiple-choice, short-answer, and essay questions, the formative assessments did not include multiple-choice questions, except for the assignments that were meant to serve as a review to help prepare students for the midterm and final exams. Lastly, students may have put more effort into their performance on the midterm and final exams compared to the formative assessments, since both exams accounted for a large portion of students’ final mark. Along these lines, on average each homework assignment accounted for approximately 4.5% of students’ final grade in the course (49.19% spread across 11 assignments), whereas each in-class assignment accounted for approximately 1.8% (14.65% across eight assignments); this may have contributed to why students’ performance on homework, but not on the in-class assignments, was predictive of students’ final exam grades. Additionally, students’ effort on individual homework and in-class assignments may have been inconsistent throughout the course due to both internal and external factors, including busy work and/or family schedules.

Time management and efficiency in learning should undoubtedly be priorities for working students. Thus, it is important for students to have the information needed to make decisions about how they should manage their time in a given course between work, other courses, and extracurriculars. If students know which assessments predict their academic achievement in a course, as well as which assessments are the strongest predictors, they can gauge their expected level of achievement as they receive feedback on course components. This should help with decisions about time management, including the decision to drop a course if the outcome does not look satisfactory. Our findings suggest that students should be advised to consider their grades on formative and summative assessments when deciding whether to withdraw from a class because of poor performance. Students may be reluctant to withdraw from a class they are doing poorly in because of sunk cost or optimistic bias, where they naively believe that they can still somehow turn things around (Price et al., 2002). However, persistence in a failing course has significant costs to students, including time and effort that could have been reallocated to doing better in other courses, a failing grade on their transcript that lowers their GPA, and the continued stress of struggling in a course. Interestingly, Myers and Myers (2007) found that students were less likely to drop a course, and more likely to evaluate it favorably, when they wrote bi-weekly quizzes compared to students who wrote a midterm exam instead. Moreover, they found that the students who wrote bi-weekly quizzes scored 15 percentage points higher on the final cumulative exam.

Students’ grades on assessments have been shown to be positively related to the use of active learning practices (Crouch & Mazur, 2001; Freeman et al., 2011; Voelkl, 1995). However, courses in higher education often adopt a lecture-style format (e.g., Bazar, 2015; Goffe & Kauper, 2014; Lammers & Murphy, 2002; Mulryan-Kyne, 2010), which traditionally takes a teacher-centered approach in which students play a passive role (listening and writing notes) while the instructor presents information to them (Bernstein, 2018; Friedland, 1996; Goffe & Kauper, 2014). Although students’ participation has been shown to be positively related to their grades on summative assessments (Handelsman et al., 2005; Kim et al., 2019; Petress, 2006), a large portion of students are not likely to actively participate in class, for various reasons. Thus, course instructors should consider offering alternative opportunities for their students to engage with the course material. In addition to their varying requirements, having students complete both homework and in-class assignments provides them with multiple opportunities to engage with the course material in a variety of ways and contexts, aligning with the principles of universal design for learning (UDL), which aims to make learning accessible to the widest range of individuals (Pisha & Coyne, 2001). UDL has been shown to benefit students’ learning in various ways, including by increasing interest and engagement with the course content (Rao et al., 2014; Smith, 2012) and mitigating learning barriers for students (Al-Azawei et al., 2016; Black et al., 2015).

Since our findings are based on data that were collected from one institution, and from courses taught by the same instructor, our results should be generalized with caution as, among other factors, the alignment between formative and summative assessments may vary across instructors, and student populations may differ across institutions. Future research should investigate further the relation between different types of formative and summative assessments across different instructors, disciplines, and institutions.

Conclusion

Our findings show that students’ grades on homework are predictive of their grades on midterm and final cumulative exams in the context of the courses under investigation. Although students’ grades on the midterm were found to better predict their performance on the final cumulative exam than homework grades, the latter are typically provided much earlier and more frequently to students and could be helpful when making decisions relevant to students’ academic success. For example, students and instructors often wait until midterm exams are graded, halfway through the semester, to assess student progress. However, since students’ homework grades were also predictive of their performance on exams in the present study, students and instructors can use this information far earlier in the semester to either engender change in academic performance or to reassess the viability of staying in the course.

References

Al-Azawei, A., Serenelli, F., & Lundqvist, K. (2016). Universal Design for Learning (UDL): A content analysis of peer reviewed journals from 2012 to 2015. Journal of the Scholarship of Teaching and Learning, 16(3), 39-56. https://doi.org/10.14434/josotl.v16i3.19295

Arasasingham, R. D., Martorell, I., & McIntire, T. M. (2011). Online homework and student achievement in a large enrollment introductory science course. Journal of College Science Teaching, 40(6), 70-79.

Azzi, A. J., Ramnanan, C. J., Smith, J., Dionne, É., & Jalali, A. (2015). To quiz or not to quiz: Formative tests help detect students at risk of failing the clinical anatomy course. Anatomical Sciences Education, 8(5), 413-420.  https://doi.org/10.1002/ase.1488

Bazar, J. L. (2015). Origins of teaching psychology in America. In D. S. Dunn (Ed.), The Oxford handbook of undergraduate psychology (pp. 25–32). Oxford University Press. https://doi.org/10.1093/oxfordhb/9780199933815.013.002

Bernstein, D. A. (2018). Does active learning work? A good question, but not the right one. Scholarship of Teaching and Learning in Psychology, 4(4), 290-307. https://doi.org/10.1037/stl0000124

Black, R. D., Weinberg, L. A., & Brodwin, M. G. (2015). Universal design for learning and instruction: Perspectives of students with disabilities in higher education. Exceptionality Education International, 25(2), 1-16. Retrieved from https://ir.lib.uwo.ca/eei/vol25/iss2/2

Bloom, B. S. (1969). Taxonomy of educational objectives: The classification of educational goals: Handbook I, Cognitive domain. McKay.

Carrillo-de-la-Peña, M. T., Bailles, E., Caseras, X., Martínez, À., Ortet, G., & Pérez, J. (2009). Formative assessment and academic achievement in pre-graduate students of health sciences. Advances in Health Sciences Education, 14(1), 61-67. https://doi.org/10.1007/s10459-007-9086-y

Connor, J., Franko, J., & Wambach, C. (2006). A brief report: The relationship between mid-semester grades and final grades. University of Minnesota, USA.

Cooper, H. (1989). Homework. Longman. https://doi.org/10.1037/11578-000

Crouch, C. H., & Mazur, E. (2001). Peer instruction: Ten years of experience and results. American Journal of Physics, 69(9), 970-977. https://doi.org/10.1119/1.1374249

Freeman, S., Eddy, S. L., McDonough, M., Smith, M. K., Okoroafor, N., Jordt, H., & Wenderoth, M. P. (2014). Active learning increases student performance in science, engineering, and mathematics. Proceedings of the National Academy of Sciences, 111(23), 8410-8415.  https://doi.org/10.1073/pnas.1319030111

Freeman, S., Haak, D., & Wenderoth, M. P. (2011). Increased course structure improves performance in introductory biology. Cell Biology Education—Life Sciences Education, 10(2), 175-186. https://doi.org/10.1187/cbe.10-08-0105

Freeman, S., O'Connor, E., Parks, J. W., Cunningham, M., Hurley, D., Haak, D., Dirks, C., & Wenderoth, M. P. (2007). Prescribed active learning increases performance in introductory biology. Cell Biology Education—Life Sciences Education, 6(2), 132-139. https://doi.org/10.1187/cbe.06-09-0194

Friedland, S. I. (1996). How we teach: A survey of teaching techniques in American law schools. Seattle University Law Review, 20, 1–44.

Goffe, W. L., & Kauper, D. (2014). A survey of principles instructors: Why lecture prevails. The Journal of Economic Education, 45, 360–375. http://dx.doi.org/10.1080/00220485.2014.946547

Handelsman, M. M., Briggs, W. L., Sullivan, N., & Towler, A. (2005). A measure of college student course engagement. The Journal of Educational Research, 98(3), 184-192. https://doi.org/10.3200/JOER.98.3.184-192

Harlen, W., & James, M. (1997). Assessment and learning: Differences and relationships between formative and summative assessment. Assessment in Education: Principles, Policy & Practice, 4(3), 365-379. https://doi.org/10.1080/0969594970040304

Harlen, W. (2012). On the relationship between assessment for formative and summative purposes. In J. Gardner (Ed.), Assessment and learning (pp. 87-102). SAGE Publications Ltd. https://doi.org/10.4135/9781446250808.n6

Jensen, P. A., & Barron, J. N. (2014). Midterm and first-exam grades predict final grades in biology courses. Journal of College Science Teaching, 44(2), 82-89. https://doi.org/10.2505/4/jcst14_044_02_82

Kim, A. S. N., Shakory, S., Azad, A., Popovic, C., & Park, L. (2019). Understanding the impact of attendance and participation on academic achievement. Scholarship of Teaching and Learning in Psychology. https://doi.org/10.1037/stl0000151

Kim, A. S. N., & Shakory, S. (2017). Early, but not intermediate, evaluative feedback predicts cumulative exam scores in large lecture-style post-secondary education classrooms. Scholarship of Teaching and Learning in Psychology, 3(2), 141-150. http://dx.doi.org/10.1037/stl0000086

Krasne, S., Wimmers, P. F., Relan, A., & Drake, T. A. (2006). Differential effects of two types of formative assessment in predicting performance of first-year medical students. Advances in Health Sciences Education, 11(2), 155-171. https://doi.org/10.1007/s10459-005-5290-9

Lammers, W. J., & Murphy, J. J. (2002). A profile of teaching techniques used in the university classroom: A descriptive profile of a U.S. public university. Active Learning in Higher Education, 3, 54–67. http://dx.doi.org/10.1177/1469787402003001005

Mulryan-Kyne, C. (2010). Teaching large classes at college and university level:  Challenges and opportunities. Teaching in Higher Education, 15, 175–185. http://dx.doi.org/10.1080/13562511003620001

Myers, C. B., & Myers, S. M. (2007). Assessing assessment: The effects of two exam formats on course achievement and evaluation. Innovative Higher Education, 31(4), 227-236. https://doi.org/10.1007/s10755-006-9020-x

Paschal, R. A., Weinstein, T., & Walberg, H. J. (1984). The effects of homework on learning: A quantitative synthesis. The Journal of Educational Research, 78(2), 97-104. https://doi.org/10.1080/00220671.1984.10885581

Petress, K. (2006). An operational definition of class participation. College Student Journal, 40(4), 821-824.

Pisha, B., & Coyne, P. (2001). Smart from the start: The promise of universal design for learning. Remedial and Special Education, 22(4), 197-203. https://doi.org/10.1177/074193250102200402

Price, P. C., Pentecost, H. C., & Voth, R. D. (2002). Perceived event frequency and the optimistic bias: Evidence for a two-process model of personal risk judgments. Journal of Experimental Social Psychology, 38(3), 242-252. https://doi.org/10.1006/jesp.2001.1509

Rao, K., Ok, M. W., & Bryant, B. R. (2014). A review of research on universal design educational models. Remedial and Special Education, 35, 153-166. https://doi.org/10.1177/0741932513518980

Schneider, M., & Preckel, F. (2017). Variables associated with achievement in higher education: A systematic review of meta-analyses. Psychological Bulletin, 143(6), 565. https://doi.org/10.1037/bul0000098

Siweya, H. J., & Letsoalo, P. (2014). Formative assessment by first-year chemistry students as predictor of success in summative assessment at a South African university. Chemistry Education Research and Practice, 15(4), 541-549. https://doi.org/10.1039/C4RP00032C

Smith, F. G. (2012). Analyzing a college course that adheres to the Universal Design for Learning (UDL) framework. Journal of the Scholarship of Teaching and Learning, 12(3), 31-61. Retrieved from https://scholarworks.iu.edu/journals/index.php/josotl/article/view/2151

Voelkl, K. E. (1995). School warmth, student participation, and achievement. The Journal of Experimental Education, 63(2), 127-138. https://doi.org/10.1080/00220973.1995.9943817

Weston, C., & McAlpine, L. (2004). Evaluation of student learning. In A. Saroyan & C. Amundsen (Eds.), Rethinking teaching in higher education: From a course design workshop to a faculty development framework. Stylus Publishing. https://doi.org/10.1111/j.1467-9647.2006.00291.x