Easy A – Auto-graded Assessments

By: Osmara Salas

Of the many management tools within learning management systems (LMS), the testing option is the one least used by faculty (Rhode et al., 2017), yet exams are often worth the largest percentage of the overall course grade in online courses (Arend, 2006). Whether formative or summative, automatically graded (auto-graded) assessments are a convenient assessment strategy for instructors, with several benefits for students as well. For example, students can actively engage with the material, leading to increased class and exam preparation (Hillman, 2012). Students also gain self-confidence in their knowledge through practice tests and quizzes and feel that they perform better on finals (Douglas et al., 2012). These assessment types have evolved beyond standard true/false and multiple-choice questions: well-designed online quizzes and tests can reach the higher cognitive levels of Bloom’s Taxonomy, such as apply, analyze, and evaluate (Stack et al., 2020). While some initial effort is required to create the questions, they can be reused to populate future assessments, and auto-graded assessments can ease the burden of grading on instructors once courses have started.

In many LMSs, auto-graded assessments have expanded to include a variety of question types, such as calculated formulas, fill in the blank, multiple answers, matching, and more. Grading parameters can be set to provide customizable feedback and, depending on the question type, grant partial credit. Online quizzes and tests are traditionally used for assessing factual knowledge and simple comprehension of topics, and pools of questions can be created so that students receive a randomly generated set of questions for the quiz or test. A good source of foundational questions is the instructor resources on textbook publishers’ websites. However, before using publisher-generated questions, verify their accuracy and look for “clueing signals” (Boitshwarelo et al., 2017, p. 10) that may help students guess the correct response. Publishers’ questions are often generic in nature; therefore, they should be intermingled with those of your own creation to address more specific and detailed content from your course.
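
To make these mechanics concrete, the short Python sketch below illustrates two of them: drawing a random question set from a pool and granting partial credit on a multiple-answer question. The pool data, function names, and scoring rule are all hypothetical; an LMS implements this logic internally, and each platform’s partial-credit rules differ.

    import random

    # Hypothetical pool: each multiple-answer question lists its options
    # and the subset of options that is correct.
    QUESTION_POOL = {
        "Q1": {"options": {"a", "b", "c", "d"}, "correct": {"a", "c"}},
        "Q2": {"options": {"a", "b", "c", "d"}, "correct": {"b"}},
        "Q3": {"options": {"a", "b", "c", "d"}, "correct": {"a", "d"}},
        "Q4": {"options": {"a", "b", "c", "d"}, "correct": {"c"}},
    }

    def draw_quiz(pool, n):
        """Give each student a random set of n questions from the pool."""
        return random.sample(sorted(pool), n)

    def partial_credit(options, correct, selected):
        """Score one multiple-answer question: every option judged
        correctly (selected when correct, left blank when not) earns an
        equal share of the credit."""
        judged_right = sum((opt in selected) == (opt in correct)
                           for opt in options)
        return judged_right / len(options)

    print(draw_quiz(QUESTION_POOL, 2))  # e.g., ['Q2', 'Q4']
    q = QUESTION_POOL["Q1"]
    print(partial_credit(q["options"], q["correct"], {"a"}))  # 0.75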

Creating the Question

For any given question type, an appropriately and carefully worded question will drive inquiry and guide the student to the correct response. In this sense, higher-order cognitive skills can be assessed in auto-graded assessments. A study by Cox and Clark (1998) on the use of multiple-choice questions (MCQs) in formative quizzes in a computer programming course found that deep learning could be attained through simple adjustments in the construction of the question. The wording for a knowledge question can be simple, such as “which of the following is an example of [X]?”, thereby addressing definitions and identifying basic concepts. Re-expressing the concept, as in “what is the outcome of [given input/problem/variable]?”, requires comprehension and understanding of the concepts. Application questions can take multiple forms (e.g., ordering, fill-in-the-blank, MCQ), requiring students to select the steps needed to complete a procedure or task and, if desired, the component, device, or element needed to perform it. For testing analysis, Cox and Clark (1998) offer the following questions as guidelines:

  • What does [X] do?
  • Which of the following is most likely to be correct/the best solution to the problem?
  • Under what circumstances will [component/variable/device] perform properly?

Additionally, questions that require students to ‘calculate’ fall under the analyze cognitive level of Bloom’s Taxonomy. Including distractors among the questions and answer choices requires students to evaluate the context of the question and all possible solutions, and to reflect on their knowledge to select the correct response.
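
As a sketch of how a ‘calculate’ question can be auto-graded, the hypothetical Python snippet below mirrors the general behavior of calculated-formula question types: each student receives randomized values for the same underlying problem (Ohm’s law here), and any numeric response within a set tolerance is accepted.

    import random

    def make_calculated_question(rng):
        """Generate one instance of a calculated-formula item: the
        numbers vary per student, but the formula (I = V / R) does not."""
        voltage = rng.randint(6, 24)         # volts
        resistance = rng.choice([2, 4, 8])   # ohms
        prompt = (f"A {voltage} V source drives a {resistance} ohm "
                  f"resistor. What current flows, in amperes?")
        return prompt, voltage / resistance

    def grade_numeric(submitted, answer, tolerance=0.01):
        """Accept any response within the tolerance of the true answer."""
        return abs(submitted - answer) <= tolerance

    prompt, answer = make_calculated_question(random.Random())
    print(prompt)
    print(f"Accepted: any response within 0.01 of {answer}")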

Formative and Summative Value

When used as a formative assessment tool, online quizzes can scaffold learning by becoming progressively harder from module to module. Including questions from previous modules also enables recall of older information, leading to the construction of knowledge. Online quizzes should occur on a regular basis with low grade stakes so that students are overwhelmed by neither their frequency nor their weight. When using this strategy, it is important to note that non-graded assessment tools tend to be used less often by students as the course continues (Arend, 2006; Boitshwarelo et al., 2017; Ćukušić et al., 2013). Because of their low grade weight, formative online quizzes engage students and increase student learning (Boitshwarelo et al., 2017). Assigning the quiz a low grade value emphasizes that its importance lies in the completion attempt toward deeper understanding of the course material, not necessarily in the performance value (grade).

Students tend to use formative online quizzes to prepare for summative assessments (Boitshwarelo et al., 2017; Hillman, 2012); therefore, an online summative test should include some questions from previous quizzes. Student learning also improves when two or more attempts are made available and instantaneous feedback is provided. The more students interact with formative quizzes, the more confident they become with the content and the assessment strategy over time (Douglas et al., 2012), and students report being more actively engaged with and focused on the course material when they know an online quiz accompanies it (Hillman, 2012).

Corrective feedback is useful for helping students learn from their mistakes. Auto-graded assessments provide instantaneous quantitative feedback (a score) and, depending on the question type, qualitative (written) corrective feedback. Corrective feedback does not necessarily mean providing the student with the correct answer; enabling “view correct answer” is an optional setting in most LMSs. When an incorrect answer is submitted, the corrective feedback can instead guide the student to review the specific parts of the course material where the correct answer can be found.
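
A minimal sketch of this approach in Python, with hypothetical feedback text: each incorrect option maps to a pointer back into the course material rather than to the correct answer.

    # Hypothetical per-option feedback for one multiple-choice question.
    # Wrong choices direct the student to course material, not the answer.
    FEEDBACK = {
        "a": "Correct!",
        "b": "Not quite. Revisit Module 3, 'Series vs. parallel circuits'.",
        "c": "Not quite. Review the worked example in Lecture 3, slide 12.",
        "d": "Not quite. See Section 3.2 of the textbook.",
    }

    def feedback_for(selected):
        """Return the corrective feedback for the submitted option."""
        return FEEDBACK.get(selected, "Response not recognized.")

    print(feedback_for("c"))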

Quiz and test questions should be developed with the knowledge that students can reference course materials during an online quiz. Much like any other assessment strategy, online quizzes and tests align with learning objectives through a deliberate selection of question content and format. As with other course elements, quizzes affect students’ perception of the course; they should therefore reinforce course goals and be used as part of a more comprehensive assessment strategy.

Although often underutilized, auto-graded assessments have advantages for both instructors and students. Instructors can implement an effective assessment strategy that is reconfigurable for future terms and quickly provides responsive, corrective feedback, minimizing the need to do so personally once the course has started. Students appreciate the immediate feedback that positively contributes to their learning experience, and assessments can be scaffolded from module to module with low-stakes grading that yields high rewards.

References

Arend, B. D. (2006). Course assessment practices and student learning strategies in online courses. 1–15.

Boitshwarelo, B., Reedy, A. K., & Billany, T. (2017). Envisioning the use of online tests in assessing twenty-first century learning: A literature review. Research and Practice in Technology Enhanced Learning, 12(1), 1–16. https://doi.org/10.1186/s41039-017-0055-7

Cox, K., & Clark, D. (1998). The use of formative quizzes for deep learning. Computers & Education, 30(3/4), 157–167.

Douglas, M., Wilson, J., & Ennis, S. (2012). Multiple-choice question tests: A convenient, flexible and effective learning tool? A case study. Innovations in Education and Teaching International, 49(2), 111–121. https://doi.org/10.1080/14703297.2012.677596

Hillman, D. J. (2012). The impact of online quizzes on student engagement and learning. 1–6.

Rhode, J., Richter, S., Gowen, P., Miller, T., & Wills, C. (2017). Understanding faculty use of the learning management system. Online Learning, 21(3), 68–86. https://doi.org/10.24059/olj.v21i3.1217

Stack, A., Boitshwarelo, B., Reedy, A., Billany, T., Reedy, H., Sharma, R., & Vemuri, J. (2020). Investigating online tests practices of university staff using data from a learning management system: The case of a business school. Australasian Journal of Educational Technology, 36(4), 72–81.


Keywords: Academic Integrity, Assessment, Course Design, Engineering Education

Contributor Bio

Osmara Salas, Instructional Designer

Osmara received her Master of Education in Curriculum and Instruction, with a focus on Educational Technologies, from the University of Florida and is currently pursuing her doctorate in the same field. She received her Bachelor of Science in Technical Communications from Arizona State University. In addition to instructional design, her broad professional experience includes advertising, graphic design in multiple media, SharePoint administration, and both personnel and content management. Learn more about Osmara at osalas.org/eportfolio.