Creating Assessments in an AI World

Since the release and popularization of ChatGPT in late 2022, higher education instructors have been asking: 

  • How can we ensure students are learning effectively? 
  • Are we teaching concepts and skills relevant in a world with generative artificial intelligence (AI)?

Generative AI refers to technologies such as ChatGPT, Bard, and similar tools that can produce text-based content, such as essays, articles, or even computer code, of human-like quality. While these advancements offer innovative possibilities, they can also be exploited for unethical purposes, including academic dishonesty. Students may be tempted to use generative AI tools to generate their writing assignments, compromising the principles of fairness and personal academic growth.

To support instructors and provide information on the topic, the CLDT has published a website resource on teaching and learning in the ChatGPT era. This resource will be updated as the technology develops and we gather input and examples from WSE instructors.

Importance of Assessment Design

The best approach to preserving the integrity of your course is to evaluate the design of your course assessments. Designing assessments around research-based teaching and learning practices that support student learning will help prevent students from misusing AI technologies. Effective assessment design measures a student’s understanding, skills, and critical thinking rather than their ability to compile resources and parrot conclusions back. Assessments should require students to engage deeply with the material, analyze it critically, and synthesize their own insights and perspectives.

Some questions to reflect on as you review your assessments for effective design: 

  • Is this assessment collaborative? 
  • Is the assessment active and/or hands-on? 
  • Is the assessment authentic and based on real-world scenarios? 
  • Can you (as the instructor) generate responses from AI for the assessment that can be used verbatim? If you can, so can your students! 

You may not check all these boxes, but by critically evaluating your assessments you will better understand the vulnerability and resilience of your assessment strategy.

EN.525.642.81 FPGA Design using VHDL

In Keith Newlander’s course, students work in groups to design and build a BRAM Music Generator to certain specifications, including a top-level block diagram and an interface to load music from an external device. In addition to providing the programming files, students work together to develop a report summarizing the overall design and the breakdown of work performed by each group member.

Newlander’s approach demonstrates several important aspects of AI-resilient assessment. The assessment is project-based and hands-on, and students are required to collaborate, which demands critical thinking, peer-to-peer feedback, and consensus.

“In my estimation, the group assessment is more resilient to AI as it is more targeted for each user of the course and requires interfacing collaboratively with your other classmates to complete. With this approach, students have to work together to create their overall goal and break-out tasks that are then completed individually. Since the overall design is worked out collaboratively, it’s more immune to AI completion and requires discussion and work between the students. The assessment then requires students to complete their portion of the task which is very targeted for their group design and involves integrating tightly with the software tools, their own overall design, and the group’s specific interface. The assessment then involves integrating all of the group’s components to a workable design and final report, which will likely involve rounds of troubleshooting and diagnosing of problems. This is not something AI can work well at and involves knowledge of the toolset and their initial design to complete successfully.”

EN.605.601.81.SU2 Foundations of Software Engineering 

Sam Schappelle, one of several instructors co-teaching the course, shared their approach to emerging AI. Students working in teams create a real-world software engineering project using AI as an optional support tool. Like Newlander’s, the approach incorporates critical thinking, peer-to-peer collaboration, and opportunities to reflect on experiences using AI within the assignment. 

Teams have the option to use AI to help create eight deliverables throughout the semester, including five documents and three presentations. Team members work together to plan, code, and test a solution to a specified complex design problem. The document deliverables follow those found in real software engineering projects and require students to pull together their knowledge from different course modules.

“What we are trying to do is encourage learners to explore the use of AI on their projects. One of the things we teach in software engineering is that the use of tools is a good thing. Tools make us more productive and reduce defects. AI is a tool that we expect will be of great use in software engineering. Eventually, we will be adding modules on AI to the course, but…it’s a bit early for that. Right now, we are letting the learners try things out on their own. The results from the summer [2023] semester are encouraging.”

Learn More

For more information on generative AI and effective assessment design, visit the CLDT’s website resources on the topic. Our webpage on AI and Assessments provides assessment examples from real engineering courses that are effective at mitigating the misuse of generative AI tools.

JHU Generative AI Tool Implementation Guidance and Best Practices: This resource was created by teaching and learning centers across the JHU schools. It provides pedagogical guidance for instructors on how to use, and how not to use, generative AI models in the classroom, including a section on redesigning assessments to address AI challenges.


Keywords: Artificial Intelligence