Newsletter June 2025: Generative AI and Examinations – and a Note on GenAI and Programming
The June 2025 issue of the SDU Center for Teaching and Learning’s newsletter once again addresses GenAI, this time with a focus on AI and examinations. Higher education institutions around the world are working to adapt to the new conditions that GenAI introduces for assessment and examination. The newsletter refers to a number of research articles in which results and experiences are exchanged, and which are relevant to SDU’s examination regulations and practices.
SDU Regulations on AI and Examinations
At the heart of SDU’s regulations on the use of AI in examinations are the following points: (i) students are not permitted to use AI in oral exams and written in-house exams; (ii) students may use AI in all other examinations, including graded home assignments, bachelor’s theses, and master’s theses; and (iii) students are required to declare their use of GenAI in their assignments.
It can be challenging to gain a fully precise overview of examination practices across all of SDU's degree programmes. However, developments suggest that many of SDU's programmes are moving toward what the literature has termed "the two-lane road": either the use of GenAI in examinations is completely prohibited (Lane 1), or GenAI may be used freely (Lane 2). This approach has both strengths and weaknesses, which are being actively debated internationally (see below).
The rule that students must declare their use of GenAI has, so far, proven not to be straightforward in practice:
(i) Students do not always have the prerequisites to make a meaningful declaration. Declaring the use of GenAI meaningfully requires a good overview of, and a language for, the individual steps of an academic analysis, and that language is one students only gradually develop throughout their studies.
(ii) Students are not always willing to declare. They are often unsure how faculty members assess the use of GenAI, that is, whether it will have a positive or negative effect on their grade.
(iii) Faculty members are uncertain whether the use of AI is under-declared, and they spend a disproportionate amount of energy assessing or guessing what constitutes the students’ independent contributions to the assignment.
(iv) GenAI is, so far, only rarely incorporated into teaching through shared dialogue and collaboration between faculty and students, which contributes to the uncertainty. The issue of declaration is also being actively debated internationally (see below).
The Two-Lane Road?
Article: Guy J. Curtis (18 Mar 2025): The two-lane road to hell is paved with good intentions: why an all-or-none approach to generative AI, integrity, and assessment is insupportable, Higher Education Research & Development, DOI: 10.1080/07294360.2025.2476516
In this article, the author argues that the “two-lane road” approach to AI in examinations is not the most appropriate. There are two main problems.
Faculty members may easily abdicate responsibility for teaching students how to use a super-complex tool on super-complex academic problems, leaving this task to the students themselves. In subjects where GenAI is not permitted in examinations, teaching its use is considered "irrelevant," and in subjects with home assignments, AI may be used freely, so there is also relatively little to discuss. However, as the author points out, this is not how we normally teach complex topics in higher education. Normally, faculty members would attempt to design courses in which students are gradually guided into a new, complex field under expert supervision. The "two-lane road" strategy can thus get in the way of good teaching.
The second main problem, according to the author, relates to the main argument put forward by proponents of the two-lane approach: since we cannot effectively prevent cheating with AI (because plagiarism can no longer be detected), we should not attempt to place restrictions on the use of AI where it is allowed. The issue of effectively preventing cheating is not new, however. For example, it has always been possible for students to receive help with graded home assignments, bachelor's theses, and master's theses, and even to purchase an assignment, without this leading to the abolition of the examination form or to students being encouraged to seek as much external help as possible. We have managed to administer "middle ways" before, and we can learn from that experience.
AI Declaration of Courses—by Faculty?
Article: Mike Perkins, Jasper Roe & Leon Furze (December 2024): The AI Assessment Scale Revisited: A Framework for Educational Assessment
The AI Assessment Scale (AIAS) Revisited is an attempt to develop a taxonomy for the role GenAI should play in the teaching of individual courses. The background is the need to facilitate an open dialogue between faculty and students about the appropriate use of AI (given the learning objectives, the nature of the material, the course's place in the curriculum, etc.), and the idea is that faculty themselves should take responsibility for this AI declaration of their own courses.
A previous version of the AIAS was formulated and tested at a large number of universities worldwide, and a revised version has now been formulated in light of the experiences gained.
In the new version, the scale consists of five steps: No AI involved in teaching and examination; AI used in planning and idea development; AI as a collaboration partner in individual stages of analysis; full use of AI throughout the assignment; and finally, AI used for creative exploration of entirely new fields (see the figure via this link).
The taxonomy approach has been criticized for relying on agreements with students about the use of GenAI—agreements that are difficult to monitor for compliance. However, a very important premise of the AIAS model is that the way in which AI is incorporated into teaching should be reflected in the assessment criteria for assignments. The aim is to shift the focus from questions of ‘cheating’ to questions about whether clearly stated, transparent academic ‘criteria’ are met. This leads to another major theme in the international debate.
Validity More Important Than Cheating
Article: Phillip Dawson, Margaret Bearman, Mollie Dollinger & David Boud (2024) Validity matters more than cheating, Assessment & Evaluation in Higher Education, 49:7, 1005-1016, DOI: 10.1080/02602938.2024.2386662
The main point of the article is that, in connection with the spread of GenAI in education, we may so far have focused too much on cheating. By doing so, we risk: misusing our resources; reducing the issues to a question of rules and rule enforcement; shifting the responsibility for examination quality onto students and their morality; and causing faculty to develop ‘police eyes’ on examination assignments, rather than focusing on the quality of the academic contribution.
The authors argue that, in the context of examinations, the central question is whether the examinations are valid, that is, whether students can pass them only if they have actually learned something along the way. Shifting the focus to validity shifts the focus to examination design and places the responsibility on the institution.
GenAI and Programming
"The Robots Are Here: Navigating the Generative AI Revolution in Computing Education": https://dl.acm.org/doi/abs/10.1145/3623762.3633499
"The Robots Are Coming: Exploring the Implications of OpenAI Codex on Introductory Programming": https://dl.acm.org/doi/10.1145/3511861.3511863
"What is Programming? …and What is Programming in the Age of AI?": https://cacm.acm.org/opinion/what-is-programming/
AI has changed our relationship with texts and the conditions for writing and learning to write. The same can be said about programming and learning to program. This will have to be the subject of another newsletter; here, just three suggestions for summer reading on the topic.
EPICUR NEWS: Application phase for Seed Funding 2025 open!
EPICUR is supporting interdisciplinary research projects with up to €150,000 each.