
Newsletter November 2024: Generative AI in Teaching
The November 2024 edition of SDU Centre for Teaching and Learning’s newsletter focuses on some of the opportunities, dilemmas, and challenges that the accessibility of generative AI (GAI) presents in research-based teaching.
Generative AI significantly—some would say radically—changes the foundation for teaching. Fortunately, educators worldwide are working to find good answers. This newsletter points to some thoughts and resources that can support the work of understanding the new conditions and developing good initiatives and responses.
GAI is Upon Us
GAI in Teaching – Potential and Challenges
The potential for using GAI in daily study practices and teaching is significant. To mention just a few examples, GAI can be used for advanced lookups, explanation of concepts, and summarization of articles and large amounts of material. GAI can provide feedback on produced texts and even draft texts, and, with the right questions/prompts, it can be useful as a 'research assistant' in all phases of an investigation, including as a (co)writer of texts.
However, the major pedagogical challenge is that GAI should preferably function as an aid (copilot) for academic learning and not as a robot or autopilot that takes over the students' learning work without contributing to their education and competence development.
The book "Teaching with AI" could serve here as a significant source of inspiration.
"Teaching with AI – a practical guide to a new era of human learning" by Jose Antonio Bowen and C. Edward Watson, Baltimore: Johns Hopkins University Press, 2024, 270 pp. Find it at SDU library.
This comprehensive work addresses both (i) the 'big' questions stemming from the assertion that AI permanently alters our relationship with thinking, teaching, and learning, and (ii) the 'small' questions about how we can effectively structure our teaching with GAI to give students a critical friend/guide in their own learning process. The book offers numerous concrete examples of AI-integrated teaching activities and ready-to-use prompts.
Chapters 7-11 are particularly relevant, covering: Policies (rules for AI use, including cheating), Grading and (Re-)Defining Quality, Feedback and Roleplaying with AI, Designing Assignments and Assessments for Human Effort, and Writing with AI.
The book presents several thought-provoking (and perhaps even provocative) viewpoints for discussion, including:
- Chatbots can produce academic work to a certain level (the new baseline). The future goal is to teach students to contribute value above this level.
- For students to learn what constitutes 'added value' beyond chatbot capabilities, (i) GAI must be extensively integrated into teaching and GAI-supported responses must be part of the teaching foundation, and (ii) instructors must be even more precise in their descriptions of goals and quality that surpass chatbot capabilities.
- The part of instruction aimed at teaching students what lies beyond chatbot capabilities, i.e., the distinctly 'human' aspect of learning, will become increasingly important. Paradoxically, what the authors call 'naked teaching', i.e., teaching without technology, thus becomes even more crucial.
- While we cannot prohibit AI, we can 'prohibit' the lowest grades (i.e., raise the passing threshold for several of our examinations).
- Most future text writing will be blended writing between a chatbot and a person, but teaching students to write will remain crucial.
- Motivation becomes paramount - how to motivate students to 'struggle' with material when they can apparently have a chatbot do the work.
- There are no reliable methods for detecting AI use in written work (despite a large industry attempting to sell such services).
The book contains many concrete suggestions for teaching activities that consider these new conditions. An example from the book:
- Ask an AI to write an essay/write code/draw an image/create a script/design an experiment/draft a press release/propose a new business/analyze data
- Evaluate the results. Make a list of errors or ways the result could have been better.
- Adjust your prompt to improve the output.
- Which result is best and why? What was your strategy for improving the prompt? What worked best?
- Take the best output and make it even better with human editing.
- Describe for an employer what value you added to this process.
- Explain why your human work is better or how it improved the AI's work.
Desirable AI Usage in Different Courses
The appropriate level of GAI use in individual courses depends on various factors: learning objectives, nature of the material, examination format, stage in the study program, etc. Clear justification, communication, and expectation alignment regarding this are necessary for each course.
For this purpose, the University of British Columbia, Canada, has developed a six-level classification that instructors use to communicate the acceptable use of GAI in individual courses: (i) No AI, (ii) AI as a Study Tool, (iii) AI for Idea Generation, (iv) AI-Assisted Editing, (v) AI Output Evaluated, and (vi) Full AI.
For a more detailed description of the six degrees of AI integration in a course, see: The GENAI Assessment Scale
AI, Plagiarism, and Post-plagiarism
Critique of GAI and Tech Companies
The spread of GAI holds potential but also many contradictions and conflicts on multiple levels:
It is well known that chatbots can hallucinate, i.e., invent answers that may sound plausible but are factually or historically incorrect. Chatbots also often provide answers with value-related and cultural bias, as their responses depend on the material they are trained on and the (non-transparent) algorithms that govern their output.
Furthermore, one can certainly debate to what extent tech companies have fairly acquired the rights to all the material their chatbots are trained on, and how chatbots fit into tech companies' non-transparent data-collection architecture aimed at selling prediction products.
Finally, both the training and use of GAI are extremely energy-consuming with significant negative consequences for CO2 emissions and climate impact.
To document and understand these and many other contradictions and conflicts embedded in the provision and use of GAI products, the resource Teaching Critical AI Literacies may be useful.
Digital Competencies
As this newsletter's brief highlights show, the use of AI in teaching is quite complex. It exemplifies the general complexity that arises as our learning, research, and administrative activities become increasingly embedded in digital infrastructures and digital procedures. As students, researchers, administrative staff, and citizens in general, we increasingly need to know and understand these digital structures and procedures in order to function as competent citizens and avoid alienation.
This is the background for discussions about strengthening digital competencies throughout society and at SDU. Work on formulating strategies to strengthen digital competencies among all SDU employees and students will be initiated at several levels in 2025. Inger-Marie Falgren Christensen's work on developing a systematic approach to describing digital competencies will form part of the foundation and support the strategic work by providing a common language and framework for dialogue and implementation.
Digitale kompetencer i universitetsuddannelser SDU (Digital Competencies in University Education SDU, only in Danish), October 2024, Inger-Marie Falgren Christensen and SDU Centre for Teaching and Learning.
SDU BLOG

Mubashrah Saddiqa: Enhancing Communication and Collaboration in Diverse Student Groups Through Adaptive Supervision Techniques

Vinay Chakravarthi Goginen: Effective Supervision Methods to Foster Independent Research Skills in Ph.D. Students