On Keeping Up with AI
A postscript on assessment and denial in education
It’s been said that keeping up with news in the GenAI space is like drinking from a firehose. This is particularly true in education, as so many tech releases impact our sector.
But consider the reality: unsupervised summative assessments present a high risk (likely × major impact) to the assurance of learning, and that risk cannot be mitigated (AI detectors are bullshit). Students can submit work of a passing or higher standard produced entirely by GenAI, a contract cheating service, or their cousin. If you accept that and rightfully abandon the nonsense busywork of attempting to design unsupervised “AI-proof assessments” that pass an arbitrary “stress test” (also bullshit: just because you can’t get GenAI to complete an assessment doesn’t mean I, or your students, can’t), then there’s not much to keep up with.
New Comet AI browser? ChatGPT in Canvas? Agentic AI? ChatGPT Atlas?
None of this matters from either an assessment security or validity point of view.
With each subsequent release, nothing changes.
There hasn’t been an AI release that has changed the fact that ChatGPT’s launch charbroiled unsupervised summative assessments, which were already cooked. They have been relegated to the learning assurance latrine since early 2023, two and a half years and counting.
Whatever Microsoft or OpenAI’s next AI release is, it won’t change this.
THE thing to get up to speed on is that unsupervised summative assessments are the walking dead.
“Ok, so now what?” is the question to be asking, instead of attempting to prevent AI use in unsupervised assessments via:
Hopelessly attempting to design AI-proof tasks.
Making tasks overly difficult, complicated, and so contrived that their validity is vaporised.
Drawing a line where AI can and can’t be used.
Asking students to submit their prompts as evidence of something. It’s adorable that your concept of student GenAI use is a single tidy prompt, when the reality is a sprawling workflow.
Do you want the prompts that generated the prompts I submitted? What about the prompts that generated those? How about the custom GPT that generated those ones? How many of the 35 documents that underpin that custom GPT do you want? And what are you planning to do with all that? And what if I don’t use prompts at all, just the ‘suggest edits’ button in ChatGPT’s Canvas?
Or attempting to detect AI use after the fact through:
Inaccurate and unreliable AI detection software (“it’s just a red flag”: cool, let’s use tarot cards then; they’re cheaper and just as reliable, but we won’t use them as the totality of evidence, so chill!).
Linguistic markers or “AI hallmarks”.
Asking GenAI software if it wrote a student’s work, breaching that student’s IP rights in the process (no, your policies don’t permit you to do whatever you want with a student’s work in the name of integrity).
Comparing a student’s work to the output of GenAI software (do I really have to explain this one?).
We owe it to our students and ourselves to do better.
To face reality, together, and get on with it.
Suggested readings / resources:
Corbin, T., Dawson, P., & Liu, D. (2025). Talk is cheap: why structural assessment changes are needed for a time of GenAI. Assessment & Evaluation in Higher Education, 1–11. https://doi.org/10.1080/02602938.2025.2503964
Australian Government. (2021). Higher Education Standards Framework (Threshold Standards) 2021. https://www.legislation.gov.au/F2021L00488/latest/text
Lodge, J. M., Howard, S., Bearman, M., & Dawson, P. (2023). Assessment reform for the age of artificial intelligence. Tertiary Education Quality and Standards Agency, Australian Government. https://www.teqsa.gov.au/sites/default/files/2023-09/assessment-reform-age-artificial-intelligence-discussion-paper.pdf
Lodge, J. M. (2024). The evolving risk to academic integrity posed by generative artificial intelligence: Options for immediate action. Tertiary Education Quality and Standards Agency, Australian Government. https://www.teqsa.gov.au/sites/default/files/2024-08/evolving-risk-to-academic-integrity-posed-by-generative-artificial-intelligence.pdf
Tertiary Education Quality and Standards Agency, Australian Government. (2024). Gen AI strategies for Australian higher education: Emerging practice. https://www.teqsa.gov.au/sites/default/files/2025-06/Gen-AI-strategies-research-training-emerging-practice-toolkit.pdf
Lodge, J. M., Bearman, M., Dawson, P., Gniel, H., Harper, R., Liu, D., McLean, J., Ucnik, L., & Associates (2025). Enacting assessment reform in a time of artificial intelligence. Tertiary Education Quality and Standards Agency, Australian Government. https://www.teqsa.gov.au/sites/default/files/2025-06/Gen-AI-strategies-research-training-emerging-practice-toolkit.pdf
Associate Professor Mark A. Bassett is Co-Director, Academic Quality, Standards, and Integrity, and Academic Lead (Artificial Intelligence) at Charles Sturt University. He is the author of the S.E.C.U.R.E. GenAI Use Framework for Staff.
Opinions expressed are the author’s own.