The rapid advancement of generative artificial intelligence (genAI) technologies has sent shockwaves through the education sector, sparking intense debates about academic integrity, assessment practices, and student learning (Roe et al., 2023; Rudolph et al., 2023; Susnjak & McIntosh, 2024; Swiecki et al., 2022; Yeo, 2023). Since the public release of ChatGPT in November 2022, educators have grappled with concerns about cheating and the potential erosion of traditional academic values (Gorichanaz, 2023; Sullivan et al., 2023). However, as our understanding of genAI capabilities evolves, so too must our approach to assessment and teaching (Lodge et al., 2023).
Initially, many institutions responded with bans and restrictions, seeking to safeguard academic integrity (Cotton et al., 2023; Perkins & Roe, 2023; Plata et al., 2023). Yet, as the ubiquity and sophistication of genAI tools became apparent, more nuanced approaches began to emerge (Fowler et al., 2023). This shift reflects a growing recognition that simply prohibiting or policing genAI use is neither practical nor beneficial to students' long-term success. Proctored exams and lockdown browsers create significant barriers: they require additional software, stable internet connections, and quiet, private spaces that many students lack. These requirements particularly disadvantage students from lower socioeconomic backgrounds or those in shared living spaces. Such high-stakes, timed assessments also heighten anxiety and disproportionately affect students with disabilities, students with test anxiety, and non-native English speakers (Fatemi & Saito, 2020; Pope et al., 2009). These tools likewise fail to address the fundamental challenge of preparing students for a world where AI tools are increasingly common in professional settings. Additionally, genAI detection tools have shown significant limitations, including accuracy problems, false positives, and potential bias against non-native English speakers, making them unreliable as a primary means of ensuring academic integrity (Chaka, 2023; Dalalah & Dalalah, 2023; Jiang et al., 2024; Perkins et al., 2024).
A more sustainable approach is to build a collaborative culture that encourages students to practice academic honesty and to view assessments as opportunities for growth rather than merely a means to achieve high scores (Richardson, 2023; Robinson & Glanzer, 2017). The reality is that students are already using genAI tools, and these technologies will likely play a significant role in their future careers (Johnston et al., 2024; Shah et al., 2024; Smolansky et al., 2023). As genAI is incorporated into mainstream products, tools like ChatGPT will become part of everyday writing in some form, just as calculators and computers have become part of math and science (McMurtrie, 2022).
Rather than viewing genAI as a threat, educators have an opportunity to help students develop the skills needed to use these tools ethically and effectively (Anson, 2022; Sharples, 2022; Thanh et al., 2023). Failing to adapt to the rise of genAI could not only compromise academic integrity but also diminish the value of a university degree (Moorhouse et al., 2023).
We need to fundamentally rethink our approach to assessment in the age of genAI. This means moving beyond traditional essays and exams to create more authentic, competency-based assessments that align with the realities of the modern workplace (Bearman et al., 2023). To begin this transformation, educators must first understand how genAI tools interact with their current assessment practices. This understanding can then inform the development of more resilient and meaningful evaluation methods that both challenge students and prepare them for their future careers.
Attack Your Assessments
Before implementing any changes, a useful first step is to evaluate your current assessments’ vulnerabilities to genAI. Furze (2024) recommends “attacking” your assessments from the perspective of a student seeking to complete the assignments using readily available genAI tools. For example, try inputting your assignment instructions and rubric into ChatGPT, specifying the course context and level, and requesting a response aimed at achieving high marks. This practical strategy can reveal potential weaknesses in your assessment design and help identify where modifications may be needed.
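This "attack" can also be scripted so the same adversarial prompt is applied consistently across every assessment in a course. The sketch below, a minimal illustration rather than a tool from any cited source, only assembles the prompt text; the actual model call (e.g., pasting into ChatGPT or using an API client) is left out, and all field names are illustrative assumptions:

```python
def build_attack_prompt(course: str, level: str,
                        instructions: str, rubric: str) -> str:
    """Assemble an adversarial prompt asking a genAI tool to complete
    an assignment as a high-achieving student would (illustrative only)."""
    return (
        f"You are a student in {course}, a {level} course.\n"
        "Complete the following assignment so that your response would "
        "earn the highest marks on the rubric provided.\n\n"
        f"Assignment instructions:\n{instructions}\n\n"
        f"Grading rubric:\n{rubric}\n"
    )

# Example: probe a short essay prompt from a hypothetical intro course.
prompt = build_attack_prompt(
    course="Introduction to Economics",
    level="first-year undergraduate",
    instructions="Write a 500-word essay on price elasticity of demand.",
    rubric="Accuracy (40%), use of real-world examples (30%), clarity (30%)",
)
print(prompt)
```

Pasting the assembled prompt into a genAI tool and grading the output against your own rubric gives a quick, repeatable estimate of how vulnerable each assessment is.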
You can also consider incorporating this activity into your current courses as a competition for students to see who can produce the most sophisticated, human-like submission using genAI tools. In addition to helping students develop AI literacy, this exercise can help educators understand the capabilities and limitations of genAI within their specific disciplines, discover new tools, and identify areas where assessments need to be strengthened or redesigned.
It may also be useful to have faculty collaborate on testing each other's assessments to comprehend genAI's impact across disciplines and to encourage a shift away from traditional practices such as essays, multiple-choice tests, and basic problem sets that can be easily completed using genAI tools.
Researchers’ attempts to attack assessments using genAI have demonstrated these tools’ proficiency in generating code, writing academic papers, and passing university-level exams across disciplines (Geerling et al., 2023; Kolade et al., 2024; Lucey & Dowling, 2023; Malinka et al., 2023; Tenakwah et al., 2023). In addition to posing challenges to traditional assessment designs that rely on knowledge recall, synthesis, and simple analysis (Bearman et al., 2023; Thanh et al., 2023), genAI tools have demonstrated problem-solving, analytical, critical thinking, and presentation capabilities and can therefore stunt students' development of those very skills when used unethically (Ogunleye et al., 2024).
As you examine your existing assessments, consider also revising your learning objectives to better prepare students for an increasingly AI-driven society.
The path forward requires a delicate balance between embracing technological innovation and maintaining academic rigor. By proactively examining and adapting our assessment practices, we can create learning environments that both acknowledge the reality of genAI tools and foster genuine student growth. The goal isn't to outsmart AI but to develop assessment strategies that make meaningful use of these technologies while ensuring that students develop the critical thinking and practical skills they'll need in their future careers.
References
Anson, C. M. (2022). AI-based text generation and the social construction of “fraudulent authorship”: A revisitation. Composition Studies, 50(1), 37–46.
Bearman, M., Ajjawi, R., Boud, D., Tai, J., & Dawson, P. (2023). CRADLE Suggests… assessment and genAI. Centre for Research in Assessment and Digital Learning, Deakin University.
Chaka, C. (2023). Detecting AI content in responses generated by ChatGPT, YouChat, and Chatsonic: The case of five AI content detection tools. Journal of Applied Learning & Teaching, 6(2), 94–104.
Cotton, D. R. E., Cotton, P. A., & Shipway, J. R. (2023). Chatting and cheating: Ensuring academic integrity in the era of ChatGPT. Innovations in Education and Teaching International, 61(2), 228–239.
Dalalah, D., & Dalalah, O. M. A. (2023). The false positives and false negatives of generative AI detection tools in education and academic research: The case of ChatGPT. The International Journal of Management Education, 21(2).
Fatemi, G., & Saito, E. (2020). Unintentional plagiarism and academic integrity: The challenges and needs of postgraduate international students in Australia. Journal of Further and Higher Education, 44(10), 1305–1319.
Fowler, S., Korolkiewicz, M., & Marrone, R. (2023). First 100 days of ChatGPT at Australian universities: An analysis of policy landscape and media discussions about the role of AI in higher education. Learning Letters, 1, 1.
Furze, L. (2024, May 1). GenAI strategy for faculty leaders.
Geerling, W., Mateer, G. D., Wooten, J., & Damodaran, N. (2023). ChatGPT has aced the test of understanding in college economics: Now what? The American Economist, 68(2), 233–245.
Gorichanaz, T. (2023). Accused: How students respond to allegations of using ChatGPT on assessments. Learning: Research and Practice, 9(2), 183–196.
Jiang, Y., Hao, J., Fauss, M., & Li, C. (2024). Detecting ChatGPT-generated essays in a large-scale writing assessment: Is there a bias against non-native English speakers? Computers & Education, 217.
Johnston, H., Wells, R. F., Shanks, E. M., Boey, T., & Parsons, B. N. (2024). Student perspectives on the use of generative artificial intelligence technologies in higher education. International Journal for Educational Integrity, 20(2).
Kolade, O., Owoseni, A., & Egbetokun, A. (2024). Is AI changing learning and assessment as we know it? Evidence from a ChatGPT experiment and a conceptual framework. Heliyon, 10(4).
Lodge, J. M., Howard, S., & Bearman, M. (2023). Assessment reform for the age of artificial intelligence. Tertiary Education Quality and Standards Agency.
Lucey, B., & Dowling, M. (2023, January 27). ChatGPT: Our study shows AI can produce academic papers good enough for journals—just as some ban it. The Conversation.
Malinka, K., Perešíni, M., Firc, A., Hujnák, O., & Januš, F. (2023). On the educational impact of ChatGPT: Is artificial intelligence ready to obtain a university degree? Proceedings of the 2023 conference on innovation and technology in computer science education, 1, 47–53.
McMurtrie, B. (2022, December 13). AI and the future of undergraduate writing: Teaching experts are concerned, but not for the reasons you think. The Chronicle of Higher Education.
Moorhouse, B. L., Yeo, M. A., & Wan, Y. (2023). Generative AI tools and assessment: Guidelines of the world's top-ranking universities. Computers and Education Open, 5.
Ogunleye, B., Zakariyyah, K. I., Ajao, O., Olayinka, O., & Sharma, H. (2024). Higher education assessment practice in the era of generative AI tools. Journal of Applied Learning & Teaching, 7(1), 46–56.
Perkins, M., & Roe, J. (2023). Decoding academic integrity policies: A corpus linguistics investigation of AI and other technological threats. Higher Education Policy, 37, 633–653.
Perkins, M., Roe, J., Postma, D., McGaughran, J., & Hickerson, D. (2024). Detection of GPT-4 generated text in higher education: Combining academic judgement and software to identify generative AI tool misuse. Journal of Academic Ethics, 22(1), 89–113.
Plata, S., De Guzman, M. A., & Quesada, A. (2023). Emerging research and policy themes on academic integrity in the age of ChatGPT and generative AI. Asian Journal of University Education, 19(4), 743–758.
Pope, N., Green, S. K., Johnson, R. L., & Mitchell, M. (2009). Examining teacher ethical dilemmas in classroom assessment. Teaching and Teacher Education, 25(5), 778–782.
Richardson, W. (2023). Assessing the learning process, not the product. Modern Learners.
Robinson, J. A., & Glanzer, P. L. (2017). Building a culture of academic integrity: What students perceive and need. College Student Journal, 51(2), 209–221.
Roe, J., Renandya, W. A., & Jacobs, G. M. (2023). A review of AI-powered writing tools and their implications for academic integrity in the language classroom. Journal of English and Applied Linguistics, 2(1).
Rudolph, J., Tan, S., & Tan, S. (2023). ChatGPT: Bullshit spewer or the end of traditional assessments in higher education? Journal of Applied Learning and Teaching, 6(1), 342–363.
Shah, P., Raheja, A., & Sridhar, H. (2024, February 4). How do students use ChatGPT? The Michigan Daily.
Sharples, M. (2022, May 17). New AI tools that can write student essays require educators to rethink teaching and assessment. London School of Economics.
Smolansky, A., Cram, A., Raduescu, C., Zeivots, S., Huber, E., & Kizilcec, R. F. (2023). Educator and student perspectives on the impact of generative AI on assessments in higher education. Proceedings of the tenth ACM conference on Learning @ Scale. ACM Digital Library.
Sullivan, M., Kelly, A., & McLaughlan, P. (2023). ChatGPT in higher education: Considerations for academic integrity and student learning. Journal of Applied Learning and Teaching, 6(1), 31–40.
Susnjak, T., & McIntosh, T. R. (2024). ChatGPT: The end of online exam integrity? Education Sciences, 14(6), 656.
Swiecki, Z., Khosravi, H., Chen, G., Martinez-Maldonado, R., Lodge, J. M., Milligan, S., Selwyn, N., & Gašević, D. (2022). Assessment in the age of artificial intelligence. Computers and Education: Artificial Intelligence, 3.
Tenakwah, E. S., Boadu, G., Tenakwah, E. J., Parzakonis, M., Brady, M., Kansiime, P., Said, S., Ayilu, R., Radavoi, C., & Berman, A. (2023). Generative AI and higher education assessments: A competency-based analysis. Research Square.
Thanh, B. N., Vo, D. T. H., Nhat, M. N., Pham, T. T. T., Trung, H. T., & Xuan, S. H. (2023). Race with the machines: Assessing the capability of generative AI in solving authentic assessments. Australasian Journal of Educational Technology, 39(5), 59–81.
Yeo, M. A. (2023). Academic integrity in the age of artificial intelligence (AI) authoring apps. TESOL Journal, 14(3).