Generative Artificial Intelligence and Academic Integrity


Chatbots, such as OpenAI’s ChatGPT, are increasingly being integrated into higher education as a tool to improve student engagement and support. While ChatGPT has shown promise in its ability to assist students with a wide range of tasks and provide instant feedback, there are also potential risks associated with its use. This article aims to explore the benefits and risks of using ChatGPT in higher education and offer insights into how this technology can be leveraged effectively to support student learning and development.

The paragraph above was written by OpenAI’s ChatGPT using the following prompt: “Write an introductory paragraph to an article about the benefits and risks of ChatGPT in higher education.” With Microsoft’s substantial investment in OpenAI, the emergence of rival services from Google, Meta, and Anthropic (among others), and a significant number of students already using generative artificial intelligence (AI) for homework, it is no surprise that educators are both curious and concerned. As with earlier technologies that were met with initial skepticism, such as calculators and Google search, the impact of AI content generation on higher education remains an open question.

While this piece largely focuses on ChatGPT as an instance of generative AI, the suggestions offered here generalize to competing tools as well.

What Is Generative AI?

IBM defines generative AI as “deep-learning models that can generate high-quality text, images, and other content based on the data they were trained on” (Martineau, 2023, para. 1). These models, often accessed through a chatbot such as ChatGPT, can produce large volumes of reasonably accurate content in an accessible, conversational format. Reports show that these tools are already passing qualifying exams in business, law, and medicine (Wilde, 2023).

However, there are significant risks inherent to the technology, particularly within education:

  • First, a model’s responses are based on training data that may be outdated, incorrect, or disconnected from current events and dependable sources.
  • Second, the technology is limited in addressing complex or specialized inquiries.
  • Third, it maintains an authoritative tone even when it responds inaccurately.
  • Fourth, it can perpetuate biases or stereotypes and center Western perspectives. For example, by OpenAI’s own admission, ChatGPT “performs best in English” and “can reinforce a user’s biases” throughout a chat (OpenAI, n.d.).
  • Fifth, many college professors worry that generative AI will facilitate academic dishonesty, since the technology can generate content that evades plagiarism detection tools such as Grammarly, Unicheck, and Turnitin (Westfall, 2023). However, some research suggests this concern could be overblown (Spector, 2023).

The rapid expansion of generative AI also raises concerns about its impact on learning more broadly. What should an assessment entail to effectively measure mastery of course objectives? Looking at the bigger picture, a more profound question arises: what is the purpose of education in a world where tasks such as original essay writing can be automated?

As a mitigating, though still insufficient, workaround, some companies are developing classifiers to help detect AI-generated content. OpenAI discontinued its own text classifier in mid-2023 due to unreliable results. Also worrisome, research shows that false positives in AI detection are especially common for non-native English writers (Myers, 2023). The popular plagiarism deterrence software Turnitin, however, does include AI writing detection within its “Turnitin Originality” product.

Additionally, when generative AI is adopted, its sophistication poses a risk of overreliance for both instructors and students. For instructors, generative AI can become a shortcut that bypasses thoughtful design, content accuracy, and real engagement. For students, it can produce wrong or overly general responses that nonetheless sound authoritative. Banning the technology outright (as some school districts have attempted) is likely to be only a temporary fix of limited effectiveness.

Creative Uses of Generative AI

As Ellen B. Meier, professor of computing and educational practice, notes, “Tools such as ChatGPT can present teachers with new pedagogical opportunities to move away from transmission teaching and begin to design more active, culturally relevant learning environments” (Gilbard, 2023).

Indeed, despite the risks, there are several creative ways to leverage generative AI in education. With the assistance of emerging resource hubs for educators, instructors are already using generative AI in the following ways:

  • Creating lesson plans and activities: Generative AI can help engage students in your course by suggesting case studies, authentic activities, and projects suited to your course materials.
  • Assisting in designing assignments and quiz questions: Generative AI can provide initial questions that can then be edited by a subject matter expert (SME) into high-quality assignment prompts, quiz questions, and knowledge checks.
  • Providing feedback on assignments: If you have a list of points that you would or would not like to see in student submissions, generative AI can help you rewrite them into a paragraph of feedback.
  • Involving students in debugging or revising AI-generated text: Have students begin with some AI-generated text or code. Then, ask them to critique, expand, or add nuance to what generative AI created based on the knowledge they’ve acquired from your course.
  • Customizing course content by simplifying language and adapting materials to different reading levels: Reducing idioms and colloquialisms, matching student reading levels, and diversifying examples are important aspects of a successful online course. Input some text into ChatGPT and ask it to adjust your language to meet some of these goals. You can then keep what you like and discard what you do not (see the sketch after this list).
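
To illustrate that last item, here is a minimal sketch of how an instructor comfortable with a little scripting might automate this kind of language adjustment rather than pasting text into the chat interface. It assumes the openai Python package (version 1.x), an API key in the OPENAI_API_KEY environment variable, and a model name such as gpt-4o-mini; these specifics are assumptions, not requirements, and the same prompt works just as well typed directly into ChatGPT.

```python
# Minimal sketch: ask a generative AI model to adapt course text to a target
# reading level. Assumes the openai package (v1.x) and an API key in the
# OPENAI_API_KEY environment variable; the model name is an assumption.
from openai import OpenAI

client = OpenAI()

def adapt_reading_level(text: str, grade_level: str = "9th grade") -> str:
    """Return a rewrite of `text` with idioms removed and simpler phrasing."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name; substitute whatever you use
        temperature=0.3,      # keep the rewrite close to the original meaning
        messages=[
            {
                "role": "system",
                "content": (
                    "You are helping an instructor adapt course materials. "
                    "Remove idioms and colloquialisms, keep the meaning intact, "
                    f"and target a {grade_level} reading level."
                ),
            },
            {"role": "user", "content": text},
        ],
    )
    return response.choices[0].message.content

# Example: simplify one sentence of course text, then review it before use.
original = "Our midterm will separate the wheat from the chaff, so hit the books."
print(adapt_reading_level(original))
```

As with anything the model produces, the rewritten text is only a draft: keep what serves your students and discard what does not.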

Generative AI Checklist

Regardless of where you stand on generative AI for learning, whether curious, concerned, or both, we recommend implementing the following strategies in your course:

  • Acknowledge the presence of the technology: Since the vast majority of students are already aware of generative AI, instructors should openly discuss the technology whether or not its use is allowed (Smolansky et al., 2023). Simply acknowledging generative AI tools can reduce the confidence with which students might attempt to sidestep academic integrity.
  • Communicate expectations clearly: Set explicit expectations for whether student use of generative AI is permissible. If it is, also make clear whether AI-generated content in coursework must be disclosed and, if so, how it should be cited.
  • Revisit assignments with generative AI in mind: Consult with an instructional designer to review existing assignments and, if necessary, explore alternatives. Discuss how assessments can incorporate AI or be designed to be too challenging for AI to complete. For instance, a series of subjective personal reflections on an interactive scenario may not benefit significantly from AI assistance. You can also have students generate a paragraph or code snippet with AI and then ask them to revise or debug it in light of specific aspects of the assigned readings and lectures (see the example after this list).
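
To make that last suggestion concrete, below is a hypothetical example of the kind of short “AI-generated” code snippet an instructor in a programming course might hand to students. It runs and looks plausible, but it hides a classic bug for students to find, explain, and fix in light of the course readings; the snippet and its bug are illustrative inventions, not output from any particular AI tool.

```python
# Hypothetical "AI-generated" snippet for a revise-and-debug assignment.
# Students are asked to find the bug, explain why it happens, and fix it.
def add_grade(grade, gradebook=[]):  # bug: the mutable default list is shared
    """Append a grade to a gradebook and return the updated list."""
    gradebook.append(grade)
    return gradebook

# Looks fine in isolation, but the two "separate" gradebooks share state:
print(add_grade(90))  # [90]
print(add_grade(75))  # [90, 75]  <- surprise: the first grade leaks in
```

A corrected version would default gradebook to None and create a new list inside the function; asking students to explain why the original misbehaves shifts the focus from the AI tool back to the underlying course concepts.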

Conclusion

The integration of generative AI into higher education offers both benefits and risks. While generative AI tools show promise for enhancing student engagement and saving educators time, significant risks remain. To navigate the challenges generative AI poses, open discussion, clear communication of expectations, and thoughtful assignment design will be essential.

References

Gilbard, M. (2023, January 31). Navigating the risks and rewards of ChatGPT. Teachers College Newsroom.

Martineau, K. (2023, April 20). What is generative AI? IBM Research Blog.

Myers, A. (2023, May 15). AI-detectors biased against non-native English writers. Stanford HAI.

OpenAI. (n.d.). Is ChatGPT biased?

Smolansky, A., Cram, A., Raduescu, C., Zeivots, S., Huber, E., & Kizilcec, R. F. (2023). Educator and student perspectives on the impact of generative AI on assessments in higher education. In Proceedings of the tenth ACM conference on learning @ scale (pp. 378–382).

Spector, C. (2023, October 31). What do AI chatbots really mean for students and cheating? Stanford Graduate School of Education.

Westfall, C. (2023, January 28). Educators battle plagiarism as 89% of students admit to using OpenAI’s ChatGPT for homework. Forbes.

Wilde, J. (2023, January 26). ChatGPT passes medical, law, and business exams. Morning Brew.