The rapid integration of AI into professional practice across disciplines makes AI literacy increasingly crucial, not just for technology-focused fields but for all areas of study. Even faculty who are skeptical of AI's value need to consider how it's transforming their disciplines. For example, scientific fields are seeing AI adoption in literature reviews, experimental design, and data analysis. In the humanities, AI tools are already being used for textual analysis, translation, and content creation. Creative disciplines must grapple with AI's impact on artistic production and copyright. Professional programs face increasing pressure from employers who expect graduates to understand AI applications in their field.
AI is reshaping the workplace and labor market, making it essential to equip students with AI skills to prepare them for future job opportunities (Demirci et al., 2024). Furthermore, students are eager for instruction on how to develop their genAI skills and appreciate instructors who integrate genAI into their courses, provide equitable access to these tools, allow room for creativity, and communicate clear guidelines around usage (Freeman, 2024; Johnston et al., 2024; Kelly et al., 2023).
AI literacy encompasses multiple competencies that programs need to address systematically (Ng et al., 2021; Su et al., 2022). The University of Adelaide's AI literacy framework outlines four key areas: recognize and understand; use and apply; evaluate and critique; and reflect and respect. Complementing this, Barnard College offers a framework that progresses from understanding AI's benefits and limitations to creating novel applications (Hibbert et al., 2024). Infographics that break down the core elements of AI literacy, ranging from basic concepts to practical applications, can also serve as accessible starting points. Programs can use these frameworks to develop comprehensive approaches to AI education.
Rather than viewing AI literacy as endorsing AI use, programs can approach it as preparing students to critically engage with these tools—whether using them or responding to their impact on their field. Graduates need to be able to make informed decisions about AI use and understand its implications for their profession (Koh & Doroudi, 2023).
Program-Level Considerations
Programs should begin by mapping AI competencies across courses to ensure comprehensive coverage while identifying points in the curriculum where AI literacy naturally aligns with existing learning objectives. This process includes developing program-level learning outcomes related to AI literacy and creating scaffolded experiences that build AI competency throughout the program. When scaffolding desired outcomes across a curriculum, earlier courses might incorporate more learning-based assessments, focused on fundamental knowledge and skills, with more advanced courses using action-based assessments emphasizing the application of knowledge in real-world contexts (Kadel et al., 2024).
For example, one top MBA program has reimagined its curriculum around fundamental skills needed for working with AI, including strategic thinking, structured problem-solving, and human-centered communication (Westfall, 2024). While technical knowledge was once essential for complex problem-solving, programs can now focus on developing students' ability to communicate effectively with AI tools, with technical details learned as needed. This approach resembles managing a team: success comes from knowing how to break down problems and give clear instructions rather than knowing every technical detail oneself (Clay, 2024).
The integration of AI literacy into academic programs requires thoughtful collaboration between administrators and faculty. Success depends on clear program-level objectives, consistent implementation across courses, regular assessment and adjustment, and ongoing professional development. To that end, training opportunities for faculty to develop AI literacy are critical (Moorhouse et al., 2023). We suggest, at minimum, providing training to develop the following competencies among instructing faculty:
- Recognize the implications of genAI for academic integrity, for example by "attacking" their own assessments with genAI tools to test how vulnerable those assessments are (Furze, 2024).
- Become proficient with genAI tools used in your discipline (Zhai, 2022).
- Design assessment tasks that provide space for students to demonstrate learning while incorporating genAI tools in the assessment process.
- Communicate with students about the productive, responsible, and ethical use of genAI in assessment tasks (Su et al., 2023).
Departments should develop comprehensive assessment strategies that align with their AI literacy goals. This could include creating program-wide rubrics for evaluating AI literacy across courses and establishing clear benchmarks for competency at different stages of the program. Portfolio requirements can be particularly effective, allowing students to demonstrate their growing AI literacy through curated examples of their work and reflections on their learning journey.
By approaching AI literacy as a crucial component of modern education, programs can prepare students for an AI-enhanced future while maintaining the core values and skills of their disciplines.
Course-Level Implementation
When integrating genAI into a course, begin by critically examining its learning objectives and considering what skills and knowledge students must be able to demonstrate with—and without—the assistance of genAI tools to succeed in this evolving landscape (Koh & Doroudi, 2023). Consult with colleagues and industry professionals to identify the skills and knowledge essential to the discipline regardless of AI capabilities. Requiring students to thoroughly learn and independently demonstrate these foundational competencies ensures they retain essential abilities and prevents overreliance on genAI.
Once students have mastered these core skills, consider leveraging genAI to offload time-consuming aspects of tasks, allowing students to concentrate on more significant and complex elements (Lodge et al., 2023). Evaluate AI's potential impact on the field by considering which core responsibilities AI could handle independently, how AI tools might improve existing workflows, and what new possibilities emerge when integrating AI into this work. If students will be allowed or even expected to use genAI in their future workplaces, consider recreating similar conditions in courses to help boost students’ productivity and confidence in collaborating with technology (Smolansky et al., 2023). Focus on the quality of the deliverables students are able to produce, using whatever tools help them get to those outcomes (Bearman & Ajjawi, 2023).
Prioritize learning objectives that focus on higher-order thinking and soft skills, such as critical thinking, evaluation, problem-solving, creativity, innovation, teamwork, communication, emotional intelligence, ethical reasoning, reflection, and adaptability (Aoun, 2017; Kadel et al., 2024; Ratten & Jones, 2023; Rudolph & Tan, 2022; Saroyan, 2022). In crafting objectives, consider using genAI to help articulate clear, measurable behaviors that align with the highest levels of Bloom’s taxonomy (Bloom et al., 1956; DeMara et al., 2019), such as “Evaluate the ethical implications of X” or “Synthesize competing theories in the field of Y” (Salinas-Navarro et al., 2024; Thanh et al., 2023).
In evaluating learning objectives, identify which objectives should remain independent of AI, which may be moderately impacted by AI, and which could be significantly enhanced by AI. For example, objectives might include the following:
- Teaching students to craft effective prompts for generating and refining AI outputs
- Assessing the limitations and biases of AI-generated content
- Using AI tools responsibly for research and learning
- Collaborating with AI systems to enhance human capabilities rather than replace them
Consider adding course-level learning objectives explicitly related to AI literacy, especially involving practices that are (or are likely to become) relevant in the discipline. This will ensure students are prepared not only to navigate an AI-driven world but also to leverage these powerful tools for innovation and problem-solving across their academic and professional journeys. For examples of AI-incorporated learning objectives, read Guide: How to Train Students to Use AI.
Strategies for Enhancing AI Literacy
Whether focused on faculty development or student curricula, the same types of techniques can benefit both groups of learners in developing key components of AI literacy.
Incorporate learners’ perspectives.
Early on in a course or a training session, educators can use anonymous polls to assess students’ or faculty’s prior experience with genAI tools. This not only helps in understanding the baseline knowledge of the class but also highlights areas where further education is needed. Starting the conversation by asking students what they know about genAI can lead to surprising insights and set the stage for a more informed discussion (Fiore, 2023). Further along in the process, seeking learner feedback on assessments and incorporating their perspectives on genAI integration into the curriculum can also foster a sense of ownership and encourage responsible use (Bearman et al., 2023).
Teach domain-specific skills and practical application.
Introduce faculty and students to AI tools relevant to their discipline, focusing on effective prompt engineering and application (Zhai, 2022). Give examples of how you would prompt a genAI tool to assist in a real-life task you are working on. Teach learners how to ask the right questions to maximize the technology’s utility. Incorporate hands-on experiences and specific case studies highlighting both the benefits and ethical dilemmas of using genAI to help them identify practical applications (Meakin, 2024). For students, it's also important to teach what "good work" looks like in the field (Bearman & Ajjawi, 2023; Bearman et al., 2023). This helps them understand the standards they need to meet and lets them judge the quality of work done by AI.
Require critical thinking and evaluation.
Design assessments that require learners to analyze, synthesize, and evaluate AI-generated information (Lim et al., 2023). Teach faculty and students to recognize the benefits and limitations of AI tools, review AI-generated content for potential errors or biases, implement fact-checking and verification methods, and critique AI tools' creation, use, and application (Bearman & Ajjawi, 2023; Hibbert et al., 2024). Here are some sample assignments to help students learn to think critically about AI-generated output:
- Have faculty or students generate an essay using AI and then "grade" it, looking for hallucinated information (e.g., fake quotes, fake sources, or real sources misunderstood or mischaracterized) and critiquing its analysis. Ask them to reflect on the reliability of genAI and how it can mislead them, to describe whether anything surprised them, and to note any concerns they have about the future (e.g., mental atrophy or misinformation).
- Have learners use genAI to grade a first draft of an assignment, using a provided evaluation rubric. Ask them to evaluate the feedback received and document edits made to their assignment based on the AI’s assessment.
- Require students to revise AI-generated drafts, reflecting on improvements made (Chiu et al., 2023).
- Ask students to write a short text on a topic and also prompt a genAI tool to produce content on the same topic. Have students assess both texts against disciplinary guidelines and a provided rubric, developing their evaluative judgment abilities (Bearman & Ajjawi, 2023).
Engage learners in ethical discussions.
To promote AI literacy, it is crucial to teach both faculty and students to critically evaluate ethical issues related to generative AI, including the following:
- Biases in AI models that may produce discriminatory content (Bender et al., 2021; Hacker et al., 2024; Sun et al., 2023)
- Intellectual property concerns from using copyrighted material in training data sets (Ozcan et al., 2023; Perrotta et al., 2022)
- Labor exploitation and environmental impacts (Caplan, 2024; Selwyn, 2022)
- The risk of worsening digital inequality, particularly for low-income and non-native English-speaking students (Amano et al., 2023; Duah & McGivern, 2024)
Here are some ideas for developing users’ ethical awareness:
- Have learners discuss the ethical implications of AI use and the impacts of AI-generated content on society (Wu & Chang, 2023).
- Offer learners access to guest speakers, articles, and multimedia resources about genAI and its ethical implications.
- Facilitate debates on controversial AI topics to encourage critical thinking and consideration of multiple perspectives (Moorhouse et al., 2023).
- Assign reflective assignments on learners’ views about AI to help them articulate their thoughts and deepen their understanding.
- Invite learners to explore deeper questions about the role of genAI in human creativity and expression. Questions like "Why do we write?" or "What does it mean to cede our thinking and our voice to non-sentient machines?" can provoke thoughtful discussions about the nature of learning and the ethical use of technology. These conversations can help learners develop a nuanced understanding of AI's place in society (Fiore, 2023).
Acknowledge emotional factors.
In developing faculty's and students' evaluative judgment abilities, it is important to consider the role of emotions when navigating the ambiguous nature of working with genAI (Bearman & Ajjawi, 2023). Fostering "epistemic doubt" (a state of uncertainty about the completeness or accuracy of AI-generated information) can help students approach AI outputs more critically (Hoeyer & Wadmann, 2020). By articulating what they trust, doubt, and feel about AI interactions, both faculty and students can more effectively navigate the complexities of AI-assisted learning, balancing cognitive skills with emotional intelligence. When designing rubrics for an AI-mediated world, it is crucial to evaluate students' ability to navigate ambiguity and complexity rather than simply categorize information. This approach better prepares students for the uncertainties they'll face in real-world scenarios, emphasizing decision-making processes and ethical considerations over simplified checklists (Bearman & Ajjawi, 2023).
Conclusion
Integrating AI literacy considerations into program and course design can foster a learning environment where AI becomes a tool for empowerment rather than a shortcut to success. By thoughtfully considering which competencies students need to develop independently and which can be augmented by AI, and preparing faculty to help students develop those competencies, institutions and programs can prepare students to use emerging technologies responsibly in their professional and personal lives.
References
Amano, T., Ramírez-Castañeda, V., Berdejo-Espinola, V., Borokini, I., Chowdhury, S., Golivets, M., González-Trujillo, J. D., Montaño-Centellas, F., Paudel, K., White, R. L., & Veríssimo, D. (2023). The manifold costs of being a non-native English speaker in science. PLoS Biology, 21(7), Article e3002184.
Aoun, J. E. (2017). Robot-proof: Higher education in the age of artificial intelligence. MIT Press.
Bearman, M., & Ajjawi, R. (2023). Learning to work with the black box: Pedagogy for a world with artificial intelligence. British Journal of Educational Technology, 54(5), 1160–1173.
Bearman, M., Ajjawi, R., Boud, D., Tai, J., & Dawson, P. (2023). CRADLE Suggests… assessment and genAI. Centre for Research in Assessment and Digital Learning, Deakin University.
Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021, March). On the dangers of stochastic parrots: Can language models be too big? Proceedings of the 2021 ACM conference on fairness, accountability, and transparency. ACM Digital Library.
Bloom, B. S., Engelhart, M. D., Furst, E. J., Hill, W. H., & Krathwohl, D. R. (1956). Taxonomy of educational objectives: The classification of educational goals. Handbook 1: Cognitive domain. David McKay Company, Inc.
Caplan, N. (2024, February 15). Resisting AI and refocusing on the human. Perspectives. TESOL Connections.
Chiu, T. K., Xia, Q., Zhou, X., Chai, C. S., & Cheng, M. (2023). Systematic literature review on opportunities, challenges, and future research recommendations of artificial intelligence in education. Computers and Education: Artificial Intelligence, 4, Article 100118.
Clay, G. (2024, November 18). AI literacy and "technical" knowledge. AutomatED: Teaching Better with Tech.
DeMara, R. F., Tian, T., & Howard, W. (2019). Engineering assessment strata: A layered approach to evaluation spanning Bloom’s taxonomy of learning. Education and Information Technologies, 24(2), 1147–1171.
Demirci, O., Hannane, J., & Zhu, X. (2024, November 11). Research: How gen AI Is already impacting the labor market. Harvard Business Review.
Duah, J. E., & McGivern, P. (2024). How generative artificial intelligence has blurred notions of authorial identity and academic norms in higher education, necessitating clear university usage policies. International Journal of Information and Learning Technology, 41(2), 180–193.
Fiore, S. L. (2023). Survival guide to AI and teaching, pt. 10: Talking to your students about AI and learning. Center for the Advancement of Teaching, Temple University.
Freeman, J. (2024, February 1). Provide or punish? Students' views on generative AI in higher education. HEPI Policy Note 51. Higher Education Policy Institute.
Furze, L. (2024). GenAI strategy: Attack your assessments.
Hacker, P., Mittelstadt, B., Borgesius, F. Z., & Wachter, S. (2024). Generative discrimination: What happens when generative AI exhibits bias, and what can be done about it. arXiv:2407.10329.
Hibbert, M., Altman, E., Shippen, T., & Wright, M. (2024, June 3). A framework for AI literacy. EDUCAUSE Review.
Hoeyer, K., & Wadmann, S. (2020). ‘Meaningless work’: How the datafication of health reconfigures knowledge about work and erodes professional judgement. Economy and Society, 49(3), 433–454.
Johnston, H., Wells, R. F., Shanks, E. M., Boey, T., & Parsons, B. N. (2024). Student perspectives on the use of generative artificial intelligence technologies in higher education. International Journal for Educational Integrity, 20(2), Article 2.
Kadel, R., Mishra, B. K., Shailendra, S., Abid, S., Rani, M., & Mahato, S. P. (2024). Crafting tomorrow's evaluations: Assessment design strategies in the era of generative AI. arXiv:2405.01805.
Kelly, A., Sullivan, M., & Strampel, K. (2023). Generative artificial intelligence: University student awareness, experience, and confidence in use across disciplines. Journal of University Teaching and Learning Practice, 20(6), Article 12.
Koh, E., & Doroudi, S. (2023). Learning, teaching, and assessment with generative artificial intelligence: Towards a plateau of productivity. Learning: Research and Practice, 9(2), 109–116.
Lim, W. M., Gunasekara, A., Pallant, J. L., Pallant, J. I., & Pechenkina, E. (2023). Generative AI and the future of education: Ragnarök or reformation? A paradoxical perspective from management educators. The International Journal of Management Education, 21(2), Article 100790.
Lodge, J. M., Howard, S., & Bearman, M. (2023). Assessment reform for the age of artificial intelligence. Tertiary Education Quality and Standards Agency.
Meakin, L. A. (2024). Embracing generative AI in the classroom whilst being mindful of academic integrity. In S. Mahmud (Ed.), Academic integrity in the age of artificial intelligence (pp. 58–77). IGI Global.
Moorhouse, B. L., Yeo, M. A., & Wan, Y. (2023). Generative AI tools and assessment: Guidelines of the world's top-ranking universities. Computers and Education Open, 5, Article 100151.
Ng, D. T. K., Leung, J. K. L., Chu, S. K. W., & Qiao, M. S. (2021). Conceptualizing AI literacy: An exploratory review. Computers and Education: Artificial Intelligence, 2, Article 100041.
Ozcan, S., Sekhon, J., & Ozcan, O. (2023). ChatGPT: What the law says about who owns the copyright of AI-generated content. The Conversation.
Perrotta, C., Selwyn, N., & Ewin, C. (2022). Artificial intelligence and the affective labour of understanding: The intimate moderation of a language model. New Media & Society, 26(3), 1585–1609.
Ratten, V., & Jones, P. (2023). Generative artificial intelligence (ChatGPT): Implications for management educators. The International Journal of Management Education, 21(3), Article 100857.
Rudolph, J., & Tan, S. (2022). The war in Ukraine as an opportunity to teach critical thinking. Journal of Applied Learning and Teaching, 5(1), 165–173.
Salinas-Navarro, D. E., Vilalta-Perdomo, E., Michel-Villarreal, R., & Montesinos, L. (2024). Using generative artificial intelligence tools to explain and enhance experiential learning for authentic assessment. Education Sciences, 14(1), Article 83.
Saroyan, A. (2022). Fostering creativity and critical thinking in university teaching and learning: Considerations for academics and their professional learning. OECD Education Working Papers, 280.
Selwyn, N. (2022). The future of AI and education: Some cautionary notes. European Journal of Education, 57(4), 620–631.
Smolansky, A., Cram, A., Raduescu, C., Zeivots, S., Huber, E., & Kizilcec, R. F. (2023). Educator and student perspectives on the impact of generative AI on assessments in higher education. Proceedings of the tenth ACM conference on Learning @ Scale. ACM Digital Library.
Su, Y., Lin, Y., & Lai, C. (2023). Collaborating with ChatGPT in argumentative writing classrooms. Assessing Writing, 57, Article 100752.
Su, J., Zhong, Y., & Ng, D. T. K. (2022). A meta-review of literature on educational approaches for teaching AI at the K-12 levels in the Asia-Pacific region. Computers and Education: Artificial Intelligence, 3, Article 100065.
Sun, L., Wei, M., Sun, Y., Suh, Y. J., Shen, L., & Yang, S. (2023). Smiling women pitching down: Auditing representational and presentational gender biases in image generative AI. Journal of Computer-Mediated Communication, 29(1), Article zmad045.
Thanh, B. N., Vo, D. T. H., Nhat, M. N., Pham, T. T. T., Trung, H. T., & Xuan, S. H. (2023). Race with the machines: Assessing the capability of generative AI in solving authentic assessments. Australasian Journal of Educational Technology, 39(5), 59–81.
Westfall, C. (2024, November 8). MBA program of the year offers 10 career skills for working with AI. Forbes.
Wu, T., & Chang, M. (2023). Application of generative artificial intelligence to assessment and curriculum design for project-based learning. 2023 International Conference on Engineering and Emerging Technologies (ICEET). IEEE.
Zhai, X. (2022). ChatGPT user experience: Implications for education. SSRN.