Three Principles for Integrating AI Tools into Pedagogy

Abhilash Panthagani, Associate Director of Strategic Research, and Benjamin Zevallos, Research Analyst, at EAB identify three steps HE leaders could take to help incorporate new AI tools.


Higher education is closing in on a full academic year with generative AI tools. Students have incorporated AI tools into how they approach even their most basic assignments: 89% report using ChatGPT to help with a homework assignment and 48% admit to using it for an at-home test or quiz.

Sensibly, higher education leaders want to move past AI academic integrity concerns and integrate AI tools into teaching and learning. Many are now revising or creating policies to address AI-related issues and provide a vision for integrating AI into pedagogy.

Our research team at EAB reviewed over 30 higher education AI academic guidelines – from informal documents released by teaching and learning centres to formal directives – and spoke with dozens of higher education institutions to identify three steps academic leaders must take to help instructors incorporate these new tools.

1. Discourage the use of AI plagiarism detectors 

Misguided attempts to catch AI-enabled cheating have already led to nightmare scenarios for students and academics. AI detection software has proven unreliable, recording high rates of false positives and negatives and exhibiting bias against non-native English speakers. OpenAI scrapped its own detection platform because of poor performance. Even if a tool with 99% accuracy existed, any risk of false accusation would be at odds with the spirit of education.
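To see why even a highly accurate detector remains risky at scale, consider a back-of-the-envelope calculation. The sketch below uses illustrative figures – the submission count and accuracy rate are assumptions, not EAB data:

```python
# Base-rate sketch: a detector that correctly clears 99% of honest work
# still flags the remaining 1% as AI-generated. All figures here are
# illustrative assumptions, not survey data.

honest_submissions = 10_000   # assumed: human-written assignments per term
specificity = 0.99            # assumed: share of honest work correctly cleared

expected_false_accusations = honest_submissions * (1 - specificity)
print(f"Expected wrongly flagged submissions: {expected_false_accusations:.0f}")
# -> Expected wrongly flagged submissions: 100
```

Under these assumptions, the residual 1% translates into a hundred wrongful accusations in a single term – a risk that cannot simply be engineered away.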

Academic leaders should keep the following points in mind when distancing themselves from AI detectors:

1. The products are unreliable;
2. Detection software is biased against specific segments of learners; and
3. As AI changes, detection software cannot keep up.

2. Trust academics to define AI tool use within their modules but push them to set clear guidelines

Instructors must clearly state what constitutes appropriate AI use for students on a module-by-module or even assignment-by-assignment basis. Students are less likely to follow rules on AI tool usage if those rules are not clearly defined in module syllabi and assignment rubrics (eg, some students might not consider using ChatGPT plagiarism because its output does not come from a real person or source).

As just one example, King’s College London provides AI usage guidelines at the macro-level (institution-wide), meso-level (department or programme), and micro-level (lecturer). KCL encourages academics to explore the AI landscape alongside students, set clear expectations for AI tool usage in the module assignments, and consider how AI could improve their productivity.

For more examples, instructors can consult the crowdsourced Syllabi Policies for AI Generative Tools document, where colleagues across institutions and disciplines share their AI policies.

3. Facilitate ongoing experimentation with AI in pedagogy

Firstly, as lecturers grapple with AI tools’ role in their modules, they should review student engagement and performance as reflected in summative (eg, essays, exams) and formative (eg, quizzes, in-class activities) assessments. Lecturers can also survey students directly to gather feedback on how to better accommodate and incorporate AI into assessments.

Secondly, universities should dedicate hands-on training time to build instructor comfort with teaching with AI. Below are three examples of institutions training lecturers to teach with AI tools:

- AI Tools Coursework Consultations: The Business School’s IDEA Lab at Imperial College London works with instructors to stress-test how AI tools could be used to complete coursework so that lecturers can redesign assessments to accommodate effective and appropriate use of AI tools. The stress test evaluates the use of AI tools in instruction and assessments against parameters such as accuracy, clarity, relevance, compliance, referencing, and ease of use.

- Online Training Courses: Auburn University developed a Teaching with AI online Canvas course for lecturers, which over 600 have already completed. The self-paced course allows instructors to experiment with AI tools and redesign assessments for their own courses while receiving feedback.

- Discipline-Specific Workshops: The University of Mississippi launched a three-day AI Summer Institute for Teachers of Writing, with a $500 stipend for participants. In this workshop, 18 lecturers attended sessions led by colleagues and produced deliverables to help them incorporate generative AI tools into their courses.

Finally, universities should provide seed funding to encourage instructors to experiment with AI tools in the classroom. As part of its Initiative on Pedagogical Uses of Artificial Intelligence (IPAI), Georgetown University funded about 40 project proposals (chosen from 100 submitted) to develop and test innovative uses of AI across all types of classroom settings.

Although generative AI’s influence on pedagogy is understandably a central focus of many higher education institutions, EAB’s AI infographic outlines twelve additional opportunities to unlock AI’s potential in and beyond the classroom. AI-driven innovations present numerous possibilities for higher education institutions, including providing personalised student support, supercharging academic and staff productivity, and optimising operations.

EAB is a proud sponsor of the AHUA Spring Conference 2024.
