TL;DR Over 90% of schools and universities lack formal policies to govern the use of advanced AI tools in student assignments and assessments, a situation that potentially enables academic dishonesty and creates an unfair advantage. Establishing such policies would prevent misuse of AI technologies, promote fairness, and maintain the integrity of student evaluations.
While AI has many beneficial uses in education, its ability to generate undetectable answers for homework and exams poses a major threat to meaningful student assessment. Studies show that AI can now ace most tests while remaining indistinguishable from human work, bypassing "AI detectors". This severely undermines the validity of evaluating student skills and learning through assignments and exams.
As AI's ability to complete assessments rapidly improves, schools urgently need clear policies governing AI use on student work to preserve the integrity of testing comprehension. Although AI offers positives, unchecked use for generating answers disrupts core evaluation. This issue represents one of the biggest concerns for schools, as AI fundamentally jeopardizes the validity of student assessment.
What is an AI Policy for Assessment
An effective AI policy for student assessment clarifies the following points for both students and faculty:
- Whether any AI use is permitted when submitting assignments.
- What types of AI use are acceptable. AI comes in many forms, including:
  - AI to refine existing ideas by improving sentence structure, grammar, etc.
  - AI to generate entire answers by pasting questions into ChatGPT.
  - AI to gather additional information to help answer questions.
- The allowed extent of AI use, such as the percentage of an assignment that can be AI-generated.
- How students can transparently share information about their AI use (e.g. prompts used) with instructors.
An explicit AI policy allows all parties to understand boundaries and expectations around AI in coursework. The policy should cover the spectrum of potential AI uses, ranging from minor editing to wholly AI-generated content, and specify what is permitted versus prohibited.
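To make the spectrum above concrete, the policy elements can be thought of as a small, machine-readable structure. The sketch below is purely illustrative (the class names, fields, and check logic are assumptions for this example, not Rumi's actual schema or any standard format); it shows how a per-assignment policy might encode permitted use types, an AI-generated content cap, and a disclosure requirement:

```python
from dataclasses import dataclass, field
from enum import Enum

class AIUse(Enum):
    """Categories of AI use, mirroring the list above (illustrative)."""
    REFINE = "refine"      # improving sentence structure, grammar, etc.
    GENERATE = "generate"  # pasting questions into a chatbot for full answers
    RESEARCH = "research"  # gathering additional background information

@dataclass
class AIPolicy:
    """A hypothetical machine-readable assignment policy (illustrative only)."""
    ai_permitted: bool                      # is any AI use allowed at all?
    allowed_uses: set[AIUse] = field(default_factory=set)
    max_ai_generated_pct: int = 0           # cap on AI-generated content
    require_prompt_disclosure: bool = True  # must students share prompts used?

    def is_compliant(self, uses: set[AIUse], ai_generated_pct: int) -> bool:
        """Check a submission's declared AI use against this policy."""
        if not self.ai_permitted:
            return not uses and ai_generated_pct == 0
        return (uses <= self.allowed_uses
                and ai_generated_pct <= self.max_ai_generated_pct)

# Example: editing help allowed up to 20%, full answer generation prohibited.
policy = AIPolicy(ai_permitted=True,
                  allowed_uses={AIUse.REFINE},
                  max_ai_generated_pct=20)
print(policy.is_compliant({AIUse.REFINE}, ai_generated_pct=10))    # True
print(policy.is_compliant({AIUse.GENERATE}, ai_generated_pct=80))  # False
```

Writing the policy down this explicitly, even just in prose, forces the same decisions the code makes visible: which uses are in bounds, how much is too much, and what students must disclose.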
Why schools need an AI Policy for Assessment
Educational institutions need clear policies regarding student access to AI tools when completing assessments. Without explicit guidelines, students and faculty face ambiguity that can lead to negative consequences such as:
- If some instructors are okay with AI use but the school does not have clear guidelines, some students will utilize AI while others will not, creating an unfair advantage based on AI access rather than mastery of course material. Students with the newest AI tools (e.g. GPT-4) could greatly outperform peers.
- If some instructors prohibit AI use or are unclear about it, students may be penalized for usage despite the policy never being clearly conveyed.
Schools require well-defined policies that explicitly state the allowed AI uses, if any, on student assessments. These policies must be clearly communicated to eliminate confusion and unintended violations. Ambiguous policies risk penalizing students, enabling AI-related academic dishonesty, and undermining the validity of student evaluation and grading.
How many schools have an AI Policy
A recent UNESCO global survey of over 450 schools and universities uncovered that less than 10% have established formal policies or guidance regarding the use of generative AI applications like ChatGPT. The comprehensive survey included a diverse sample of higher education institutions across multiple countries.
The results reveal a significant gap, with the vast majority (over 90%) of surveyed schools and universities lacking any official policies to address the rapid emergence of generative AI. Without clear institutional rules or procedures to guide responsible and ethical AI use in academics, students and faculty are operating in ambiguity that invites misconduct.
How Rumi can Help
Rumi helps instructors not only set an AI policy for an assignment but also enforce it. See the demo below to understand how it works.