Glossary of Terms
AI Concepts
Artificial Intelligence (AI)
A set of computational techniques that enable machines to perform tasks typically requiring human intelligence, such as pattern recognition, prediction, classification, or language processing.
Machine Learning (ML)
A subset of AI that involves training computational models to detect patterns and make predictions based on data, improving performance over time without explicit programming.
Natural Language Processing (NLP)
An AI technique that enables machines to interpret, generate, and analyse human language. Tools that use NLP include chatbots and plagiarism detectors.
Large Language Model (LLM)
A type of AI model trained on massive datasets of text to predict and generate human-like language. LLMs power tools such as ChatGPT, Claude, and Gemini.
Generative AI (GenAI)
AI systems that produce new content (text, images, or code) based on patterns learned from large datasets. Examples include ChatGPT, Claude, and Gemini.
Hallucination
An instance in which an AI model generates incorrect or fabricated information that nonetheless appears convincing and fluent.
Learning & Teaching
Adaptive Learning
An instructional approach in which digital platforms adjust pacing, difficulty, or content pathways based on a learner’s performance and engagement patterns.
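One way to make this concrete is a toy difficulty-adjustment rule. The sketch below is purely illustrative, not how any particular platform works: it raises the difficulty level after a strong run of correct answers and lowers it after a weak one. The function name, thresholds, and 1–10 scale are all hypothetical.

```python
def next_difficulty(current: int, recent_correct: list[bool]) -> int:
    """Adjust a 1-10 difficulty level from a learner's recent answers.

    Illustrative rule: raise difficulty when recent accuracy is high,
    lower it when accuracy is low, otherwise keep it unchanged.
    """
    accuracy = sum(recent_correct) / len(recent_correct)
    if accuracy >= 0.8:
        return min(current + 1, 10)  # step up, capped at the top level
    if accuracy < 0.5:
        return max(current - 1, 1)   # step down, floored at the easiest level
    return current
```

Real adaptive platforms use far richer models (e.g., item response theory), but the core loop is the same: observe performance, then adjust the pathway.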
Learning Analytics
The measurement, collection, and analysis of learner data to understand and improve learning processes.
Learning Management System (LMS) AI Features
AI enhancements embedded into LMS platforms (e.g., Moodle, Canvas, Blackboard) such as personalised recommendations, activity predictions, or automated reminders.
Teacher-AI Collaboration
The partnership between educators and AI systems where AI handles repetitive or analytical tasks, while teachers provide judgement, context, and relational guidance.
Automated Feedback Tools
AI systems that give students instant comments on writing, quizzes, or assignments, often highlighting errors or suggesting improvements.
Early Warning System
Predictive models that identify students at risk of disengagement or poor performance based on behaviour patterns or historical data.
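A minimal sketch of the idea, assuming two hypothetical engagement signals (weekly logins and average score) and simple thresholds; production systems use trained predictive models rather than fixed rules, and the field names here are invented for illustration.

```python
def flag_at_risk(students: list[dict],
                 min_logins: int = 3,
                 min_score: float = 50.0) -> list[str]:
    """Return names of students whose engagement falls below thresholds.

    Each student dict is assumed to carry 'name', 'weekly_logins', and
    'avg_score' keys (hypothetical field names for this sketch).
    """
    return [
        s["name"]
        for s in students
        if s["weekly_logins"] < min_logins or s["avg_score"] < min_score
    ]
```

Even this crude rule illustrates the governance questions such systems raise: who sees the flags, and what intervention follows.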
Data & Governance
Data Privacy
Policies and practices that govern how student data is collected, stored, accessed, and protected in digital learning environments.
Data-Driven Insights
Findings generated from analysing large datasets (e.g., assessment logs, engagement patterns) to guide instructional or institutional decision-making.
Algorithmic Bias
Systematic errors in AI outputs caused by imbalanced, inaccurate, or unrepresentative training data, which may result in unfair treatment of certain groups or learners.
Ethical Data Governance
Policies and practices that ensure responsible collection, storage, analysis, and use of learner data, prioritising transparency, consent, security, and fairness.
Human Oversight
The principle that AI systems should be guided, verified, and supervised by humans, with humans retaining responsibility for final decisions.
Integrity & Assessment
Plagiarism Detection
The use of AI-based tools to identify copied, paraphrased, or machine-generated content by analysing linguistic patterns, similarity indices, and semantic features.
Academic Integrity
Standards of honesty, originality, and ethical behaviour in academic work. In AI-supported environments, this includes transparent authorship and responsible tool use.
AI Misuse
Using AI tools in ways that violate academic integrity guidelines, such as submitting uncredited AI-generated work or bypassing learning processes.
False Positives
Detection errors in which legitimate, original work is incorrectly flagged as plagiarised or AI-generated.
False Negatives
Detection errors in which plagiarised or AI-generated work is incorrectly classified as original.
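The two error types above can be expressed as simple rates. The sketch below assumes hypothetical counts from a binary detector (e.g., an AI-writing classifier); the numbers in the usage note are invented for illustration.

```python
def detector_error_rates(tp: int, fp: int, tn: int, fn: int) -> dict[str, float]:
    """Compute error rates for a binary detector.

    tp: AI-generated/plagiarised work correctly flagged
    fp: original work wrongly flagged (false positive)
    tn: original work correctly cleared
    fn: AI-generated/plagiarised work wrongly cleared (false negative)
    """
    return {
        # share of genuinely original submissions that were wrongly flagged
        "false_positive_rate": fp / (fp + tn) if fp + tn else 0.0,
        # share of genuinely problematic submissions that slipped through
        "false_negative_rate": fn / (fn + tp) if fn + tp else 0.0,
    }
```

For example, with 5 original submissions wrongly flagged out of 100, the false positive rate is 0.05 — small in percentage terms, but each one is a student wrongly accused, which is why these rates matter for policy.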
Similarity Index
A numerical score indicating how closely a text matches other sources, used in similarity-detection systems.
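One simple way such a score could be computed is Jaccard similarity over word n-grams — a sketch only, since commercial similarity checkers use proprietary matching against large source databases.

```python
def ngrams(text: str, n: int = 3) -> set[tuple[str, ...]]:
    """Split text into overlapping word n-grams (here, word triples)."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def similarity_index(submission: str, source: str, n: int = 3) -> float:
    """Jaccard similarity between the n-gram sets of two texts, from 0.0 to 1.0."""
    a, b = ngrams(submission, n), ngrams(source, n)
    if not a or not b:
        return 0.0
    # shared n-grams divided by all distinct n-grams across both texts
    return len(a & b) / len(a | b)
```

Two texts differing in one word score well below 1.0, while identical passages score exactly 1.0 — which is why such indices flag overlap but still require human judgement to interpret.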
Semantic Analysis
A method used by modern AI detectors that examines meaning and writing patterns rather than exact wording, which makes it better suited to detecting paraphrased or AI-generated content.