From Buzzwords to Building Blocks
Educators have heard the promises. AI will personalize learning, automate grading, and give teachers more time. They have also heard the fears. AI will replace creativity, distort truth, and erode trust.
This module is designed to cut through both. It explains what generative AI actually does, what it can and cannot do in education today, and how responsible leadership can turn hype into human-centred progress.
Key Topics:
What is Generative AI, really?
Foundation models explained simply
Text, image, and multimodal AI in teaching and learning
Why education is a special case (children, duty of care, long-term impact)
Core concepts: prompting, grounding, training data, hallucinations
Examples: ChatGPT in English class, image generators in art projects, data patterning in assessment
Outcome: Leaders will be able to explain core genAI concepts clearly to staff, students, and parents, and begin identifying where the biggest opportunities and risks may lie.
Avoiding Costly Mistakes and Unproven Promises
AI is not a plug-and-play solution; it is a partnership. Every school that brings AI into its ecosystem becomes part of a complex web of data flows, vendor contracts, and implicit trust agreements. Understanding what lies behind each platform is no longer optional. It is a core leadership competency.
This module examines how to evaluate AI systems for educational use, how consent and data rights have changed under new global laws, and how to prevent the rise of unapproved “ghost AI” tools that can compromise both compliance and credibility.
Key Topics:
AI systems: infrastructure, models, agents, apps
Understanding how tools like Gemini, ChatGPT, Claude, and others fit in
Procurement red flags and the edtech marketing trap
Vendor accountability and compliance (EU AI Act, GDPR, COPPA, etc.)
Examples: The difference between a chatbot and a grounded tutoring system, how vendor hype can lead to tech debt
Outcome: Leaders will confidently engage with vendors, ask the right questions, and avoid procurement traps.
From Experiments to Educational Outcomes
AI in schools is entering its experimental phase. Teachers are trying prompts, administrators are drafting policies, and IT teams are wrestling with integration. What distinguishes early curiosity from professional innovation is one thing: reliability.
This module shows how to move from “trying AI” to using it responsibly for measurable learning outcomes. It explains how prompting works, how to ground AI in verified data, and how to avoid the well-known trap of hallucination. It ends with concrete examples that any school can reproduce safely.
Key Topics:
Prompt engineering basics: zero-shot, few-shot, role prompting, etc.
Grounding explained with education examples (e.g., RAG with school policies)
Avoiding hallucinations and maintaining accuracy
Tools and practices for safe deployment in schools (Vertex AI, Claude RAG, etc.)
Hands-on examples: Writing an accurate AI-generated school newsletter, grounding AI responses in local curriculum
Outcome: Leaders understand how to guide their teams in safe, effective use of AI and can design pilot projects with strong guardrails.
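The retrieve-then-prompt pattern behind grounding can be sketched in a few lines. The snippet below is a minimal illustration only: the policy texts, function names, and keyword-overlap "retrieval" are placeholders invented for this sketch, not a real product API. A production deployment (e.g., on Vertex AI or a Claude-based RAG pipeline) would use a vector store and a model API; the point here is just the shape of the pattern leaders should recognise.

```python
# Minimal sketch of "grounding": retrieve relevant school documents,
# then build a prompt that restricts the model to those documents.
# All policy snippets below are hypothetical examples.

POLICY_SNIPPETS = [
    "Homework policy: students in years 7-9 receive at most 60 minutes of homework per night.",
    "Device policy: personal phones must be stored in lockers during lesson time.",
    "AI policy: AI-generated text must be disclosed and checked by the student before submission.",
]

def retrieve(question: str, documents: list[str], top_k: int = 1) -> list[str]:
    """Rank documents by word overlap with the question (a stand-in for vector search)."""
    q_words = set(question.lower().split())
    scored = sorted(documents,
                    key=lambda d: len(q_words & set(d.lower().split())),
                    reverse=True)
    return scored[:top_k]

def grounded_prompt(question: str, documents: list[str]) -> str:
    """Assemble a prompt that tells the model to answer only from retrieved context."""
    context = "\n".join(retrieve(question, documents))
    return (
        "Answer ONLY from the school documents below. "
        "If the answer is not in them, say you don't know.\n\n"
        f"Documents:\n{context}\n\nQuestion: {question}\nAnswer:"
    )

prompt = grounded_prompt("How much homework do year 8 students get?", POLICY_SNIPPETS)
print(prompt)
```

The "answer only from the documents, otherwise say you don't know" instruction is the guardrail against hallucination; the quality of the retrieval step determines whether the model even sees the right policy to quote.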
AI Leadership Is People Leadership
Driving Change Ethically, Collaboratively, and Sustainably
AI leadership is not about technology; it is about people. The future of AI in education will depend less on the power of algorithms and more on the strength of institutional culture: the ability of leaders to build trust, reduce fear, and align innovation with mission and values.
This module focuses on the human side of AI governance: how to create a culture that welcomes responsible innovation, how to manage resistance without coercion, and how to lead with clarity in a period of uncertainty.
Key Topics:
Digital culture, responsible innovation, and collective vision
Building trust and transparency across staff and students
Managing resistance and fear of AI
Aligning AI use with pedagogy, mission, and values
Examples: Leading whole-school innovation with teacher voice, student agency, and inclusive governance
Outcome: Leaders are equipped to drive change ethically, collaboratively, and sustainably.
Because AI Leadership Is Also Safeguarding
From Innovation Management to Institutional Accountability
For two decades, digital innovation in schools has focused on integration: adding tools, building systems, connecting classrooms. AI changes the equation. The question is no longer how to use technology, but how to govern it.
AI leadership is safeguarding. It requires a new literacy among leaders: understanding risk, accountability, and the human consequences of digital decisions. This module explores the legal, ethical, and structural foundations of AI governance in education, and what must now change in how institutions manage technology, privacy, and power.
Key Topics:
What leaders must know about the EU AI Act, GDPR, and child rights
Roles and responsibilities: who should sign off, monitor, and audit?
Risk management: data security, student privacy, explainability
Scenario walkthroughs: what to do when something goes wrong
Building a governance framework: the AIGO model
Outcome: Leaders understand their legal, ethical, and safeguarding duties and are prepared to implement governance structures.
Every generation of technology has tested education’s ability to protect students and staff. The internet brought cyberbullying, social media brought reputational harm, and AI now brings synthetic deception: images, voices, and videos that can manipulate truth itself.
Leaders must now prepare for a reality where not everything seen or heard can be trusted. This module explains the major emerging risks from generative AI, how these threats manifest in schools, and how to build a crisis and safeguarding response that keeps human verification at the center.
AI leadership does not require you to code; it requires you to understand enough to lead wisely.
This module demystifies the technology that drives today’s generative systems and explains how those components connect to your real responsibilities: governance, budgets, security, and strategy.
Final assessment.