In the first week of September 2025, the UNESCO Digital Learning Week (https://www.unesco.org/en/weeks/digital-learning) took place. The event aims to foster critical reflection, peer learning and policy dialogue, addressing today’s pressing challenges while shaping the future of education in an era of rapid technological change. This year, discussions explored whether and how Artificial Intelligence (AI) is disrupting education, navigated the complex dilemmas it presents, and identified strategic directions to harness its potential responsibly.
Two members of our Digital First project, Lidija Kralj and Francisco Bellas, were invited to give a short talk about Explainable AI at this highly relevant event. Specifically, they presented the key ideas of the report on this topic that they authored as members of the European Digital Education Hub, recently published by the European Commission (https://data.europa.eu/doi/10.2797/6780469).

What is Explainable AI (XAI) in Education?
Artificial Intelligence (AI) is transforming the way we learn, teach, and make decisions. However, as AI systems become increasingly present in classrooms through adaptive learning platforms, automated feedback tools, or lesson plan generators, one of the main challenges educators face is understanding how these systems actually reach their decisions. This challenge is often referred to as the ‘black box’ problem. Explainable Artificial Intelligence (XAI) aims to address it by making AI’s decision-making processes transparent, understandable, and trustworthy. In education, XAI is not only a technical requirement but also a pedagogical necessity.
When students and teachers understand why an AI tool recommends a particular learning task or gives a particular piece of feedback, they can use that information critically. XAI thus promotes accountability, fairness, and human agency, allowing educators to maintain control over decisions while leveraging AI to personalise learning. For learners, explainability enhances self-awareness and reflection, helping them understand how their actions and data shape AI-driven outcomes. In short, XAI supports ethical, transparent, and inclusive use of AI in education.
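To make this idea concrete, the short Python sketch below illustrates what a minimal ‘explainable’ recommendation could look like: alongside the suggested next learning task, the system returns the human-readable rules that drove the suggestion. The feature names, thresholds, and task labels are purely illustrative assumptions, not taken from any real platform or from the report.

```python
# Minimal, purely illustrative sketch of an explainable recommendation.
# Feature names, thresholds, and task labels are hypothetical assumptions,
# not drawn from any real learning platform or from the EC report.

def recommend_next_task(quiz_score: float, minutes_on_task: float) -> dict:
    """Suggest a next learning task and list the rules that fired."""
    reasons = []

    if quiz_score < 0.5:
        task = "review: guided worked examples"
        reasons.append(
            f"quiz score {quiz_score:.0%} is below the 50% review threshold"
        )
    elif minutes_on_task > 40:
        task = "consolidation: short mixed practice"
        reasons.append(
            f"time on task ({minutes_on_task:.0f} min) exceeds the 40-minute fatigue threshold"
        )
    else:
        task = "extension: open-ended challenge problem"
        reasons.append(
            f"quiz score {quiz_score:.0%} and time on task ({minutes_on_task:.0f} min) "
            "suggest readiness to advance"
        )

    # Returning the reasons alongside the recommendation is what lets a
    # teacher or student inspect, question, and override the decision.
    return {"recommended_task": task, "because": reasons}


if __name__ == "__main__":
    decision = recommend_next_task(quiz_score=0.42, minutes_on_task=25)
    print(decision["recommended_task"])
    for reason in decision["because"]:
        print(" -", reason)
```

Real adaptive platforms are, of course, far more complex than a handful of rules, but the design principle is the same: whatever model produces the recommendation, the system should also surface an explanation that students and teachers can read, question, and act upon.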
From Digital First to Explainable AI: A Shared Vision
Our Digital First project proposes a shift from a purely structural view of informatics education, focused on technical rules and coding, to a functional view, where digital skills are used meaningfully to communicate, solve problems, and express creativity. This functionalist vision aligns naturally with XAI. Just as Digital First treats informatics as a ‘first language’ of the digital generation, XAI ensures that AI systems communicate their reasoning in ways that humans can understand.
Halliday’s seven functions of language (personal, representational, interactional, instrumental, imaginative, heuristic, and regulatory) offer an inspiring framework for connecting Digital First with XAI. Each function can correspond to a key educational value in AI literacy, for instance:
– Personal function: Students use AI tools to express their ideas and emotions while understanding how algorithms interpret them.
– Representational function: XAI allows learners to grasp how information is processed and represented by AI, fostering data literacy.
– Heuristic function: Learners use XAI to explore, question, and understand how AI models work, promoting curiosity and inquiry.
– Regulatory function: Understanding AI decision boundaries helps students develop digital ethics and responsibility.

In this sense, XAI becomes the interpretive bridge of the Digital First pedagogy: it empowers students not only to use AI but also to question, adjust, and act upon its outputs. This reinforces key AI competences identified by UNESCO and the European Digital Education Hub, such as “critical thinking”, “ethical awareness”, and “student agency”. Learners are no longer passive recipients of AI-driven recommendations, but active co-designers of their learning journeys.
Educators, too, gain agency through explainable systems. When AI platforms explain their feedback, for example why a student received a certain score or how a learning path was personalised, teachers can make informed pedagogical decisions. This transparency strengthens trust and collaboration between humans and intelligent systems, aligning with the principle of ‘trust by design’ in education.
Ultimately, integrating the Digital First approach with Explainable AI leads to an education that is both technologically advanced and deeply human-centred. It ensures that students learn not just to use AI tools but to understand and shape them, turning digital literacy into digital wisdom. Making real progress towards this goal implies co-designing AI systems for education, with XAI features that take all the actors involved into account.

