Scientists and companies join forces to 'teach' AI to explain itself

The European H2020 project NL4XAI addresses the question of how the quality of algorithms should be assessed in the emerging field of Explainable Artificial Intelligence (XAI), i.e. machines capable of justifying and explaining their decisions. The aim is to train the first generation of experts in this field.

Image of the NL4XAI international meeting on 15-16 December in Utrecht (The Netherlands).

How should the quality of algorithms be assessed? How does artificial intelligence explain itself? These questions define an emerging, cutting-edge area of research known as Explainable Artificial Intelligence (usually abbreviated as XAI), a field that faces the challenge of making AI systems self-explanatory.

Teams from the CSIC's Artificial Intelligence Research Institute (IIIA-CSIC) are participating in this project together with groups from European universities in the UK (Aberdeen), Ireland (Dublin) and the Netherlands (Tilburg, Utrecht), as well as representatives from industry (Philips and Trivago, among others).

"A self-explanatory AI is one that is able to justify its decisions or recommendations. Simply put, one that is able to answer questions like 'why...' in a human-understandable way," explains Carles Sierra, research professor at the IIIA-CSIC.

But this is not so easy. In the same way that humans do not always know how to explain what they know (because much of their knowledge is implicit, acquired unconsciously from the environment or by inheritance), something similar happens to Artificial Intelligence (AI) systems, most of which learn automatically from data: they cannot justify why they know what they know.
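
The contrast can be sketched with a toy model that learns from data (again, purely illustrative and not from the project): it ends up predicting well, but everything it 'knows' is stored as bare numbers that answer no 'why' question.

    # A one-weight linear model fitted by gradient descent on three
    # made-up (x, y) pairs. The fitted weight predicts well, but it is
    # just a number: the model holds no account of *why* y doubles x.
    data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]  # roughly y = 2x
    w = 0.0
    for _ in range(1000):
        for x, y in data:
            w -= 0.01 * 2 * (w * x - y) * x  # gradient step on squared error

    print(w)  # close to 2.0: accurate predictions, no justification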

However, the right to explanation is currently under debate, adds Carles Sierra, "as one of the digital rights to be recognised by legislation, especially when the decisions of an AI can significantly affect the lives of citizens".

"A self-explanatory AI is one that is able to justify its decisions or recommendations. Simply put, one that is able to answer questions like 'why...' in a human-understandable way,"

This is the aim of NL4XAI (which stands for Interactive Natural Language Technology for Explainable Artificial Intelligence), a research project funded by the European Union's Horizon 2020 programme. The project brings together some of Europe's leading researchers (from both academia and industry) in each of the topics covered, and has created a high-quality joint training programme that aims to train the first generation of experts in the field of Explainable Artificial Intelligence across the European continent.

At the core of the project are 11 PhD students (also known as Early-Stage Researchers, or ESRs), who will be challenged to make Artificial Intelligence systems explain themselves in order to better exploit these emerging techniques. The ESRs will also receive training in ethical and legal issues, as well as in cross-cutting skills.

Last December, the students met with scientists and industry professionals in Utrecht, the Netherlands, to receive training. They were also given hands-on experience with the design of evaluation experiments.

A central theme was the widely reported problem in the experimental sciences that experiments can be extremely difficult to replicate, for example because crucial details are missing from the research articles in which they are reported, or because of flaws in the statistical analysis.
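
One such statistical problem can be sketched with a small, purely illustrative simulation (not from the project): when statistical power is low, even a 'significant' original finding usually vanishes in an exact replication.

    # Two-group experiments with a small true effect and small samples.
    # Low power means a "significant" result rarely reappears on replication.
    import random
    import statistics

    def experiment(n=20, effect=0.3):
        """One experiment; True if |t| exceeds ~2 ("significant")."""
        control = [random.gauss(0.0, 1.0) for _ in range(n)]
        treated = [random.gauss(effect, 1.0) for _ in range(n)]
        se = ((statistics.variance(control) + statistics.variance(treated)) / n) ** 0.5
        return abs((statistics.mean(treated) - statistics.mean(control)) / se) > 2.0

    random.seed(0)
    originals = [experiment() for _ in range(10_000)]
    replications = [experiment() for _ in range(10_000)]
    hits = sum(originals)
    print(hits / 10_000)  # ~0.15: most single runs are not significant
    print(sum(o and r for o, r in zip(originals, replications)) / hits)  # ~0.15: replication rate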

The NL4XAI network brings together 19 beneficiaries and partners from six European countries (France, Malta, Poland, Spain, the Netherlands, and the UK). It is coordinated by the research team at the Research Centre in Intelligent Technology of the University of Santiago de Compostela (CiTIUS-USC), headed by Senén Barro.

The partners comprise two national research institutions (IIIA-CSIC, CNRS), eleven universities (University of Aberdeen, University of Dundee, L-Università ta’ Malta, Delft University of Technology, Utrecht University, University of Twente, Warsaw University of Technology, Université de Lorraine, Universidade de Santiago de Compostela, Universitat Autònoma de Barcelona, Maastricht University) and six private companies (Indra, Accenture, Orange, Wizenoze, Arria, InfoSupport).