Paderborn University leads EU research project on explainable artificial intelligence

Artificial intelligence (AI) has become an integral part of our lives. It has given rise to smart assistants that take on tasks that would otherwise take humans a great deal of time and effort, for example in medicine, business and industry. To do this, smart assistants require vast amounts of data. 'Knowledge graphs' are one of the preferred mechanisms for representing data here, because they can be understood by both humans and machines and ensure that information is processed logically. They are considered key for a number of popular technologies such as Internet search engines and personal digital assistants. However, existing machine learning approaches for knowledge graphs still have some shortcomings, in particular with respect to scalability, consistency and completeness. A further problem is that they do not meet the human need for comprehensibility. Researchers at Paderborn University are now working on a large-scale research project to develop explainable machine learning for large-scale knowledge graphs. The National Center for Scientific Research 'Demokritos' in Greece, the European Union Satellite Centre (SatCen) in Spain, the University of Amsterdam in the Netherlands as well as the companies DATEV and webLyzard technology are also involved in the ENEXA* project. The research is being funded for a period of three years to the tune of around €4 million as part of the EU's Horizon Europe programme.
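
To illustrate the idea, a knowledge graph can be viewed as a set of subject-predicate-object triples that both humans and machines can read and reason over. The short Python sketch below uses made-up facts and a hypothetical query helper; it is a generic illustration, not part of ENEXA or its software:

# A tiny knowledge graph as a set of (subject, predicate, object) triples.
# The entities and relations are hypothetical examples.
triples = {
    ("Berlin", "isCapitalOf", "Germany"),
    ("Germany", "memberOf", "EuropeanUnion"),
    ("ENEXA", "fundedBy", "HorizonEurope"),
}

def query(subject=None, predicate=None, obj=None):
    # Return all triples matching the given (possibly partial) pattern.
    return [
        (s, p, o)
        for (s, p, o) in triples
        if (subject is None or s == subject)
        and (predicate is None or p == predicate)
        and (obj is None or o == obj)
    ]

# Which facts does the graph contain about Germany?
print(query(subject="Germany"))

Because every fact is an explicit, named relation between entities, models that operate on such graphs can in principle point back to the triples supporting a prediction, which is part of what makes knowledge graphs attractive for explainable AI.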

Explainability of artificial intelligence

"Current machine learning-based explanation approaches are often based on a one-off process in which the AI does not take into account whether the human receiving the explanation has really understood what is being explained," says Professor Axel-Cyrille Ngonga Ngomo, who heads the Data Science group in the Department of Computer Science at Paderborn University. In other words, there is no conversation between sender and recipient. Ngonga adds: "The problem can be overcome through the co-construction of explanations, whereby the addressees, i.e. the humans, are more involved in the AI-driven explanation process, with explanations not only produced for them, but with them."

Human-centred: Machine learning for large-scale applications

The concept of co-construction has not yet been used for knowledge graphs. The researchers have therefore set themselves the goal of developing explainable machine learning approaches for particularly large knowledge graphs, with the focus on the rapid computation of models and human-centred explanations. Ngonga speaks of pioneering work: "To achieve this goal, ENEXA will devise novel hybrid machine learning approaches that are able to exploit multiple representations of knowledge graphs in a concurrent fashion. The solutions developed will meet real-world runtime requirements and make explainable machine learning accessible for large-scale applications such as Internet search engines, accounting, brand marketing, and the predictive analysis of satellite imagery. By using hybrid machine learning for large knowledge graphs and for explaining these, ENEXA will be leading the way in the implementation of explanatory models from sociology and psychology in machine learning." This is important because people often have to make decisions without always being clear on the facts, which can then have far-reaching consequences.

Benefits for industry