Artificial intelligence: developing explanations together


After four years of intensive research, the Collaborative Research Centre/Transregio 318 "Constructing Explainability" is taking stock at the end of its first funding phase, which is funded by the German Research Foundation (DFG). In this interview, the two spokespersons, Prof Dr Katharina Rohlfing and Prof Dr Philipp Cimiano, share their key findings: What have they learnt about how artificial intelligence (AI) explains things? What challenges did bringing together researchers from different disciplines entail? And how has technological progress, for example through large language models such as ChatGPT, changed their work?

In the Collaborative Research Centre/Transregio 318, researchers from Bielefeld and Paderborn Universities are investigating how artificial intelligence can be understood and explained and how users can be actively involved in the explanation process. Under the title "Constructing Explainability", researchers from various disciplines are working together in 20 projects and six synthesis groups. The first funding phase runs until the end of the year.

1. What were the most important new findings on explainability in AI?

Katharina Rohlfing:
A central point of our approach is the assumption that current "explainable AI" systems have a fundamental flaw: they treat explanations as a one-way street. In other words, the machine explains, the person listens. Our research has helped to bring the recipient of an explanation into view as its addressee, because in real life, understanding is a two-way process: people talk to each other, ask questions, nod, look confused or gesticulate to show whether they have understood something. For this reason, we have developed a new framework that views explanation as a two-way process, similar to a conversation that evolves over time and is co-created by the participants. We call systems developed according to this framework "social explainable AI", or sXAI. They adapt their explanations in real time, depending on how the person reacts and what they consider relevant.

2. How did you test whether this model actually works?
Katharina Rohlfing:
We analysed real conversations in detail, for example how people explain things in everyday situations. We saw that although explanations often begin with a monologue, the "explainee", i.e. the person receiving an explanation, is usually actively involved: they ask questions, appear confused or signal their progress in understanding. This means that explaining is often a dialogue, not a monologue.

When analysing the explanation process in more detail, we also looked at how people use language and gestures to show what they understand and what needs to be elaborated on. We identified certain patterns that show how people build understanding together. We also investigated how they "scaffold" each other, i.e. provide step-by-step support - like a temporary scaffold that helps someone climb up. For example, an explainer can first show how to do something and then point out what to avoid. Negative instructions can be a helpful scaffold.

Philipp Cimiano:
Our computational work has focused on implementing our framework in AI systems. These systems react to the person they are explaining something to. They take three central aspects into account: co-operation (how well the interaction works), social appropriateness (how appropriately the system behaves) and understanding. A good example is the SNAPE system developed in project A01. It is sensitive to a person's reactions and adapts its explanation accordingly. It does not give the same explanation to everyone, but individualises it according to the situation.
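
To make the idea of an adaptive explanation loop more concrete, here is a minimal illustrative sketch in Python. It is not the actual SNAPE implementation; all names (Feedback, next_explanation_move) and the feedback signals are hypothetical and only stand in for the kind of partner reactions described above.

```python
from dataclasses import dataclass


@dataclass
class Feedback:
    """Signals the explainee sends back during the explanation (illustrative only)."""
    asked_question: bool = False
    looks_confused: bool = False
    signals_understanding: bool = False


def next_explanation_move(topic: str, detail_level: int, fb: Feedback) -> tuple[str, int]:
    """Choose the next explanation step based on the partner's reaction."""
    if fb.signals_understanding:
        # The interaction is working: move on to the next aspect.
        return f"Move on from '{topic}' to the next aspect.", detail_level
    if fb.looks_confused:
        # Scaffold: back up and explain more concretely.
        return f"Rephrase '{topic}' with a concrete example.", detail_level + 1
    if fb.asked_question:
        # Answer before continuing, so the explanation stays a dialogue.
        return f"Answer the question about '{topic}' before continuing.", detail_level
    return f"Continue explaining '{topic}' at the current level.", detail_level


# Example turn: the listener looks confused, so the system adds detail.
move, level = next_explanation_move("model confidence", 1, Feedback(looks_confused=True))
print(move, f"(detail level: {level})")
```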

3. Have you developed new methods to investigate explanations more effectively?
Philipp Cimiano:
Yes, we have found new ways to better investigate how explanations work. For example, we have developed new instruments to measure whether someone has understood something through an explanation or is left confused. And we have not limited our investigations to the laboratory. It was important for us to see how explainability works in everyday life, with different people in different situations. For example, we asked people what kind of AI systems they use on a daily basis and whether they would like explanations of how these systems work.

Katharina Rohlfing:
Our aim was to investigate how understanding develops - not just whether it happens. We therefore introduced a method in which the participants look back and describe their "aha moments" - in other words, the key moments when their understanding reached a turning point. We placed these moments at the centre of our analysis. Another method was to organise special workshops in which humans and AI work together to develop explanations. These new methods help us to gain deeper insights, not only into the process of explaining and understanding, but also into how to foster it and when explanations are really helpful.

4. What was particularly challenging in the first funding phase?
Philipp Cimiano:
The biggest challenge was bringing together people from very different disciplines such as computer science, linguistics and psychology. Each discipline has its own way of thinking and expressing itself. That's why we first had to develop a common language.

Another major challenge was the release of ChatGPT. It has changed a lot of research in the field of technology development and opened up new possibilities for all users. We therefore quickly set up a group that focussed on these new developments and derived new research projects from them.

5. How well did the interdisciplinary collaboration work?
Katharina Rohlfing:
I am proud to say that our interdisciplinarity is strong in many respects. Within individual projects, people from different specialisms work together, which gives the projects an interdisciplinary architecture. But we also work across projects, for example on our first book on Social XAI, which will be published this year. In addition, we work in groups on current topics that we consider relevant to the CRC, such as large language models. Regular meetings such as our CRC conferences, writing retreats and the so-called "Activity Afternoons" have probably also strengthened our collaboration. Of course, it is not always easy to integrate new members into this established culture, but we have created formats that facilitate this process.

6. What major challenges do you see for the future?
Philipp Cimiano:
Large language models such as ChatGPT are powerful, but they also have limitations: they often don't take the specific situation into account. They may explain things, but they don't really understand who is asking or why. In future, we will need systems that can adapt flexibly to the situation at hand, systems that understand what is relevant at the moment.

Katharina Rohlfing:
We need to fundamentally change our view of explainability. It is not enough for an output to be understandable; systems need to create contexts in which users can help shape the interaction - so that people don't just passively receive information, but actively engage with it in order to arrive at a relevant understanding. This strengthens collaboration between humans and AI and ensures that technology remains not only understandable but also useful.

This text was translated automatically.

 

Symbolic image (TRR 318)
Photo (TRR 318): Prof Dr Katharina Rohlfing (left) and Prof Dr Philipp Cimiano (right).

Contact


Prof. Dr. Katharina Rohlfing

Transregional Collaborative Research Centre 318

Project Leader A01, A05, Z

Phone: +49 5251 60-5717