10 November is all about science. "Building trust in science" - the theme of this year's UNESCO World Day of Science for Peace and Development is more relevant than ever. Forward-looking topics such as artificial intelligence (AI) are increasingly shaping our digitalised society. Scientists at Paderborn University are therefore not only researching the further development of AI systems, but are also aware that this research must be communicated to society. Prof Dr Katharina Rohlfing, spokesperson of the Transregional Collaborative Research Centre (TRR) 318 "Constructing Explainability", and Prof Dr Britta Wrede, leader of the TRR 318 public relations project, agree: transparent information about AI is crucial.
The theme of this year's "World Science Day" is "Building trust in science". How is trust built between science and society?
Rohlfing: Science endeavours to communicate its findings to society in various formats. Information should not flow only one way, with science simply broadcasting what it is doing. Because many topics are socially relevant, scientists are increasingly entering into dialogue with representatives of different social groups in order to gain fresh input for their research. There are also new participatory approaches in science that directly involve stakeholders from society in research. This participation is in line with the democratic values of our society.
Wrede: We offer co-construction workshops, for example, which are primarily aimed at pupils from neighbouring schools. But we also have other stakeholders in mind, such as politicians, civil servants and other decision-makers who determine how AI is used and handled in public administration. We also contribute to teaching content for medical students to give them more background knowledge about explainable AI in medicine. The aim is always to strengthen people in their role as questioners: we want to empower them to ask questions of AI, or of us as workshop leaders, in the first place. And we see signs that we really are able to change people's ideas about AI in a way that shifts their attitude towards it. Our hope is that this will make them more critical, but also more trusting of AI. We see our role as scientists here as empowering members of society to engage with the revolutionary new technology of AI with the help of our research findings, thereby gaining autonomy and skills.
Rohlfing: However, we at TRR 318 have noticed that these forms of dialogue make science communication a more demanding task for researchers. Paderborn University is therefore planning to establish a degree programme focusing on science communication in order to build up the relevant skills locally.
The interdisciplinary research team of the Transregional Collaborative Research Centre (TRR) 318 is investigating the principles, mechanisms and social practices of explanation and how these can be taken into account in the design of AI systems. What advantages do you see in the use of artificial intelligence?
Rohlfing: The advantages are clear from the use of AI in everyday life: we can find our way from A to B in an unfamiliar city, have a menu translated from a foreign language, or get a summarised answer to a question without having to search through many websites ourselves. For many in our society, the question of whether AI should be used therefore no longer arises. Instead, in TRR 318 we are looking at how AI should be used and how its use can be shaped so that people retain decision-making power.
Critics claim that AI systems jeopardise our democratic values such as freedom. What would you say to them from a scientific perspective?
Rohlfing: Humans and machines are not simply opposed to each other. Rather, machines are developed by humans and therefore function according to certain values that have been incorporated into their programming. Scientists and developers are becoming increasingly aware of this fact. In TRR 318, for example, we are trying to realise a new form of interaction, namely one that responds to people rather than merely supplying them with information. This interaction follows certain values and seeks to empower people to act on their questions and their need for understanding. This type of interaction is new because previous forms of interaction have mainly treated people as recipients of information and taken little account of whether or not they understand. It is therefore important that explainable AI becomes more social. We want to set out what we mean by this in a handbook for scientists and developers. We are convinced that this form of interaction will enable people to engage more confidently with machines.
Trust is based on understanding. TRR 318 recently launched its own research podcast. Do you see this as a great opportunity to inform people about your work?
Wrede: Ultimately, yes. However, the first episodes are aimed at the TRR's own scientists. As we are an interdisciplinary research centre, we want to use the first episodes to shed light on the key concepts underlying our work at the TRR from the different disciplinary perspectives and, so to speak, explain them to each other. For this reason, the current episodes are not yet explicitly tailored to a general audience, but we would of course be delighted if people outside our TRR find them engaging too! We do want to develop the podcast further for people outside the TRR and convey, in dialogue form, our basic understanding of the mechanisms underlying explanatory processes. On the one hand, we want to take a closer look at empirical results from experiments and field studies; on the other, we also want to discuss how we intend to implement these findings in interactive AI applications designed to co-construct explanations together with their users.
This text has been translated automatically.