Looking at explanations of AI in context


Transregio 318 invites you to its 3rd conference

How can explanations of "artificially intelligent systems" be made comprehensible, and what role does context play? These questions are the focus of the 3rd TRR 318 conference "Contextualizing Explanations", which will take place on 17 and 18 June in Bielefeld. The conference is organised by the Collaborative Research Centre/Transregio (TRR) 318 of Bielefeld and Paderborn Universities. International scientists will present their current research and approaches in the field of Explainable Artificial Intelligence, and renowned keynote speakers will provide further scientific input.

Artificial intelligence (AI) systems are increasingly being used in sensitive fields of application where wrong decisions can have serious consequences, for example in medicine or finance. Making AI systems transparent enables effective oversight and allows users to scrutinise AI-based decisions. "For users to be able to act autonomously, explanations of AI decisions must be relevant and provide sufficient information," says Prof Dr Philipp Cimiano, deputy spokesperson of TRR 318. "Since no explanation meets all requirements, TRR 318 takes the approach of giving users the opportunity to actively shape AI explanations and control them according to their needs."

At the 3rd TRR 318 conference, the researchers will incorporate context into the explanation process. "Context is dynamic," explains co-organiser Prof Dr Anna-Lisa Vollmer. "Initially, it may be relevant for the explanation that a person has a different linguistic and cultural background. However, this can quickly become less important if, for example, it is more pressing that the person has little time and the explanation has to be shorter. The space in which the explanation is given can also sometimes be relevant."

Dr Benjamin Paaßen adds: "By contextualising AI explanations, users gain more understanding of and control over the processes. The presentations at the TRR 318 conference illustrate how diverse context can be and what impact it has on users."

Invited speakers

The conference will start on Tuesday with a keynote speech by Dr Kacper Sokol, a researcher in the Medical Data Science group at ETH Zurich. In his talk, he will draw on a broad spectrum of relevant interdisciplinary findings and propose data-driven ways of supporting human decision-making. The second keynote lecture will be given by Prof Dr Virginia Dignum from Umeå University via video link. She will present the current EU debates on AI regulation and discuss how to ensure that technological progress goes hand in hand with ethical and legal responsibility.

On the second day of the conference, participants can look forward to the keynote lecture by Prof Dr Angelo Cangelosi from the University of Manchester. His presentation is entitled "The Importance of Starting Small with Baby Robots". Cangelosi will present examples of how robots can learn language and discuss important principles such as "starting small", meaning that robots begin with simple concepts and tasks. He will also address the advantages and disadvantages of foundation models in robotics as well as issues related to explainable AI (XAI) and trust.

The conference will conclude with a panel discussion in which members of other research networks will discuss different contexts for explanations.

This text was translated automatically.

Photo (TRR 318): How can an AI explain things understandably while taking context into account? This and other questions will be discussed at the 3rd TRR 318 conference.

Contact


Linda Thomßen

Transregional Collaborative Research Centre 318

Employee - Research Communication
