AI study shows: Warnings often miss their target


Paderborn research team investigates user behaviour in dealing with AI errors

Whether in image recognition, speech processing or route planning - systems with artificial intelligence (AI) do not always deliver correct results. Especially when such systems make recommendations for decisions, users should maintain a healthy distrust of them. In their latest study, Paderborn researchers Tobias Peters and Prof. Dr. Ingrid Scharlau from the Transregional Collaborative Research Centre (TRR) 318 investigated whether deliberately encouraging distrust can improve the way people interact with such systems. The study, entitled "Interacting with fallible AI: Is distrust helpful when receiving AI misclassifications?", has now been published in the journal "Frontiers in Psychology".

In two experimental scenarios involving image classification, test subjects received recommendations from an AI whose accuracy deteriorated over the course of the experiment. They were asked to decide whether geometric shapes belonged to certain categories and whether images were real or AI-generated. During the experiment, the participants were regularly asked how much they trusted or distrusted the AI. Psychological methods were used to analyse the extent to which the test subjects were influenced by incorrect AI advice. The key result: encouraging scepticism does not improve performance, but actually tends to worsen it.

Prompting scepticism does not work as hoped

A particular focus was on whether an explicit request for scepticism - that is, the instruction to critically question every AI suggestion - improves performance compared to a neutral instruction. Peters: "Surprisingly, in the interaction with an AI that makes mistakes, our instruction to be sceptical did not help. It had hardly any influence on how participants used the AI's support."

In addition to the experimental investigation, Peters, together with Kai Biermeier, a member of the technical staff at TRR 318, developed a Bayesian analysis based on signal detection theory. It takes uncertainties in the data into account and measures how well the test subjects were able to distinguish between correct and incorrect AI advice. The analysis showed that participants recognised and reacted to the AI's increasing errors: "As soon as the AI's advice got worse, the test subjects trusted the AI less and distrusted it more," explains Peters. "Even when the AI's performance improved again, trust and distrust did not return to their original levels. This is consistent with previous trust research and with the common finding that trust is easy to lose and hard to regain."
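The basic idea behind a signal-detection analysis can be illustrated with a small calculation. The sketch below is a simplified illustration, not the study's actual Bayesian model: it assumes, purely for the example, that rejecting incorrect AI advice counts as a "hit" and rejecting correct advice as a "false alarm", and it computes the classic sensitivity index d' from hypothetical counts.

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity index d' = z(hit rate) - z(false-alarm rate)."""
    # Log-linear correction keeps the rates away from 0 and 1,
    # where the inverse normal (z) would be infinite.
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# Hypothetical counts for one participant: how often they rejected incorrect
# AI advice (hits) versus rejected advice that was actually correct (false alarms).
print(round(d_prime(hits=40, misses=10, false_alarms=12, correct_rejections=38), 2))
```

Higher d' values would indicate that a participant discriminates well between correct and incorrect AI advice; the study's Bayesian approach additionally quantifies the uncertainty of such estimates.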

The methodological approach enables future studies to analyse trust and distrust in dealing with AI in a differentiated manner. "Our results provide important input for the current discourse on how to deal with the error-proneness of AI systems and distrust towards them - especially with regard to warnings, so-called disclaimers, shown before using language-based chatbots such as ChatGPT," concludes the Paderborn researcher.

Link to the study: https://www.frontiersin.org/journals/psychology/articles/10.3389/fpsyg.2025.1574809/full

This text was translated automatically.

Contact


Tobias Peters

Transregional Collaborative Research Centre 318

Doctoral Researcher C01

Phone: +49 5251 60-4491

Prof. Dr. Ingrid Scharlau

Transregional Collaborative Research Centre 318

Project Leader A05, C01, C04, RTG

Phone: +49 5251 60-2900