How explainable AI affects human performance: A systematic review of the behavioural consequences of saliency maps
Publication: Contribution to journal › Review article › Contributed › Peer-reviewed
Abstract
Saliency maps can explain how deep neural networks classify images. But are they actually useful for humans? The present systematic review of 68 user studies found that while saliency maps can enhance human performance, null effects or even costs are quite common. To investigate what modulates these effects, the empirical outcomes were organised along several factors related to the human tasks, AI performance, XAI methods, images to be classified, human participants, and comparison conditions. In image-focused tasks, benefits were less common than in AI-focused tasks, but the effects depended on the specific cognitive requirements. AI accuracy strongly modulated the outcomes, whereas XAI-related factors had surprisingly little impact. The evidence for image- and human-related factors was limited, and the effects depended strongly on the comparison conditions. These findings may support the design of future user studies by focusing on the conditions under which saliency maps can potentially be useful.
Details
Original language | English
---|---
Pages (from–to) | 1–32
Journal | International Journal of Human-Computer Interaction
Publication status | Published - 2024
Peer-review status | Yes
External IDs

Scopus | 85199991000
---|---
Keywords
- attribution methods, deep neural networks, explainable artificial intelligence, human performance, image classification, saliency maps, user studies