Title: How explainable AI affects human performance: A systematic review of the behavioural consequences of saliency maps

Authors: Romy Müller
Abstract: Saliency maps can explain how deep neural networks classify images. But are they actually useful for humans? The present systematic review of 68 user studies found that while saliency maps can enhance human performance, null effects or even costs are quite common. To investigate what modulates these effects, the empirical outcomes were organised along several factors related to the human tasks, AI performance, XAI methods, images to be classified, human participants, and comparison conditions. In image-focused tasks, benefits were less common than in AI-focused tasks, but the effects depended on the specific cognitive requirements. Moreover, benefits were usually restricted to incorrect AI predictions in AI-focused tasks, but to correct ones in image-focused tasks. XAI-related factors had surprisingly little impact. The evidence was limited for image- and human-related factors, and the effects were highly dependent on the comparison conditions. These findings may support the design of future user studies.
Subjects: Human-Computer Interaction (cs.HC); Artificial Intelligence (cs.AI)
Cite as: arXiv:2404.16042 [cs.HC]
  (or arXiv:2404.16042v2 [cs.HC] for this version)

Submission history

From: Romy Müller [view email]
[v1] Wed, 3 Apr 2024 21:46:25 GMT (2085kb)
[v2] Fri, 26 Apr 2024 04:25:12 GMT (2101kb)