Computer Science > Machine Learning

Title: Interpretable Representations in Explainable AI: From Theory to Practice

Abstract: Interpretable representations are the backbone of many explainers that target black-box predictive systems based on artificial intelligence and machine learning algorithms. They translate the low-level data representation necessary for good predictive performance into high-level human-intelligible concepts used to convey the explanatory insights. Notably, the explanation type and its cognitive complexity are directly controlled by the interpretable representation, tweaking which makes it possible to target a particular audience and use case. However, many explainers built upon interpretable representations overlook their merit and fall back on default solutions that often carry implicit assumptions, thereby degrading the explanatory power and reliability of such techniques. To address this problem, we study properties of interpretable representations that encode the presence and absence of human-comprehensible concepts. We demonstrate how they are operationalised for tabular, image and text data; discuss their assumptions, strengths and weaknesses; identify their core building blocks; and scrutinise their configuration and parameterisation. In particular, this in-depth analysis allows us to pinpoint their explanatory properties, desiderata and scope for (malicious) manipulation in the context of tabular data where a linear model is used to quantify the influence of interpretable concepts on a black-box prediction. Our findings lead to a range of recommendations for designing trustworthy interpretable representations; specifically, the benefits of class-aware (supervised) discretisation of tabular data, e.g., with decision trees, and the sensitivity of image-based interpretable representations to segmentation granularity and occlusion colour.
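The surrogate-style setup the abstract describes — a binary interpretable representation whose concept presence/absence is related to black-box predictions via a linear model — can be illustrated with a minimal sketch. Everything here is an illustrative assumption, not the paper's implementation: the toy black box, the zero baseline used to simulate concept "absence" (analogous to occluding an image segment), and the one-concept-per-feature mapping are all hypothetical.

```python
import numpy as np

# Hypothetical black box standing in for an opaque predictive model.
def black_box(x):
    return float(2.0 * x[0] - x[1] > 0)

rng = np.random.default_rng(0)

# Instance to explain, in the raw (low-level) feature space.
instance = np.array([1.5, 0.5])

# Assumed baseline encoding concept "absence": replace the feature with 0,
# analogous to occluding an image segment with a fixed colour.
baseline = np.zeros_like(instance)

def to_raw(z):
    # Map a binary concept vector (1 = present, 0 = absent) back to raw space.
    return np.where(z == 1, instance, baseline)

# Sample binary perturbations of the interpretable representation and
# query the black box on their raw-space counterparts.
Z = rng.integers(0, 2, size=(200, 2))
y = np.array([black_box(to_raw(z)) for z in Z])

# Linear surrogate: least-squares fit of concept presence to predictions.
A = np.hstack([Z.astype(float), np.ones((len(Z), 1))])  # intercept column
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

influence = coef[:2]  # per-concept influence on the black-box prediction
```

In this toy setting the first concept fully determines the prediction, so its coefficient dominates; in practice, the quality of such influence estimates depends heavily on the choices the paper scrutinises, such as the occlusion baseline and how raw features are discretised into concepts.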
Comments: Published in the *Special Issue on Explainable and Interpretable Machine Learning and Data Mining* of the Springer *Data Mining and Knowledge Discovery* journal
Subjects: Machine Learning (cs.LG); Artificial Intelligence (cs.AI); Machine Learning (stat.ML)
DOI: 10.1007/s10618-024-01010-5
Cite as: arXiv:2008.07007 [cs.LG]
  (or arXiv:2008.07007v4 [cs.LG] for this version)

Submission history

From: Kacper Sokol [view email]
[v1] Sun, 16 Aug 2020 21:44:03 GMT (2339kb,D)
[v2] Fri, 30 Sep 2022 19:40:32 GMT (2751kb,D)
[v3] Sat, 23 Dec 2023 14:00:43 GMT (2814kb,D)
[v4] Fri, 26 Apr 2024 09:22:34 GMT (2815kb,D)
