
Title: Model Transparency and Interpretability: Survey and Application to the Insurance Industry

Abstract: The use of models, even efficient ones, must be accompanied by an understanding at all levels of the process that transforms data (upstream and downstream). Thus, there is a growing need to define the relationships between individual data and the choices an algorithm makes based on its analysis (e.g. the recommendation of a product or a promotional offer, or an insurance rate representative of the risk). Model users must ensure that models do not discriminate and that their results can be explained. This paper introduces the importance of model interpretation and tackles the notion of model transparency. Within an insurance context, it specifically illustrates how some tools can be used to enforce the control of actuarial models that can nowadays leverage machine learning. On a simple example of loss frequency estimation in car insurance, we show the value of several interpretability methods in adapting explanations to the target audience.
Comments: Accepted to European Actuarial Journal
Subjects: Machine Learning (stat.ML); Machine Learning (cs.LG); Other Statistics (stat.OT)
Cite as: arXiv:2209.00562 [stat.ML]
  (or arXiv:2209.00562v1 [stat.ML] for this version)

Submission history

From: Ly Antoine PhD
[v1] Thu, 1 Sep 2022 16:12:54 GMT (2329kb,D)
