Artificial Intelligence: Can AI Go to Jail?

As the application of machine learning models in non-life insurance pricing has gained momentum in recent years, the importance of explainability and interpretability has increased significantly.
The talk will cover methods that aim to turn black-box pricing models into white-box ones, including the following (a short PDP/ICE sketch appears after the list):

  • Use of rule-based and surrogate models 
  • Feature importance 
  • Partial dependence plots (PDPs)
  • Individual conditional expectations (ICE)

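As an illustration of the last two items, below is a minimal sketch that plots partial dependence and ICE curves with scikit-learn. The rating factors, the synthetic frequency formula and the gradient-boosting model are hypothetical assumptions for illustration, not material from the talk.

```python
# Minimal PDP/ICE sketch with scikit-learn on synthetic, hypothetical data.
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import PartialDependenceDisplay

rng = np.random.default_rng(0)
n = 5_000
X = pd.DataFrame({
    "driver_age": rng.integers(18, 80, n),      # hypothetical rating factors
    "vehicle_power": rng.integers(40, 250, n),
    "bonus_malus": rng.uniform(0.5, 2.0, n),
})
# Synthetic claim frequency with a non-linear age effect.
y = (0.05 + 0.002 * (X["driver_age"] - 45) ** 2 / 100
     + 0.0005 * X["vehicle_power"]) * X["bonus_malus"]

model = GradientBoostingRegressor().fit(X, y)

# PDP shows the average effect of driver_age; ICE shows one curve per policy.
PartialDependenceDisplay.from_estimator(
    model, X, features=["driver_age"], kind="both"  # overlay PDP and ICE
)
plt.show()
```
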
Practical examples will be presented using Python libraries that enable users to address both global and local model explainability, such as LIME (local model explanations), SHAP (feature importance metrics), ELI5 (human-readable model interpretations) and explainer dashboards (interactive model explainability dashboards).
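
For a flavour of such examples, here is a hedged sketch combining a global SHAP summary with a local LIME explanation. It reuses the hypothetical `model` and `X` from the sketch above and assumes the shap and lime packages are installed; the subsample size and the policy being explained are arbitrary choices.

```python
# Global (SHAP) and local (LIME) explanations for the hypothetical model above.
import shap
from lime.lime_tabular import LimeTabularExplainer

# Global view: SHAP values summarise each feature's contribution per policy.
explainer = shap.Explainer(model, X)
shap_values = explainer(X.iloc[:500])      # subsample for speed
shap.plots.beeswarm(shap_values)           # feature-importance-style summary

# Local view: LIME fits a linear surrogate around one individual prediction.
lime_explainer = LimeTabularExplainer(
    X.values, feature_names=list(X.columns), mode="regression"
)
explanation = lime_explainer.explain_instance(
    X.iloc[0].values, model.predict, num_features=3
)
print(explanation.as_list())               # per-feature local contributions
```
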
Furthermore, methods to assess the fairness of non-life pricing models are important to avoid unintended discrimination and the reproduction of biases present in the data. We will demonstrate how the aequitas and fairml libraries in Python can help to understand such biases, compare biases between groups (bias disparity) and visualise bias metrics. We will conclude with a discussion of considerations for defining and setting fairness metrics for pricing models.
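
To make the fairness audit concrete, the following is a minimal sketch using the aequitas Group/Bias API on a tiny made-up dataframe. The 'region' attribute, the binarised decisions and the choice of reference group are illustrative assumptions, not figures from the talk.

```python
# Sketch of a group fairness audit with aequitas, assuming a dataframe of
# binarised model decisions ('score'), observed outcomes ('label_value') and
# a hypothetical protected attribute ('region'), following aequitas naming.
import pandas as pd
from aequitas.group import Group
from aequitas.bias import Bias

audit_df = pd.DataFrame({
    "score": [1, 1, 1, 0, 0, 1, 0, 0],          # e.g. "priced above threshold"
    "label_value": [1, 0, 1, 0, 1, 0, 0, 1],    # observed outcome
    "region": ["urban", "urban", "rural", "rural",
               "urban", "rural", "urban", "rural"],
})

# Group-level confusion-matrix metrics (FPR, FNR, predicted prevalence, ...).
group_metrics, _ = Group().get_crosstabs(audit_df)

# Disparity of each group's metrics relative to a chosen reference group.
disparities = Bias().get_disparity_predefined_groups(
    group_metrics,
    original_df=audit_df,
    ref_groups_dict={"region": "urban"},
    check_significance=False,
)
print(disparities[["attribute_value", "fpr_disparity", "ppr_disparity"]])
```

The resulting disparity table can then be compared against whatever fairness thresholds are chosen when defining fairness metrics for a pricing model.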
