Explainable Artificial Intelligence – An Overview

What is Explainable AI (XAI)?

Explainable AI (XAI) comprises techniques, algorithms, and software that enable human users to understand the decisions made by AI systems transparently. XAI is expected to build trust in the minds of human users. “XAI aims to provide justification, transparency, and traceability of black-box machine learning methods as well as testability of causal assumptions”.

Some academics also use the word “interpretable” to emphasize the ability of XAI to explain decisions in a way that is meaningful for human comprehension.

Why is XAI so important?

“Explainable AI will be essential if users are to understand, appropriately trust, and effectively manage this incoming generation of artificially intelligent partners”

Reference – D. Gunning, Explainable Artificial Intelligence (XAI), Tech. Rep., Defense Advanced Research Projects Agency (DARPA), 2017

https://nsarchive.gwu.edu/sites/default/files/documents/5794867/National-Security-Archive-David-Gunning-DARPA.pdf

In the recent past, we have seen undesirable events arising from AI-generated decisions. Appropriate use of XAI could have helped avoid such occurrences.

“In 2016, the AI software used by Amazon to determine the areas of the USA to which Amazon would offer free same-day delivery accidentally restricted minority neighborhoods from participating in the program (often when every surrounding neighborhood was allowed).”

Reference – http://www.techinsider.io/how-algorithms-can-be-racist-2016-4

Incidents like the fatal Uber self-driving car accident in 2018 could have been analyzed with certainty, to understand how the AI's decisions went wrong, if proper XAI had been built in intrinsically.

Reference –

Also, from a legislative standpoint, under the General Data Protection Regulation (GDPR), human users possess the right to receive “meaningful explanations of the logic involved” when AI-based automated decisions are used.

Explainable AI makes Artificial Intelligence impactful to society, making it a trusted partner in human life.

For better analysis, let’s divide Explainable AI (XAI) into two categories –

  • Explainable Machine Learning
  • Explainable Deep Learning

The complexity of explaining AI decisions varies with the algorithms used in the AI system. The following diagram broadly depicts the overall relationship –

Image Reference – Explainable Artificial Intelligence (XAI): Concepts, Taxonomies, Opportunities and Challenges toward Responsible AI – Alejandro Barredo Arrieta et al.

Explainable Machine Learning

Overall methodology for Explainable Machine Learning

The overall methodologies for explainability of Machine Learning algorithms are described briefly as follows:

Linear/Logistic Regression – These models are transparent: a human can simulate their decisions by inspecting the learned coefficients directly, so post hoc analysis may not be needed.
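
As a minimal sketch of this transparency (scikit-learn and the Iris dataset are assumptions used purely for illustration, not part of the original text):

    # Transparent explainability: read the learned weights of a logistic
    # regression model directly. scikit-learn and the Iris dataset are
    # used here purely for illustration.
    from sklearn.datasets import load_iris
    from sklearn.linear_model import LogisticRegression

    data = load_iris()
    model = LogisticRegression(max_iter=1000).fit(data.data, data.target)

    # Each coefficient shows how strongly a feature pushes the prediction
    # toward or away from a class; a human can inspect these directly.
    for name, coef in zip(data.feature_names, model.coef_[0]):
        print(f"{name}: {coef:+.3f}")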

Decision Trees – Similar to linear/logistic regression: the decision path can be followed step by step, so post hoc analysis may not be needed for explanation.
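
As a minimal sketch (again assuming scikit-learn and Iris data purely for illustration), a fitted tree can be printed as human-readable rules:

    # The decision path of a fitted tree rendered as if/else rules.
    from sklearn.datasets import load_iris
    from sklearn.tree import DecisionTreeClassifier, export_text

    data = load_iris()
    tree = DecisionTreeClassifier(max_depth=3).fit(data.data, data.target)

    # Every prediction can be traced along one branch of these rules.
    print(export_text(tree, feature_names=data.feature_names))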

K-Nearest Neighbors – Post hoc analysis may not be needed for explanation, but mathematical tools are important for understanding the decisions.
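
A minimal sketch of this idea (scikit-learn and Iris data assumed for illustration) is to present the nearest training examples as the explanation:

    # For KNN, the nearest training examples themselves explain a prediction.
    from sklearn.datasets import load_iris
    from sklearn.neighbors import KNeighborsClassifier

    X, y = load_iris(return_X_y=True)
    knn = KNeighborsClassifier(n_neighbors=3).fit(X, y)

    # The neighbors (and their distances) that drove the decision.
    distances, indices = knn.kneighbors(X[:1])
    print("prediction:", knn.predict(X[:1]))
    print("supporting neighbors:", indices[0], "labels:", y[indices[0]])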

Bayesian Models – Similar to KNN above: explanation may not need post hoc analysis, but may require mathematical knowledge/tools.

Support Vector Machines – Post hoc analysis is needed for explanation; model simplification or feature relevance techniques may be adopted.
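
A minimal post hoc sketch for an SVM is permutation importance, one feature relevance technique (scikit-learn and Iris data are assumed purely for illustration):

    # Permutation importance: shuffle one feature at a time and measure the
    # drop in accuracy; a large drop means the SVM relies on that feature.
    from sklearn.datasets import load_iris
    from sklearn.inspection import permutation_importance
    from sklearn.svm import SVC

    data = load_iris()
    svm = SVC().fit(data.data, data.target)

    result = permutation_importance(svm, data.data, data.target, n_repeats=10)
    for name, imp in zip(data.feature_names, result.importances_mean):
        print(f"{name}: {imp:.3f}")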

Details of Explainable Machine Learning

For understanding the nitty-gritty of the algorithms, please refer to the References given below.

The primary categories of methods for explainability of Machine Learning algorithms are described briefly as follows:

  • Transparent Explainability Methods
    • Linear/Logistic Regression
    • Decision Trees
    • K-Nearest Neighbors
    • Bayesian Models
  • Post Hoc Explainability Methods
    • Model Agnostic
      • Model Simplification (see the sketch after this list)
      • Feature Relevance
      • Visual Explanation
    • Model Specific
      • Ensembles
      • Complex Classifier Systems
      • Support Vector Machines
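
As an illustration of post hoc model simplification from the list above, here is a minimal global-surrogate sketch; the random forest "black box", scikit-learn, and the Iris data are assumptions used purely for illustration:

    # Model simplification: train a shallow decision tree to mimic a
    # black-box ensemble, then explain the tree instead of the ensemble.
    from sklearn.datasets import load_iris
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.tree import DecisionTreeClassifier, export_text

    data = load_iris()
    black_box = RandomForestClassifier(n_estimators=100).fit(data.data, data.target)

    # Fit the surrogate on the black box's outputs, not the true labels.
    surrogate = DecisionTreeClassifier(max_depth=3)
    surrogate.fit(data.data, black_box.predict(data.data))
    print(export_text(surrogate, feature_names=data.feature_names))

    # Fidelity: how often the surrogate agrees with the black box.
    print("fidelity:", surrogate.score(data.data, black_box.predict(data.data)))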

Explainable Deep Learning

Overall methodology for Explainable DL

The overall methodologies for explainability of Deep Learning algorithms are described briefly as follows:

Post-hoc Mechanism:

  • Visualization Approach – Mostly uses heatmaps or saliency maps. The two primary method families are backpropagation-based and perturbation-based.
  • Backpropagation Method
    • Activation maximization
    • Deconvolution
    • CAM and Grad-CAM
    • Layer-Wise Relevance Propagation
    • DeepLIFT
    • Integrated Gradients
  • Perturbation-based Method
    • Occlusion Sensitivity (see the sketch after this list)
    • Representation Erasure – Applicable for natural language input
    • Meaningful Perturbation
    • Prediction Difference Analysis
  • Model Distillation Approach – “Model distillation refers to a class of post-training explanation methods where the knowledge encoded within a trained DNN is distilled into a representation amenable for explanation by a user.” An example can be a Decision Tree.
    • Local Approximation
    • Model Translation
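
As an illustration of the perturbation family above, here is a minimal occlusion sensitivity sketch; PyTorch/torchvision, the pretrained ResNet-18, and the random stand-in image are all assumptions for illustration, not part of the original text:

    # Occlusion sensitivity: black out one image region at a time and record
    # how much the class score drops. The random tensor stands in for a
    # real image.
    import torch
    from torchvision.models import resnet18, ResNet18_Weights

    model = resnet18(weights=ResNet18_Weights.DEFAULT).eval()
    image = torch.rand(1, 3, 224, 224)

    with torch.no_grad():
        target = model(image).argmax(dim=1)        # class being explained
        base = model(image)[0, target].item()      # unoccluded score

        patch = stride = 32
        heatmap = torch.zeros(224 // stride, 224 // stride)
        for i in range(0, 224, stride):
            for j in range(0, 224, stride):
                occluded = image.clone()
                occluded[:, :, i:i + patch, j:j + patch] = 0
                score = model(occluded)[0, target].item()
                # A large score drop means the occluded region mattered.
                heatmap[i // stride, j // stride] = base - score
    print(heatmap)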

Intrinsic Mechanism – When the explanation mechanism is built into the model architecture itself.

  • Attention mechanisms added to a DNN can be used for attention visualization, revealing inherent explainability (see the sketch below).
  • An additional explanation network can be attached to the original model architecture and trained jointly with the original network.
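
As a minimal sketch of attention visualization (PyTorch and the toy, untrained layer are assumptions for illustration only):

    # The attention weights of a self-attention layer form a token-to-token
    # map that can be rendered as a heatmap.
    import torch

    attention = torch.nn.MultiheadAttention(embed_dim=16, num_heads=2,
                                            batch_first=True)
    tokens = torch.rand(1, 5, 16)   # 5 placeholder input tokens

    # 'weights' is (1, 5, 5): how much each token attends to every other.
    output, weights = attention(tokens, tokens, tokens, need_weights=True)
    print(weights.squeeze(0))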

Please refer to References for details.

Image Reference – Explainable Deep Learning: A Field Guide for the Uninitiated – Ras, Xie, van Gerven, Doran

Details of Explainable DL

For understanding the nitty-gritty of the algorithms, please refer to the References.

The primary categories of methods for explainability of Deep Learning algorithms are described briefly as follows:

  • Explanation of Deep Network Processing
    • Saliency mapping, linear proxy models, etc. (a LIME-style local proxy sketch follows this list)
  • Explanation of Deep Network Representation
    • Explanation of the role of layers, individual network nodes, etc.
  • Explanation-Producing Networks
    • Attention networks, etc.
  • Hybrid – Transparent and Black-box Methods
    • Relational reasoning, case-based reasoning, etc.
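
A minimal sketch of a local linear proxy using the LIME library follows; the random forest classifier and Iris data are placeholders for illustration:

    # LIME fits a small linear model around one instance of interest and
    # reports the locally most influential features.
    from lime.lime_tabular import LimeTabularExplainer
    from sklearn.datasets import load_iris
    from sklearn.ensemble import RandomForestClassifier

    data = load_iris()
    model = RandomForestClassifier().fit(data.data, data.target)

    explainer = LimeTabularExplainer(data.data,
                                     feature_names=data.feature_names,
                                     class_names=list(data.target_names))
    explanation = explainer.explain_instance(data.data[0], model.predict_proba,
                                             num_features=4)
    print(explanation.as_list())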

Python libraries for XAI

The list of currently available Python libraries that can be useful for Explainable AI is given below; a short usage sketch follows the list.

  • SHAP (SHapley Additive exPlanations) – a Python library providing a unified, Shapley-value-based approach to explaining model output
  • LIME – Local Interpretable Model-Agnostic Explanations
  • ELI5 – a Python library for inspecting and explaining machine learning pipelines
  • Skater – an open-source unified framework for model interpretation
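
As a minimal SHAP sketch (the tree model and Iris data are placeholders, and the exact API varies somewhat across shap versions):

    # Shapley-value attributions for a tree ensemble via SHAP.
    import shap
    from sklearn.datasets import load_iris
    from sklearn.ensemble import RandomForestClassifier

    data = load_iris()
    model = RandomForestClassifier().fit(data.data, data.target)

    explainer = shap.TreeExplainer(model)
    shap_values = explainer.shap_values(data.data[:10])
    # Each value is a feature's contribution to pushing one prediction
    # away from the average model output.
    shap.summary_plot(shap_values, data.data[:10],
                      feature_names=data.feature_names)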

Conclusion

The following could be important upcoming research areas for making AI a trustworthy partner of human life in the days to come.

  • Machine-to-human interaction is an important factor for trustworthy explanation.
  • The explanation methods available today are not explanatory enough to generate full trust from humans; this needs to be addressed.
  • Most explanations lack higher-order relationships between the features and the decision.

References

  1. Explainable Deep Learning: A Field Guide for the Uninitiated – Ras, Xie, van Gerven, Doran
  2. Explainable AI: Current Status and Future Directions – Prashant Gohel et al.
  3. Explainable Artificial Intelligence (XAI): Concepts, Taxonomies, Opportunities and Challenges toward Responsible AI – Alejandro Barredo Arrieta et al.
  4. Explainable AI: Interpreting, Explaining and Visualizing Deep Learning – Wojciech Samek et al.
  5. Explainable AI with Python – Leonida Gianfagna, Antonio Di Cecco
  6. Explainable AI Within the Digital Transformation and Cyber Physical Systems – Moamar Sayed-Mouchaweh
  7. Articles on Artificial Intelligence – https://www.enterprisetechmgmt.com/blogs/