Introduction
Explainable AI (XAI) tools help us interpret and understand machine learning and deep learning models. Python provides several open-source libraries for this purpose. Each library offers unique techniques for visualizing, explaining, and validating model decisions.
Moroccan Darija: There are many libraries in Python that help us understand how a model behaves. Each library has its own way of explaining and interpreting the model's decisions.
Main Python Libraries for Explainable AI
1. LIME (Local Interpretable Model-agnostic Explanations)
LIME explains individual predictions by fitting a simple, interpretable surrogate model (typically linear) on perturbed samples around a specific instance. It works with any black-box model.
pip install lime
from lime import lime_tabular
explainer = lime_tabular.LimeTabularExplainer(training_data, feature_names=features)
explanation = explainer.explain_instance(sample, model.predict_proba)
explanation.show_in_notebook()
Darija: LIME explains each prediction on its own, and it works with any type of model.
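For a fuller picture, here is a minimal end-to-end sketch; the RandomForestClassifier and the Iris dataset are illustrative assumptions, not requirements of LIME:
from lime import lime_tabular
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

# Illustrative model and data (any classifier with predict_proba would do)
data = load_iris()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

explainer = lime_tabular.LimeTabularExplainer(
    data.data,
    feature_names=data.feature_names,
    class_names=data.target_names,
    mode="classification",
)
explanation = explainer.explain_instance(data.data[0], model.predict_proba)
print(explanation.as_list())  # (feature, weight) pairs for this one prediction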
2. SHAP (SHapley Additive exPlanations)
SHAP uses game theory to assign importance values (Shapley values) to features. It includes optimized explainers for tree ensembles and deep learning models, and works well with tabular data.
pip install shap
import shap
explainer = shap.Explainer(model)
shap_values = explainer(X)
shap.summary_plot(shap_values, X)
Darija: SHAP gives each feature a number that shows how much it contributed to the result.
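To make the snippet concrete, here is a minimal sketch; the RandomForestRegressor and scikit-learn's built-in diabetes dataset are illustrative assumptions:
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

# Illustrative model and data; shap.Explainer dispatches to a tree explainer here
X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)

explainer = shap.Explainer(model)
shap_values = explainer(X)         # one Shapley value per feature per row
shap.summary_plot(shap_values, X)  # global summary of feature contributions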
3. ELI5 (Explain Like I’m Five)
ELI5 helps visualize feature importance and weights in linear and tree-based models, and it integrates well with scikit-learn.
pip install eli5
import eli5
eli5.show_weights(model, feature_names=feature_names)
Darija: ELI5 is a simple library that shows which features matter in the model in an understandable way.
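show_weights renders HTML in a notebook; outside a notebook the same explanation can be formatted as plain text. A small sketch, assuming a LogisticRegression on the Iris dataset (both illustrative choices):
import eli5
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

# Illustrative model and data
data = load_iris()
model = LogisticRegression(max_iter=1000).fit(data.data, data.target)

# Plain-text rendering of the per-class feature weights
expl = eli5.explain_weights(model, feature_names=data.feature_names)
print(eli5.format_as_text(expl))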
4. InterpretML
Developed by Microsoft, InterpretML combines global and local explanation methods. It supports classical ML and deep learning models.
pip install interpret
from interpret import show
from interpret.glassbox import ExplainableBoostingClassifier
model = ExplainableBoostingClassifier()
model.fit(X_train, y_train)
ebm_local = model.explain_local(X_test, y_test)
show(ebm_local)  # renders the interactive local-explanation view
Darija: InterpretML is a library from Microsoft that lets you explain both classical and deep models.
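A slightly fuller sketch, assuming the breast-cancer dataset from scikit-learn (an illustrative choice); the EBM exposes both global and local explanations:
from interpret import show
from interpret.glassbox import ExplainableBoostingClassifier
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split

# Illustrative binary-classification data
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

ebm = ExplainableBoostingClassifier()
ebm.fit(X_train, y_train)

show(ebm.explain_global())               # per-feature shape functions learned by the EBM
show(ebm.explain_local(X_test, y_test))  # per-prediction contribution breakdowns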
5. Captum
Captum is a PyTorch library for model interpretability. It provides methods like Integrated Gradients, DeepLIFT, and Saliency Maps for neural networks.
pip install captum
from captum.attr import IntegratedGradients
ig = IntegratedGradients(model)
attributions = ig.attribute(inputs, target=0)
Darija: Captum is a PyTorch library that gives explanations for deep models such as CNNs and RNNs.
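A self-contained sketch of the call pattern; the tiny untrained feed-forward network and the random input below are purely illustrative:
import torch
import torch.nn as nn
from captum.attr import IntegratedGradients

# Illustrative network: 4 input features, 3 output classes
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 3))
model.eval()

inputs = torch.rand(1, 4)     # one sample
baseline = torch.zeros(1, 4)  # reference point for the integration path

ig = IntegratedGradients(model)
attributions, delta = ig.attribute(
    inputs, baselines=baseline, target=0, return_convergence_delta=True
)
print(attributions)  # per-feature contribution toward class index 0
print(delta)         # approximation error of the integral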
6. Alibi
Alibi supports tabular, image, and text models. It provides counterfactual explanations, anchor rules, and gradient-based attributions such as Integrated Gradients.
pip install alibi
from alibi.explainers import AnchorTabular
explainer = AnchorTabular(model.predict, feature_names=feature_names)
explainer.fit(X_train)
explanation = explainer.explain(X_test[0])
Darija: Alibi works with text, images, and tabular data, and provides explanations such as counterfactuals.
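Put together, a minimal anchors sketch might look like this; the random forest and Iris data are illustrative assumptions:
from alibi.explainers import AnchorTabular
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

# Illustrative model and data; the predictor must return class labels
data = load_iris()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

explainer = AnchorTabular(model.predict, feature_names=data.feature_names)
explainer.fit(data.data)                       # learns discretization bins from the data
explanation = explainer.explain(data.data[0])
print(explanation.anchor)     # the if-then rule that anchors this prediction
print(explanation.precision)  # how often the rule yields the same prediction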
7. DALEX
DALEX (Descriptive mAchine Learning EXplanations) is used for model comparison and feature-importance analysis, and it is available for both Python and R.
pip install dalex
import dalex as dx
exp = dx.Explainer(model, X, y)
exp.model_parts().plot()
Darija: DALEX helps us compare models and see which features have the most influence.
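A short sketch of both levels of analysis, assuming a GradientBoostingClassifier on the breast-cancer dataset (illustrative choices):
import dalex as dx
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier

# Illustrative model and data
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = GradientBoostingClassifier(random_state=0).fit(X, y)

exp = dx.Explainer(model, X, y, label="gbm")
exp.model_parts().plot()               # permutation-based feature importance
exp.predict_parts(X.iloc[[0]]).plot()  # break-down plot for a single prediction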
8. Skater
Skater provides model-agnostic interpretability for black-box models, supporting visualization and local explanations.
pip install skater
from skater.core.explanations import Interpretation
from skater.model import InMemoryModel
interpreter = Interpretation(X, feature_names=features)
skater_model = InMemoryModel(model.predict_proba, examples=X)  # wraps the prediction function
interpreter.feature_importance.feature_importance(skater_model)
Darija: Skater works with many types of models and provides local and visual explanations.
9. tf-explain
tf-explain integrates directly with Keras models to provide Grad-CAM, Occlusion Sensitivity, and SmoothGrad visualizations.
pip install tf-explain
from tf_explain.core.grad_cam import GradCAM
explainer = GradCAM()
grid = explainer.explain(validation_data, model, class_index=0)
explainer.save(grid, ".", "gradcam_result.png")
Darija: tf-explain helps us understand image models using Grad-CAM and other methods.
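As a runnable sketch, note that explain expects a tuple of (images, labels); the tiny untrained CNN and random images below are purely illustrative:
import numpy as np
import tensorflow as tf
from tf_explain.core.grad_cam import GradCAM

# Illustrative untrained CNN over 32x32 RGB images
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(8, 3, activation="relu", input_shape=(32, 32, 3)),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10, activation="softmax"),
])

images = np.random.rand(4, 32, 32, 3).astype("float32")
validation_data = (images, None)  # labels are not needed for Grad-CAM

explainer = GradCAM()
grid = explainer.explain(validation_data, model, class_index=0)  # heatmap grid as a numpy array
explainer.save(grid, ".", "gradcam_result.png")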
Comparison Table
| Library | Main Use | Supports Deep Learning |
|---|---|---|
| LIME | Local explanations | Yes |
| SHAP | Feature importance (global/local) | Yes |
| ELI5 | Linear/Tree models | No |
| InterpretML | Hybrid explanations | Yes |
| Captum | Neural network interpretation | Yes |
| Alibi | Counterfactual and anchors | Yes |
| DALEX | Model comparison | Yes |
| Skater | Black-box interpretation | No |
| tf-explain | Visual CNN explanations | Yes |
10 Exercises for Practice
- Install and test the LIME library with a small scikit-learn model.
- Use SHAP to explain predictions of a deep learning model.
- Visualize feature importance using ELI5.
- Train a simple classifier and explain it using DALEX.
- Use Captum with a PyTorch CNN and interpret a specific layer.
- Compare explanations from LIME and SHAP for the same dataset (a starter sketch follows this list).
- Generate counterfactual examples using Alibi.
- Visualize Grad-CAM on a Keras CNN model using TensorFlow Explain.
- Explain a regression model using InterpretML.
- Build a dashboard comparing XAI results from multiple libraries.
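For the LIME-versus-SHAP exercise, here is a starter sketch; the random forest and Iris dataset are illustrative assumptions, and the two outputs are printed side by side for manual comparison:
import shap
from lime import lime_tabular
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

# Illustrative model and data; explain the same instance with both tools
data = load_iris()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)
idx = 0  # index of the instance to explain

# LIME: local surrogate model around the instance
lime_explainer = lime_tabular.LimeTabularExplainer(
    data.data, feature_names=data.feature_names, class_names=data.target_names
)
lime_expl = lime_explainer.explain_instance(data.data[idx], model.predict_proba)
print("LIME:", lime_expl.as_list())

# SHAP: Shapley values for the same instance
shap_explainer = shap.TreeExplainer(model)
shap_values = shap_explainer.shap_values(data.data[idx:idx + 1])
print("SHAP:", shap_values)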
Internal Linking Suggestions
[internal link: Explainable AI in Deep Learning]
[internal link: Deep Learning Basics]
[internal link: AI Ethics and Transparency]
Conclusion
Python offers a rich ecosystem for Explainable AI. Libraries like LIME, SHAP, and Captum help visualize and understand complex model behavior. Choosing the right tool depends on your model type and data domain.
Darija Summary: In Python there are many libraries that help us explain models, such as LIME, SHAP, and Captum. Each one has its own use depending on the type of data and model.