An interpretability method for XGBoost and fault detection models

Author

Alexis Marion, CEA List, September 2019

Partenarial Explainer

Partenarial Explainer is an interpretability method based on the concept of partenarial examples. For a binary classification task, the method aims to find, for a selected input, the closest example belonging to the other class. In a fault detection task, this helps identify the actions to take to 'repair' a faulty example.

This method is applied to XGBoost models. The first step consists in approximating the XGBoost model with a differentiable one, using a method called DFE (Differentiable Forest Estimator). We then search for a partenarial example with DDN (Decoupling Direction and Norm [1]).
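The search step can be illustrated with a minimal sketch. This is not the repository's implementation: the DFE surrogate is replaced here by a simple logistic model standing in for the differentiable approximation, and `ddn_search` is an illustrative, simplified version of the DDN idea (step in the gradient direction that flips the class, then rescale the perturbation norm: shrink it when the point is already across the boundary, grow it otherwise). All names and parameters are assumptions.

```python
import numpy as np

# Stand-in differentiable surrogate (illustrative only): a logistic
# classifier with fixed weights, playing the role of the DFE model.
w = np.array([1.0, -2.0])
b = 0.5

def predict_proba(x):
    # Sigmoid of the linear score: P(class 1 | x).
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

def grad_score(x):
    # Gradient of the linear score w.r.t. the input (constant here).
    return w

def ddn_search(x0, steps=200, alpha=0.1, gamma=0.05):
    """Simplified DDN-style search for the closest opposite-class example.

    Each iteration takes a fixed-size step along the normalized gradient
    toward the opposite class, then adjusts a norm budget eps: it shrinks
    when the current point is adversarial and grows when it is not
    (the 'decoupled norm' idea). The perturbation is capped at eps.
    """
    target = 1 if predict_proba(x0) < 0.5 else 0   # opposite class
    sign = 1.0 if target == 1 else -1.0
    delta = np.zeros_like(x0)
    eps = 1.0
    best = None
    for _ in range(steps):
        g = sign * grad_score(x0 + delta)
        delta = delta + alpha * g / (np.linalg.norm(g) + 1e-12)
        crossed = (predict_proba(x0 + delta) >= 0.5) == (target == 1)
        eps = eps * (1.0 - gamma) if crossed else eps * (1.0 + gamma)
        norm = np.linalg.norm(delta)
        if norm > eps:                              # project onto eps-ball
            delta = delta * (eps / norm)
        crossed = (predict_proba(x0 + delta) >= 0.5) == (target == 1)
        if crossed and (best is None or np.linalg.norm(delta) < np.linalg.norm(best)):
            best = delta.copy()                     # keep smallest repair found
    return x0 + (best if best is not None else delta)
```

On a faulty example, the returned point is the smallest perturbation found that crosses the surrogate's decision boundary, which is then read as the 'repair' to apply.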

Demo

demo.ipynb gives a quick example and results on three distinct datasets.

References

[1] Rony, J., Hafemann, L. G., Oliveira, L. S., Ayed, I. B., Sabourin, R., Granger, E., 2018. Decoupling Direction and Norm for Efficient Gradient-Based L2 Adversarial Attacks and Defenses. arXiv:1811.09600 [cs].