Alexis Marion, CEA List, September 2019
Partenarial Explainer is an interpretability method based on the concept of partenarial examples. For a binary classification task, the method takes a selected input and finds the closest example belonging to the other class. In a fault detection task, this helps identify the actions to take to 'repair' a faulty example.
This method is applied to XGBoost models. The first step consists in approximating the XGBoost model with a differentiable one, using a method called DFE (Differentiable Forest Estimator). We then search for a partenarial example with DDN (Decoupling Direction and Norm [1]).
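To illustrate the second step, here is a minimal sketch of a DDN-style search, assuming a toy differentiable classifier (a fixed logistic model with hypothetical weights `w`, `b`) in place of the DFE surrogate. Following the DDN idea, the step direction (along the gradient of the logit) is decoupled from the perturbation norm (a radius `eps` that shrinks when the current point is already in the other class and grows otherwise). This is an illustrative stand-in, not the repository's implementation.

```python
import numpy as np

# Hypothetical stand-in for the DFE surrogate: a differentiable
# binary classifier with logit f(x) = w.x + b.
w = np.array([1.0, -2.0])
b = 0.5

def predict_logit(x):
    return x @ w + b

def ddn_like_search(x0, steps=100, gamma=0.05):
    """DDN-style search for the closest example in the other class.

    Direction: gradient of the logit w.r.t. x (here simply w).
    Norm: radius eps, tightened by a factor (1 - gamma) whenever the
    current point is adversarial, enlarged by (1 + gamma) otherwise.
    """
    target_sign = -np.sign(predict_logit(x0))  # push logit across the boundary
    eps = 0.1
    x = x0.copy()
    best = None
    for _ in range(steps):
        direction = target_sign * w / np.linalg.norm(w)
        x = x + 0.1 * direction
        delta = x - x0
        # Decoupled norm control: project delta onto the sphere of radius eps.
        if np.linalg.norm(delta) > 0:
            x = x0 + delta / np.linalg.norm(delta) * eps
        if np.sign(predict_logit(x)) == target_sign:
            best = x.copy()       # adversarial: keep it and tighten the norm
            eps *= (1 - gamma)
        else:
            eps *= (1 + gamma)    # not adversarial yet: enlarge the norm
    return best

x0 = np.array([2.0, 0.0])          # classified positive (logit = 2.5)
x_adv = ddn_like_search(x0)        # nearby point classified negative
```

The returned `x_adv` lands close to the decision boundary on the other side, which is exactly the 'repair' interpretation: the smallest change to the input that flips the model's decision.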
demo.ipynb gives a quick example and results on 3 distinct datasets.
[1] Rony, J., Hafemann, L.G., Oliveira, L.S., Ayed, I.B., Sabourin, R., Granger, E., 2018. Decoupling Direction and Norm for Efficient Gradient-Based L2 Adversarial Attacks and Defenses. arXiv:1811.09600 [cs].