Refactor names/folders/objects for better verbosity #5

Merged
merged 8 commits on Jun 6, 2024
1 change: 1 addition & 0 deletions .gitignore
@@ -37,6 +37,7 @@ otx_models/

*.jpg
*.jpeg
*.JPEG
*.png

html_build/
1 change: 1 addition & 0 deletions CHANGELOG.md
@@ -11,6 +11,7 @@

### What's Changed

* Refactor names/folders/objects for better verbosity by @GalyaZalesskaya in https://github.com/openvinotoolkit/openvino_xai/pull/5
* Support classification task by @negvet in https://github.com/intel-sandbox/openvino_xai/commit/dd5fd9b73fe8c12e2d741792043372bcd900a850
* Support detection task by @negvet in https://github.com/intel-sandbox/openvino_xai/commit/84f285f2f40a8b1fc50a8cd49798aae37afd58dc
* Support Model API as inference engine by @negvet in https://github.com/intel-sandbox/openvino_xai/commit/5f575f122dedc0461975bd58f81e730a901a69a6
50 changes: 25 additions & 25 deletions GETTING_STARTED.ipynb
@@ -272,7 +272,7 @@
"# This code returns gray-scale unprocessed saliency map\n",
"explanation = auto_explainer.explain(image)\n",
"logger.info(f\"Auto example: generated {len(explanation.map)} classification \"\n",
" f\"saliency maps of layout {explanation.layout} with shape {explanation.sal_map_shape}.\")\n",
" f\"saliency maps of layout {explanation.layout} with shape {explanation.shape}.\")\n",
"\n",
"# Save saliency maps stored in `explanation` object\n",
"output_dir = \"saliency_map/auto_explain/wo_postrocessing\"\n",
@@ -403,7 +403,7 @@
"explanation = explainer.explain(image)\n",
"logger.info(\n",
" f\"White-Box example w/o explain_parameters: generated {len(explanation.map)} classification \"\n",
" f\"saliency maps of layout {explanation.layout} with shape {explanation.sal_map_shape}.\"\n",
" f\"saliency maps of layout {explanation.layout} with shape {explanation.shape}.\"\n",
")\n",
"\n",
"# Save saliency maps stored in `explanation` object\n",
@@ -430,9 +430,9 @@
"`White Box explainer` can be configured with the following parameters:\n",
"- `target_layer` - specifies the layer after which the XAI nodes should be inserted (the last convolutional layer is a good default option). Example: `/backbone/conv/conv.2/Div`. This parameter can be useful if `WhiteBoxExplainer` fails to find a place where to insert XAI branch.\n",
"- `embed_normalization` - **default True** (for speed purposes), but you can disable embedding of normalization into the model.\n",
"- `explain_method_type` - **default reciprocam**:\n",
"- `explain_method` - **default reciprocam**:\n",
"\n",
" For Classification models `White Box` algorithm supports 2 `XAIMethodType`:\n",
" For Classification models `White Box` algorithm supports 2 `Method`:\n",
" - activationmap - returns a single saliency map regardless of the classes\n",
" - reciprocam - returns saliency maps for each class the model can detect\n",
"\n",
@@ -471,13 +471,13 @@
],
"source": [
"from openvino_xai.model import XAIClassificationModel\n",
"from openvino_xai.parameters import ClassificationExplainParametersWB, XAIMethodType\n",
"from openvino_xai.parameters import ClassificationExplainParametersWB, Method\n",
"\n",
"# Parametrize White Box Explainer\n",
"explain_parameters = ClassificationExplainParametersWB(\n",
" # target_layer=\"/backbone/conv/conv.2/Div\", # OTX mobilenet_v3\n",
" # target_layer=\"/backbone/features/final_block/activate/Mul\", # OTX efficientnet\n",
" explain_method_type=XAIMethodType.RECIPROCAM,\n",
" explain_method=Method.RECIPROCAM,\n",
")\n",
"\n",
"# Create an OpenVINO™ ModelAPI model wrapper for Classification model\n",
@@ -491,7 +491,7 @@
"explanation = explainer.explain(image)\n",
"logger.info(\n",
" f\"White-Box example w/ explain_parameters: generated {len(explanation.map)} classification \"\n",
" f\"saliency maps of layout {explanation.layout} with shape {explanation.sal_map_shape}.\"\n",
" f\"saliency maps of layout {explanation.layout} with shape {explanation.shape}.\"\n",
")\n",
"\n",
"# Save saliency maps stored in `explanation` object\n",
@@ -564,11 +564,11 @@
"source": [
"from openvino_xai.explain import WhiteBoxExplainer\n",
"from openvino_xai.model import XAIClassificationModel\n",
"from openvino_xai.parameters import PostProcessParameters\n",
"from openvino_xai.parameters import VisualizationParameters\n",
"from openvino_xai.saliency_map import TargetExplainGroup\n",
"\n",
"# Pass postprocessing parameters\n",
"post_processing_parameters = PostProcessParameters(overlay=True)\n",
"visualization_parameters = VisualizationParameters(overlay=True)\n",
"\n",
"# Create an OpenVINO™ ModelAPI model wrapper with XAI head inserted into the model graph\n",
"model = XAIClassificationModel.create_model(model_path, model_type=\"Classification\")\n",
@@ -578,11 +578,11 @@
"explainer = WhiteBoxExplainer(model)\n",
"explanation = explainer.explain(image, \n",
" TargetExplainGroup.PREDICTED_CLASSES, # default option, can be ommited\n",
" post_processing_parameters=post_processing_parameters\n",
" visualization_parameters=visualization_parameters\n",
" )\n",
"logger.info(\n",
" f\"White-Box example w/ postprocessing: generated {len(explanation.map)} classification \"\n",
" f\"saliency maps of layout {explanation.layout} with shape {explanation.sal_map_shape}.\"\n",
" f\"saliency maps of layout {explanation.layout} with shape {explanation.shape}.\"\n",
")\n",
"\n",
"# Save saliency maps stored in `explanation` object\n",
@@ -646,7 +646,7 @@
"from openvino_xai.saliency_map import TargetExplainGroup\n",
"\n",
"# Pass postprocessing parameters\n",
"post_processing_parameters = PostProcessParameters(overlay=True)\n",
"visualization_parameters = VisualizationParameters(overlay=True)\n",
"\n",
"# Create an OpenVINO™ ModelAPI model wrapper with XAI head inserted into the model graph\n",
"model = XAIClassificationModel.create_model(model_path, model_type=\"Classification\")\n",
@@ -666,7 +666,7 @@
"# TargetExplainGroup.PREDICTED_CLASSES)\n",
"logger.info(\n",
" f\"White-Box example w/ all_classes: generated {len(explanation.map)} classification \"\n",
" f\"saliency maps of layout {explanation.layout} with shape {explanation.sal_map_shape}.\"\n",
" f\"saliency maps of layout {explanation.layout} with shape {explanation.shape}.\"\n",
")\n",
"\n",
"# Save saliency maps stored in `explanation` object\n",
@@ -752,7 +752,7 @@
],
"source": [
"from openvino.model_api.models import ClassificationModel\n",
"from openvino_xai.parameters import PostProcessParameters\n",
"from openvino_xai.parameters import VisualizationParameters\n",
"from openvino_xai.explain import RISEExplainer\n",
"\n",
"# Create an OpenVINO™ ModelAPI model wrapper for Classification model\n",
@@ -763,15 +763,15 @@
")\n",
"\n",
"# Pass postprocessing parameters\n",
"post_processing_parameters = PostProcessParameters(overlay=True)\n",
"visualization_parameters = VisualizationParameters(overlay=True)\n",
"\n",
"# Explainer initialization and explanation\n",
"explainer = RISEExplainer(model)\n",
"# This code returns colored saliency map after processing\n",
"explanation = explainer.explain(image, post_processing_parameters=post_processing_parameters)\n",
"explanation = explainer.explain(image, visualization_parameters=visualization_parameters)\n",
"logger.info(\n",
" f\"Black-Box example w/ post_processing_parameters: generated {len(explanation.map)} \"\n",
" f\"classification saliency maps of layout {explanation.layout} with shape {explanation.sal_map_shape}.\"\n",
" f\"Black-Box example w/ visualization_parameters: generated {len(explanation.map)} \"\n",
" f\"classification saliency maps of layout {explanation.layout} with shape {explanation.shape}.\"\n",
")\n",
"\n",
"# Save saliency maps stored in `explanation` object\n",
@@ -828,19 +828,19 @@
],
"source": [
"explainer = RISEExplainer(model, num_cells=4, num_masks=1000)\n",
"explanation = explainer.explain(image, post_processing_parameters=post_processing_parameters)\n",
"explanation = explainer.explain(image, visualization_parameters=visualization_parameters)\n",
"explanation.save(\"saliency_map/black_box/4_cells_1000_masks\", image_name)\n",
"\n",
"explainer = RISEExplainer(model, num_cells=8, num_masks=5000)\n",
"explanation = explainer.explain(image, post_processing_parameters=post_processing_parameters)\n",
"explanation = explainer.explain(image, visualization_parameters=visualization_parameters)\n",
"explanation.save(\"saliency_map/black_box/8_cells_5000_masks\", image_name)\n",
"\n",
"explainer = RISEExplainer(model, num_cells=16, num_masks=10000)\n",
"explanation = explainer.explain(image, post_processing_parameters=post_processing_parameters)\n",
"explanation = explainer.explain(image, visualization_parameters=visualization_parameters)\n",
"explanation.save(\"saliency_map/black_box/16_cells_10000_masks\", image_name)\n",
"\n",
"explainer = RISEExplainer(model, num_cells=24, num_masks=15000)\n",
"explanation = explainer.explain(image, post_processing_parameters=post_processing_parameters)\n",
"explanation = explainer.explain(image, visualization_parameters=visualization_parameters)\n",
"explanation.save(\"saliency_map/black_box/24_cells_15000_masks\", image_name)"
]
},
@@ -1083,12 +1083,12 @@
"\n",
"# Explainer initialization and explanation\n",
"auto_explainer = ClassificationAutoExplainer(model)\n",
"post_processing_parameters = PostProcessParameters(overlay=True)\n",
"visualization_parameters = VisualizationParameters(overlay=True)\n",
"\n",
"# This code returns gray-scale unprocessed saliency map\n",
"explanation = auto_explainer.explain(image, post_processing_parameters=post_processing_parameters)\n",
"explanation = auto_explainer.explain(image, visualization_parameters=visualization_parameters)\n",
"logger.info(f\"Auto example: generated {len(explanation.map)} classification \"\n",
" f\"saliency maps of layout {explanation.layout} with shape {explanation.sal_map_shape}.\")\n",
" f\"saliency maps of layout {explanation.layout} with shape {explanation.shape}.\")\n",
"\n",
"# Save saliency maps stored in `explanation` object\n",
"output_dir = \"saliency_map/timm_model\"\n",
15 changes: 7 additions & 8 deletions README.md
@@ -46,9 +46,9 @@ To explain [OpenVINO™](https://github.com/openvinotoolkit/openvino) Intermedia
preprocessing function (and sometimes postprocessing).

```python
explainer = Explainer(
explainer = xai.Explainer(
model,
task_type=TaskType.CLASSIFICATION,
task=xai.Task.CLASSIFICATION,
preprocess_fn=preprocess_fn,
)
explanation = explainer(data, explanation_parameters)
@@ -66,9 +66,8 @@ import cv2
import numpy as np
import openvino.runtime as ov

from openvino_xai.common.parameters import TaskType
from openvino_xai.explanation.explainer import Explainer
from openvino_xai.explanation.explanation_parameters import ExplanationParameters
import openvino_xai as xai
from openvino_xai.explainer.explanation_parameters import ExplanationParameters


def preprocess_fn(x: np.ndarray) -> np.ndarray:
@@ -82,9 +81,9 @@ def preprocess_fn(x: np.ndarray) -> np.ndarray:
model = ov.Core().read_model("path/to/model.xml") # type: ov.Model

# Explainer object will prepare and load the model once in the beginning
explainer = Explainer(
explainer = xai.Explainer(
model,
task_type=TaskType.CLASSIFICATION,
task=xai.Task.CLASSIFICATION,
preprocess_fn=preprocess_fn,
)

@@ -95,7 +94,7 @@ explanation_parameters = ExplanationParameters(
)
explanation = explainer(image, explanation_parameters)

explanation: ExplanationResult
explanation: Explanation
explanation.saliency_map: Dict[int: np.ndarray] # key - class id, value - processed saliency map e.g. 354x500x3

# Saving saliency maps
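
Collecting the README hunks, the quick-start now needs only `import openvino_xai as xai` plus the relocated `explainer` subpackage, and `task_type=TaskType.CLASSIFICATION` becomes `task=xai.Task.CLASSIFICATION`. A sketch of the updated snippet follows; the paths are placeholders, the input size in `preprocess_fn` is assumed for illustration, and `ExplanationParameters` is left at defaults because its constructor arguments are not part of this diff.

```python
import cv2
import numpy as np
import openvino.runtime as ov

import openvino_xai as xai
from openvino_xai.explainer.explanation_parameters import ExplanationParameters


def preprocess_fn(x: np.ndarray) -> np.ndarray:
    # Placeholder preprocessing; the README's own implementation is untouched by this PR
    # (the 224x224 input size here is assumed for illustration only)
    x = cv2.resize(x, (224, 224))
    return np.expand_dims(x, 0)


model = ov.Core().read_model("path/to/model.xml")  # type: ov.Model

# Explainer object will prepare and load the model once in the beginning;
# it is now reached through the package alias, and task_type/TaskType became task/xai.Task
explainer = xai.Explainer(
    model,
    task=xai.Task.CLASSIFICATION,
    preprocess_fn=preprocess_fn,
)

image = cv2.imread("path/to/image.jpg")
explanation_parameters = ExplanationParameters()  # arguments are outside this diff; defaults assumed
explanation = explainer(image, explanation_parameters)

# The result type is now Explanation (was ExplanationResult);
# saliency_map maps class id -> processed saliency map, e.g. 354x500x3
for class_id, saliency_map in explanation.saliency_map.items():
    print(class_id, saliency_map.shape)
```
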
52 changes: 25 additions & 27 deletions docs/Usage.md
@@ -21,9 +21,11 @@ Content:
## Explainer - interface to XAI algorithms

```python
explainer = Explainer(
import openvino_xai as xai

explainer = xai.Explainer(
model,
task_type=TaskType.CLASSIFICATION,
task=xai.Task.CLASSIFICATION,
preprocess_fn=preprocess_fn,
)
explanation = explainer(data, explanation_parameters)
@@ -43,9 +45,8 @@ import cv2
import numpy as np
import openvino.runtime as ov

from openvino_xai.common.parameters import TaskType
from openvino_xai.explanation.explainer import Explainer
from openvino_xai.explanation.explanation_parameters import ExplanationParameters
import openvino_xai as xai
from openvino_xai.explainer.explanation_parameters import ExplanationParameters


def preprocess_fn(x: np.ndarray) -> np.ndarray:
@@ -59,9 +60,9 @@ def preprocess_fn(x: np.ndarray) -> np.ndarray:
model = ov.Core().read_model("path/to/model.xml") # type: ov.Model

# Explainer object will prepare and load the model once in the beginning
explainer = Explainer(
explainer = xai.Explainer(
model,
task_type=TaskType.CLASSIFICATION,
task=xai.Task.CLASSIFICATION,
preprocess_fn=preprocess_fn,
)

@@ -90,11 +91,9 @@ import cv2
import numpy as np
import openvino.runtime as ov

from openvino_xai.common.parameters import TaskType, XAIMethodType
from openvino_xai.explanation.explainer import Explainer
from openvino_xai.explanation.explanation_parameters import ExplainMode, ExplanationParameters, TargetExplainGroup,
PostProcessParameters
from openvino_xai.insertion.insertion_parameters import ClassificationInsertionParameters
import openvino_xai as xai
from openvino_xai.explainer.parameters import ExplainMode, ExplanationParameters, TargetExplainGroup, VisualizationParameters
from openvino_xai.inserter.parameters import ClassificationInsertionParameters


def preprocess_fn(x: np.ndarray) -> np.ndarray:
@@ -111,13 +110,13 @@ model = ov.Core().read_model("path/to/model.xml") # type: ov.Model
insertion_parameters = ClassificationInsertionParameters(
# target_layer="last_conv_node_name", # target_layer - node after which XAI branch will be inserted
embed_normalization=True, # True by default. If set to True, saliency map normalization is embedded in the model
explain_method_type=XAIMethodType.RECIPROCAM, # ReciproCAM is the default XAI method for CNNs
explain_method=xai.Method.RECIPROCAM, # ReciproCAM is the default XAI method for CNNs
)

# Explainer object will prepare and load the model once in the beginning
explainer = Explainer(
explainer = xai.Explainer(
model,
task_type=TaskType.CLASSIFICATION,
task=xai.Task.CLASSIFICATION,
preprocess_fn=preprocess_fn,
explain_mode=ExplainMode.WHITEBOX,
insertion_parameters=insertion_parameters,
@@ -131,7 +130,7 @@ explanation_parameters = ExplanationParameters(
target_explain_group=TargetExplainGroup.CUSTOM,
target_explain_labels=[11, 14], # target classes to explain, also ['dog', 'person'] is a valid input
label_names=voc_labels,
post_processing_parameters=PostProcessParameters(overlay=True), # by default, saliency map overlay over image
visualization_parameters=VisualizationParameters(overlay=True), # by default, saliency map overlay over image
)
explanation = explainer(image, explanation_parameters)

@@ -154,9 +153,8 @@ import cv2
import numpy as np
import openvino.runtime as ov

from openvino_xai.common.parameters import TaskType
from openvino_xai.explanation.explainer import Explainer
from openvino_xai.explanation.explanation_parameters import ExplainMode, ExplanationParameters
import openvino_xai as xai
from openvino_xai.explainer.explanation_parameters import ExplainMode, ExplanationParameters


def preprocess_fn(x: np.ndarray) -> np.ndarray:
@@ -176,9 +174,9 @@ def postprocess_fn(x: ov.utils.data_helpers.wrappers.OVDict):
model = ov.Core().read_model("path/to/model.xml") # type: ov.Model

# Explainer object will prepare and load the model once in the beginning
explainer = Explainer(
explainer = xai.Explainer(
model,
task_type=TaskType.CLASSIFICATION,
task=xai.Task.CLASSIFICATION,
preprocess_fn=preprocess_fn,
postprocess_fn=postprocess_fn,
explain_mode=ExplainMode.BLACKBOX,
@@ -212,9 +210,9 @@ Note: original model outputs are not affected and the model should be inferable

```python
import openvino.runtime as ov
import openvino_xai as ovxai
from openvino_xai.common.parameters import TaskType, XAIMethodType
from openvino_xai.insertion.insertion_parameters import ClassificationInsertionParameters

import openvino_xai as xai
from openvino_xai.inserter.parameters import ClassificationInsertionParameters


# Creating model
@@ -224,13 +222,13 @@ model = ov.Core().read_model("path/to/model.xml") # type: ov.Model
insertion_parameters = ClassificationInsertionParameters(
# target_layer="last_conv_node_name", # target_layer - node after which XAI branch will be inserted
embed_normalization=True, # True by default. If set to True, saliency map normalization is embedded in the model
explain_method_type=XAIMethodType.RECIPROCAM, # ReciproCAM is the default XAI method for CNNs
explain_method=xai.Method.RECIPROCAM, # ReciproCAM is the default XAI method for CNNs
)

# Inserting XAI branch into the model graph
model_xai = ovxai.insert_xai(
model_xai = xai.insert_xai(
model=model,
task_type=TaskType.CLASSIFICATION,
task=xai.Task.CLASSIFICATION,
insertion_parameters=insertion_parameters,
) # type: ov.Model

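The final Usage.md hunk covers the `insert_xai` entry point: the `ovxai` alias is gone in favor of `xai`, `task_type`/`TaskType` becomes `task`/`xai.Task`, and the insertion package moves to `openvino_xai.inserter`. A compact sketch with the new names, using the same placeholder model path as the rest of the document:

```python
import openvino.runtime as ov

import openvino_xai as xai
from openvino_xai.inserter.parameters import ClassificationInsertionParameters

# Creating model
model = ov.Core().read_model("path/to/model.xml")  # type: ov.Model

# Optional insertion parameters; explain_method_type/XAIMethodType became explain_method/xai.Method
insertion_parameters = ClassificationInsertionParameters(
    embed_normalization=True,               # True by default; embeds saliency-map normalization in the model
    explain_method=xai.Method.RECIPROCAM,   # ReciproCAM is the default XAI method for CNNs
)

# Inserting XAI branch into the model graph; original model outputs are not affected
model_xai = xai.insert_xai(
    model=model,
    task=xai.Task.CLASSIFICATION,
    insertion_parameters=insertion_parameters,
)  # type: ov.Model
```
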
4 changes: 2 additions & 2 deletions docs/api/source/api.rst
@@ -22,9 +22,9 @@ To explain the model (getting saliency maps), use openvino_xai.explanation
Algorithms
----------

To access/modify implemented XAI methods, use openvino_xai.algorithms
To access/modify implemented XAI methods, use openvino_xai.methods

.. automodule:: openvino_xai.algorithms
.. automodule:: openvino_xai.methods
:members:

Insertion
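
For code or docs that imported the XAI methods module directly, the api.rst change amounts to a one-line path update; only the package path appears in this diff, so no member names are assumed in the sketch below.

```python
# Before this PR the methods were documented under openvino_xai.algorithms:
#   import openvino_xai.algorithms
# After the rename the same module is exposed and documented as openvino_xai.methods:
import openvino_xai.methods
```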