examples: update some examples to use the new task execution engine
The Python scripts for the following 3 examples have been converted into the new declarative YAML
file format:

- tensorflow-mnist-classifier
- tensorflow-mnist-model-inversion
- tensorflow-mnist-pixel-threshold

The original demos that used Python scripts have been moved into "legacy" folders in the examples
directory.

Closes #103
jtsextonMITRE authored and jkglasbrenner committed Sep 8, 2023
1 parent 4eafbd6 commit c2a6b73
Showing 29 changed files with 3,832 additions and 124 deletions.
94 changes: 94 additions & 0 deletions examples/task-plugins/dioptra_custom/evaluation/mlflow.py
@@ -0,0 +1,94 @@
# This Software (Dioptra) is being made available as a public service by the
# National Institute of Standards and Technology (NIST), an Agency of the United
# States Department of Commerce. This software was developed in part by employees of
# NIST and in part by NIST contractors. Copyright in portions of this software that
# were developed by NIST contractors has been licensed or assigned to NIST. Pursuant
# to Title 17 United States Code Section 105, works of NIST employees are not
# subject to copyright protection in the United States. However, NIST may hold
# international copyright in software created by its employees and domestic
# copyright (or licensing rights) in portions of software that were assigned or
# licensed to NIST. To the extent that NIST holds copyright in this software, it is
# being made available under the Creative Commons Attribution 4.0 International
# license (CC BY 4.0). The disclaimers of the CC BY 4.0 license apply to all parts
# of the software developed or licensed by NIST.
#
# ACCESS THE FULL CC BY 4.0 LICENSE HERE:
# https://creativecommons.org/licenses/by/4.0/legalcode
"""A task plugin module for using the MLFlow model registry."""

from __future__ import annotations

from typing import Optional

import mlflow
import structlog
from mlflow.entities.model_registry import ModelVersion
from mlflow.tracking import MlflowClient
from structlog.stdlib import BoundLogger

from dioptra import pyplugs

LOGGER: BoundLogger = structlog.stdlib.get_logger()


@pyplugs.register
def add_model_to_registry(name: str, model_dir: str) -> Optional[ModelVersion]:
    """Registers a trained model logged during the current run to the MLflow registry.

    Args:
        name: The registration name to use for the model.
        model_dir: The relative artifact directory where MLflow logged the model
            trained during the current run.

    Returns:
        A :py:class:`~mlflow.entities.model_registry.ModelVersion` object created by
        the backend, or ``None`` if ``name`` is empty.
    """
    if not name.strip():
        return None

    active_run = mlflow.active_run()

    run_id: str = active_run.info.run_id
    artifact_uri: str = active_run.info.artifact_uri
    source: str = f"{artifact_uri}/{model_dir}"

    registered_models = [x.name for x in MlflowClient().list_registered_models()]

    if name not in registered_models:
        LOGGER.info("create registered model", name=name)
        MlflowClient().create_registered_model(name=name)

    LOGGER.info("create model version", name=name, source=source, run_id=run_id)
    model_version: ModelVersion = MlflowClient().create_model_version(
        name=name, source=source, run_id=run_id
    )

    return model_version
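
# Hypothetical usage sketch (not part of this module): registering a model that a
# training step has already logged under the "model" artifact directory of the
# active MLflow run. A reachable MLflow tracking server is assumed, and the
# registry name below is illustrative only.
#
#     import mlflow
#
#     with mlflow.start_run():
#         ...  # train the model and log it under the "model" artifact directory
#         version = add_model_to_registry(name="mnist_classifier", model_dir="model")
#         if version is not None:
#             print(f"Registered {version.name} as version {version.version}")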


@pyplugs.register
def get_experiment_name() -> str:
    """Gets the name of the experiment for the current run.

    Returns:
        The name of the experiment.
    """
    active_run = mlflow.active_run()

    experiment_name: str = (
        MlflowClient().get_experiment(active_run.info.experiment_id).name
    )
    LOGGER.info(
        "Obtained experiment name of active run", experiment_name=experiment_name
    )

    return experiment_name
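
# Hypothetical usage sketch (not part of this module): reading the experiment name
# from inside an active run, for example to tag downstream artifacts. The experiment
# name is illustrative only.
#
#     import mlflow
#
#     mlflow.set_experiment("mnist")
#     with mlflow.start_run():
#         experiment_name = get_experiment_name()
#         mlflow.set_tag("parent_experiment", experiment_name)
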
22 changes: 22 additions & 0 deletions examples/tensorflow-mnist-classifier-legacy/README.md
@@ -0,0 +1,22 @@
# TensorFlow MNIST Classifier demo (legacy)

This example demonstrates how to run a simple experiment on the transferability of the fast gradient method (FGM) evasion attack between two neural network architectures.
The demo can be found in the Jupyter notebook file [demo.ipynb](demo.ipynb).

## Running the example

To prepare your environment for running this example, follow the linked instructions below:

1. [Create and activate a Python virtual environment and install the necessary dependencies](../README.md#creating-a-virtual-environment)
2. [Download the MNIST dataset using the download_data.py script.](../README.md#downloading-datasets)
3. [Follow the links in these User Setup instructions](../../README.md#user-setup) to do the following:
- Build the containers
- Use the cookiecutter template to generate the scripts, configuration files, and Docker Compose files you will need to run Dioptra
4. [Edit the docker-compose.yml file to mount the data folder in the worker containers](../README.md#mounting-the-data-folder-in-the-worker-containers)
5. [Initialize and start Dioptra](https://pages.nist.gov/dioptra/getting-started/running-dioptra.html#initializing-the-deployment)
6. [Register the custom task plugins for Dioptra's examples and demos](../README.md#registering-custom-task-plugins)
7. [Register the queues for Dioptra's examples and demos](../README.md#registering-queues)
8. [Start JupyterLab and open `demo.ipynb`](../README.md#starting-jupyter-lab)

Steps 1–4 and 6–7 only need to be run once.
**Returning users only need to repeat Steps 5 (if you stopped Dioptra using `docker compose down`) and 8 (if you stopped the `jupyter lab` process)**.
