Enable unit test for wasi-nn WinML backend. #8442
Conversation
Before a detailed review, we need to determine if adding these dependencies is OK.
crates/test-programs/Cargo.toml (outdated)

```toml
@@ -20,3 +20,6 @@ futures = { workspace = true, default-features = false, features = ['alloc'] }
url = { workspace = true }
sha2 = "0.10.2"
base64 = "0.21.0"
# image and ndarray are used by nn_image_classification_onnx for image preprocessing.
image = { version = "0.24.6", default-features = false, features = ["jpeg"] }
```
We had avoided adding image and ndarray dependencies in favor of using the raw image used in the openvino test. What you did with the dog image is pretty close to what I had originally. @abrown, WDYT?
Image pre-processing and post-processing are copied from your classification-component-onnx example. If new dependencies are not allowed, we can do the preprocessing offline and just put an RGB file here.
Personally, I like showing the image prep / processing as it provides guidance to folks learning how to get started. In fact, it'd probably be helpful to add more comments / documentation to samples to explain what is happening. I think we, as practitioners, may be too accustomed to the transformations we regularly do to realize how odd they may appear to the naive observer.
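For readers getting started, here is a minimal sketch of the kind of in-test preprocessing being discussed, assuming only the proposed `image` dependency. The 224x224 size and the plain 0..1 scaling are illustrative assumptions, not the PR's exact code:

```rust
use image::imageops::FilterType;

// Decode a JPEG, resize it, and lay the pixels out as an NCHW f32 tensor.
// The target size and normalization are assumptions for illustration; the
// real model may expect different values.
fn preprocess(path: &str) -> Result<Vec<f32>, Box<dyn std::error::Error>> {
    let rgb = image::open(path)?
        .resize_exact(224, 224, FilterType::Triangle)
        .to_rgb8();
    // NCHW layout: [batch, channel, height, width], scaled to 0.0..1.0.
    // Real models often also subtract a per-channel mean and divide by a
    // standard deviation.
    let mut tensor = vec![0f32; 3 * 224 * 224];
    for (x, y, pixel) in rgb.enumerate_pixels() {
        for c in 0..3 {
            tensor[c * 224 * 224 + y as usize * 224 + x as usize] =
                pixel[c] as f32 / 255.0;
        }
    }
    Ok(tensor)
}
```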
My original idea was to use an image as input for all backends, so we can cross-check the results for correctness.
To avoid introducing extra dependencies, the image is replaced with processed tensor data (000000062808.rgb). I believe this is the same image used for the openvino backend (tensorization for openvino).
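A minimal sketch of what that offline approach looks like at test time, assuming 000000062808.rgb already contains the normalized tensor bytes (the path and helper name are illustrative, not the PR's actual code):

```rust
// Illustrative only: the actual test may embed or locate the fixture
// differently. No decoding, resizing, or normalization happens here;
// that was done offline when the .rgb file was generated.
fn load_input_tensor() -> std::io::Result<Vec<u8>> {
    std::fs::read("000000062808.rgb")
}
```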
Force-pushed from cd8785a to f1b030c
Force-pushed from 7e52717 to 78f0450
This test was disabled because the GitHub Actions Windows Server image doesn't include the desktop experience. But it looks like we can use a standalone WinML binary downloaded from the ONNX Runtime project. The wasi-nn WinML backend and the ONNX Runtime backend now share the same test code, as they accept the same input and are expected to produce the same result. This change also makes the wasi-nn WinML backend a default feature. prtest:full
```rust
let mut results: Vec<InferenceResult> = probabilities
    .iter()
    .skip(1)
```
Removing this line because it's likely a workaround for this specific openvino model only. If mobilenet-v1-0.25-128 is the model used in the openvino test, it may have an additional class 0 for background. The shape (1, 1001) also shows it has one more value than the ONNX model's (1, 1000).
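For context, a sketch of the surrounding post-processing, assuming `InferenceResult` pairs a class index with its probability (the type lives in the test code; its exact shape here is an assumption). With a 1000-class ONNX model the scores map directly to class indices, so no `.skip(1)` is needed; only a 1001-class model with a background class at index 0 would need the skip or an index offset:

```rust
// Assumed shape of the result type: (class index, probability).
struct InferenceResult(usize, f32);

// Rank all classes by probability and keep the top k.
fn top_k(probabilities: &[f32], k: usize) -> Vec<InferenceResult> {
    let mut results: Vec<InferenceResult> = probabilities
        .iter()
        .enumerate()
        .map(|(i, &p)| InferenceResult(i, p))
        .collect();
    // Highest probability first.
    results.sort_by(|a, b| b.1.partial_cmp(&a.1).unwrap());
    results.truncate(k);
    results
}
```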
LGTM! Test results are the same across the impls.
Thank you for documenting how the raw image was created. Great work!
@abrown tag, you're it
LGTM!
This test was disabled because the GitHub Actions Windows Server image doesn't include the desktop experience. But it looks like we can use a standalone WinML binary downloaded from the ONNX Runtime project.
The wasi-nn WinML backend and the ONNX Runtime backend now share the same test code, since they accept the same input and are expected to produce the same result. Pre-processing and post-processing are added to nn_image_classification_onnx to improve its accuracy.
This change also makes the wasi-nn WinML backend a default feature, as it's now covered by a test.
Fixes #8391.
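As an aside, a hypothetical sketch of why one test body can cover both backends: they receive the same raw tensor and are expected to rank the same class first, so a single comparison works (the helper name and signature are illustrative, not the PR's actual code):

```rust
// Hypothetical helper: assert two backends agree on the top-1 class
// for the same input tensor.
fn assert_same_top_class(winml_probs: &[f32], onnx_probs: &[f32]) {
    let argmax = |p: &[f32]| -> usize {
        p.iter()
            .enumerate()
            .max_by(|a, b| a.1.partial_cmp(b.1).unwrap())
            .map(|(i, _)| i)
            .unwrap()
    };
    assert_eq!(argmax(winml_probs), argmax(onnx_probs));
}
```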