Use torch_tensorrt.Device instead of torch.device in trt compile (#8051)
Fixes #8050

### Description

Pass a `torch_tensorrt.Device` instead of a `torch.device` as the `device` argument when embedding the serialized TensorRT engine in `_onnx_trt_compile`, matching the type that `torch_tensorrt.ts.embed_engine_in_new_module` expects.

### Types of changes
- [x] Non-breaking change (fix or new feature that would not break
existing functionality).
- [ ] Breaking change (fix or new feature that would cause existing
functionality to change).
- [ ] New tests added to cover the changes.
- [ ] Integration tests passed locally by running `./runtests.sh -f -u
--net --coverage`.
- [ ] Quick tests passed locally by running `./runtests.sh --quick
--unittests --disttests`.
- [ ] In-line docstrings updated.
- [ ] Documentation updated, tested `make html` command in the `docs/`
folder.

Signed-off-by: YunLiu <[email protected]>
KumoLiu authored Aug 29, 2024
1 parent b6d6d77 commit 29ce1a7
Showing 1 changed file with 1 addition and 1 deletion.
monai/networks/utils.py (1 addition, 1 deletion)

```diff
@@ -851,7 +851,7 @@ def _onnx_trt_compile(
     # wrap the serialized TensorRT engine back to a TorchScript module.
     trt_model = torch_tensorrt.ts.embed_engine_in_new_module(
         f.getvalue(),
-        device=torch.device(f"cuda:{device}"),
+        device=torch_tensorrt.Device(f"cuda:{device}"),
         input_binding_names=input_names,
         output_binding_names=output_names,
     )
```
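For readers without the surrounding MONAI context: the fix changes only the type of the `device` argument. Both the old and new calls format the same `cuda:<index>` spec string from an integer GPU index; the change wraps it in `torch_tensorrt.Device` rather than `torch.device`. A minimal sketch of that formatting, with `make_trt_device_spec` a hypothetical helper (not part of MONAI), which assumes `device` is an integer GPU index as in `_onnx_trt_compile`:

```python
def make_trt_device_spec(device: int) -> str:
    """Build the 'cuda:<index>' spec string used in the diff above.

    The fixed code passes torch_tensorrt.Device(make_trt_device_spec(device))
    in place of torch.device(make_trt_device_spec(device)); the string itself
    is unchanged.
    """
    return f"cuda:{device}"

print(make_trt_device_spec(0))  # "cuda:0"
```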
