Fix torch bfloat16 -> numpy float32 conversion for compile max-autotune
Tested on an A6000 Ada with Torch 2.2.0.dev20231026. When running the compiled, mixed-precision model, `iou_predictions` comes back in `bfloat16`, which cannot be converted to a NumPy array automatically. As a workaround, I propose casting those tensors to float before passing them to NumPy, which avoids the problem. Note that float32 is a better target than half: `bfloat16` shares float32's exponent range, so the cast is lossless, whereas half (float16) has narrower representation limits and can overflow. Regards.
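A minimal sketch of the workaround, not the exact repo diff; the `iou_predictions` name follows the description above, and the random tensor is just a hypothetical stand-in for the compiled model's output:

```python
import numpy as np
import torch

# Hypothetical stand-in for the bfloat16 output of the compiled,
# mixed-precision model.
iou_predictions = torch.rand(1, 3, dtype=torch.bfloat16)

# Direct conversion fails, since NumPy has no bfloat16 dtype:
# iou_predictions.cpu().numpy()  # TypeError: Got unsupported ScalarType BFloat16

# Workaround: cast to float32 first, then convert. The cast is lossless
# because bfloat16 shares float32's exponent range.
iou_np = iou_predictions.float().cpu().numpy()
assert iou_np.dtype == np.float32
```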