pipe.enable_model_cpu_offload() makes every image after first distorted #23
Hi @ledrose, we tested the issue you mentioned, and the likely cause is a conflict between CPU offloading and our implementation of DeepCache (which hooks a new forward function onto the model). When the model's submodules are called, the original model's forward function is loaded back, and the one DeepCache needs is removed. You can try our old version of the code here (implemented differently: it rewrites the forward function directly). We tested it and it is compatible with enable_model_cpu_offload(). However, you need to downgrade diffusers to ~0.24.0 (higher versions may raise import errors, since the diffusers API has changed).
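The conflict described above can be illustrated with a minimal, self-contained sketch (this is not the actual DeepCache or diffusers code; the class and function names here are invented for illustration). DeepCache patches a new forward onto the module instance; an offload-style hook that restores the class-defined forward silently discards that patch after the first call:

```python
# Hypothetical sketch of the hook conflict: an instance-level forward patch
# (DeepCache-style) is clobbered when an offload-style hook restores the
# class-defined forward after each call.

class UNet:
    def forward(self):
        return "original"

unet = UNet()

# DeepCache-style patch: hook a new forward onto the instance.
original_forward = unet.forward
unet.forward = lambda: "deepcache(" + original_forward() + ")"

def offload_style_call(module):
    # Offload-style behavior: run the forward, then rebind the
    # class-defined forward -- discarding any instance-level patch.
    result = module.forward()
    module.forward = type(module).forward.__get__(module)
    return result

print(offload_style_call(unet))  # first call still uses the patched forward
print(offload_style_call(unet))  # the patch is gone from the second call on
```

The first call goes through the patched forward; every later call bypasses it, which matches the "first batch fine, later batches broken" symptom in this issue.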
Test results show that DeepCache should be loaded before CPU offloading; otherwise it may cause issues like horseee/DeepCache#23 and vladmandic/automatic#2888. Signed-off-by: wxiwnd <[email protected]>
Found a weird problem with DeepCache. If you use CPU offload with DeepCache, the first generated batch is normal (expected speedup and quality), but all subsequent batches are very distorted (and generation speeds up even more). This does not happen with pipe.enable_sequential_cpu_offload() or when running without offload.
It can be fixed by calling helper.disable() after every batch and helper.enable() again before the next batch, but since this is not required in a pipeline without CPU offload, I decided not to consider it expected behavior.
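The disable-after-every-batch workaround can be wrapped so it is hard to forget. A minimal sketch, assuming a DeepCache helper object exposing enable() and disable() as described above (the `deepcache_enabled` name is my own, not part of DeepCache):

```python
from contextlib import contextmanager

@contextmanager
def deepcache_enabled(helper):
    # Re-enable DeepCache for each batch and tear it down afterwards,
    # so offload hooks never run against a stale patched forward.
    helper.enable()
    try:
        yield helper
    finally:
        helper.disable()
```

Usage would then look like `with deepcache_enabled(helper): images = pipe(prompt).images` for each batch, guaranteeing the enable/disable pairing even if generation raises an exception.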
Code for reproducing this issue and an example of the distortion: https://www.kaggle.com/code/ledrose/deepcache-cpu-offload-bug