
pipe.enable_model_cpu_offload() makes every image after first distorted #23

Open
Labels: bug Something isn't working

ledrose commented Feb 19, 2024

Found a weird problem with DeepCache. If you use CPU offload together with DeepCache, the first generated batch is normal (expected speedup and quality), but every subsequent batch is heavily distorted (and generation speeds up even further). This does not happen with pipe.enable_sequential_cpu_offload(), nor when running without offloading.
It can be worked around by calling helper.disable() after every batch and helper.enable() again before the next one, but since this is not required in a pipeline without CPU offload, I don't consider it expected behavior.
Code to reproduce this issue and an example of the distortion: https://www.kaggle.com/code/ledrose/deepcache-cpu-offload-bug

horseee (Owner) commented Feb 27, 2024

Hi @ledrose ,

We tested the issue you mentioned, and the likely cause is a conflict between CPU offload and our implementation of DeepCache (which hooks in a new forward function). When the model's submodules are called, the original model's forward function is reloaded, and the one needed by DeepCache is removed.
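A minimal, self-contained sketch of the conflict described above (hypothetical classes, not the real diffusers/DeepCache code): DeepCache replaces a module's forward with a caching wrapper at the instance level, but an offload-style hook that re-binds the class's original forward silently discards that wrapper after the first batch.

```python
class UNet:
    def forward(self, x):
        return x + 1  # stand-in for the full denoising computation

def enable_deepcache(module):
    # DeepCache-style patch: wrap the bound forward with a caching variant
    original = module.forward
    def cached_forward(x):
        return original(x)  # (caching logic elided)
    module.forward = cached_forward

def offload_reload(module):
    # CPU-offload-style behavior: re-binding the class forward onto the
    # instance discards the patched forward set by enable_deepcache
    module.forward = UNet.forward.__get__(module)

unet = UNet()
enable_deepcache(unet)
print(unet.forward.__name__)  # first batch: "cached_forward" (patch active)
offload_reload(unet)
print(unet.forward.__name__)  # later batches: "forward" (patch gone)
```

This is only an illustration of the hook-overwriting mechanism; in the real pipeline, losing the DeepCache forward mid-run is what produces the distorted batches after the first one.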

You can try the old version of our code here (implemented differently: it rewrites the forward function directly). We tested it and it is compatible with enable_model_cpu_offload(). However, you need to downgrade diffusers to ~0.24.0 (higher versions may raise import errors, since the diffusers API changed).

frostyplanet pushed a commit to frostyplanet/inference that referenced this issue Sep 14, 2024
Test results show that DeepCache should be loaded before cpu_offloading;
otherwise, it may cause issues like
horseee/DeepCache#23
vladmandic/automatic#2888

Signed-off-by: wxiwnd <[email protected]>