[Feature]: maybe openvino_disable_model_caching should be true by default #3441
Comments
Model caching reduces the startup time from minutes to a few seconds if a cache for the model already exists. There is a wiki page for Model Caching: https://github.com/vladmandic/automatic/wiki/OpenVINO#model-caching
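For reference, a minimal sketch of what that model-caching setting maps to at the OpenVINO runtime level; the cache directory and model path below are illustrative placeholders, not the paths SD.Next actually uses:

```python
# Minimal sketch: enabling OpenVINO's on-disk model cache via the runtime API.
# "/path/to/cache" and "model.xml" are placeholders for this example only.
from openvino.runtime import Core

core = Core()
# Compiled blobs are written to CACHE_DIR and reused on later compile_model()
# calls for the same model/device, which is where the startup-time savings come from.
core.set_property({"CACHE_DIR": "/path/to/cache"})
compiled_model = core.compile_model("model.xml", "GPU")
```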
I'm open to suggestions, but disabling cache by default is not a likely one.
I will come back with more data, since I don't remember noticing unusual latency with the OpenVINO cache disabled. Are we doing any kind of GC for the cache? Or maybe just check the free disk space and skip writing the cache if it is below some reasonable value? Or at least show a warning in the UI so users know? I only noticed the disk was full because of an error message saying the generated image couldn't be saved due to the disk being full.
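The free-disk-space check suggested above could look roughly like the sketch below; the 10 GB threshold and the helper name are hypothetical, not existing SD.Next code:

```python
# Hypothetical sketch of the "skip cache writes when disk is low" idea;
# the 10 GB threshold and function name are made up for illustration.
import shutil

def cache_write_allowed(path: str, min_free_gb: float = 10.0) -> bool:
    """Return False when the filesystem holding `path` has little free space."""
    free_gb = shutil.disk_usage(path).free / (1024 ** 3)
    return free_gb >= min_free_gb

# Check the drive the cache directory would live on before enabling caching.
if not cache_write_allowed("."):
    print("Warning: low disk space, skipping the OpenVINO model cache")
```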
Did a quick test on one computer.

Hardware: CPU AMD 3700X, GPU Intel ARC A580.

Fresh start with caching off, run multiple times, it always looks like this:

real 0m19.838s

Then set cache in the program to True and run again:

real 0m14.622s

The difference is only 5 seconds. Also, the first run with cache on is always around 24 seconds; I think the reason is that it needs time to write the cache to disk. The generated cache dir is around 7 GB for the 2 GB checkpoint used in this test.

```python
#!/home/leptonwu/automatic/venv_openvino/bin/python
import torch
import openvino.torch  # registers the "openvino" backend for torch.compile
from openvino.runtime import Core
from openvino import properties
from diffusers import StableDiffusionPipeline

device = 'GPU'
cache = True

# Write compiled models to a local "cache" directory
Core().set_property({properties.cache_dir: "cache"})

pipe = StableDiffusionPipeline.from_single_file(
    "realisticVisionV60B1_v51HyperVAE.safetensors")
pipe.unet = torch.compile(
    pipe.unet,
    backend="openvino",
    options={"device": device, "model_caching": cache},
)

torch.xpu.manual_seed_all(0)
image = pipe("black cat", guidance_scale=1.5, width=512, height=512, num_inference_steps=6)
image.images[0].save("out.jpg")
```
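For completeness, the wall-clock numbers above could also be reproduced without the external `time` command with something like the following; `test_openvino_cache.py` is just a placeholder name for the snippet above saved to a file:

```python
# Rough timing harness for the script above; "test_openvino_cache.py" is a
# placeholder filename for the snippet saved locally.
import subprocess
import time

for run in range(3):
    start = time.perf_counter()
    subprocess.run(["python", "test_openvino_cache.py"], check=True)
    print(f"run {run}: {time.perf_counter() - start:.1f}s")
```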
Issue Description
I am trying CPU diffusion and OpenVINO seems to give me the best performance: I can generate a 512x512 image with the recommended settings for Realistic Vision V6.0 B1 in around 40 seconds on a 5700G. The only issue is that when I try different models and configurations, the cache directory keeps growing until I run out of disk space (I had around 65G free after installation and the cache grew to 65G). I did some investigation and it seems the disk cache is on by default for OpenVINO. I am not sure the cache really matters much, since I didn't notice an obvious performance issue after I disabled caching.
Maybe we should disable the OpenVINO model cache by default. Or at least state it explicitly somewhere to remind users that the cache can grow rapidly if they keep trying different models; a rough sketch of such a warning follows.
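The warning suggested above could be as simple as measuring the cache directory at startup; the directory name and the 20 GB threshold below are illustrative only, not existing SD.Next settings:

```python
# Illustrative sketch of a "cache is getting large" warning; the "cache"
# directory name and 20 GB threshold are placeholders for this example.
from pathlib import Path

def dir_size_gb(path: str) -> float:
    p = Path(path)
    if not p.exists():
        return 0.0
    return sum(f.stat().st_size for f in p.rglob("*") if f.is_file()) / (1024 ** 3)

size = dir_size_gb("cache")
if size > 20:
    print(f"Warning: OpenVINO model cache is {size:.1f} GB and will keep growing "
          "as new models are compiled; clear it or disable model caching if disk space is tight.")
```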
Version Platform Description
No response
Relevant log output
No response
Backend
Diffusers
UI
Standard
Branch
Master
Model
StableDiffusion 1.5
Acknowledgements