
Running with local optimized installation #15

Open
igoralvarezz opened this issue Sep 2, 2022 · 7 comments

@igoralvarezz

igoralvarezz commented Sep 2, 2022

First of all, great work with the plugin! It really makes the process of working with img2img more frictionless and enjoyable. Thanks for the effort!

I'd like to ask whether an option to run with a local installation (not diffusers-based, like the default one from CompVis) would be feasible in the current state of the plugin.
I ask because my local setup is pretty limited (a GTX 1050 Ti), but I've managed to run the optimized version without issues on my machine. It is basically like the original one, but with inference run at half precision; some options here are basujindal's optimized fork, Waifu Diffusion, or hlky's fork. I'd like to continue using it as the backend for koi, and since it also offers other sampling methods, I think it would solve issue #6 as well, and also make offline use possible.

If you think that's possible, even with some work from my side, I'm willing to try. Just let me know :)

@nousr
Owner

nousr commented Sep 3, 2022

It should totally be possible! There is a plan to eventually move to CompVis-based support. As you said, this should also allow different samplers.

If you are interested in tackling a migration, I recommend starting a branch and going with CompVis support as it will be the basis for anything else. From there we can branch out and look to support more optimized methods--or even start our own optimizations...

If you end up starting this, feel free to open a draft PR with some basic work, and we can discuss details further!

@nousr nousr added enhancement New feature or request help wanted Extra attention is needed labels Sep 3, 2022
@nousr
Owner

nousr commented Sep 3, 2022

let me know if you would like me to assign you to this task 👍

@igoralvarezz
Author

igoralvarezz commented Sep 5, 2022

Sure, I can try that.
I already have a Colab with the official version working, and some versions of the optimized code working as well (they all use the official repo as a base).
I only need to know how to hook them into the Flask backend (I only know Django so far; I haven't learned Flask yet). Does the backend receive info via an API and run a Python script with the received data as parameters? If so, it would be pretty straightforward to switch to other implementations, as most of them ship optimized_txt2img.py and optimized_img2img.py files to run the program.

Let me know and I can put something together for us to start :)
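As a rough illustration of the pattern described above (this is not koi's actual backend code), a Flask endpoint could receive the generation parameters as JSON and shell out to one of those optimized scripts. The route, the script name, and its CLI flags are all assumptions for illustration:

```python
# Hypothetical sketch only: a minimal Flask endpoint that receives
# generation parameters as JSON and shells out to an optimized script.
# The route, script name, and CLI flags are assumptions.
import subprocess
import sys

from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/img2img", methods=["POST"])
def img2img():
    params = request.get_json()
    cmd = [
        sys.executable, "optimized_img2img.py",  # assumed script name
        "--prompt", params["prompt"],
        "--init-img", params["init_img"],
        "--strength", str(params.get("strength", 0.75)),
    ]
    # Run the script and report stderr back to the plugin on failure.
    result = subprocess.run(cmd, capture_output=True, text=True)
    if result.returncode != 0:
        return jsonify({"error": result.stderr}), 500
    return jsonify({"output": result.stdout})
```

Under this design the plugin only needs to POST JSON to the endpoint, so swapping in a different implementation would mostly mean changing the command list.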

@papersplease

papersplease commented Sep 6, 2022

Currently, the most useful memory-optimized version is this one:
https://github.com/AUTOMATIC1111/stable-diffusion-webui
It has its own implementation and optimized attention, so it can even run on 2GB-VRAM GPUs, albeit slowly. It can also render huge resolutions or huge batches.
(Also, the other Krita plugin is based on it.)

@tocram1
Contributor

tocram1 commented Sep 6, 2022

it can even run on 2GB VRAM GPUs

Ohh? 👀👀👀

@papersplease

Ohh? 👀👀👀

With --lowvram --opt-split-attention, it fits into about 1.2GB of VRAM for a 512x512 render. I think it can even run on Kepler cards if you fiddle with older PyTorch versions. But inference in lowvram mode is several times slower and underutilizes the GPU, so --medvram --opt-split-attention is advised.

Having something like this in Krita would be great.
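For reference, the flags discussed above are passed on the webui's command line when launching it. Assuming that repo's launch.py entry point (the exact entry point and flag behavior depend on the repo revision), an invocation might look like:

```shell
# Memory-saving launch of AUTOMATIC1111's webui, flag names as quoted above.
python launch.py --medvram --opt-split-attention

# For ~2GB cards, trading speed for memory:
python launch.py --lowvram --opt-split-attention
```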

@RandomLegend

That other Krita plugin sadly doesn't work with AUTOMATIC1111's original webui via the Gradio API.

So it would be absolutely fantastic to have a Krita plugin that can hook into the Gradio API of sd-webui and/or the AUTOMATIC1111 webui.
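As a hedged sketch of what such a hookup could look like: AUTOMATIC1111's webui can expose an HTTP API when launched with the --api flag. The /sdapi/v1/txt2img route and the payload keys below are assumptions based on that project and may differ between versions:

```python
# Hedged sketch of a client for the HTTP API that AUTOMATIC1111's webui
# exposes when launched with --api. Route and payload keys are assumptions
# and may differ between webui versions.
import base64
import json
import urllib.request

API_URL = "http://127.0.0.1:7860"  # assumed default local webui address

def decode_image(b64_png: str) -> bytes:
    # The API returns images as base64-encoded PNG strings.
    return base64.b64decode(b64_png)

def txt2img(prompt: str, steps: int = 20) -> bytes:
    payload = {"prompt": prompt, "steps": steps, "width": 512, "height": 512}
    req = urllib.request.Request(
        f"{API_URL}/sdapi/v1/txt2img",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=600) as resp:
        body = json.load(resp)
    return decode_image(body["images"][0])
```

A Krita plugin could call something like this and load the returned PNG bytes into a layer.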
