
Supporting partial image sampling like in img2img #11

Open
reesekneeland opened this issue Mar 15, 2023 · 0 comments

Comments

@reesekneeland

In the original Stable Diffusion and other diffusion models, the sampler supports img2img: you supply a base image (encoded into the latent Z vector), a strength parameter between 0 and 1, and a new prompt. The sampler adds noise to Z in proportion to the strength, then denoises it conditioned on the CLIP embedding of the prompt.
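
For concreteness, the noising half of img2img boils down to mapping `strength` to a starting timestep and applying the standard forward diffusion process. A rough, self-contained sketch (the names here are mine, not this repo's API):

```python
import torch

def partial_noise(z0: torch.Tensor, strength: float, alphas_cumprod: torch.Tensor):
    """Noise a clean latent z0 up to a timestep picked by `strength`.

    strength=0 leaves z0 essentially untouched; strength=1 starts from
    (almost) pure noise, matching the usual img2img behaviour.
    """
    num_steps = alphas_cumprod.shape[0]
    t_start = min(int(strength * num_steps), num_steps - 1)
    a_bar = alphas_cumprod[t_start]
    noise = torch.randn_like(z0)
    # standard forward process: z_t = sqrt(a_bar) * z0 + sqrt(1 - a_bar) * noise
    z_t = a_bar.sqrt() * z0 + (1.0 - a_bar).sqrt() * noise
    return z_t, t_start
```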

I was hoping this model could offer similar functionality, but with that final denoising step conditioned on both the image CLIP and the text CLIP. That is, you would give it a base image (converted to Z and img_clip), a prompt, and a strength parameter; the sampler would add noise to Z and then denoise it conditioned on both img_clip and prompt_clip.
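
Roughly the loop I'm imagining, starting from the partially noised latent produced by `partial_noise` above and denoising it with both embeddings as conditioning. Everything here (the `DummyDenoiser`, the `image_cond`/`text_cond` argument names, the DDIM-style update) is a placeholder sketch of my own, not this repo's actual interface:

```python
import torch
import torch.nn as nn

class DummyDenoiser(nn.Module):
    # Stand-in for the real denoiser; a real model would predict the noise
    # in z_t given the timestep plus BOTH conditioning embeddings.
    def forward(self, z_t, t, image_cond, text_cond):
        return torch.zeros_like(z_t)

def denoise_dual_cond(z_t, t_start, img_clip, prompt_clip, alphas_cumprod, denoiser):
    """Denoise z_t from t_start back to 0, conditioned on both the image CLIP
    embedding and the prompt CLIP embedding (deterministic DDIM-style update)."""
    for t in range(t_start, -1, -1):
        eps = denoiser(z_t, t, image_cond=img_clip, text_cond=prompt_clip)
        a_t = alphas_cumprod[t]
        a_prev = alphas_cumprod[t - 1] if t > 0 else torch.tensor(1.0)
        # estimate the clean latent, then step to t-1 (eta = 0)
        z0_pred = (z_t - (1.0 - a_t).sqrt() * eps) / a_t.sqrt()
        z_t = a_prev.sqrt() * z0_pred + (1.0 - a_prev).sqrt() * eps
    return z_t

# toy end-to-end check with made-up latent and embedding shapes
alphas_cumprod = torch.linspace(0.9999, 0.01, 1000)
z_t, t_start = partial_noise(torch.randn(1, 4, 8, 8), 0.6, alphas_cumprod)
out = denoise_dual_cond(z_t, t_start, torch.randn(1, 768), torch.randn(1, 77, 768),
                        alphas_cumprod, DummyDenoiser())
```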

Is a feature like this possible given the current noise scheduling system?
