Can someone please help me figure out why inpainting is not working for me while basic image generation seems to be working?
I have a folder with 500 images of an identity sampled from a few videos.
I tried training the basic lora model with the following flags, and it works:
lora_scale=0.5, prompt="style of <s1><s2>"
But when I tried training an inpainting model for the same dataset with the default inpainting flags, it gives garbage.
(I made a small change: I used --use_template="object" so that --placeholder_token_at_data="<krk>|<s1><s2>" does not get rid of the custom tokens and the object text templates are used.)
Without lora patching it looks fine -
prompt="photo of <s2>"
But as soon as I patch this model it gives garbage outputs -
lora_scale=0.0, prompt="photo of <s1>"
lora_scale=0.0, prompt="photo of <s2>"
lora_scale=0.5, prompt="photo of <s1><s2>"
lora_scale=0.5, prompt="photo of <s1>"
lora_scale=0.5, prompt="photo of <s2>"
I also tried varying lora_scale (from 0.0 to 0.5, as in the settings above), but that doesn't help. Different prompts didn't help either.
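For context on why lora_scale=0.0 is a telling data point: in a typical LoRA implementation the patched layer computes W + lora_scale * (up @ down), so a scale of exactly 0.0 should reproduce the unpatched model bit-for-bit. A minimal sketch of that behavior (the LoRALinear class below is hypothetical, not the actual code of the repo's patcher):

```python
import torch

class LoRALinear(torch.nn.Module):
    """Minimal LoRA-patched linear layer (illustrative sketch only).

    The effective computation is base(x) + lora_scale * (x @ down^T @ up^T),
    i.e. W_eff = W + lora_scale * (up @ down). With lora_scale=0.0 the
    output must match the unpatched base layer exactly.
    """

    def __init__(self, base: torch.nn.Linear, rank: int = 4,
                 lora_scale: float = 0.5):
        super().__init__()
        self.base = base
        out_features, in_features = base.weight.shape
        # Low-rank factors: down projects to rank, up projects back out.
        self.down = torch.nn.Parameter(torch.randn(rank, in_features) * 0.01)
        self.up = torch.nn.Parameter(torch.zeros(out_features, rank))
        self.lora_scale = lora_scale

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Base path plus the scaled low-rank residual.
        return self.base(x) + self.lora_scale * (x @ self.down.T @ self.up.T)
```

If garbage appears even at lora_scale=0.0, this scaling path suggests the problem is in the patching/monkeypatching step itself (e.g. how the layers or token embeddings get replaced), not in the learned LoRA weights.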