Text_encoder_monkey_patching #51
abhibeats95 started this conversation in General
Hi there! I wanted to flag a possible problem in your code. When training with LoRA and `train_text_encoder=False`, the text encoder is not fine-tuned. During inference, however, there is a "monkey patching" step that patches the text encoder as if it had been fine-tuned. If the text encoder was never fine-tuned, the pipeline should not expect it to be. Correct me if I am wrong.
```python
#markdown Enable text encoder training?
TRAIN_TEXT_ENCODER = False

TRAIN_TEXT_ENCODER_FLAG = ""
if TRAIN_TEXT_ENCODER:
    TRAIN_TEXT_ENCODER_FLAG = "--train_text_encoder"
```
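For reference, here is a minimal sketch (not this repo's actual code) of how an inference script could inspect a LoRA checkpoint before monkey patching the text encoder. The checkpoint path and the `text_encoder` key prefix are assumptions for illustration; the real key names depend on how the weights were saved.

```python
# Minimal sketch, NOT this repo's implementation: only monkey patch the
# text encoder if the checkpoint actually contains text-encoder LoRA tensors.
from safetensors.torch import load_file

LORA_PATH = "lora_weights.safetensors"  # hypothetical path
state_dict = load_file(LORA_PATH)

# Assumed key naming: text-encoder tensors carry a "text_encoder" prefix.
text_encoder_keys = [k for k in state_dict if k.startswith("text_encoder")]

if text_encoder_keys:
    print(f"{len(text_encoder_keys)} text-encoder LoRA tensors found; patching is expected.")
else:
    print("No text-encoder LoRA tensors; skip monkey patching (train_text_encoder was False).")
```

With a guard like this, a run trained with `TRAIN_TEXT_ENCODER = False` would simply leave the stock text encoder in place at inference instead of expecting fine-tuned weights that were never produced.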