A question for the author: why use an encoder finetuned on low-quality images? My intuition is that the encoder trained to reconstruct the original high-quality images should serve as the base, with the ControlNet then controlling (and compensating for) the inaccurate encoding of low-quality inputs. That way, both the high-quality image features and the low-quality denoising process are preserved. If the encoder is instead something finetuned on low-quality images, and the ControlNet then continues "denoising" on top of that LQ-finetuned encoder, it feels a bit odd to me. Perhaps my understanding is off; I'd appreciate a reply from the author, and a related ablation study would be even better. Thanks!
Sorry, I think I misremembered: the finetuned encoder is from the SUPIR paper.
SUPIR's VAE encoder should also be finetuned on LQ-HQ data pairs, with the goal of removing degradation, similar to DiffBIR's stage-1 model.
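To illustrate the degradation-removal objective described above, here is a minimal toy sketch (NumPy, not the actual SUPIR/DiffBIR code; the linear "encoder", dimensions, and learning rate are all hypothetical): a mapping is finetuned on LQ-HQ pairs so that encoding a degraded input reproduces the clean target.

```python
import numpy as np

# Toy sketch of LQ->HQ encoder finetuning (hypothetical shapes and names;
# real SUPIR/DiffBIR encoders are deep VAEs trained on image pairs).
rng = np.random.default_rng(0)

d = 8                                        # toy signal dimension
W = rng.normal(size=(d, d)) * 0.1            # "encoder" weights being finetuned

hq = rng.normal(size=(100, d))               # clean (HQ) targets
lq = hq + 0.5 * rng.normal(size=hq.shape)    # degraded (LQ) inputs

def loss(W):
    # encoder(LQ) should match the clean target, i.e. remove degradation
    pred = lq @ W
    return float(np.mean((pred - hq) ** 2))

losses = [loss(W)]
lr = 0.05
for _ in range(200):
    pred = lq @ W
    grad = 2 * lq.T @ (pred - hq) / len(lq)  # gradient of the MSE w.r.t. W
    W -= lr * grad
    losses.append(loss(W))

print(losses[-1] < losses[0])  # finetuning reduces the degradation-removal loss
```

The point of the sketch is only the training signal: the loss compares the encoder's output on degraded inputs against clean targets, so the encoder itself learns to undo degradation before the diffusion/ControlNet stage ever sees the latent.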