In the paper, you report a negative result that

> WGAN training becomes unstable at times when one uses a momentum based optimizer such as Adam [8] (with β₁ > 0) on the critic, or when one uses high learning rates.
You advocate using RMSProp for the discriminator instead. Yet in the implementation, although RMSProp is the default, there is an option to use Adam (line 144). Is this included for consistency with your evaluation, or have you found settings for which Adam is effective with the WGAN?
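For concreteness, here is a minimal sketch of what such an optimizer toggle might look like in PyTorch. The function and argument names (`build_critic_optimizer`, `use_adam`, `lr`, `beta1`) are hypothetical illustrations, not taken from the repository, and the default values are only the commonly cited ones:

```python
# Hypothetical sketch of the RMSProp/Adam toggle discussed above.
# Names and defaults are assumptions, not the repo's actual flags.
import torch.optim as optim

def build_critic_optimizer(netD, use_adam=False, lr=5e-5, beta1=0.5):
    """Return the critic optimizer: RMSProp by default, Adam if requested."""
    if use_adam:
        # The paper reports instability with momentum-based optimizers
        # (Adam with beta1 > 0) on the critic.
        return optim.Adam(netD.parameters(), lr=lr, betas=(beta1, 0.999))
    # RMSProp, as recommended in the WGAN paper.
    return optim.RMSprop(netD.parameters(), lr=lr)
```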