
How do the encoder, target classifier, and adv in the code reverse the gradient? #2

Open
xyz321123 opened this issue Apr 17, 2023 · 1 comment


@xyz321123

In the src/training_loops/simple_loop.py file, I see that the first red box is the adversarial loss and the second red box is the total loss for the encoder and target classifier. But why is only the total loss backpropagated in the third red box? What about the adversarial loss?
[image: annotated screenshot of simple_loop.py]

@xyz321123 xyz321123 reopened this Apr 17, 2023
@saist1993
Owner

Hey! At line 74 we add the main loss and the aux loss (the adversarial loss). Thus, when backward is called at line 85, the gradients are propagated to both the adversarial branch and the classifier branch.
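To make the mechanism concrete, here is a minimal, hypothetical sketch of that pattern (not the repository's actual code, and the layer shapes are made up): summing the two losses into one scalar and calling `backward()` once sends gradients through both branches, because autograd traces both terms back through the shared encoder.

```python
import torch

# Hypothetical stand-ins for the encoder and the two heads.
enc = torch.nn.Linear(4, 4)
clf = torch.nn.Linear(4, 2)  # target classifier branch
adv = torch.nn.Linear(4, 2)  # adversarial branch

x = torch.randn(8, 4)
z = enc(x)

main_loss = clf(z).pow(2).mean()
aux_loss = adv(z).pow(2).mean()

# Analogous to "line 74": combine the losses into one scalar...
total_loss = main_loss + aux_loss
# ...and to "line 85": a single backward populates gradients
# in the encoder AND in both branches.
total_loss.backward()

assert enc.weight.grad is not None
assert clf.weight.grad is not None
assert adv.weight.grad is not None
```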

As for how the gradient gets reversed: in the model file there is a gradient reversal function, which flips the sign of the gradient flowing from the adversarial branch into the encoder.
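For reference, a gradient reversal layer is commonly implemented with a custom `torch.autograd.Function`: identity in the forward pass, and gradients multiplied by `-lambd` in the backward pass. This is a sketch of that standard pattern, not necessarily the exact function in this repository's model file:

```python
import torch
from torch.autograd import Function

class GradReverse(Function):
    """Identity forward; multiplies the incoming gradient by -lambd on backward."""

    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)  # identity in the forward pass

    @staticmethod
    def backward(ctx, grad_output):
        # Reverse (and scale) the gradient; None for the lambd argument.
        return grad_output.neg() * ctx.lambd, None

def grad_reverse(x, lambd=1.0):
    return GradReverse.apply(x, lambd)

# The encoder output is passed through grad_reverse before the adversary,
# so minimizing the adversarial loss pushes the encoder in the opposite
# direction (toward features the adversary cannot exploit).
x = torch.ones(3, requires_grad=True)
grad_reverse(x, lambd=1.0).sum().backward()
assert torch.allclose(x.grad, -torch.ones(3))  # gradient of +1 became -1
```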
