
clarification on augreg2 models #2420

Answered by rwightman
mueller-mp asked this question in Q&A

@mueller-mp Yes, I re-did the fine-tune from the original in21k checkpoint, mostly to show Lucas that they could be better :)

The biggest difference was that these fine-tunes used the timm training scripts & augmentations (the original pretrains & fine-tunes used the Google JAX training code). Using layer-wise LR decay was the biggest single hparam change; I'll see if I still have those config files somewhere...
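For illustration, here's a minimal sketch of what a layer-wise LR decay fine-tune looks like with timm's optimizer factory. The model name, LR, weight decay, and decay factor below are placeholder assumptions, not the actual augreg2 config:

```python
import timm
from timm.optim import create_optimizer_v2

# Start from an in21k-pretrained ViT and reset the head for 1k classes.
# Model name and hparams are illustrative, not the actual augreg2 recipe.
model = timm.create_model(
    'vit_base_patch16_224.augreg_in21k',  # assumed starting checkpoint
    pretrained=True,
    num_classes=1000,
)

# layer_decay scales the LR per block: the last blocks get the full LR,
# earlier blocks get lr * decay**(num_layers - layer_idx).
optimizer = create_optimizer_v2(
    model,
    opt='adamw',
    lr=1e-4,           # assumed base LR
    weight_decay=0.05,  # assumed weight decay
    layer_decay=0.75,   # assumed decay factor
)
```

The train.py script exposes the same knob through its `--layer-decay` argument, so the decay factor can be set directly from the command line.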

Answer selected by mueller-mp