Replies: 1 comment
-
Hi @GorkemP, thank you for your interest in the FeTS Challenge! At this time, the model code running for the challenge requires a minimum of 11 GB of dedicated GPU memory, and I don't believe this can be changed right now. Cheers,
-
Dear organizers,
I am trying to run `FeTS_Challenge.ipynb` with the minimal configuration you provided, in order to see the overall pipeline:
```python
aggregation_function = weighted_average_aggregation
choose_training_collaborators = all_collaborators_train
training_hyper_parameters_for_round = constant_hyper_parameters
validation_functions = [('sensitivity', sensitivity), ('specificity', specificity)]
```
Even if I set `db_store_rounds` to 1, I get a CUDA out-of-memory error. I am using an RTX 2080, which has 8 GB of GPU memory. The error occurs just after the first collaborator starts to train (in round 0, after the initial validation results are reported for the first collaborator).

Is there another way to decrease GPU memory consumption (other than `db_store_rounds`)? I think decreasing the batch size would work, but I haven't found a parameter for that.

Best