Currently, we hard-code a chunk size of 1000000 in the storage's send_sample_id_and_label function. We need to chunk the list of files because the number of files we receive as input can be huge. However, there does not seem to be a suitable parameter in the configuration to control this value. Should we add one, or can we reuse an existing parameter such as sample_dbinsertion_batchsize?
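For illustration, a minimal sketch of the chunking pattern in question, with the chunk size taken as a parameter rather than a hard-coded constant. The names `DEFAULT_CHUNK_SIZE` and `chunked` are hypothetical and not from the Modyn codebase; only the value 1000000 comes from the issue:

```python
from itertools import islice
from typing import Iterable, Iterator, TypeVar

T = TypeVar("T")

# Hypothetical default mirroring the hard-coded value mentioned in the issue.
DEFAULT_CHUNK_SIZE = 1_000_000


def chunked(items: Iterable[T], chunk_size: int = DEFAULT_CHUNK_SIZE) -> Iterator[list[T]]:
    """Yield successive lists of at most chunk_size items from an iterable."""
    it = iter(items)
    # islice pulls up to chunk_size items; an empty list ends the loop.
    while chunk := list(islice(it, chunk_size)):
        yield chunk


# Usage sketch: the caller would pass a configured value (e.g. a reused
# sample_dbinsertion_batchsize setting) instead of relying on the default.
for batch in chunked(range(10), chunk_size=4):
    print(batch)  # [0, 1, 2, 3], then [4, 5, 6, 7], then [8, 9]
```

This keeps the batching logic itself unchanged while moving the size decision out to configuration.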
MaxiBoether changed the title from "Do not hardware chunk size in send_sample_id_and_label" to "Do not hardcode chunk size in send_sample_id_and_label" on Apr 30, 2024.