numpy.core._exceptions._ArrayMemoryError: Unable to allocate 2.73 TiB for an array with shape (124994103978,) and data type [('sample_id', '<u8'), ('ptr', '<u8'), ('size', '<u8')]
#21
I want to train on ImageNet-21k rather than ImageNet-1k, so I downloaded ImageNet-21k (winter release) from the official ImageNet site.
I then ran "write_imagenet.sh" with the default argument values (500 0.50 90), pointing it at the ImageNet-21k (winter) dataset.
Finally, I ran train_imagenet.py and got this error:
"numpy.core._exceptions._ArrayMemoryError: Unable to allocate 2.73 TiB for an array with shape (124994103978,) and data type [('sample_id', '<u8'), ('ptr', '<u8'), ('size', '<u8')]"
What does this error mean, and why does it occur? (When I build the dataset from ImageNet-1k and run training, there is no error.)
Is there any way to train with ImageNet-21k?
(For reference: the ffcv dataset produced by "write_imagenet.sh" is 2.73 TB, which matches the size of the failed allocation.)
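For what it's worth, the 2.73 TiB figure in the error follows directly from the reported shape and dtype: NumPy is trying to allocate roughly 125 billion 24-byte records in one contiguous array. A minimal sketch of that arithmetic (the dtype and shape are copied from the traceback above):

```python
import numpy as np

# The record dtype from the traceback: three little-endian uint64 fields,
# so each record occupies 3 * 8 = 24 bytes.
alloc_dtype = np.dtype([('sample_id', '<u8'), ('ptr', '<u8'), ('size', '<u8')])

n_records = 124994103978  # the shape reported in the error message

total_bytes = n_records * alloc_dtype.itemsize
print(alloc_dtype.itemsize)        # 24 bytes per record
print(total_bytes / 2**40)         # ~2.73 TiB, matching the error message
```

Since ImageNet-21k has only on the order of 10^7 images, a record count of ~1.25 * 10^11 suggests the reader is misinterpreting something in the file header as the sample count, rather than the machine genuinely needing 2.73 TiB of RAM for a valid index.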