This repository has been archived by the owner on May 4, 2020. It is now read-only.
The current version of CUDA is 9.2, while, for instance, Google Cloud only supports CUDA 9.0 on K80 instances. But CUDA 9.2 apparently brings some speedup for convolutions, so the CUDA version (plus the rest of the CUDA stack, like cuDNN or cuBLAS) can influence the results of the benchmark.
Caffe, for instance (not that we support it for now), has some problems with CUDA 9 and is best used with CUDA 8.0.
Different CUDA versions are therefore a good candidate for a dimension that our benchmarking framework should cover, to allow for comparisons.
At the same time, we probably don't want to have 4+ different base images, each with 2 or more worker images depending on it, since that would become difficult to maintain and test.
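One way to avoid maintaining 4+ separate base images could be a single parameterized Dockerfile, built once per CUDA version in CI. This is only a sketch: the `nvidia/cuda` tag scheme is real, but the image names and build args here are assumptions, not our actual build setup.

```dockerfile
# Hypothetical sketch: one Dockerfile parameterized over the CUDA version,
# instead of 4+ hand-maintained base images.
ARG CUDA_VERSION=9.0
ARG CUDNN_VERSION=7
FROM nvidia/cuda:${CUDA_VERSION}-cudnn${CUDNN_VERSION}-devel-ubuntu16.04

# Worker-specific layers would go here. A CI matrix then builds e.g.:
#   docker build --build-arg CUDA_VERSION=9.2 -t worker:cuda9.2 .
```

The maintenance burden then shifts from N Dockerfiles to one Dockerfile plus an N-entry build matrix.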
A possible solution could be to install multiple CUDA versions side by side, e.g. https://blog.kovalevskyi.com/multiple-version-of-cuda-libraries-on-the-same-machine-b9502d50ae77
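With side-by-side installs, selecting a CUDA version mostly comes down to pointing the environment at the right `/usr/local/cuda-X.Y` tree. A minimal sketch, assuming the conventional install paths used in the linked post (the function name is hypothetical):

```shell
# select_cuda VERSION — point the environment at one of several
# side-by-side CUDA installs under /usr/local/cuda-<VERSION>.
select_cuda() {
  cuda_home="/usr/local/cuda-$1"
  export CUDA_HOME="$cuda_home"
  export PATH="$cuda_home/bin:$PATH"
  export LD_LIBRARY_PATH="$cuda_home/lib64:${LD_LIBRARY_PATH:-}"
}
```

A benchmark run could then call e.g. `select_cuda 9.0` before launching the worker process.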
But then we would also need a separate PyTorch build compiled for each CUDA version, which could blow up the image size considerably.
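To make the multiplication concrete: PyTorch's prebuilt wheels are tagged per CUDA version (`cu80`, `cu90`, `cu92`), so every supported CUDA version pulls in its own full PyTorch binary. A small helper sketching that mapping (the function name is hypothetical; the `cuXY` tag convention is PyTorch's):

```shell
# cuda_wheel_tag VERSION — map a CUDA version like "9.0" to the tag
# PyTorch uses for its prebuilt wheels ("cu90").
cuda_wheel_tag() {
  printf 'cu%s\n' "$(printf '%s' "$1" | tr -d .)"
}
```

An install script could use this to pick the matching wheel channel, at the cost of one full PyTorch binary per CUDA version baked into the image.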
For now we'll support CUDA 9.0, since that's supported by all major clouds.
This ticket is about figuring out a good strategy to cover multiple CUDA versions and implementing it.