CPU vs. GPU speed #1
Hi, I did some tests on my laptop with an i7-7700HQ CPU: 4.72 it/s and 40 s/epoch. That was tested on HeLa data (mod.txt); what is your training file? On CPU you can set the batch size very large, e.g. BATCH_SIZE = 2000 (if you have enough memory), which will be much faster.
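A minimal sketch of why a large batch size helps on CPU: per-batch overhead is amortized over more samples, so fewer batches per epoch means less total overhead. The names below (`BATCH_SIZE`, `batches`) are illustrative, not the project's actual API.

```python
# Hypothetical sketch: larger batches mean fewer batches per epoch,
# amortizing fixed per-batch overhead on CPU.
BATCH_SIZE = 2000  # large batch; requires enough RAM to hold it

def batches(data, batch_size):
    """Yield successive fixed-size batches from a list of samples."""
    for i in range(0, len(data), batch_size):
        yield data[i:i + batch_size]

# Example: 300,000 samples split into 150 batches of 2000 each
n_batches = sum(1 for _ in batches(list(range(300_000)), BATCH_SIZE))
print(n_batches)  # → 150
```

With BATCH_SIZE = 32 the same dataset would need 9,375 batches per epoch, so the per-batch bookkeeping cost is paid roughly 60x more often.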
I'm using a big dataset with around 300,000 peptides.
I see. With such a big dataset, you could choose a high-quality subset (say q-value < 0.001 or smaller) to train the model. Alternatively, there are GPU cloud servers you could rent. Hope that helps.
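The subset-selection idea above can be sketched as a simple q-value filter. The record layout and field names (`sequence`, `q_value`) are assumptions for illustration; adapt them to the actual training-file format (e.g. the columns in mod.txt).

```python
# Hypothetical sketch: keep only confidently identified peptides
# (low q-value) as a smaller, higher-quality training subset.
peptides = [
    {"sequence": "PEPTIDEK", "q_value": 0.0005},
    {"sequence": "SAMPLERK", "q_value": 0.0030},
    {"sequence": "TESTPEPK", "q_value": 0.0001},
]

Q_THRESHOLD = 0.001  # stricter thresholds yield smaller, cleaner subsets
subset = [p for p in peptides if p["q_value"] < Q_THRESHOLD]
print(len(subset))  # → 2
```

Filtering at q-value < 0.001 both shrinks the 300,000-peptide set (shortening each epoch) and removes low-confidence identifications that would add label noise.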
I'm training with an i5-2500 quad-core processor. It does 4 it/s and 1:15 h/epoch. Approximately how fast would training be with a GPU?