
CPU vs. GPU speed #1

Open
bfurtwa opened this issue Sep 20, 2018 · 3 comments

Comments

bfurtwa commented Sep 20, 2018

I'm training with an i5-2500 quad-core processor. It does 4 it/s and about 1:15 h per epoch. Approximately how fast would the training be with a GPU?

horsepurve (Owner) commented Sep 20, 2018

Hi, I did some tests on my laptop:

- with an i7-7700HQ CPU: 4.72 it/s and 40 s/epoch
- with a GTX-1070 GPU: 35.50 it/s and 5 s/epoch

This was tested on the HeLa data (mod.txt); what's your training file?

On CPU you may set the batch size to be very large, e.g. BATCH_SIZE = 2000 (if you have enough memory), which will be much faster.
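
For reference, here is a rough sketch of how CPU throughput at different batch sizes could be checked before committing to a full epoch. The model and input shape below are placeholders, not the actual network or data format from this repo:

```python
# Rough CPU throughput check at different batch sizes.
# NOTE: the model and the input dimensions are placeholders,
# not the actual network or data format used by this repo.
import time
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 1))
loss_fn = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters())

for batch_size in (16, 256, 2000):
    x = torch.randn(batch_size, 128)
    y = torch.randn(batch_size, 1)
    n_iters = 20
    start = time.time()
    for _ in range(n_iters):
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        optimizer.step()
    elapsed = time.time() - start
    print(f"batch_size={batch_size}: {batch_size * n_iters / elapsed:.0f} samples/s")
```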

bfurtwa (Author) commented Sep 21, 2018

I'm using a big dataset with around 300,000 peptides.
So each iteration is one batch? That means I was getting 4 × 16 = 64 samples/s with batch size 16. With batch size 2000 I get 74 samples/s. Unfortunately that's not a big speedup.
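
To make the comparison explicit, a quick back-of-the-envelope calculation using the numbers above:

```python
# Epoch-time arithmetic for a ~300,000-peptide dataset,
# using the throughput numbers quoted above.
n_samples = 300_000

for batch_size, samples_per_s in ((16, 4 * 16), (2000, 74)):
    epoch_hours = n_samples / samples_per_s / 3600
    print(f"batch_size={batch_size}: {samples_per_s} samples/s, ~{epoch_hours:.2f} h/epoch")
```

So the larger batch size only shaves the CPU epoch time from roughly 1.3 h down to about 1.1 h.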

horsepurve (Owner) commented Sep 21, 2018

I see. With such a big dataset, I think a high-quality subset (say, q-value < 0.001 or smaller) could be chosen to train the model. Alternatively, there are also GPU cloud servers. Hope that helps.
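
As a sketch of what that subset selection could look like (the file name and the q-value column name are assumptions; adjust them to whatever your search-engine output uses):

```python
# Select a high-quality training subset by q-value.
# NOTE: "peptides.tsv" and the "q_value" column are assumptions;
# adjust them to match your own search-engine output.
import pandas as pd

peptides = pd.read_csv("peptides.tsv", sep="\t")
subset = peptides[peptides["q_value"] < 0.001]
subset.to_csv("peptides_highconf.tsv", sep="\t", index=False)
print(f"kept {len(subset)} of {len(peptides)} peptides")
```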
