
Speed up Embedding process while CUDA devices are available #537

Open
Snow9666 opened this issue Jul 3, 2024 · 0 comments
Snow9666 commented Jul 3, 2024

Is your feature request related to a problem? Please describe.
It seems vanna only trains question-sql pairs one at a time via vn.train(question='', sql=''). Is there a batched version of this function?

Describe the solution you'd like
I would like to pass a list of question-sql pairs to vn.train() and set a batch size when using an embedding model that supports CUDA, so that the embedding step runs faster.

Help! Please.
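For illustration, here is a minimal sketch of the kind of batching helper being requested. The `batch_train` function and the way it joins question and SQL text are hypothetical, not part of vanna's API; `embed_fn` stands in for any embedding model that accepts a list of texts in one call (for example, a sentence-transformers model loaded with `device="cuda"`), which is where the CUDA speedup would come from.

```python
from typing import Callable, List, Sequence, Tuple

def batch_train(
    pairs: Sequence[Tuple[str, str]],
    embed_fn: Callable[[List[str]], List[List[float]]],
    batch_size: int = 32,
) -> List[List[float]]:
    """Embed question-sql pairs in batches instead of one at a time.

    pairs:      (question, sql) tuples to embed.
    embed_fn:   stand-in for a batched embedding model; it receives a
                list of texts and returns one vector per text.
    batch_size: number of pairs sent to the model per call, so a
                CUDA-backed model can process them in parallel.
    """
    embeddings: List[List[float]] = []
    for start in range(0, len(pairs), batch_size):
        batch = pairs[start:start + batch_size]
        # Join question and SQL into one text per pair; a real
        # integration would store these the same way a single
        # vn.train(question=..., sql=...) call does.
        texts = [f"{question}\n{sql}" for question, sql in batch]
        embeddings.extend(embed_fn(texts))
    return embeddings
```

Each model call then embeds up to `batch_size` texts at once rather than one, which is typically where GPU embedding models gain most of their throughput.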
