Tuning advice for GPU computation wanted #521
Replies: 3 comments 3 replies
-
Looking forward to hearing from you, @kaigai
-
Once a workload is moved to the GPU, it is already sufficiently parallelized on the device side. On the other hand, we don't recommend running many PostgreSQL backends concurrently. Sorry for my late response; I was at my father's funeral last week. Best regards,
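Kaigai's advice above (few concurrent backends, since the GPU already parallelizes each query internally) might translate into postgresql.conf settings along these lines. This is only an illustrative sketch using core PostgreSQL parameters; the specific values are my assumptions, not recommendations from this thread:

```ini
# Sketch: with heavy scans offloaded to the GPU, a handful of backends
# can already saturate the device, so we keep concurrency low instead of
# following the usual one-backend-per-CPU-core rule. Values are examples.
max_connections = 30
max_parallel_workers_per_gather = 2
max_parallel_workers = 8
```

A connection pooler (e.g. pgbouncer) in front of the database is a common way to enforce such a low backend count while still accepting many client connections.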
-
Do you have any specific recommendation about concurrent requests? For example, how many is too many, how many is too few, and what number is appropriate? Currently I set this concurrency to the number of CPU cores. Would that be recommended on GPU Postgres as well, or is it too many or too few?
-
A question about tuning: for the original CPU-based Postgres (9+), parallel workers consume the same work_mem and resources as normal connections, and the maximum number of parallel workers is usually suggested to be the number of CPU cores. So we know how to calculate work_mem and the total numbers of connections and parallel workers.
When it comes to GPU Postgres, I guess those rules might be different. As the author of this GPU-based extension, do you have any tuning advice or instructions about the configuration in postgresql.conf?
Just so you know, my case is a Postgres 12 instance used for heavy geospatial-analytics queries. Currently our CPU-based Postgres VM with 28GB RAM is configured with 103MB of work_mem and 100 total connections.
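The CPU-side sizing described above (work_mem budgeted against total RAM and the connection count) can be written out as a quick sanity check. This is a sketch of that common rule of thumb, not an official formula; the 25% shared_buffers figure is my assumption:

```python
# Rough memory-budget check for the setup described above:
# 28 GB RAM, 100 connections, work_mem = 103 MB.
ram_mb = 28 * 1024                       # total RAM in MB
shared_buffers_mb = ram_mb // 4          # common 25% rule (assumption)
connections = 100
work_mem_mb = 103

# Pessimistic case: every backend uses one full work_mem allocation
# at the same time (a single query can in fact use several).
worst_case_mb = connections * work_mem_mb
print(worst_case_mb)                                # 10300
print(worst_case_mb + shared_buffers_mb < ram_mb)   # True: fits in RAM
```

Note that a single complex query can allocate work_mem several times (once per sort or hash node, and per parallel worker), so the real worst case is higher than this simple product.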