Some of the etl-queries tables are pretty big now, particularly `challenge_receipts_parsed`.
An easy win might be tuning indexes. That table stores both address and name for the witness and the transmitter; it could store just the addresses and always resolve names via subselects.
Would more recent Postgres versions or alternate database storage solutions help with the others?
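A minimal sketch of that idea, assuming hypothetical column names (`witness_address`, `transmitter_address`) and a hypothetical `gateway_inventory` lookup table keyed by address — the real schema may differ:

```sql
-- Index only the addresses; drop per-row name columns and
-- resolve names on demand via a subselect.
CREATE INDEX IF NOT EXISTS idx_crp_witness_address
    ON challenge_receipts_parsed (witness_address);
CREATE INDEX IF NOT EXISTS idx_crp_transmitter_address
    ON challenge_receipts_parsed (transmitter_address);

-- Example query: fetch receipts plus the witness name from the
-- (assumed) gateway_inventory table instead of a stored name column.
SELECT r.*,
       (SELECT g.name
          FROM gateway_inventory g
         WHERE g.address = r.witness_address) AS witness_name
  FROM challenge_receipts_parsed r
 WHERE r.witness_address = '11placeholder';  -- hypothetical address value
```

This keeps the big table narrower and its indexes smaller, at the cost of a lookup per row when a name is actually needed.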
I would suggest two things before going into index tuning: the TimescaleDB extension and table partitioning. I found these two more helpful with big tables than indexes, which also grow in size as time passes.
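A sketch of both approaches, assuming `challenge_receipts_parsed` has a `timestamptz` column to partition on (here called `time`; the actual column name is a guess):

```sql
-- Option A: TimescaleDB hypertable (automatic time-based chunking).
CREATE EXTENSION IF NOT EXISTS timescaledb;
SELECT create_hypertable('challenge_receipts_parsed', 'time',
                         chunk_time_interval => INTERVAL '7 days',
                         migrate_data => true);

-- Option B: native declarative partitioning (Postgres 10+).
-- Requires recreating the table, then attaching partitions per range.
CREATE TABLE challenge_receipts_parsed_part (
    LIKE challenge_receipts_parsed INCLUDING DEFAULTS
) PARTITION BY RANGE ("time");

CREATE TABLE crp_2021_08
    PARTITION OF challenge_receipts_parsed_part
    FOR VALUES FROM ('2021-08-01') TO ('2021-09-01');
```

Either way, time-bounded queries only touch the relevant chunks/partitions, and each partition's indexes stay small instead of growing with the whole table.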
Related here are the `pg_stat_statements` results on the old DeWi ETL: https://etl.dewi.org/question/117-slow-queries-pg-stat-statements plus an xlsx dump: query_result_2021-08-07T10 45 38.985426-04 00.xlsx
Here are all the table sizes: