As reported by @SystemKeeper, the unified search providers do not seem to properly support limiting and paginating their results.
I checked: the page content provider has a limit from the full text search index, but it is applied per collective, and the other providers simply return all results. We could probably do a quick fix that skips results before the cursor and drops everything past the limit, but we would still need to fetch most of the results in the backend. Otherwise we need to rethink the logic for collecting search results.
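The quick fix described above could look roughly like this. This is only an illustrative sketch, not the actual provider code; the `paginate` helper and the result shape are hypothetical.

```python
# Hypothetical sketch of the quick fix: fetch all backend results,
# then skip everything up to the cursor and cut off after the limit.
# The function name and dict shape are illustrative, not the real API.

def paginate(results, cursor, limit):
    """Skip results up to (and including) the cursor id, then take `limit` items."""
    if cursor is not None:
        ids = [r["id"] for r in results]
        # Resume right after the cursor; start from the top if it is unknown.
        start = ids.index(cursor) + 1 if cursor in ids else 0
        results = results[start:]
    return results[:limit]

results = [{"id": i, "name": f"page-{i}"} for i in range(10)]
page1 = paginate(results, cursor=None, limit=3)              # ids 0, 1, 2
page2 = paginate(results, cursor=page1[-1]["id"], limit=3)   # ids 3, 4, 5
```

Note that even with this fix the backend still materializes the full result list before slicing, which is exactly the inefficiency mentioned above.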
We'd need to run a query directly against the filecache there, but it should be a lot faster to do something like:
```sql
SELECT f.fileid, f.name
FROM oc_filecache f
LEFT JOIN oc_collectives_pages c ON f.fileid = c.file_id
WHERE storage = 2
  AND path LIKE 'appdata_ocacwrxokhrf/collectives/1/%'
  AND name LIKE '%'
LIMIT 5;
```
We probably should exclude Readme.md files as they would be covered already by the folder name.
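A minimal in-memory demo of that query shape with the `Readme.md` exclusion added, using a heavily simplified table (the real `oc_filecache` has many more columns, the join to `oc_collectives_pages` is omitted here, and the storage id and paths are made up):

```python
import sqlite3

# Simplified stand-in for oc_filecache; column names match the real table,
# but the schema and data are illustrative only.
con = sqlite3.connect(":memory:")
con.execute(
    "CREATE TABLE oc_filecache (fileid INTEGER, storage INTEGER, path TEXT, name TEXT)"
)
con.executemany(
    "INSERT INTO oc_filecache VALUES (?, ?, ?, ?)",
    [
        (1, 2, "appdata_x/collectives/1/Page one.md", "Page one.md"),
        (2, 2, "appdata_x/collectives/1/Readme.md", "Readme.md"),    # excluded
        (3, 2, "appdata_x/collectives/1/sub/Page two.md", "Page two.md"),
        (4, 3, "other/Page.md", "Page.md"),                          # wrong storage
    ],
)

hits = con.execute(
    """SELECT fileid, name FROM oc_filecache
       WHERE storage = ? AND path LIKE ? AND name LIKE ?
         AND name != 'Readme.md'
       LIMIT 5""",
    (2, "appdata_x/collectives/1/%", "%"),
).fetchall()
# hits -> [(1, 'Page one.md'), (3, 'Page two.md')]
```

The `name != 'Readme.md'` predicate drops the landing pages whose folder name already appears in the folder results.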
A similar query already exists that could be reused here, perhaps by making it more generic and moving it somewhere shared.
Filing this issue to not forget
Talk PR for inspiration: nextcloud/spreed#14024