I'm using Miniflux v2.2.1 with ~30 feeds on an old Raspberry Pi. Typically I have some 150 MB of RAM available.
It used to run fine, but recently the OS has to kill the program regularly:
[1383230.652600] Out of memory: Killed process 16052 (miniflux.app) total-vm:554492kB, anon-rss:137368kB, file-rss:4kB, shmem-rss:0kB, UID:985 pgtables:168kB oom_score_adj:0
I suspect that these crashes may be related to having set some feeds to "fetch original content" - which I only recently learned about. I'm not sure though.
As a first mitigation attempt I set BATCH_SIZE=1, but that didn't prevent the OOMs.
For me, Miniflux in Docker consistently uses 30-50 MB of RAM, while Postgres can go over 150 MB when refreshing lots of feeds at once. I have a little under 400 feeds.
Maybe it would be interesting to find out where this huge memory consumption comes from. A way to do that would be to use the Go profiler (https://go.dev/blog/pprof).