# test: add a memory leak test #51
base: master
## Conversation
This is great @okdistribute. I poked at this a bit and wrote a patch! I noticed something big first: my machine can't even allocate 500 multifeeds in memory. Allocating and syncing everything at once also overwhelmed my computer, so I changed the test to run the syncs in series and made the numbers a bit smaller. From there, I noticed that memory use goes up non-linearly with the number of syncs: 10 syncs use 34 MB of heap, 20 use 114 MB, and 40 use 447 MB. I didn't want to push on top of your work, and I've been wanting to play with non-GitHub workflows, so here's a patch you can apply locally to see if you like it:
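For reference, here is a minimal sketch of the kind of heap measurement being discussed, using only Node core APIs (the `logHeap` helper name is mine, not from the patch):

```js
// Minimal sketch of the heap-measurement pattern used in this thread.
function logHeap (label) {
  // Force a full collection first (global.gc is only defined when node
  // is run with --expose-gc) so the reading isn't inflated by garbage
  // the GC simply hasn't reclaimed yet.
  if (global.gc) global.gc()
  const used = process.memoryUsage().heapUsed / 1000000
  console.log(label, used.toFixed(1), 'mb')
}

logHeap('pre-sync heap')
// ... allocate multifeeds and run the syncs ...
logHeap('post-sync heap')
```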
Thanks @noffle! Feel free to push the patch here. I agree about smaller increments; I picked a big number out of laziness 😅
@okdistribute this is a great example of how easy it is to bloat a Node heap. I did some poking around, and I think this test might be what you want, since it simulates multiple peers in one program (and hence one heap). So the final heap numbers include every multifeed the test creates. @noffle was right that the memory goes up non-linearly with the number of syncs; see footnote [1] below.
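For context, "multiple peers in one program" means wiring two multifeeds' replication streams into each other in-process. A sketch of what that might look like, assuming multifeed ≥ 5's `multifeed(storage, opts)` constructor and `replicate(isInitiator)` duplex-stream API, with in-memory storage from `random-access-memory`:

```js
var multifeed = require('multifeed')
var ram = require('random-access-memory')

var a = multifeed(ram, { valueEncoding: 'json' })
var b = multifeed(ram, { valueEncoding: 'json' })

// Pipe the two replication streams into each other so both "peers"
// live (and allocate) in the same process, and hence the same heap.
function sync (x, y, cb) {
  var rx = x.replicate(true)   // initiator
  var ry = y.replicate(false)  // responder
  rx.pipe(ry).pipe(rx)
  var called = false
  function done (err) {
    if (called) return // guard: 'error' and 'end' may both fire
    called = true
    cb(err)
  }
  rx.once('error', done)
  rx.once('end', done)
}

sync(a, b, function (err) {
  if (err) throw err
  console.log('synced; a has', a.feeds().length, 'feeds')
})
```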
### Suggested Changes to Test

I believe that not all multifeeds should be included in the memory results; instead, every multifeed except the one we keep should be closed before taking the final measurement:

```js
console.log('pre-sync heap', process.memoryUsage().heapUsed / 1000000, 'mb')

series(replications, (err) => {
  if (err) throw err
  console.log('replicated! everything!')
  console.log('pre-closing heap', process.memoryUsage().heapUsed / 1000000, 'mb')

  // Keep one multifeed alive and close all the others, dropping our
  // references so the GC can actually reclaim them.
  let keep = feeds[0]
  let closures = []
  for (let i = feeds.length - 1; i > 0; i--) {
    closures.push((done) => {
      let a = feeds[i]
      delete feeds[i]
      a.close(done)
    })
  }

  series(closures, (err) => {
    if (err) throw err
    global.gc() // force a full collection so heapUsed reflects live objects
    console.log('keep.feeds().length', keep.feeds().length)
    console.log('post-closing heap', process.memoryUsage().heapUsed / 1000000, 'mb')
    debugger // handy spot to attach and take a heap snapshot
    t.end()
  })
})
```

This requires running node with the `--expose-gc` flag so that `global.gc()` is defined.
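For completeness: the snippet assumes `feeds` (an array of multifeed instances) and `replications` (a task list for `run-series`) already exist. A hypothetical construction, reusing a pairwise `sync(a, b, cb)` helper like the one sketched earlier in the thread:

```js
// Hypothetical construction of the `replications` array used above:
// sync feeds[0] against every other feed, one pair at a time, so the
// process never has more than one replication in flight.
var replications = []
for (let i = 1; i < feeds.length; i++) {
  replications.push((done) => sync(feeds[0], feeds[i], done))
}
```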
### Concluding Remarks

Sorry for the tome, but I hope this helps! Right now, having a middleware layer would be awesome, since it would let us weed out feeds that a peer doesn't want to host. Alternatively, people can save a bit on memory if the storage is file-based (though I do believe memory still increases over time).

### Footnote

[1] The number of …
mafintosh says:
And a comment from @RangerMauve:
This is a failing test for the memory leak. The system will print `Killed` after less than 30 seconds; check `/var/log/syslog` for the out-of-memory message. Related to cabal-club/commons#15.

If the issue is having too many hypercores open at once, an approach like a manifest and request middleware (sketched below) might also fix the memory leak for most production use cases: #26
Or, some way to replicate in batches, and close the feeds that aren't actively replicating.
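To make the manifest-and-request idea concrete, the exchange could look roughly like this. This is purely illustrative; these function names are made up, and #26 is where the real design discussion lives:

```js
// Purely hypothetical sketch of the manifest/request idea from #26:
// a peer announces the feed keys it has, and the other side requests
// only the subset it wants to host, so unwanted hypercores are never
// opened (and never held in memory).
function buildManifest (multi) {
  return multi.feeds().map((feed) => feed.key.toString('hex'))
}

function selectWanted (manifest, isWanted) {
  return manifest.filter(isWanted)
}

// e.g. only host feeds on an allowlist:
// var wanted = selectWanted(buildManifest(multi), (key) => allowlist.has(key))
```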
To mimic a real-life scenario for cabal, about 70% of the feeds in this test are empty -- only 30% of them have one log entry.
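For reference, a sketch of how that distribution might be seeded, assuming multifeed's `writer(name, cb)` API (the `append` call is plain hypercore; the writer name and payload are made up):

```js
var series = require('run-series')

// Give roughly 30% of the multifeeds a single log entry; leave the
// rest empty, mimicking the mostly-idle feeds of a real cabal.
var seeds = feeds.map((multi, i) => (done) => {
  if (i % 10 >= 3) return done() // ~70% stay empty
  multi.writer('default', (err, w) => {
    if (err) return done(err)
    w.append('hi', done) // one log entry for the remaining ~30%
  })
})

series(seeds, (err) => {
  if (err) throw err
  console.log('seeded', feeds.length, 'multifeeds')
})
```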