Max emitters test and fix #21
Conversation
Note: In this PR I used […]
Thanks for this @DougAnderson444. I don't think this is an emitter leak, since the […]. Definitely wouldn't want to do that if this indicated a leak, but since the resources are cleaned up on close, I think it'll suffice.
Ok cool @andrewosh, thank you Andrew for your response! In terms of stress, how many stores (+ corresponding discoveryKeys) is it reasonable to have open and swarming at the same time? Dozens? Hundreds? Thousands? Millions...? I guess it would only be limited by how much memory you have? I could tweak the test to track memory too, I suppose. What do you think it's safe to set MaxListeners to? Should we use this as a guide to how many stores + topics to have open in one node at the same time? I'm thinking that in the case of kappa-multifeed, the store + discoveryKey are tied together, so for each user you add one of each. I'm wondering how many multifeeds I could be swarming on at once, if you catch my drift. Interested to hear your thoughts on this. I'd happily read that blog post!
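For context on the MaxListeners question above, here is a minimal sketch of Node's standard EventEmitter knobs; `inner` is a hypothetical shared emitter, not code from this repo:

```js
const { EventEmitter } = require('events')

// 'inner' stands in for an emitter shared by many namespaced stores.
const inner = new EventEmitter()

// Node warns once more than 10 listeners are attached for one event name.
// The per-emitter limit can be raised if the listeners are legitimate:
inner.setMaxListeners(100)

// Or the process-wide default for all emitters can be raised:
EventEmitter.defaultMaxListeners = 100

// Setting the limit to 0 (or Infinity) disables the warning entirely:
inner.setMaxListeners(0)
```

Raising the limit only silences the warning; it doesn't reclaim listeners, so it's only appropriate when the listeners are genuinely cleaned up elsewhere (e.g. on store close).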
Yep it certainly deserves more explanation in the README, but we currently do garbage collection of cores by tracking the number of Corestores (which typically translates to namespaces) they're currently open in. If any store opens a core, that core will be considered "active" until it's either a) closed itself or b) the store is closed. Corestore also distinguishes between "active" cores (cores that have been requested through […]).

A bit more detail about the guts: internally all open cores are managed by a single RefPool, and the ref count for each core is bumped each time it's opened in a new Corestore (and decremented when the store is closed).

There are a number of variables at play when considering how many cores you can have open at once. The memory consumption per core is super variable (it depends on the replication states of its peers), but it can range from the tens of KBs to the low MBs. In practice we've definitely had Corestores with thousands of active cores open at once, but millions is probably beyond reach for now :)

The bigger limitation is the number of concurrent connections being managed by @corestore/networker. If you have thousands of discovery keys, and each one connects to a non-overlapping set of peers, then you'd have thousands of active connections, which would challenge most home routers. This problem's made trickier by the fact that we generally maintain "live" connections to peers, meaning once we connect to a peer, that connection will be maintained even if it's been inactive for an extended period. Figuring out how to properly garbage collect swarm connections is still a TODO (and it's been tricky!), but it's something we're actively thinking about.
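As a rough illustration of the ref-counting scheme described above, here is a minimal sketch; `RefPool`, `acquire`, and `release` are hypothetical names chosen for illustration, not Corestore's actual internals:

```js
// Illustrative ref-counting pool (not Corestore's actual implementation).
// Cores are keyed by discovery key; the ref count tracks how many stores
// currently hold each core open.
class RefPool {
  constructor () {
    this._entries = new Map() // key -> { core, refs }
  }

  // A store opening a core bumps its ref count, creating the core on demand.
  acquire (key, createCore) {
    let entry = this._entries.get(key)
    if (!entry) {
      entry = { core: createCore(), refs: 0 }
      this._entries.set(key, entry)
    }
    entry.refs++
    return entry.core
  }

  // Closing a store decrements the count; the core is freed at zero refs.
  release (key) {
    const entry = this._entries.get(key)
    if (!entry) return
    if (--entry.refs === 0) {
      this._entries.delete(key)
      if (typeof entry.core.close === 'function') entry.core.close()
    }
  }
}
```

Releasing at zero refs mirrors the rule stated above: a core stays "active" until it is closed itself or every store holding it is closed.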
Awesome, thanks for the insight @andrewosh!!
Do not think this is relevant for v6.
PR Addressing: #20
I wrote this test to make a couple dozen namespaced corestores, like what happens in kappa-multifeed (kappa-db/multifeed#45).
With the current `inner.on('feed', ...)` setup it emits the MaxListeners warning due to ever-increasing event listeners on `'feed'` and `'error'`.

When I add the unlisten after `'feed'` is emitted (since it appears to only emit once), the events array doesn't build up and the max emitters / memory leak is fixed. I thought about also adding `unlisten` after `'error'` is emitted, but was unsure whether that might break something elsewhere. It's an option too in case there are a lot of errors on many corestores?

For your consideration.