Job recorder memory leak #1437
Comments
Thank you for reporting this issue! As Laravel is an open source project, we rely on the community to help us diagnose and fix issues as it is not possible to research and fix every issue reported to us via GitHub. If possible, please make a pull request fixing the issue you have described, along with corresponding tests. All pull requests are promptly reviewed by the Laravel team. Thank you!
Given that this fails, I think the real problem is that Telescope will buffer an unlimited number of entries before running store. In my case, I often have to queue 20k jobs in local development. The job dispatching often runs out of memory around 10k jobs. Would a max queue length make sense for Telescope?
I have solved it myself via:
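(The original snippet isn't reproduced above. Purely as an illustration, and not necessarily the same fix, one way to stop the buffer from growing is to pause Telescope's recording around the bulk dispatch; `ProcessItem` below is a placeholder job class.)

```php
use App\Jobs\ProcessItem; // placeholder job class for illustration
use Laravel\Telescope\Telescope;

// Pause Telescope so the bulk dispatch isn't buffered in memory,
// then resume normal recording once the loop is done.
Telescope::stopRecording();

foreach ($items as $item) {
    ProcessItem::dispatch($item);
}

Telescope::startRecording();
```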
It does seem like a bit of a footgun, and it wouldn't hurt to put some guard rails on it. I don't mind writing the pull request if you want a max queue size variable.
This should be implemented via a config option @nick-potts
Gonna close this since you found a solution. Thanks |
What solution? Disable job watcher? |
I don't think the above scenario is a realistic one for a real-world application, sorry. Dispatching 5,000 jobs in a row is most likely better refactored so that the dispatching is spread out over multiple requests. We're still open to PRs to improve this.
We dispatch far fewer jobs and our app still goes OOM when Telescope is attached. I don't see why we should change how many jobs one command (not a request) creates.
Another, more real-world scenario where PHP can crash from running out of memory is the query watcher. If you run many queries in the same request/job, Telescope never flushes its buffer and can chew through the available memory quite quickly. I could potentially write a PR, I'm just not sure how best to implement it; a rough sketch of one mitigation is below.
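A minimal sketch of an application-level mitigation that works today, assuming a long-running command that produces many recorded entries. The loop body and the 500-entry interval are illustrative; the only Telescope pieces relied on are `Telescope::store()` and the `EntriesRepository` contract.

```php
use Illuminate\Support\Facades\DB;
use Laravel\Telescope\Contracts\EntriesRepository;
use Laravel\Telescope\Telescope;

$repository = app(EntriesRepository::class);

foreach ($rows as $i => $row) {
    DB::table('results')->insert($row); // placeholder work that the query watcher records

    // Periodically persist and clear the buffered entries so they
    // don't accumulate for the lifetime of the process.
    if ($i > 0 && $i % 500 === 0) {
        Telescope::store($repository);
    }
}
```

A package-level equivalent would be a configurable cap that triggers the same store-and-flush automatically once the buffer grows past a threshold, which is essentially the max queue size idea discussed above.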
Thanks for this, I just had to use this myself in a real-world application scenario. We have some long-running processes which dispatch synchronous jobs periodically (sub-minute), used for communicating with a plethora of external APIs. We don't need to destroy and create new processes for these, as that adds unnecessary overhead. This worked fine for us until we added Telescope to increase observability, which is when this issue reared its head.

While this is probably uncommon, it would still be nice to have a config option and perhaps a note in the docs. It's also possible that other people have been affected by this but are unaware of the cause of their growing memory footprint. It may also be valuable to anyone who wishes to tune their application's memory usage, which could be the difference between which tier of VM they choose and thus have a real-world cost impact.

It's trivial to suggest that users re-architect their codebases, but that shouldn't be a concern of this package's development. In any case, there is a solution which works, and which we're now using, but it would be nicer if this were supported out of the box.
A good solution. |
Telescope Version
4.17
Laravel Version
10.45.1
PHP Version
8.3
Database Driver & Version
No response
Description
Dispatching a job in a loop leads to a memory leak due to the Telescope job recorder.
Steps To Reproduce
https://github.com/nick-potts/bug-report
Run the tests.
https://github.com/nick-potts/bug-report/blob/main/tests/Feature/ExampleTest.php
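(The linked test is the authoritative repro; the loop below is only a paraphrased sketch of the pattern it exercises, with a hypothetical job class and an arbitrary iteration count.)

```php
use App\Jobs\ExampleJob; // hypothetical job class

// Each dispatch is recorded by Telescope's JobWatcher and buffered in
// memory until the end of the process, so memory usage climbs steadily.
for ($i = 0; $i < 20000; $i++) {
    ExampleJob::dispatch();
}
```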