Description
We frequently see this in our Sentry:
Runtime instrumentation error. Attempt to drop the root when it is not the current trace. Report this issue to New Relic support.
File "/app/.heroku/python/bin/gunicorn", line 8, in <module>
sys.exit(run())
File "/app/.heroku/python/lib/python3.10/site-packages/gunicorn/app/wsgiapp.py", line 67, in run
WSGIApplication("%(prog)s [OPTIONS] [APP_MODULE]").run()
File "/app/.heroku/python/lib/python3.10/site-packages/gunicorn/app/base.py", line 231, in run
super().run()
File "/app/.heroku/python/lib/python3.10/site-packages/gunicorn/app/base.py", line 72, in run
Arbiter(self).run()
File "/app/.heroku/python/lib/python3.10/site-packages/gunicorn/arbiter.py", line 211, in run
self.manage_workers()
File "/app/.heroku/python/lib/python3.10/site-packages/gunicorn/arbiter.py", line 551, in manage_workers
self.spawn_workers()
File "/app/.heroku/python/lib/python3.10/site-packages/gunicorn/arbiter.py", line 622, in spawn_workers
self.spawn_worker()
File "/app/.heroku/python/lib/python3.10/site-packages/gunicorn/arbiter.py", line 589, in spawn_worker
worker.init_process()
This is happening in newrelic/core/trace_cache.py in complete_root at line 339:

    _logger.error(
        "Runtime instrumentation error. Attempt to "
        "drop the root when it is not the current "
        "trace. Report this issue to New Relic support.\n%s",
        "".join(traceback.format_stack()[:-1]),
    )
    raise RuntimeError("not the current trace")
Looks like it's happening when gunicorn tries to kill timed-out workers?
Expected Behavior
This error should not appear.
Steps to Reproduce
Hard to say, unfortunately.
Your Environment
Python 3.10.6
newrelic==8.5.0
gunicorn==20.1.0
django==3.0.4
Running Django with gunicorn and using Sentry as an error tracker.
You mentioned that it looks like it’s happening when a worker gets killed. We've not been able to reproduce this as a result of worker timeouts. What makes you think that a worker is getting killed? Is there a log message to that effect?
Do you have any other details about your environment that might be contributing to the problem?
Is there by chance anything async going on, such as async views timing out?
If this is a worker being shut down, what you're seeing isn't particularly problematic for the health of the agent; at worst, the final request's data is lost, which shouldn't matter. You can safely ignore these errors, and I believe Sentry has a way to filter out specific errors, although I'm not sure where that's configured.
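If filtering it out in Sentry helps in the meantime, a before_send hook in sentry_sdk can drop just this RuntimeError before it is sent. A minimal sketch, assuming the standard sentry_sdk setup (the DSN below is a placeholder):

import sentry_sdk

def before_send(event, hint):
    # Drop the agent's "not the current trace" RuntimeError instead of reporting it.
    if "exc_info" in hint:
        _exc_type, exc_value, _tb = hint["exc_info"]
        if isinstance(exc_value, RuntimeError) and "not the current trace" in str(exc_value):
            return None
    return event

sentry_sdk.init(
    dsn="https://examplePublicKey@o0.ingest.sentry.io/0",  # placeholder DSN
    before_send=before_send,
)

Everything else still reaches Sentry; only events whose exception is this specific RuntimeError are dropped.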