implementing graceful shutdown on inactivity #1291
It seems that version 2.15 had this (https://github.com/procrastinate-org/procrastinate/blob/2.15.1/procrastinate/worker.py) via the `timeout` parameter. Could that be added back with something like `timeout_pull_tasks`? Waiting on `listen_notify` until SIGTERM only might restrict the flexibility of the behavior, imo.
Hi @saro2-a. I think we only renamed
Yep, I think this might actually be something else that could be helped by the introduction of middleware (as we may be able to track when a worker starts or stops a job).
So, is our release note wrong here? ("The
I don't see where middleware helps here, as it is only called when a new job is fetched for processing. If I understand the issue correctly, the worker should be stopped when there has been no job to process for a specified time period.
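To make that requirement concrete, independent of procrastinate's API: a watchdog coroutine can poll a count of in-flight jobs and trigger a stop callback once the idle period exceeds a threshold. This is a minimal sketch; `idle_watchdog` and the demo names are illustrative, not part of the library.

```python
import asyncio
import time


async def idle_watchdog(get_active_count, idle_timeout, stop, poll_interval=0.05):
    """Call stop() once no job has been active for idle_timeout seconds."""
    last_active = time.monotonic()
    while True:
        await asyncio.sleep(poll_interval)
        if get_active_count() > 0:
            # A job is running: reset the idle clock.
            last_active = time.monotonic()
        elif time.monotonic() - last_active >= idle_timeout:
            stop()
            return


async def demo():
    active = []  # stands in for the worker's list of in-flight jobs
    stopped = asyncio.Event()
    watchdog = asyncio.create_task(
        idle_watchdog(lambda: len(active), idle_timeout=0.2, stop=stopped.set)
    )
    # Simulate one short job, then inactivity.
    active.append("job-1")
    await asyncio.sleep(0.1)
    active.clear()
    await asyncio.wait_for(stopped.wait(), timeout=2)
    await watchdog
    return stopped.is_set()


if __name__ == "__main__":
    print(asyncio.run(demo()))
```

In a real deployment, `stop` would request the worker's graceful shutdown instead of setting an event.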
No sorry, I meant:
the reason I thought it was related was:
So, do you think we can't have this configuration within the release? I think it is a fairly common feature in job processors, as we want to avoid spinning them up and destroying them all the time.
What we're heading towards, I think, if that's ok with you:
Here's a very rough look at what it may look like, provided the middleware API isn't finalized:

```python
from __future__ import annotations

import asyncio
import functools

from procrastinate import job_context

import my_project


async def middleware(
    process_job,
    context: job_context.JobContext,
    *,
    current_tasks: list,
    event: asyncio.Event,
):
    # Track in-flight jobs and wake up the supervising loop below.
    current_tasks.append(context)
    event.set()
    result = await process_job()
    current_tasks.remove(context)
    return result


async def run_worker():
    current_tasks = []
    event = asyncio.Event()
    worker = asyncio.create_task(
        my_project.app.run_worker_async(
            middleware=[
                functools.partial(middleware, current_tasks=current_tasks, event=event)
            ],
        )
    )
    while True:
        done, _ = await asyncio.wait(
            [worker, asyncio.create_task(event.wait())],
            timeout=10 * 60,
            return_when=asyncio.FIRST_COMPLETED,
        )
        if not done and not current_tasks:
            print("No task for 10 minutes, exiting")
            break
        event.clear()
        if worker in done:
            break
    worker.cancel()
    try:
        await worker
    except asyncio.CancelledError:
        pass


if __name__ == "__main__":
    asyncio.run(run_worker())
```
@ewjoachim I think `asyncio.wait` confusingly doesn't raise `TimeoutError`; it just returns with an empty `done` set when the timeout elapses. Alternatively, another option is `asyncio.timeout`, rescheduled every time a job completes. By the way, until a middleware feature is available, it is already possible to implement a middleware.
(Ha, strange, I was re-reading my comment exactly when you commented.)
In an elastic environment, we might spin up more workers and then scale down. How do we gracefully shut down after 60 seconds of no tasks on the current worker?

It would be nice if the `wait` parameter were a scalar (or, for backward compatibility, if a new parameter were added):

https://github.com/procrastinate-org/procrastinate/blob/68debead597724633f7ff1946788e9586b33b045/procrastinate/worker.py#L34C1-L48C7

`wait_before_shutdown_seconds=None`
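A sketch of the proposed semantics on a generic consumer loop. The parameter name comes from the suggestion above and is hypothetical, and `asyncio.Queue` stands in for the real job source:

```python
import asyncio


async def worker_loop(queue: asyncio.Queue, wait_before_shutdown_seconds=None):
    """Illustrates the proposed semantics:

    wait_before_shutdown_seconds=None -> wait for jobs forever (like wait=True)
    wait_before_shutdown_seconds=N    -> exit after N seconds with no new job
    """
    handled = []
    while True:
        try:
            if wait_before_shutdown_seconds is None:
                job = await queue.get()
            else:
                job = await asyncio.wait_for(
                    queue.get(), timeout=wait_before_shutdown_seconds
                )
        except asyncio.TimeoutError:
            break  # idle long enough: graceful shutdown
        handled.append(job)
    return handled


async def demo():
    queue = asyncio.Queue()
    for job in (1, 2, 3):
        queue.put_nowait(job)
    return await worker_loop(queue, wait_before_shutdown_seconds=0.1)


if __name__ == "__main__":
    print(asyncio.run(demo()))
```

A scalar keeps the current behaviors as special cases, which is why a separate parameter (rather than overloading `wait`) preserves backward compatibility.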