Difference between Throttle and Rate-Limit
Both Throttling and Rate-Limiting are designed to limit the count of processes running at a given time. However, Rate-Limiting rejects processes that exceed the limit, while Throttling queues or pauses them until the current ones have completed.
Rate limiting protects a system by applying a hard limit on its access. Throttling shapes a system by smoothing spikes in traffic.
A background processor shouldn't reject queued tasks that exceed a limit; rejection is best handled at the load-balancer layer. For these reasons, Goose uses the term Throttling, not Rate-Limiting.
Why the need to Throttle Jobs?
Often, 3rd-party APIs will enforce a rate limit. Ergo, the count of Jobs executing at a given time shouldn't exceed that limit.
Patterns of Throttling
As elaborated here, Throttling can be done in 5 ways (a rough sketch of the Fixed Window variant follows the list):
Concurrent: Only N jobs can execute at a given time.
Token Bucket: Like Concurrent, but the resource pool is capped at N and refills at a fixed rate, which might be higher or lower than the Job completion rate.
Leaky Bucket: Like Token Bucket, but allows bursts of Jobs within a small time interval. The resource pool can stay fixed (like Concurrent) or grow at a fixed rate (like Token Bucket).
Fixed Window: In a given time-frame, only N jobs can execute.
Sliding Window: Like Fixed Window, but with a rolling time window whose cursor moves forward to the next executed Job.
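To make the patterns concrete, here is a minimal sketch of the Fixed Window variant in Python against Redis. This isn't Goose code; the key layout, window size, and limit are assumptions for illustration, and the other patterns would follow a similar counter-in-the-broker shape.

```python
import time

import redis  # assumes the redis-py client

r = redis.Redis()

WINDOW_SECONDS = 60  # hypothetical window size
MAX_JOBS = 100       # hypothetical limit N

def try_acquire_fixed_window(queue: str) -> bool:
    """Fixed Window: allow at most MAX_JOBS executions per WINDOW_SECONDS."""
    window = int(time.time()) // WINDOW_SECONDS
    key = f"throttle:{queue}:{window}"  # one counter per time window
    count = r.incr(key)                 # atomic increment
    if count == 1:
        # The first Job in a window sets an expiry so stale counters clean themselves up.
        r.expire(key, WINDOW_SECONDS * 2)
    return count <= MAX_JOBS
```

A worker would call `try_acquire_fixed_window` before executing a Job, and wait or re-enqueue when it returns `False`.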
Nuances of Throttling for a background processor
Make note of these things when implementing this feature:
Since executing Jobs will acquire a lock, have a `lock_timeout` to ensure crashed processes do not hold a lock forever
Have a `wait_timeout` to ensure workers aren't waiting forever to acquire a lock. Upon timing out, users can configure Goose to publish a metric, raise an alert, or discard the Job altogether (see the sketch after this list)
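As a rough illustration of how `lock_timeout` and `wait_timeout` interact, here is a Python sketch of a counting semaphore on a Redis sorted set. The names, key layout, and timeout values are assumptions, not Goose's API: stale slots older than the lock timeout are reaped, and a worker gives up once the wait timeout elapses.

```python
import time
import uuid
from typing import Optional

import redis  # assumes the redis-py client

r = redis.Redis()

MAX_CONCURRENT = 10  # hypothetical limit N
LOCK_TIMEOUT = 300   # seconds a crashed worker may hold a slot
WAIT_TIMEOUT = 30    # seconds a worker will wait for a slot

def acquire_slot(queue: str) -> Optional[str]:
    """Try to claim one of MAX_CONCURRENT execution slots; return a slot id or None."""
    key = f"throttle:{queue}:executing"
    deadline = time.time() + WAIT_TIMEOUT
    while time.time() < deadline:
        now = time.time()
        # lock_timeout: drop slots held longer than LOCK_TIMEOUT (crashed workers).
        r.zremrangebyscore(key, 0, now - LOCK_TIMEOUT)
        slot_id = str(uuid.uuid4())
        r.zadd(key, {slot_id: now})
        if r.zrank(key, slot_id) < MAX_CONCURRENT:
            return slot_id       # slot acquired; execute the Job
        r.zrem(key, slot_id)     # over the limit; back off and retry
        time.sleep(0.5)
    # wait_timeout hit: the caller can publish a metric, raise an alert, or discard the Job.
    return None

def release_slot(queue: str, slot_id: str) -> None:
    """Release the slot once the Job finishes."""
    r.zrem(f"throttle:{queue}:executing", slot_id)
```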
Implementation Details
This is a complex feature to build. Some ideas after initial investigation:
A persistent store will be required to track the count of executing Jobs. Hence, this feature can exist for a message broker like Redis or Postgres, but not for RabbitMQ.
If the message broker has built-in support for expiry, that'll be helpful. Else, a separate thread will have to do garbage collection (sketched below).
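For a broker without built-in expiry (e.g. Postgres), the garbage collection could be a periodic sweep that deletes executing-Job entries older than the lock timeout. A minimal sketch, assuming a hypothetical `executing_jobs` table with a `started_at` column and the psycopg2 driver:

```python
import threading
import time

import psycopg2  # assumed Postgres driver

LOCK_TIMEOUT_SECONDS = 300  # hypothetical value, matching lock_timeout

def gc_stale_entries(dsn: str) -> None:
    """Periodically delete entries whose workers have likely crashed."""
    conn = psycopg2.connect(dsn)
    while True:
        with conn.cursor() as cur:
            cur.execute(
                "DELETE FROM executing_jobs"
                " WHERE started_at < now() - make_interval(secs => %s)",
                (LOCK_TIMEOUT_SECONDS,),
            )
        conn.commit()
        time.sleep(60)  # sweep once a minute

# Run the sweeper alongside the worker threads.
threading.Thread(target=gc_stale_entries, args=("dbname=goose",), daemon=True).start()
```

With Redis, a TTL on the counter keys (as in the Fixed Window sketch above) makes a separate sweeper unnecessary.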