Currently, we're using Google Cloud Storage as a database: we read the bucket to fetch job status.
For a quick approximation (as of January 2025): over 24 hours we receive at most 30 jobs, each taking at most an hour. We poll each job every minute, so that's 1,800 polls per day. Each poll involves a list-blobs call and a download call, i.e. 3,600 Class A operations per day, or ~1.3M per year. At current pricing that costs at most ~$8/year, but it can grow substantially as we scale.
It also adds latency to the endpoint.
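As a sanity check on the numbers above, a back-of-envelope calculation (the Class A rate of ~$0.005 per 1,000 operations is an assumption based on standard-storage pricing; verify against the current GCS price sheet):

```python
# Back-of-envelope cost estimate for the GCS polling approach.
JOBS_PER_DAY = 30       # upper bound on jobs per 24h
POLLS_PER_JOB = 60      # one poll per minute for up to an hour
OPS_PER_POLL = 2        # one list-blobs call + one download call
PRICE_PER_1K = 0.005    # assumed Class A rate (USD per 1,000 ops)

ops_per_day = JOBS_PER_DAY * POLLS_PER_JOB * OPS_PER_POLL
ops_per_year = ops_per_day * 365
cost_per_year = ops_per_year / 1000 * PRICE_PER_1K

print(ops_per_day)               # 3600
print(ops_per_year)              # 1314000
print(round(cost_per_year, 2))   # 6.57
```

Note the cost scales linearly with job count and polling frequency, so doubling either doubles the bill.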
Also unify job IDs into a single format (I suggest using the Vertex AI Pipeline-compliant one). But what happens when requests are sent too quickly for the same job?
If the same job is guaranteed to have the same jobId, the queue will handle this case: it's impossible to have more than one job with the same jobId in the queue at once.
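A minimal sketch of the deduplication idea, assuming the same logical job always maps to the same jobId (the class and method names here are hypothetical, not from the codebase):

```python
import queue


class DedupJobQueue:
    """Queue that rejects a jobId already enqueued or running."""

    def __init__(self):
        self._queue = queue.Queue()
        self._seen = set()  # jobIds currently enqueued or in progress

    def submit(self, job_id: str) -> bool:
        """Enqueue job_id unless it is already present; return True if enqueued."""
        if job_id in self._seen:
            return False  # duplicate request arrived too quickly; drop it
        self._seen.add(job_id)
        self._queue.put(job_id)
        return True

    def done(self, job_id: str) -> None:
        """Mark a job finished so the same jobId can be submitted again."""
        self._seen.discard(job_id)


q = DedupJobQueue()
print(q.submit("job-123"))  # True: first submission is accepted
print(q.submit("job-123"))  # False: same jobId, rejected by the queue
```

For concurrent producers, `_seen` would need a lock (or the check-and-add folded into a single synchronized operation), but the dedup principle is the same.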