Our OpenTelemetry provider is putting their prices up, so we should reduce how much we use.
Currently, we're using about 1.2B events a month, and the next-lowest pricing threshold is 450M.
They are currently split:

- cloudsql-proxy: 0.11%
- kubernetes-bwd-nginx: 0.15%
- kubernetes-bwd-ocaml: 57.03% (1.13B)
- kubernetes-garbagecollector: 38.02% (376M)
- kubernetes-metrics: 4.69% (45M)
Within kubernetes-bwd-ocaml, the events are split:

| Service | Events |
| --- | --- |
| BwdServer | 608,015,209 |
| QueueWorker | 354,919,048 |
| ApiServer | 66,742,393 |
| CronChecker | 38,742,278 |
| other | 5,528,954 |
Note the numbers don't add up because we had a big month for BwdServer due to an anomaly.
To address this:
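- use TraceRatio samplers on each service (20% for BwdServer, 20% for QueueWorker, 100% for others); see the sketch after this list
  - write code
  - merge to dark repo
  - backport to classic-dark repo
  - merge & deploy
  - add flags to LaunchDarkly
    - add flags
      - BwdServer
      - QueueWorker
  - check it works
- Reduce plan
- use Honeycomb sampling for garbagecollector (5% should be fine; I'd be surprised if we ever look at this again)
  - merge change
  - check it worked
- disable k8s metrics (we get these from Google Cloud anyway)
  - merge change
  - check it worked

A minimal sketch of what the TraceRatio samplers could look like, using the Python OpenTelemetry SDK purely as an illustration (our services aren't Python, and the `SERVICE_SAMPLE_RATIOS` table and `make_tracer_provider` helper are hypothetical; the ratios are the ones from the list above):

```python
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.sampling import ParentBased, TraceIdRatioBased

# Planned sampling ratios; any service not listed keeps 100% of traces.
SERVICE_SAMPLE_RATIOS = {
    "BwdServer": 0.20,
    "QueueWorker": 0.20,
}

def make_tracer_provider(service_name: str) -> TracerProvider:
    ratio = SERVICE_SAMPLE_RATIOS.get(service_name, 1.0)
    # ParentBased makes child spans follow the decision taken at the root of
    # the trace, so a trace is either kept whole or dropped whole.
    sampler = ParentBased(root=TraceIdRatioBased(ratio))
    return TracerProvider(sampler=sampler)

# e.g. in BwdServer's startup code:
trace.set_tracer_provider(make_tracer_provider("BwdServer"))
```

Depending on SDK support, the same thing can also be configured per deployment with the standard `OTEL_TRACES_SAMPLER=parentbased_traceidratio` / `OTEL_TRACES_SAMPLER_ARG=0.2` environment variables, which would avoid touching each service's startup code.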
Overall, this should reduce us from 1.8B in March to:

- BwdServer: 121M
- QueueWorker: 71M
- ApiServer: 67M
- CronChecker: 39M
- kubernetes-bwd-ocaml other: 6M
- garbagecollector: 18M

Overall, around 350M.
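For reference, these projections appear to be the per-service numbers above scaled by the planned sample rates: roughly 608M × 20% ≈ 121M for BwdServer, 355M × 20% ≈ 71M for QueueWorker, and 376M × 5% ≈ 19M for garbagecollector, with ApiServer, CronChecker, and the rest kept at 100%.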