Add periodic checking of stream/consumers #95
If I understand this right, this periodic checking would also solve the issue where a newly added (qa/review) controller currently does not pick up existing streams or consumers (it only looks for newly created ones).
Could this also fix the problem we have? We are currently in a testing phase, and sometimes our VMs running k3s get restarted. After the restart all services are running again, but the JetStream streams we created are missing, and we have to create them manually or redo the whole deployment. We create the streams via a Helm template file, which obviously runs during deployment but not after a restart. Or do we explicitly have to create the streams via
I'm wondering if you would be open to a PR refactoring the code to use https://pkg.go.dev/sigs.k8s.io/controller-runtime? This would also make it easier to run retries and continuous reconciliation: https://pkg.go.dev/sigs.k8s.io/controller-runtime/pkg/reconcile#Result
This might also help with an issue we have seen when using memory storage. If all pods in a NATS cluster go down and the stream is lost, the NACK stream controller will not recreate the stream on the new servers. The only way to fix this is to redeploy NACK.
Besides this issue, NACK should periodically check the streams that are defined on the NATS cluster. If a stream is removed from the NATS cluster, NACK doesn't notice.
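The periodic check described above boils down to diffing the declared streams against what actually exists on the server and recreating anything missing. A minimal sketch of that idea, assuming a hypothetical `missingStreams` helper (the real implementation would call the JetStream API instead of reading an in-memory map):

```go
package main

import (
	"fmt"
	"time"
)

// missingStreams returns the streams declared via CRDs (desired) that
// no longer exist on the NATS server (actual). Both inputs are
// stand-ins for real CRD and JetStream API lookups.
func missingStreams(desired []string, actual map[string]bool) []string {
	var missing []string
	for _, name := range desired {
		if !actual[name] {
			missing = append(missing, name)
		}
	}
	return missing
}

func main() {
	desired := []string{"orders", "events"}
	actual := map[string]bool{"orders": true} // "events" was deleted server-side

	ticker := time.NewTicker(50 * time.Millisecond)
	defer ticker.Stop()

	// One tick is enough for the demo; a real controller would loop forever.
	<-ticker.C
	for _, name := range missingStreams(desired, actual) {
		fmt.Printf("recreating missing stream %q\n", name)
		actual[name] = true
	}
}
```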
I'm willing to work on this. You can assign it to me if you want.
I've fixed the re-creation problem for streams. The issue was only in the code's conditions, but I think adding the configuration checking would be a good next step. I can help, @JorTurFer.
When a stream/consumer ends up in an Error state because an API call failed, the underlying JetStream asset may actually be fine, so the controller needs to retry when that happens to clear the Error state from the CRD.