
Sharing watch streams and caches between Controllers #1080

Open · 5 of 7 tasks
clux opened this issue Nov 15, 2022 · 2 comments

Labels: runtime (controller runtime related), umbrella (major roadmap tracking issue)

clux (Member) commented Nov 15, 2022

Would you like to work on this feature?

maybe

What problem are you trying to solve?

Currently, our Controller machinery creates watch streams internally, plus a single reflector for the main watch stream only, and these streams/caches are fully managed and internal (except for the single reflector reader).

This means there is no good way to share streams with other controllers (each controller would similarly set up its own watches), so kube_runtime is currently best suited for smaller controllers rather than the larger controller-manager style ones we see in Go.

I would like to let users configure the watch stream(s) themselves so that, at a minimum, streams and caches can be shared between Controller instances, and to do this with the fewest ergonomic pain points in the existing Controller API. It is currently possible to do this with the applier, but the applier is certified hard mode for most users.
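For reference, a minimal sketch of the status quo being described, where each Controller builds and drives its own watches internally (written against current kube names; when this issue was filed the config argument was still a ListParams, and reconcile/error_policy here are stub user functions):

use std::{sync::Arc, time::Duration};

use futures::StreamExt;
use k8s_openapi::api::{apps::v1::Deployment, core::v1::Pod};
use kube::{
    api::Api,
    runtime::{controller::Action, watcher, Controller},
    Client,
};

async fn reconcile(_obj: Arc<Deployment>, _ctx: Arc<()>) -> Result<Action, kube::Error> {
    Ok(Action::await_change())
}

fn error_policy(_obj: Arc<Deployment>, _err: &kube::Error, _ctx: Arc<()>) -> Action {
    Action::requeue(Duration::from_secs(60))
}

#[tokio::main]
async fn main() {
    let client = Client::try_default().await.expect("kubeconfig");
    let deploys: Api<Deployment> = Api::all(client.clone());
    let pods: Api<Pod> = Api::all(client);

    // The Controller creates its watcher streams and the main reflector
    // internally; a second Controller interested in Pods would open its own
    // duplicate watch against the apiserver.
    Controller::new(deploys, watcher::Config::default())
        .owns(pods, watcher::Config::default())
        .run(reconcile, error_policy, Arc::new(()))
        .for_each(|res| async move {
            if let Err(e) = res {
                eprintln!("reconcile failed: {e:?}");
            }
        })
        .await;
}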

This has come up before in #824

Progress

NB: there was an early experiment for managing multiple watcher streams and caches in #1147, and parts of it have been merged in #1131.

Original Idea Sketches


This issue aims to start the conversation with 3 ideas:

  1. lift the creation of the controller's QueueStream into a separate builder that can take arbitrary watches
  2. create a full StreamCache container for various api/ListParams pairs
  3. change Controller::watches and Controller::owns to take a watch stream rather than an Api

NB: A non-goal of this issue is solving the much harder problem of two controllers watching the same API with different ListParams (one might be less strict, so one watch could be a subset of another). This is very hard to do because the ListParams are intrinsic to the watch, and you can have totally orthogonal ListParams that watch only certain labels. It is potentially possible to analytically figure out the largest subset of a full watch, and then locally filter events down to the relevant ones (using some kind of watcher interface), but that would be quite hard to do, so it is not covered here at all - feel free to write an issue for it!

1. QueueStream Abstraction

A first (bad) idea I had: take the QueueStream builder and make it top-level so it can be re-used between controllers (pseudo-code):

let qs = QueueStream::for(mainapi, lp1).owns(ownedapi, lp2).watches(watchedapi, lp3, mapper);
let ctrl = Controller::from(&qs).run(...);
// re-using the underlying streams
let stream1 = qs.get_stream(typemeta_for_main_api);
let stream2 = qs.get_stream(typemeta_for_owned_api);
// create another queuestream from these underlying streams
let qs2 = QueueStream::for_stream(stream2).watches_stream(stream1, mapper);

This feels very awkward. Internally it needs to know the params and relations to apply to each api before it can be passed to the Controller, yet the stream it needs to expose is the one from before any mapping relations are applied. It also needs two sets of constructors (one for streams and one for api-lp pairs).

2. StreamCache Abstraction

A possibly more direct translation of what we have in Go: we make a literal map of streams that can be used by many controllers:

// init with api-lp pairs
let scache = StreamCache::new().add(api1, lp1).add(api2, lp2).add(api3, lp3);
// grab one watch stream
let crdstream: impl Stream = scache.get(typemeta_for_api1).unwrap();
// get a cache for it
let (crdreader, crdwriter) = reflector::store::<MyCrd>();
let crd_cache = reflector(crdwriter, crdstream);
// get another stream
let blahstream = scache.get(typemeta_for_api2).unwrap();

// create a controller using these two watch streams:
let ctrl1 = Controller::for(crdstream, crd_cache).owns(blahstream);

// get the third stream with cache
let crdstream2 = scache.get(typemeta_for_api3).unwrap();
let (crdreader2, crdwriter2) = reflector::store::<My2ndCrd>();
let crd_cache2 = reflector(crdwriter2, crdstream2);

// create a controller for the second crd (api3) but re-using the watch from api2 with a different relation
let ctrl2 = Controller::for(crdstream2, crd_cache2).watches(blahstream, mapper);

I'm not a huge fan of this because the cache struct currently does not do much: it just stores the streams, and the user still has to do all of the reflector wiring themselves.

Maybe it is possible here to do a StreamCache::into_queuestream and have that be accepted by a Controller, but I have difficulty envisioning this as the right path.

3. Controller only takes streams, no cache struct (chosen)

If we are already changing the controller to take streams rather than api-lp pairs, we should possibly encourage direct ownership for now and just teach users to deal with the streams. We have already done so much work on WatchStreamExt anyway.

let api1: Api<Kind1> = ...
let api2: Api<Kind2> = ...
let api3: Api<Kind3> = ...
let (reader1, writer1) = reflector::store::<Kind1>();
let (reader3, writer3) = reflector::store::<Kind3>();
let watch1 = reflector(writer1, watcher(api1, lp1));
let watch2 = watcher(api2, lp2); // this one we don't need a cache for
let watch3 = reflector(writer3, watcher(api3, lp3));

let ctrl1 = Controller::for(watch1, reader1).owns(watch2);
let ctrl2 = Controller::for(watch3, reader3).watches(watch2, mapper);

This feels pretty natural: all ownership is managed in main. But it leaves a lot of the gluing up to the user, so there are many chances to map the wrong listparams to the wrong type (with potentially hard-to-decipher errors if we don't introduce more telemetry), and it could be a somewhat painful journey.

I think this is ultimately the most sensible starting point, though.

Maybe this can be done with some helper struct that can be fed into the Controller in a way that minimises potential user errors. Ideas welcome; one possible shape is sketched below.
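As one illustration of such a helper (entirely hypothetical; CachedWatch is a name invented here, and it is shown concretely for Pod to keep the trait bounds simple): pair the reflector-backed stream with its typed reader at construction time, so the two can never be mismatched:

use futures::Stream;
use k8s_openapi::api::core::v1::Pod;
use kube::{
    api::Api,
    runtime::{
        reflector::{self, Store},
        watcher::{self, Event},
    },
};

/// Hypothetical helper: ties a watch stream to the cache built from it, so a
/// caller cannot wire a stream to a store of the wrong type or params.
struct CachedWatch {
    reader: Store<Pod>,
    stream: std::pin::Pin<Box<dyn Stream<Item = Result<Event<Pod>, watcher::Error>> + Send>>,
}

impl CachedWatch {
    fn new(api: Api<Pod>, cfg: watcher::Config) -> Self {
        let (reader, writer) = reflector::store::<Pod>();
        // The reflector keeps `reader` up to date as `stream` is polled.
        let stream = Box::pin(reflector::reflector(writer, watcher::watcher(api, cfg)));
        CachedWatch { reader, stream }
    }
}

A Controller entry point that accepts a stream/reader pair (as proposed above) could then consume a CachedWatch whole, making this class of mis-wiring unrepresentable.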

Documentation, Adoption, Migration Strategy

Will write a guide for this on kube.rs once we have something.

Target crate for feature

kube-runtime

@clux added the runtime (controller runtime related), help wanted (Not immediately prioritised, please help!), and question (Direction unclear; possibly a bug, possibly could be improved.) labels on Nov 15, 2022
@clux changed the title Sharing reflector caches between Controllers → Sharing watch streams and caches between Controllers on Nov 15, 2022
danrspencer (Contributor) commented Jan 13, 2023

Just chiming in to say this looks really useful and addresses exactly the problems we're looking to solve in our own usage of kube-rs. The third option looks the best to me of the ones presented above.

I'm playing around with a store factory at the moment with a pattern that could address some of the concerns, e.g.

// very very rough example of the API I'm considering
fn example(pod_store_factory: StoreFactory<Pod>) {
    let example_pod_store = pod_store_factory.all(ListParams::default().labels("example=test"));
}

The aim is to keep the declaration of selectors close to their point of ownership, but if another piece of code asks for the same store (e.g. same scope and list params), the factory serves a clone of the first one created so that a second stream isn't spawned.
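For illustration, a rough sketch of the memoization such a factory implies (hypothetical code; the actual kube watch/reflector wiring is elided behind the `create` closure):

use std::collections::HashMap;
use std::sync::{Arc, Mutex};

/// Hypothetical memoizing factory: identical selectors receive a clone of the
/// same shared store handle instead of spawning a second watch stream.
struct StoreFactory<S> {
    // Keyed by the canonical selector string (e.g. "example=test").
    stores: Mutex<HashMap<String, Arc<S>>>,
}

impl<S> StoreFactory<S> {
    fn new() -> Self {
        StoreFactory { stores: Mutex::new(HashMap::new()) }
    }

    /// Return the store for `selector`, creating and caching it on first use.
    fn all(&self, selector: &str, create: impl FnOnce() -> S) -> Arc<S> {
        let mut stores = self.stores.lock().unwrap();
        Arc::clone(stores.entry(selector.to_string()).or_insert_with(|| Arc::new(create())))
    }
}

fn main() {
    let factory: StoreFactory<String> = StoreFactory::new();
    // Two requests with the same selector share one underlying store.
    let a = factory.all("example=test", || "store".to_string());
    let b = factory.all("example=test", || unreachable!("already cached"));
    assert!(Arc::ptr_eq(&a, &b));
}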

@clux removed the question (Direction unclear; possibly a bug, possibly could be improved.) label on Apr 7, 2023
@clux added the umbrella (major roadmap tracking issue) label on Apr 7, 2023
@github-project-automation bot moved this to Defining in Kube Roadmap on Sep 13, 2023
@clux moved this from Defining to In Progress in Kube Roadmap on Sep 13, 2023
@clux moved this from In Progress to Blocked in Kube Roadmap on Sep 13, 2023
@clux linked a pull request on Apr 17, 2024 that will close this issue
@clux removed a link to a pull request on Apr 17, 2024
@clux removed the help wanted (Not immediately prioritised, please help!) label on Apr 19, 2024
clux (Member, Author) commented May 9, 2024

As an update to the people subscribed to this issue, this is now at a point where it should have a functional start under an unstable feature in 0.91: https://github.com/kube-rs/kube/releases/tag/0.91.0
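For anyone wanting to experiment, a rough sketch of the shape of that unstable API, going by the 0.91 release notes and the kube.rs streams documentation (the names store_shared, subscribe, and reflect_shared reflect my reading of that release and may change while the feature is unstable):

// Requires the kube `unstable-runtime` feature flag (assumption: 0.91 naming).
use futures::StreamExt;
use k8s_openapi::api::core::v1::Pod;
use kube::{
    api::Api,
    runtime::{reflector, watcher, WatchStreamExt},
    Client,
};

async fn shared_pod_watch(client: Client) {
    let pods: Api<Pod> = Api::all(client);

    // `store_shared` is like `reflector::store`, but its writer can hand out
    // subscribers; the argument bounds how far a slow subscriber may lag.
    let (reader, writer) = reflector::store_shared(256);
    let subscriber = writer.subscribe().expect("writer was created via store_shared");

    // One upstream watch feeds the shared store...
    let root = watcher(pods, watcher::Config::default())
        .default_backoff()
        .reflect_shared(writer)
        .applied_objects()
        .for_each(|_| async {}); // drive the stream; errors ignored for brevity

    // ...and subscribers observe the same events without extra apiserver traffic.
    let fanout = subscriber.for_each(|pod| async move {
        println!("saw {:?}", pod.metadata.name);
    });

    futures::join!(root, fanout);
    let _ = reader; // the reader can back caches for Controllers sharing this watch
}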
