
Add support for dynamic postgres connection config #219

Draft
wants to merge 1 commit into master

Conversation

titanous

This adds support for providing a dynamic configuration for postgres pools that can provide a unique configuration for each new connection.

The motivating use case for this is AWS RDS IAM authentication, which uses a token that has a 15 minute expiry as the password. I have implemented this algorithm with the new hook provided in this PR and am currently testing it out.

This PR also depends on a small change in tokio_postgres to allow passing errors through cleanly. Once the approach in this PR has been reviewed, I'll submit that PR for review upstream.
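
As a rough sketch of the shape such a callback could take (names like `ConfigProvider` and `mint_rds_iam_token` are illustrative here, not the code in this PR):

```rust
use std::future::Future;
use std::pin::Pin;

use tokio_postgres::Config;

/// Hypothetical shape of the "dynamic config" callback: it is asked for a fresh
/// `tokio_postgres::Config` every time the pool opens a new connection, so
/// short-lived credentials get re-minted per connection.
pub trait ConfigProvider: Send + Sync {
    fn get_config(
        &self,
    ) -> Pin<Box<dyn Future<Output = Result<Config, Box<dyn std::error::Error + Send + Sync>>> + Send + '_>>;
}

/// Example provider for the RDS IAM use case: the base config carries host,
/// user, dbname and TLS settings; only the password (the IAM token) changes.
struct RdsIamProvider {
    base: Config,
}

impl ConfigProvider for RdsIamProvider {
    fn get_config(
        &self,
    ) -> Pin<Box<dyn Future<Output = Result<Config, Box<dyn std::error::Error + Send + Sync>>> + Send + '_>> {
        Box::pin(async move {
            let mut cfg = self.base.clone();
            // Placeholder for the application's own token generation (AWS SigV4
            // presigning); the resulting token expires after 15 minutes.
            let token = mint_rds_iam_token().await?;
            cfg.password(token);
            Ok(cfg)
        })
    }
}

async fn mint_rds_iam_token() -> Result<String, Box<dyn std::error::Error + Send + Sync>> {
    unimplemented!("call the AWS SDK / presigning logic here")
}
```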

I'm pretty new to Rust, so please let me know if I'm approaching this wrong.

@bikeshedder
Owner

Interesting idea. It flips the responsibilities around: rather than changing the configuration directly, a callback can be configured that provides the manager configuration. I do like the idea of that "configuration provider", but at the same time I wonder if it's really the right approach.

E.g. let's say the database is being migrated, the connection URL has changed, and connections using the old URL should be dropped ASAP. With that approach only new connections get to know about the configuration change, and existing connections will stay pooled and reused over and over again. Unless the recycle method also calls get_config, checks for equality with the last known configuration, and drops objects with a different configuration, the existing connections will stay connected.

I have to play with this a bit. I've toyed with replacing the entire manager, but that would be a rather heavy change: manager implementations that track the created objects (such as the postgres one) would either need to reattach connections from other manager instances, or all existing connections would need to be dropped when returned to the pool after the manager changes. I'm undecided whether I'd rather make the manager non-replaceable and leave it to the manager implementation to support config changes.

See also

@titanous
Author

Unless the recycle method also calls get_config, checks for equality with the last known configuration, and drops objects with a different configuration, the existing connections will stay connected.

Hmm, for that use case the first thing that comes to mind is to put an optional "generation number" (u64) in the config and then have a recycle method that recycles everything before a specific generation. That way the config infrastructure can manage things however it wants and the Pool doesn't need to be aware of the implementation details.
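
Something along these lines, as a sketch (the types and names are made up for illustration):

```rust
use std::sync::atomic::{AtomicU64, Ordering};

struct ConfigSource {
    /// Bumped whenever the underlying configuration changes.
    generation: AtomicU64,
}

struct PooledConn {
    /// Remembered at creation time.
    created_generation: u64,
    // ... tokio_postgres client, statement cache, etc.
}

impl ConfigSource {
    /// Recycle-path check: anything created before the current generation is
    /// considered stale and should be dropped instead of being reused.
    fn is_stale(&self, conn: &PooledConn) -> bool {
        conn.created_generation < self.generation.load(Ordering::Acquire)
    }

    /// Called by the application after rotating credentials or changing the URL.
    fn invalidate_existing_connections(&self) {
        self.generation.fetch_add(1, Ordering::Release);
    }
}
```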

@fiadliel

I was looking for this kind of change (with Google Cloud SQL in my case). I personally don't think there's a single policy that works for whether a config update should cause existing connections to be removed; this should be signalled separately, either here or by another mechanism.

I was considering the possibility of just making the DB config be a Mutex, and clone it before connecting. The advantage of the approach in this PR (over the mutex approach) seems to be that there's never a case where you might make a connection with a stale configuration, as you always "read through" what the current value should be.
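
For comparison, the mutex approach I had in mind would look roughly like this (sketch only):

```rust
use std::sync::Mutex;

use tokio_postgres::Config;

/// Shared, swappable configuration: the application can replace it at any
/// time; the manager clones a snapshot right before each connect.
struct SharedConfig {
    inner: Mutex<Config>,
}

impl SharedConfig {
    /// Used by the connect path. Note the window between taking this snapshot
    /// and actually connecting, which the read-through callback in this PR avoids.
    fn snapshot(&self) -> Config {
        self.inner.lock().unwrap().clone()
    }

    /// Used by whatever watches for credential or endpoint changes.
    fn replace(&self, new_config: Config) {
        *self.inner.lock().unwrap() = new_config;
    }
}
```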

@bikeshedder
Owner

Great insights. There doesn't seem to be one "right way" that should be hard-coded.

I think the best approach would be to let the object reference the configuration and/or manager that was used to create it. The pre_recycle hook could then compare the current configuration with the configuration that was used to create the object and discard the object if needed. Deadpool currently doesn't come with any premade hooks, so it probably makes sense to write some hooks covering the most common use cases, e.g. max_age, max_use_count, drop_on_manager_change, etc.
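
A premade hook of that kind could look roughly like this (the `DropOnConfigChange` name follows the list above; it is not an existing deadpool API):

```rust
use std::sync::{Arc, RwLock};

/// Sketch of a reusable "drop on config change" pre-recycle check. The pool
/// would run this before recycling an object and discard the object when
/// `keep` returns false.
struct DropOnConfigChange<C: PartialEq> {
    /// The configuration currently in effect.
    current: Arc<RwLock<C>>,
}

impl<C: PartialEq> DropOnConfigChange<C> {
    /// `created_with` is the configuration the object stored at creation time.
    fn keep(&self, created_with: &C) -> bool {
        *self.current.read().unwrap() == *created_with
    }
}
```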

I'm still intrigued by the idea of replacing the manager rather than just replacing the config. This would mean that managers like the one from deadpool-postgres need to support some kind of handover mechanism, as the manager keeps track of the objects it has created in order to access their statement caches. Alternatively, the logic for tracking objects could be moved into the deadpool core. Even though it's currently only needed by deadpool-postgres, it could allow features like accessing object metrics of handed-out objects for debugging and introspection purposes.
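
Core-level tracking could be as simple as a registry of weak references, something like the following sketch (all names illustrative, not an existing deadpool API):

```rust
use std::sync::{Arc, Mutex, Weak};
use std::time::Instant;

/// The pool keeps weak references to the objects it created so per-object
/// state (statement caches, metrics) can be inspected without each manager
/// doing its own bookkeeping.
struct TrackedObject<T> {
    created_at: Instant,
    object: Weak<T>,
}

struct ObjectRegistry<T> {
    objects: Mutex<Vec<TrackedObject<T>>>,
}

impl<T> ObjectRegistry<T> {
    fn register(&self, object: &Arc<T>) {
        self.objects.lock().unwrap().push(TrackedObject {
            created_at: Instant::now(),
            object: Arc::downgrade(object),
        });
    }

    /// Removes entries whose objects have already been dropped.
    fn prune(&self) {
        self.objects
            .lock()
            .unwrap()
            .retain(|t| t.object.strong_count() > 0);
    }
}
```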

@bikeshedder
Owner

Related to:

If this lands, the Manager could just be swapped, making configuration changes available for all deadpool-* crates as part of the core crate.
