Key-Value Swallows Write Errors When Backing Impl Fails #2952
Comments
FWIW, it looks like this was an intentional tradeoff: see spin/crates/factor-key-value/src/util.rs, lines 75 to 80 at commit 1f2269b.
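For readers without the referenced code open, the pattern under discussion is essentially a write-behind cache: `set` updates an in-memory cache, hands the backing write to a background task, and reports success immediately, so the backend's error has nowhere to go. The sketch below only illustrates that pattern under invented names (`WriteBehindStore`, `Backend`, and `FailingBackend` are not Spin's actual types); it is not the code at the lines linked above.

```rust
use std::collections::HashMap;
use std::sync::{Arc, Mutex};
use std::thread;
use std::time::Duration;

// Hypothetical stand-in for a real backing store (Redis, Azure, AWS, ...).
trait Backend: Send + Sync + 'static {
    fn set(&self, key: &str, value: &[u8]) -> Result<(), String>;
}

// A backend whose writes always fail, to make the problem visible.
struct FailingBackend;

impl Backend for FailingBackend {
    fn set(&self, _key: &str, _value: &[u8]) -> Result<(), String> {
        Err("connection refused".to_string())
    }
}

// A write-behind cache in the spirit of the pattern discussed above: the
// caller gets Ok(()) before the backing write happens, and the background
// task's Result is dropped.
struct WriteBehindStore<B: Backend> {
    cache: Arc<Mutex<HashMap<String, Vec<u8>>>>,
    backend: Arc<B>,
}

impl<B: Backend> WriteBehindStore<B> {
    fn set(&self, key: &str, value: &[u8]) -> Result<(), String> {
        // 1. Update the in-memory cache so later reads see the new value.
        self.cache
            .lock()
            .unwrap()
            .insert(key.to_string(), value.to_vec());

        // 2. Flush to the backend asynchronously; its error is discarded.
        let backend = self.backend.clone();
        let (key, value) = (key.to_string(), value.to_vec());
        thread::spawn(move || {
            let _ = backend.set(&key, &value); // <- swallowed failure
        });

        // 3. Report success regardless of what the backend will say.
        Ok(())
    }

    fn get(&self, key: &str) -> Option<Vec<u8>> {
        // Reads are served from the cache, so they reflect the "successful" write.
        self.cache.lock().unwrap().get(key).cloned()
    }
}

fn main() {
    let store = WriteBehindStore {
        cache: Arc::new(Mutex::new(HashMap::new())),
        backend: Arc::new(FailingBackend),
    };

    // The write "succeeds" and is visible to a read in the same process...
    assert!(store.set("answer", b"42").is_ok());
    assert_eq!(store.get("answer"), Some(b"42".to_vec()));

    // ...but the backing store never received it, and nobody was told.
    thread::sleep(Duration::from_millis(100));
    println!("set reported Ok, get returned the value, backend write failed silently");
}
```

The tradeoff buys lower write latency, at the cost of silently lost writes when the backend is unhealthy.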
After some further investigation, it looks like some of the newer KV implementations (Azure, AWS) are using lazy connections that cannot fail on […]
I'm the one who implemented […]. Unfortunately, the consistency and durability model was apparently never documented in key-value.wit, and that's definitely a problem. I'll open a PR to address that. FWIW, the […]
@dicej thanks for the context! The linked […]

I assume that these docs are not only talking about subsequent reads in the same component instance but also about reads across component instances. Let me know if this assumption is incorrect. With that assumption, the current implementation does not uphold this guarantee: I can do a write and then a later read (in another instance) and get an old value. My understanding of the docs is that all writes must be guaranteed to be reflected eventually, so if I write a value (and no other writes ever happen), all clients will eventually see that write. That's not the case here. I'd love your clarification on this, so we can either update the wasi docs to reflect this or update the Spin implementation.
My reading is different, but I think that's because the docs aren't clear enough about the terms "client", "guest", and "caller" in the paragraph following the one you quoted. I've put up a PR that reflects my interpretation; I'm happy to refine and/or rewrite it based on feedback.

I think the key question we need to answer here is whether a host implementation of KV backed by an eventually-consistent distributed system, spread across multiple failure domains with asynchronous, "best-effort" durability, is (A) valid and (B) normal. For example, if we think it's valid but not normal, we can make the consistency and durability guarantees for both Spin KV and wasi-keyvalue much stronger, and create a new, niche interface which has weaker guarantees in exchange for wider applicability.

And regardless of how we answer that question, we should make sure the docs are crystal clear so there are no surprises.
Another way to address the original issue: define […]
Would it be desirable to implement this as a runtime config (i.e. in spin.toml)? That would allow users to flexibly choose strong or weak durability without changing the KV interface. Though perhaps it is better to split the interface as you suggest and encourage users to be explicit (and to allow a choice of durability for each database call in a component).
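To make that suggestion concrete, here is a hedged sketch of what a per-store durability knob driven by runtime configuration might look like, assuming the serde and toml crates. None of this exists in Spin today: the `durability` key, the `DurabilityMode` enum, and the field names are invented purely to illustrate the strong-vs-relaxed choice.

```rust
use serde::Deserialize;

// Hypothetical runtime-config entry (this key does not exist in Spin today):
//
//   [key_value_store.default]
//   type = "azure_cosmos"
//   durability = "strong"   # or "relaxed"
//
#[derive(Debug, Clone, Copy, PartialEq, Deserialize)]
#[serde(rename_all = "snake_case")]
enum DurabilityMode {
    // `set` returns only after the backing store acknowledges the write,
    // so backend failures surface as errors to the guest.
    Strong,
    // `set` returns once the in-memory cache is updated; the backing write
    // happens asynchronously and its failure is not reported (today's behavior).
    Relaxed,
}

// Hypothetical shape of a store's runtime-config section.
#[derive(Debug, Deserialize)]
struct StoreConfig {
    r#type: String,
    #[serde(default = "default_durability")]
    durability: DurabilityMode,
}

fn default_durability() -> DurabilityMode {
    DurabilityMode::Relaxed
}

fn main() {
    let raw = r#"
        type = "azure_cosmos"
        durability = "strong"
    "#;
    let cfg: StoreConfig = toml::from_str(raw).expect("valid store config");

    // A store factory could then pick the wrapper accordingly:
    // Strong  -> pass backend errors straight through to the caller,
    // Relaxed -> keep the current write-behind caching behavior.
    println!("type = {}, durability = {:?}", cfg.r#type, cfg.durability);
}
```

A split interface, as suggested above, could still layer on top of something like this: the config picks the default, and an explicit interface could override it per call.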
The CachingStore completely swallows errors when the inner implementation fails on writes (see spin/crates/factor-key-value/src/util.rs, line 190 at commit 1f2269b). This can lead to writes appearing to succeed (even being reflected by reads within the same process) without actually being persisted.