Replies: 2 comments 2 replies
---
/cc @benjie @robrichard curious if you have any initial thoughts, since I chatted with both of you a bit about this in broad terms at GraphQLConf.
---
Hi Michael, thanks for sharing these detailed thoughts! Some questions:
My main question, though, relates to the guiding principle "Favor no change": can't this all be achieved via cursor pagination without requiring any changes to the specification? Perhaps we should focus on standardizing cursor pagination across the ecosystem rather than adding another mechanism?
---
Defer and stream offer fantastic primitives for incremental delivery on a single chunked request, and I'm looking forward to using them. I also think the new wire protocol is well crafted to enable a pretty broad range of use cases, which is what I wanted to bring up here.
I'm interested in GraphQL having better capabilities for common patterns like pagination without needing to rely on bespoke interfaces and per-application implementation at the schema level. I believe the incremental delivery protocol offers an interesting avenue here. To address these patterns, GraphQL could support generic "resumable" operations.
I'm writing this up to see if there's broader interest in exploring this idea further -- if this seems promising, I can put together a more formal proposal and/or prototype a simple client/server implementation to demonstrate the concept.
**New directive: `@resumable`**
Let's imagine a schema like the following:
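A minimal sketch (type and field names here are just placeholders):

```graphql
# Sketch: a resumable field must accept a `resume: ResumeToken` argument.
directive @resumable on FIELD_DEFINITION | FIELD

scalar ResumeToken

type Post {
  id: ID!
  title: String!
}

type Query {
  posts(first: Int, resume: ResumeToken): [Post!]! @resumable
}
```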
With a corresponding query document that looks something like:
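(Sketch; the operation name and field selection are illustrative.)

```graphql
query GetPosts {
  # Sketch: the client opts in by adding @resumable to the field in the operation.
  posts(first: 10) @resumable {
    id
    title
  }
}
```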
The `@resumable` directive can be placed on any field definition, so long as the field takes a `resume: ResumeToken` argument. It can also be placed on any query field, so long as the corresponding field definition is marked as resumable. When both client and server have opted into resumable operations, the server will return additional metadata in the response, following a similar pattern to the `pending` response field on `@defer`/`@stream` responses.
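A first response might look roughly like this (the shape of the `resumable` entries is a sketch, loosely modeled on the `pending` entries from incremental delivery):

```jsonc
{
  "data": {
    "posts": [
      { "id": "post-1", "title": "First post" },
      { "id": "post-2", "title": "Second post" },
      // ... and so on through the tenth post ...
      { "id": "post-10", "title": "Tenth post" }
    ]
  },
  "resumable": [
    {
      "path": ["posts"],
      // Opaque cursor; here it might encode the ID of the last post returned.
      "token": "cG9zdC0xMA=="
    }
  ]
}
```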
This response indicates that there are additional results for the `posts` field that can be fetched by resuming the operation. If the server evaluates the query and does not find additional results, it returns a `"complete"` entry in `resumable` instead of pending. The `token` value is an opaque cursor that the server will use to determine how to resume the operation on a subsequent request. In the example above, the token might be the ID of the last post that was returned. The client can then make a subsequent request to the server to resume the operation.
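Exactly how the token travels back to the server is an open transport question; as one sketch, the client could re-send the operation with the token attached via `extensions`:

```jsonc
{
  "query": "query GetPosts { posts(first: 10) @resumable { id title } }",
  "extensions": {
    // Sketch only: echo back the token(s) from the previous response.
    "resume": [
      { "path": ["posts"], "token": "cG9zdC0xMA==" }
    ]
  }
}
```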
The server will receive this request and execute resolvers for the equivalent query of:
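(Continuing the sketch, with the token from the earlier response filled into the `resume` argument.)

```graphql
query GetPosts {
  # Sketch: the server injects the resume token as the `resume` argument.
  posts(first: 10, resume: "cG9zdC0xMA==") @resumable {
    id
    title
  }
}
```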
However, the results of the resolvers will be delivered in the `incremental` field of the response, mapped to the same path as the resumable field.
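Roughly like this (again, the exact envelope is a sketch loosely following the incremental delivery format):

```jsonc
{
  "incremental": [
    {
      "path": ["posts"],
      "items": [
        { "id": "post-11", "title": "Eleventh post" },
        { "id": "post-12", "title": "Twelfth post" }
      ]
    }
  ],
  "resumable": [
    {
      "path": ["posts"],
      // (Sketch) a fresh token so the client can resume again later.
      "token": "cG9zdC0xMg=="
    }
  ]
}
```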
The client can use the result of this follow-up request to append new items to the end of the previous result set, following the same mechanics as incremental delivery from `@defer`/`@stream`.

**Not Just for Pagination**
While the example above is probably the most common use case, the specific behavior of how to turn a `resume` cursor into incremental updates is left to the server implementation.

Let's imagine a very different scenario: a breaking news alerts system.
In this case, "resuming" the query doesn't mean fetching an additional page of existing results, but rather checking to see if there are any new alerts that have been created since the last time the client queried the server.
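For concreteness, imagine the client starts with a query along these lines (the `alerts` field and its shape are made up for this sketch):

```graphql
query BreakingNews {
  # Sketch: an alerts feed field that also opts into resumable delivery.
  alerts @resumable {
    id
    headline
  }
}
```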
For this use case, the `resumable` entry is never "complete", because there is always a chance that new alerts will be created. A subsequent response might look like this.
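Again a sketch, with `prependItems` being the new piece (explained below):

```jsonc
{
  "incremental": [
    {
      "path": ["alerts"],
      // New items go at the front of the existing list rather than the end.
      "prependItems": [
        { "id": "alert-204", "headline": "Example breaking headline" }
      ]
    }
  ],
  "resumable": [
    { "path": ["alerts"], "token": "YWxlcnQtMjA0" }
  ]
}
```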
You'll note that the `incremental` response includes a `prependItems` field. This is an addition to the `@defer`/`@stream` protocol to account for the fact that incremental data may modify existing data in more ways than just appending to the end of the list.

**Further Exploration Required**
There's significantly more thought required to turn this into a full specification; however, I was surprised at how well this concept seemed to fit with the existing `@defer`/`@stream` primitives. While there's a fair amount of "client smarts" needed to do this properly, it seems pretty tractable, since the actual updating of the data will be compatible with `@defer`/`@stream`.

I'd love to hear early reactions from this group to know if this is something worth my time to pursue further.
Thanks all!