[PECO-1532] Ignore the excess records in query results #239
PECO-1532
Note for reviewers: this PR contains some refactoring needed to implement the fix, so it's probably easier to review it commit by commit.
When the client library executes a query and requests Arrow-based or CloudFetch results, the server returns records as Arrow batches. Batch size may vary; the server decides it based on the number of records, record size, etc. Usually all batches have the same size, with the only exception of the last batch, which typically contains fewer records. There are two possibilities:

1. The server returns a smaller last batch that contains only the remaining records.
2. The server returns a last batch of the regular size, padded with extra records that are not part of the result.

In either case, each batch carries a `rowCount` field which defines how many "valid" records are in the batch; the client should use only those records and discard the remaining ones. (I guess that different workspaces may be configured differently and will behave as described in either scenario 1 or scenario 2.)
The Node.js connector doesn't use the value of `rowCount` and therefore returns those extra records to the user. This behavior is wrong, and this PR fixes it.
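For illustration, here is a minimal sketch of the trimming logic, assuming the batch arrives as an Arrow IPC buffer and `rowCount` comes from the server's batch metadata. The helper name and the direct use of the `apache-arrow` package are assumptions made for this example, not the exact code in this PR:

```typescript
import { Table, tableFromIPC } from 'apache-arrow';

// Hypothetical helper: deserialize one Arrow IPC batch and keep only the
// first `rowCount` records, discarding any padding rows the server added.
function trimBatch(ipcBuffer: Uint8Array, rowCount: number): Table {
  const table = tableFromIPC(ipcBuffer);
  // Scenario 1: the batch already contains exactly `rowCount` rows - nothing to do.
  // Scenario 2: the batch is padded - slice off the excess rows.
  return table.numRows > rowCount ? table.slice(0, rowCount) : table;
}
```

Applying this per batch means the concatenated result contains exactly the number of rows the server reported as valid, regardless of which scenario the workspace uses.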