Fix TokenBuffer._copyBufferValue floating point values #2982
A prior change regressed handling of floating-point values inside polymorphic types. The TokenBuffer could hold values as BigDecimal objects, which are heavy on the heap but, critically, also could not be used to deserialize to a less precise target such as double. This is the inverse of the problem in #2644, where not enough precision was retained, and this change reintroduces precision loss while buffering floating-point contents.
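To illustrate the coercion in question, here is a stdlib-only sketch (not Jackson code) of what narrowing a buffered BigDecimal to a double target involves: `BigDecimal.doubleValue()` rounds to the nearest representable double, which is the behavior a less precise target expects.

```java
import java.math.BigDecimal;

public class CoercionSketch {
    public static void main(String[] args) {
        // A value buffered as BigDecimal retains full decimal precision,
        // including digits a double cannot represent.
        BigDecimal buffered = new BigDecimal("0.1000000000000000055511151231257827");

        // Coercing to double rounds to the nearest representable value;
        // here that is the same double the literal 0.1 produces.
        double coerced = buffered.doubleValue();
        System.out.println(coerced == 0.1d);
    }
}
```

This is the narrowing a double-typed target needs; the regression described above is that the buffered BigDecimal could not be coerced this way at all.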
Alternative to #2978; however, due to the way floating-point values work, implicit coercion from BigDecimal to double should still be supported, as proposed in the other PR.
The underlying issue appears to be that we attempt to map encoded floating-point numbers onto Java types that aren't an exact match. Perhaps the TokenBuffer should support an alternative encoding that preserves the raw input value? That would be difficult and format-specific: CBOR floating-point values wouldn't be stored in a char buffer, for example, while JSON values might be.
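As a rough sketch of the raw-value-preserving idea, one could imagine a buffered entry that keeps the original text when the source format has one (JSON) and falls back to the parsed number otherwise (CBOR). This is hypothetical, not Jackson API; the class and method names below are invented for illustration.

```java
import java.math.BigDecimal;

// Hypothetical buffered number (not part of Jackson): retains the raw
// textual representation when available, so a later read can re-parse it
// at whatever precision the target type requires.
final class BufferedNumber {
    private final String rawText; // null for binary formats such as CBOR
    private final Number parsed;  // fallback when no raw text exists

    BufferedNumber(String rawText, Number parsed) {
        this.rawText = rawText;
        this.parsed = parsed;
    }

    // A double target re-parses the raw text, losing only the precision
    // a double inherently cannot hold.
    double asDouble() {
        return rawText != null ? Double.parseDouble(rawText)
                               : parsed.doubleValue();
    }

    // A BigDecimal target keeps every digit of the raw text.
    BigDecimal asBigDecimal() {
        return rawText != null ? new BigDecimal(rawText)
                               : new BigDecimal(parsed.toString());
    }
}
```

The cost hinted at above is visible even in this sketch: each buffered number may carry a string, and every format backend would need to decide what "raw" means for its encoding.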