https://aws.amazon.com/s3/storage-classes/express-one-zone/ is a new type of S3 bucket that offers better performance and lower cost for some use cases. It's mostly compatible with the regular S3 API, with a few caveats:
This PR fixes both by:
Let me know if I missed anything in the PR, or if there's anything you'd like changed.
I did change gradle/verification-metadata.xml by running

```
gradle xyz --write-verification-metadata sha256
```

but I'm not sure that was the right thing to do; I may not have studied the Gradle manual well enough. (I believe the update is legitimate and caused by bumping the AWS SDK version in libs.versions.toml.)

Lastly, I wanted to ask about error handling in the S3 cache and the rationale behind the current implementation, which logs cache load/store errors (essentially failing silently) instead of throwing BuildCacheException (or other exception classes) and relying on Gradle to fail the build or disable the cache as needed.
I'm referring to the following parts of Gradle's default HTTP cache implementation:
Let me know if this should be a separate issue, and/or whether contributions are welcome, but I was wondering if it would be OK to switch to using those. (I'd prefer explicit errors when storing objects fails for, say, auth reasons, but the same error-handling pattern is present in the GCP plugin, so I'm not sure if there's a good reason behind it and whether you'd be fine with S3 handling it differently than GCP.)
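To make the contrast concrete, here's a rough, self-contained sketch of the two styles. All types and method names below are hypothetical stand-ins for illustration; only `BuildCacheException` mirrors Gradle's real `org.gradle.caching.BuildCacheException`, and `S3Exception` stands in for whatever the AWS SDK actually throws:

```java
// Local stand-in for org.gradle.caching.BuildCacheException.
class BuildCacheException extends RuntimeException {
    BuildCacheException(String message, Throwable cause) {
        super(message, cause);
    }
}

// Hypothetical stand-in for an AWS SDK client error.
class S3Exception extends RuntimeException {
    S3Exception(String message) { super(message); }
}

public class CacheErrorHandlingSketch {

    // Current style: log the failure and swallow it, so the
    // build continues and nothing upstream can react.
    static void storeLogging(Runnable s3Put) {
        try {
            s3Put.run();
        } catch (S3Exception e) {
            System.err.println("Failed to store cache entry: " + e.getMessage());
        }
    }

    // Alternative style: wrap the failure in BuildCacheException and
    // let Gradle decide whether to fail the build or disable the cache.
    static void storeThrowing(Runnable s3Put) {
        try {
            s3Put.run();
        } catch (S3Exception e) {
            throw new BuildCacheException("Failed to store cache entry", e);
        }
    }

    public static void main(String[] args) {
        Runnable failingPut = () -> { throw new S3Exception("403 Forbidden"); };

        storeLogging(failingPut); // only logs; the caller never sees the error

        try {
            storeThrowing(failingPut);
        } catch (BuildCacheException e) {
            // In the real plugin, Gradle's caching infrastructure would
            // catch this and surface/handle it as configured.
            System.out.println("caught: " + e.getMessage());
        }
    }
}
```

This is only meant to illustrate the pattern I'm asking about, not the actual plugin code.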