
Thanos compact leaves partial blocks on S3 when doing deletes #7840

Open
grimz-ly opened this issue Oct 17, 2024 · 4 comments

Comments


grimz-ly commented Oct 17, 2024

Thanos, Prometheus and Golang version used:

Thanos: v0.36.1
Prometheus: v2.54.1 (using sidecar)
Golang: go1.22.6

Object Storage Provider: S3

We've been leveraging Thanos for our long-term metrics for about 13 months now. One thing I've noticed, but never addressed, is the partial blocks count constantly increasing as time goes on. I didn't notice any major upload issues from our sidecar instances and eventually figured out the cause was deleted blocks. Every deleted block leaves behind its 'folder' object (e.g. 01JA5SKZ82CJNVMHAK6ZR9RE6D) and a chunks 'subfolder' object.

No obvious issues with compaction, downsampling, or retention. Here is a recent log line showing this:

compactor | ts=2024-10-17T17:40:47.800815487Z caller=fetcher.go:623 level=info component=block.BaseFetcher msg="successfully synchronized block metadata" duration=7.532362539s duration_ms=7532 cached=758 returned=758 partial=24872

What you expected to happen:

I'd expect Thanos to also remove these directory objects, so it doesn't output thousands of info lines like this in the Docker logs every 5m:

compactor | ts=2024-10-17T17:45:47.116627349Z caller=clean.go:59 level=info msg="deleted aborted partial upload" block=01J9T5X3SP0ANNJH80RK7J1DQ2 thresholdAge=48h0m0s

compactor | ts=2024-10-17T17:45:47.116681196Z caller=clean.go:49 level=info msg="found partially uploaded block; marking for deletion" block=01J9KQYSSANGK37W371JWDZKTR

compactor | ts=2024-10-17T17:45:47.121624247Z caller=clean.go:59 level=info msg="deleted aborted partial upload" block=01J9KQYSSANGK37W371JWDZKTR thresholdAge=48h0m0s

compactor | ts=2024-10-17T17:45:47.121679123Z caller=clean.go:49 level=info msg="found partially uploaded block; marking for deletion" block=01J9W3BY9STJXSX54TTX273NXM

compactor | ts=2024-10-17T17:45:47.126779616Z caller=clean.go:59 level=info msg="deleted aborted partial upload" block=01J9W3BY9STJXSX54TTX273NXM thresholdAge=48h0m0s

compactor | ts=2024-10-17T17:45:47.126802449Z caller=clean.go:49 level=info msg="found partially uploaded block; marking for deletion" block=01J9YGRC91YCZES4JV4758NAMF

Additional Notes:

We send metrics to two primary S3 buckets. I've cleaned one of the two using the MinIO client (mc) by:

  1. getting a list of all blocks at the top level
  2. iterating through the list and performing an rm on each, like so:

for i in $(cat /tmp/bucket-ls); do docker-compose exec mc mc rm thanos/store-metrics/$i; done

This effectively removes the empty 'folders' and leaves behind those that still contain objects, because a basic (non-recursive) rm cannot delete a 'folder' that still has contents.

Is this normal behavior? If so, could an extra step be added to the delete code to perform another rm on the empty block dir object?
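If such an extra step were added, the logic might look like the sketch below. This is not actual Thanos code: the function name is made up, and the convention of representing 'folders' as zero-byte keys ending in '/' is an assumption (common for MinIO and console-created prefixes).

```python
def leftover_dir_markers(keys, block_id):
    """Return the zero-byte 'directory marker' keys still present for a
    block after its data objects were deleted.

    Assumes the store models folders as keys ending in '/'.
    """
    prefix = block_id + "/"
    return sorted(k for k in keys if k.startswith(prefix) and k.endswith("/"))
```

A cleanup pass could then issue one object-delete call per returned key to remove the empty block dir objects.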


dosubot bot commented Oct 17, 2024

Hey there, @grimz-ly! I'm here to assist you with any bugs, questions, or contributions. Let's tackle this issue together!

I found a similar closed issue that might be relevant to your situation: compact: Thanos Compactor doesn't delete blocks which are marked for deletion. The issue was resolved by setting a lower value for the --delete-delay flag. This flag controls the delay before directories inside object storage without meta.json are deleted, which might help in managing the partial blocks you're encountering.

Thanos determines which blocks are considered 'partial' and should be deleted based on the absence of a fully loadable meta.json file after a certain period, typically 15 minutes, configured by syncDelay. If the meta.json is missing or malformed after this period, the block is assumed to be partially uploaded and can be removed by the compactor [1][2].
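For illustration, that detection condition can be sketched as a pure function over a flat key listing. This is a simplified model, not Thanos's actual fetcher code; the function name is invented, and the sync-delay grace period is ignored.

```python
from collections import defaultdict

def partial_blocks(keys):
    """Return top-level block IDs that have objects in the bucket but no
    meta.json -- roughly the condition under which a block is counted
    as 'partial'."""
    blocks = defaultdict(set)
    for key in keys:
        block, _, rest = key.partition("/")
        blocks[block].add(rest)
    return sorted(b for b, objs in blocks.items() if "meta.json" not in objs)
```

Under this model, a block whose only remaining objects are empty 'folder' markers would keep being reported as partial forever, matching the behavior described in this issue.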

Additionally, Thanos provides logging and metrics related to the deletion of partial blocks. The BestEffortCleanAbortedPartialUploads function attempts to clean up aborted partial uploads and logs the attempts, successes, and failures [3][1].

You might want to check your configuration for the --delete-delay and ensure that it aligns with your expectations for block deletion. Adjusting this setting could help in automatically cleaning up the partial blocks without manual intervention.

To continue talking to Dosu, mention @dosu.

grimz-ly (Author) commented:

@dosu This is not the case. As I mentioned, the block files are getting deleted once the --delete-delay time is reached; it's just the block folders that are left behind.

Here are the command options used in our Docker Compose service config:

      - compact
      - --data-dir=/data/compact
      - --log.level=info
      - --objstore.config-file=/bucket.yml
      - --http-address=0.0.0.0:10922
      - --disable-admin-operations
      - --web.external-prefix=/
      - --web.route-prefix=/
      - --consistency-delay=30m
      - --retention.resolution-raw=400d
      - --retention.resolution-5m=400d
      - --retention.resolution-1h=750d
      - --block-files-concurrency=8
      - --compact.concurrency=8
      - --downsample.concurrency=4
      - --wait


dosubot bot commented Oct 18, 2024

The issue you're experiencing, where block folders are left behind after the block files are deleted, might be related to how Thanos handles the deletion process. Thanos uses the Delete function to remove block directories, ensuring that all objects prefixed with the directory are deleted from the bucket. This includes using the deleteDirRec function to recursively delete files and directories, which should also remove empty directories [1].

If the block files are being deleted but the folders remain, it could be due to a specific condition or error in the deletion process that prevents the deleteDirRec function from completing its task. You might want to check the logs for any errors or warnings related to the deletion process that could provide more insight into why the folders are not being removed.

Additionally, ensure that there are no permissions or configuration issues with your S3 bucket that might prevent the deletion of empty directories. If everything seems correct and the issue persists, it might be worth investigating further into the specific implementation of the deleteDirRec function and any related code paths that handle directory deletion.

To continue talking to Dosu, mention @dosu.

grimz-ly (Author) commented:

Some additional information.

I copied a block marked for deletion to test with. A recursive rm does delete all of the objects under /<block_id> as well as /<block_id>/chunks.

But this leaves the 'folders' behind. Going back with a plain rm cleans these up, as I noted above. Compact sees these as partial blocks, and it's rather annoying seeing the output in the Docker logs on the 5m cleanup intervals once the count gets large.
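The two-pass manual cleanup described here (a recursive rm for the data objects, then a plain rm for the emptied 'folders') can be modelled over an in-memory key set. This is a sketch, not MinIO client code; a real implementation would issue delete calls against the bucket instead of filtering a set.

```python
def delete_block(keys, block_id):
    """Simulate the manual cleanup: pass 1 removes data objects under the
    block prefix (recursive rm); pass 2 removes the now-empty directory
    marker keys, modelled as keys ending in '/' (plain rm)."""
    prefix = block_id + "/"
    # Pass 1: drop data objects (keys under the prefix not ending in '/').
    remaining = {k for k in keys
                 if not (k.startswith(prefix) and not k.endswith("/"))}
    # Pass 2: drop the directory markers left by pass 1.
    remaining = {k for k in remaining
                 if not (k.startswith(prefix) and k.endswith("/"))}
    return remaining
```

Without the second pass, the marker keys survive and the block keeps showing up as partial, which is the behavior this issue reports.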
