I'm using smart_open to write a big file directly into S3, and for that it has been working great. Thank you for the great package!
The issue is that sometimes the Python process gets killed and I'm left with an incomplete multi-part upload. At the moment I have an S3 bucket lifecycle rule that cleans these up after a day, but ideally I'd like to resume the upload instead of starting from scratch, making the most of multi-part upload and saving some compute & storage.
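For reference, the cleanup rule I mean is roughly the following (a boto3 sketch; the bucket name is a placeholder):

```python
import boto3

s3 = boto3.client("s3")

# Abort (and free the storage of) any multipart upload that has been
# in progress for more than a day.
s3.put_bucket_lifecycle_configuration(
    Bucket="my-bucket",  # placeholder
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "abort-stale-multipart-uploads",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},
                "AbortIncompleteMultipartUpload": {"DaysAfterInitiation": 1},
            }
        ]
    },
)
```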
After a quick glance at MultipartWriter.__init__, this looks impossible, as self._client.create_multipart_upload is always invoked. I wanted to gauge how big an effort it would be to support such a scenario if I were somehow able to supply an UploadId to be used.
> The issue is that sometimes the Python process gets killed and I'm left with an incomplete multi-part upload.
FWIW, if the process receives SIGTERM, the with-statement will exit with an exception and thus the terminate() method will be called to abort the multipart upload. So I guess we're only talking about SIGKILL here (e.g. Kubernetes/Docker killing a process to prevent container PID 1 from going out of memory).
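Worth noting: plain CPython terminates on SIGTERM without unwinding the stack unless a handler is installed, so this relies on the application turning the signal into an exception. A minimal sketch of such a handler (this would be the application's job, not smart_open's):

```python
import signal

# Assumption: the application installs this itself; smart_open does not.
# Raising SystemExit from the handler lets the with-block unwind, so the
# S3 writer's __exit__ can abort the multipart upload cleanly.
def _raise_on_sigterm(signum, frame):
    raise SystemExit(128 + signum)

signal.signal(signal.SIGTERM, _raise_on_sigterm)
```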
From my experience, recovering failed uploads is too much of a pain in the arse. It is far simpler to restart the upload from scratch. Of course, it's a waste of CPU time and bandwidth, but it's better than a waste of developer and maintainer effort.
That said, if you did want to recover a failed upload, you'd need to reconstruct the following:
1. The UploadId
2. The IDs of successfully uploaded parts
3. The position in the input byte stream from which to resume (so what got uploaded successfully, minus what was in the internal buffers at the time of the crash)
Of these, 3 is the trickiest, because it's not explicitly exposed anywhere. You can't infer 3 from 2 because the uploaded parts may be compressed differently to the input stream. Theoretically, you could have smart_open dump this information periodically to e.g. /tmp/smart_open/uploadid.json, as it uploads each part. If you're interested in a PR to do this, I'm open to that.
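To make that concrete, here is a rough sketch of such a checkpoint dump. Nothing in smart_open writes this today; the file layout and field names are invented purely for illustration:

```python
import json
import os

# Hypothetical checkpoint format -- nothing in smart_open writes this today,
# and the field names are invented for illustration.  "parts" holds the
# {"PartNumber": ..., "ETag": ...} dicts that complete_multipart_upload
# expects, and "input_offset" is how many bytes of the original (uncompressed)
# input stream were fully flushed into those parts.
def save_checkpoint(path, upload_id, parts, input_offset):
    tmp = path + ".tmp"
    with open(tmp, "w") as f:
        json.dump(
            {"upload_id": upload_id, "parts": parts, "input_offset": input_offset},
            f,
        )
    os.replace(tmp, path)  # atomic rename, so a crash never leaves a torn checkpoint
```

Note that 1 and 2 can in principle also be recovered from S3 itself via list_multipart_uploads and list_parts; it's really only 3 that needs a local checkpoint.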
The somewhat ugly part would be implementing resume functionality as part of the API. How would this look in Python code?
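One purely hypothetical shape, just to make the question concrete (the "resume" transport parameter does not exist in smart_open today), would be to pass the recovered state through transport_params:

```python
import boto3
from smart_open import open

# Hypothetical only: smart_open has no "resume" transport parameter.
checkpoint = {
    "upload_id": "recovered-upload-id",                # from the checkpoint file
    "parts": [{"PartNumber": 1, "ETag": '"etag-1"'}],  # parts already in S3
    "input_offset": 50 * 1024 * 1024,                  # bytes of input already consumed
}

tp = {"client": boto3.client("s3"), "resume": checkpoint}
with open("s3://my-bucket/big-file.bin", "wb", transport_params=tp) as fout:
    # The caller would seek/skip the source to checkpoint["input_offset"]
    # and write only the remaining bytes here.
    pass
```

Internally, MultipartWriter.__init__ would then skip create_multipart_upload, seed its list of completed parts from the checkpoint, and continue part numbering from the last recorded part; the caller would be responsible for positioning the input source at input_offset before writing.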