I am trying to copy an OS image file, image.raw.gz, from a web server (http://server/image.raw.gz) to an uncompressed destination in S3, image.raw (s3://bucket/folder/image.raw).
The process seems to work, but it consumes a lot of memory.
How can I make it use less memory and perform the copy "on the fly"?
I tried declaring a buffer of 1024 bytes, but that didn't help.
The function looks like this:
import os
from smart_open import open  # smart_open's open handles http:// and s3:// URLs

def copy_files_to_bucket(file_urls, bucket, prefix):
    for file_url in file_urls:
        file = os.path.basename(file_url)
        print("copy " + file_url + " to " + bucket + prefix + file)
        with open(file_url, 'rb', transport_params=dict(buffer_size=1024)) as fin:
            with open("s3://" + bucket + "/" + prefix + "image.raw", 'wb',
                      transport_params=dict(buffer_size=1024)) as fout:
                for line in fin:
                    fout.write(line)
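Would something like the sketch below be a better approach? It is only a guess on my part: I'm assuming smart_open decompresses the .gz on read based on the file extension, that the S3 writer's in-memory part buffer is controlled by a min_part_size transport parameter (the name may differ between versions), and that copying fixed-size chunks avoids the problem that a raw disk image contains almost no newlines, so iterating it "by line" can load huge blocks at once. The stream_gz_to_s3 helper and its arguments are purely illustrative.

import shutil
from smart_open import open  # not the builtin open

def stream_gz_to_s3(src_url, dst_url, chunk_size=1024 * 1024):
    # Reading: smart_open decompresses *.gz transparently based on the URL
    # extension, so fin should yield the uncompressed bytes of image.raw.
    with open(src_url, 'rb') as fin:
        # Writing to S3 goes through a multipart upload; each part is held in
        # memory before it is flushed. min_part_size (assumed transport
        # parameter) bounds that buffer; AWS requires parts of at least 5 MiB.
        with open(dst_url, 'wb',
                  transport_params={'min_part_size': 5 * 1024 * 1024}) as fout:
            # Copy fixed-size chunks instead of "lines", so memory use stays
            # at roughly chunk_size regardless of the image contents.
            shutil.copyfileobj(fin, fout, length=chunk_size)

# e.g. stream_gz_to_s3("http://server/image.raw.gz", "s3://bucket/folder/image.raw")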