Based on Greg's answers to my questions, I think the following will work best:
First you'll need something to wrap your open file so that it limits how much data can be read:
    class FileLimiter(object):
        def __init__(self, file_obj, read_limit):
            self.read_limit = read_limit
            self.amount_seen = 0
            self.file_obj = file_obj
            # So that requests doesn't try to chunk the upload but will instead stream it:
            self.len = read_limit

        def read(self, amount=-1):
            if self.amount_seen >= self.read_limit:
                return b''
            remaining_amount = self.read_limit - self.amount_seen
            # A negative amount means "read to EOF", so cap it at the remaining
            # budget; min(amount, remaining_amount) would read past the limit here.
            if amount < 0 or amount > remaining_amount:
                amount = remaining_amount
            data = self.file_obj.read(amount)
            self.amount_seen += len(data)
            return data
This should work reasonably well as a wrapper object. Then you would use it like so:
    with open('my_large_file', 'rb') as file_obj:
        file_obj.seek(my_offset)
        upload = FileLimiter(file_obj, my_chunk_limit)
        r = requests.post(url, data=upload, headers={'Content-Type': 'application/octet-stream'})
The headers are obviously optional, but when streaming data to a server, it's a good idea to be a considerate user and tell the server what type of content you're sending.
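To upload a large file in several sequential chunks, you can wrap the same open file in a fresh FileLimiter per chunk. A quick way to sanity-check that behavior without a server is an in-memory buffer; the class is repeated here (with the negative read amount capped) so the snippet runs standalone:

```python
import io

class FileLimiter(object):
    def __init__(self, file_obj, read_limit):
        self.read_limit = read_limit
        self.amount_seen = 0
        self.file_obj = file_obj
        self.len = read_limit  # lets requests stream instead of chunking

    def read(self, amount=-1):
        if self.amount_seen >= self.read_limit:
            return b''
        remaining = self.read_limit - self.amount_seen
        if amount < 0 or amount > remaining:
            amount = remaining
        data = self.file_obj.read(amount)
        self.amount_seen += len(data)
        return data

# Simulate a 100-byte "file" and read it out in 30-byte chunks.
buf = io.BytesIO(b'x' * 100)
chunks = []
while True:
    limiter = FileLimiter(buf, 30)
    chunk = limiter.read()  # each limiter yields at most 30 bytes
    if not chunk:
        break
    chunks.append(chunk)

print([len(c) for c in chunks])  # [30, 30, 30, 10]
```

Each FileLimiter picks up where the underlying file object's position left off, so consecutive wrappers naturally produce consecutive chunks.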
Answer from Ian Stapleton Cordasco on Stack Overflow: "python - requests - how to stream upload - partial file"
Can you upload large tarballs using Python3 requests by chunking the file?
Requests - How to upload large chunks of file in requests? - Stack Overflow
python - How do I upload files to the server using the drf-chunked-upload library? - Stack Overflow
I'm just throwing two other answers together, so bear with me if it doesn't work out of the box; I have no means of testing this:
Lazy Method for Reading Big File in Python?
http://docs.python-requests.org/en/latest/user/advanced/#chunk-encoded-requests
    def read_in_chunks(file_object, blocksize=1024, chunks=-1):
        """Lazy function (generator) to read a file piece by piece.
        Default chunk size: 1k."""
        while chunks:
            data = file_object.read(blocksize)
            if not data:
                break
            yield data
            chunks -= 1
    with open('my_large_file', 'rb') as f:
        requests.post('http://some.url/chunked', data=read_in_chunks(f))
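The chunks argument caps how many blocks the generator yields, which makes it easy to send a file one slice at a time. A small demonstration using an in-memory buffer in place of a real file (the generator is repeated here so the snippet runs on its own):

```python
import io

def read_in_chunks(file_object, blocksize=1024, chunks=-1):
    """Lazy generator: yield up to `chunks` blocks of `blocksize` bytes."""
    while chunks:
        data = file_object.read(blocksize)
        if not data:
            break
        yield data
        chunks -= 1

buf = io.BytesIO(b'a' * 2500)
# First slice: at most 2 blocks of 1024 bytes.
first = list(read_in_chunks(buf, blocksize=1024, chunks=2))
# Remaining data: unlimited blocks until EOF.
rest = list(read_in_chunks(buf, blocksize=1024))
print([len(b) for b in first], [len(b) for b in rest])  # [1024, 1024] [452]
```

Because the generator reads from the file object's current position, a second call simply resumes where the first slice stopped.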
» pip install drf-chunked-upload
» pip install django-chunked-upload
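With the chunked-upload libraries above, the client typically sends each chunk in a PUT carrying a Content-Range header and then finalizes the upload in a separate request. The exact endpoint, field names, and completion step are defined by the library, so treat this as a sketch of the header arithmetic only, not the library's API:

```python
def content_range(offset, chunk_len, total):
    """Build an HTTP Content-Range header value for one chunk."""
    return 'bytes {}-{}/{}'.format(offset, offset + chunk_len - 1, total)

def plan_chunks(total, chunk_size):
    """Yield (offset, length) pairs covering `total` bytes."""
    offset = 0
    while offset < total:
        length = min(chunk_size, total - offset)
        yield offset, length
        offset += length

# Example: a 2500-byte file sent in 1024-byte chunks.
headers = [content_range(o, l, 2500) for o, l in plan_chunks(2500, 1024)]
print(headers)
# ['bytes 0-1023/2500', 'bytes 1024-2047/2500', 'bytes 2048-2499/2500']
```

In drf-chunked-upload's documented flow, each (offset, length) pair would become one PUT with that header, followed by a completing request once all chunks are sent; consult the library's README for the exact contract your server version expects.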