It may be because you use multipart/form-data to upload the file. Try sending the raw data instead, like the code below:
import requests

with open(localFilePath, 'rb') as f:
    data = f.read()
headers = {
    "Content-Type": "application/binary",
}
upload = requests.put(uploadUrl, data=data, headers=headers)
Answer from shutup on Stack Overflow
Basically what you do is correct. Looking at the redmine docs you linked to, the suffix after the dot in the URL denotes the type of posted data (.json for JSON, .xml for XML), which agrees with the response you get - Processing by AttachmentsController#upload as XML. There may be a bug in the docs, though; to post binary data, try using the http://redmine/uploads URL instead of http://redmine/uploads.xml.
Btw, I highly recommend the very good and very popular Requests library for HTTP in Python. It's much better than what's in the standard library (urllib2). It supports authentication as well, but I skipped it for brevity here.
import requests
with open('./x.png', 'rb') as f:
    data = f.read()
res = requests.post(url='http://httpbin.org/post',
                    data=data,
                    headers={'Content-Type': 'application/octet-stream'})
# let's check if what we sent is what we intended to send...
import json
import base64
assert base64.b64decode(res.json()['data'][len('data:application/octet-stream;base64,'):]) == data
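Since authentication was skipped above for brevity: with Requests it is a one-liner via the auth= parameter. A minimal sketch (the URL and credentials are placeholders); building a prepared request shows the header Requests would send, without touching the network:

```python
import requests

# Placeholders: httpbin URL, alice/secret credentials.
req = requests.Request(
    'POST',
    'http://httpbin.org/post',
    data=b'binary payload',
    headers={'Content-Type': 'application/octet-stream'},
    auth=('alice', 'secret'),  # basic-auth (user, password) tuple
)
prepared = req.prepare()
# Requests folds the tuple into a standard Authorization header for us.
print(prepared.headers['Authorization'])
```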
UPDATE
To find out why this works with Requests but not with urllib2 we have to examine the difference in what's being sent. To see this I'm sending traffic to http proxy (Fiddler) running on port 8888:
Using Requests
import requests
data = 'test data'
res = requests.post(url='http://localhost:8888',
                    data=data,
                    headers={'Content-Type': 'application/octet-stream'})
we see
POST http://localhost:8888/ HTTP/1.1
Host: localhost:8888
Content-Length: 9
Content-Type: application/octet-stream
Accept-Encoding: gzip, deflate, compress
Accept: */*
User-Agent: python-requests/1.0.4 CPython/2.7.3 Windows/Vista
test data
and using urllib2
import urllib2
data = 'test data'
req = urllib2.Request('http://localhost:8888', data)
req.add_header('Content-Length', '%d' % len(data))
req.add_header('Content-Type', 'application/octet-stream')
res = urllib2.urlopen(req)
we get
POST http://localhost:8888/ HTTP/1.1
Accept-Encoding: identity
Content-Length: 9
Host: localhost:8888
Content-Type: application/octet-stream
Connection: close
User-Agent: Python-urllib/2.7
test data
I don't see any differences that would warrant the different behavior you observe. Having said that, it's not uncommon for HTTP servers to inspect the User-Agent header and vary behavior based on its value. Try changing the headers sent by Requests one by one, making them the same as those sent by urllib2, and see when it stops working.
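That bisection can be scripted. A sketch using the header values from the two traces above (prepared requests only, so nothing is actually sent here; a real run would send each one and check the server's response):

```python
import requests

# urllib2's values for headers that differed between the two traces above
urllib2_headers = {
    'Accept-Encoding': 'identity',
    'User-Agent': 'Python-urllib/2.7',
}

tried = []
for name, value in urllib2_headers.items():
    # start from the Requests version, swap in one urllib2 header at a time
    headers = {'Content-Type': 'application/octet-stream', name: value}
    req = requests.Request('POST', 'http://localhost:8888',
                           data='test data', headers=headers).prepare()
    # a real bisection would send each variant:
    #   requests.Session().send(req)
    tried.append((name, req.headers[name]))
```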
This has nothing to do with a malformed upload. The HTTP error clearly specifies 401 Unauthorized, and tells you the CSRF token is invalid. Try sending a valid CSRF token with the upload.
More about csrf tokens here:
What is a CSRF token ? What is its importance and how does it work?
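How the token is obtained and where it goes is framework-specific; as a hedged sketch, many apps accept it in an X-CSRF-Token header (the header name is an assumption here, the token value is a stand-in for one scraped from a prior authenticated GET, and the URL is a placeholder). Preparing the request shows where the token ends up:

```python
import requests

# Stand-in for a token normally extracted from a login page or cookie.
token = 'token-from-login-page'

req = requests.Request(
    'POST', 'http://example.com/upload',  # placeholder URL
    data=b'file bytes',
    headers={'Content-Type': 'application/octet-stream',
             'X-CSRF-Token': token},      # header name varies by framework
).prepare()
print(req.headers['X-CSRF-Token'])
```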
I actually just fought through this issue. I had to close the file that was being written before I opened it to upload to the FTP server.
out2 = open('file.csv', 'w')
for r1 in cursor:
    out2.write(str(r1))
out2.close()

ftp_census = file_loc
stor_census = "STOR egocensus_" + demoFileDate + ".csv"
fc = open(ftp_census, 'rb')
ftp.storbinary(stor_census, fc, 1024)
fc.close()
Once I closed the file, the file size on the FTP server was correct. I probably could code this better, but it is working.
Several hard tests were made before I understood why. First, this is not a bug in Python. The file being transferred hasn't been flushed from memory to disk after this:
f.retrbinary('RETR ' + filename, filehandler.write, bufsize)
After doing this, the file handler is not closed explicitly; retrbinary doesn't close it, so storing the file immediately will lose some bytes that are still in memory.
So if we close the file handler explicitly after the transfer, like this:
f.retrbinary('RETR ' + filename, filehandler.write, bufsize)
filehandler.close()
then we get all the bytes. For more detail, see: http://blog.csdn.net/hongchangfirst
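The byte loss can be reproduced locally without FTP at all: Python buffers writes, so a file read back before close() (or flush()) comes up short. A minimal sketch:

```python
import os
import tempfile

tmpdir = tempfile.mkdtemp()
path = os.path.join(tmpdir, 'transfer.bin')

fh = open(path, 'wb')
fh.write(b'x' * 100)             # small write: stays in Python's buffer
size_before_close = os.path.getsize(path)  # what an immediate STOR would see
fh.close()                       # flush + close, as in the fixed snippet
size_after_close = os.path.getsize(path)
print(size_before_close, size_after_close)
```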
The issue is not with the command argument, but with the file object. Since you're storing binary, you need to open the file with the 'rb' flag:
>>> ftp.storbinary('STOR myfile.txt', open('myfile.txt', 'rb'))
'226 File receive OK.'
APPEND to file in FTP.
Note: it's not SFTP - FTP only
import ftplib
ftp = ftplib.FTP('localhost')
ftp.login('user', 'password')
fin = open('foo.txt', 'rb')  # open binary, since we use storbinary
ftp.storbinary('APPE foo2.txt', fin, 1)
Ref: Thanks to Noah
The earlier posts didn't really help, but I figured it out by referring to the original Requests doc on streaming uploads. Check the doc here.
My file was huge, around 1.9 GB, so the session was breaking in the middle of the upload, giving an "Internal error".
Since it's a huge file, I streamed the upload by providing a file-like object in my function:
def upload_code():
    jsonheaderup = {'Content-Type': 'application/octet-stream'}
    with open('/root/ak-nas-2013-06-05-7-18-1-1-3-nd.pkg.gz', 'rb') as file:
        requests.post("%s/api/system/v1/updates" % (url), data=file, auth=zfsauth, verify=False, headers=jsonheaderup, timeout=None)
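A variation on the same streaming idea: requests also accepts a generator for data=, which sends the body with chunked transfer encoding and lets you control the chunk size yourself. The chunking helper below is checked against an in-memory stand-in file; the commented-out POST sketches how it would plug into the function above (url, zfsauth, jsonheaderup as defined there):

```python
import io

def read_in_chunks(fileobj, chunk_size=64 * 1024):
    """Yield successive chunks from a file-like object."""
    while True:
        chunk = fileobj.read(chunk_size)
        if not chunk:
            return
        yield chunk

# Offline check of the chunking logic with an in-memory stand-in file.
fake_file = io.BytesIO(b'a' * (3 * 1024))
chunks = list(read_in_chunks(fake_file, chunk_size=1024))
print(len(chunks), sum(len(c) for c in chunks))

# The real upload would look something like:
# requests.post("%s/api/system/v1/updates" % url,
#               data=read_in_chunks(open(path, 'rb')),
#               auth=zfsauth, verify=False, headers=jsonheaderup)
```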
At first glance, I cannot see any mistake.
Did you see this: Python 3 script to upload a file to a REST URL (multipart request) ?
import io # Part of core Python
import requests # Install via: 'pip install requests'
# Get the data in bytes. I got it via:
# with open("smile.gif", "rb") as fp: data = fp.read()
data = b"GIF89a\x12\x00\x12\x00\x80\x01\x00\x00\x00\x00\xff\x00\x00!\xf9\x04\x01\n\x00\x01\x00,\x00\x00\x00\x00\x12\x00\x12\x00\x00\x024\x84\x8f\x10\xcba\x8b\xd8ko6\xa8\xa0\xb3Wo\xde9X\x18*\x15x\x99\xd9'\xad\x1b\xe5r(9\xd2\x9d\xe9\xdd\xde\xea\xe6<,\xa3\xd8r\xb5[\xaf\x05\x1b~\x8e\x81\x02\x00;"
# Hookbin is similar to requestbin - just a neat little service
# which helps you to understand which queries are sent
url = 'https://hookb.in/je2rEl733Yt9dlMMdodB'
in_memory_file = io.BytesIO(data)
response = requests.post(url, files=(("smile.gif", in_memory_file),))
This (small) library will take a file descriptor, and will do the HTTP POST operation: https://github.com/seisen/urllib2_file/
You can pass it a StringIO object (containing your in-memory data) as the file descriptor.
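The same trick works with Requests (urllib2_file dates from the Python 2 era): an io.BytesIO can go anywhere a real file object is expected. A sketch with a placeholder URL and arbitrary field/file names; preparing the request (without sending it) shows that Requests builds a multipart body from the in-memory object:

```python
import io
import requests

payload = io.BytesIO(b'in-memory data')  # stand-in for your generated bytes

req = requests.Request(
    'POST', 'http://example.com/upload',   # placeholder URL
    files={'file': ('report.bin', payload)},  # arbitrary field/file names
).prepare()
# Requests multipart-encodes the BytesIO just like a file on disk.
print(req.headers['Content-Type'].split(';')[0])
```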
uploadstep2.request("PUT", myUniqueUploadUrl, body="C:\Test.mp4", headers=headers)
will put the actual string "C:\Test.mp4" in the body of your request, not the content of the file named "C:\Test.mp4" as you expect.
You need to open the file and read its content, then pass that as the body. Or stream it, but AFAIK http.client does not support that, and since your file seems to be a video, it is potentially huge and would use plenty of RAM for no good reason.
My suggestion would be to use requests, which is a way better lib to do this kind of things:
import requests
with open(r'C:\Test.mp4', 'rb') as finput:
response = requests.put('https://grabyo-prod.s3-accelerate.amazonaws.com/youruploadpath', data=finput)
print(response.json())
I do not know if it is useful for you, but you can try to send a POST request with the requests module:
import requests
url = ""
data = {'title': 'metadata', 'timeDuration': 120}
with open('/path/your_file.mp3', 'rb') as mp3_f:
    files = {'messageFile': mp3_f}
    # form fields go in data= (not json=) when files= is used;
    # Requests ignores the json parameter once files are given
    req = requests.post(url, files=files, data=data)
print(req.status_code)
print(req.content)
Hope it helps.
As per the python-requests documentation, if you specify the files parameter in requests.post, it will automatically set the request's content type to 'multipart/form-data'. So you just need to omit the headers part from your POST request:
import requests

response = requests.get('{FILE_URL}', stream=True)
with open({TEMP_FILE}, 'wb') as f:
    for chunk in response.iter_content(chunk_size=1024):
        if chunk:
            f.write(chunk)
with open({TEMP_FILE}, 'rb') as binaryContent:
    a = requests.post('{POST_URL}', data={'key': binaryContent.read()})
Update: You can first try downloading your file locally and posting its binary data to your {POST_URL} service.
Here's a simple function that can appropriately send your file to any endpoint you want:
import os
import requests

def upload_file(file_path, token):
    """Upload the file and return the response from the server."""
    headers = {'Authorization': f'Bearer {token}'}
    with open(file_path, "rb") as file:
        files = {'file': (os.path.basename(file_path), file, 'application/pdf')}
        response = requests.post(post_url, headers=headers, files=files)
    response.raise_for_status()
    return response
Requests will automatically pick the multipart/form-data content type for you. All you need to do is supply the file correctly. The Authorization header is optional, but I added it in case the API you want to post to requires one.