Basically, I want to upload files directly to S3 from the browser, i.e. without any web server acting as middleware or proxy.
So I am generating a pre-signed URL with the boto3 library, like this:
def put_url(self, key):
    url = self.client.generate_presigned_url(
        ClientMethod="put_object",
        Params={
            "Bucket": "visweswaran",
            "Key": key
        }
    )
    return url
This returns a pre-signed URL, which is completely fine. I am then using jQuery to make an AJAX PUT request to S3 to upload my file:
let file_data = document.getElementById("file_data").files[0];
var form = new FormData();
form.append("", file_data, "test.txt");
var settings = {
    "url": url,
    "method": "PUT",
    "timeout": 0,
    "processData": false,
    "mimeType": "multipart/form-data",
    "contentType": "text/plain",
    "beforeSend": function (xhr) { xhr.setRequestHeader('Content-Disposition', 'attachment'); },
    "data": form
};
$.ajax(settings).done(function (response) {
    location.reload();
});
The file gets uploaded to S3 successfully via the browser, but when I open it I see strange metadata added to the top of the file, like this:
-----------------------------33057860671031084693134041830 Content-Disposition: form-data; name="name"
test.txt
-----------------------------33057860671031084693134041830 Content-Disposition: form-data; name="file"; filename="test.txt"
Content-Type: text/plain
I have also tried a more formal solution, Plupload (https://www.plupload.com/), and I am facing the same problem. I would like somebody to point me in the right direction to fix it. Any help is much appreciated.
References:
https://softwareontheroad.com/aws-s3-secure-direct-upload/
How to upload to AWS S3 directly from browser using a pre-signed URL instead of credentials?
Working Solution
I have tested this with a video: you don't need a form, just send the data directly.
let video = document.getElementById("video_file").files[0];
var settings = {
    "url": url,
    "method": "PUT",
    "timeout": 0,
    "processData": false,
    "data": video
};
$.ajax(settings).done(function (response) {
    location.reload();
});
I have tried uploading a .txt file with a pre-signed PUT URL using two approaches:
Sending form data (this is how POST URLs are used, not PUT):
This actually adds the Content-Disposition header to the final file, as described in the question.
Sending raw binary data (the recommended way, and how a PUT URL is meant to be used):
The file was uploaded correctly and does not include the Content-Disposition header.
Could you try sending the PUT request without using FormData at all?
The AJAX data attribute should be set to file_data directly, and the content type used while signing the S3 URL and while sending the AJAX request should both be 'binary/octet-stream' (ContentType in boto3).
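For reference, here is a minimal sketch of signing the PUT URL so that its content type matches what the browser will send (the bucket name and key below are placeholders, not from the original post):

import boto3

s3 = boto3.client("s3")

# Sign a PUT URL whose Content-Type matches the one the client will send.
url = s3.generate_presigned_url(
    ClientMethod="put_object",
    Params={
        "Bucket": "my-bucket",        # placeholder bucket name
        "Key": "test.txt",            # placeholder object key
        "ContentType": "binary/octet-stream",
    },
    ExpiresIn=3600,
)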
If you need to use FormData, check out S3's presigned POST instead.
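For what it's worth, a rough boto3 sketch of generating a pre-signed POST (again, bucket and key are placeholders):

import boto3

s3 = boto3.client("s3")

# generate_presigned_post returns a URL plus the form fields that must be
# included (as hidden fields) alongside the file in a multipart/form-data POST.
post = s3.generate_presigned_post(
    Bucket="my-bucket",        # placeholder bucket name
    Key="uploads/test.txt",
    ExpiresIn=3600,
)
# post["url"]    -> where the browser should POST the form
# post["fields"] -> fields to append to the FormData before the file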
Related
I'm trying to send an audio file via a POST request to the server (AWS, EC2) and I'm using Django, but my request.FILES doesn't receive the blob file; it DOES receive the key and the filename.
Everything worked fine when I ran this on localhost.
How can I get the file?
I'm enabling my website in chrome://flags/#unsafely-treat-insecure-origin-as-secure so that I can access the microphone.
I'm using RecorderJs to generate a Blob object containing the recorded audio in WAV format.
Main.js
rec.exportWAV(function (blob) {
    ...
    var fd = new FormData();
    fd.append('text', speech);
    fd.append('audio', blob, 'test.wav');
    $.ajax({
        type: 'POST',
        enctype: 'multipart/form-data',
        url: url,
        data: fd,
        processData: false,
        contentType: false,
        success: function (response) {
            console.log(response);
        }
    });
    ...
speech is a String, and blob in the console is Blob {size: 221228, type: "audio/wav"}, so it does exist.
Views.py:
@csrf_exempt
def get_blob(request):
    thislist = []
    for key in request.POST:
        thislist.append(request.POST.get(key))
    for key in request.FILES:
        thislist.append(request.FILES.get(key).name)
    json_stuff = json.dumps({"check": thislist})
    return HttpResponse(json_stuff, content_type="application/json")
I've tried with and without enctype; it doesn't make a difference.
I've also tried setting contentType to multipart/form-data; that doesn't make a difference either.
The formdata seems to be sent correctly, because I can get the speech correctly (request.POST).
And I can get the key from request.FILES ('audio'), and get the filename ('test.wav').
If I try request.FILES['audio'].read(), it says MultiValueDictError.
If I try request.FILES.get('audio').read() it says AttributeError, 'NoneType' object has no attribute 'read'.
When I print request.POST I do get the dictionary with 'text': whatever text I've spoken.
When I print request.FILES I get an empty dictionary even though I can get the key and filename through
for key in request.FILES: and request.FILES['audio'].filename.
Does anyone know what's going on and/or can help me with the problem?
You may use the read() method for this:
import json

from django.http import HttpResponse
from django.views.decorators.csrf import csrf_exempt


@csrf_exempt  # see the note below about CSRF handling in later Django versions
def get_blob(request):
    thislist = []
    for key in request.POST:
        thislist.append(request.POST.get(key))
    for key in request.FILES:
        uploaded = request.FILES[key]
        thislist.append(uploaded.name)
        # Write the uploaded file to disk under its original filename.
        with open(uploaded.name, "wb+") as f:
            f.write(uploaded.read())
    json_stuff = json.dumps({"check": thislist})
    return HttpResponse(json_stuff, content_type="application/json")
By the way, @csrf_exempt doesn't work for POST requests in later Django versions, as the token is checked before your view is even called. So you may need to disable the CSRF middleware or add a correct 'X-CSRFToken' header to all POST requests.
I'm trying to upload a file to Amazon AWS from JavaScript with a signed URL obtained from a Django API, where I sign it with the help of boto. This is my line in Python:
url = conn.generate_url(300, 'PUT', settings.AWS_VIDEO_BUCKET, key, headers={'Content-Length': '19448423'}, force_http = True )
Using the URL generated from this, I can upload to S3 with cURL like so:
curl -v --request PUT --upload-file video.mp4 "http://some_signed_url"
What does not work, though, is using this URL to PUT a file to Amazon AWS via JavaScript. Debugging with a proxy reveals a broken pipe error, while cURL would give me the regular Amazon Access Denied XML if something went wrong. This is my JavaScript code (file is a JS File object):
var file = e.originalEvent.dataTransfer.files[0];
var fd = new FormData();
fd.append('file', file);
$.ajax({
    url: 'signed_url',
    data: fd,
    processData: false,
    contentType: false,
    cache: false,
    type: 'PUT',
    success: function (data) {
        console.log("success");
    },
    error: function (data) {
        console.log(data.responseText);
    }
});
Any ideas where to go from here? Maybe I'm not including all the required headers in the signing process, or FormData is not the right way to go.
I am trying to use the BusinessObjects RESTful API to download a generated (PDF or XLS) document.
I am using the following request:
$.ajax({
    url: server + "/biprws/raylight/v1/documents/" + documentId,
    type: "GET",
    contentType: "application/xml",
    dataType: "text",
    headers: { "X-SAP-LogonToken": token, "Accept": "application/pdf" },
    success: function (mypdf) {
        // some content to execute
    }
});
I receive this data as a response:
%PDF-1.7
%äãÏÒ
5 0 obj
<</Length 6 0 R/Filter/FlateDecode>>
//data
//data
//data
%%EOF
I first assumed that it was base64 content, so in order to let users download the file, I added these lines to the success function:
var uriContent = "data:application/pdf; base64," + encodeURIComponent(mypdf);
var newWindow=window.open(uriContent, 'generated');
But all I get is ERR_INVALID_URL, or a failure opening the generated file when I remove "base64" from the uriContent.
Does anyone have any idea how I could use the response data? I went here but it wasn't helpful.
Thank you!
. bjorge .
Nothing much can be done from the client side, i.e. JavaScript.
The server-side code has to be changed so that a URL link (pointing to the PDF file) is generated and sent as part of the response. The user can then download the PDF from that link.
You cannot create a file using JavaScript; JavaScript doesn't have access to writing files, as this would be a huge security risk, to say the least.
To achieve your functionality, you can implement a click event that targets the required file, which will prompt the user to save it.
I have an endpoint from our Django backend guys with documentation that reads:
POST to /api/1/photo-uploads/ with enctype="multipart/form-data" with files in field called "files[]".
I've been attempting to send the uploaded files as FormData using jQuery's AJAX method. I keep getting an error indicating that the file was not sent. When I view the payload, I see:
undefined
------WebKitFormBoundary9AzM2HQPcyWLAgyR
Content-Disposition: form-data; name="file"; filename="auzLyrW.jpg"
Content-Type: image/jpeg
This doesn't necessarily mean that it hasn't been sent, but there certainly isn't a location being posted, and I don't have any kind of verification that the file was uploaded.
var formData = new FormData();
formData.append('file', $('#file-upload').get(0).files[0]);
$.ajax({
    url: '/api/1/photo-uploads/',
    type: 'POST',
    data: formData,
    cache: false,
    contentType: false,
    processData: false
});
When I console.log the formData, it simply shows prototype methods like .append, so I'm unable to verify whether the file's data is being sent beyond checking the payload. I can log $('#file-upload').get(0).files[0], but I only see details about the file itself. Because I'm testing locally, the upload location should be something like localhost:8000/.
The backend guys are under the impression that it's something I'm doing. When I do a simple form post it works fine. I've tried a number of plugins and basic methods, and all have produced the 400 {"message": "No photos supplied.", "success": false}.
Any ideas would be appreciated.
The documentation asked that the field be called files[]. What was being sent was file:
formData.append('files[]', $('#file-upload').get(0).files[0]);
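For context, here is a hypothetical sketch of how a Django backend might read that field; this is not the actual /api/1/photo-uploads/ implementation, just an illustration of why the field name matters:

from django.http import JsonResponse

def photo_uploads(request):
    # getlist() returns every file posted under the "files[]" field name;
    # files appended under a different name (e.g. "file") simply won't show up here.
    photos = request.FILES.getlist("files[]")
    if not photos:
        return JsonResponse({"message": "No photos supplied.", "success": False}, status=400)
    return JsonResponse({"success": True, "count": len(photos)})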
I have the following code to download a .csv file:
$.ajax({
    url: urlString,
    contentType: "application/json; charset=utf-8",
    dataType: "json",
    cache: false,
    success: function (data) {
        if (data) {
            var iframe = $("<iframe/>").attr({
                src: data,
                style: "visibility:hidden;display:none"
            }).appendTo(buttonToDownloadFile);
        } else {
            alert('Something went wrong');
        }
    }
});
The urlString points to a RESTful service that generates the .csv file and returns the file path, which is assigned to the src attribute of the iframe. This works for .csv files, but I'm having problems with .xml files.
When I use the same code but change the contentType to text/xml to download .xml files, it doesn't work.
Can I use the same approach here for .xml files?
UPDATE:
Thanks to Ben for pointing me in the right direction. It turns out I don't need the AJAX call at all. Instead, I can just use the iframe and its src attribute to call the web service, which will generate the content, add the Content-Disposition header, and return the stream.
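For reference, a minimal sketch of such a web service endpoint (assuming Django; the question doesn't show the actual service) could look like this:

import csv

from django.http import HttpResponse

def download_csv(request):
    response = HttpResponse(content_type="text/csv")
    # Content-Disposition: attachment is what makes the browser download the
    # stream instead of rendering it.
    response["Content-Disposition"] = 'attachment; filename="report.csv"'
    writer = csv.writer(response)
    writer.writerow(["id", "name"])   # placeholder content
    writer.writerow([1, "example"])
    return response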
You can also offer it as a download from a virtual anchor element, even if the data is client-side:
/*
* Create an anchor to some inline data...
*/
var url = 'data:application/octet-stream,Testing%20one%20two%20three';
var anchor = document.createElement('a');
anchor.setAttribute('href', url);
anchor.setAttribute('download', 'myNote.txt');
/*
* Click the anchor
*/
// Chrome can do anchor.click(), but let's do something that Firefox can handle too
// Create event
var ev = document.createEvent("MouseEvents");
ev.initMouseEvent("click", true, false, self, 0, 0, 0, 0, 0, false, false, false, false, 0, null);
// Fire event
anchor.dispatchEvent(ev);
http://jsfiddle.net/D572L/
I'm guessing that the problem is that most browsers will try to render XML in the browser itself, whereas they tend to have no handler for CSV, so they automatically default to prompting the user to download the file. Try modifying the headers of the XML response to force the download. Something like this (PHP example):
header("Content-Type: application/force-download");
header("Content-Type: application/octet-stream");
header("Content-Type: application/download");
header('Content-Disposition: attachment; filename="some filename"');
That should tell most browsers not to attempt to open the file, but instead to have the user download the file and let the OS determine what to do with it.
If you have no power to control headers in the XML file itself, you can try a work-around using a server-side script. Use JS to pass the URL to a server-side script:
// build the new URL
var my_url = 'http://example.com/load_file_script?url=' + escape(path_to_file);
// load it into a hidden iframe
var iframe = $("<iframe/>").attr({
    src: my_url,
    style: "visibility:hidden;display:none"
}).appendTo(buttonToDownloadFile);
and on the server side (your http://example.com/load_file_script script) you use cURL/file_get_contents/wget/[some other mechanism for fetching remote files] to grab the contents of the remote file, add the Content-Disposition: attachment header, and output the contents of the original file.
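A rough Python equivalent of that server-side script (assuming Django and the third-party requests library; the original description is PHP-flavored) might look like:

import requests

from django.http import HttpResponse

def load_file_script(request):
    # In production, validate this parameter against a whitelist of allowed
    # hosts so the endpoint can't be used to proxy arbitrary URLs.
    url = request.GET.get("url")
    remote = requests.get(url, timeout=30)
    response = HttpResponse(remote.content, content_type="application/octet-stream")
    response["Content-Disposition"] = 'attachment; filename="download.xml"'
    return response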