Uploading images to S3 doesn't work - JavaScript

I'm trying to upload files to S3 without having to send them through my server. I have an endpoint which gives me a signed S3 URL where I can make PUT requests to store files in my bucket.
I tried a couple of things on the JavaScript side which didn't work. (I'm not using Amazon's SDK, and prefer not to, because I'm looking for a simple file upload and nothing more than that.)
Here's what I'm trying to do currently in JavaScript:
uploadToS3 = () => {
  let file = this.state.files[0];
  let formData = new FormData();
  formData.append('Content-Type', file.type);
  formData.append('file', file);
  let xhr = new XMLHttpRequest();
  xhr.open('put', this.signed_url, true);
  xhr.send(formData);
};
I tried a bunch of options. I prefer using fetch because I don't really care about upload progress, since these are just images. I used the xhr code above, which I found somewhere, to try it out. These do make network calls and seem like they should work, but they don't.
Here's what happens: an object is created on S3, but when I go to its public URL the file gets downloaded, and when I open it in an image viewer it says it's not a valid JPG.
I'm thinking I'm not doing the upload correctly.
Here's how I do it in Postman: I use the correct signed URL, attach the binary image file to the request body, and add a header stating the content type is image/jpeg.
When I log in to S3 and go to my bucket, I can see the image, and I can go to its public URL and view it in the browser. This works perfectly and is exactly what I want; I just don't know how to achieve the same in JavaScript.
PS: I even tried clicking Code in Postman; it doesn't generate the file-upload code for me.

The problem here starts with xhr.send(formData).
When you PUT a file to S3, you don't use any form structures at all; you just send the raw object bytes in the request body.
Content-Type: and other metadata go in the request headers, not in form data in the body.
In this case, if you download your uploaded file and view it with a text editor, the problem should be very apparent once you see what your code is actually sending to S3, which S3 then obediently stores and serves up on subsequent requests.
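A minimal sketch of the corrected upload (assuming the signed URL was generated for a PUT and that the Content-Type you send matches the one that was signed) could look like this:

uploadToS3 = () => {
  const file = this.state.files[0];
  // Send the raw file bytes as the body; metadata goes in headers.
  return fetch(this.signed_url, {
    method: 'PUT',
    headers: { 'Content-Type': file.type }, // must match what was signed
    body: file                              // raw bytes, no FormData
  });
};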
Note that S3 does have support for browser-based form POST uploads, but when doing so the signing process is significantly different: you create and sign a policy document, then send the form, including the policy and signature, to the browser, allowing an otherwise-untrusted user to upload a file. The signed policy statement prevents the browser user from tampering with the form and performing actions that you didn't intend.
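For completeness, a hedged sketch of what that form-based POST could look like once your server has produced the signed policy; every signed value below (policy, credential scope, date, signature) is an assumption that must come from your server's signing step, and the bucket URL and key are placeholders:

const form = new FormData();
form.append('key', 'uploads/photo.jpg');            // object key (placeholder)
form.append('policy', signedPolicyBase64);          // base64 policy from your server
form.append('x-amz-algorithm', 'AWS4-HMAC-SHA256');
form.append('x-amz-credential', credentialScope);   // from your server
form.append('x-amz-date', amzDate);                 // from your server
form.append('x-amz-signature', signature);          // from your server
form.append('file', file);                          // the file must be the last field
fetch('https://your-bucket.s3.amazonaws.com/', { method: 'POST', body: form });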

Related

Cannot download pdf from file-saver.js react

I am trying to download the PDF from this URL:
http://www.africau.edu/images/default/sample.pdf
I followed the example and wrote the code below.
import { saveAs } from "file-saver";

const downloadPDF = () => {
  saveAs(
    "http://www.africau.edu/images/default/sample.pdf",
    "something.pdf"
  );
};
However, when the downloadPDF function is invoked on button press, the file is not saved; the PDF is simply opened in a new tab.
How do I save the PDF file?
Also, is this approach to getting the PDF even valid in the first place, or is axios.get() the preferred approach to fetch the file and then save the response body via FileSaver.saveAs()?
If the question is unclear, please let me know in a comment before flagging and I will make the necessary updates. Thank you.
It seems like FileSaver does not help here. FileSaver's own documentation notes that if the file is coming from the server, you should first try the Content-Disposition: attachment response header, as it has better cross-browser compatibility.
As far as I know, there are 2 ways to download a file in the browser:
1. The server returns a response with the header Content-Disposition: attachment, or with Content-Type: application/octet-stream. The browser will prompt the save dialog and handle the download for you. This is the preferred way, but it requires you to have control over the server.
2. You use ajax or axios to fetch the data of any file from anywhere, then create a dummy link to trigger the download (like this one). The browser will prompt the save dialog and save the file to disk. This is fine for small files, but not for large ones, because you have to hold the entire file in memory before saving it to disk.
I think option 2 is appropriate for you.
Example here. In this example, I place a file named abc.json in the public folder. Note that the server must enable CORS for your website's origin; otherwise, there's no way for you to access that file from JavaScript code.
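Since the linked example isn't reproduced above, here is a minimal sketch of option 2 (assuming the server sends CORS headers; the function name is illustrative):

// Fetch the file, buffer it as a Blob, then click a dummy link so the
// browser prompts the save dialog.
async function downloadFile(url, filename) {
  const response = await fetch(url);
  const blob = await response.blob();           // the whole file is held in memory
  const objectUrl = URL.createObjectURL(blob);
  const link = document.createElement("a");     // the dummy link
  link.href = objectUrl;
  link.download = filename;                     // triggers the save dialog
  document.body.appendChild(link);
  link.click();
  link.remove();
  URL.revokeObjectURL(objectUrl);               // release the memory
}

downloadFile("http://www.africau.edu/images/default/sample.pdf", "sample.pdf");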

Saving user uploaded image to folder and/or server

I have an app that allows users to upload an image, crop it, and save it all, with other data, as an HTML file to be used as a footer for emails. The image is given to them as base64.
However, it turns out this is not supported by Outlook, since it doesn't accept base64 data as an img source.
So my idea was to save the cropped image to a file, let's say /public/avatars/avatar.png, and link its path as the source. However, I'm having trouble finding a way to save images to a file using JS. My allowed stack is JS and React, no Node.js.
How would I do that? I can have the file as either base64 or a canvas element, so I'm flexible here, as long as it's saved to a file.
I'm open to other solutions too.
You can't save a file with a client-side language only. You have to save it to a server: your own server, an external server, or a service like AWS.
The best solution without server-side code (strangely) is to use an API to save the image and get a link back from that API. Then you can use this link in Outlook.
You can use https://aws.amazon.com/fr/cloudfront/ free for one year with 50 GB and 2 million requests monthly.
If you do not exceed 300,000 images per year you can use this service: https://cloudinary.com/pricing
You can also use https://www.deviantart.com/developers/ but that's not really the point of that service.
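As a hedged sketch of the API route, Cloudinary's unsigned upload endpoint accepts a base64 data URL directly; the cloud name and upload preset below are placeholders you would create in your own Cloudinary account:

async function uploadCroppedImage(dataUrl) {
  const formData = new FormData();
  formData.append("file", dataUrl);               // the base64 data URL of the cropped image
  formData.append("upload_preset", "MY_PRESET");  // placeholder unsigned preset name
  const res = await fetch(
    "https://api.cloudinary.com/v1_1/MY_CLOUD/image/upload", // placeholder cloud name
    { method: "POST", body: formData }
  );
  const json = await res.json();
  return json.secure_url; // a hosted URL you can use as the img src in Outlook
}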
A weird solution:
Be careful: the login and password of your FTP user will be visible in the source of your code, so that user must be given minimum rights.
You can use this script to talk to an FTP server from JS (not tested, but it seems to work): http://www.petertorpey.com/files/ae/scripts/FTPConnection.jsx
You can try something like this:
var ftp = new FtpConnection("ftp://URL");
ftp.login("username", "password");
ftp.cd("FOLDER");                 // move to the target folder
ftp.put(file, "FILE.PNG");        // file must be a JS File object
ftp.close();

Questions about extract.autodesk.io - taking a file path instead of choosing with the file chooser

I'm trying to modify the project so I could plug in a file path or a file as a variable instead of the user choosing the model file. So I'm looking for where the actual upload happens.
In submitProject():
https://github.com/cyrillef/extract.autodesk.io/blob/master/www/js/app.js#L129
I see that it just sends (with an ajax request) an object that holds the file name and a unique identifier, but not the actual binary file.
In here:
https://github.com/cyrillef/extract.autodesk.io/blob/master/www/js/upload-flow.js#L34
there's r.upload(). Is this the actual upload of the model?
Does it start uploading the file right as you press OK in the file chooser?
Is there a way to give it a file path to upload instead of uploading with the form and file chooser?
The author of this sample should be on Christmas vacation. I just downloaded and set up the extractor sample on my machine, and after a little debugging of the code, let me try to answer as much as I can.
In general, I think some of your understanding is correct, but let me explain a little more:
For a local file to be uploaded and translated, there are actually 2 steps of actual "upload".
As you mentioned, when you press OK in the file chooser, the file is first uploaded to the "extractor" server, as you noticed via methods like r.upload(). It's actually using a JavaScript library called "flow.js", which provides multiple simultaneous, stable, fault-tolerant and resumable/restartable file uploads via the HTML5 File API. I am no expert on this, but you can check that module for how to use it to upload a file; a minimal usage sketch follows below.
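Here is that sketch, using flow.js's documented API (the target URL is a placeholder, not necessarily the extractor sample's actual endpoint):

var flow = new Flow({ target: '/api/file', chunkSize: 1024 * 1024 }); // placeholder target
flow.on('fileSuccess', function (file) {
  console.log(file.name + ' uploaded');  // fires once all chunks are accepted
});
flow.addFile(someFile);  // e.g. a File taken from an <input type="file">
flow.upload();           // starts the chunked, resumable upload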
By now, your file has been uploaded from the client to the "extractor" server, but if you want to translate the file to "svf", the file also has to be uploaded to the Autodesk server (OSS). That is done by clicking the "submit my project" button. When you click this button, the client calls the method submitProject() in https://github.com/cyrillef/extract.autodesk.io/blob/master/www/js/app.js, which sends a POST request for "/api/projects" to the "extractor" server. If you check the code at the server side, https://github.com/cyrillef/extract.autodesk.io/blob/master/server/projects.js, you can see the extractor server actually uploads the file to Autodesk OSS and then triggers the translation service.
This feature (passing a URL string vs a file binary) is already implemented. You can use the uri: edit box and paste your file URL there. It supports http(s) or S3 URIs with an access token.
The physical upload happens in this file, whereas the submitProject() code sends only information as JSON. The JSON object only contains a reference to the file which was uploaded using flow.js, but it would contain the URI string if you had chosen that method.

HTML Object data URL from dynamic source

So all uploads for my app are not stored in my web server's space; they are stored in file system storage. When my users want access to a file, they call a URL and the backend process buffers the data to the browser via the HttpServletResponse output stream. This works great, as intended, for downloading a file. Now my use-case has a scenario where I need to load an embedded object using this same method.
I am essentially loading a preview of the PDF file in the browser. This works fine if the PDF is stored on the web server and I provide a direct URL to the file, but when I use my method of sending files to the user, it doesn't work.
<object data='"+pdfUrl+"' type='application/pdf' width='160px' height='160px' />
If I put pdfUrl into a browser, my file gets downloaded no problem. So I think the issue is the HTTP headers I am sending with the output stream, which may be preventing the object from loading properly. I am not sure if it expects something specific to be set in order to trigger loading the file.
I am currently using very basic headers as follows:
BufferedInputStream is = <Some File Inputstream>;
// Guess the MIME type from the file name and set it on the response
resp.setContentType(new MimetypesFileTypeMap().getContentType(directory+filename));
resp.setHeader("Content-Disposition", "attachment; filename="+StringFormatHelper.formatFileName(filename));
// Stream the file bytes to the client
bufferedCopy(is, resp.getOutputStream());
is.close();
resp.getOutputStream().flush();
Does anyone have any ideas on what I have to change to get the data to load properly in the object tag? I don't get any errors in the JS console or on the server side, and I am not sure how to debug this issue.
Edit:
So I just realized that if I right-click where the blank object tag is, I have the option to "Save as...", and when I do, I download the PDF. So the PDF data is loaded; it's just not displaying in the UI.
The issue is this line of code:
resp.setContentType(new MimetypesFileTypeMap().getContentType(directory+filename));
This was not setting the correct MIME type for the file, as I thought it was. So there was a mismatch: the object tag was expecting application/pdf, but the server was sending a different MIME type in the header. Once I matched them up, everything worked.
I was able to get the correct MIME type using the Spring-provided lookup instead of the JDK lookup:
new ConfigurableMimeFileTypeMap().getContentType(directory+filename)

Upload an image file to azure blob storage from js

I'm trying to upload an image file from an HTML page to Azure blob storage. So far I have written a web service to create an SAS for my blob container. From this I have created a URI of the format "blob address" / "container name" / "blob name" ? "sas". I have an upload control on my HTML page.
I have then tried to upload the file using the following code:
var xhr = new XMLHttpRequest();
xhr.upload.addEventListener("progress", uploadProgress, false);
xhr.addEventListener("load", uploadComplete, false);
xhr.addEventListener("error", uploadFailed, false);
xhr.addEventListener("abort", uploadCanceled, false);
xhr.open("PUT", blobPath);
xhr.send(upFile.files[0]);
where blobPath is the URI described above and upFile is my HTML upload control.
When I try to upload a file the uploadFailed routine is triggered. So:
Is this the right way to do this?
How do I trap the error returned by the upload so that I can see what is going wrong?
My sas looks like: "sr=c&si=mypolicy&sig=onZTE4buyh3JEQT3%2B4cJ6uwnWX7LUh7fYQH2wKsRuCg%3D"
Any help appreciated, thanks.
This is certainly the right way; however, there are a few things:
You have to include the x-ms-blob-type request header in your request, and its value should be BlockBlob.
Also, please realize that for this to work, you would need to host your HTML page in the same storage account, because CORS is not supported in Azure Blob Storage as of today. So if your HTML page is hosted on some other domain, you will get errors because of CORS.
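As a sketch, the question's code needs just one extra line before send(); blobPath, upFile, and the event handlers are all from the question:

var xhr = new XMLHttpRequest();
xhr.upload.addEventListener("progress", uploadProgress, false);
xhr.addEventListener("load", uploadComplete, false);
xhr.addEventListener("error", uploadFailed, false);
xhr.addEventListener("abort", uploadCanceled, false);
xhr.open("PUT", blobPath);
xhr.setRequestHeader("x-ms-blob-type", "BlockBlob"); // required for the Put Blob operation
xhr.send(upFile.files[0]);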
You may find these blog posts useful for uploading blobs using SAS and AJAX:
http://coderead.wordpress.com/2012/11/21/uploading-files-directly-to-blob-storage-from-the-browser/
http://gauravmantri.com/2013/02/16/uploading-large-files-in-windows-azure-blob-storage-using-shared-access-signature-html-and-javascript/
I have now found a solution that works for this problem. My upload now works as follows:
1. I read the file as a DataURL into a FileReader.
2. I slice the returned string up and send each slice up to the server, where it is stored in a session variable.
3. Once the whole file has been sent up, I call another web service which glues the slices back together and turns the result into a byte array.
4. The byte array is then stored as a file in local storage in Azure.
5. Finally, the file is transferred from local storage into blob storage.
See code at:
Upload files from html to azure
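A hedged client-side sketch of the slicing steps described above; the endpoints and slice size are illustrative assumptions, not the poster's actual code:

function uploadInSlices(file) {
  const reader = new FileReader();
  reader.onload = async () => {
    const dataUrl = reader.result;       // the base64 DataURL string
    const sliceSize = 4096;              // illustrative slice size
    for (let i = 0; i < dataUrl.length; i += sliceSize) {
      // placeholder endpoint: the server stores each slice in a session variable
      await fetch('/api/uploadSlice', { method: 'POST', body: dataUrl.slice(i, i + sliceSize) });
    }
    // placeholder endpoint: the server glues the slices together and stores the blob
    await fetch('/api/commitUpload', { method: 'POST' });
  };
  reader.readAsDataURL(file);
}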
