How to upload and download any file in the browser? - javascript

This is part of an experiment I am working on.
Let's say I upload a file, e.g. a .psd (Photoshop) or .sketch (Sketch) file, through an <input type="file"> tag. It should display the name of the file, and on click of a button it should be downloadable again as a .psd / .sketch (without data corruption).
How would this be achieved?
Edit 1:
I'm going to add a bit more info as the above was not completely clear.
This is the flow:
User uploads any file
File gets encrypted on the client before being sent to a Socket.IO server
The user on the other end receives this file and is able to decrypt and download it.
Note: There is no database connected to the Socket.IO server. It just listens and responds to whoever is connected to it.
I've got the encryption/decryption part covered. The only thing left is how to read the uploaded file and store it in a variable (as what?) so it can be encrypted, and how to do the opposite on the recipient end (decrypt and make it downloadable).
Thanks again in advance :)

I think these are your questions:
How to read a file that was opened/dropped into an <input type="file"> element
How to send a file to a server
How to receive a file from a server
When a user opens a file on your file element, you'll be able to use its files property:
for (const file of fileInputEl.files) {
  // Do something with file here...
}
Each file implements the Blob interface, which means you can call await file.arrayBuffer() to get an ArrayBuffer, which you can likely use directly in your other library. At a minimum, you can create your byte array from it.
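For example, a minimal sketch of reading the selected file into a byte array before handing it to your encryption code (the event wiring and variable names are illustrative):

const fileInputEl = document.querySelector('input[type="file"]');

fileInputEl.addEventListener('change', async () => {
  for (const file of fileInputEl.files) {
    const buffer = await file.arrayBuffer(); // raw file contents
    const bytes = new Uint8Array(buffer);    // byte array your encryption library can consume
    // encrypt(bytes) here, then send the result to the server
    console.log(file.name, bytes.length, 'bytes read');
  }
});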
Now, to send data, I strongly recommend that you use HTTP rather than Socket.IO. If you're only sending data one way, there is no need for a WebSocket connection or Socket.IO. If you make a normal HTTP request, you offload all the handling of it to the browser. On the upload end, it can be as simple as:
fetch('https://files.example.com/some-id-here', {
  method: 'PUT',
  body: file
});
On the receive end, you can simply open a link <a href="https://files.example.com/some-id-here">.
Now, the server part... You say that you want to just pass this file through, but you didn't specify at all what you're doing on the server. So, speaking abstractly: when you receive a request for a file, you can just wait and not reply with data until the sending end connects and starts uploading. When the sending end sends data, send that data immediately to the receiving end. In fact, you can pipe the request from the sending end into the response on the receiving end.
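As a rough illustration only (the routes, id handling, and error responses are assumptions, not anything you described), a plain Node.js relay that pipes a sender's PUT straight into a receiver's waiting GET could look like this:

const http = require('http');

const pendingDownloads = new Map(); // id -> response object of a waiting receiver

http.createServer((req, res) => {
  const id = req.url.slice(1); // e.g. /some-id-here

  if (req.method === 'GET') {
    // Hold the receiver's response open until the matching upload starts.
    pendingDownloads.set(id, res);
  } else if (req.method === 'PUT') {
    const downloadRes = pendingDownloads.get(id);
    if (!downloadRes) {
      res.statusCode = 409;
      res.end('No receiver is waiting for this id');
      return;
    }
    downloadRes.writeHead(200, {
      'Content-Type': 'application/octet-stream',
      'Content-Disposition': 'attachment'
    });
    req.pipe(downloadRes);          // sender's request body -> receiver's response
    req.on('end', () => res.end()); // acknowledge the upload once it finishes
    pendingDownloads.delete(id);
  } else {
    res.statusCode = 405;
    res.end();
  }
}).listen(3000);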
You'll probably have some initial signalling to choose an ID, so that both ends know where to send/receive from. You can handle this via your usual methods in your chat protocol.
Some other things to consider... WebRTC. There are several off-the-shelf tools for doing this already, where the data can be sent peer-to-peer, saving you some bandwidth. There are some complexities with this, but it might be useful to you.

Related

Get ajax call progress status in front end

I have a web page which allows users to upload and process specific files. After a user uploads some files and clicks the 'Process' button, an Ajax call is sent to a backend service. In the beforeSend function an overlay is applied to the screen and a spinner is displayed. When the success function is triggered, the overlay is removed and a toast notification is shown, e.g. 'Files were processed!'
My goal is to somehow show a progress status for each file based on specific checkpoints in the backend service.
Let's say that the backend service, when called, does the following tasks: parses the file, maps it to a specific format, sends data to database A... and in the end it sends back HTTP status 200 and a JSON body like
{
  "status": "Success",
  "message": "File X was processed"
}
Now what I want is, instead of just getting an overlay and disabling the whole page until the success event is triggered, to have a progress bar which is updated for each file based on the exact step the backend has reached.
For instance, for file A I would like to see the following transitions: 5% parsing file, 10% mapping file... 90% sending data to database, 100% processed.
Is this somehow achievable?
There are a few points that you need to look into.
Usually in production code we need to have timeouts. If you are making an Ajax call to the backend API, there will be a timeout associated with that API call. If processing takes longer than the timeout (say it is 2 minutes), you will get back a 504 Gateway Timeout error.
To overcome this and to implement the functionality you want, you can use any DB (let's consider SQL Server). In your SQL Server, make a table:
Process_Table
With schema:
Process_id (will store the process id/name)
Percentage (will store the percentage)
At_step (Parsing, Mapping, Sending to DB, etc.)
Using JavaScript (or any framework of your choice), use setInterval() to make check_process() API calls. To check_process you can pass in the process_id and check it against the DB. You can set the interval to 5 seconds, so that the call is made every 5 seconds.
You can read the response of those API calls and do your processing.
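A rough client-side sketch of that polling loop (the endpoint name, response fields, and progress-bar element id are assumptions, not part of the answer above):

function pollProgress(processId) {
  const timer = setInterval(async () => {
    const res = await fetch('/api/check_process?process_id=' + encodeURIComponent(processId));
    const { percentage, at_step } = await res.json(); // e.g. { "percentage": 40, "at_step": "Mapping" }

    const bar = document.getElementById('progress-' + processId); // a <progress> element
    bar.value = percentage;
    bar.title = at_step;

    if (percentage >= 100) clearInterval(timer); // stop polling once processing is done
  }, 5000); // check every 5 seconds
}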
An HTTP request consists of a request and a response. There's no direct way to get status updates beyond the onprogress event which would let you see how much data has been transferred. This is useful for determining how much of the data has been sent to the server, but not so much for how far the server has got with working with that data.
You could store progress in a database and poll a webservice to read the most recent status.
You could also have the server push updates to the client using Websockets for bi-directional communication.
A rough outline for such a system (sketched in code after this list) might look like:
Open a Websocket
Send files with Ajax
Get server generated ID back in HTTP response
Pay attention to messages coming over the Websocket that mention that ID
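A hedged sketch of that outline (the URLs, message shape, and field names are assumptions):

const socket = new WebSocket('wss://example.com/progress');
const progressHandlers = new Map(); // server-generated id -> progress callback

socket.addEventListener('message', (event) => {
  const { id, percentage, step } = JSON.parse(event.data);
  const handler = progressHandlers.get(id);
  if (handler) handler(percentage, step);
});

async function uploadWithProgress(file, onProgress) {
  const form = new FormData();
  form.append('file', file);

  const res = await fetch('/api/process', { method: 'POST', body: form });
  const { id } = await res.json();      // server-generated ID from the HTTP response
  progressHandlers.set(id, onProgress); // route later Websocket messages to this file
  return id;
}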
You could also look at doing the whole thing over Websockets (i.e. upload the files that way too). A quick Google search turns up this library for uploading files to a Websocket service hosted on Node.js.

Securing image upload using Node and AWS Lambda

I'm implementing image upload via a browser form, and I'm working with AWS and NodeJS. The process is that the user selects a file, provides additional info, and it is all sent to the backend using multipart/form-data.
This works great: the payload goes through API Gateway ---> Lambda, and this Lambda uploads to an S3 bucket. I'm using busboy to deal with the multipart data and end up with a nice JSON object containing all the data sent from the frontend, something like:
{
  userName: "Homer Simpson",
  file: base64encoded_string
}
Then I grab this base64encoded_string and upload it to S3, so the file sits there and I'm able to open it, download it, etc.
Now, obviously I don't trust any input from the frontend, and I wonder what the best way is to ensure that the file being sent is not malicious.
In this case I need to allow uploading only images, say PNG/JPG/JPEG, up to 2 MB in size.
Busboy gives me the MIME type, encoding, and other details, but I'm not sure if this is reliable enough or whether I should use something like mmmagick or something else. How secure and reliable would these solutions be?
Any pointers would be much appreciated.
OWASP has a section on this with some ideas. Anyway, I found that the best method to secure an image upload is to convert it, period: if you can convert it, it's an image and you can be sure that any attached info (code, hidden data, etc.) is removed by the conversion process; if you can't, it's not an image.
Another advantage is that you can strip EXIF info, add some data (watermarks, for example), etc.
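As a rough Node.js illustration of that idea (using the sharp library as one possible converter; the function, limits, and allowed formats are assumptions, not something prescribed above):

const sharp = require('sharp');

const MAX_BYTES = 2 * 1024 * 1024; // the 2 MB limit from the question

async function sanitizeImage(base64String) {
  const input = Buffer.from(base64String, 'base64');
  if (input.length > MAX_BYTES) throw new Error('File too large');

  const image = sharp(input);
  const { format } = await image.metadata(); // rejects if the buffer is not a decodable image
  if (!['jpeg', 'png'].includes(format)) throw new Error('Only JPEG/PNG are allowed');

  // Re-encode the pixels; embedded metadata/EXIF and any appended data are dropped.
  return image.toFormat(format).toBuffer();
}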

How to implement Play2 API server returning File via Ajax?

I am using the Scala Play2 framework and trying to convert SVG string data to other file types such as PDF, PNG, and JPEG, and send it to the client as a file.
What I want to achieve is that:
the client sends data via Ajax (a POST with a really huge JSON)
the server generates a file from the JSON
the server returns the file to the client.
But it seems hardly possible to send the file and have the client save it directly, so I am planning to create a new static file on the client's request, return its access URL to the client side, and open it via JavaScript, then delete the file on the server after the client finishes downloading. In this approach, I have to
def generateFile = {
  ...
  ...
  outputStream.flush() // save the file to a disk
}
and
Ok.sendFile(new File("foo.pdf"))
so I need to write the file to disk and read it back from storage, and I do not think this is an efficient way.
Is there any better way to achieve what I want?
Thank you in advance.
Why do you think this is not efficient enough?
I've seen a similar approach in a project:
Images are converted and stored in an arbitrary tmp directory using a special naming scheme
A dedicated server resource streams images to the client
A system cronjob triggered every 5 minutes deletes images older than 5 minutes from the tmp directory
The difference was that the image data (in your case the SVG string) was not sent by the client but was stored in a database.
Maybe you could skip the step of writing images to disk if your conversion library is able to generate images in memory.

Send request to download file, catch response in Javascript

In a Rails 2 web app, I am creating a form in JavaScript to forge a request to download a file. In the controller, I render the response using send_file so the file is downloaded.
There is a rather long process going on to look up the file to download, and meanwhile the client is waiting for the file to start being downloaded. For this reason, as soon as the client clicks the download button I display a small message saying "Requesting file...".
Problem:
How can I (on the JavaScript side) know when the response is rendered (i.e. the file starts to download) so I can hide the message I am displaying? I use Ajax everywhere, and I put the showMessage method in the "before" part of the call and hideMessage in the "complete" part of the jQuery Ajax call. But since I cannot request a file to be downloaded with an Ajax call, I have to do it through a form, and then I don't know how to tell when I get the response back (i.e. when the file starts to download).
The basic idea is to send a cookie along with the file. Some JavaScript code checks for the cookie periodically. When the JavaScript sees the cookie, you know the download has begun.
You can find sample JavaScript code here (server code in Java) and here (server code in ASP.NET).
In Rails, the cookie could be sent like this:
cookies['download_token'] = params[:download_token]
send_data ...
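On the JavaScript side, a rough sketch of the cookie check (the polling interval and callback are illustrative; the cookie name matches the Rails snippet above):

function waitForDownload(token, onStarted) {
  const timer = setInterval(() => {
    // The server sets download_token=<token> alongside send_data,
    // so seeing it in document.cookie means the response has started arriving.
    if (document.cookie.split('; ').includes('download_token=' + token)) {
      clearInterval(timer);
      onStarted(); // e.g. hide the "Requesting file..." message
    }
  }, 1000);
}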

Force browser download a large file

I'm using a POST request with body to dynamically create a specified file on the server side and streaming it to the client as a response. The problem on the client side is that the file is being downloaded into the response object (browser's cache memory I suppose, as a temporary file) instead of the PC's hard drive.
How can I pass the byte stream directly to the browser to handle, and have it directed into the /Downloads folder or wherever the user has set downloads to go?
The file size is really large and there's not enough space on the gadget's hard drive to create it there and provide a link for a GET request. So I need to create it on the fly and stream it to the client as a POST response.
Right now I've written my code in Angular, but the language doesn't really matter. I'd like to understand how the browser handles the situation.
When I execute the HTTP request, I bet the browser does something with it before it's passed to the socket. And I'm sure the browser does something with the received response too before passing it to the piece of code which performed the request.
I mean, can you make the browser download the file into /Downloads just by setting the response headers? What happens then to the waiting HTTP request if the packets are "stolen" by the browser? Or, when the response headers arrive at the response object, can you somehow notify the browser that the body of the response is a file byte stream we wish to download into /Downloads?
And I have the possibility to modify my server code too.
EDIT More detailed explanation:
My server is sending a 5 GB file to the client. Instead of just receiving that file in the temporary memory of the browser, I would like to save the file directly to the hard drive.
But naturally the browser prevents direct access to the user's computer, so I can't write it there. So how do I pass the byte stream to the browser to handle and save to a file?
OK, I have solved the problem. If you send a file request to the server yourself from your JavaScript code, the browser will just pass the received response directly to your response object, and you can't store it on the hard drive without some external (browser-supported) tools.
What I mean is that if you perform any kind of $http.post() or .get() from your scripts, the response will be passed directly to $http.post().then(function(response){...}), and it's passed only after the whole response has been received. This means that if you're expecting a file of 5 GB, it will fail because it is too large to be held inside the response object.
Instead what you have to do is to trigger the download in another way. What I did in my code is that I built a hidden
<form method="POST" action="/set.uri.here/" id="download">...</form>
and when the user clicks the button to download the file, I run a function that builds the body of the form and then submits it, which looks like
var form = document.getElementById("download");
// magic here... create the body for the form and so on
form.submit();
This way you hand the request over to your browser, so the response will also be handled by the browser. This is the crucial difference between doing the request yourself and letting the browser do it. If you do the request yourself, the browser won't download the object; you will receive it yourself to do something with it.
If you want the browser to download the object, then make the browser do the request for the object as well!
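For completeness, a hedged sketch of building and submitting such a hidden form dynamically (the URL and field names are placeholders):

function triggerDownload(url, params) {
  const form = document.createElement('form');
  form.method = 'POST';
  form.action = url;
  form.style.display = 'none';

  for (const [name, value] of Object.entries(params)) {
    const input = document.createElement('input');
    input.type = 'hidden';
    input.name = name;
    input.value = value;
    form.appendChild(input);
  }

  document.body.appendChild(form);
  form.submit(); // the browser performs the POST and saves the streamed response as a download
  document.body.removeChild(form);
}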
