I am performing a multipart upload to S3 straight from the browser, i.e. bypassing my back-end.
My JS code sends a handshake request to S3 (a POST), then uploads the file in 5 MB chunks (a PUT for each) and eventually finalises the upload (a POST).
That works well. As you can guess, each request to S3 (handshake, part uploads and finalisation) has to be signed. It is of course out of the question to generate the signature in JS, as that would expose my AWS secret key.
What I am doing so far is the following: before each request to S3, I send a request to my own back-end (to /sign?method=HTTPMethod&path=URLToSign), which returns the signature string. This way, all AWS credentials stay in the back-end, as they should.
My question is the following: is this secure?
One thing you should take into account is that anyone who sees the address you call to get the signature string could abuse it to upload any file (and any number of files) on behalf of you or your application.
To avoid that, my first thought would be to implement some sort of validation in the back-end; for example, you could refuse to sign URLs if you were getting more than X requests per second.
Also, if you need to filter some files out (.exe files, or huge files that could raise your AWS bill), I don't think bypassing your back-end is a good idea, because you have no control over which files get uploaded (maybe your user uploads a file named kitten.png which is actually a 700 MB ISO).
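Such a validation gate in the /sign endpoint might be sketched like this (all names, limits, and the allowed prefix/extensions are assumptions to adapt to your app):

```javascript
const signRequests = new Map(); // userId -> timestamps of recently signed requests

// Returns true if the back-end should sign this request:
// the path must live under a known prefix, have an allowed extension,
// and the user must stay under a per-second signing quota.
function shouldSign(userId, path, now = Date.now()) {
  const allowedExt = /\.(png|jpe?g|pdf)$/i;          // assumption: adjust per app
  if (!path.startsWith('/uploads/')) return false;   // assumption: key prefix
  // Multipart sub-requests carry query strings (?uploadId=..., ?partNumber=...),
  // so check the extension on the key alone.
  const key = path.split('?')[0];
  if (!allowedExt.test(key)) return false;
  const recent = (signRequests.get(userId) || []).filter(t => now - t < 1000);
  if (recent.length >= 10) return false;             // max 10 signatures/second
  recent.push(now);
  signRequests.set(userId, recent);
  return true;
}
```

This doesn't stop a determined abuser on its own (they can stay under the quota), but combined with short-lived credentials and bucket lifecycle rules it limits the damage.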
This is part of an experiment I am working on.
Let's say I upload a file, e.g. a .psd (Photoshop) or .sketch (Sketch) file, through an input type="file" tag; it displays the name of the file, and the file can be downloaded as a .psd/.sketch on click of a button (without data corruption).
How would this be achieved?
Edit 1:
I'm going to add a bit more info as the above was not completely clear.
This is the flow:
User uploads any file
File gets encrypted on the client before being sent to a Socket.IO server
The user on the other end receives the file and is able to decrypt and download it.
Note: there is no database connected to the Socket.IO server. It just listens and responds to whoever connects to it.
I've got the encryption/decryption part covered. The only open question is how to read the uploaded file into a variable so it can be encrypted, and how to do the opposite on the recipient's end (decrypt it and make it downloadable).
Thanks again in advance :)
I think these are your questions:
How to read a file that was opened/dropped into an <input type="file"> element
How to send a file to a server
How to receive a file from a server
When a user opens a file on your file element, you'll be able to use its files property:
for (const file of fileInputEl.files) {
// Do something with file here...
}
Each file implements the Blob interface, which means you can call await file.arrayBuffer() to get an ArrayBuffer, which you can likely use directly in your other library. At a minimum, you can create your byte array from it.
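Putting those two steps together (a small sketch; Blob.arrayBuffer() is standard in modern browsers and also available in Node 18+, which makes it easy to try outside a browser):

```javascript
// Read a picked file's raw bytes so they can be handed to encryption code.
async function fileToBytes(file) {
  const buf = await file.arrayBuffer(); // Blob -> ArrayBuffer
  return new Uint8Array(buf);           // byte view over the same data
}

// Usage in the browser (hypothetical element id):
// const bytes = await fileToBytes(fileInputEl.files[0]);
```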
Now, to send data, I strongly recommend that you use HTTP rather than Socket.IO. If you're only sending data one way, there is no need for a WebSocket connection or Socket.IO. If you make a normal HTTP request, you offload all the handling of it to the browser. On the upload end, it can be as simple as:
fetch('https://files.example.com/some-id-here', {
  method: 'PUT',
  body: file
});
On the receive end, you can simply open a link <a href="https://files.example.com/some-id-here">.
Now, the server part... You say that you want to just pass this file through, but you didn't specify what you're doing on the server. Speaking abstractly: when you receive a request for a file, you can simply wait and not reply with data until the sending end connects and starts uploading. When the sending end sends data, forward it immediately to the receiving end. In fact, you can pipe the request from the sending end into the response to the receiving end.
You'll probably need some initial signalling to choose an ID, so that both ends know where to send to and receive from. You can handle this via your usual methods in your chat protocol.
One more thing to consider: WebRTC. There are several off-the-shelf tools for this already, where the data can be sent peer-to-peer, saving you some bandwidth. It comes with some complexity, but it might be useful to you.
I'm implementing image upload via a browser form, working with AWS and Node.js. The flow is that the user selects a file and provides additional info, and it is all sent to the backend as multipart/form-data.
This works great: the payload goes through API Gateway ---> Lambda, and the Lambda uploads to an S3 bucket. I'm using busboy to deal with the multipart data and end up with a nice JSON object containing all the data sent from the frontend, something like:
{
  userName: "Homer Simpson",
  file: base64encoded_string,
}
Then I grab this base64encoded_string and upload it to S3, so the file sits there and I'm able to open it, download it, etc.
Now, obviously I don't trust any input from the frontend, and I wonder what the best way is to ensure that the file being sent is not malicious.
In this case I need to allow only image uploads, say png/jpg/jpeg, up to 2 MB in size.
Busboy gives me the MIME type, encoding and other details, but I'm not sure whether this is reliable enough or whether I should use something like mmmagic or similar. How secure and reliable would these solutions be?
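The size part of that requirement is easy to enforce on the decoded bytes rather than the base64 string (a hypothetical helper; the 2 MB limit comes from the requirement above):

```javascript
// Decode the base64 payload server-side and enforce the size limit
// before touching S3. Field/limit names are illustrative.
function decodeAndCheck(base64String, maxBytes = 2 * 1024 * 1024) {
  const bytes = Buffer.from(base64String, 'base64');
  if (bytes.length > maxBytes) {
    throw new Error(`file too large: ${bytes.length} bytes`);
  }
  return bytes; // pass this Buffer as the Body of the S3 upload
}
```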
Any pointers would be much appreciated.
OWASP has a section on this with some ideas. Anyway, I found that the best method to secure an image upload is to convert it, period. If you can convert it, it's an image, and you can be sure that any attached info (code, hidden data, etc.) is removed by the conversion process; if you can't, it's not an image.
Another advantage is that you can strip EXIF info, add some data (watermarks, for example), etc.
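If you also want a cheap pre-check before running the conversion, sniffing the file's magic bytes is more reliable than the client-supplied MIME type. A sketch covering only the PNG and JPEG signatures (this complements, rather than replaces, the re-encoding step):

```javascript
// Well-known file signatures ("magic bytes") for the allowed formats.
const MAGIC = {
  png:  Buffer.from([0x89, 0x50, 0x4e, 0x47, 0x0d, 0x0a, 0x1a, 0x0a]),
  jpeg: Buffer.from([0xff, 0xd8, 0xff]),
};

// Returns 'png', 'jpeg', or null if the buffer matches neither signature.
function sniffImageType(buffer) {
  for (const [type, magic] of Object.entries(MAGIC)) {
    if (buffer.length >= magic.length && buffer.subarray(0, magic.length).equals(magic)) {
      return type;
    }
  }
  return null; // reject before attempting conversion
}
```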
I want to allow upload of a maximum of 3 files, each 200 KB max.
A 550 KB file a, a 25 KB file b and a 25 KB file c are not allowed.
I want to abort the process the moment the server has received 200 KB plus one bit of the first file.
Otherwise hackers can celebrate...
I don't care if the frontend GUI doesn't get a response header telling the client that the process has been aborted, since the backend validations exist only to stop hackers; anyone who simply uses my GUI without trying to hack would never send 550 KB in the first place. The GUI was designed to send the proper files anyway.
I have tried formidable and multer (on two different branches).
However, the file gets saved in Node's temp directory, which I don't know how to get to; this is the path, if it tells you anything:
/var/folders/sc/fjg8d5j16bnb_9st9h2jb8c40000gn/T/upload_2da9cf165791672f3099372a5e5801da
Is there a way to do this? Maybe you could post a guide here, or show a quick example making this happen with Node.js and Express?
I have tried searching Google but found nothing.
I am using the Scala Play2 framework and trying to convert SVG string data to other file types such as PDF, PNG and JPEG, and send the result to the client as a file.
What I want to achieve is:
the client sends data via Ajax (a POST with a really huge JSON payload)
the server generates a file from the JSON
the server returns the file to the client.
But it seems hardly possible to send a file and have the client save it as a static file. So I am planning to create a new static file on the client's request, return its access URL to the client side, open it via JavaScript, and delete the file on the server after the client finishes downloading. In this approach, I have to
def generateFile = {
...
...
outputStream.flush() // save the file to a disk
}
and
Ok.sendFile(new File("foo.pdf"))
I need to write the file to and read it from a storage disk, and I don't think this is an efficient way.
Is there any better way to achieve what I want?
Thank you in advance.
Why do you think this is not efficient enough?
I've seen a similar approach in a project:
Images are converted and stored in an arbitrary tmp directory using a special naming scheme
A dedicated server resource streams images to the client
A system cronjob triggered every 5 minutes deletes images older than 5 minutes from the tmp directory
The difference was that the image data (in your case the SVG string) was not sent by the client but was stored in a database.
Maybe you could skip the step of writing images to disk if your conversion library is able to generate images in memory.
I'm trying to implement an HTML controller in my web app that uploads files from the client to my Azure blob storage.
I know how to do it on the server side with C#, but that solution isn't right for me because I'm dealing with a large volume of files (uploaded by the client), so I don't want to route them through my server side; I want the client to upload them straight to blob storage.
But here is where I'm lost; maybe you can help me.
Objective: I need to grant SAS for that user.
Solution: I call (using AJAX) a server-side method that generates the string (URL + SAS token).
Now all that is left to do is split the files into chunks and upload them using the URL with the token that I generated on the server side.
I read a lot of articles about it, but every article says different things; half of them are from the period when Azure did not support CORS, so a huge number of them are out of date.
How can I do the last two things the right way:
1. Chunk the file.
2. Upload the chunks.
One last thing: I read in some articles that I need to split the file into chunks, then upload all the chunks, and then commit them so they become one file in storage (maybe I got it the wrong way).
Anyway, if somebody could help me with guidelines or anything else that will help me finish these last two jobs, it would be much appreciated.
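For the chunk-and-commit part, what those articles describe is Azure Blob Storage's Put Block / Put Block List REST operations: each chunk is uploaded as a block with a base64 block ID, and a final request commits the block list into one blob. A hedged sketch against a SAS URL (block size, helper names, and error handling are simplified; the SAS token must grant write access):

```javascript
// Plan the chunks: offsets plus equal-length base64 block IDs,
// as required by the Put Block operation.
function planBlocks(fileSize, blockSize) {
  const blocks = [];
  for (let offset = 0, n = 0; offset < fileSize; offset += blockSize, n++) {
    blocks.push({
      offset,
      end: Math.min(offset + blockSize, fileSize),
      id: btoa(String(n).padStart(6, '0')), // zero-padded so all IDs match in length
    });
  }
  return blocks;
}

// Upload each chunk with Put Block, then commit with Put Block List.
async function uploadBlob(file, sasUrl, blockSize = 4 * 1024 * 1024) {
  const blocks = planBlocks(file.size, blockSize);
  for (const b of blocks) {
    await fetch(`${sasUrl}&comp=block&blockid=${encodeURIComponent(b.id)}`, {
      method: 'PUT',
      body: file.slice(b.offset, b.end), // Blob.slice -> one chunk
    });
  }
  const list = blocks.map(b => `<Latest>${b.id}</Latest>`).join('');
  await fetch(`${sasUrl}&comp=blocklist`, {
    method: 'PUT',
    body: `<?xml version="1.0" encoding="utf-8"?><BlockList>${list}</BlockList>`,
  });
}
```

Until the block list is committed, the blob does not appear in the container, so a failed upload can simply be retried block by block.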
*Update:
The errors I get (1. OPTIONS, 2. headers):
*Update 2:
This is how I set the CORS: