I am trying to build an image upload system with Node and Azure Blob Storage. I have decided to use the Azure Blob browser JS SDK to upload the images. I have a dilemma in this process: how do I tell the server the blob name? I thought of the following approach, but it has several problems:
Let's say I give the blob a UUID and send the same UUID to the server, but the client JS can be changed, so the real blob name and the one sent to the server can differ.
It might be that my approach is completely trash. Please reply, I am a newbie to web development.
If I understand your question, I'm doing something similar in ASP.NET and SQL, where I actually store the blob itself in Azure Storage, BUT in a SQL table it's really just a row of metadata: each image has an internal numerical seed ID, a "user-defined" name (which does not have to be unique across the system), and the GUID for the blob name stored in Azure.
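A rough sketch of how the browser side could tie into that, assuming a container SAS URL and a hypothetical /api/images endpoint (both are illustrative, not part of the answer above):

import { ContainerClient } from "@azure/storage-blob";

async function uploadProfileImage(file, containerSasUrl, userDefinedName) {
  const blobName = crypto.randomUUID(); // GUID used as the blob name

  // 1. Upload the file straight to Blob Storage under the GUID name
  //    (uploadData accepts a File/Blob in recent versions of the SDK).
  const containerClient = new ContainerClient(containerSasUrl);
  await containerClient.getBlockBlobClient(blobName).uploadData(file);

  // 2. Tell the back end which blob name was used so it can write the
  //    metadata row (seed ID, user-defined name, GUID).
  await fetch("/api/images", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ blobName, userDefinedName }),
  });
}

Since the server writes the metadata row, it can also verify (for example with a Get Blob Properties call) that a blob with the reported GUID really exists before trusting the client, which covers the worry about the client lying about the name.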
I have a Cordova application which downloads a zip file as a blob from Azure. Since I am very new to Azure, I would like to know whether it is okay, security-wise, to access the Azure blob with a SAS URL from the Cordova application.
My point is that I would need to append the shared access signature (SAS) token to the blob URL, something like below.
https://myazureportal.container.blobs/myblob?MY_SAS
This way my JavaScript code will have the SAS hard-coded. What is the correct approach, since I would prefer to access the blob using JavaScript only, and preferably without writing any server-side code if possible?
If I use the SAS inside the JavaScript files of my Cordova application, is it a security flaw? If so, is there any approach to implement the same using purely JavaScript?
Things I tried:
Created a back-end Web API service in ASP.NET Core; this way I am able to download the blob file, but what I am looking for is a pure JavaScript approach.
Apart from the point mentioned by Eric about code being decompiled, there are a few other things you would need to worry about.
If you are embedding the SAS URL in your application, you will have to make them long-lived i.e. with an expiry date far out in future. That's a security risk and is against best practices.
A shared access signature is created using an account key and becomes invalid the moment you regenerate your account key. If you're embedding SAS URL in your application and have to regenerate your account key for any reason, your SAS URL becomes essentially useless.
You can learn more about the best practices for SAS Token here: https://learn.microsoft.com/en-us/azure/storage/common/storage-sas-overview#best-practices-when-using-sas.
Yes, it is a security flaw, as your app can be decompiled and your code inspected. If you want to keep this approach, at least have a login connected to a back-end that sends the SAS back to your front-end.
Ideally you would do everything in the back-end and return the blob to your front-end.
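A minimal sketch of that pattern, assuming a hypothetical /api/sas endpoint on the back end that returns a short-lived, blob-scoped SAS URL to an authenticated user:

async function downloadZip(blobName) {
  // Authenticated call to your own back end; it generates a SAS that expires
  // in a few minutes and is scoped to just this blob.
  const resp = await fetch(`/api/sas?blob=${encodeURIComponent(blobName)}`, {
    credentials: "include",
  });
  const { sasUrl } = await resp.json();

  // Use the SAS URL directly from JavaScript; nothing long-lived ships
  // inside the Cordova package.
  return (await fetch(sasUrl)).blob();
}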
Originally I made a website that can retrieve data from an Azure SQL DB, but the requirement has changed: my website is now expected to read those blob JSON files as well, and those JSON files are stored in different Azure folders. I know nothing about JavaScript.
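For what it's worth, once a SAS URL (or anonymous read access on the container) is available, reading a JSON blob from JavaScript is just an HTTP request; a minimal sketch:

async function readJsonBlob(blobSasUrl) {
  const response = await fetch(blobSasUrl);
  if (!response.ok) throw new Error(`Blob request failed: ${response.status}`);
  return response.json(); // parsed JSON object
}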
I have a web application with a JavaScript frontend and a Java backend.
I have a use case where users can upload their profile pictures. The use case is simple: send the image from the user's browser and store it in S3 with a maximum file size. Currently the frontend sends the image data stream to the backend, and the backend then stores the image in S3 using the AWS Java SDK.
The backend is obliged to first store this image in a file system in order to know the image file size (and avoid reading more bytes than a certain maximum), since S3 requires the PUT Object request to be sent along with the file's Content-Length.
Is there any other way I can do this with AWS? Using another service? Maybe Lambda? I don't like this method of having to store the file in a file system first and then open a stream again to send it to S3.
Thank you very much in advance.
You might get the file size on the client side as mentioned here, but consider browser support.
You shouldn't share your keys with the client-side code. I believe Query String Authentication should be used in this scenario.
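A quick sketch of the client-side size check mentioned above (the 2 MB limit and the input id are just example values):

const MAX_BYTES = 2 * 1024 * 1024;

document.querySelector("#avatar").addEventListener("change", (event) => {
  const file = event.target.files[0];
  if (file && file.size > MAX_BYTES) {
    alert("Image is too large");
    event.target.value = ""; // reset the file input
  }
});

Keep in mind this is only a convenience for the user; the back end still has to enforce the limit, because client-side code can be bypassed.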
Assuming your maximum file size is less than your available memory, why can't you just read it into a byte[] or something similar that you can send to S3 without writing it to disk? You should also be able to get the size that way.
It depends on the S3 Java client you are using. If it allows you to read from a ByteArrayInputStream, then you should be able to do whatever you need.
Looks like you can use an InputStream. See the javadoc:
// Constructor overload that takes a stream plus metadata; when uploading from
// a stream, the content length is supplied through the ObjectMetadata:
new PutObjectRequest(String bucketName,
                     String key,
                     InputStream input,
                     ObjectMetadata metadata)
In a web app I'm using Cropit to let the user upload and crop an image. Then I need to upload this image to the backend server.
Cropit returns the cropped image in Data URI format (type string).
My question is: what's the best way to now upload the cropped image to the backend server?
So far I've figured out two options here:
Send the Data URI from the client as a simple string, then convert it to binary data on the server, and save the image to disk.
Convert the Data URI to binary on the client and attach it to a FormData input, then send it to the server, and save the image to disk.
If I understand correctly, there's no native JS way to send a Data URI as multipart/form-data. Is this right?
Is it better (i.e. more performant / safer) to use approach 1 or 2? Or is preferable to do it in another way that I didn't mention?
Thanks!
The best way is to send the Data URI via a POST method.
I've found that the fastest, most reliable method is to turn the image into a Data URI.
With a multipart upload there could be issues sending the data and encoding the correct values, and it becomes a sticky mess. With the resources we have today, a Data URI is the best suggestion.
Edit: Depending on the type of server you are using, it might be a smarter option to use a multipart upload. For example, an AWS Lambda may only allow 5 MB of data in the request. In that case the best option would be to use a presigned URL with a multipart upload to S3, handled on the frontend.
Essentially, the correct solution depends upon your architecture.
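For the Data URI route, a minimal sketch of sending it as a plain string in a POST body (the /api/avatar endpoint is made up for the example, and the export call is Cropit's documented way to get the cropped image as a Data URI):

const dataUri = $(".image-editor").cropit("export"); // Data URI string

fetch("/api/avatar", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ image: dataUri }),
});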
I hope you are aware of the issues with HTML5 canvas-based image editing for various image sizes, canvas sizes and picture quality.
As you mentioned, since you have the image in Data URI format, you can write it to a file in your server-side logic (I am not sure which server-side technologies you use).
Another approach could be to use the CLI tool ImageMagick (http://www.imagemagick.org/) on the server side to process the image, i.e. upload the file to the server along with the information needed to edit the image, and process it there. For gathering the edit information on the client you can use a client-side JavaScript library such as imgAreaSelect (http://odyniec.net/projects/imgareaselect/).
As you said, we can save the data URL as a file on the server, and we can also save it locally using JavaScript, with the code below.
using PHP :
$uri_ = 'data://' . substr($uri, 5);
$binary = file_get_contents($uri_);
For JavaScript, refer to this GitHub project.
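Roughly what that client-side approach looks like (a sketch, assuming a base64-encoded Data URI):

function downloadDataUri(dataUri, fileName) {
  const [header, base64] = dataUri.split(",");
  const mime = header.match(/data:(.*?);base64/)[1];
  const bytes = atob(base64);
  const buffer = new Uint8Array(bytes.length);
  for (let i = 0; i < bytes.length; i++) buffer[i] = bytes.charCodeAt(i);

  // Wrap the bytes in a Blob and trigger a download in the browser.
  const url = URL.createObjectURL(new Blob([buffer], { type: mime }));
  const link = document.createElement("a");
  link.href = url;
  link.download = fileName;
  link.click();
  URL.revokeObjectURL(url);
}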
The best way to do it:
If you want the user to be able to download the file as soon as they are done editing the image, go for the JavaScript method, because it runs on the client machine and creates the file faster than the server side.
If you want to save the file on the server for later use, create the file using server-side code.
I have a web app (MVC4) with a file upload, and I made a terrible mistake: I let the client upload files to my server (an Azure virtual machine) directly.
In order to do it, I set (in the Web.config):
maxRequestLength="2097151"
executionTimeout="3600"
maxAllowedContentLength="4294967295"
Now I understand that this is not the way to do it.
So what I want is to be able to upload the user files directly to my Azure Blob Storage without the files going through my site.
I managed to upload files to my storage with C#, but I don't want to send the files to the server side and take care of them there, because that means the files are first uploaded to the server and then moved to blob storage, and this is not good for me because I'm dealing with very large files.
I need to transfer the files to the blob storage without going through the server.
How can I do it? I didn't manage to find many articles addressing this issue; I just read that SAS and CORS help address the problem, but without actual guidelines to follow.
You are right that CORS and SAS are the correct way to do this. I would recommend reading the Create and Use a SAS with the Blob Service article for an introduction. Please also see our demo at Build 2014 for a sample application that lists and downloads blobs in JavaScript using CORS and SAS.
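Once CORS is enabled on the storage account and your server hands the browser a SAS for the target blob, the upload itself can be a single PUT from JavaScript; a rough sketch (the blob URL with SAS is a placeholder your server would provide):

async function uploadToBlob(file, blobUrlWithSas) {
  const response = await fetch(blobUrlWithSas, {
    method: "PUT",
    headers: {
      "x-ms-blob-type": "BlockBlob", // required by the Put Blob REST call
      "Content-Type": file.type || "application/octet-stream",
    },
    body: file,
  });
  if (!response.ok) throw new Error(`Upload failed: ${response.status}`);
}

For very large files you would upload in blocks (Put Block / Put Block List) instead of a single Put Blob; a client library such as the Azure Storage JavaScript SDK can handle that chunking for you.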