Is there a better online storage service than Amazon's S3, which requires multipart uploads and the whole file to be buffered to my server before it's uploaded to them?
I want a service that I can stream uploads to directly (through my server) without any buffering.
Assuming that what you have is a complete file that you want to end up stored on Amazon, there isn't much else you can do.
You can stream to S3 using the low-level REST API: http://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectOps.html
The only alternatives are to transfer things piecewise and then reassemble them later. For example, you could use Kinesis Firehose to upload individual file chunks to S3. Then you'd need some other job to come along and mash the pieces back together into the original file.
You don't have to buffer the entire file to your server before uploading to S3. S3's multipart upload allows you to upload each part separately; parts can be as small as 5MB, i.e., your server only has to buffer 5MB at a time. I use this technique in goofys to simulate streaming writes.
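A minimal sketch of that part-at-a-time buffering (`splitIntoParts` is illustrative, not an SDK call; S3 requires every part except the last to be at least 5MB):

```javascript
// S3's minimum size for all parts except the last one.
const MIN_PART_SIZE = 5 * 1024 * 1024;

// Collect incoming stream chunks into parts of at least `partSize` bytes;
// only the final part may be smaller. Each emitted Buffer would be sent
// as one UploadPart call, so memory use stays near one part, not one file.
function splitIntoParts(chunks, partSize) {
  const parts = [];
  let pending = [];
  let pendingLen = 0;
  for (const chunk of chunks) {
    pending.push(chunk);
    pendingLen += chunk.length;
    if (pendingLen >= partSize) {
      parts.push(Buffer.concat(pending));
      pending = [];
      pendingLen = 0;
    }
  }
  if (pendingLen > 0) parts.push(Buffer.concat(pending)); // last part may be smaller
  return parts;
}
```

In a real upload each part would be pushed to S3 as soon as it is full, so nothing beyond the current part is ever held in memory.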
Related
I want to do a multi-part download from S3 for large files (1 GB+) using Angular. There is a lack of documentation, and I haven't found a single example which explains this in detail.
There is enough documentation available for multi-part upload.
I am using s3.getObject() method from aws-sdk.
I know that we can get a chunk by passing the Range parameter to s3.getObject(). I need help on how to pass these ranges for large files, how to maintain the chunks, and how to combine all of them at the end.
Let's say I have the user authenticated and I wish to download a large file from a private S3 bucket in multiple parts for faster downloads. Any help is appreciated.
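For reference, the Range values might be computed like this (a sketch; the object's total size is assumed to come from s3.headObject, and each range would be passed as the Range parameter to s3.getObject):

```javascript
// Compute the inclusive byte ranges for a multi-part download.
// Each returned string can be passed as the Range parameter to
// s3.getObject({ Bucket, Key, Range }).
function buildRanges(totalSize, chunkSize) {
  const ranges = [];
  for (let start = 0; start < totalSize; start += chunkSize) {
    const end = Math.min(start + chunkSize, totalSize) - 1; // Range end is inclusive
    ranges.push(`bytes=${start}-${end}`);
  }
  return ranges;
}

// After fetching every chunk (e.g. in parallel with Promise.all),
// the bodies are concatenated in the original order.
function combineChunks(buffers) {
  return Buffer.concat(buffers);
}
```

The chunks must be reassembled in the same order the ranges were issued, regardless of which download finishes first.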
I have a web application with a Javascript Frontend and a Java Backend.
I have a Use Case where the users can upload their profile pictures. The Use Case is simple: send the image from the user's browser and store it in S3, with a maximum file size. Currently the Frontend sends the image data stream to the Backend, which then stores the image in S3 using the AWS Java SDK.
The Backend currently has to store this image in a file system first in order to know the image file size (and avoid reading more bytes than a certain maximum), since S3 requires the PUT Object request to be sent along with the file's Content-Length.
Is there any other way I can do this with AWS? Using another service, maybe Lambda? I don't like this approach of having to store the file in a file system first and then open a stream again to send it to S3.
Thank you very much in advance.
You might get the file size on the client side as mentioned here but consider browser support.
You shouldn't share your keys with the client side code. I believe Query String Authentication should be used in this scenario.
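A minimal sketch of that client-side size check (`withinLimit` and the limit value are illustrative; in the browser, `file` would be a File taken from an `<input type="file">`, which exposes its byte length as `file.size`):

```javascript
// Reject oversized files on the client before any upload starts.
// `file` only needs a numeric `size` property (as the browser File API provides).
function withinLimit(file, maxBytes) {
  return typeof file.size === 'number' && file.size <= maxBytes;
}
```

This is only a user-experience guard; since client code can be bypassed, the size still has to be enforced server-side (or by the presigned-request policy).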
Assuming your maximum file size is less than your available memory, why can't you just read it into a byte[] or something similar, which you can send to S3 without writing it to disk? You should also be able to get the size that way.
It depends on the S3 Java client you are using. If it allows you to read from a ByteArrayInputStream, then you should be able to do that.
Looks like you can use InputStream. See javadoc.
new PutObjectRequest(String bucketName,
                     String key,
                     InputStream input,
                     ObjectMetadata metadata)
In a web app I'm using Cropit to let the user upload and crop an image. Then I need to upload this image to the backend server.
Cropit returns the cropped image in Data URI format (type string).
My question is: what's the best way to now upload the cropped image to the backend server?
So far I've figured out two options here:
Send the Data URI from the client as a simple string, then convert it to binary data on the server, and save the image to disk.
Convert the Data URI to binary on the client and attach it to a FormData input, then send it to the server, and save the image to disk.
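Option 2 might be sketched like this (assuming a base64-encoded Data URI; the MIME type and field names in the comment are examples):

```javascript
// Decode a base64 Data URI ("data:image/png;base64,....") into raw bytes.
function dataUriToBytes(dataUri) {
  const base64 = dataUri.slice(dataUri.indexOf(',') + 1); // drop the "data:...;base64," prefix
  const binary = atob(base64);                            // base64 -> binary string
  const bytes = new Uint8Array(binary.length);
  for (let i = 0; i < binary.length; i++) bytes[i] = binary.charCodeAt(i);
  return bytes;
}

// In the browser, the bytes can then be wrapped and attached to a form:
//   const blob = new Blob([dataUriToBytes(uri)], { type: 'image/png' });
//   const form = new FormData();
//   form.append('image', blob, 'avatar.png');
```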
If I understand correctly, there's no native JS way to send Data URI as a multipart/form-data. Is this right?
Is it better (i.e. more performant / safer) to use approach 1 or 2? Or is it preferable to do it in another way that I didn't mention?
Thanks!
The best way is to send the Data URI via a POST method.
I've found that the fastest, most reliable method is to turn the image into a data URI.
With a multipart upload there could be issues sending the data and encoding the correct values, and it becomes a sticky mess. With the resources we have today, a data URI is the best suggestion.
Edit: Depending on the type of server you are using, it might be a smarter option to use multi-part upload. For example, an AWS Lambda may only allow 5MB of data in the request. In that case the best option would be to use a presigned URL with multi-part upload via S3, which should be handled by the frontend web portion.
Essentially, the correct solution depends upon your architecture.
I hope you are aware of the issues with HTML5-canvas-based image editing, for various image sizes, canvas sizes, and picture quality.
As you mentioned, since you have the data format you can write it to a file in the server-side logic (I am not sure which server-side technologies you use).
Another approach could be to use the CLI tool ImageMagick (http://www.imagemagick.org/) on the server side to process the image, i.e., upload the file to the server with the additional information needed to edit the image, and process it at the server. For gathering edit information from the client you can use any of the client-side JavaScript libraries, like http://odyniec.net/projects/imgareaselect/.
As you said, we can save a data URL as a file using the server, and we can also save it locally using JavaScript, as in the code below.
Using PHP:
$uri_ = 'data://' . substr($uri, 5);   // rewrite "data:..." to PHP's data:// stream wrapper
$binary = file_get_contents($uri_);    // decode the Data URI into binary data
For JavaScript, refer to this GitHub project.
The best way to decide:
If you want to give the file to the user to download as soon as they are done editing the image, then go for the JavaScript method, because it runs on the client machine and creates the file faster than the server side.
If you want to save the file on the server for later use, then create the file using server-side code.
I have a WebApp (MVC4) with a file upload, and I made a terrible mistake: I let the client upload files to my server (an Azure Virtual Machine) directly.
In order to do it, I set (in the Web.config):
maxRequestLength="2097151"
executionTimeout="3600"
maxAllowedContentLength="4294967295"
Now I understand that it's not the way to do it.
So what I want is to be able to upload the user files directly to my Azure Blob Storage without the files getting to my site.
I managed to upload files to my storage with C#, but I don't want to send the files to the server side and take care of them there, because it means the files are first uploaded to the server and then moved to blob storage, and this is not good for me because I'm dealing with very large files.
I need to transfer the files to the blob storage without going through the server.
How can I do it? I didn't manage to find many articles addressing this issue; I've just read about SAS and CORS, which help address the problem, but without actual guidelines to follow.
You are right that CORS and SAS are the correct way to do this. I would recommend reading the Create and Use a SAS with the Blob Service article for an introduction. Please also see our demo at Build 2014 for a sample application that lists and downloads blobs in JavaScript using CORS and SAS.
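To make the flow concrete, here is a hedged sketch: the server issues a short-lived SAS token, and the browser uploads straight to Blob Storage (account, container, and endpoint names below are examples; the x-ms-blob-type header is required by the Put Blob operation):

```javascript
// Build the blob URL with the server-issued SAS token appended as a
// query string. The upload then goes directly from the browser to
// Azure Blob Storage, never touching the web server.
function buildSasUrl(account, container, blobName, sasToken) {
  return `https://${account}.blob.core.windows.net/${container}/` +
         `${encodeURIComponent(blobName)}?${sasToken}`;
}

// In the browser, the file can then be uploaded with a single PUT:
//   fetch(buildSasUrl('myaccount', 'uploads', file.name, sas), {
//     method: 'PUT',
//     headers: { 'x-ms-blob-type': 'BlockBlob' }, // required by Put Blob
//     body: file,
//   });
```

CORS rules on the storage account must allow the site's origin and the PUT verb for this to work from the browser.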
Scenario:
I have some JSON files in an Amazon S3 bucket.
In the same bucket I have an HTML/JS viewer that uses those JSON files as its data source.
I'd like users to only be able to see the data using my HTML/JS viewer and not, for example, by downloading the JSON files locally or using a 3rd-party viewer.
Is there any possibility to achieve such objective?
This isn't possible with only browser-side code served by a static S3 bucket. No matter how hard you obfuscate or encrypt the data, you still have to provide the code to unlock and display it.
The only way you can (more or less) ensure they have to view your advertisement is to serve the HTML of the viewer itself. It sounds like this JSON isn't changing much if you're just serving it out of an S3 bucket. You can probably just pre-render the HTML with your advertisement and put that in the S3 bucket, rather than having JSON + an HTML viewer.