Multi-part download from S3 bucket with Angular - javascript

I want to do a multi-part download from S3 for large files (1 GB+) using Angular. There is a lack of documentation, and I haven't found a single example that explains this in detail.
There is plenty of documentation available for multi-part upload, but not for download.
I am using s3.getObject() method from aws-sdk.
I know that we can get a chunk by passing the Range parameter to s3.getObject(). I need help on how to pass these ranges for large files, how to keep track of the chunks, and how to combine them all at the end.
Let's say I have the user authenticated and I wish to download a large file from a private S3 bucket in multiple parts for faster downloads. Any help is appreciated.
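One way to approach it (a minimal sketch, assuming the AWS SDK for JavaScript v2 and an already-configured AWS.S3 client; the 8 MB part size is an arbitrary choice): first HEAD the object to learn its total size, then issue one ranged getObject per part in parallel, and finally concatenate the bodies in order.

```javascript
// Minimal sketch, not production code. Assumes `s3` is a configured
// AWS.S3 instance (aws-sdk v2) with permission to read the object.
async function multipartDownload(s3, bucket, key, partSize = 8 * 1024 * 1024) {
  // Learn the object's total size first.
  const { ContentLength: total } = await s3
    .headObject({ Bucket: bucket, Key: key })
    .promise();

  // One inclusive byte range per part: "bytes=start-end".
  const ranges = [];
  for (let start = 0; start < total; start += partSize) {
    const end = Math.min(start + partSize - 1, total - 1);
    ranges.push(`bytes=${start}-${end}`);
  }

  // Fetch the parts in parallel; Promise.all preserves order.
  const parts = await Promise.all(
    ranges.map(Range =>
      s3.getObject({ Bucket: bucket, Key: key, Range }).promise()
    )
  );

  // Reassemble the chunks, in order, into a single Blob.
  return new Blob(parts.map(part => part.Body));
}
```

In the browser each Body comes back as a typed array, so the Blob constructor accepts the list directly, and you can hand the result to a download link via URL.createObjectURL. Note that firing all parts at once for a 1 GB file can exhaust memory; a real implementation would cap the number of concurrent range requests.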

Related

How to get an object in AWS S3 right after it has been uploaded?

I'm using the JavaScript SDK for AWS S3. I'm attempting to upload a file to my bucket and, right after that, getting that file's data from S3. So I'm facing two issues:
I need to execute two functions sequentially: one to upload and another to fetch the file's data. I am using the client-side JavaScript SDK for AWS S3, and I don't know how to be sure all files are fully uploaded before starting to fetch them from the bucket.
Also, the object data does not include the URL of the file, so I don't know how to get it.
Any help would be appreciated.
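For what it's worth, a minimal sketch assuming the aws-sdk v2 client: upload().promise() resolves only after the object is fully stored, so awaiting it solves the ordering problem, and getSignedUrl gives you a usable URL for a private object (the bucket/key parameters below are placeholders).

```javascript
// Minimal sketch, assuming aws-sdk v2 and a configured AWS.S3 client.
// upload().promise() resolves only once the object is fully stored,
// so awaiting it guarantees the follow-up read sees the new file.
async function uploadThenFetch(s3, bucket, key, body) {
  await s3.upload({ Bucket: bucket, Key: key, Body: body }).promise();

  // getObject returns the bytes but no URL; for a private object,
  // generate a time-limited signed URL to hand to the browser.
  const data = await s3.getObject({ Bucket: bucket, Key: key }).promise();
  const url = s3.getSignedUrl('getObject', { Bucket: bucket, Key: key, Expires: 300 });
  return { data, url };
}
```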

What is the difference between a multipart file upload and a chunked file upload? Which approach is more efficient?

I am currently trying to upload files to Alfresco CMS and I have two approaches:
Submit the file as a multipart POST request.
Upload the file as chunks from the client and then reassemble the chunks on the Alfresco side using Web Scripts.
Which approach is better, and why?
I did some research online on the two approaches
difference between multipart and chunked protocol
How does HTTP file upload work?
Why is form enctype=multipart/form-data required when uploading a file?
But still unable to conclusively determine the pros and cons.
PS: The size of the files being uploaded can range from 5 MB to 2 GB.
I think using the second approach,
Upload the file as chunks from the client and then reassemble the
chunks on the Alfresco side using Web Scripts,
will be better. The reason is that a form submission is synchronous and will block browser usage until the whole file is uploaded, which in your case can be pretty large. Using a client-side script to send the data lets you show upload progress to the end user and gives you the ability to resume or restart the upload after any network error during the transfer.
You can read this article for more details http://creativejs.com/tutorials/advanced-uploading-techniques-part-1/
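For illustration, a minimal client-side sketch of the chunked approach; /upload-chunk is a hypothetical server endpoint that stores each part and reassembles them by index.

```javascript
// Minimal client-side sketch; /upload-chunk is a hypothetical endpoint
// that stores each part and reassembles them by index on the server.
async function uploadInChunks(file, chunkSize = 5 * 1024 * 1024) {
  const totalChunks = Math.ceil(file.size / chunkSize);
  for (let i = 0; i < totalChunks; i++) {
    const form = new FormData();
    form.append('chunk', file.slice(i * chunkSize, (i + 1) * chunkSize));
    form.append('index', String(i));
    form.append('total', String(totalChunks));
    form.append('filename', file.name);
    const res = await fetch('/upload-chunk', { method: 'POST', body: form });
    if (!res.ok) throw new Error(`Chunk ${i} failed`); // retry/resume hook
    console.log(`Uploaded chunk ${i + 1}/${totalChunks}`); // progress hook
  }
}
```

Because each chunk is a separate request, a failed transfer can resume from the last successful index instead of restarting from zero, which is the main practical advantage for 2 GB files.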

S3 alternative that allows upload-stream files without buffering

Is there a better online storage service than Amazon's S3, which requires multipart uploads and the whole file to be buffered on my server before it's uploaded to them?
I want a service that I can stream uploads to directly (through my server) without any buffering.
Assuming that what you have is a complete file that you want to end up stored on Amazon, there isn't much else you can do.
You can do streaming to S3 by using the low-level API: http://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectOps.html
The only alternatives are to transfer things piece-wise and then later reassemble them. For example, you could use Kinesis Firehose to upload individual file-chunks to S3. Then you'd need some other job to come along and mash the pieces back together into the original file.
You don't have to buffer the entire file on your server before uploading to S3. S3's multipart upload allows you to upload each part separately, and parts can be as small as 5 MB, i.e. your server only has to buffer 5 MB at a time. I use this technique in goofys to simulate streaming writes.
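As an illustration with the AWS SDK for JavaScript v2 (a sketch, not what goofys itself does): s3.upload() accepts a readable stream and performs the multipart upload internally, so the server only holds one part in memory at a time.

```javascript
// Minimal sketch, assuming aws-sdk v2 in Node: s3.upload() accepts a
// readable stream and runs a multipart upload under the hood, holding
// only `partSize` bytes (5 MB minimum) in memory per in-flight part.
const AWS = require('aws-sdk');
const s3 = new AWS.S3();

function streamToS3(readableStream, bucket, key) {
  return s3.upload(
    { Bucket: bucket, Key: key, Body: readableStream },
    { partSize: 5 * 1024 * 1024, queueSize: 1 } // one 5 MB part in flight
  ).promise();
}
```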

Upload File/s From Client To Azure Blob Storage | MVC

I have a web app (MVC4) with a file upload, and I made a terrible mistake: I let clients upload files directly to my server (an Azure virtual machine).
In order to do that, I set (in the Web.config):
maxRequestLength="2097151"
executionTimeout="3600"
maxAllowedContentLength="4294967295"
Now I understand that this is not the way to do it.
So what I want is to be able to upload the users' files directly to my Azure Blob Storage without the files ever reaching my site.
I managed to upload files to my storage with C#, but I don't want to send the files to the server side and handle them there, because that means the files are first uploaded to the server and then moved to blob storage, which is not good for me because I'm dealing with very large files.
I need to transfer the files to the blob storage without going through the server.
How can I do it? I didn't manage to find many articles addressing this issue; I've read that SAS and CORS help address the problem, but without actual guidelines to follow.
You are right that CORS and SAS are the correct way to do this. I would recommend reading the Create and Use a SAS with the Blob Service article for an introduction. Please also see our demo at Build 2014 for a sample application that lists and downloads blobs in JavaScript using CORS and SAS.
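To make the flow concrete, here is a minimal browser-side sketch; /api/getSasUrl is a hypothetical server endpoint that returns a write-enabled SAS URL for the target blob, and CORS is assumed to be enabled on the storage account.

```javascript
// Minimal sketch: browser-side upload straight to Azure Blob Storage
// using a SAS URL issued by the server. Assumes CORS is enabled on the
// storage account; `/api/getSasUrl` is a hypothetical endpoint that
// returns a write-enabled SAS URL for the target blob.
async function uploadToBlob(file) {
  const sasUrl = await fetch(`/api/getSasUrl?name=${encodeURIComponent(file.name)}`)
    .then(res => res.text());

  // A single PUT suffices for blobs up to the Put Blob size limit;
  // larger files should be split into blocks (Put Block / Put Block List).
  await fetch(sasUrl, {
    method: 'PUT',
    headers: { 'x-ms-blob-type': 'BlockBlob' },
    body: file,
  });
}
```

With this pattern the file bytes never touch your web server; it only issues the short-lived SAS token.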

Unzipping a file in node.js to a url path

I'm making an application in the Express framework where the user uploads a zip file and views it on the website. I already have uploading and viewing a single HTML file working; however, I can't figure out how to extract a zip file online. I can currently store the zip file in the database, but when it's pulled from the database it seems impossible to unzip it to a URL rather than to my disk. Where do you think I should start with solving this problem?
I suggest using this module: https://github.com/cthackers/adm-zip - I have succeeded in using it when a user uploads a file that must be unzipped server-side. I think the docs for the native zlib API are missing or not yet supplied. Let me know if this helps.
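For example, a minimal sketch of unpacking an uploaded zip in memory with adm-zip, so the entries can be served from route handlers without touching the disk:

```javascript
// Minimal sketch using adm-zip (https://github.com/cthackers/adm-zip)
// to unpack an uploaded zip buffer in memory and expose its entries.
const AdmZip = require('adm-zip');

function extractEntries(zipBuffer) {
  const zip = new AdmZip(zipBuffer);
  // Map each entry name to its contents so routes can serve them
  // from memory instead of writing to disk.
  const files = {};
  zip.getEntries().forEach(entry => {
    if (!entry.isDirectory) {
      files[entry.entryName] = entry.getData(); // Buffer
    }
  });
  return files;
}
```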
Not sure which zip format you're using, but one example of how it could work is with the zlib API: http://nodejs.org/api/zlib.html
If you use that library to create a stream, you can write that stream to the response, in concept. The bigger question in my mind is what you mean by "unzip to a URL": if the zip is an archive with multiple files, what do you expect a user to see at that URL?
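If the upload were a single gzip-compressed file rather than a multi-file .zip archive, a sketch of that streaming idea in Express might look like this (getStoredFileStream is a hypothetical database read):

```javascript
// Sketch only: assumes an Express `app` and that getStoredFileStream()
// (hypothetical) returns the gzip-compressed upload from the database.
// Note zlib handles gzip/deflate streams, not multi-file .zip archives.
const zlib = require('zlib');

app.get('/view/:id', (req, res) => {
  res.setHeader('Content-Type', 'text/html');
  getStoredFileStream(req.params.id)
    .pipe(zlib.createGunzip()) // decompress on the fly
    .pipe(res);                // stream straight to the client
});
```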
