Uploading a large file (100MB+) crashes Chrome only - JavaScript

I am allowing users to upload CSV files through the website. The file is read using the JavaScript File API and then sent to the server to be saved.
, upload: function (prefix, numberType, file, name) {
    // Set the data to be sent along
    this.attributes = {
        'upload': true,
        'prefix': prefix,
        'file': file,
        'name': name,
        'numberType': numberType
    };
    console.log('upload', this); // This logs correctly in the console

    // This is where Chrome crashes
    return this.sync('create', this, {
        xhr: function () {
            var xhr = $.ajaxSettings.xhr();
            xhr.upload.onprogress = function (evt) {
                var percent = parseInt(evt.loaded / evt.total * 100, 10) + '%';
                document.querySelector('.uploadProgressBar').style.width = percent;
                document.querySelector('#uploadNow').classList.add('percentageUpload');
                document.querySelector('#uploadNow').innerText = percent;
            };
            return xhr;
        }
    });
}
Inspecting the network tab, it looks like the request is never sent, so it breaks while the request is being created. It only breaks when the file is around 100MB; smaller files upload fine. It also works fine on both Safari and Firefox, so it's a Chrome-specific issue. Is this a known issue with Chrome having trouble with large files?
I'm thinking the only way around this is to split the file into chunks and piece it back together on the server. That is certainly possible, but it would be worth knowing whether this is a limitation to note for the future.

The browser crashes because it runs out of memory.
Instead of loading the whole file into memory, pass the File object to XMLHttpRequest so that Chrome can stream the file contents as part of the upload.
Use the FormData object for this:
// your file input
const file = document.getElementById('file').files[0];

// your form
const form = new FormData();
form.append('file', file);

const xhr = $.ajaxSettings.xhr();
xhr.upload.onprogress = function (evt) {
    var percent = parseInt(evt.loaded / evt.total * 100, 10) + '%';
    document.querySelector('.uploadProgressBar').style.width = percent;
    document.querySelector('#uploadNow').classList.add('percentageUpload');
    document.querySelector('#uploadNow').innerText = percent;
};
xhr.open('POST', 'http://example.com/'); // URL where you want to upload
xhr.send(form);
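Since the original code goes through Backbone's sync (and therefore jQuery's $.ajax), the same idea should carry over there; a minimal sketch, assuming the endpoint stays the same, is to hand the FormData over as the request body and disable jQuery's default processing:

// Hypothetical adaptation of the upload method above: build a FormData
// instead of putting the File into this.attributes, then pass it through
// to jQuery via the sync options.
var form = new FormData();
form.append('upload', true);
form.append('prefix', prefix);
form.append('file', file);        // the File object, streamed by the browser
form.append('name', name);
form.append('numberType', numberType);

return this.sync('create', this, {
    data: form,
    processData: false,  // don't let jQuery serialize the FormData
    contentType: false   // let the browser set the multipart boundary
});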

Related

What is the difference between file upload using FileReader and FormData?

There are two ways I can upload files using Ajax (XHR2). First, I can read the file content as an array buffer or a binary string and then simply send it using the XHR send method, as shown here:
function uploadFile(img, file) {
    const reader = new FileReader();
    const xhr = new XMLHttpRequest();

    xhr.upload.addEventListener("progress", function (e) {
        if (e.lengthComputable) {
            const percentage = Math.round((e.loaded * 100) / e.total);
            // Do something with percentage
        }
    });
    xhr.upload.addEventListener("load", (e) => console.log('Do something more'));

    xhr.open("POST", "some-url");
    xhr.overrideMimeType('text/plain; charset=x-user-defined-binary');

    reader.onload = function (evt) {
        xhr.send(evt.target.result);
    };
    reader.readAsBinaryString(file);
}
Second, I can use FormData to upload my file as shown here:
var formData = new FormData();
// HTML file input, chosen by user
formData.append("userfile", fileInputElement.files[0]);
var request = new XMLHttpRequest();
request.open("POST", "some-url");
request.send(formData);
Are the two methods equivalent? Is there any advantage of using FileReader instead of FormData? Is one more performant than the other?
First, there is a third option you omitted, which is to send the File directly through xhr.send(file), just as you did with the ArrayBuffer.
That said, there is no advantage to first reading the file into memory through FileReader.
When uploading from a File on disk, the browser doesn't load the full file into memory; it streams it through the request. This is how you can upload gigabytes of data even though they wouldn't fit in memory. It is also friendlier to the disk, since other processes can access it between chunks instead of it being locked.
When reading the File through a FileReader, you ask the browser to read the full file into memory, and when you then send it through XHR, the data from memory is used. You are thus limited by the available memory, bloating it for no good reason, and making the CPU do work when the data could have gone from the disk to the network card almost directly.
As for the difference between formData.append('file', file); xhr.send(formData); and xhr.send(file): basically only the request headers. The former wraps the request body as a multipart/form-data enctype request, while the latter sends the file as-is.
So you'd handle both requests differently on the receiving end.
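For illustration, a minimal sketch of that third option (reusing the fileInputElement and placeholder URL from the question):

// Send the File object directly: the browser streams it from disk and
// sets the request's Content-Type from the file's MIME type.
const file = fileInputElement.files[0];
const xhr = new XMLHttpRequest();
xhr.open("POST", "some-url");
xhr.send(file); // the raw file is the body, no multipart wrapping

On the receiving end, the request body is the file itself, so there is no multipart parsing step.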

Using Simple WebAudioRecorder.js in R/Shiny and posting the recording to the server

I am using WebAudioRecorder.js for making online recordings in an R Shiny app, see:
https://github.com/addpipe/simple-web-audio-recorder-demo
As the format, I chose WAV, and in the JavaScript code the recording is obtained as a blob. I would like the program to save this blob on the server without any dialog.
Here, you shouldn't set the whole file path in JavaScript; you should give it a file name and let PHP put it in the correct folder.
function uploadWaveBlob(blob, encoding) {
    var xhr = new XMLHttpRequest();
    var formData = new FormData();
    var fileName = new Date().toISOString() + '.' + encoding; // note: Date() without "new" returns a string

    formData.append("Wav", blob, fileName);
    xhr.open('POST', uploadUrl); // uploadUrl: the server-side endpoint receiving the file
    xhr.onload = function () {
        console.log('xhr complete');
    };
    xhr.send(formData);
}
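To wire this up, the linked demo delivers the finished recording through the recorder's onComplete callback, so the upload can be triggered from there (a sketch, assuming the callback signature used by WebAudioRecorder.js):

// Hook the upload into the recorder's completion callback;
// onComplete(recorder, blob) is the callback the linked demo uses.
recorder.onComplete = function (rec, blob) {
    uploadWaveBlob(blob, 'wav'); // the question uses the wave format
};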
Otherwise, imagine a client being able to upload to an arbitrary path like /etc/hosts.
The following site gives code that shows how to upload a blob to the server:
https://gist.github.com/primaryobjects/d6cdf5d31242a629b0e4cda1bfc4bff9
The complete solution is available at:
https://github.com/heeringa0/simple-web-audio-recorder
and shows how to integrate Simple WebAudioRecorder.js into an R Shiny app where the recording is saved to the server.

What is the best way to send a large JSON payload to the server?

I have configured maxRequestLength = 10MB in my web.config. But my system has a feature to import a .csv file, and the .csv files my clients import can be larger than 10MB. So I need a performant way to send the large JSON produced by the import.
I thought of zipping the JSON or sending it in parts. Which is the best approach?
Are there other, more efficient options?
Using the File APIs (https://www.html5rocks.com/tutorials/file/dndfiles/), we can minimize the work needed to upload a large file. The technique is to slice the upload into multiple chunks, spawn an XHR for each portion, and put the file together on the server. This is similar to how Gmail uploads large attachments so quickly. Such a technique could also be used to get around Google App Engine's 32MB HTTP request limit.
function upload(blobOrFile) {
    var xhr = new XMLHttpRequest();
    xhr.open('POST', '/server', true);
    xhr.onload = function (e) { /* ... */ };
    xhr.send(blobOrFile);
}

document.querySelector('input[type="file"]').addEventListener('change', function (e) {
    var blob = this.files[0];

    const BYTES_PER_CHUNK = 1024 * 1024; // 1MB chunk sizes.
    const SIZE = blob.size;

    var start = 0;
    var end = BYTES_PER_CHUNK;

    while (start < SIZE) {
        upload(blob.slice(start, end));
        start = end;
        end = start + BYTES_PER_CHUNK;
    }
}, false);
What is not shown here is the code to reconstruct the file on the server.
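Also note that, as written, the chunks carry no ordering information, so the server cannot reliably reassemble them. One way to add it is to tag each chunk with its position (the field names here are made up for illustration):

// Variation of upload() that tells the server where each chunk belongs;
// the form field names are hypothetical.
function uploadChunk(chunk, index, totalChunks, fileName) {
    var formData = new FormData();
    formData.append('chunk', chunk);
    formData.append('index', index);
    formData.append('totalChunks', totalChunks);
    formData.append('fileName', fileName);

    var xhr = new XMLHttpRequest();
    xhr.open('POST', '/server', true);
    xhr.send(formData);
}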
P.S. Of course, functions like http://underscorejs.org/#zip can be used as well.

AJAX Upload file straight after downloading it (without storing)

I'm making a JavaScript script that is going to essentially save an old game development sandbox website before the owners scrap it (and lose all of the games). I've created a script that downloads each game via AJAX, and would like to somehow upload it straight away, also using AJAX. How do I upload the downloaded file (that's stored in responseText, presumably) to a PHP page on another domain (that has cross origin headers enabled)?
I assume there must be a way of uploading the data from the first AJAX request, without transferring the responseText to another AJAX request (used to upload the file)? I've tried transferring the data, but as expected, it causes huge lag (and can crash the browser), as the files can be quite large.
Is there a way that an AJAX request can somehow upload individual packets as soon as they're received?
Thanks,
Dan.
You could use Firefox's moz-chunked-text and moz-chunked-arraybuffer response types. On the JavaScript side you can do something like this:
function downloadUpload() {
    var downloadUrl = "server.com/largeFile.ext";
    var uploadUrl = "receiver.net/upload.php";
    var dataOffset = 0;
    var xhrDownload = new XMLHttpRequest();

    xhrDownload.open("GET", downloadUrl, true);
    xhrDownload.responseType = "moz-chunked-text"; // <- only works in Firefox
    xhrDownload.onprogress = uploadData;
    xhrDownload.send();

    function uploadData() {
        // With moz-chunked-text, responseText holds only the newest chunk
        var data = {
            file: downloadUrl.substring(downloadUrl.lastIndexOf('/') + 1),
            offset: dataOffset,
            chunk: xhrDownload.responseText
        };
        var xhrUpload = new XMLHttpRequest();
        xhrUpload.open("POST", uploadUrl, true);
        xhrUpload.setRequestHeader('Content-Type', 'application/json; charset=UTF-8');
        xhrUpload.send(JSON.stringify(data));
        dataOffset += xhrDownload.responseText.length;
    }
}
On the PHP side you need something like this:
$in = fopen("php://input", "r");
$postContent = stream_get_contents($in);
fclose($in);
$o = json_decode($postContent);
file_put_contents($o->file . '-' . $o->offset . '.txt', $o->chunk);
These snippets will just give you the basic idea, you'll need to optimize the code yourself.

Accessing already downloaded data

I need to download a large (>100MB) file of data via XMLHttpRequest. The data is from a third party and I would like to display the content gradually as it gets downloaded.
So I thought the following would work:
var req = new XMLHttpRequest();
req.open("GET", mirror.url, true);
req.responseType = "arraybuffer";

req.onload = function (oEvent) {
    console.log("DONE");
};

var current_offset = 0;
req.addEventListener("progress", function (event) {
    if (event.lengthComputable) {
        var percentComplete = Math.round(event.loaded * 100 / event.total);
    }

    var data = req.response;
    // Fails here: req.response is null till load is called
    var dataView = new DataView(data);

    while (current_offset < dataView.byteLength) {
        // do work
        ++current_offset;
    }
    console.log("OFFSET " + current_offset + " [" + percentComplete + "%]");
}, false);

try {
    req.send(null);
} catch (er) {
    console.log(er);
}
Sadly, according to the spec, .response is not available until the request has finished loading.
Is there any way to access the already downloaded data without resorting to horrible workarounds like Flash?
EDIT:
Found at least a working non-standard solution for Firefox:
responseType = "moz-chunked-arraybuffer";
See also: WebKit equivalent to Firefox's "moz-chunked-arraybuffer" xhr responseType
One solution would be to use HTTP range requests to download the file in parts (a sketch follows below). For more information, see the blog post HTTP Status: 206 Partial Content and Range Requests.
If you have control over the server, you could also split the file up server-side and download it in chunks. That way you can access the response as the individual chunks are received.
A third approach would be to use WebSockets to download the data.
If you have no control over the server you are downloading from and none of the other options will work, you will probably need to implement a proxy service that acts as an intermediary and lets you download only part of the file.
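For the range-request approach, a minimal sketch (assuming the third-party server honors the Range header and allows it cross-origin; mirror.url is the URL from the question):

// Fetch one window of the file with an HTTP Range request; each
// response arrives as a complete ArrayBuffer that can be parsed
// immediately, without waiting for the rest of the file.
function fetchRange(url, start, end, onChunk) {
    var req = new XMLHttpRequest();
    req.open("GET", url, true);
    req.responseType = "arraybuffer";
    req.setRequestHeader("Range", "bytes=" + start + "-" + end);
    req.onload = function () {
        if (req.status === 206) { // 206 Partial Content: range was honored
            onChunk(req.response, start);
        }
    };
    req.send(null);
}

// First megabyte; request the next range from inside onChunk to continue.
fetchRange(mirror.url, 0, 1024 * 1024 - 1, function (buffer, offset) {
    var dataView = new DataView(buffer);
    // do work on this slice
});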
Have you looked at using a library for this?
I've never used it to load a 100MB file, but PreloadJS is pretty nice for loading all kinds of assets.
