What is the best way to send a large JSON to the server - JavaScript

I have configured maxRequestLength = 10MB in my web.config. But my system has a CSV import feature, and the .csv files my clients import can be larger than 10MB. So I need a performant way to send a large JSON (the imported data) to the server.
I thought of zipping it or sending the JSON in parts. What is the best approach?
Are there other, more efficient options?

Using the File APIs (https://www.html5rocks.com/tutorials/file/dndfiles/), we can minimize the work to upload a large file. The technique is to slice the upload into multiple chunks, spawn an XHR for each portion, and put the file together on the server. This is similar to how GMail uploads large attachments so quickly. Such a technique could also be used to get around Google App Engine's 32MB http request limit.
function upload(blobOrFile) {
    var xhr = new XMLHttpRequest();
    xhr.open('POST', '/server', true);
    xhr.onload = function(e) { ... };
    xhr.send(blobOrFile);
}

document.querySelector('input[type="file"]').addEventListener('change', function(e) {
    var blob = this.files[0];

    const BYTES_PER_CHUNK = 1024 * 1024; // 1MB chunk sizes.
    const SIZE = blob.size;

    var start = 0;
    var end = BYTES_PER_CHUNK;

    while (start < SIZE) {
        upload(blob.slice(start, end));
        start = end;
        end = start + BYTES_PER_CHUNK;
    }
}, false);
What is not shown here is the code to reconstruct the file on the server.
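For illustration, here is a minimal sketch of that server-side reassembly as a Node/Express endpoint (hypothetical; the question's backend is ASP.NET, but the idea is the same, and it assumes the client is extended to send a chunk index and a total count, which the snippet above does not do):

// Hypothetical Node/Express reassembly endpoint. Assumes the client sends
// POST /server?name=<fileName>&index=<chunkIndex>&total=<chunkCount>
// with the raw chunk bytes as the request body.
const express = require('express');
const fs = require('fs');
const path = require('path');

const app = express();

app.post('/server', express.raw({ type: '*/*', limit: '2mb' }), function (req, res) {
    var name = path.basename(req.query.name);
    var dir = path.join(__dirname, 'chunks', name);
    fs.mkdirSync(dir, { recursive: true });

    // Chunks can arrive out of order (one XHR each), so store each by index.
    fs.writeFileSync(path.join(dir, String(req.query.index)), req.body);

    // Once all chunks are present, concatenate them in order.
    // (A real implementation would also guard against concurrent requests.)
    if (fs.readdirSync(dir).length === Number(req.query.total)) {
        var out = fs.createWriteStream(path.join(__dirname, name));
        for (var i = 0; i < Number(req.query.total); i++) {
            out.write(fs.readFileSync(path.join(dir, String(i))));
        }
        out.end();
    }
    res.sendStatus(200);
});

app.listen(3000);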
P.S. Of course, functions like http://underscorejs.org/#zip can be used as well.

Uploading large file (100mb+) crashes Chrome only

I am allowing users to upload CSV files through the website. The file is read using the JavaScript File API and then sent to the server to be saved.
, upload: function (prefix, numberType, file, name) {
    this.attributes = { // Set the data to be sent along
        'upload': true,
        'prefix': prefix,
        'file': file,
        'name': name,
        'numberType': numberType
    };
    console.log('upload', this); // This will correctly show in the console
    return this.sync('create', this, { // This is when Chrome crashes
        xhr: function () {
            var xhr = $.ajaxSettings.xhr();
            xhr.upload.onprogress = function (evt) {
                document.querySelector('.uploadProgressBar').style.width = parseInt(evt.loaded / evt.total * 100) + '%';
                document.querySelector('#uploadNow').classList.add('percentageUpload');
                document.querySelector('#uploadNow').innerText = parseInt(evt.loaded / evt.total * 100) + '%';
            };
            return xhr;
        }
    });
}
When inspecting the network tab, it looks like the request is never sent, so it's breaking while the request is being created. It only breaks when the file is around 100MB; smaller files upload fine. It also works fine on both Safari and Firefox, so it's a Chrome-specific issue. Is this a known issue where Chrome has trouble dealing with large files?
I'm thinking the only way to really get around this problem is to split the file into chunks and piece it back together on the server. That is certainly possible, but it would be worth finding out if this is a Chrome limitation to note in the future.
The browser crashes because it runs out of memory.
Instead of loading the file into memory, pass the File object to XMLHttpRequest so that Chrome can stream the file contents during the upload.
Use the FormData object for this:
// your file input
const file = document.getElementById('file').files[0];

// your form
var form = new FormData();
form.append('file', file);

const xhr = $.ajaxSettings.xhr();
xhr.upload.onprogress = function (evt) {
    document.querySelector('.uploadProgressBar').style.width = parseInt(evt.loaded / evt.total * 100) + '%';
    document.querySelector('#uploadNow').classList.add('percentageUpload');
    document.querySelector('#uploadNow').innerText = parseInt(evt.loaded / evt.total * 100) + '%';
};

xhr.open('POST', 'http://example.com/'); // URL where you want to upload
xhr.send(form);
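The key design point here is that appending a File to FormData lets the browser stream the file from disk during send(), instead of the page first reading the whole thing into JavaScript memory, so memory use stays flat even for very large files.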

Display a multi-page TIFF in the browser

I'd like to know if there is any way to display a multi-page .tif image in the browser using client-side code (not server-side), so that the user can navigate between the pages like in common jQuery photo galleries. I found Tiff.js at https://github.com/seikichi/tiff.js, but this library only gives a download link for the multi-page TIFF and does not display it in the HTML.
I can do it server-side using libraries like ImageMagick, LibTiff.Net, etc., but I don't want to, because the number of photos is huge and doing so would consume a large amount of the server's CPU.
Do you know of any alternative solution?
I had this problem too, and converting the images was not an option for us.
You can use the tiff.js library that you linked to; have a look at the demo and then view the source at http://seikichi.github.io/tiff.js/multipage.html.
$(function () {
    Tiff.initialize({TOTAL_MEMORY: 16777216 * 10});
    var xhr = new XMLHttpRequest();
    xhr.open('GET', 'images/multipage.tiff');
    xhr.responseType = 'arraybuffer';
    xhr.onload = function (e) {
        var buffer = xhr.response;
        var tiff = new Tiff({buffer: buffer});
        for (var i = 0, len = tiff.countDirectory(); i < len; ++i) {
            tiff.setDirectory(i);
            var canvas = tiff.toCanvas();
            $('body').append(canvas);
        }
    };
    xhr.send();
});
Replace 'images/multipage.tiff' with the path to your file and it will add each page to the body element (just replace $('body') with your element if you want it somewhere else). It works with single-page TIFFs as well.
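If you want the gallery-style page navigation the question mentions, a minimal sketch along the same lines (reusing the tiff.js calls above; the #viewer, #prev and #next elements are hypothetical) could render one page at a time:

var tiff = null;
var page = 0;

function showPage(i) {
    tiff.setDirectory(i);
    var viewer = document.getElementById('viewer'); // assumed container element
    viewer.innerHTML = '';
    viewer.appendChild(tiff.toCanvas());
}

var xhr = new XMLHttpRequest();
xhr.open('GET', 'images/multipage.tiff');
xhr.responseType = 'arraybuffer';
xhr.onload = function () {
    tiff = new Tiff({buffer: xhr.response});
    showPage(page);
};
xhr.send();

// assumed prev/next buttons
document.getElementById('prev').onclick = function () { if (page > 0) showPage(--page); };
document.getElementById('next').onclick = function () { if (page < tiff.countDirectory() - 1) showPage(++page); };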
Browsers don't support TIFF images natively; check this Wiki link.
You have to generate a PNG image, store it, and show that in the browser in place of the TIFF.

createBlockBlob and commitBlobBlocks create empty files in BlobStorage

I'm developing a web app that can upload large files into Azure Blob Storage.
As a backend, I am using Windows Azure Mobile Services (the web app will generate content for mobile devices) in Node.js.
My client can successfully send chunks of data to the backend, and everything looks fine, but at the end the uploaded file is empty. The data upload was prepared by following this tutorial: it works perfectly when the file is small enough to be uploaded in a single request, but the process fails when the file needs to be broken into chunks. It uses the ReadableStreamBuffer from the tutorial.
Can somebody help me?
Here is the code:
Back-end: createBlobBlockFromStream
[...]
//Get references
var azure = require('azure');
var qs = require('querystring');
var appSettings = require('mobileservice-config').appSettings;
var accountName = appSettings.STORAGE_NAME;
var accountKey = appSettings.STORAGE_KEY;
var host = accountName + '.blob.core.windows.net';
var container = "zips";
//console.log(request.body);
var blobName = request.body.file;
var blobExt = request.body.ext;
var blockId = request.body.blockId;
var data = new Buffer(request.body.data, "base64");
var stream = new ReadableStreamBuffer(data);
var streamLen = stream.size();
var blobFull = blobName+"."+blobExt;
console.log("BlobFull: "+blobFull+"; id: "+blockId+"; len: "+streamLen+"; "+stream);
var blobService = azure.createBlobService(accountName, accountKey, host);
//console.log("blockId: "+blockId+"; container: "+container+";\nblobFull: "+blobFull+"streamLen: "+streamLen);
blobService.createBlobBlockFromStream(blockId, container, blobFull, stream, streamLen,
    function (error, response) {
        if (error) {
            request.respond(statusCodes.INTERNAL_SERVER_ERROR, error);
        } else {
            request.respond(statusCodes.OK, {message : "block created"});
        }
    });
[...]
Back-end: commitBlobBlock
[...]
var azure = require('azure');
var qs = require('querystring');
var appSettings = require('mobileservice-config').appSettings;
var accountName = appSettings.STORAGE_NAME;
var accountKey = appSettings.STORAGE_KEY;
var host = accountName + '.blob.core.windows.net';
var container = "zips";
var blobName = request.body.file;
var blobExt = request.body.ext;
var blobFull = blobName+"."+blobExt;
var blockIdList = request.body.blockList;
console.log("blobFull: "+blobFull+"; blockIdList: "+JSON.stringify(blockIdList));
var blobService = azure.createBlobService(accountName, accountKey, host);
blobService.commitBlobBlocks(container, blobFull, blockIdList, function (error, result) {
    if (error) {
        request.respond(statusCodes.INTERNAL_SERVER_ERROR, error);
    } else {
        request.respond(statusCodes.OK, result);
        blobService.listBlobBlocks(container, blobFull);
    }
});
[...]
The second method returns the correct list of block IDs, so I think the second part of the process works fine. I think it is the first method that fails to write the data inside the blocks, as if it creates empty blocks.
On the client side, I read the file as an ArrayBuffer using the FileReader JS API.
Then I convert it into a Base64-encoded string using the following code. This approach works perfectly if I create the blob in a single call, which is fine for small files.
[...]
//data contains the ArrayBuffer read by the FileReader API
var requestData = new Uint8Array(data);
var binary = "";
for (var i = 0; i < requestData.length; i++) {
    binary += String.fromCharCode(requestData[i]);
}
[...]
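The elided part presumably ends with the actual Base64 encoding; a minimal sketch of that step, assuming the standard btoa function:

var base64Data = btoa(binary); // Base64-encode the binary string for request.body.data
// Note: building `binary` one character at a time is slow for large chunks;
// converting the Uint8Array in slices with String.fromCharCode.apply(null, slice)
// is a common speed-up.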
Any idea?
Thank you,
Ric
Which version of the Azure Storage Node.js SDK are you using? It looks like you might be using an older version; if so I would recommend upgrading to the latest (0.3.0 as of this writing). We’ve improved many areas with the new library, including blob upload; you might be hitting a bug that has already been fixed. Note that there may be breaking changes between versions.
Download the latest Node.js Module (code is also on Github)
https://www.npmjs.org/package/azure-storage
Read our blog post: Microsoft Azure Storage Client Module for Node.js v. 0.2.0 http://blogs.msdn.com/b/windowsazurestorage/archive/2014/06/26/microsoft-azure-storage-client-module-for-node-js-v-0-2-0.aspx
If that’s not the issue, can you check a Fiddler trace (or equivalent) to see if the raw data blocks are being sent to the service?
Not sure if you're still suffering from this problem, but I was experiencing exactly the same thing and came across this while looking for a solution. Well, I found one and thought I'd share.
My problem was not with how I pushed the blocks but with how I committed them. My little proxy server had no knowledge of prior commits; it just pushed the data it was sent and committed it. The trouble was that I wasn't providing the commit message with the previously committed blocks, so it was overwriting them with the current commit each time.
So my solution:
var opts = {
    UncommittedBlocks: [IdOfJustCommitedBlock],
    CommittedBlocks: [IdsOfPreviouslyCommittedBlocks]
};

blobService.commitBlobBlocks('containerName', 'blobName', opts, function (e, r) {});
For me, the bit that broke everything was the format of the opts object: I wasn't providing an array of previously committed block names. It's also worth noting that I had to base64-decode the existing block names, as:
blobService.listBlobBlocks('containerName', 'fileName', 'type IE committed', fn)
returns an object for each block with the name base64 encoded.
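For illustration, decoding those names might look like this (a sketch assuming the legacy SDK's result shape, where the callback receives an object with a CommittedBlocks array of { Name, Size } entries):

blobService.listBlobBlocks('containerName', 'fileName', 'committed', function (error, list) {
    if (error) { return console.error(error); }
    var names = list.CommittedBlocks.map(function (block) {
        // block names come back base64 encoded
        return new Buffer(block.Name, 'base64').toString('utf8');
    });
    console.log(names);
});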
Just for completeness, here's how I push my blocks; req comes from the Express route:
var blobId = blobService.getBlockId('blobName', 'lengthOfPreviouslyCommittedArray + 1 as Int');
var length = req.headers['content-length'];
blobService.createBlobBlockFromStream(blobId, 'containerName', 'blobName', req, length, fn);
Also, with the upload I had a strange issue where the content-length header caused it to break, so I had to delete it from the req.headers object.
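In case it's useful, that workaround is just a one-line mutation of the Express request before handing it to the SDK:

delete req.headers['content-length']; // req.headers is a plain object in Express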
Hope this helps and is detailed enough.

How to get the size and duration of an mp3 file?

I need to calculate the total length of an mp3 file.
Currently I am using a PHP class I found at http://www.zedwood.com/article/php-calculate-duration-of-mp3.
This works perfectly if the mp3 file is on the same server, but if I pass a URL from another site it throws an error. Please help me.
Is there any JavaScript/jQuery function to get the length of an mp3 file?
<?php
include("mp3.class.php");

$f = 'http://cdn.enjoypur.vc/upload_file/5570/5738/5739/7924/Blue%20Eyes%20-%20Yo%20Yo%20Honey%20Singh%20(PagalWorld.com)%20-192Kbps%20.mp3';

$m = new mp3file($f);
$a = $m->get_metadata();

if ($a['Encoding'] == 'Unknown')
    echo "?";
else if ($a['Encoding'] == 'VBR')
    print_r($a);
else if ($a['Encoding'] == 'CBR')
    print_r($a);
unset($a);
?>
Here's how you can get the mp3 duration using the Web Audio API:

const mp3file = 'https://raw.githubusercontent.com/prof3ssorSt3v3/media-sample-files/master/doorbell.mp3'
const audioContext = new (window.AudioContext || window.webkitAudioContext)()

const request = new XMLHttpRequest()
request.open('GET', mp3file, true)
request.responseType = 'arraybuffer'
request.onload = function () {
    audioContext.decodeAudioData(request.response,
        function (buffer) {
            let duration = buffer.duration
            console.log(duration)
            document.write(duration)
        }
    )
}
request.send()
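Note that for a cross-site URL like the one in the question, this only works if the remote server sends the appropriate CORS headers (Access-Control-Allow-Origin); otherwise the XHR is blocked before decodeAudioData ever runs.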
There is actually a library that can run client-side, attempting to fetch just enough of the MP3 to read the ID3 tags:
http://github.com/aadsm/JavaScript-ID3-Reader
Or try the HTML File API:
http://lostechies.com/derickbailey/2013/09/23/getting-audio-file-information-with-htmls-file-api-and-audio-element/
Perhaps the simplest solution is to use the audio HTML element to get the duration, and to obtain the size directly from the file returned by the FileReader object. A code example of this approach is shown below.
One downside of this and all the other solutions presented so far is the 10-20 second delay it takes for the audio tag's durationchange event to fire when loading large (e.g. > 200MB) files. Clearly there is a faster way to get this info, because the duration is shown immediately when the file is opened in the browser as a file:///... URL.
function checkMp3SizeAndDuration() {
    var files = document.getElementById('upload-file').files;
    var file = files[0];
    if (file.size > MAX_FILE_SIZE) { // MAX_FILE_SIZE is assumed to be defined elsewhere
        return;
    }

    var reader = new FileReader();
    var audio = document.createElement('audio');

    reader.onload = function (e) {
        audio.src = e.target.result;
        audio.addEventListener('durationchange', function () {
            console.log("durationchange: " + audio.duration);
        }, false);
        audio.addEventListener('error', function () { // the event name is 'error', not 'onerror'
            alert("Cannot get duration of this file.");
        }, false);
    };
    reader.readAsDataURL(file);
}
Having not been able to find something that was fast and didn't require a bunch of extra boilerplate code, I tweaked an existing server-side JavaScript utility to run directly in the browser. Demo code is available at: https://github.com/eric-gilbertson/fast-mp3-duration
A famous and very useful SPI you can use is MP3 SPI, and the code is also very simple:
File file = new File("filename.mp3");
AudioFileFormat baseFileFormat = AudioSystem.getAudioFileFormat(file);
Map properties = baseFileFormat.properties();
Long duration = (Long) properties.get("duration");
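Note that this is Java, not JavaScript, and with MP3 SPI the duration property is reported in microseconds, so divide by 1,000,000 to get seconds.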
Use the getID3() PHP library, which works for VBR files as well. This sourceforge.net link will help you.
It's very actively developed.

How to upload a binary file in IE8 and send it to the server using XMLHttpRequest

I'm working on the web pages of an embedded device. To exchange data between the web page and the device's application, I use XMLHttpRequest.
Now I'm looking for a way to allow the client to upload a binary (to update the firmware of that device) to the server.
One big limitation: it needs to work in IE8 (a cross-browser solution would be ideal, but working in IE8 is mandatory first...).
In detail, what I have to do:
Use the <input type='file'> to select the file on the client computer
Send the file (using XMLHttpRequest?) to the server
The server will reassemble the file and do whatever it needs to do with it...
I was able to get a binary from the client to the server in Chrome, but my method was not compatible with IE8.
The relevant HTML file:
<input id="uploadFile" type="file" />
In the JavaScript, I tried different ways to hook a change event to the file input:
// does not work in IE8 (throws "Object doesn't support this property or method")
document.querySelector('input[type="file"]').addEventListener("change"),function(e)...
// tried with jQuery; does not work in IE8 (I may not be using it correctly...)
$('upload').addEvent('change', function(e)....
$('upload').change(function(e)....
So my first problem is: how do I handle the change event of a file input in IE8?
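For that specific problem: IE8 does not implement addEventListener; it uses attachEvent with an 'on'-prefixed event name. A minimal cross-browser sketch (reusing the uploadFile input from the HTML above):

var input = document.getElementById('uploadFile');

function onFileChange(e) {
    e = e || window.event; // IE8 exposes the event globally
    // ... handle the selected file here ...
}

if (input.addEventListener) {
    input.addEventListener('change', onFileChange, false);
} else {
    input.attachEvent('onchange', onFileChange); // IE8 path
}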
I also tried the method I was using in Chrome (found on this page: http://www.html5rocks.com/en/tutorials/file/xhr2/), but it does not work in IE8:
function upload(blobOrFile) {
    var xhr = new XMLHttpRequest();
    xhr.open('POST', '/server', true);
    xhr.onload = function(e) { ... };
    xhr.send(blobOrFile);
}

document.querySelector('input[type="file"]').addEventListener('change', function(e) {
    var blob = this.files[0];

    const BYTES_PER_CHUNK = 1024 * 1024; // 1MB chunk sizes.
    const SIZE = blob.size;

    var start = 0;
    var end = BYTES_PER_CHUNK;

    while (start < SIZE) {
        upload(blob.slice(start, end));
        start = end;
        end = start + BYTES_PER_CHUNK;
    }
}, false);
Because document.querySelector generates an error in IE8, I don't know if the rest of this code works in IE8 (I wish it did!).
Any help and suggestions will be greatly appreciated!
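For what it's worth, IE8 has no File API, so script cannot read or slice the file at all; the usual IE8-era workaround is to post the form into a hidden iframe so the server receives an ordinary multipart upload without the page navigating. A rough sketch of that pattern (form action and element names are hypothetical):

<form id="uploadForm" action="/server" method="post" enctype="multipart/form-data" target="uploadTarget">
    <input id="uploadFile" name="firmware" type="file" />
</form>
<iframe name="uploadTarget" style="display:none"></iframe>
<script>
    var form = document.getElementById('uploadForm');
    var input = document.getElementById('uploadFile');
    function submitOnChange() { form.submit(); } // upload as soon as a file is picked
    if (input.addEventListener) {
        input.addEventListener('change', submitOnChange, false);
    } else {
        input.attachEvent('onchange', submitOnChange); // IE8 path
    }
</script>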
