The goal is to send a list of items to retrieve from the server, which queries the file system or a database and ideally returns an array of binary files.
My current methodology is to base64 encode binaries into a JSON array.
// Extend Array class methods to NodeList data type
Object.getOwnPropertyNames(Array.prototype).forEach(function (method) {
    if (method !== 'length')
        NodeList.prototype[method] = Array.prototype[method];
});

// Find People
var nodeList = document.querySelectorAll('div.people');
var people = nodeList.map(function (el) { return el.dataset.personid; });

// Search Server
var request = new XMLHttpRequest();
var skip_cache = '?' + new Date().getTime();
request.open('POST', 'cgi.script' + skip_cache);
request.responseType = 'json';
request.onload = function () {
    // Response handler:
    // receives JSON from the server, including an array of base64 files for each person;
    // loop over the array, append the base64 values to images (data URIs) and prepare other HTML elements.
};
request.send(JSON.stringify({ ids: people }));
The bottleneck is the size of the request: I'm already splitting the list of people across multiple requests. Granted, base64 is only ~37% larger in its uncompressed state, but when you have 1k files to download that overhead still adds up.
The goal is to reduce the size per request (without sacrificing time), which comes down to either a better compression method (LZMA vs. gzip) or a better data format (binary rather than ASCII).
Is it possible to transfer multiple binary files in one response, or even embed them directly in JSON, without side effects? I've never attempted it, mostly out of caution about side effects that would trip up older technology.
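One possibility, if the server can produce it, is to skip base64 entirely and have the server concatenate the raw files into a single binary response with a small length prefix per file, which the client then slices apart. The sketch below assumes that framing (a 4-byte big-endian length before each file); it is not an established format, just one way it could work with the same ids payload as above:
// Sketch: fetch one binary response containing many files,
// each prefixed with a 4-byte big-endian length (assumed server framing).
var req = new XMLHttpRequest();
req.open('POST', 'cgi.script');
req.responseType = 'arraybuffer';
req.onload = function () {
    var view = new DataView(req.response);
    var files = [];
    var offset = 0;
    while (offset < view.byteLength) {
        var size = view.getUint32(offset);                         // length of the next file
        var bytes = new Uint8Array(req.response, offset + 4, size);
        files.push(new Blob([bytes]));                             // raw binary, no base64
        offset += 4 + size;
    }
    // Each entry in files can now be shown via URL.createObjectURL(files[i]).
};
req.send(JSON.stringify({ ids: people }));
The Blobs can then be displayed with object URLs instead of data URIs, which avoids the base64 inflation entirely.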
Related
I have a JS library that is responsible for downloading JPEG images for the client. All of this is done asynchronously. In some cases the number of images is really large, around 5000. In that case, the Chrome browser fails the ajax requests with an "ERR_INSUFFICIENT_RESOURCES" error.
Each request must be done individually, there is no option to pack the images on the server-side.
What are my options here? How can I find a workaround for this problem? The download works fine in Firefox...
Here is the code for the actual download:
function loadFileAndDecrypt(fileId, key, type, length, callback, obj) {
  var step = 100 / length;
  eventBus.$emit('updateProgressText', "downloadingFiles");

  var req = new dh.crypto.HttpRequest();
  req.setAesKey(key);

  let dataUrl;
  if (type == "study") {
    dataUrl = "/v1/images/";
  } else {
    dataUrl = "/v1/dicoms/";
  }
  var url = axios.defaults.baseURL + dataUrl + fileId;

  req.open("GET", url, true);
  req.setRequestHeader("Authorization", authHeader().Authorization + "");
  req.setRequestHeader("Accept", "application/octet-stream, application/json, text/plain, */*");
  req.responseType = "arraybuffer";

  req.onload = function() {
    // downloadStep and counter are presumably declared at module scope (not shown here)
    console.log(downloadStep);
    downloadStep += step;
    eventBus.$emit('updatePb', Math.ceil(downloadStep));

    var data = req.response;
    obj.push(data);
    counter++;
    // last one
    if (counter == length) {
      callback(obj);
    }
  };
  req.send();
}
The error means your code is exhausting memory (most likely), or the quota of pending requests was exceeded. Instead of firing everything at once, have the frontend request the 5000 images individually and control the flow of requests. Regardless, downloading 5000 separate images is bad; you should pack them up for downloading. If you just mean fetching the images for display, then loading them from the frontend through static or dynamic links is much more logical ;)
Create a class:
Which accepts the file-Id (image that needs to be downloaded) as an argument
Which can perform the HTTP API request
Which can store the result of the request
Create an array of objects of this class, one for each file-Id that needs to be downloaded.
Store the array in a RequestManager which can start and manage the downloads (a rough sketch follows this list):
can batch the downloads, e.g. fire 5 requests from the array and wait for them to finish before starting the next batch
can stop the downloads on multiple failures
can adjust the batch size depending on the available bandwidth
stops downloads on auth expiry and resumes them on auth refresh
offers to retry previously failed downloads
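A rough sketch of the batching part, assuming a downloadOne(fileId) helper that wraps one request in a Promise (the names are illustrative, not an existing API):
// Sketch: download fileIds in batches of `batchSize`, collecting the results.
// downloadOne(fileId) is assumed to return a Promise resolving to the file data.
async function downloadAll(fileIds, downloadOne, batchSize = 5) {
    const results = [];
    for (let i = 0; i < fileIds.length; i += batchSize) {
        const batch = fileIds.slice(i, i + batchSize);
        // Wait for the whole batch before starting the next one,
        // so only `batchSize` requests are ever in flight.
        const data = await Promise.all(batch.map(id => downloadOne(id)));
        results.push(...data);
    }
    return results;
}
Retries, failure limits, and bandwidth-based batch sizing would all hang off the same loop.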
There are two ways I can upload files using Ajax (XHR2). First, I can read the file content as an ArrayBuffer or binary string and then simply stream it using the XHR send method. For example, as shown here:
function uploadFile(img, file) {
  const reader = new FileReader();
  const xhr = new XMLHttpRequest();

  xhr.upload.addEventListener("progress", function(e) {
    if (e.lengthComputable) {
      const percentage = Math.round((e.loaded * 100) / e.total);
      // Do something with percentage
    }
  });
  xhr.upload.addEventListener("load", (e) => console.log('Do something more'));

  xhr.open("POST", "some-url");
  xhr.overrideMimeType('text/plain; charset=x-user-defined-binary');

  reader.onload = function(evt) {
    xhr.send(evt.target.result);
  };
  reader.readAsBinaryString(file);
}
Second, I can use FormData to upload my file as shown here:
var formData = new FormData();
// HTML file input, chosen by user
formData.append("userfile", fileInputElement.files[0]);
var request = new XMLHttpRequest();
request.open("POST", "some-url");
request.send(formData);
Are the two methods equivalent? Is there any advantage of using FileReader instead of FormData? Is one more performant than the other?
First, there is a third option you omitted, which is to send the File directly through xhr.send(file), just like you did with the ArrayBuffer.
That said, there is no advantage to first reading the file into memory through FileReader.
When doing a file upload from a File on disk, the browser doesn't load the full file in memory but streams it through the request. This is how you can upload gigs of data even though it wouldn't fit in memory. This also is more friendly with the HDD since it allows for other processes to access it between each chunk instead of locking it.
When reading the File through a FileReader you are asking the browser to read the full file to memory, and then when you send it through XHR the data from memory is being used. You are thus limited by the memory available, bloating it for no good reasons, and even asking the CPU to work here while the data could have gone from the disk to the network card almost directly.
As to the difference between formData.append("userfile", file); xhr.send(formData); and xhr.send(file): basically only the request headers. The former wraps the request as a multipart/form-data enctype request, while the latter sends the file as is.
So you'd handle both requests differently on the receiving end.
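For illustration, here is how the two non-FileReader variants compare, as a minimal sketch using the same some-url endpoint as above:
// Raw body: the file is streamed as the request body, no multipart wrapper.
const xhr = new XMLHttpRequest();
xhr.open("POST", "some-url");
xhr.send(file);                        // Content-Type defaults to the File's own type

// multipart/form-data: the file is wrapped in a form field named "userfile".
const formData = new FormData();
formData.append("userfile", file);
const xhr2 = new XMLHttpRequest();
xhr2.open("POST", "some-url");
xhr2.send(formData);                   // the browser sets the multipart boundary header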
I want to read and write to a file in a specific way.
An example file could be:
name1:100
name2:400
name3:7865786
...etc etc
What would be the best way to read this data in, store it, and eventually write it out?
I don't know which type of data structure to use; I'm still fairly new to JavaScript.
I want to be able to determine if any key/value pairs match.
For example, if I were to add to the file, I could see that name1 is already in the file and just edit its value instead of adding a duplicate.
You can use localStorage as a temporary storage between reads and writes.
Note, though, that you cannot read and write to a user's filesystem at will using client-side JavaScript. You can, however, ask the user to select a file to read, just as you can ask the user to save the data you push as a file.
localStorage allows you to store data as key/value pairs and makes it easy to check whether an item exists. Optionally, use a plain object literal, which can do basically the same thing but only exists in memory. localStorage persists between sessions and page navigations.
// set some data
localStorage.setItem("key", "value");
// get some data
var data = localStorage.getItem("key");
// check if key exists, set if not (though, you can simply override the key as well)
if (!localStorage.getItem("key")) localStorage.setItem("key", "value");
The method getItem will always return null if the key doesn't exist.
But note that localStorage can only store strings. For binary data and/or large amounts of data, look into IndexedDB instead.
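For completeness, a minimal IndexedDB sketch (the database name, store name, and the someBlob value are made up for the example):
// Sketch: put a Blob into IndexedDB under a key, then read it back.
var someBlob = new Blob(["example binary data"]);    // any Blob/File you already have
var open = indexedDB.open("myFiles", 1);
open.onupgradeneeded = function () {
  open.result.createObjectStore("files");            // simple key/value store
};
open.onsuccess = function () {
  var db = open.result;
  var tx = db.transaction("files", "readwrite");
  tx.objectStore("files").put(someBlob, "key");      // value first, then key
  tx.oncomplete = function () {
    var read = db.transaction("files").objectStore("files").get("key");
    read.onsuccess = function () { console.log(read.result); };
  };
};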
To read a file you have to request the user to select one (or several):
HTML:
<label>Select a file: <input type=file id=selFile></label>
JavaScript
document.getElementById("selFile").onchange = function() {
  var fileReader = new FileReader();
  fileReader.onload = function() {
    var txt = this.result;
    // now we have the selected file as text.
  };
  fileReader.readAsText(this.files[0]);
};
To save a file you can use File objects this way:
var file = new File([txt], "myFilename.txt", {type: "application/octet-stream"});
var blobUrl = (URL || webkitURL).createObjectURL(file);
window.location = blobUrl;
The reason for using octet-stream is to "force" the browser to show a save-as dialog instead of trying to display the file in the tab, which would happen if we used text/plain as the type.
So, how do we get the data between these stages? Assuming you're using a key/value approach and text only, you can use JSON:
var file = JSON.stringify(localStorage);
Then send to user as File blob shown above.
To read the data back, you will either have to manually parse the file format (if the data exists in some particular format), or, if it is the same data you saved out, read the file in as shown above and then convert it from a string back to an object:
var data = JSON.parse(txt); // continue in the function block above
Object.assign(localStorage, data); // merge data from object with localStorage
Note that you may have to delete items from the storage first. There is also the chance that other data has been stored there, so these are cases that need to be considered, but this is the basis of one approach.
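If the file uses the name:value lines from the question rather than JSON, a small hand-rolled parser would do (a sketch):
// Sketch: parse "name1:100\nname2:400\n..." into an object,
// so existing names can be detected and their values updated.
function parseLines(txt) {
  var data = {};
  txt.split("\n").forEach(function (line) {
    var i = line.indexOf(":");
    if (i > 0) data[line.slice(0, i)] = line.slice(i + 1);
  });
  return data;
}

function serialize(data) {
  return Object.keys(data).map(function (k) { return k + ":" + data[k]; }).join("\n");
}

// var data = parseLines(txt);   // "name1" already exists, so this updates it
// data["name1"] = 200;          // rather than adding a duplicate
// var out = serialize(data);    // write `out` back out as shown above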
Example
// due to security reasons, localStorage can't be used in stacksnippet,
// so we'll use an object instead
var test = {"myKey": "Hello there!"}; // localStorage.setItem("myKey", "Hello there!");
document.getElementById("selFile").onchange = function() {
  var fileReader = new FileReader();
  fileReader.onload = function() {
    var o = JSON.parse(this.result);
    //Object.assign(localStorage, o); // use this with localStorage
    alert("done, myKey=" + o["myKey"]); // o[] -> localStorage.getItem("myKey")
  };
  fileReader.readAsText(this.files[0]);
};

document.querySelector("button").onclick = function() {
  var json = JSON.stringify(test); // test -> localStorage
  var file = new File([json], "myFilename.txt", {type: "application/octet-stream"});
  var blobUrl = (URL || webkitURL).createObjectURL(file);
  window.location = blobUrl;
};
Save first: <button>Save file</button> (<code>"myKey" = "Hello there!"</code>)<br><br>
Then read the saved file back in:<br>
<label>Select a file: <input type=file id=selFile></label>
Are you using Node.js, or browser JavaScript?
In either case, the structure you should use is JavaScript's standard object. Then you can turn it into JSON like this:
var dataJSON = JSON.stringify(yourDataObj)
With Nodejs, you'll want to require the fs module and use one of the writeFile or appendFile functions -- here's sample code:
const fs = require('fs');
fs.writeFileSync('my/file/path', dataJSON);
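To also cover the duplicate-key requirement from the question, a sketch of a read-update-write cycle (the file path and key are just examples):
const fs = require('fs');
const path = 'data.json';                       // example path

// Read the existing data (if any), update or insert the key, write it back.
let data = {};
if (fs.existsSync(path)) {
  data = JSON.parse(fs.readFileSync(path, 'utf8'));
}
data['name1'] = 100;                            // overwrites if 'name1' already exists
fs.writeFileSync(path, JSON.stringify(data));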
With browser js, this stackoverflow may help you: Javascript: Create and save file
I know you want to write to a file, but consider a database instead so that you don't have to reinvent the wheel. INSERT ... ON DUPLICATE KEY UPDATE seems like the logical choice for what you're looking to do.
For security reasons it's not possible to use JavaScript to write to a regular text or similar file on a client's system.
However Asynchronous JavaScript and XML (AJAX) can be used to send an XMLHttpRequest to a file on the server, written in a server-side language like PHP or ASP.
The server-side file can then write to other files, or a database on the server.
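For example, a minimal sketch of posting the data to such a script (the save.php endpoint is hypothetical):
// Sketch: send the data to a server-side script that performs the actual file write.
var xhr = new XMLHttpRequest();
xhr.open("POST", "save.php");                      // hypothetical server-side endpoint
xhr.setRequestHeader("Content-Type", "application/json");
xhr.send(JSON.stringify({ name1: 100, name2: 400 }));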
Cookies are useful if you just need to save relatively small amounts of data locally on a client's system.
For more information have a look at
Read/write to file using jQuery
I'm saving large text files as objects in Parse. They are too large to save directly as text in a normal String column.
Later, I want to retrieve these files and process the text in JavaScript.
Here's the code I'm using to store the text in a Parse file:
// Inputs
var long_text_string = '...'; // A long string
var description = '...'; // Description of this string
// Convert string to array of bytes
var long_text_string_bytes = [];
for (var i = 0; i < long_text_string.length; i++) {
  long_text_string_bytes.push(long_text_string.charCodeAt(i));
}

// Create Parse file
var parsefile = new Parse.File("long_text_string.txt", long_text_string_bytes);
parsefile.save({
  success: function() {
    // Associate file with a new object...
    var textFileObject = new Parse.Object("TextFile");
    textFileObject.set('description', description);
    textFileObject.set('file', parsefile);
    textFileObject.save();
  }
});
How do I then retrieve the content of the data file, convert it back from bytes to string, and end up with it stored in a normal string variable in JavaScript?
UPDATE
I've tried three different approaches, all to no avail...
Method 1 [preferred]
Use Parse commands to process the file
It's simple to use the Parse JS API to retrieve my TextFile object, and use parseFile = object.get('file'); to get the Parse file itself. The URL of the file is then parseFile.url().
But then what? I can't do anything with that URL in JS because of cross-origin restrictions.
There is no documentation on how to use the JS API to access the byte data contained within the file. There appears to be an Android command, getDataInBackground, documented here, so I am hopeful there is a JS equivalent....
Method 2
Use the Parse REST API to fire a XMLHTTP request
Unfortunately, it seems that Parse have not enabled CORS for their file URLs. I have tried the following code, adapted from a Parse.com blog post (blog.parse.com/learn/engineering/javascript-and-user-authentication-for-the-rest-api/):
var fileURL = textFileObject.get('file').url();
var xhr = new XMLHttpRequest();
xhr.open("GET", fileURL, true);
xhr.setRequestHeader("X-Parse-Application-Id", appId);
xhr.setRequestHeader("X-Parse-REST-API-Key", restKey);
xhr.setRequestHeader("Content-Type", "application/json");
xhr.onreadystatechange = function() {
  if (xhr.readyState == 4) {
    alert(xhr.responseText);
  }
};
var data = JSON.stringify({ message: "" });
xhr.send(data);
But I get the following error:
Response to preflight request doesn't pass access control check:
No 'Access-Control-Allow-Origin' header is present on the requested resource.
Origin '<my host>' is therefore not allowed access.
The response had HTTP status code 403
A bit of Googling suggests that the file URLs are not CORS-enabled (parse.com/questions/access-control-allow-origin--2).
(Note that the above code works fine for a normal object request; it's only when you use the fileURL that it errors.)
Method 3
Use a browser to circumvent cross-origin restrictions
I can create a webpage with an empty iframe and set iframe.src to parseFile.url(). The content of the file appears on the web page. But I still end up with cross-origin issues when I try to access the DOM content of the iframe! Not to mention loading each file onto a webpage one by one is an incredibly substandard solution.
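For reference, once the bytes can actually be fetched (i.e. if the CORS issue were resolved), turning them back into a string should just be the reverse of the charCodeAt loop above, reusing fileURL from the Method 2 snippet; something like this sketch:
// Sketch: fetch the file as binary and rebuild the original string.
var xhr = new XMLHttpRequest();
xhr.open("GET", fileURL, true);
xhr.responseType = "arraybuffer";
xhr.onload = function () {
  var bytes = new Uint8Array(xhr.response);
  var long_text_string = "";
  for (var i = 0; i < bytes.length; i++) {
    long_text_string += String.fromCharCode(bytes[i]);
  }
  // long_text_string now holds the original text
};
xhr.send();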
I've searched related questions but wasn't able to find any relevant info.
I'm trying to get the Web Audio API to play an MP3 file which is embedded in another file container, so what I'm doing so far is parsing said container and feeding the resulting binary data (an ArrayBuffer) to the audioContext.decodeAudioData method, which supposedly accepts any kind of ArrayBuffer containing audio data. However, it always ends up in the error callback.
I only have a faint grasp of what I'm doing so probably the whole approach is wrong. Or maybe it's just not possible.
Has any of you tried something like this before? Any help is appreciated!
Here's some of the code to try to illustrate this better. The following just stores the arraybuffer:
newFile: function(filename) {
  var that = this;
  var oReq = new XMLHttpRequest();
  oReq.open("GET", filename, true);
  oReq.responseType = "arraybuffer";
  oReq.onload = function (oEvent) {
    var arrayBuffer = oReq.response;
    if (arrayBuffer) {
      that.arrayBuffer = arrayBuffer;
      that.parsed = true;
    }
  };
  oReq.send(null);
},
And this is what I'm doing in the decoding part:
newTrack: function(tracknumber) {
  var that = this;
  var arraybuffer = Parser.arrayBuffer;
  that.audioContext.decodeAudioData(arraybuffer, function(buffer) {
    var track = {};
    track.trackBuffer = buffer;
    track.isLoaded = true;
    track.trackSource = null;
    track.gainNode = that.audioContext.createGainNode();
    that.tracklist.push(track);
  }, function() { alert('error'); }); // the error callback must be passed as a function
},
Where Parser is an object literal that I've used to parse and store the arraybuffer (which has the newFile function)
So, to sum up, I don't know if I'm doing something wrong or it simply cannot be done.
Without the container, I'm not sure how decodeAudioData would know that it's an MP3. Or what the bitrate is. Or how many channels it has. Or a lot of other pretty important information. Basically, you need to tell decodeAudioData how to interpret that ArrayBuffer.
The only thing I could think of on the client side is trying to use a Blob. You'd basically have to write the header yourself, and then readAsArrayBuffer before passing it in to decodeAudioData.
If you're interested in trying that out, here's a spec:
http://www.mpgedit.org/mpgedit/mpeg_format/mpeghdr.htm
And here's RecorderJS, which would show you how to create the Blob (although it writes RIFF/WAV headers instead of MP3):
https://github.com/mattdiamond/Recorderjs/blob/master/recorderWorker.js
You'd want to look at the encodeWAV method.
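If you do experiment with that route, the plumbing around the Blob would look roughly like this sketch, where mp3Data is assumed to be a Uint8Array holding the header you wrote plus the frames extracted from the container, and audioContext is your existing AudioContext:
// Sketch: wrap the reconstructed MP3 bytes in a Blob, read them back as an
// ArrayBuffer, and hand that to decodeAudioData.
var blob = new Blob([mp3Data], { type: "audio/mpeg" });   // mp3Data: assumed Uint8Array
var reader = new FileReader();
reader.onload = function () {
  audioContext.decodeAudioData(reader.result, function (buffer) {
    // buffer is a decoded AudioBuffer, ready to play
  }, function (err) {
    console.error("decodeAudioData failed", err);
  });
};
reader.readAsArrayBuffer(blob);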
Anyway, I would strongly recommend getting this sorted out on the server instead, if you can.