I'm developing a file-sharing application with WebRTC. I want to implement the client in JavaScript/HTML; the code should run in the client's browser.
I need to save files as they are downloaded via WebRTC. The files can be quite big, and I can't download them completely into an array or blob in memory before saving them to disk.
Is there any API that allows me to save the file in chunks as I receive them?
So far I have found Downloadify, FileSaver.js, and the HTML5 FileWriter API.
While the first two are not chunked and require me to download the complete file into memory before saving, the FileWriter API is not available in most browsers.
As #jordan-gray suggested, saving the chunks in blobs and joining them into a larger blob might be a solution if:
Persistence of chunks is not needed (i.e. closing the browser will delete all chunks)
The file is persisted only by the user saving it to his own filesystem. The web application will not have access to the file once it is closed, unless the user grants access to the saved file again.
Possibly, if the file sizes are not too big (you'll have to benchmark to find that out). Chrome behaved quite nicely for me with chunks totaling 1 GB.
I've created a simple test for using blobs as chunks. You can play around with the different size and chunk numbers parameters:
var chunkSize = 500000;       // bytes per chunk
var totalChunks = 200;
var currentChunk = 0;
var mime = 'application/octet-stream'; // 'octet-binary' is not a registered MIME type
var waitBetweenChunks = 50;   // ms
var finalBlob = null;
var chunkBlobs = [];

function addChunk() {
    var typedArray = new Int8Array(chunkSize);
    chunkBlobs[currentChunk] = new Blob([typedArray], {type: mime});
    console.log('added chunk', currentChunk);
    currentChunk++;
    if (currentChunk == totalChunks) {
        console.log('all chunks completed');
        finalBlob = new Blob(chunkBlobs, {type: mime});
        document.getElementById('completedFileLink').href = URL.createObjectURL(finalBlob);
    } else {
        window.setTimeout(addChunk, waitBetweenChunks);
    }
}

addChunk();
If you do need that persistence, the W3C File System API should support what you need. You can use it to write the chunks to separate files, and then when all the chunks are completed you can read them all and append them to a single file, and remove the chunks.
Note that it works by assigning a sandboxed filesystem to your application (with a given quota), and the files are only accessible to that application. If the files are meant to be used outside of the web application, you might need a way for the user to save the file from the application filesystem to his "normal" filesystem. You can do something like that using the createObjectURL() method.
You are right about the current state of browser support. A Filesystem API polyfill is available, which uses IndexedDB (which is more widely supported) as a filesystem-emulation backend. I did not test the polyfill on large files; you might run into size limits or performance limitations.
Did you check out https://github.com/Peer5/Sharefest? It should cover your requirements.
Related
I am using SimpleHTTPServer to serve a directory and run HTML code locally. There, I use the getUserMedia API to take some pictures. If I use JS localStorage to store the pictures, where exactly are they stored? How can I copy them to a known directory?
The browser usually manages the localStorage and sessionStorage data in a private, browser-controlled area so that your browsing experience is as safe as possible (imagine if you could read and write someone's files whenever they visit your web page!).
You cannot copy the images to or from the client's computer without their knowledge. You can, however, trigger automatic downloads from the server side.
As for saving a previously downloaded image, see:
How to save an image to localStorage and display it on the next page?
However, do note that the maximum storage space is usually capped, with sizes varying wildly between browsers and their respective environments.
My own tests suggest Chromium will only support 10 x 5 MB files by default.
Edit:
As for copying to a known directory, you must send the file back to your HTTP server and collect it from there. You can use Ajax if you choose, by converting the data to base64, which lets you send the data in a JSON string (failure to encode the data will result in errors), and collect it on the server side with Buffer.from(str, "base64").toString("binary").
var http = require('http'),
    cluster = require('cluster'),
    path = require('path'),
    url = require('url'),
    util = require('util');

function posthandler(request, response) {
    if (request.url == "/image_streamer") {
        var datum = [];
        request.on('data', function (d) { datum.push(d); });
        request.on('end', function () {
            datum = JSON.parse(Buffer.concat(datum).toString('utf8'));
            datum.image = Buffer.from(datum.image, "base64");
            // datum.image NOW POINTS TO A BINARY BUFFER :) HAPPY DAYS.
        });
    }
}

var server = http.createServer(function (request, response) {
    switch (request.method) {
        case "GET":
            break;
        case "POST":
            posthandler(request, response);
            break;
    }
}).listen(8080);
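The base64 round trip the answer relies on can be checked in isolation. A minimal sketch, assuming nothing beyond Node's built-in Buffer and JSON (the byte values here are made up for illustration):

```javascript
// Hypothetical round trip: the client encodes image bytes to base64 so they
// survive JSON transport; the server decodes them back into a binary Buffer.
const original = Buffer.from([0xff, 0xd8, 0xff, 0xe0]); // fake header bytes
const payload = JSON.stringify({ image: original.toString('base64') });

// Server side: parse the JSON, then decode the base64 field.
const parsed = JSON.parse(payload);
const decoded = Buffer.from(parsed.image, 'base64');
console.log(decoded.equals(original)); // true
```

Base64 inflates the payload by about a third, which is the price of making binary data safe to embed in a JSON string.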
I'm using WebRTC data channels to build a file transfer service.
It's going quite well with smaller files, under 30 MB or so. Right now, on the receiving end, I simply keep the file data in memory and save the file once all the data has been transferred.
Kinda like this:
// On the receiving side
var dataArray = [];
var reader = new FileReader();

// bytesToRecieve and incFileDesc are maintained elsewhere in the code
var dcOnMessage = function (event) {
    dataArray.push(event.data);
    if (bytesToRecieve == 0) {
        var blob = new Blob(dataArray, {type: incFileDesc.type});
        reader.onload = function (event) {
            saveToDisk(event.target.result, incFileDesc.name);
        };
        reader.readAsDataURL(blob);
    }
};

var saveToDisk = function (fileUrl, fileName) {
    var save = document.createElement('a');
    save.href = fileUrl;
    save.target = '_blank';
    save.download = fileName || fileUrl;
    save.click(); // simpler and more widely supported than a synthetic click event
    (window.URL || window.webkitURL).revokeObjectURL(save.href);
};
So I want to save the data to a file on disk as it arrives, and then keep writing directly to that file. But how do I do that?
I'm afraid the current standardized APIs don't easily allow that (see Philipp's response). The closest would be to save each chunk as a blob in localStorage/IndexedDB, then use the Blob constructor to build the final file from the set of blobs. That still incurs a temporary memory hit of roughly 2x the file size. Alternatively, just hold onto each blob in memory until building the final Blob and saving to disk (still a memory hit, but a gradual one, until it peaks at 2x when building the final blob). These approaches likely start having problems when final file sizes approach the magnitude of the available RAM.
Direct transfer of a single large Blob in Firefox->Firefox works today without SCTP ndata support (which isn't available yet), using a deprecated low-level chunking mechanism; it avoids the 2x part of the memory hit.
There's a non-standard API in Chrome that can mostly do the append-to-file part, last I checked. This has been an ongoing area of discussion with the WebAPI folk; it's probably time to poke them again.
Due to the lack of a way to append data to a blob (see the BlobBuilder API, which was never implemented in all browsers), what you do is currently the best way to do it. That might change once Chrome (like Mozilla already does) supports sending blobs over the data channel.
The filetransfer sample works reasonably well for files up to a gigabyte.
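A minimal sketch of that collect-then-join approach (the chunk size and count here are made up): each received chunk is kept as its own Blob, and the Blob constructor performs the one final concatenation, since a Blob itself cannot be appended to.

```javascript
// Collect each received chunk as an individual Blob.
const chunks = [];
for (let i = 0; i < 4; i++) {
  chunks.push(new Blob([new Uint8Array(1024)])); // fake 1 KB chunk
}

// One final concatenation; this is where the temporary ~2x memory hit occurs.
const finalBlob = new Blob(chunks, { type: 'application/octet-stream' });
console.log(finalBlob.size); // 4096
```

In a browser, `finalBlob` would then be handed to `URL.createObjectURL()` and a download link, as the earlier snippets show.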
I don't think you can save files to disk directly (for security reasons), but you can save them to IndexedDB as a Blob. IndexedDB is widely supported now (see http://caniuse.com/#search=indexeddb) and is well suited for storing large objects locally.
See https://developer.mozilla.org/en-US/docs/Web/API/IndexedDB_API for more details about the API.
Here is an example for saving BLOB in IndexedDB: https://hacks.mozilla.org/2012/02/storing-images-and-files-in-indexeddb/
Let's suppose a case where a huge string is generated from a small string using some JavaScript logic, and then the text file is forced to be downloaded by the browser.
This is possible using an octet-stream download by putting it in an href, as mentioned in this answer:
Create a file in memory for user to download, not through server.
function download(filename, text) {
    var pom = document.createElement('a');
    pom.setAttribute('href', 'data:text/plain;charset=utf-8,' + encodeURIComponent(text));
    pom.setAttribute('download', filename);
    pom.click();
}
But this solution requires 'text' to be fully generated before the download starts,
hence it has to be held in browser memory in full.
Is it possible to stream the text as it gets generated, using CLIENT SIDE LOGIC ONLY?
For example :
var inputString = "A";
var outStr = "";
for (var i = 0; i < 10000000; i++) {
    outStr += inputString; // concatenate inputString to the output on the go
}
Yes & no. No, because there's no way to write to files with just client-side JavaScript. Kinda. You can prompt a user to download & save a file, but as you mentioned, the code must generate the whole file before that download happens. Note: by "stream" I assume you mean stream to a file (constantly write to a file), & by "CLIENT SIDE LOGIC ONLY" I assume you mean in the browser.
Looks like Mozilla has been working on a way to let client-side code interact with files. Here comes the yes. Kind of. They have their own file system API that lets you interact with (write to) the local machine's file system. Specifically, there's a function that lets you write an input stream to a file. However, there are a few asterisks:
1) It looks like that whole system is being deprecated; they encourage developers to use OS.File over File I/O.
2) You have to use XPConnect, a system that lets you access Mozilla's XPCOM (component library) from JavaScript. If you want to do this in the browser, it looks like only Firefox extensions have the proper permissions to interact with those components. If you didn't want to do this in the browser, you obviously could just use Node.
Assuredly, more complications are bound to show up during implementation. But this looks like the surest path forward, seeing as OS.File gives you access to functions like OS.File.writeAtomic() and basic write-to-file operations.
That being said, it's not that great a path, but hopefully this gives you a solid starting point. As #dandavis mentioned, browsers (i.e. "client side logic") are designed not to allow this sort of thing. It would be an incredibly huge oversight / security flaw if a website could interact with any user's local file system.
Additional resources:
Wikipedia on XPConnect
Guide on working with XPCOM in javascript - may not be that useful
There is a way to do this, but it relies on a Chrome-only Filesystem API. We will create and write to a temporary file in a sandboxed file system, then copy it to the regular file system once we are done. This way you do not have to store the entire file in memory. The asynchronous version of the Chrome API is not currently being considered for standardization by the W3C, but the synchronous version (which uses web workers) is. If browser support is a concern, this answer is not for you.
The API works like this:
First, we get the requestFileSystem() function from the browser. Currently it is prefixed by "webkit":
window.requestFileSystem = window.requestFileSystem || window.webkitRequestFileSystem;
Next, we request a temporary file system (this way we do not need to ask for user permission):
var fileSystem;              // This will store the fileSystem for later access
var fileSize = 1024 * 1024;  // Our maximum file system size (quota), in bytes.

function errorHandler(e) {
    console.log('Error: ' + e.name);
}

window.requestFileSystem(window.TEMPORARY, fileSize, function (fs) {
    fileSystem = fs;
}, errorHandler);
Now that we have access to the file system it is time to create a file:
var fileOptions = {
    create: true,     // If the file is not found, create it
    exclusive: false  // Don't throw an error if the file already exists
};
Here we call the getFile() function, which can create a file if it doesn't exist. Inside of the callback, we can create a new fileWriter for writing to the file. The fileWriter is then moved to the end of the file, and we create a new text blob to append to it.
fileSystem.root.getFile(fileName, fileOptions, function (fileEntry) {
    fileEntry.createWriter(function (fileWriter) {
        fileWriter.seek(fileWriter.length); // move to the end of the file
        var blob = new Blob([STRING_TO_WRITE], {type: 'text/plain'});
        fileWriter.write(blob);
    }, errorHandler);
});
Note that this API does not save to the normal, user filesystem. Instead, it saves to a special sandboxed folder. If you want to save it to the user's file system, you can create a filesystem: link. When the user clicks on it, it will prompt them to save it. After they save it, you can then remove the temporary file.
This function generates the filesystem link using the fileEntry's toURL() function:
var save = function () {
    var download = document.querySelector("a[download]");
    if (!fileSystem) { return; }
    fileSystem.root.getFile(fileName, {create: false, exclusive: true}, function (fileEntry) {
        download.href = fileEntry.toURL();
    }, errorHandler);
};
Using a link with the download attribute will force the download of the file.
<a download></a>
Here is a plunker that demonstrates this: http://plnkr.co/edit/q6ihXWEXSOtutbEy1b5G?p=preview
Hopefully this accomplishes what you want. You can continuously append to the file, it won't be kept in memory, but it will be in the sandboxed filesystem until the user saves it to the regular filesystem.
For more information take a look at this HTML5rocks article or this one if you want to use the newer, synchronous Web Worker API.
I would have suggested doing it the way #quantumwannabe describes it, using a temporary sandboxed file to append chunks.
But there is a new way that can be used today (behind a flag) and will be enabled by default in the next version of Chrome (52).
And here is where I will make #KeenanLidral-Porter's answer incorrect, and #quantumwannabe's answer an unnecessary step.
Because there is now a way to write a stream to the filesystem directly: StreamSaver.js
It acts as if there were a server sending an octet-stream header, telling the browser to download chunks of data, with the help of a service worker.
const writeStream = streamSaver.createWriteStream('filename.txt')
const encoder = new TextEncoder()

let data = 'a'.repeat(1024) // Writing some stuff triggers the save dialog to show
let uint8array = encoder.encode(data + "\n\n")

writeStream.write(uint8array) // Write some data when you get some
writeStream.close()           // End the saving
With the File API it is possible to load data from local files into browser memory via JavaScript. I'm accessing huge files (200 MB and bigger) on a system that is low on available RAM (a web app on a mobile device). How can I use the W3C File API (or Cordova's File API as a fallback) to partially load data (e.g. by specifying a byte range) from files?
The solution is to use File.slice(). Notice that File inherits from Blob and thereby receives its slice() method.
var blob = file.slice(startingByte, endingByte);
reader.readAsBinaryString(blob);
The source of this information is HTML5Rocks.com. A full example can be found there.
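The slice() call can be wrapped in a small loop that walks a big Blob/File one byte range at a time, so only the current slice is materialized in memory. A sketch, with made-up names (readInChunks, chunkSize, onChunk) and using Blob.arrayBuffer() instead of a FileReader; the byte-range idea is the same:

```javascript
// Read a Blob/File in fixed-size byte ranges; only one slice is
// loaded into memory at a time.
async function readInChunks(blob, chunkSize, onChunk) {
  for (let offset = 0; offset < blob.size; offset += chunkSize) {
    const slice = blob.slice(offset, Math.min(offset + chunkSize, blob.size));
    const buffer = await slice.arrayBuffer(); // loads just this byte range
    onChunk(new Uint8Array(buffer), offset);
  }
}

// Example: a 10-byte blob read in 4-byte chunks yields slices of 4, 4 and 2 bytes.
const blob = new Blob([new Uint8Array(10)]);
readInChunks(blob, 4, (bytes, offset) => {
  console.log(offset, bytes.length);
});
```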
We are developing an app that downloads files from HTTP URLs, whose extensions/file types we will not know until runtime. We've been following this tutorial as a starting point, but since we aren't dealing with images, it hasn't helped us.
The issue is that the code in the tutorial will get you a Blob object and I can't find any code that will allow us to either:
Convert the Blob to a byte array.
Save the Blob straight to the file system.
The ultimate goal is to seamlessly save the file at the given URL to the file system and launch it with the default application, or to just launch it from the URL directly (without the save prompt you get if you just call Windows.System.Launcher.launchUriAsync(uri);).
Any insight anyone might have is greatly appreciated.
Regarding downloading content into a byte array:
Using WinJS.xhr with the responseType option set to 'arraybuffer' will return the contents as an ArrayBuffer. A JavaScript typed array, for example Uint8Array, can be instantiated over the ArrayBuffer; this way the contents can be read into a byte array. The code should look something like this:
// TODO: add other options as required
var options = { url: url, responseType: 'arraybuffer' };
WinJS.xhr(options).then(function onxhr(req) {
    var ab = req.response; // the completed request's ArrayBuffer
    var bytes = new Uint8Array(ab, 0, ab.byteLength);
}, function onerror() {
    // handle error
});
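The ArrayBuffer-to-byte-array step itself needs nothing from WinJS and can be sketched in isolation (buffer size and contents here are made up):

```javascript
// A typed array is a view over the ArrayBuffer: no copy is made, and
// writes through one view are visible to any other view on the same buffer.
const ab = new ArrayBuffer(8);
const bytes = new Uint8Array(ab, 0, ab.byteLength);
bytes[0] = 255;

const sameBuffer = new Uint8Array(ab);
console.log(bytes.length, sameBuffer[0]); // 8 255
```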
Once you take care of permissions to save the file to the file system, either by having the user explicitly pick the save location using the FileSavePicker or pick a folder using the FolderPicker, the file can be saved to the local file system. A file can also be saved to the app data folder.
AFAIK, HTML/JS/CSS files from the local file system or the app data folder cannot be loaded, for security reasons, although the DOM can be manipulated, under constraints, to add content. I am not sure of your application's requirements, but you might need to consider alternatives to launching downloaded HTML files.