Currently, as a requirement, if a user wishes to download a large zip file, the download is streamed.
This is done by fetching an endpoint, then using StreamSaver.js to stream the download to the browser, as shown below.
import { createWriteStream } from 'streamsaver';

function download(id, fileName) {
  const endpoint = `.../extract/downloads/zip_download/?id=${id}`;
  return fetch(endpoint, requestOptions.get()).then(res => {
    const downloadSize = res.headers.get("content-length");
    const fileStream = createWriteStream(fileName, { size: downloadSize });
    const writer = fileStream.getWriter();
    // Prefer the native pipe if the browser supports it
    if (res.body.pipeTo) {
      writer.releaseLock();
      return res.body.pipeTo(fileStream);
    }
    // Fallback: pump chunks manually from the reader to the writer
    const reader = res.body.getReader();
    const pump = () =>
      reader
        .read()
        .then(({ value, done }) =>
          done ? writer.close() : writer.write(value).then(pump)
        );
    return pump();
  });
}
This works fine in Chrome; however, I'm running into issues with Firefox and Safari. The issue I get is:
TypeError: undefined is not a constructor (evaluating 'new streamSaver.WritableStream')
What other methods are there of approaching this? Surely there must be a universal way to stream the download of a large file that I'm missing?
I ran into the same issue and included the web-streams-polyfill package in my project to fix it. Currently, some non-Chromium browsers appear not to support WritableStream.
For myself, I simply included this script tag in my index.html
<script src="https://unpkg.com/web-streams-polyfill/dist/polyfill.min.js"></script>
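If you are bundling with npm instead of using a script tag, something along these lines should also work. This is a sketch assuming web-streams-polyfill v3, whose ponyfill entry point exports WritableStream; StreamSaver exposes a WritableStream property that can be pointed at it:

import streamSaver from 'streamsaver';
import { WritableStream } from 'web-streams-polyfill/ponyfill';

// Tell StreamSaver to use the ponyfill where the browser
// lacks a native WritableStream implementation
if (!window.WritableStream) {
  streamSaver.WritableStream = WritableStream;
}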
I need to access the base64 code of images when a user opens them inside Chrome.
Therefore I have created a Chrome extension which activates on right-click on images.
After the context-menu entry is selected, the base64 code is loaded via the URL, which is the local file path.
const getBase64FromUrl = async (url) => {
  const data = await fetch(url);
  const blob = await data.blob();
  return new Promise((resolve, reject) => {
    const reader = new FileReader();
    reader.onloadend = () => {
      const base64data = reader.result;
      resolve(base64data);
    };
    reader.onerror = reject;
    reader.readAsDataURL(blob);
  });
};
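For reference, this is roughly how the function is wired up to the context menu (the menu id here is hypothetical):

chrome.contextMenus.onClicked.addListener(async (info) => {
  if (info.menuItemId === 'get-base64') {
    // info.srcUrl is the URL of the right-clicked image
    const base64 = await getBase64FromUrl(info.srcUrl);
    console.log(base64);
  }
});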
And that's the error I got:
background.js:152 Not allowed to load local resource:
file:///C:/Users/cbec/Downloads/MicrosoftTeams-image.png
getBase64FromUrl @ background.js:152
background.js:152
Uncaught (in promise) TypeError:
Failed to fetch
at getBase64FromUrl (background.js:152:24)
at background.js:20:31
Happy about any hints/suggestions.
The problem was an easy one if one knows how Chrome extensions work :-)
You just have to add the permission in your manifest file so that all URLs are allowed:
"host_permissions": [
"<all_urls>"
]
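For context, here is a minimal Manifest V3 sketch showing where that key lives (the background.js name comes from the error log above; the name and version are placeholders). Note that for file:// URLs the user may additionally have to enable "Allow access to file URLs" on the extension's details page:

{
  "manifest_version": 3,
  "name": "Image Base64 Grabber",
  "version": "1.0",
  "permissions": ["contextMenus"],
  "host_permissions": ["<all_urls>"],
  "background": { "service_worker": "background.js" }
}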
See also, for more details on a different problem:
Chrome Extension Background script: Change Mouse Cursor while loading
I would like to read a file in JavaScript.
It would be best to read it line by line, but there is also the possibility of reading the whole file at once.
Generally there are quite a lot of implementations on the web, but I would like to read a file in a very simple way, by entering a hardcoded path and file name in the JavaScript code, without any buttons or anything like that. The pseudocode below:
<!DOCTYPE html>
<html>
<body>
  <script type="text/javascript">
    var file = FileReader("home/test.txt"); // hardcoded path and name of the file
    var listOfLines = [];
    var line = file.readLine();
    while (line != eof) {
      listOfLines.push(line);
      line = file.readLine();
    }
  </script>
</body>
</html>
Is there a possibility to do something like this?
Thank you.
That would be a pretty big security hole if your browser could simply read arbitrary files from your filesystem. Think: every banner on the web could read configuration files of your OS, and stuff like that.
Also check this question/answer: How to set a value to a file input in HTML?
Slightly different question, same problem.
But if the client gives your app the file you should process, that's something different.
// use fetch() to get the file content
function fetchFile(file) {
  const { name, lastModified, size, type } = file;
  return fetch(URL.createObjectURL(file))
    .then(response => response.text())
    .then(text => ({ name, lastModified, size, type, text }));
}

// use a FileReader to get the content
function readFile(file) {
  const { name, lastModified, size, type } = file;
  return new Promise((resolve, reject) => {
    const reader = new FileReader();
    reader.onload = () => resolve(reader.result);
    reader.onerror = reject;
    reader.readAsText(file);
  })
    .then(text => ({ name, lastModified, size, type, text }));
}

let input = document.querySelector('input');
input.onchange = function () {
  const promises = [...input.files].map(fetchFile); // fetch
  // const promises = [...input.files].map(readFile); // FileReader
  Promise.all(promises)
    .then(files => {
      console.log(files);
    });
};

<input type="file" />
This is just a quick snippet. You'd have to check whether there are further security measures/obstacles due to the fact that a web page is accessing local files; but since this snippet is working, I'm confident that you're good.
FileReader works well and is well supported, see:
https://caniuse.com/#search=FileReader
Even IE has partial support.
But it can read ONLY files provided by the user. You cannot read a file from a path hardcoded by the developer. Of course that is for security reasons; the JavaScript engine in the browser runs in a sandbox.
PS
Also, to read big CSV files from JavaScript, I suggest this library, which I have used many times with success:
https://www.papaparse.com/
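A minimal sketch of reading a user-selected CSV file row by row with Papa Parse (assuming the library is loaded globally as Papa):

document.querySelector('input[type=file]').onchange = (e) => {
  Papa.parse(e.target.files[0], {
    step: (results) => console.log('Row:', results.data), // called once per parsed row
    complete: () => console.log('Done reading the file')
  });
};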
reference:
https://www.w3.org/TR/FileAPI/
https://www.w3.org/TR/FileAPI/#dfn-filereader
https://developer.mozilla.org/it/docs/Web/API/FileReader
I am working with the Sportradar API. The API call that I am making returns a PNG image. So far I have done this:
const apiCall = (url) => {
  return axios.get(url).then((data) => {
    if (data) {
      return Promise.resolve(data);
    } else {
      return Promise.reject();
    }
  });
};

// all data in data.assetlist
for (let i = 0; i < data.assetlist.length; i++) {
  let imageLink = data.assetlist[i].links[12].href;
  let url = `https://api.sportradar.us/nfl-images-p3/ap_premium${imageLink}?api_key=${api_key}`;
  apiCall(url).then((data) => {
    console.log(data);
    let blob = new Blob(data, { type: "image/png" }); // didn't compile with this line
  });
}
The above code is working fine and is returning the data. But the data is weird characters, which, if I understand it correctly, is because the image is binary and I am getting a raw stream of data.
I took reference from here and wrote this line:
let blob = new Blob(data, {type: "image/png"});
I didn't try this, because I am afraid that the data is so big it might crash my system (old laptop). If I am doing this right, I want to know how to save this blob to my system as a PNG file; and if not, then I want to know how I can convert this stream of data into an image, download it, and save it in a local directory.
If the image you get is already a blob, you can download it using this library:
https://github.com/eligrey/FileSaver.js
let FileSaver = require('file-saver');
FileSaver.saveAs(blob, "my_image.png");
I think you shouldn't be so worried about the size and just go for it.
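For that to work in the browser, the response has to be requested as a Blob in the first place; note also that the Blob constructor takes an array of parts, which is likely why new Blob(data, ...) failed. A sketch with axios (responseType is the relevant option):

const FileSaver = require('file-saver');

axios.get(url, { responseType: 'blob' }) // ask axios for a Blob instead of text
  .then((response) => {
    // response.data is already a Blob, so it can go straight to FileSaver
    FileSaver.saveAs(response.data, 'my_image.png');
  });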
After a lot of research, I found this solution and it worked (note this runs under Node.js, where the axios response body can be piped to a file):

// assumes the request was made with axios.get(url, { responseType: 'stream' })
const fs = require('fs');

apiCall(url).then((response) => {
  response.data.pipe(fs.createWriteStream('test.jpg'));
});
I was wondering if it was possible to stream data from JavaScript to the browser's download manager.
Using WebRTC, I stream data (from files > 1 GB) from one browser to another. On the receiver side, I store all this data in memory (as ArrayBuffers... so the data is essentially still in chunks), and I would like the user to be able to download it.
Problem: Blob objects have a maximum size of about 600 MB (depending on the browser), so I can't re-create the file from the chunks. Is there a way to stream these chunks so that the browser downloads them directly?
If you want to fetch a large file blob from an API or URL, you can use StreamSaver.
npm install streamsaver
Then you can do something like this:
import { createWriteStream } from 'streamsaver';

export const downloadFile = (url, fileName) => {
  return fetch(url).then(res => {
    const fileStream = createWriteStream(fileName);
    const writer = fileStream.getWriter();
    if (res.body.pipeTo) {
      writer.releaseLock();
      return res.body.pipeTo(fileStream);
    }
    const reader = res.body.getReader();
    const pump = () =>
      reader
        .read()
        .then(({ value, done }) => (done ? writer.close() : writer.write(value).then(pump)));
    return pump();
  });
};
and you can use it like this:
const url = "http://urltobigfile";
const fileName = "bigfile.zip";
downloadFile(url, fileName).then(() => { alert('done'); });
Following @guest271314's advice, I added StreamSaver.js to my project, and I successfully received files bigger than 1 GB on Chrome. According to the documentation, it should work for files up to 15 GB, but my browser crashed before that (the maximum file size was about 4 GB for me).
Note I: to avoid the Blob max size limitation, I also tried manually appending data to the href field of an <a></a>, but it failed with files of about 600 MB...
Note II: as amazing as it might seem, the basic technique using createObjectURL works perfectly fine on Firefox for files up to 4 GB!
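For reference, that basic technique looks something like this (a sketch; chunks is assumed to be the array of received ArrayBuffers):

// Assemble the received chunks into a Blob and hand it to the browser
const blob = new Blob(chunks, { type: 'application/octet-stream' });
const a = document.createElement('a');
a.href = URL.createObjectURL(blob);
a.download = 'received-file.bin';
a.click();
setTimeout(() => URL.revokeObjectURL(a.href)); // free the memory afterwards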
I want to download an encrypted file from my server, decrypt it and save it locally. I want to decrypt the file and write it locally as it is being downloaded rather than waiting for the download to finish, decrypting it and then putting the decrypted file in an anchor tag. The main reason I want to do this is so that with large files the browser does not have to store hundreds of megabytes or several gigabytes in memory.
This is only going to be possible with a combination of service worker + fetch + streams.
A few browsers have service workers and fetch, but even fewer support fetch with streaming (Blink):
new Response(new ReadableStream({...}))
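Roughly, the interception inside the service worker looks like this (a simplified sketch, not StreamSaver's actual code):

// In the service worker: answer an intercepted request with a ReadableStream,
// so the browser's download manager starts saving while data is still arriving
self.addEventListener('fetch', (event) => {
  const stream = new ReadableStream({
    start(controller) {
      // call controller.enqueue(uint8Chunk) as chunks arrive from the page,
      // then controller.close() when done
    }
  });
  event.respondWith(new Response(stream, {
    headers: { 'Content-Disposition': 'attachment; filename="filename.txt"' }
  }));
});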
I have built a streaming file saver lib that communicates with a service worker in order to intercept network requests: StreamSaver.js
It's a little bit different from Node's streams; here is an example:
function unencrypt(chunk) {
  // should return a decrypted Uint8Array; this placeholder passes data through
  return new Uint8Array(chunk)
}

// We use fetch instead of xhr, since fetch has streaming support
fetch(url).then(res => {
  // create a writable stream + intercept a network response
  const fileStream = streamSaver.createWriteStream('filename.txt')
  const writer = fileStream.getWriter()
  // stream the response
  const reader = res.body.getReader()
  const pump = () => reader.read()
    .then(({ value, done }) => {
      if (done) return writer.close()
      let chunk = unencrypt(value)
      // Write one chunk, then get the next one
      writer.write(chunk) // returns a promise
      // While the write stream can handle the watermark,
      // read more data
      return writer.ready.then(pump)
    })
  // Start the reader
  pump().then(() =>
    console.log('Closed the stream, Done writing')
  )
})
There are also two other ways you can get a streaming response with xhr, but they're not standard and it doesn't matter whether you use them (responseType = ms-stream || moz-chunked-arrayBuffer), because StreamSaver depends on fetch + ReadableStream anyway and can't be used any other way.
Later you will be able to do something like this, when WritableStream + transform streams get implemented as well:
fetch(url).then(res => {
  const fileStream = streamSaver.createWriteStream('filename.txt')
  res.body
    .pipeThrough(unencrypt) // here unencrypt would be a TransformStream
    .pipeTo(fileStream)
    .then(done)
})
It's also worth mentioning that the default download manager is commonly associated with background downloads, so people sometimes close the tab when they see the download start. But this is all happening in the main thread, so you need to warn the user when they leave:
window.onbeforeunload = function (e) {
  if (download_is_done()) return
  var dialogText = 'Download is not finished, leaving the page will abort the download'
  e.returnValue = dialogText
  return dialogText
}
A new solution has arrived: showSaveFilePicker/FileSystemWritableFileStream, supported in Chrome, Edge, and Opera since October 2020 (and with a ServiceWorker-based shim for Firefox, from the author of the other major answer!), allows you to do this directly:
async function streamDownloadDecryptToDisk(url, DECRYPT) {
  // create readable stream for ciphertext
  let rs_src = fetch(url).then(response => response.body);

  // create writable stream for file
  let ws_dest = window.showSaveFilePicker().then(handle => handle.createWritable());

  // create transform stream for decryption
  let ts_dec = new TransformStream({
    async transform(chunk, controller) {
      controller.enqueue(await DECRYPT(chunk));
    }
  });

  // stream cleartext to file
  let rs_clear = rs_src.then(s => s.pipeThrough(ts_dec));
  return (await rs_clear).pipeTo(await ws_dest);
}
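Note that showSaveFilePicker generally has to be invoked from a user gesture; a minimal hookup might look like this (the button id, URL, and pass-through DECRYPT are placeholders):

document.querySelector('#download-btn').onclick = () =>
  streamDownloadDecryptToDisk('https://example.com/file.enc', async (chunk) => chunk)
    .then(() => console.log('done'));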
Depending on performance (if you're trying to compete with MEGA, for instance), you might also consider modifying DECRYPT(chunk) to allow you to use ReadableStreamBYOBReader with it:
…zero-copy reading from an underlying byte source. It is used for efficient copying from underlying sources where the data is delivered as an "anonymous" sequence of bytes, such as files.
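A rough sketch of what BYOB reading looks like (this assumes the stream is a byte stream, which fetch response bodies are in supporting browsers; the buffer size is arbitrary):

async function readWithByob(stream, DECRYPT) {
  const reader = stream.getReader({ mode: 'byob' });
  let buffer = new ArrayBuffer(64 * 1024);
  while (true) {
    // the buffer is transferred to the stream and handed back as value.buffer
    const { value, done } = await reader.read(new Uint8Array(buffer));
    if (done) break;
    await DECRYPT(value);  // process the filled view
    buffer = value.buffer; // reuse the same allocation for the next read
  }
}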
For security reasons, browsers do not allow piping an incoming readable stream directly to the local file system, so you have two ways to solve it:
window.open(Resource_URL): download the resource in a new window with Content-Disposition set to "attachment";
<a download href="path/to/resource"></a>: use the "download" attribute of an AnchorElement to download the stream to the hard disk (a sketch follows below).
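A tiny sketch of the anchor approach triggered from script (the path is a placeholder):

const a = document.createElement('a');
a.href = 'path/to/resource'; // server endpoint that streams the file
a.download = 'file.bin';     // suggested file name
a.click();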
Hope these help :)