Ionic read file from system and put it in canvas - javascript

I'm trying to read a file stored in the file system (Android) and then put the content into a canvas using the pdfjs library, so I can render it in a view. I did the same with the system PDF viewer and it worked, but I later need to do some painting and manipulation on it, so it can't be displayed in an external viewer; it must stay within my app.
Rendering of my PDF works fine, since I have tested it in live-reload mode.
Below is the reading code:
readFile(pathToFile) {
    this.file.resolveLocalFilesystemUrl(pathToFile).then((fileEntry: any) => {
        fileEntry.file((file) => {
            var reader = new FileReader();
            reader.onloadend = (event) => {
                const x = event.target as any;
                // let sliced = x._result.slice(x._result.indexOf(',') + 1, x._result.length);
                console.log('item', x);
                console.log('item', x.result);
                console.log('buffer', new Uint8Array(x.result));
                // console.log('64', new Uint8Array(x._result));
                // const bytes = this.base64ToUint8Array(sliced)
                this.renderPDF(x.result, this.container.nativeElement, 1);
            };
            reader.readAsArrayBuffer(file);
        });
    });
}
Here are the logs. As you can see, 'pdf1' is the last log, so the promise from getDocument never resolves:
renderPDF(url, canvasContainer, scale) {
    console.log('pdf1');
    this.pdfCreator.disableWorker = true;
    this.pdfCreator
        .getDocument(url)
        .then((doc) => {
            this.doc = doc;
            console.log('pdf2');
            this.renderPages(canvasContainer, scale);
        })
        .catch(err => console.log(err));
}
I have spent two days on it without success...

I think something went wrong with the URL. Could you please try this:
readFile(pathToFile) {
    this.file.resolveLocalFilesystemUrl(pathToFile).then((fileEntry: any) => {
        fileEntry.file((file) => {
            var reader = new FileReader();
            reader.onloadend = (event) => {
                const x = event.target as any;
                console.log('item', x);
                console.log('item', x.result);
                console.log('buffer', new Uint8Array(x.result));
                // Wrap the bytes in a Blob and give pdf.js a blob: URL instead of the raw buffer.
                var blob = new Blob([new Uint8Array(x.result)], { type: 'application/pdf' });
                var url = URL.createObjectURL(blob);
                this.renderPDF(url, this.container.nativeElement, 1);
            };
            reader.readAsArrayBuffer(file);
        });
    });
}
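Alternatively, pdf.js can take the raw bytes directly, which sidesteps URL handling on the device entirely. A minimal sketch, assuming a pdfjs-dist build whose getDocument accepts a data option (in newer builds getDocument returns a loading task and you use getDocument(...).promise instead):

reader.onloadend = (event) => {
    const x = event.target as any;
    // Pass the ArrayBuffer contents straight to pdf.js instead of a URL.
    const bytes = new Uint8Array(x.result);
    this.renderPDF({ data: bytes }, this.container.nativeElement, 1);
};

Since renderPDF just forwards its first argument to getDocument, this should work without other changes.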

Related

knowing whether a fetch is requesting or responding [duplicate]

I'm struggling to find documentation or examples of implementing an upload progress indicator using fetch.
This is the only reference I've found so far, which states:
Progress events are a high level feature that won't arrive in fetch for now. You can create your own by looking at the Content-Length header and using a pass-through stream to monitor the bytes received.
This means you can explicitly handle responses without a Content-Length differently. And of course, even if Content-Length is there it can be a lie. With streams you can handle these lies however you want.
How would I write "a pass-through stream to monitor the bytes" sent? If it makes any sort of difference, I'm trying to do this to power image uploads from the browser to Cloudinary.
NOTE: I am not interested in the Cloudinary JS library, as it depends on jQuery and my app does not. I'm only interested in the stream processing necessary to do this with native javascript and Github's fetch polyfill.
https://fetch.spec.whatwg.org/#fetch-api
Streams are starting to land in the web platform (https://jakearchibald.com/2016/streams-ftw/) but it's still early days.
Soon you'll be able to provide a stream as the body of a request, but the open question is whether the consumption of that stream relates to bytes uploaded.
Particular redirects can result in data being retransmitted to the new location, but streams cannot "restart". We can fix this by turning the body into a callback which can be called multiple times, but we need to be sure that exposing the number of redirects isn't a security leak, since it'd be the first time on the platform JS could detect that.
Some are questioning whether it even makes sense to link stream consumption to bytes uploaded.
Long story short: this isn't possible yet, but in future this will be handled either by streams, or some kind of higher-level callback passed into fetch().
My solution is to use axios, which supports this pretty well:
axios.request({
method: "post",
url: "/aaa",
data: myData,
onUploadProgress: (p) => {
console.log(p);
//this.setState({
//fileprogress: p.loaded / p.total
//})
}
}).then (data => {
//this.setState({
//fileprogress: 1.0,
//})
})
I have an example of using this in React on GitHub.
fetch: not possible yet
It sounds like upload progress will eventually be possible with fetch once it supports a ReadableStream as the body. This is currently not implemented, but it's in progress. I think the code will look something like this:
warning: this code does not work yet, still waiting on browsers to support it
async function main() {
    const blob = new Blob([new Uint8Array(10 * 1024 * 1024)]); // any Blob, including a File
    const progressBar = document.getElementById("progress");
    const totalBytes = blob.size;
    let bytesUploaded = 0;

    const blobReader = blob.stream().getReader();
    const progressTrackingStream = new ReadableStream({
        async pull(controller) {
            const result = await blobReader.read();
            if (result.done) {
                console.log("completed stream");
                controller.close();
                return;
            }
            controller.enqueue(result.value);
            bytesUploaded += result.value.byteLength;
            console.log("upload progress:", bytesUploaded / totalBytes);
            progressBar.value = bytesUploaded / totalBytes;
        },
    });

    const response = await fetch("https://httpbin.org/put", {
        method: "PUT",
        headers: {
            "Content-Type": "application/octet-stream"
        },
        body: progressTrackingStream,
    });
    console.log("success:", response.ok);
}
main().catch(console.error);
upload: <progress id="progress" />
workaround: good ol' XMLHttpRequest
Instead of fetch(), it's possible to use XMLHttpRequest to track upload progress — the xhr.upload object emits a progress event.
async function main() {
    const blob = new Blob([new Uint8Array(10 * 1024 * 1024)]); // any Blob, including a File
    const uploadProgress = document.getElementById("upload-progress");
    const downloadProgress = document.getElementById("download-progress");

    const xhr = new XMLHttpRequest();
    const success = await new Promise((resolve) => {
        xhr.upload.addEventListener("progress", (event) => {
            if (event.lengthComputable) {
                console.log("upload progress:", event.loaded / event.total);
                uploadProgress.value = event.loaded / event.total;
            }
        });
        xhr.addEventListener("progress", (event) => {
            if (event.lengthComputable) {
                console.log("download progress:", event.loaded / event.total);
                downloadProgress.value = event.loaded / event.total;
            }
        });
        xhr.addEventListener("loadend", () => {
            resolve(xhr.readyState === 4 && xhr.status === 200);
        });
        xhr.open("PUT", "https://httpbin.org/put", true);
        xhr.setRequestHeader("Content-Type", "application/octet-stream");
        xhr.send(blob);
    });
    console.log("success:", success);
}
main().catch(console.error);
upload: <progress id="upload-progress"></progress><br/>
download: <progress id="download-progress"></progress>
Update: as the accepted answer says, it's impossible for now, but the code below handled our problem for some time. I should add that eventually we had to switch to a library based on XMLHttpRequest.
const response = await fetch(url);
const total = Number(response.headers.get('content-length'));
const reader = response.body.getReader();
let bytesReceived = 0;

while (true) {
    const result = await reader.read();
    if (result.done) {
        console.log('Fetch complete');
        break;
    }
    bytesReceived += result.value.length;
    console.log('Received', bytesReceived, 'bytes of data so far');
}
thanks to this link: https://jakearchibald.com/2016/streams-ftw/
As already explained in the other answers, it is not possible with fetch, but it is with XHR. Here is my a-little-more-compact XHR solution:
const uploadFiles = (url, files, onProgress) =>
    new Promise((resolve, reject) => {
        const xhr = new XMLHttpRequest();
        xhr.upload.addEventListener('progress', e => onProgress(e.loaded / e.total));
        xhr.addEventListener('load', () => resolve({ status: xhr.status, body: xhr.responseText }));
        xhr.addEventListener('error', () => reject(new Error('File upload failed')));
        xhr.addEventListener('abort', () => reject(new Error('File upload aborted')));
        xhr.open('POST', url, true);
        const formData = new FormData();
        Array.from(files).forEach((file, index) => formData.append(index.toString(), file));
        xhr.send(formData);
    });
Works with one or multiple files.
If you have a file input element like this:
<input type="file" multiple id="fileUpload" />
Call the function like this:
document.getElementById('fileUpload').addEventListener('change', async e => {
    const onProgress = progress => console.log('Progress:', `${Math.round(progress * 100)}%`);
    const response = await uploadFiles('/api/upload', e.currentTarget.files, onProgress);
    if (response.status >= 400) {
        throw new Error(`File upload failed - Status code: ${response.status}`);
    }
    console.log('Response:', response.body);
});
Also works with the e.dataTransfer.files you get from a drop event when building a file drop zone.
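For completeness, a sketch of wiring the same uploadFiles helper to a drop zone (the dropZone element id is just for illustration):

const dropZone = document.getElementById('dropZone');
// Prevent the browser from navigating to the dropped file.
dropZone.addEventListener('dragover', e => e.preventDefault());
dropZone.addEventListener('drop', async e => {
    e.preventDefault();
    const onProgress = p => console.log('Progress:', `${Math.round(p * 100)}%`);
    await uploadFiles('/api/upload', e.dataTransfer.files, onProgress);
});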
I don't think it's possible. The draft states:
it is currently lacking [in comparison to XHR] when it comes to request progression
(old answer):
The first example in the Fetch API chapter gives some insight on how to do this:
If you want to receive the body data progressively:
function consume(reader) {
    var total = 0
    return new Promise((resolve, reject) => {
        function pump() {
            reader.read().then(({done, value}) => {
                if (done) {
                    resolve()
                    return
                }
                total += value.byteLength
                log(`received ${value.byteLength} bytes (${total} bytes in total)`)
                pump()
            }).catch(reject)
        }
        pump()
    })
}

fetch("/music/pk/altes-kamuffel.flac")
    .then(res => consume(res.body.getReader()))
    .then(() => log("consumed the entire body without keeping the whole thing in memory!"))
    .catch(e => log("something went wrong: " + e))
Apart from their use of the Promise constructor antipattern, you can see that response.body is a Stream from which you can read byte by byte using a Reader, and you can fire an event or do whatever you like (e.g. log the progress) for each of them.
However, the Streams spec doesn't appear to be quite finished, and I have no idea whether this already works in any fetch implementation.
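For what it's worth, the same consume loop can be written without the Promise constructor antipattern by using async/await; a sketch (using console.log instead of the log helper from the spec example):

async function consume(reader) {
    let total = 0;
    while (true) {
        const { done, value } = await reader.read();
        if (done) return;
        total += value.byteLength;
        console.log(`received ${value.byteLength} bytes (${total} bytes in total)`);
    }
}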
with fetch: now possible with Chrome >= 105 🎉
How to:
https://developer.chrome.com/articles/fetch-streaming-requests/
Currently not supported by other browsers (though that may have changed by the time you read this; please edit my answer accordingly).
Feature detection (source)
const supportsRequestStreams = (() => {
let duplexAccessed = false;
const hasContentType = new Request('', {
body: new ReadableStream(),
method: 'POST',
get duplex() {
duplexAccessed = true;
return 'half';
},
}).headers.has('Content-Type');
return duplexAccessed && !hasContentType;
})();
HTTP >= 2 required
The fetch will be rejected if the connection is HTTP/1.x.
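Putting it together, a streaming upload looks like the earlier ReadableStream example plus the mandatory duplex option; a sketch, reusing the progressTrackingStream idea from the answer above:

// Requires Chrome >= 105, an HTTP/2+ connection, and the feature check above.
const response = await fetch("https://httpbin.org/put", {
    method: "PUT",
    headers: { "Content-Type": "application/octet-stream" },
    body: progressTrackingStream, // a ReadableStream that counts bytes as they are pulled
    duplex: "half", // required whenever the request body is a stream
});
console.log("success:", response.ok);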
Since none of the answers solve the problem, here is a workaround for implementation's sake: you can measure the upload speed with a small initial chunk of known size, and then estimate the total upload time as content-length / upload-speed. You can use that time as an estimate, as sketched below.
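A rough sketch of that idea: time a small probe upload, derive bytes per second, then extrapolate (the /probe endpoint and the probe size are made up for illustration):

async function estimateUploadSeconds(file, probeBytes = 64 * 1024) {
    const probe = file.slice(0, probeBytes);
    const started = performance.now();
    await fetch('/probe', { method: 'POST', body: probe }); // hypothetical endpoint
    const elapsed = (performance.now() - started) / 1000;
    const bytesPerSecond = probeBytes / elapsed;
    return file.size / bytesPerSecond; // estimated time for the full upload
}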
A possible workaround would be to utilize the new Request() constructor, then check the Request.bodyUsed Boolean attribute
The bodyUsed attribute’s getter must return true if disturbed, and
false otherwise.
to determine if the stream is disturbed
An object implementing the Body mixin is said to be disturbed if
body is non-null and its stream is disturbed.
Return the fetch() Promise from within .then() chained to a recursive .read() call of a ReadableStream when Request.bodyUsed is equal to true.
Note, the approach does not read the bytes of the Request.body as the bytes are streamed to the endpoint. Also, the upload could complete well before any response is returned in full to the browser.
const [input, progress, label] = [
    document.querySelector("input"),
    document.querySelector("progress"),
    document.querySelector("label")
];

const url = "/path/to/server/";

input.onmousedown = () => {
    label.innerHTML = "";
    progress.value = "0";
};

input.onchange = (event) => {
    const file = event.target.files[0];
    const filename = file.name;
    progress.max = file.size;

    const request = new Request(url, {
        method: "POST",
        body: file,
        cache: "no-store"
    });

    const upload = settings => fetch(settings);

    const uploadProgress = new ReadableStream({
        start(controller) {
            console.log("starting upload, request.bodyUsed:", request.bodyUsed);
            controller.enqueue(request.bodyUsed);
        },
        pull(controller) {
            if (request.bodyUsed) {
                controller.close();
            }
            controller.enqueue(request.bodyUsed);
            console.log("pull, request.bodyUsed:", request.bodyUsed);
        },
        cancel(reason) {
            console.log(reason);
        }
    });

    const [fileUpload, reader] = [
        upload(request)
            .catch(e => {
                reader.cancel();
                throw e;
            }),
        uploadProgress.getReader()
    ];

    const processUploadRequest = ({value, done}) => {
        if (value || done) {
            console.log("upload complete, request.bodyUsed:", request.bodyUsed);
            // set `progress.value` to `progress.max` here
            // if not awaiting server response
            // progress.value = progress.max;
            return reader.closed.then(() => fileUpload);
        }
        console.log("upload progress:", value);
        progress.value = +progress.value + 1;
        return reader.read().then(result => processUploadRequest(result));
    };

    reader.read().then(({value, done}) => processUploadRequest({value, done}))
        .then(response => response.text())
        .then(text => {
            console.log("response:", text);
            progress.value = progress.max;
            input.value = "";
        })
        .catch(err => console.log("upload error:", err));
};
I fished around for some time on this, so for everyone who may come across this issue too, here is my solution:
const form = document.querySelector('form');
const status = document.querySelector('#status');

// When the form gets submitted.
form.addEventListener('submit', async function (event) {
    // Cancel default behavior (form submit)
    event.preventDefault();
    // Inform the user that the upload has begun
    status.innerText = 'Uploading..';
    // Create FormData from the form
    const formData = new FormData(form);
    // Open a request to the origin
    const request = await fetch('https://httpbin.org/post', { method: 'POST', body: formData });
    // Get the amount of bytes we're about to transmit
    const bytesToUpload = request.headers.get('content-length');
    // Create a reader from the request body
    const reader = request.body.getReader();
    // Cache how much data we have already sent
    let bytesUploaded = 0;
    // Get the first chunk from the request reader
    let chunk = await reader.read();
    // While we have more chunks to go
    while (!chunk.done) {
        // Increase the amount of bytes transmitted.
        bytesUploaded += chunk.value.length;
        // Inform the user how far we are
        status.innerText = 'Uploading (' + (bytesUploaded / bytesToUpload * 100).toFixed(2) + ')...';
        // Read the next chunk
        chunk = await reader.read();
    }
});
const req = await fetch('./foo.json');
const total = Number(req.headers.get('content-length'));
let loaded = 0;

// ReadableStream is async-iterable in newer browsers, so the body can be iterated directly.
for await (const { length } of req.body) {
    loaded += length;
    const progress = ((loaded / total) * 100).toFixed(2); // toFixed(2) means two digits after the floating point
    console.log(`${progress}%`); // or yourDiv.textContent = `${progress}%`;
}
The key part is the ReadableStream <obj_response.body>.
Sample:
// Log each read result; return true to keep reading, false when done.
const parse = (result) => {
    console.log(result);
    //...
    return result.value ? true : false;
};

fetch('').then((response) => {
    const reader = response.body.getReader();
    const pump = () => reader.read().then(parse).then((cont) => (cont ? pump() : undefined));
    return pump();
});
You can test it by running it on a huge page, e.g. https://html.spec.whatwg.org/ or https://html.spec.whatwg.org/print.pdf . Press Ctrl+Shift+J to open the console and paste the code in.
(Tested on Chrome.)

Progress for a fetch blob javascript

I'm trying to use a JavaScript fetch to grab a video file. I am able to get the file downloaded and get the blob URL, but I can't seem to get the progress while it's downloading.
I tried this:
let response = await fetch('test.mp4');
const reader = response.body.getReader();
const contentLength = response.headers.get('Content-Length');
let receivedLength = 0;
d = document.getElementById('progress_bar');

while (true) {
    const { done, value } = await reader.read();
    if (done) {
        break;
    }
    receivedLength += value.length;
    d.innerHTML = "Bytes loaded:" + receivedLength;
}

const blob = await response.blob();
var vid = URL.createObjectURL(blob);
The problem is that I get "Response.blob: Body has already been consumed". I see that the reader.read() is probably doing that. How do I just get the amount of data received and then get a blob URL at the end of it?
Thanks.
Update:
My first attempt collected the chunks as they downloaded and then put them back together, with a large (2-3x the size of the video) memory footprint. Using a ReadableStream has a much lower memory footprint (memory usage hovers around 150MB for a 1.1GB mkv). Code largely adapted from the snippet here with only minimal modifications from me:
https://github.com/AnthumChris/fetch-progress-indicators/blob/master/fetch-basic/supported-browser.js
<div id="progress_bar"></div>
<video id="video_player"></video>
const elProgress = document.getElementById('progress_bar'),
    player = document.getElementById('video_player');

function getVideo2() {
    let contentType = 'video/mp4';
    fetch('$pathToVideo.mp4')
        .then(response => {
            const contentEncoding = response.headers.get('content-encoding');
            const contentLength = response.headers.get(contentEncoding ? 'x-file-size' : 'content-length');
            contentType = response.headers.get('content-type') || contentType;
            if (contentLength === null) {
                throw Error('Response size header unavailable');
            }
            const total = parseInt(contentLength, 10);
            let loaded = 0;
            return new Response(
                new ReadableStream({
                    start(controller) {
                        const reader = response.body.getReader();
                        read();
                        function read() {
                            reader.read().then(({done, value}) => {
                                if (done) {
                                    controller.close();
                                    return;
                                }
                                loaded += value.byteLength;
                                progress({loaded, total});
                                controller.enqueue(value);
                                read();
                            }).catch(error => {
                                console.error(error);
                                controller.error(error);
                            });
                        }
                    }
                })
            );
        })
        .then(response => response.blob())
        .then(blob => {
            let vid = URL.createObjectURL(blob);
            player.style.display = 'block';
            player.type = contentType;
            player.src = vid;
            elProgress.innerHTML += "<br /> Press play!";
        })
        .catch(error => {
            console.error(error);
        });
}

function progress({loaded, total}) {
    elProgress.innerHTML = Math.round(loaded / total * 100) + '%';
}
First Attempt (worse, suitable for smaller files)
My original approach. For a 1.1GB mkv, the memory usage creeps up to 1.3GB while the file is downloading, then spikes to about 3.5GB when the chunks are being combined. Once the video starts playing, the tab's memory usage goes back down to ~200MB, but Chrome's overall usage stays over 1GB.
Instead of calling response.blob() to get the blob, you can construct the blob yourself by accumulating each chunk of the video (value). Adapted from the example here: https://javascript.info/fetch-progress#0d0g7tutne
//...
receivedLength += value.length;
chunks.push(value);
//...

// ==> put the chunks into a Uint8Array that the Blob constructor can use
let Uint8Chunks = new Uint8Array(receivedLength), position = 0;
for (let chunk of chunks) {
    Uint8Chunks.set(chunk, position);
    position += chunk.length;
}

// ==> you may want to get the mimetype from the content-type header
const blob = new Blob([Uint8Chunks], { type: 'video/mp4' });

Microsoft Speech-to-Text SDK JS won't accept a file with a long array of bytes

I'm using Microsoft's Azure speech-to-text SDK to get text from a .wav file, using JavaScript. The problem is, the recognizer won't accept the File object and returns the error "Uncaught RangeError: source array is too long". Calling .slice(0, 2248) on the blob that is used to make the File object works correctly, returning the correct first word of the .wav file. But trying to slice the blob into chunks like (2249, 4497) returns the error "Uncaught RangeError: offset is outside the bounds of the DataView". I'm at a loss for how to either a) get the recognizer to accept a blob with a long source array or b) break the blob into chunks that aren't out of bounds. The .wav URL has been changed to dashes for anonymity and should be ignored. Any solutions are appreciated!
JS:
<script>
    // Get the wav file from a URL and create a File object with it
    function fromFile() {
        fetch("http://www.-----------.com/prod/wp-content/uploads/2020/12/cutafew.wav")
            .then(response => response.blob())
            .then(blob => {
                var file = new File([blob], "http://www.---------.com/prod/wp-content/uploads/2020/12/cutafew.wav", {
                    type: "audio/x-wav", lastModified: new Date().getTime()
                });
                // If the file was fetched successfully, do the following:
                var reader = new FileReader();
                var speechConfig = SpeechSDK.SpeechConfig.fromSubscription("f6abc3bfabc64f0d820d537c0d738788", "centralus");
                var audioConfig = SpeechSDK.AudioConfig.fromWavFileInput(file);
                var recognizer = new SpeechSDK.SpeechRecognizer(speechConfig, audioConfig);
                // Use the recognizer to convert the wav file to text
                recognizer.recognizing = (s, e) => {
                    console.log(e.result);
                    console.log(`RECOGNIZING: Text=${e.result.text}`);
                };
                recognizer.recognized = (s, e) => {
                    if (e.result.reason == ResultReason.RecognizedSpeech) {
                        console.log(`RECOGNIZED: Text=${e.result.text}`);
                    }
                    else if (e.result.reason == ResultReason.NoMatch) {
                        console.log("NOMATCH: Speech could not be recognized.");
                    }
                };
                recognizer.canceled = (s, e) => {
                    console.log(`CANCELED: Reason=${e.reason}`);
                    if (e.reason == CancellationReason.Error) {
                        console.log(`CANCELED: ErrorCode=${e.errorCode}`);
                        console.log(`CANCELED: ErrorDetails=${e.errorDetails}`);
                        console.log("CANCELED: Did you update the subscription info?");
                    }
                    recognizer.stopContinuousRecognitionAsync();
                };
                recognizer.sessionStopped = (s, e) => {
                    console.log("\n Session stopped event.");
                    recognizer.stopContinuousRecognitionAsync();
                };
                recognizer.startContinuousRecognitionAsync();
            })
            // Throw an error if the file couldn't be fetched
            .catch(err => console.error(err));
    }
    fromFile();
</script>
You can use the "Recognize from in-memory stream" example:
const fs = require('fs');
const sdk = require("microsoft-cognitiveservices-speech-sdk");

const speechConfig = sdk.SpeechConfig.fromSubscription("<paste-your-speech-key-here>", "<paste-your-speech-location/region-here>");

function fromStream() {
    let pushStream = sdk.AudioInputStream.createPushStream();

    fs.createReadStream("YourAudioFile.wav").on('data', function(arrayBuffer) {
        pushStream.write(arrayBuffer.slice());
    }).on('end', function() {
        pushStream.close();
    });

    let audioConfig = sdk.AudioConfig.fromStreamInput(pushStream);
    let recognizer = new sdk.SpeechRecognizer(speechConfig, audioConfig);
    recognizer.recognizeOnceAsync(result => {
        console.log(`RECOGNIZED: Text=${result.text}`);
        recognizer.close();
    });
}
fromStream();
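The question runs in the browser, where there is no fs, but the same push-stream idea works if you fetch the bytes first; a sketch, assuming the browser bundle exposes the same SpeechSDK globals used in the question:

function fromFetchedWav(url, speechConfig) {
    const pushStream = SpeechSDK.AudioInputStream.createPushStream();
    fetch(url)
        .then(response => response.arrayBuffer())
        .then(arrayBuffer => {
            pushStream.write(arrayBuffer); // the SDK buffers and chunks this internally
            pushStream.close();
        });
    const audioConfig = SpeechSDK.AudioConfig.fromStreamInput(pushStream);
    const recognizer = new SpeechSDK.SpeechRecognizer(speechConfig, audioConfig);
    recognizer.recognizeOnceAsync(result => {
        console.log(`RECOGNIZED: Text=${result.text}`);
        recognizer.close();
    });
}

This avoids handing the recognizer one giant File object, which appears to be what triggered the RangeError.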

How can I make .push work in FileReader.onload()?

I have the following code in which I'm trying to read a file using FileReader and put its contents in an array. Only after all the data has been pushed do I want to continue. Here's what I have currently:
const confirm = () => {
    var reader = new FileReader();
    let images = [];
    reader.onload = function(e) {
        images.push(e.target.result);
    };
    reader.readAsDataURL(formValues.images[0].file);
    console.log('images base 64');
    console.log(images); // this prints the empty array.
};
I want to continue on only after images have been updated with the file contents. How can I do that?
-- edit --
In fact, I want to add multiple files to the array, so I tried the following.
var reader = new FileReader();
let images = [];
reader.onload = function(e) {
    images.push(e.target.result);
    console.log('images base 64');
    console.log(images);
};
for (let i = 0; i < formValues.images.length; i++) {
    reader.readAsDataURL(formValues.images[i].file);
}
But this gives the error "InvalidStateError: The object is in an invalid state."
You are trying to read the result before it loads. Move the console.log(images) inside the onload function.
const confirm = () => {
    var reader = new FileReader();
    let images = [];
    reader.onload = function(e) {
        images.push(e.target.result);
        console.log(images);
    };
    reader.readAsDataURL(formValues.images[0].file);
    console.log('images base 64');
};
It works for me; try it like this:
this.file64 = [];
reader.onloadend = (e) => {
    var result = reader.result;
    console.log(result);
    this.file64.push(result);
};
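For the multiple-file case from the edit above: a single FileReader can only run one read at a time, which is what raises the InvalidStateError. A sketch using one reader per file, each wrapped in a promise so all results can be awaited together:

const readAsDataURL = (file) =>
    new Promise((resolve, reject) => {
        const reader = new FileReader(); // one reader per file
        reader.onload = () => resolve(reader.result);
        reader.onerror = () => reject(reader.error);
        reader.readAsDataURL(file);
    });

const confirm = async () => {
    const images = await Promise.all(
        Array.from(formValues.images).map(item => readAsDataURL(item.file))
    );
    console.log('images base 64');
    console.log(images); // all files are loaded at this point
};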

Store contents of fileReader in a variable

I'm trying to do something that I feel should be relatively straightforward.
I just want to read a text file's content and store it in a variable.
Here is my code:
readToCSV(file) {
    // console.log(file);
    let cat = "";
    var reader = new FileReader();
    reader.onloadend = function (event) {
        if (event.target.readyState == FileReader.DONE) {
            var data = event.target.result;
        }
        console.log("data:", data);
        cat = data;
    };
    reader.readAsText(file);
    console.log("cat:", cat);
};
I've tried just about everything and keep getting undefined outside the function. Do I just have to nest any further processing inside the onloadend function? That seems silly.
OK, so I found a workaround and wanted to post it for anyone else who is trying to just get file contents into a variable (seriously, why should that be so difficult?).
I ended up wrapping the reader in a promise and storing that. New code as follows:
async handleFileUpload(e) {
    console.log("e", e);
    await this.setState({ file: e[0] });
    console.log("state: ", this.state.file);
    const fileContents = await this.readToText(this.state.file);
    console.log("fc:", fileContents);
    //await this.readToCSV(fileContents)
}

async readToText(file) {
    const temporaryFileReader = new FileReader();
    return new Promise((resolve, reject) => {
        temporaryFileReader.onerror = () => {
            temporaryFileReader.abort();
            reject(new DOMException("Problem parsing input file."));
        };
        temporaryFileReader.onload = () => {
            resolve(temporaryFileReader.result);
        };
        temporaryFileReader.readAsText(file);
    });
};
I just want to read a text file's content and store it in a variable.
The way you're currently doing it, cat will be an empty string because of the async nature of the FileReader.
I would do it with a callback.
var cat = '';
const reader = new FileReader();
reader.readAsText(document.querySelector('input[type="file"]').files[0]);
reader.onload = () => storeResults(reader.result);

function storeResults(result) {
    cat = result;
}
This way you get the job done and don't have to nest further processing directly within onloadend.
Hope that helps!
