Blazor JsInterop Invoke after each promise resolves - javascript

Trying to get things working correctly in a Blazor server-side app. I have an uploader component, but it doesn't InvokeAsync after each promise is resolved on the client side. It waits for all images to load and then invokes the C# method. How would I get it to invoke the C# method after each image is loaded?
I know JavaScript is single-threaded, but I also tried with web workers and it still does the same thing.
Sample repo can be found here
https://dev.azure.com/twinnaz/BlazorUploader
Gif of what's happening.
https://imgur.com/a/aF4AQUf
It should be possible to invoke the C# method asynchronously from the JavaScript file after each image, if my thinking is correct.

This issue involves both Blazor and JavaScript. On the JavaScript side, you are not awaiting GenerateImageData. Use a modern for…of loop instead, in which await works as expected:
GetFileInputFiles = async (instance, fileInput) => {
    const files = Array.from(fileInput.files);
    for (const image of files) {
        const imagedata = await readUploadedFileAsText(image);
        console.log("sending");
        await instance.invokeMethodAsync('GenerateImageData', imagedata);
        console.log("sent");
    }
};
On the Blazor side, I suggest rewriting GenerateImageData as:
[JSInvokable]
public async Task GenerateImageData(string data)
{
    System.Console.WriteLine("Receiving");
    ImageBase64.Add(data);
    await Task.Delay(1);
    StateHasChanged();
    System.Console.WriteLine("Received");
}
More detailed info about JS issue: Using async/await with a forEach loop
More detailed info about Blazor issue: Blazor - Display wait or spinner on API call
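The forEach pitfall the first link describes can be sketched in isolation. This is a hypothetical reduction, not the original uploader code; `delay` stands in for the async upload work:

```javascript
// Simulate an async step that resolves after a short delay.
const delay = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

async function processSequentially(items) {
    const log = [];
    // for...of awaits each item before starting the next
    for (const item of items) {
        await delay(5);
        log.push(`done ${item}`);
    }
    return log;
}

function processWithForEach(items, log) {
    // forEach ignores the promises returned by the async callback:
    // every callback starts immediately, and forEach returns
    // before any of them finish
    items.forEach(async (item) => {
        await delay(5);
        log.push(`done ${item}`);
    });
}

const forEachLog = [];
processWithForEach([1, 2, 3], forEachLog);
console.log(forEachLog.length); // 0 — nothing has finished yet

processSequentially([1, 2, 3]).then((log) => console.log(log.length)); // 3
```

This is why the original component only saw updates after everything completed: all the invocations were in flight at once rather than interleaved with each resolved promise.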

GeneratePreviewImages = async (dotNet, fileInput) => {
    const files = Array.from(fileInput.files);
    const imageSrcs = files.map(file => getPreviewImageSrc(file));
    loop(imageSrcs, dotNet);
};

const loop = async (arr, dotNet) => {
    for await (const src of arr) {
        console.log(src);
        dotNet.invokeMethodAsync('GenerateImageData', src);
    }
};

const getPreviewImageSrc = async (file) => {
    return URL.createObjectURL(file);
};
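As an aside, for await...of also works over a plain array of promises, awaiting each element in array order even when a later promise settles earlier in real time. A minimal sketch, independent of the Blazor code above:

```javascript
// Resolve `value` after `ms` milliseconds.
const delayed = (value, ms) =>
    new Promise((resolve) => setTimeout(() => resolve(value), ms));

async function collectInOrder(promises) {
    const out = [];
    // Each element is awaited in sequence, so results arrive
    // in array order regardless of which promise settles first.
    for await (const value of promises) {
        out.push(value);
    }
    return out;
}

collectInOrder([delayed('a', 30), delayed('b', 10)])
    .then((out) => console.log(out)); // ['a', 'b']
```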


Can I build a WebWorker that executes arbitrary Javascript code?

I'd like to build a layer of abstraction over the WebWorker API that would allow (1) executing an arbitrary function over a webworker, and (2) wrapping the interaction in a Promise. At a high level, this would look something like this:
function bake() {
    ... // expensive calculation
    return 'mmmm, pizza'
}

async function handlePizzaButtonClick() {
    const pizza = await workIt(bake)
    eat(pizza)
}
(Obviously, methods with arguments could be added without much difficulty.)
My first cut at workIt looks like this:
async function workIt<T>(f: () => T): Promise<T> {
    const worker: Worker = new Worker('./unicorn.js') // no such worker, yet
    worker.postMessage(f)
    return new Promise<T>((resolve, reject) => {
        worker.onmessage = ({data}: MessageEvent) => resolve(data)
        worker.onerror = ({error}: ErrorEvent) => reject(error)
    })
}
This fails because functions are not structured-cloneable and thus can't be passed in worker messages. (The Promise wrapper part works fine.)
There are various options for serializing Javascript functions, some scarier than others. But before I go that route, am I missing something here? Is there another way to leverage a WebWorker (or anything that executes in a separate thread) to run arbitrary Javascript?
I thought an example would be useful in addition to my comment, so here's a basic (no error handling, etc.), self-contained example which loads the worker from an object URL:
Meta: I'm not posting it in a runnable code snippet view because the rendered iframe runs at a different origin (https://stacksnippets.net at the time I write this answer — see snippet output), which prevents success: in Chrome, I receive the error message "Refused to cross-origin redirects of the top-level worker script."
Anyway, you can just copy the text contents, paste it into your dev tools JS console right on this page, and execute it to see that it works. And, of course, it will work in a normal module in a same-origin context.
console.log(new URL(window.location.href).origin);

// Example candidate function:
// - pure
// - uses only syntax which is legal in worker module scope
async function get100LesserRandoms () {
    // If `getRandomAsync` were defined outside the function,
    // then this function would no longer be pure (it would be a closure)
    // and `getRandomAsync` would need to be a function accessible from
    // the scope of the `message` event handler within the worker,
    // else a `ReferenceError` would be thrown upon invocation
    const getRandomAsync = () => Promise.resolve(Math.random());
    const result = [];
    while (result.length < 100) {
        const n = await getRandomAsync();
        if (n < 0.5) result.push(n);
    }
    return result;
}

const workerModuleText =
    `self.addEventListener('message', async ({data: {id, fn}}) => self.postMessage({id, value: await eval(\`(\${fn})\`)()}));`;

const workerModuleSpecifier = URL.createObjectURL(
    new Blob([workerModuleText], {type: 'text/javascript'}),
);

const worker = new Worker(workerModuleSpecifier, {type: 'module'});

worker.addEventListener('message', ({data: {id, value}}) => {
    worker.dispatchEvent(new CustomEvent(id, {detail: value}));
});

function notOnMyThread (fn) {
    return new Promise(resolve => {
        const id = window.crypto.randomUUID();
        worker.addEventListener(id, ({detail}) => resolve(detail), {once: true});
        worker.postMessage({id, fn: fn.toString()});
    });
}

async function main () {
    const lesserRandoms = await notOnMyThread(get100LesserRandoms);
    console.log(lesserRandoms);
}

main();
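The fn.toString()/eval round-trip at the heart of that worker module can be exercised on its own, outside any worker. A minimal sketch of just that serialization step:

```javascript
// A pure function: its source text contains everything it needs.
async function double(x) {
    return x * 2;
}

// Serialize the function to text, as postMessage would require...
const source = double.toString();

// ...and revive it on the "other side" the same way the worker module
// does: eval of a parenthesized function expression.
const revived = eval(`(${source})`);

revived(21).then((result) => console.log(result)); // 42
```

The parentheses matter: without them, `eval` would treat the source as a function declaration statement rather than an expression, and nothing would be returned.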

Working with Node.js streams without callbacks

To send a PDF file from a Node.js server to a client I use the following code:
const pdf = printer.createPdfKitDocument(docDefinition);
const chunks = [];

pdf.on("data", (chunk) => {
    chunks.push(chunk);
});

pdf.on("end", () => {
    const pdfBuffered = `data:application/pdf;base64, ${Buffer.concat(chunks).toString("base64")}`;
    res.setHeader("Content-Type", "application/pdf");
    res.setHeader("Content-Length", pdfBuffered.length);
    res.send(pdfBuffered);
});

pdf.end();
Everything is working correctly; the only issue is that the stream here uses the callback approach rather than async/await.
I've found a possible solution:
const { pipeline } = require("stream/promises");

async function run() {
    await pipeline(
        fs.createReadStream('archive.tar'),
        zlib.createGzip(),
        fs.createWriteStream('archive.tar.gz')
    );
    console.log('Pipeline succeeded.');
}

run().catch(console.error);
But I can't figure out how to adapt the initial code to the one with stream/promises.
You can manually wrap your PDF code in a promise like this and then use it as a function that returns a promise:
function sendPDF(docDefinition) {
    return new Promise((resolve, reject) => {
        const pdf = printer.createPdfKitDocument(docDefinition);
        const chunks = [];
        pdf.on("data", (chunk) => {
            chunks.push(chunk);
        });
        pdf.on("end", () => {
            const pdfBuffered =
                `data:application/pdf;base64, ${Buffer.concat(chunks).toString("base64")}`;
            resolve(pdfBuffered);
        });
        pdf.on("error", reject);
        pdf.end();
    });
}
sendPDF(docDefinition).then(pdfBuffer => {
    res.setHeader("Content-Type", "application/pdf");
    res.setHeader("Content-Length", pdfBuffer.length);
    res.send(pdfBuffer);
}).catch(err => {
    console.log(err);
    res.sendStatus(500);
});
Because there are many data events, you can't promisify just the data portion. You will still have to listen for each data event and collect the data.
You can only convert a callback-API to async/await if the callback is intended to only be executed once.
The one you found online works, because you're just waiting for the whole stream to finish before the callback runs once. What you've got is callbacks that execute multiple times, on every incoming chunk of data.
There are other resources you can look at to make streams nicer to consume, like RxJS, or the ECMAScript proposal to add observables to the language. Both of these are designed to handle the scenario where a callback can execute multiple times — something that async/await cannot do.

How to use Clipboard API to write image to clipboard in Safari

The following code (adapted from here) successfully writes an image file to the clipboard upon a button click in Chrome:
document.getElementById('copy-button').addEventListener('click', async () => {
    try {
        const data = await fetch('image.png')
        const blob = await data.blob()
        await navigator.clipboard.write(
            [new ClipboardItem({[blob.type]: blob})]
        )
        console.log('success')
    } catch (err) {
        console.log(`${err.name}: ${err.message}`)
    }
})
(Similar code also works with chaining the promises with .then() or copying the contents of a <canvas> using .toBlob() with a callback function)
However, this fails in Safari, throwing a NotAllowedError. I suspect this is something to do with the asynchronous creation of the blob causing Safari to think that the call to write() is 'outside the scope of a user gesture (such as "click" or "touch" event handlers)', as described here, since control is released from the event handler during the await portions.
For example, the following code pre-loads the blob into a global variable when the script first runs, and the call to write() does not need to wait for any other async code to finish executing:
let imageBlob; // semicolon required, or the IIFE below parses as a call to imageBlob

(async function () {
    const data = await fetch('image.png')
    const blob = await data.blob()
    imageBlob = blob
    console.log('Image loaded into memory')
})()

document.getElementById('image-button-preload').addEventListener('click', () => {
    const clipItem = new ClipboardItem({[imageBlob.type]: imageBlob})
    navigator.clipboard.write([clipItem]).then(() => {
        console.log('success')
    }, (reason) => {
        console.log(reason)
    })
})
But this is clearly not ideal, especially if the image data is something dynamically created (e.g. in a canvas).
So, the question: How can I generate an image blob and write this to the clipboard upon a user action which Safari/webkit will accept? (Or, is this a bug in Safari/webkit's implementation of the API)
The solution (for Safari) is to assign a Promise as the value in the object you pass to ClipboardItem, like this:
document.getElementById('copy-button').addEventListener('click', async () => {
    try {
        const makeImagePromise = async () => {
            const data = await fetch('image.png')
            return await data.blob()
        }
        await navigator.clipboard.write(
            // `blob` is not in scope here, so the MIME type must be named directly
            [new ClipboardItem({'image/png': makeImagePromise()})]
        )
        console.log('success')
    } catch (err) {
        console.log(`${err.name}: ${err.message}`)
    }
})
That way you're calling clipboard.write without awaiting, and Safari will await the promise for you that generates the image.
Note: Other browsers may not support passing a promise to ClipboardItem, so you'll likely want to check if the UserAgent contains Mac or iOS in it before doing this.

FileReader is not being fired on a web worker

I have the function below, which converts PDFs to images; the function runs within a web worker.
For some reason fileReader.onload is not being fired. filePdf is correct and in the right format. Any idea?
const processFile = async (filePdf, post) => {
    let PDFJS
    if (!PDFJS) {
        PDFJS = await import('pdfjs-dist/build/pdf.js')
    }
    if (!filePdf) return
    const fileReader = new FileReader()
    console.log(filePdf)
    let pages
    try {
        fileReader.onload = async () => {
            const pdf = await PDFJS.getDocument(fileReader.result).promise
            pages = await pdfToImageMap(pdf)
        }
    } catch (e) {
        console.log({e})
    }
    fileReader.readAsArrayBuffer(filePdf)
    return post({type: 'done'})
}
Try changing your logic.
At the moment you assign the onload handler, which will work, so the try block succeeds. Then you return the post call: you've started the file reader, but returned without waiting for it to load.
Instead, wait for the fileReader to load by awaiting a promise wrapped around the load handler. Inside the promise, call fileReader.readAsArrayBuffer(filePdf) so that the onload handler is actually triggered. In the onload handler, use your try/catch block around the PDFJS calls.
Also, don't let values languish in unused variables. If the pages value is something you need, use it and return it somehow; otherwise don't store it at all.
Try the snippet below and see if it works.
const processFile = async (filePdf, post) => {
    const PDFJS = await import('pdfjs-dist/build/pdf.js')
    if (!filePdf) return
    console.log(filePdf)
    const fileReader = new FileReader()
    const pages = await new Promise(resolve => {
        fileReader.onload = async () => {
            try {
                const pdf = await PDFJS.getDocument(fileReader.result).promise
                const pages = await pdfToImageMap(pdf)
                resolve(pages)
            } catch (e) {
                console.log({e})
            }
        }
        fileReader.readAsArrayBuffer(filePdf)
    })
    return post({type: 'done', pages})
}
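Where available (modern browsers, workers, and Node 18+), Blob.prototype.arrayBuffer() sidesteps FileReader entirely: it returns a promise directly, with no event handlers to wrap. A minimal sketch:

```javascript
// Read a Blob into an ArrayBuffer without FileReader.
async function readBlob(blob) {
    const buffer = await blob.arrayBuffer();
    console.log(buffer.byteLength);
    return buffer;
}

readBlob(new Blob(['hello'])); // logs 5
```

In the worker above, `fileReader.result` could then be replaced by `await filePdf.arrayBuffer()`, removing the Promise wrapper around onload altogether.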

How to use Promise.all in react js ES6

What I want to do is upload files to the server, then get the URL of each uploaded file and preview it. There can be more than one file. For that purpose I have written the following code:
let filesURL = [];
let promises = [];
if (this.state.files_to_upload.length > 0) {
    for (let i = 0; i < this.state.files_to_upload.length; i++) {
        promises.push(this.uploadFilesOnServer(this.state.files_to_upload[i]))
    }
    Promise.all(promises).then(function (result) {
        console.log(result);
        result.map((file) => {
            filesURL.push(file);
        });
    });
    console.log(filesURL);
}
const uploadedFilesURL = filesURL;
console.log(uploadedFilesURL);
const uploadedFilesURL=filesURL;
console.log(uploadedFilesURL);
console.log(filesURL); gives me the values returned by Promise.all.
I want to use these values only after Promise.all completes properly. But I am facing the problem that the line console.log(uploadedFilesURL); executes first, irrespective of Promise.all, and gives me undefined values. I think I am not using promises correctly; can anyone please help me?
The uploadFilesOnServer code is:
uploadFilesOnServer(file)
{
    let files = [];
    let file_id = '';
    const image = file;
    getImageUrl().then((response) => {
        const data = new FormData();
        data.append('file-0', image);
        const {upload_url} = JSON.parse(response);
        console.log(upload_url);
        updateProfileImage(upload_url, data).then((response2) => {
            const data2 = JSON.parse(response2);
            file_id = data2;
            console.log(file_id);
            files.push(file_id);
            console.log(files);
        });
    });
    return files;
}
No — promises are asynchronous and, as such, don't work the way you think. If you want to execute something after a promise completes, you must put it in the promise's then callback. Here is an example based on your code:
uploadFilesOnServer(file) {
    let files = [];
    let file_id = '';
    const promise = getImageUrl()
        .then((imageUrlResponse) => {
            const data = new FormData();
            data.append('file-0', file);
            const { upload_url } = JSON.parse(imageUrlResponse);
            console.log(upload_url);
            return updateProfileImage(upload_url, data);
        })
        .then((updateImageResponse) => {
            file_id = JSON.parse(updateImageResponse);
            console.log(file_id);
            files.push(file_id);
            console.log(files);
            return files;
        });
    return promise;
}
let filesPromise = Promise.resolve([]);

if (this.state.files_to_upload.length > 0) {
    const promises = this.state.files_to_upload.map((file) => {
        return this.uploadFilesOnServer(file);
    });
    filesPromise = Promise.all(promises).then((results) => {
        console.log(results);
        return [].concat(...results);
    });
}

// This is your final console.log (console.log(uploadedFilesURL);)
filesPromise.then((filesUrl) => console.log(filesUrl));
A good book about ES6 in general and promises in particular is Understanding ECMAScript 6 by Nicholas C. Zakas.
Edit:
Here is a simple explanation of the example code:
uploadFilesOnServer is a function that takes a file, uploads it, and returns the file URL when the upload eventually completes, in the form of a promise. The promise calls its then callback when it gets the URL.
Using the map function, we create a list of URL promises: the results of executing uploadFilesOnServer on each file in the list.
The Promise.all method waits for every promise in the list to complete, joins the URL results, and creates a promise whose result is the list of URLs. We need this because there is no guarantee that all the promises complete at the same time, and we want all the results in one callback for convenience.
We get the URLs from that then callback.
You have to do this in the .then part of your Promise.all():
Promise.all(promises)
    .then(function (result) {
        console.log(result);
        result.map((file) => {
            filesURL.push(file);
        });
        return true; // return from here to go to the next promise down
    })
    .then(() => {
        console.log(filesURL);
        const uploadedFilesURL = filesURL;
        console.log(uploadedFilesURL);
    })
This is the way async code works. You cannot expect your console.log(filesURL); to work correctly if it is called synchronously after the async call that fetches the files from the server.
Regarding your code, there are several problems:
1. uploadFilesOnServer must return a Promise, as it is async. Therefore:
uploadFilesOnServer(file)
{
    let files = [];
    let file_id = '';
    const image = file;
    return getImageUrl().then((response) => {
        const data = new FormData();
        data.append('file-0', image);
        const {upload_url} = JSON.parse(response);
        console.log(upload_url);
        // The inner promise must also be returned, or the chain
        // resolves before `files` is populated.
        return updateProfileImage(upload_url, data).then((response2) => {
            const data2 = JSON.parse(response2);
            file_id = data2;
            console.log(file_id);
            files.push(file_id);
            console.log(files);
            return files;
        });
    });
}
2. Inside your main function body you can access the results of the Promise.all execution only in its respective then handler.
As a side note, I would recommend using ES2017 async/await with a transpiler such as Babel or TypeScript. That greatly reduces the nesting and complexity of writing this kind of async code.
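To illustrate that suggestion, an async/await rewrite might look like the sketch below. getImageUrl and updateProfileImage are stubbed here as assumptions, since their real implementations are not shown in the question:

```javascript
// Stub stand-ins for the question's network helpers (assumptions):
// both return JSON strings, as the original code JSON.parses their results.
const getImageUrl = async () => JSON.stringify({ upload_url: '/upload' });
const updateProfileImage = async (uploadUrl, file) =>
    JSON.stringify(`${uploadUrl}/${file.name}`);

// One file: resolve its URL via the two-step handshake.
async function uploadFileOnServer(file) {
    const { upload_url } = JSON.parse(await getImageUrl());
    return JSON.parse(await updateProfileImage(upload_url, file));
}

// Many files: start the uploads concurrently and wait for all of them.
async function uploadAll(files) {
    return Promise.all(files.map(uploadFileOnServer));
}

uploadAll([{ name: 'a.png' }, { name: 'b.png' }])
    .then((urls) => console.log(urls)); // ['/upload/a.png', '/upload/b.png']
```

Note how the nested then chains collapse into straight-line awaits, while Promise.all still provides the fan-in that the original code was reaching for.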
