How to use Clipboard API to write image to clipboard in Safari - javascript

The following code (adapted from here) successfully writes an image file to the clipboard upon a button click in Chrome:
document.getElementById('copy-button').addEventListener('click', async () => {
  try {
    const data = await fetch('image.png')
    const blob = await data.blob()
    await navigator.clipboard.write(
      [new ClipboardItem({[blob.type]: blob})]
    )
    console.log('success')
  } catch (err) {
    console.log(`${err.name}: ${err.message}`)
  }
})
(Similar code also works when chaining the promises with .then(), or when copying the contents of a <canvas> using .toBlob() with a callback function.)
However, this fails in Safari, throwing a NotAllowedError. I suspect this is something to do with the asynchronous creation of the blob causing Safari to think that the call to write() is 'outside the scope of a user gesture (such as "click" or "touch" event handlers)', as described here, since control is released from the event handler during the await portions.
For example, the following code pre-loads the blob into a global variable when the script first runs, and the call to write() does not need to wait for any other async code to finish executing:
let imageBlob;
(async function () {
  const data = await fetch('image.png')
  const blob = await data.blob()
  imageBlob = blob
  console.log('Image loaded into memory')
})()

document.getElementById('image-button-preload').addEventListener('click', () => {
  const clipItem = new ClipboardItem({[imageBlob.type]: imageBlob})
  navigator.clipboard.write([clipItem]).then(() => {
    console.log('success')
  }, (reason) => {
    console.log(reason)
  })
})
But this is clearly not ideal, especially if the image data is something dynamically created (e.g. in a canvas).
So, the question: How can I generate an image blob and write this to the clipboard upon a user action which Safari/webkit will accept? (Or, is this a bug in Safari/webkit's implementation of the API)

The solution (for Safari) is to assign a Promise as the value in the object you pass to ClipboardItem, like this:
document.getElementById('copy-button').addEventListener('click', async () => {
  try {
    const makeImagePromise = async () => {
      const data = await fetch('image.png')
      return await data.blob()
    }
    await navigator.clipboard.write(
      [new ClipboardItem({'image/png': makeImagePromise()})]
    )
    console.log('success')
  } catch (err) {
    console.log(`${err.name}: ${err.message}`)
  }
})
That way you're calling clipboard.write() without awaiting the blob first, and Safari will await the promise that generates the image for you. Note that the MIME type has to be written out as a literal string, since there is no resolved blob to read it from at the time the ClipboardItem is constructed.
Note: Other browsers may not support passing a promise to ClipboardItem, so you'll likely want to check if the UserAgent contains Mac or iOS in it before doing this.
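As a sketch of how the handler could look when kept entirely on the promise-based path (newer Chromium versions also accept promise values in ClipboardItem, but verify support before relying on a single code path everywhere; imageUrl and the image/png type here are placeholders):

```javascript
// Sketch (assumptions: an image resource reachable by fetch, PNG type).
// The ClipboardItem is constructed synchronously inside the gesture;
// the blob is supplied as a promise so Safari treats the write as
// user-initiated.
function copyImage(imageUrl, mimeType = 'image/png') {
  const blobPromise = fetch(imageUrl).then(resp => resp.blob());
  // The MIME type must be written out literally: the blob has not
  // resolved yet, so blob.type cannot be read at construction time.
  return navigator.clipboard.write([
    new ClipboardItem({ [mimeType]: blobPromise }),
  ]);
}
```

Calling copyImage('image.png') directly from a click handler keeps the write inside the user gesture.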

Related

Can I build a WebWorker that executes arbitrary Javascript code?

I'd like to build a layer of abstraction over the WebWorker API that would allow (1) executing an arbitrary function over a webworker, and (2) wrapping the interaction in a Promise. At a high level, this would look something like this:
function bake() {
  ... // expensive calculation
  return 'mmmm, pizza'
}

async function handlePizzaButtonClick() {
  const pizza = await workIt(bake)
  eat(pizza)
}
(Obviously, methods with arguments could be added without much difficulty.)
My first cut at workIt looks like this:
async function workIt<T>(f: () => T): Promise<T> {
  const worker: Worker = new Worker('./unicorn.js') // no such worker, yet
  worker.postMessage(f)
  return new Promise<T>((resolve, reject) => {
    worker.onmessage = ({data}: MessageEvent) => resolve(data)
    worker.onerror = ({error}: ErrorEvent) => reject(error)
  })
}
This fails because functions are not structured-cloneable and thus can't be passed in worker messages. (The Promise wrapper part works fine.)
There are various options for serializing Javascript functions, some scarier than others. But before I go that route, am I missing something here? Is there another way to leverage a WebWorker (or anything that executes in a separate thread) to run arbitrary Javascript?
I thought an example would be useful in addition to my comment, so here's a basic (no error handling, etc.), self-contained example which loads the worker from an object URL:
Meta: I'm not posting it as a runnable code snippet because the rendered iframe runs at a different origin (https://stacksnippets.net at the time I write this answer), which prevents success: in Chrome, I receive the error message Refused to cross-origin redirects of the top-level worker script.
Anyway, you can just copy the text contents, paste it into your dev tools JS console right on this page, and execute it to see that it works. And, of course, it will work in a normal module in a same-origin context.
console.log(new URL(window.location.href).origin);

// Example candidate function:
// - pure
// - uses only syntax which is legal in worker module scope
async function get100LesserRandoms () {
  // If `getRandomAsync` were defined outside the function,
  // then this function would no longer be pure (it would be a closure)
  // and `getRandomAsync` would need to be a function accessible from
  // the scope of the `message` event handler within the worker
  // else a `ReferenceError` would be thrown upon invocation
  const getRandomAsync = () => Promise.resolve(Math.random());
  const result = [];
  while (result.length < 100) {
    const n = await getRandomAsync();
    if (n < 0.5) result.push(n);
  }
  return result;
}

const workerModuleText =
  `self.addEventListener('message', async ({data: {id, fn}}) => self.postMessage({id, value: await eval(\`(\${fn})\`)()}));`;

const workerModuleSpecifier = URL.createObjectURL(
  new Blob([workerModuleText], {type: 'text/javascript'}),
);

const worker = new Worker(workerModuleSpecifier, {type: 'module'});

worker.addEventListener('message', ({data: {id, value}}) => {
  worker.dispatchEvent(new CustomEvent(id, {detail: value}));
});

function notOnMyThread (fn) {
  return new Promise(resolve => {
    const id = window.crypto.randomUUID();
    worker.addEventListener(id, ({detail}) => resolve(detail), {once: true});
    worker.postMessage({id, fn: fn.toString()});
  });
}

async function main () {
  const lesserRandoms = await notOnMyThread(get100LesserRandoms);
  console.log(lesserRandoms);
}

main();

Working with Node.js streams without callbacks

To send a PDF file from a Node.js server to a client I use the following code:
const pdf = printer.createPdfKitDocument(docDefinition);
const chunks = [];

pdf.on("data", (chunk) => {
  chunks.push(chunk);
});

pdf.on("end", () => {
  const pdfBuffered = `data:application/pdf;base64, ${Buffer.concat(chunks).toString("base64")}`;
  res.setHeader("Content-Type", "application/pdf");
  res.setHeader("Content-Length", pdfBuffered.length);
  res.send(pdfBuffered);
});

pdf.end();
Everything is working correctly; the only issue is that the stream here uses a callback approach rather than async/await.
I've found a possible solution:
const { pipeline } = require("stream/promises");

async function run() {
  await pipeline(
    fs.createReadStream('archive.tar'),
    zlib.createGzip(),
    fs.createWriteStream('archive.tar.gz')
  );
  console.log('Pipeline succeeded.');
}

run().catch(console.error);
But I can't figure out how to adapt the initial code to the stream/promises version.
You can manually wrap your PDF code in a promise like this and then use it as a function that returns a promise:
function sendPDF(docDefinition) {
  return new Promise((resolve, reject) => {
    const pdf = printer.createPdfKitDocument(docDefinition);
    const chunks = [];
    pdf.on("data", (chunk) => {
      chunks.push(chunk);
    });
    pdf.on("end", () => {
      const pdfBuffered =
        `data:application/pdf;base64, ${Buffer.concat(chunks).toString("base64")}`;
      resolve(pdfBuffered);
    });
    pdf.on("error", reject);
    pdf.end();
  });
}

sendPDF(docDefinition).then(pdfBuffer => {
  res.setHeader("Content-Type", "application/pdf");
  res.setHeader("Content-Length", pdfBuffer.length);
  res.send(pdfBuffer);
}).catch(err => {
  console.log(err);
  res.sendStatus(500);
});
Because there are many data events, you can't promisify just the data portion. You still have to listen for each data event and collect the data.
You can only convert a callback API to async/await if the callback is intended to execute only once.
The example you found works because you're just waiting for the whole stream to finish before a single callback runs. What you've got instead is callbacks that execute multiple times, on every incoming chunk of data.
There are other resources you can look at to make streams nicer to consume, like RxJS, or the upcoming ECMAScript proposal to add observables to the language. Both of these are designed to handle the scenario where a callback can execute multiple times, something that async/await cannot do.

FileReader is not being fired on a web worker

I have the function below that converts PDFs to images; the function runs inside a web worker.
For some reason fileReader.onload is not being fired. The filePdf is correct and in the right format. Any idea?
const processFile = async (filePdf, post) => {
  let PDFJS
  if (!PDFJS) {
    PDFJS = await import('pdfjs-dist/build/pdf.js')
  }
  if (!filePdf) return
  const fileReader = new FileReader()
  console.log(filePdf)
  let pages
  try {
    fileReader.onload = async () => {
      const pdf = await PDFJS.getDocument(fileReader.result).promise
      pages = await pdfToImageMap(pdf)
    }
  } catch (e) {
    console.log({e})
  }
  fileReader.readAsArrayBuffer(filePdf)
  return post({type: 'done'})
}
Try to change your logic.
At the moment you assign the onload handler inside a try block (which succeeds, since the assignment itself cannot throw) and then immediately return the post call. So you've started the file reader but returned without waiting for it to load.
Instead, wait for the fileReader to load by awaiting a Promise wrapped around the load handler, and call fileReader.readAsArrayBuffer(filePdf) inside that Promise so the onload handler gets triggered. Inside the onload handler, put the try/catch block around the PDFJS calls.
Also, don't let values stored in variables go to waste. If the pages value is something you need, use it and return it somehow; otherwise don't store it at all.
Try the snippet below and see if it works.
const processFile = async (filePdf, post) => {
  const PDFJS = await import('pdfjs-dist/build/pdf.js')
  if (!filePdf) return
  console.log(filePdf)
  const fileReader = new FileReader()
  const pages = await new Promise(resolve => {
    fileReader.onload = async () => {
      try {
        const pdf = await PDFJS.getDocument(fileReader.result).promise
        const pages = await pdfToImageMap(pdf)
        resolve(pages)
      } catch (e) {
        console.log({e})
      }
    }
    fileReader.readAsArrayBuffer(filePdf)
  })
  return post({type: 'done', pages})
}
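A further sketch, under the assumption that your target browsers support it: Blob.arrayBuffer() is available in worker scopes and removes the FileReader event wiring entirely (pdfToImageMap and the pdfjs-dist import path are taken from the question's code):

```javascript
// Sketch: same flow without FileReader. `pdfToImageMap` and the
// pdfjs-dist import path are assumed from the question's code.
const processFile = async (filePdf, post) => {
  if (!filePdf) return
  const PDFJS = await import('pdfjs-dist/build/pdf.js')
  const buffer = await filePdf.arrayBuffer() // replaces readAsArrayBuffer + onload
  const pdf = await PDFJS.getDocument(buffer).promise
  const pages = await pdfToImageMap(pdf)
  return post({type: 'done', pages})
}
```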

Blazor JsInterop Invoke after each promise resolves

Trying to get things working correctly in a Blazor server-side app. I have an uploader component, but it doesn't InvokeAsync after each promise is resolved on the client side: it waits for all images to load and then invokes the C# method. How would I get it to invoke the C# method after each image is loaded?
I know JavaScript is single threaded but also tried with web workers and still does the same thing.
Sample repo can be found here
https://dev.azure.com/twinnaz/BlazorUploader
Gif of what's happening.
https://imgur.com/a/aF4AQUf
It should be able to invoke the C# method Async in parallel from javascript file if my thinking is correct.
This issue involves both Blazor and JS. On the JS side, you are not awaiting GenerateImageData.
You should use a modern for...of loop instead, in which await will work as expected:
GetFileInputFiles = async (instance, fileInput) => {
  var files = Array.from(fileInput.files);
  for (const image of files) {
    var imagedata = await readUploadedFileAsText(image);
    console.log("sending");
    _ = await instance.invokeMethodAsync('GenerateImageData', imagedata);
    console.log("sent");
  }
};
On the Blazor side, I suggest rewriting GenerateImageData as:
[JSInvokable]
public async Task GenerateImageData(string data)
{
    System.Console.WriteLine("Receiving");
    ImageBase64.Add(data);
    await Task.Delay(1);
    StateHasChanged();
    System.Console.WriteLine("Received");
}
Result:
More detailed info about JS issue: Using async/await with a forEach loop
More detailed info about Blazor issue: Blazor - Display wait or spinner on API call
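The forEach pitfall discussed in the linked answer can be demonstrated in a few lines; this sketch uses a timer as a stand-in for the real async work:

```javascript
const delay = (ms) => new Promise(resolve => setTimeout(resolve, ms));

// forEach fires every callback immediately and discards the returned
// promises, so nothing after it actually waits.
async function withForEach(items, out) {
  items.forEach(async (item) => {
    await delay(10);
    out.push(item);
  });
  out.push('done'); // runs before any item is pushed
}

// for...of runs the body to completion for each item before moving on.
async function withForOf(items, out) {
  for (const item of items) {
    await delay(10);
    out.push(item);
  }
  out.push('done'); // runs after every item
}
```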
GeneratePreviewImages = async (dotNet, fileInput) => {
  const files = Array.from(fileInput.files);
  const imageSrcs = files.map(file => getPreviewImageSrc(file));
  loop(imageSrcs, dotNet);
};

const loop = async (arr, dotNet) => {
  for await (const src of arr) {
    console.log(src);
    dotNet.invokeMethodAsync('GenerateImageData', src);
  }
};

const getPreviewImageSrc = async (file) => {
  return URL.createObjectURL(file);
};

puppeteer how to return page.on response values [duplicate]

This question already has answers here:
Puppeteer: How to listen to a specific response?
(5 answers)
Closed 1 year ago.
I know this should be simple, but how do I return the values for use outside the function? I cannot get it to work. This works, downloading the file, and in the console returns
value: attachment; filename="filename"
await page._client.send('Page.setDownloadBehavior', {behavior: 'allow', downloadPath: './tmp'})
await page.click('download');
await page.on('response', resp => {
  var header = resp.headers();
  console.log("value: " + header['content-disposition']);
});
but this and everything I have tried returns nothing
await page.on('response', resp => {
  var header = resp.headers();
  return header['content-disposition'];
});
I want to be able to return the filename, file size, etc. of a downloaded file for further use in the script.
How do I return and access the response values?
You shouldn't use the await operator before page.on().
The Puppeteer page class extends Node.js's native EventEmitter, which means that whenever you call page.on(), you are setting up an event listener using Node.js's emitter.on().
This means that the functionality you include in page.on('response') will execute when the response event is fired.
You don't return values from an event handler. Instead, the functionality within the event handler is executed when the event occurs.
If you want to use the result of page.on() in a function, you can use the following method:
const example_function = value => {
  console.log(value);
};

page.on('response', resp => {
  var header = resp.headers();
  example_function(header['content-disposition']);
});
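If you specifically need one response's headers as a value, Puppeteer's page.waitForResponse() returns a promise you can await directly, which sidesteps the event-handler question; the '/download' URL predicate and 'download' selector below are placeholders for whatever identifies your request:

```javascript
// Sketch: await the matching response instead of listening globally.
// The predicate and selector are assumptions, not from the question.
async function getContentDisposition(page) {
  const [response] = await Promise.all([
    page.waitForResponse(resp => resp.url().includes('/download')),
    page.click('download'),
  ]);
  return response.headers()['content-disposition'];
}
```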
Grant, I've realised from your answer that I made a few beginner mistakes.
Puppeteer await - I thought await page.on() would pause the script until complete. I was wrong.
I had placed page.on() inside the loop, causing errors; it should have been outside.
The script was going to the next download page before the download started and before page.on() was called.
I should have saved the file inside page.on() instead of outside.
Correct me if I am wrong.
This is what I was trying to do.(abbreviated)
async function main() {
  await page.goto(page, { waitUntil: 'networkidle0' });
  for (loop through download pages) {
    await page.click(download);
    await page.on('response', resp => {
      var header = resp.headers();
      return header['content-disposition'];
    });
    save.write(header['content-disposition']);
  }
}
main();
This is what has worked.
async function main() {
  page.on('response', resp => {
    var header = resp.headers();
    var fileName = header['content-disposition'];
    save.write(fileName);
  });
  await page.goto(startPage, { waitUntil: 'networkidle0' });
  for (loop through download pages) {
    await page.goto(downloadPage, { waitUntil: 'networkidle0' });
    await page.click(download);
    await page.waitFor(30000);
    // download starts
    // page.on called and saves fileName
    // page.waitFor gives it time to complete before starting next loop
  }
}
main();
await page.waitFor(30000);
I don't know if await is required here, and page.waitFor(30000) slows the script down, but I could not get it to work without it. There might be a better way.
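One possible replacement for the fixed 30-second wait is to await the download response itself. This is a sketch only; save, the download selector, and the header-based predicate are assumptions carried over from the code above:

```javascript
// Sketch: wait for the response that carries the attachment header
// rather than sleeping for a fixed interval.
async function downloadAll(page, downloadPages, save) {
  for (const url of downloadPages) {
    await page.goto(url, { waitUntil: 'networkidle0' });
    const [response] = await Promise.all([
      page.waitForResponse(resp => 'content-disposition' in resp.headers()),
      page.click('download'),
    ]);
    save.write(response.headers()['content-disposition']);
  }
}
```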
