I'm using the JavaScript File System Access API in the browser to force the save dialog to open when downloading files. My goal is to make this dialog always appear for a download, even if the user has configured their browser to download automatically.
To do that, I'm using the window.showSaveFilePicker() function to open the dialog, and then writing a file (Blob) coming from a download.
The problem is that window.showSaveFilePicker() can only be called in certain contexts (in particular, while handling a user gesture), according to the documentation at https://web.dev/file-system-access/
That is, if I call this function like this:
async function DownloadFile() {
  async function startDialogWindow() {
    const options = {
      types: [
        {
          description: 'Text Files',
          accept: {
            'text/plain': ['.txt'],
          },
        },
      ],
    };
    // an exception will be thrown here
    const handle = await window.showSaveFilePicker(options);
  }
  await startDialogWindow();
}
an exception "DOMException: Failed to execute 'showSaveFilePicker' on 'Window': Must be handling a user gesture to show a file picker." will be released.
but if the same call is made like this:
buttonTest.addEventListener('click', async () => {
  const handle = await window.showSaveFilePicker(options);
});
no exception occurs.
Given this, is there any way to call window.showSaveFilePicker() outside of a user gesture handler?
I also tried:
let handle = null;
let buttonTest = document.createElement('button');

buttonTest.addEventListener('click', async () => {
  handle = await window.showSaveFilePicker(options);
});

buttonTest.click();
but I was not successful.
I found the answer to the problem.
I can't explain why the solution worked, but the exception stopped being thrown when I removed all the "debugger" statements from my code, i.e. letting the process flow normally without stopping for debugging solved the problem. This is very strange, but that's what was happening.
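For completeness, once the picker resolves inside a genuine click handler, the returned handle can be used to write the downloaded Blob. A minimal sketch, assuming the Blob already exists in a variable named fileBlob, that options is the same object as above, and that saveButton is a hypothetical button the user actually clicks:
saveButton.addEventListener('click', async () => {
  // Must run while handling a real user gesture (an actual click, not a programmatic one).
  const handle = await window.showSaveFilePicker(options);
  const writable = await handle.createWritable();
  await writable.write(fileBlob); // fileBlob: the downloaded Blob (assumed)
  await writable.close();
});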
I need to open a series of popup windows; each window must be closed before the next one can be opened.
function openWindows() {
  var urls = getListOfUrls();
  for (let url of urls) {
    openWindow(url);
    // Wait for window to close before continuing.
  }
}
The only way I found to make sure that a window is closed is to use a setInterval, which, as I understand it, causes the function to behave asynchronously.
Any idea on how I can achieve this?
Potential alternate suggestion
Without more information, it sounds like what you're trying to accomplish can be completely automated using puppeteer and in-page scripting. If the URLs that you are visiting aren't all in the same origin, then this is the only method which will work (the solution below will not apply).
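A minimal sketch of that fully automated route, assuming the per-page work can be scripted and that getListOfUrls() from the question can be reproduced in Node (the rest of the names are illustrative):
const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch({ headless: false });
  const page = await browser.newPage();
  for (const url of getListOfUrls()) {
    await page.goto(url, { waitUntil: 'networkidle2' });
    // Do the per-page work here, e.g. read the title:
    const title = await page.title();
    console.log(`Visited "${url}" (${title})`);
  }
  await browser.close();
})();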
Interpretation of question
However, let's say that you need to manually perform some tasks on each page in order (one at a time) for whatever reason (maybe the pages you're retrieving often change their DOM in a way that keeps breaking your scripts), but you want to skip the rigor of serially opening the URLs in new tabs, so that you can just focus on the manual tasks.
Solution
JavaScript web APIs don't provide a way to check for the closure of a window (a script would no longer be running at that point), but the last event that you can respond to is the unload event, and using it would look something like this:
References:
Window
Window: unload event
Window.open()
Same-origin policy
async function openEachWindowAfterThePreviousUnloads (urls) {
  for (const url of urls) {
    console.log(`Opening URL: "${url}"`);
    const target = '_blank';
    const initialTime = performance.now();
    const windowProxy = window.open(url, target);

    if (!windowProxy) {
      throw new Error(`Could not get window proxy for URL: "${url}"`);
    }

    await new Promise(resolve => {
      windowProxy.addEventListener('unload', ev => {
        // Ignore the very early unload fired for the initial about:blank document
        const delta = performance.now() - initialTime;
        const thresholdMs = 1000;
        if (delta < thresholdMs) return;
        resolve();
      });
    });
  }
}
const tags = [
  'javascript',
  'puppeteer',
];

const urls = tags.map(tag => `https://stackoverflow.com/questions/tagged/${tag}`);

openEachWindowAfterThePreviousUnloads(urls);
Code in TypeScript Playground
Caveats:
The script will fail if any of the following is not true:
Every URL is in the same origin as that of the invoking window
If your browser blocks pop-ups, the page where you run the script is allowed to create pop-ups.
You can try the code above in your browser JS console on this page, and (as long as https://stackoverflow.com is allowed to create popups) it should work.
I'm trying to log the network calls in the browser. For my use case, I require the browser to stay open and be closed only by the script. I am currently using the page.pause() function to prevent the browser from closing automatically. Is there any other way to prevent the browser from closing automatically?
test('Verify home page Load event', async ({ page }) => {
  //const browser = await chromium.launchPersistentContext("", { headless: false });
  await page.goto("https://samplesite.com");
  await page.on('request', req => {
    const requestUrl = req.url();
    if (requestUrl.indexOf("google-analytics.com/collect") > -1) {
      console.log("Intercepted:->" + requestUrl);
    } else {
      req.continue
    }
  });
  await page.pause();
});
I tried checking out this link (How to keep browser opening by the end of the code running with playwright-python?) for Python but could not apply it to JS.
Similar to what was described in the answer to the Python question, you need to keep your script alive somehow.
This answer describes a couple of ways to do that.
However, page.pause() is definitely the recommended approach: it exists precisely for this kind of situation where you need to inspect the browser while your script is executing. Your script also has some problems. As it stands, when you encounter your target request you log something but never continue it; note that continue() is a method, not a property, and in Playwright it lives on the Route object provided by page.route(), not on the Request object from page.on('request'). With route-based interception, a request that is never continued hangs indefinitely until it is continued or aborted.
You probably want to do something like this:
await page.route('**/*', (route, request) => {
  const rurl = request.url();
  if (rurl.includes('google-analytics.com/collect')) {
    console.log(`Intercepted request to ${rurl}`);
    // Do other stuff?
  }
  route.continue();
});
It's not clear what you are trying to accomplish from your snippet. If you just need to wait for a particular request to fire, you can use either page.waitForRequest or page.waitForResponse and do away with worrying about keeping the browser open.
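For example, a minimal sketch using page.waitForRequest with a URL predicate (the site URL, test title, and analytics pattern are just the ones from the question):
const { test } = require('@playwright/test');

test('Verify home page Load event', async ({ page }) => {
  // Start waiting for the analytics request before navigating.
  const analyticsRequest = page.waitForRequest(req =>
    req.url().includes('google-analytics.com/collect'));
  await page.goto('https://samplesite.com');
  const req = await analyticsRequest;
  console.log('Intercepted:->' + req.url());
});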
I tried to use await page.pause() and it didn't work for me, but I found a trick that works well: just put this at the end of your test:
await new Promise(() => {})
You can try await Task.Delay(-1) (if you are using the Playwright .NET bindings).
I'm using ar.js, which uses the user's webcam and issues a permission prompt to do so. What I want is to listen to a global event triggered by this dialog when the user either allows or denies access to the webcam, or has done so previously.
I've tried with global listeners like:
document.documentElement.addEventListener("error", e=>{console.log("GOT ERROR : ", e)})
window.addEventListener("error", e=>{console.log("GOT ERROR : ", e)});
The WebRTC errors page on MDN only references global error events.
I don't know ar.js's events, so I can't say whether it has some direct way to observe this.
If you're asking for a way to hack around it, then there's no such global event in browsers. NotAllowedError is from calling await navigator.mediaDevices.getUserMedia().
But if you know approximately when it's prompting the user, then you can do a parallel request, like this:
// library
(async () => {
  video.srcObject = await navigator.mediaDevices.getUserMedia({ video: true });
})();

// us
(async () => {
  try {
    await navigator.mediaDevices.getUserMedia({ video: true });
    console.log("GOT CAM");
  } catch (e) {
    console.log("GOT ERROR : " + e);
  }
})();
This should give you the notification you want without causing a second user prompt.
It works because the spec mandates that getUserMedia must succeed without prompting the user again if the page already has a camera stream.
If you don't know when it'll prompt, then you'd need to override the getUserMedia method on the navigator.mediaDevices object. Various libraries like adapter.js do this successfully, if you need an example.
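A minimal sketch of that kind of override, assuming it runs before ar.js requests the camera (the log messages simply mirror the ones above):
const originalGetUserMedia =
  navigator.mediaDevices.getUserMedia.bind(navigator.mediaDevices);

navigator.mediaDevices.getUserMedia = async function (constraints) {
  try {
    const stream = await originalGetUserMedia(constraints);
    console.log("GOT CAM");
    return stream;
  } catch (e) {
    console.log("GOT ERROR : " + e);
    throw e; // re-throw so the library still sees the failure
  }
};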
I'm analyzing some code on a website and I came across the following anonymous function followed by a try...catch statement. I'm just wondering what the try...catch statement is doing at the end there. Is it pre-loading the URL so that it loads more quickly when the anonymous function runs? Also, what's the point if it's not catching any errors?
(function() {
  var fired = false;
  bsnPop.add("http://www.someurl.com", {
    under: !noPopunder,
    newTab: false,
    forceUnder: true,
    shouldFire: function() {
      return !fired;
    },
    cookieExpires: -1,
    afterOpen: function(url) {
      createCookie();
      fired = true;
      doSecondPop();
    }
  });
})();
try {
  var hint = document.createElement("link");
  hint.rel = "dns-prefetch";
  hint.href = "http://www.someurl.com";
  document.head.appendChild(hint);

  var hint = document.createElement("link");
  hint.rel = "preconnect";
  hint.href = "http://www.someurl.com";
  document.head.appendChild(hint);
} catch (e) {}
With reference to the link types list on MDN, "dns-prefetch" and "preconnect" are listed as experimental. They do not appear in the list of "rel" values for link elements in the HTML5 specification.
So the code is using experimental technology on the web which might throw an error in some browsers. To prevent stopping the application and logging an exception on the console, the code is placed in a try block with a catch block that ignores the error.
In answer to the question's details: the anonymous function in the IIFE is invoked and passes an object containing parameters and callbacks in a call to bsnPop.add. It does not appear to create a popup window at this stage.
Next, the code within the try block attempts to speed up access to the website by requesting a DNS lookup of the website's name in advance, and by opening a connection to the site before attempting to retrieve content.
The code is placed in the try block to accommodate the possibility of a browser throwing an exception if the requested operations are not supported. The application does not consider lack of support an error and wants to continue anyway.
The end result is that if dns-prefetch or preconnect are supported, the browser can take the hint and perform the operations. If they are not supported, any error generated is ignored and the code continues at the next statement; connecting to the website later will simply have to proceed at normal speed.
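If you wanted to avoid relying on the exception at all, one alternative (a sketch, not what the site's code does) is to feature-detect each rel value with DOMTokenList.supports() before adding the hint:
function addHintIfSupported(rel, href) {
  var hint = document.createElement("link");
  // relList.supports() reports whether the browser understands this rel value.
  if (hint.relList && hint.relList.supports && hint.relList.supports(rel)) {
    hint.rel = rel;
    hint.href = href;
    document.head.appendChild(hint);
  }
}

addHintIfSupported("dns-prefetch", "http://www.someurl.com");
addHintIfSupported("preconnect", "http://www.someurl.com");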
I'm using Google App Engine with Java and Google Cloud Endpoints. In my JavaScript front end, I'm using this code to handle initialization, as recommended:
var apisToLoad = 2;
var url = '//' + $window.location.host + '/_ah/api';

gapi.client.load('sd', 'v1', handleLoad, url);
gapi.client.load('oauth2', 'v2', handleLoad);

function handleLoad() {
  // this only executes once,
  if (--apisToLoad === 0) {
    // so this is not executed
  }
}
How can I detect and handle when gapi.client.load fails? Currently I am getting an error printed to the JavaScript console that says: Could not fetch URL: https://webapis-discovery.appspot.com/_ah/api/static/proxy.html. Maybe that's my fault, or maybe it's a temporary problem on Google's end; right now that is not my concern. I'm trying to take advantage of this opportunity to handle such errors well on the client side.
So - how can I handle it? handleLoad is not executed for the call that errs, gapi.client.load does not seem to have a separate error callback (see the documentation), it does not actually throw the error (only prints it to the console), and it does not return anything. What am I missing? My only idea so far is to set a timeout and assume there was an error if initialization doesn't complete after X seconds, but that is obviously less than ideal.
Edit:
This problem came up again, this time with the message ERR_CONNECTION_TIMED_OUT when trying to load the oauth stuff (which is definitely out of my control). Again, I am not trying to fix the error, it just confirms that it is worth detecting and handling gracefully.
I know this is old, but I came across this randomly. You can easily test for a failure (at least now).
Here is the code:
gapi.client.init({}).then(() => {
  gapi.client.load('some-api', "v1", (err) => { callback(err) }, "https://someapi.appspot.com/_ah/api");
}, err, err);

function callback(loadErr) {
  if (loadErr) { err(loadErr); return; }
  // success code here
}

function err(err) {
  console.log('Error: ', err);
  // fail code here
}
Example
Unfortunately, the documentation is pretty useless here and it's not exactly easy to debug the code in question. What gapi.client.load() apparently does is insert an <iframe> element for each API. That frame then provides the necessary functionality and allows accessing it via postMessage(). From the look of it, the API doesn't attach a load event listener to that frame and rather relies on the frame itself to indicate that it is ready (this is what results in the callback being triggered). So the missing error callback is an inherent issue: the API cannot see a failure, because no frame will be there to signal it.
From what I can tell, the best thing you can do is attach your own capturing load event listener to the document (load events don't bubble, but they can be observed on the document during the capture phase) and check yourself when the frames load. Warning: while this might work with the current version of the API, it is not guaranteed to keep working in the future as the implementation of that API changes. Currently, something like this should work:
var framesToLoad = apisToLoad;
document.addEventListener("load", function(event)
{
  if (event.target.localName == "iframe")
  {
    framesToLoad--;
    if (framesToLoad == 0)
    {
      // Allow any outstanding synchronous actions to execute, just in case
      window.setTimeout(function()
      {
        if (apisToLoad > 0)
          alert("All frames are done but not all APIs loaded - error?");
      }, 0);
    }
  }
}, true);
Just to repeat the warning from above: this code makes lots of assumptions. While these assumptions might stay true for a while with this API, it might also be that Google will change something and this code will stop working. It might even be that Google uses a different approach depending on the browser, I only tested in Firefox.
This is an extremely hacky way of doing it, but you could intercept all console messages, check what is being logged, and if it is the error message you care about, call another function.
function interceptConsole() {
  var errorMessage = 'Could not fetch URL: https://webapis-discovery.appspot.com/_ah/api/static/proxy.html';
  var console = window.console;
  if (!console) return;

  function intercept(method) {
    var original = console[method];
    console[method] = function() {
      if (arguments[0] == errorMessage) {
        alert("Error Occurred");
      }
      if (original.apply) {
        original.apply(console, arguments);
      } else {
        // IE
        var message = Array.prototype.slice.apply(arguments).join(' ');
        original(message);
      }
    };
  }

  var methods = ['log', 'warn', 'error'];
  for (var i = 0; i < methods.length; i++)
    intercept(methods[i]);
}

interceptConsole();

console.log('Could not fetch URL: https://webapis-discovery.appspot.com/_ah/api/static/proxy.html');
// alerts "Error Occurred", then logs the message

console.log('Found it');
// just logs "Found it"
An example is here: I log two things, one is the error message, the other is something else. You'll see that the first one causes an alert and the second one does not.
http://jsfiddle.net/keG7X/
You probably would have to run the interceptConsole function before including the gapi script, as it may make its own copy of console.
Edit - I use a version of this code myself, but just remembered it's from here, so giving credit where it's due.
I use a setTimeout to manually trigger an error if the API hasn't loaded yet:
console.log(TAG + 'api loading...');

// Wrapped in a Promise so that resolve/reject below are defined
const apiLoaded = new Promise((resolve, reject) => {
  let timer = setTimeout(() => {
    // Handle error
    reject('timeout');
    console.error(TAG + 'api loading error: timeout');
  }, 1000); // time till timeout

  let callback = () => {
    clearTimeout(timer);
    // api has loaded, continue your work
    console.log(TAG + 'api loaded');
    resolve(gapi.client.apiName);
  };

  gapi.client.load('apiName', 'v1', callback, apiRootUrl);
});
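A possible way to consume that promise (apiLoaded is just the variable name used in the wrapped sketch above):
apiLoaded
  .then(api => { /* the API is ready to use */ })
  .catch(error => { /* fall back or retry; 'timeout' means the load never completed */ });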