Are Google devs just wrong with their Puppeteer code: custom-event.js?

When I try custom-event.js on https://try-puppeteer.appspot.com/, no response is displayed in the log. Surely this demo could be improved by showing some result of the code run?!?
I've seen the popular answers on Puppeteer: How to listen to object events, and the much less appreciated https://stackoverflow.com/a/66713946/2455159 (0 votes!).
NONE of the first examples work on try-puppeteer, and the SECOND ONE DOES!
I get the idea,
First, you have to expose a function that can be called from within
the page. Second, you listen for the event and call the exposed
function and pass on the event data.
-- but that snippet didn't work (for me) when applied to the https://www.chromestatus.com/features custom event.
What's the bottom line for observing custom events with puppeteer?

There is a difference between the first two examples and the last one: the first ones only illustrate subscribing to an event, but they never fire that event, so there is no observable reaction (this can be confusing, as a reader would expect the sites to fire the events, but they do not). The third example does fire the event.
You can make the first two examples observable even on https://try-puppeteer.appspot.com/ if you add these lines accordingly:
await page.evaluate(() => { document.dispatchEvent(new Event('app-ready')); });
await page.evaluate(() => { document.dispatchEvent(new Event('status')); });
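The same subscribe-then-dispatch rule holds outside the browser, too. Node's built-in EventTarget (v15+) behaves the same way; this minimal sketch shows why a listener alone produces no observable output:

```javascript
// A listener alone produces no output: until something dispatches
// the event, the subscription just sits there silently.
const target = new EventTarget();
const seen = [];

target.addEventListener('app-ready', (e) => seen.push(e.type));

// Without this dispatch, the listener never fires -- which is exactly
// why the first two examples show nothing on try-puppeteer.
target.dispatchEvent(new Event('app-ready'));

console.log(seen); // -> [ 'app-ready' ]
```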

OK, cracked it, so that single or multiple events are detected by Puppeteer. A couple of things weren't apparent to me about how Puppeteer works:
a Puppeteer page ≈ a browser tab
JS can be attached to that 'tab'
Puppeteer must browse the test URL after setting up the 'tab' (timing is critical)
So now, combining the previous answers, in Node or on try-puppeteer:
// in Node, wrap with an async IIFE
const browser = await puppeteer.launch();
const page = await browser.newPage();

// Expose a handler to the page
await page.exposeFunction('onCustomEvent', ({ type, detail }) => {
  console.log(`Event fired: ${type}, detail: ${detail}`);
});

// Listen for events of type 'status' and
// pass the 'type' and 'detail' attributes to our exposed function
await page.evaluateOnNewDocument(() => {
  window.addEventListener('status', ({ type, detail }) => {
    window.onCustomEvent({ type, detail });
  });
});

await page.goto('https://puppet.azurewebsites.net/puppeteer/soEvntSingle.html');
// await page.goto('https://puppet.azurewebsites.net/puppeteer/soEvnt.html');
// await page.waitFor(3000); // waitForTimeout in newer Puppeteer versions
await browser.close();
This detects the events fired from that webpage. In DevTools you'll see the event is fired as a CustomEvent:
<script>
  var event = new CustomEvent('status', { detail: 'ok' });
  window.addEventListener('status', function (e) {
    console.log('status: ', e.detail);
  });
  window.dispatchEvent(event);
  // setTimeout(window.dispatchEvent, 1000, event);
</script>
Switching the commented lines on (and commenting out their counterparts) gets Puppeteer to monitor for repeated firing of an event: the page soEvnt.html fires the 'status' CustomEvent every second ad nauseam. The Puppeteer test has to terminate at some stage to report to the terminal (or to test infrastructure like Jest), so monitoring is set for 3 seconds.
Google, you can come to me for help any time!

Related

JS Foreach Alternative (timeout) for Queue

I'm working on an "Approve All" button. The idea is that when I click "Approve All," each individual "Approve" button is triggered with a "click" all at once, and each then sends a POST request to the controller. However, when I clicked the "Approve All" button, there was a race condition causing the controller to return Error 500: Internal Server Error. I have tried using JS setTimeout() with a value of 1500*iter, but when the iterator gets higher, for example at i = 100, it takes 1500*100 => 150000 ms (150 s). I hope that explains the problem clearly. Is there a way to prevent such a case?
Here is my code, I'm using JQuery:
let inspection = $this.parents("li").find("ul button.approve"); // this will get all 'approve' buttons to be clicked at once
inspection.each((i, e) => {
  (function () {
    setTimeout(function () {
      $(e).data("note", r);
      $(e).click();
    }, 1500 * i); // this acts like a queue, but when i > 100, it takes even longer to send POST requests.
  })(this, i, e, r);
});
// then, each iteration will send a POST request to the controller.
$("#data-inspection ul button.approve").on("click", function () {
  // send POST requests
});
Any help would be much appreciated. Thank you.
That 500 error may also be the server crashing because it is unable to process all the requests simultaneously.
What I'd recommend is using an event-driven approach instead of setTimeout. Your 1500 ms is basically a guess: you don't know whether the clicks will happen too quickly, or whether you'll leave users waiting unnecessarily.
I'll demonstrate without jQuery how to do it, and leave the jQuery implementation up to you:
// use a .js- class to target your buttons directly,
// simplifying your selectors and making them DOM agnostic
const buttonEls = document.querySelectorAll('.js-my-button');
const buttonsContainer = document.querySelector('.js-buttons-container');
const startRequestsEvent = new CustomEvent('customrequestsuccess');

// convert the DOMCollection to an array when passing it in
const handleRequestSuccess = dispatchNextClickFactory([...buttonEls]);

buttonsContainer.addEventListener('click', handleButtonClick);
buttonsContainer.addEventListener(
  'customrequestsuccess',
  handleRequestSuccess
);

// start the requests by dispatching the event buttonsContainer
// is listening for
buttonsContainer.dispatchEvent(startRequestsEvent);

// This function is a closure:
// - it accepts an argument
// - it returns a new function (the actual event listener)
// - the returned function has access to the variables defined
//   in its outer scope
// Note that we don't care what elements are passed in -- all we
// know is that we have a list of elements
function dispatchNextClickFactory(elements) {
  let pendingElements = [...elements];

  function dispatchNextClick() {
    // get the first element that hasn't been clicked
    const element = pendingElements.find(Boolean);

    if (element) {
      const clickEvent = new MouseEvent('click', {bubbles: true});
      // dispatch a click on the element
      element.dispatchEvent(clickEvent);
      // remove the element from the pending elements
      pendingElements = pendingElements.filter((_, i) => i > 0);
    }
  }

  return dispatchNextClick;
}

// use event delegation to mitigate adding n number of listeners to
// n number of buttons -- attach to a common parent
function handleButtonClick(event) {
  const {target} = event;
  if (target.classList.contains('js-my-button')) {
    fetch(myUrl)
      .then(() => {
        // dispatch an event indicating the request is complete;
        // bubbles: true lets it reach the container's listener
        const completeEvent = new CustomEvent('customrequestsuccess', {
          bubbles: true,
        });
        target.dispatchEvent(completeEvent);
      });
  }
}
There are a number of improvements that can be made here, but the main ideas are these:
one should avoid magic numbers: we don't know how slowly or quickly requests are going to be processed
requests are asynchronous: we can determine explicitly when they succeed or fail
DOM events are powerful
when a DOM event is handled, we do something with the event
when something happens that we want other parts of the page to know about, we can dispatch custom events. We can attach as many handlers to as many elements as we want for each event we dispatch; it's just an event, and any element may do anything with it (e.g. we could make every element in the DOM flash by attaching a listener for a specific event to every element)
Note: this code is untested
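An alternative worth sketching (under the same untested caveat; approveItem is a hypothetical stand-in for whatever POST call each button triggers): run the requests sequentially with async/await, so each one finishes before the next starts and the server never sees the whole burst at once.

```javascript
// Sequential processing: each request completes before the next begins,
// avoiding the thundering herd that caused the 500s.
// approveItem is a placeholder for the real POST call,
// e.g. (item) => fetch(url, { method: 'POST', body: item }).
async function approveAll(items, approveItem) {
  const results = [];
  for (const item of items) {
    // await inside the loop serializes the requests
    results.push(await approveItem(item));
  }
  return results;
}
```

Unlike the 1500 ms timer, this adapts to however long each request actually takes.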

Puppeteer, distinguish between a redirect or a new html element (without timeouts)

TL;DR: using Puppeteer, after triggering a button click, what is the best way to understand what is happening to a page, knowing that either a redirect / history push could happen (the URL changes to one of a set of known ones, not necessarily through a redirect but possibly through a push into the history object) or a dialog might appear (with a known id)?
I'm trying to write a scraper using Puppeteer (very first experience with it, never used before) to navigate a website with the final goal of retrieving a text code, with the challenge that the path to get there is not always the same, and the code might actually not be given.
On the first page - full of ads, and therefore slow - I do something like this to wait for the "get code" button to appear (snippet 1):
// ... code to get the page instance ...
const sleep = (ms) => new Promise(resolve => setTimeout(resolve, ms));
while (true) {
  // Puppeteer won't complain if I don't await the page reload (to avoid the ads),
  // as long as I await the container div before doing anything else.
  page.reload(); // No await
  await page.waitForSelector("#code-container");
  const hasCode = await page.evaluate(() => {
    // I cannot click on it already because I realised it could
    // cause an "Execution context was destroyed" error
    return document.querySelector('#get-code-button') != null;
  });
  if (hasCode) {
    break; // without this, the loop would never exit
  }
  await sleep(10000);
}
// out of the loop, "#get-code-button" exists
And then I click on it (snippet 2):
// For some reason, this method is more reliable than using
// await page.click('#get-code-button').
await page.evaluate(() => {
  document.querySelector('#get-code-button').click();
});
// ... at this point the real troubles begin ...
Now, after the snippet above, a few scenarios might happen:
A dialog might appear, with the "reveal code" button in it (happy days)
A redirect might happen (the URL changes, but it could be either a redirect or a push into the history object), with ads. After clicking on the div with id "continue-without-ads" (to simplify), I end up in one of the next redirects.
A redirect might happen (as above: the URL changes, either through a redirect or a push into the history object), with the "reveal code" button in it (happy days)
A redirect might happen (same as above), with basically "error: code not available" written on it. If I go back from this page, the "get code" button should stay in place, so I could skip snippet 1 and go straight to snippet 2.
The question is, how can I detect which scenario I am in, and act in a timely way (e.g. without waiting for the waitForSelector timeout if I want to check whether an element is there)?
Also, is using page.goBack() to get to the initial link and make another attempt a stupid idea (to avoid waiting for the "get-code-button" to appear again, since the page should now be cached in Chrome)?
I want to avoid the headache of myself mashing the refresh button, clicking the "get-code-button" once it appears and go back to retry until I get the code.
I found a workaround, but I don't think it's the easiest way to achieve what I wanted, nor the most correct one...
My solution is to have two "aggregators" of waiters: (1) one for selectors (a list of IDs, but any selector is fine), (2) one for page URL changes (a list of URLs which will trigger the promise if navigated to). Both these aggregators accept a list of strings as input (selectors in one case, URLs in the other), and return the first one to succeed.
The code to check what changed in the page after the click:
/**
 * @param page the page to monitor for changes
 * @param urls the list of urls that should trigger the redirect monitor
 * @param selectors the list of selectors that should trigger the page change
 * @param triggerPromise the promise that triggers the events (e.g. mouse click on a button)
 * @returns the url or the selector that resulted as a change
 */
async function waitForWinner(page: Page, urls: string[], selectors: string[], triggerPromise: Promise<any>) {
  // waitForUrlChange takes a list of urls as input, and resolves with
  // the first one to succeed
  const urlChangeMonitor = waitForUrlChange(page, urls);
  // hasSelectors takes the list of selectors as input, and resolves with
  // the first one that succeeds
  const selectorsPromise = hasSelectors(page, selectors);
  const results = await Promise.all([
    triggerPromise,
    Promise.race([urlChangeMonitor.promise, selectorsPromise])
  ]);
  urlChangeMonitor.clear();
  const winner = results[1];
  // This check is quite naive, but it works for me:
  const isRedirect = !winner.startsWith("#");
  const isSelector = winner.startsWith("#");
  // ... other custom logic here
  // Simplification:
  return { winner, isRedirect, isSelector };
}
hasSelectors is fairly trivial, and full of custom logic in my case (when there are cookie banners it accepts them and keeps going); the most interesting part is the one that waits for a URL change.
In my case I realised there is no redirect, thus I suppose it's a push into the history object. Regardless, this method succeeds in listening for URL changes in the page:
const unboundResolve = (url: string) => logger.error("Resolved too early, error.");
const unboundReject = () => logger.error("Rejected too early, error.");

export function waitForUrlChange(page: Page, urls: string[], timeout = 60000) {
  if (urls.length === 0) {
    throw Error("Cannot have a 0-length array of urls.");
  }
  const deferred = {
    resolve: unboundResolve,
    reject: unboundReject
  };
  const promise: Promise<string> = new Promise((resolve, reject) => {
    deferred.resolve = resolve;
    deferred.reject = reject;
  });
  let promiseDone = false;
  const checkForUrl = (frame: Frame) => {
    const isRoot = frame.parentFrame() === null;
    if (isRoot) {
      // The frame might change, but the page doesn't always change.
      // Regardless, this way I can detect url changes which
      // occur without a redirect.
      const currentUrl = page.url();
      for (let u of urls) {
        if (~currentUrl.indexOf(u)) {
          // Resolve only once
          if (!promiseDone) {
            promiseDone = true;
            deferred.resolve(currentUrl);
          } else {
            logger.warn(`Found another redirect of interest, but it's too late now.`);
          }
          clear();
          break;
        }
      }
    }
  };
  const clear = () => {
    if (!promiseDone) {
      deferred.reject();
      promiseDone = true;
    }
    // Calling it multiple times doesn't make a difference
    page.off("framenavigated", checkForUrl);
  };
  // If the url doesn't change within one minute, call it a day
  setTimeout(clear, timeout);
  page.on("framenavigated", checkForUrl);
  // Provide a way to turn off the listener from outside
  return { clear, promise };
}
The main idea is to register for the framenavigated event and listen for the URL to contain one of the inputs (not the best check, but it works for me). Rather than directly returning a promise, I wrap it in an object which makes it possible to clear the listener from outside, to keep things tidy.
The approach I presented has vast room for improvement (e.g. rather than passing strings around, I could attach metadata and return that as well, to avoid very naive checks like .startsWith("#"); or the URL check could be a pattern or a callback), but it works and shows the main idea.
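The "first waiter to succeed, with metadata attached" idea can be distilled into a tiny generic helper (a sketch, not the author's code: it tags each promise with a label before racing, so the winner check doesn't rely on string shapes like .startsWith("#")):

```javascript
// Race several promises and learn which one won, by tagging each
// entry with a label before racing them.
// entries: Array of [label, promise] pairs.
function raceLabeled(entries) {
  return Promise.race(
    entries.map(([label, promise]) =>
      promise.then((value) => ({ label, value }))
    )
  );
}
```

With Puppeteer this would let you race, say, a `page.waitForSelector(...)` tagged 'dialog' against a `page.waitForNavigation(...)` tagged 'redirect', and branch on `winner.label` instead of inspecting the resolved string.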

How to make Protractor's ElementArrayFinder 'each' function wait for the current action to complete before advancing to next iteration?

I am using protractor to do e2e tests in an Angular 8 app and I am currently having trouble making an 'each' loop work as I need.
In my page object I have created a method that returns an ElementArrayFinder.
public getCards(): ElementArrayFinder {
  return element.all(
    by.css(
      `${this.baseSelector} .board-body cs-medication-card`,
    ),
  );
}
For each of the elements returned I want to perform a set of actions: clicking a button that opens a menu list (while the menu is open there is an overlay element over the whole view except the menu list) and picking one of the options. In my test I have this:
await page.board
  .getCards()
  .each(async (el: ElementFinder) => {
    await until.invisibilityOf(await page.getOverlay());
    await el
      .element(by.css('.card-header .actions'))
      .getWebElement()
      .click();
    await expect(page.isItemInMenu('X')).toBeTruthy();
    await page
      .getMenuItemByLabel('X')
      .getWebElement()
      .click();
  });
I was expecting that for each card it would click the actions button, check whether the option is in the list, and click the option.
What is happening is that protractor seems to be trying to do everything at the same time, since it says it cannot click on the actions button because the overlay is over it. The only way the overlay can be over the button is if the action in the previous iteration is not complete. I have already thrown in the ExpectedCondition to wait for the overlay to be invisible, but no luck. If only one element is returned by the ElementArrayFinder, it does what it is supposed to.
I am out of ideas, any help would be much appreciated.
Protractor's each is asynchronous.
It's fine when you need to perform some action over an array of elements (e.g. get text or count), but it's not the best choice when you need to perform some kind of scenario.
I suggest using another approach, such as a plain for loop.
Another workaround (which again might not work because of the async nature of .each) is FIRST to wait for the overlay to appear and THEN wait for it to disappear.
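The for-loop idea can be wrapped in a small helper (a sketch; it only assumes the count()/get(i) interface that ElementArrayFinder exposes, so it is not tied to Protractor itself). Each iteration fully awaits the callback before moving to the next index:

```javascript
// A sequential replacement for ElementArrayFinder.each: awaits the
// callback for every element before touching the next one, so the
// overlay from iteration N is gone before iteration N+1 clicks.
async function eachSequential(arrayFinder, fn) {
  const count = await arrayFinder.count();
  for (let i = 0; i < count; i++) {
    await fn(arrayFinder.get(i), i);
  }
}
```

In the test above you would call `await eachSequential(page.board.getCards(), async (el) => { ... })` with the same body the .each callback had.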
The each function is almost of no use. It doesn't even wait for the promise returned by the callback:
await $$('...').each((ele, index) => {
  return ele.$('.fake_class').click().then(() => {
    console.log(`the item ${index} is clicked.`);
    return browser.sleep(5000); // browser.wait(5000) would be invalid: wait() expects a condition
  });
});
I think there are some issues inside the each callback:
getWebElement() is unnecessary
await expect(page.isItemInMenu('X')).toBeTruthy(); seems wrong
await until.invisibilityOf(await page.getOverlay()); - I'm not sure it works well
Please try following code:
await page.board
.getCards()
.each(async (el: ElementFinder) => {
// await until.invisibilityOf(await page.getOverlay());
browser.sleep(10*1000)
await el
.element(by.css('.card-header .actions'))
.click();
// expect(await page.isItemInMenu('X')).toBeTruthy();
await page
.getMenuItemByLabel('X')
.click();
})
.catch((err)->{ // use catch() to capture any runtime error in previous each
console.log('err: ' + err)
})
If the above code works, you can revert the expect and run again, then remove browser.sleep() and revert your await until.invisibilityOf(....) and run again.

In Cypress tests, how do I retry a button click if an expected XHR request does not go out: waitUntil() with a click XHR condition?

At a very high level, we click a button which commands a building-control point; it turns a light on or off. The click is supposed to send a POST request to the server. The issue is that sometimes the button is clicked and the POST request does not go out. The button has no functionality to indicate whether it has been clicked (a minor enhancement).
For the time being, I want to work around this using the Cypress plug-in waitUntil().
// define routes
cy.server();
cy.route('POST', '**/pointcommands').as('sendCommand');

// setup checkFunction for cy.waitUntil()
const waitForPost200 = () =>
  cy.wait('@sendCommand', { timeout: 10000 })
    .then(xhr => cy.wrap(xhr).its('status').should('eq', 200));

// setup jQuery click for cy.pipe()
const click = $el => $el.click();

// click the button
cy.get('.marengo-ok')
  .should('be.visible')
  .pipe(click);

// need to pass in a synchronous should() check so that the click is retried. How can we achieve this with waitUntil?
// wait until checkFunction waitForPost200() returns truthy
cy.waitUntil((): any => waitForPost200(), {
  timeout: 10000,
  interval: 1000,
  errorMsg: 'POST not sent within time limit'
});
I think the underlying issue is that the test runner clicks the button before an event listener gets attached. See the discussion in https://www.cypress.io/blog/2019/01/22/when-can-the-test-click/. My advice (aside from using the waitUntil plugin) is to use https://github.com/NicholasBoll/cypress-pipe, which provides cy.pipe, and click the button using the jQuery method:
// cypress-pipe does not retry any Cypress commands
// so we need to click on the element using
// jQuery method "$el.click()" and not "cy.click()"
const click = $el => $el.click();

cy.get('.owl-dt-popup')
  .should('be.visible')
  .contains('.owl-dt-calendar-cell-content', dayRegex)
  .pipe(click)
  .should($el => {
    expect($el).to.not.be.visible;
  });
As far as I know, there is no way to make the cy.wait('@sendCommand') command restore the test in case of failure (i.e. if the XHR call is not made).
The fastest solution that comes to my mind is the following:
let requestStarted = false; // will be set to true when the XHR request starts

cy.route({
  method: 'POST',
  url: '**/pointcommands',
  onRequest: () => (requestStarted = true)
}).as('sendCommand');

cy.get(".marengo-ok").as("button");

cy.waitUntil(() =>
  cy
    .get("@button")
    .should('be.visible')
    .click()
    .then(() => requestStarted === true)
, {
  timeout: 10000,
  interval: 1000,
  errorMsg: 'POST not sent within time limit'
});

cy.wait("@sendCommand");
You can play with a repo I prepared for you, where I simulate your situation.
Let me know if you need more help 😊
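For reference, the retry-until-truthy behaviour that cy.waitUntil provides can be modelled in plain JavaScript (a simplified sketch, not the plugin's actual implementation): re-run the check on an interval until it resolves truthy or a timeout elapses.

```javascript
// Minimal model of a waitUntil-style loop: re-run `check` every
// `interval` ms until it resolves truthy, or reject with `errorMsg`
// once `timeout` ms have elapsed.
async function waitUntil(check, { timeout = 10000, interval = 1000, errorMsg = 'timed out' } = {}) {
  const deadline = Date.now() + timeout;
  while (true) {
    if (await check()) return true;
    if (Date.now() >= deadline) throw new Error(errorMsg);
    await new Promise((resolve) => setTimeout(resolve, interval));
  }
}
```

In the answer above, the check closes over the `requestStarted` flag, so each retry re-clicks the button and then asks whether the POST has finally gone out.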

Protractor on AngularJS - 'script timeout: result was not received in 11 seconds' after login page change

Protractor hangs completely when trying to get any element property after logging in (I don't know if it's related to logging in or just to switching pages).
it("Should get location of main container", async function() {
  await LoginPage.validLogin();
  // Works and logs in the dashboard
  await browser.sleep(3000);
  // Get the main container by class name
  const container = await element(by.css(".main-container"));
  // Logs properly the element functions (as expected)
  console.log(container);
  console.log(await container.getLocation()); // Hangs here
});
In this case, I'm trying to get the location of the main container element on the page. The first console.log fires and shows properly, while the second hangs completely, so I get the script timeout. Increasing the timeout time doesn't help at all...
I found online that misusing $timeout in AngularJS instead of using $interval may lead to this strange behaviour, but I really can't skim through the entire (very big!) project's codebase to change everything hoping that it just works, not to talk about the external libraries using $timeout.
I have SELENIUM_PROMISE_MANAGER = false; in my Protractor config so I disabled the built-in Control Flow in order to manually manage the promises using async/await, but even if I use the built-in Control Flow without using async/await I get the very same behaviour and error.
I'm using Jasmine as testing framework.
Maybe I'm missing something? Any help would be much appreciated, thanks!
This is caused by the fact that Angular is not stable. Have a look at the link below; I found my answer there. When the page you are trying to test is open, go to the browser dev tools and type getAllAngularTestabilities() in the console. There are a few properties here that indicate whether Angular is ready to be tested: hasPendingMicrotasks needs to be false, hasPendingMacrotasks needs to be false, and isStable needs to be true. In my case hasPendingMacrotasks was true when it had to be false, so the page I looked at was not ready to be tested.
Failed: script timeout: result was not received in 11 seconds From: Task: Protractor.waitForAngular() - Locator: By(css selector, #my-btn)
Try something like this:
it("Should get location of main container", async function() {
  await LoginPage.validLogin();
  const container = await element(by.css(".main-container"));
  await browser.wait(protractor.ExpectedConditions.presenceOf(container), 5000, 'Element taking too long to appear in the DOM');
  console.log(await container.getLocation());
});
I don't think getLocation() exists in the JavaScript bindings for Selenium - I couldn't find it in the source code, anyway - so that promise will never resolve, which is why it hangs. But I think you can achieve basically the same thing with getRect():
it("Should get location of main container", async function() {
  await LoginPage.validLogin();
  const container = await element(by.css(".main-container"));
  await browser.wait(protractor.ExpectedConditions.presenceOf(container), 5000, 'Element taking too long to appear in the DOM');
  console.log(await container.getRect());
});