How to wait for requests and validate responses using Playwright? - javascript

This is my first time using playwright and I can't figure out how to wait for requests and validate responses.
I've been using cypress for quite a long time, and it was pretty easy to manage network requests.
For example, I need to validate response after clicking a button and this is how I would do it with cypress:
cy.server()
cy.route('POST', '/api/contacts').as('newContact')
cy.get('.btn').click()
cy.wait('@newContact').then((response) => {
  expect(response.status).to.eq(400)
  expect(response.responseBody.data.name).to.eq('abcde')
})
And this is how I'm trying to do the same thing with Playwright, but it ends up validating a GET request that was sent long before the save button was even clicked. I can't figure out how to handle this request properly, and that's a show stopper for my test suite:
await contacts.clickSaveBtn()
await page.waitForResponse((resp) => {
  resp.url().includes('/api/contacts')
  expect(resp.status()).toBe(400)
})
Any help or advice would be really appreciated

What you need to do is start waiting for the response first and then click, so that waitForResponse() can catch the response that actually results from the click.
await Promise.all([
  page.waitForResponse(resp => resp.url().includes('/api/contacts') && resp.status() === 400),
  contacts.clickSaveBtn()
]);
This should handle the possible race condition.

Alternatively, you can assign a promise, then later wait for it:
const responsePromise = page.waitForResponse(resp => resp.url().includes('/api/contacts') && resp.status() === 400);
await contacts.clickSaveBtn();
const response = await responsePromise;
It's more readable and you get the response value.
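If you also want to assert on the response body, as the Cypress example does, keep the predicate pure (it must only return a boolean) and make the assertions after the promise resolves. A minimal sketch, reusing names from the question (`page`, `contacts`) and Playwright's documented Response methods (`url()`, `status()`, `json()`):

```javascript
// Selector predicate: only *matches* the response; no assertions inside.
const isContactsFailure = (resp) =>
  resp.url().includes('/api/contacts') && resp.status() === 400;

// Usage inside a test (assumes `page` and `contacts` from the question):
// const responsePromise = page.waitForResponse(isContactsFailure);
// await contacts.clickSaveBtn();
// const response = await responsePromise;
// const body = await response.json();
// expect(body.data.name).toBe('abcde'); // mirrors the Cypress assertion
```

Keeping assertions out of the predicate matters: if an expect throws inside it, waitForResponse just keeps waiting for a matching response instead of failing the test.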

Extract the price only from api using node-fetch array

I am sorry for the basic question; I have been trying to extract only the price from the API using node-fetch:
const fetch = require('node-fetch');
fetch('https://api.binance.us/api/v3/avgPrice?symbol=DOGEUSD')
  .then(res => res.text())
  .then(text => console.log(text))
let AvgPrice = text.map(text => text.price);
The error I am receiving is
internal/modules/cjs/loader.js:968
throw err;
^
Please, any suggestion is greatly appreciated
There are several things that you need to check out.
Errors regarding cjs/loader.js have little or nothing to do with your code per se, but rather with the setup: for example, how you run the code, the naming of the file, etc.
internal/modules/cjs/loader.js:582 throw err
https://github.com/nodejs/help/issues/1846
This code will throw ReferenceError: text is not defined.
The reason is that you never define the variable text, and then you try to call the map function on it.
Also, fetch is an async function and Node.js is single-threaded and non-blocking. So what happens is that you send an HTTP request (fetch) to the website, which takes time, but meanwhile your code keeps running, so it continues to the next line.
Let's add some console logs:
const fetch = require('node-fetch');
console.log('1. lets start')
fetch('https://api.binance.us/api/v3/avgPrice?symbol=DOGEUSD')
  .then(res => res.text())
  .then(text => {
    console.log('2. I got my text', text)
  })
console.log('3. Done')
You might think that this will log out
1. lets start
2. I got my text {"mins":5,"price":"0.4998"}
3. Done
Nope, it will log out
1. lets start
3. Done
2. I got my text {"mins":5,"price":"0.4998"}
Because you fetched the data, then your program continued: it logged out 3. Done and THEN, when it got the data from api.binance, it logged out 2. I got my text (notice the keyword then; it happens later).
map is a function for arrays, but what the API returns is an object. So when you fix your async code, you will get TypeError: text.map is not a function.
Since it returns an object, you can access its property right away: text.price.
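Putting both fixes together, a corrected sketch might look like this (the URL and the {"mins":5,"price":"0.4998"} response shape are from the question; extractPrice is a helper name introduced here, and the sketch assumes Node 18+ where fetch is built in, otherwise keep require('node-fetch')):

```javascript
// The API returns one object like {"mins":5,"price":"0.4998"},
// not an array, so read the property directly instead of calling .map()
const extractPrice = (data) => data.price;

fetch('https://api.binance.us/api/v3/avgPrice?symbol=DOGEUSD')
  .then(res => res.json()) // parse the body as JSON rather than plain text
  .then(data => console.log('AvgPrice:', extractPrice(data)))
  .catch(err => console.error('request failed:', err.message));
```

Note that anything depending on the price has to live inside the .then callback (or after an await), because that is the only place where the data has actually arrived.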

Dark magic going through res.json()

To avoid the XY problem, here's my final goal: I want to fetch something, use the response body, and return the response (from which the user should be able to get the body) without returning the body separately. On paper, this should work:
const res = await fetch("something")
const clone = res.clone()
const body = await clone.json()
// Use body
return res
If I get lucky, the process ends during await clone.json(). If I don't, it freezes. Forever.
As a reminder, res.json can't be called twice.
Does it think I'm not good enough to get an error message? Any help on that? Thank you.
Progress
I located the error: in node_modules/node-fetch/lib/index.js:416, the listener on the end of the action is never triggered. I still don't know why.
When replacing the URL with "https://jsonplaceholder.typicode.com/todos/1", everything works. This may be related to the server. I don't see any link between the origin of the request and whether the request could be cloned...
Placing res.text() before clone.json() magically fixes it, but it makes everything lose its purpose.
By messing with the module's code, I found that the response actually never ends. The JSON gets cut, and the chunk from the last data event call isn't the end of the JSON, so the end event never gets called. Extremely confusing.
I spent too much time on this issue, I will simply avoid it by returning the body separately.
This seems to work just fine:
const myFetch = async (url) => {
  const resp = await fetch(url)
  const clone = resp.clone()
  const json = await clone.json()
  console.log('inner:', json.title)
  return resp
}
myFetch('https://jsonplaceholder.typicode.com/todos/1')
  .then(resp => resp.json())
  .then(json => console.log('outer:', json.title))
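For completeness: the hang described in the question is consistent with a known node-fetch limitation. clone() shares an internal buffer capped by the stream's highWaterMark (16 kB by default); if the body is larger and one copy is never read to the end, awaiting the other copy can stall, which also explains why the small jsonplaceholder response worked. node-fetch's README documents a highWaterMark option as the workaround. A sketch (the fetch implementation is passed in as a parameter purely so it can be stubbed; in real code pass require('node-fetch')):

```javascript
// fetchImpl is injected so the sketch can be exercised without the network.
const myFetch = async (url, fetchImpl) => {
  // Let clone() buffer up to 10 MB of unread body instead of the 16 kB default.
  const resp = await fetchImpl(url, { highWaterMark: 10 * 1024 * 1024 });
  const clone = resp.clone();
  const json = await clone.json();
  console.log('inner:', json.title);
  return resp;
};
```

This keeps the original goal intact: the body is used internally and the caller still gets a response it can consume.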

Manage a long-running operation node.js

I am creating a Telegram bot which allows you to get some information about the Destiny 2 game world using the Bungie API. The bot is based on the Bot Framework and uses Telegram as a channel (the language I am using is JavaScript).
Now I find myself in the situation where, when I send a request to the bot, it makes a series of HTTP calls to the endpoints of the API to collect information, formats it, and sends it back via Adaptive Cards. In many cases this process takes more than 15 seconds, showing in chat the message "POST to DestinyVendorBot timed out after 15s" (even though this message is shown, the bot works perfectly).
Searching online, I noticed that there doesn't seem to be a way to hide this message or increase the time before it shows up. So the only thing left for me to do is to make sure it doesn't show up. To do this I tried to refer to this documentation article, but the code shown is in C#. Could someone give me an idea of how to solve this problem, or maybe some sample code?
I leave here an example of a call that takes too long and generates the message:
// Show the gunsmith's inventory
if (LuisRecognizer.topIntent(luisResult) === 'GetGunsmith') {
  // Takes more than 15 seconds
  const mod = await this.br.getGunsmith(accessdata, process.env.MemberShipType, process.env.Character);
  if (mod.error == 0) {
    var card = {
    }
    await step.context.sendActivity({
      text: 'Ecco le mod vendute oggi da Banshee-44:', // "Here are the mods sold today by Banshee-44:"
      attachments: [CardFactory.adaptiveCard(card)]
    });
  } else {
    await step.context.sendActivity("Codice di accesso scaduto."); // "Access code expired."
    await this.loginStep(step);
  }
}
I have done something similar, where you call another function and send the message via proactive message once the function is complete. In my case, I set up the function directly inside the bot instead of as a separate Azure Function. First, you need to save the conversation reference somewhere. I store this in conversation state and resave it every turn (you could probably do this in onMembersAdded, but I chose onMessage so it resaves the conversation reference every turn). You'll need to import const { TurnContext } = require('botbuilder') for this.
// In your onMessage handler
const conversationData = await this.dialogState.get(context, {});
conversationData.conversationReference = TurnContext.getConversationReference(context.activity);
await this.conversationState.saveChanges(context);
You'll need this for the proactive message. When it's time to call the API, you'll need to send a message (technically that's optional, but recommended), get the conversation data if you haven't already, and call the API function without awaiting it.
If your API always comes back in around 15 seconds, you may just want a standard message (e.g. "One moment while I look that up for you"), but if it's going to be longer I would recommend setting the expectation with the user (e.g. "I will look that up for you. It may take up to a minute to get an answer. In the meantime you can continue to ask me questions.").
You should be saving user/conversation state further down in your turn handler. Since you are not awaiting the call, the turn will end and the bot will not hang or send the timeout message. Here is what I did with a simulation I created.
await dc.context.sendActivity(`OK, I'll simulate a long-running API call and send a proactive message when it's done.`);
const conversationData = await this.dialogState.get(context, {});
apiSimulation.longRunningRequest(conversationData.conversationReference);
// That is in a switch statement. At the end of my turn handler I save state
await this.conversationState.saveChanges(context);
await this.userState.saveChanges(context);
And then the function that I called. As this was just a simulation, I have just awaited a promise, but obviously you would call and await your API(s). Once that comes back you will create a new BotFrameworkAdapter to send the proactive message back to the user.
const { BotFrameworkAdapter } = require('botbuilder');

class apiSimulation {
  static async longRunningRequest(conversationReference) {
    console.log('Starting simulated API');
    await new Promise(resolve => setTimeout(resolve, 30000));
    console.log('Simulated API complete');
    // Set up the adapter and send the proactive message
    try {
      const adapter = new BotFrameworkAdapter({
        appId: process.env.microsoftAppID,
        appPassword: process.env.microsoftAppPassword,
        channelService: process.env.ChannelService,
        openIdMetadata: process.env.BotOpenIdMetadata
      });
      await adapter.continueConversation(conversationReference, async turnContext => {
        await turnContext.sendActivity('This message was sent after a simulated long-running API');
      });
    } catch (error) {
      console.log(error);
    }
  }
}
module.exports.apiSimulation = apiSimulation;

The click action in mailinator page does not work with protractor

I'm trying to automate the verification code sent to an email in mailinator. When I run the test, the error is: "TimeoutError: Wait timed out after 35001ms". I'm thinking that it is a problem with the async functions, but I'm not sure about that.
const emailRow = element(by.className("tr.even.pointer.ng-scope"));
this.setCode = async function() {
  let windows = await browser.getAllWindowHandles();
  await browser.switchTo().window(windows[1]);
  await browser.wait(ExpectedConditions.visibilityOf(emailRow), 50000);
  browser.actions().mouseMove(emailRow).click().perform();
  await browser.wait(ExpectedConditions.visibilityOf(emailCode), 35000);
}
I also tried this
this.setCode = async function() {
  let windows = await browser.getAllWindowHandles();
  await browser.switchTo().window(windows[1]);
  await browser.wait(ExpectedConditions.elementToBeClickable(emailRow), 50000);
  emailRow.click();
  await browser.wait(ExpectedConditions.visibilityOf(emailCode), 35000);
}
But I have the same problem; on the screen I can't see the test perform the click. I put a sleep after the click on the emailRow, but it doesn't work. The image shows the page where I want to perform the click.
I think your best bet is to use their API instead of going to their website and reading the email there. In Protractor this is very easy. Depending on whether or not you have a premium account with them, you can use a public or a private team inbox. For a public inbox, do something similar to the following:
const checkMailinatorPublicEmail = async () => {
  process.env.NODE_TLS_REJECT_UNAUTHORIZED = "0";
  let requestUrl = 'https://mailinator.com/api/v2/domains/public/inboxes/your_inbox_name_here';
  let responseBody = await fetch(requestUrl);
  let responseJson = await responseBody.json();
  return responseJson;
}
Now you have all the emails in the inbox in your response body as a JSON object. To keep this simple, do not use a static public team inbox; instead, use a random inbox name so that each time you will only have one email in the inbox and you can parse that email to your needs.
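A tiny sketch of that random-inbox idea (the name format below is an arbitrary choice; you would adapt checkMailinatorPublicEmail above to take the inbox name as a parameter):

```javascript
// Generate a unique inbox name per test run, so the inbox contains
// exactly one email when you poll it.
const randomInbox = () => `qa-${Date.now()}-${Math.floor(Math.random() * 1e6)}`;

const inbox = randomInbox();
// Sign up with `${inbox}@mailinator.com`, then poll
// `https://mailinator.com/api/v2/domains/public/inboxes/${inbox}`
console.log(inbox);
```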
I believe you should try the second approach. In the first approach, you are waiting for an element to be visible, which does not guarantee the element is clickable.
Looking at the second approach, the code looks fine. My suggestion is to try changing the click method like below:
browser.executeScript('arguments[0].click()', emailRow.getWebElement());
Hope this will help
Happy coding!

Best way to push one more scrape after all are done

I have the following scenario:
My scrapes are behind a login, so there is one login page that I always need to hit first
then I have a list of 30 urls that can be scraped asynchronously for all I care
then at the very end, when all those 30 urls have been scraped I need to hit one last separate url to put the results of the 30 URL scrape into a firebase db and to do some other mutations (like geo lookups for addresses etc)
Currently I have all 30 urls in a request queue (through the Apify web-interface) and I'm trying to see when they are all finished.
But obviously they all run async, so that data is never reliable:
const queue = await Apify.openRequestQueue();
let pendingRequestCount = await queue.getInfo();
The reason why I need that last URL to be separate are two-fold:
Most obvious reason being that I need to be sure I have the
results of all 30 scrapes before I send everything to DB
neither of the 30 URL's allow me to do Ajax / Fetch calls, which
I need for sending to Firebase and do the geo lookups of addresses
Edit: I tried this based on the answer from @Lukáš Křivka. handledRequestCount in the while loop reaches a max of 2, never 4, and Puppeteer just ends normally. I've put the "return" inside the while loop because otherwise the requests never finish (of course).
In my current test setup I have 4 urls to be scraped (in the Start URLs input fields of Puppeteer Scraper on Apify.com) and this code:
let title = "";
const queue = await Apify.openRequestQueue();
let {handledRequestCount} = await queue.getInfo();
while (handledRequestCount < 4){
  await new Promise((resolve) => setTimeout(resolve, 2000)) // wait for 2 secs
  handledRequestCount = await queue.getInfo().then((info) => info.handledRequestCount);
  console.log(`Currently handled here: ${handledRequestCount} --- waiting`) // this goes max to '2'
  title = await page.evaluate(()=>{ return $('h1').text()});
  return {title};
}
log.info("Here I want to add another URL to the queue where I can do ajax stuff to save results from above runs to firebase db");
title = await page.evaluate(()=>{ return $('h1').text()});
return {title};
I would need to see your code to answer completely correctly, but this has solutions.
Simply use Apify.PuppeteerCrawler for the 30 URLs, then run the crawler with await crawler.run().
After that, you can simply load the data from the default dataset via
const dataset = await Apify.openDataset();
const data = await dataset.getData().then((response) => response.items);
And do whatever you want with the data; you can even create a new Apify.PuppeteerCrawler to crawl the last URL and use the data.
If you are using Web Scraper though, it is a bit more complicated. You can either:
1) Create a separate actor for the Firebase upload and pass it a webhook from your Web Scraper, then load the data from it. If you look at the Apify store, there is already a Firestore uploader.
2) Add logic that polls the requestQueue like you did, and proceed only when all the requests are handled. You can create some kind of loop that waits, e.g.
const queue = await Apify.openRequestQueue();
let { handledRequestCount } = await queue.getInfo();
while (handledRequestCount < 30) {
  console.log(`Currently handled: ${handledRequestCount} --- waiting`)
  await new Promise((resolve) => setTimeout(resolve, 2000)) // wait for 2 secs
  handledRequestCount = await queue.getInfo().then((info) => info.handledRequestCount);
}
// Do your Firebase stuff
// Do your Firebase stuff
In the scenario where you have one async function that's called for all 30 URLs you scrape: first make sure the function returns its result after all necessary awaits; then you can await Promise.all(arrayOfAll30Promises) and run your last piece of code.
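A sketch of that Promise.all pattern (scrapeOne and finish are hypothetical stand-ins for the per-URL scrape function and the final Firebase step):

```javascript
// Run the final step only after every per-URL scrape has resolved.
const scrapeAllThenFinish = async (urls, scrapeOne, finish) => {
  const results = await Promise.all(urls.map((url) => scrapeOne(url)));
  // At this point all results are in hand, in the same order as `urls`.
  return finish(results);
};
```

Note that Promise.all rejects as soon as any single scrape rejects, so wrap scrapeOne in a try/catch (or .catch) if a partial failure should not abort the whole batch.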
Because I was not able to get consistent results with {handledRequestCount} from getInfo() (see my edit in my original question), I went another route.
I'm basically keeping a record of which URLs have already been scraped via the key/value store.
urls = [
  {done:false, label:"vietnam", url:"https://en.wikipedia.org/wiki/Vietnam"},
  {done:false, label:"cambodia", url:"https://en.wikipedia.org/wiki/Cambodia"}
]

// Loop over the array and add them to the Queue
for (let i=0; i<urls.length; i++) {
  await queue.addRequest(new Apify.Request({ url: urls[i].url }));
}

// Push the array to the key/value store with key 'URLS'
await Apify.setValue('URLS', urls);
Now, every time I've processed a URL, I set its "done" value to true.
When they are all true, I push another (final) URL into the queue:
await queue.addRequest(new Apify.Request({ url: "http://www.placekitten.com" }));
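The bookkeeping described above can be sketched like this (markDone and allDone are helper names introduced here; the Apify calls in the comments mirror the code earlier in the answer):

```javascript
// Mark one URL as scraped, returning a new array.
const markDone = (urls, doneUrl) =>
  urls.map((u) => (u.url === doneUrl ? { ...u, done: true } : u));

// True once every entry has been marked done.
const allDone = (urls) => urls.every((u) => u.done);

// Inside the page handler, roughly:
// let urls = await Apify.getValue('URLS');
// urls = markDone(urls, request.url);
// await Apify.setValue('URLS', urls);
// if (allDone(urls)) {
//   await queue.addRequest(new Apify.Request({ url: "http://www.placekitten.com" }));
// }
```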
