Not able to capture background image while generating PDF using Puppeteer API - javascript

Node v8.11.1, Headless Chrome
I'm trying to generate a PDF, but somehow the background image is not captured in it.
Below is the code. Any help is appreciated.
const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch({headless: true});
  const page = await browser.newPage();
  await page.goto('http://54.201.139.151/', {waitUntil: 'networkidle0'});
  await page.pdf({path: 'hn40.pdf', printBackground: true, width: '1024px', height: '768px'});
  await browser.close();
})();

Update: page.emulateMedia() has been dropped in favor of page.emulateMediaType().

As Rippo mentioned, you need page.emulateMedia('screen') for this to work properly. I have updated your script below, but I changed the target page to Google for testing.
const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto('http://google.ca/', {waitUntil: 'networkidle2'});
  await page.emulateMedia('screen');
  await page.pdf({path: 'hn40.pdf', printBackground: true, width: '1024px', height: '768px'});
  await browser.close();
})();

Related

Chromium always shows "about:blank" and stops working on Raspberry Pi 3A+

On my Raspberry Pi 3A+, Chromium always shows "about:blank" and stops working unless I close the tab manually and open a new one.
const puppeteer = require("puppeteer");

async function test() {
  const browser = await puppeteer.launch({
    headless: false,
    executablePath: "chromium-browser",
  });
  const [page] = await browser.pages();
  await page.evaluate(() => window.open("https://www.example.com/"));
  const page1 = await browser.newPage();
  page.close();
  await page1.goto("https://allegro.pl/");
  await page1.screenshot({ path: "hello.png" });
  await browser.close();
}
test();
I'm trying to get this code working on my Raspberry Pi 3A+.
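One thing that stands out in the snippet: page.close() returns a promise but isn't awaited, so the next navigation can start while the old tab is still being torn down, which is exactly the kind of race that behaves differently on a slow device like a Pi 3A+. A sketch of the same flow with the close awaited (the window.open step is dropped here since a second tab is created with newPage anyway; "chromium-browser" on the PATH is assumed, as in the original):

```javascript
// Sketch: open the replacement tab first, then await page.close() so the
// initial tab is fully gone before navigating. Assumes "chromium-browser"
// resolves to the system Chromium, as in the question.
async function run() {
  const puppeteer = require('puppeteer'); // lazy require so this file parses without the dep
  const browser = await puppeteer.launch({
    headless: false,
    executablePath: 'chromium-browser',
  });
  const [page] = await browser.pages();
  const page1 = await browser.newPage();
  await page.close(); // close() is async: await it before moving on
  await page1.goto('https://allegro.pl/');
  await page1.screenshot({ path: 'hello.png' });
  await browser.close();
}

// Usage: run().catch(console.error);
```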

How to handle "accept cookies"?

I am trying to make a scraper that gets the reviews for hotels on tripadvisor.com. I was just working on pagination and testing whether the browser would go all the way to the end, where there are no more pages.
Here is my code so far:
const puppeteer = require("puppeteer");
const cheerio = require("cheerio");

async function main() {
  const browser = await puppeteer.launch({ headless: false });
  const page = await browser.newPage();
  await page.goto('https://www.tripadvisor.com/Hotels-g298656-Ankara-Hotels.html');
  while (true) {
    await page.click('a[class="nav next ui_button primary"]');
    await page.waitForNavigation({waitUntil: 'networkidle0'});
  }
}
main();
However, this stops when the 'accept cookies' popup appears. How can I handle this?
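A common approach is to click the consent button once, before the pagination loop starts. A sketch under two assumptions you should verify against the live page: the '#onetrust-accept-btn-handler' selector (the standard accept button of the OneTrust consent banner, which TripAdvisor has used) and the "next" button selector taken from the question:

```javascript
// Sketch: dismiss the cookie banner once, then paginate until the
// "next" button disappears. The '#onetrust-accept-btn-handler' selector
// is an assumption (OneTrust's standard accept button) - inspect the
// live page to confirm it.
async function scrapeWithConsent(url) {
  const puppeteer = require('puppeteer'); // lazy require so this file parses without the dep
  const browser = await puppeteer.launch({ headless: false });
  const page = await browser.newPage();
  await page.goto(url);

  // Click the consent button if it shows up; carry on if it never does.
  try {
    await page.waitForSelector('#onetrust-accept-btn-handler', { timeout: 10000 });
    await page.click('#onetrust-accept-btn-handler');
  } catch (e) {
    // banner never appeared
  }

  // Stop when there is no "next" button instead of looping forever.
  while (await page.$('a.nav.next.ui_button.primary')) {
    await Promise.all([
      page.waitForNavigation({ waitUntil: 'networkidle0' }),
      page.click('a.nav.next.ui_button.primary'),
    ]);
  }
  await browser.close();
}
```

Starting the waitForNavigation before the click (via Promise.all) also avoids the race where the navigation finishes before the wait is registered.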

Puppeteer for scraping a page (with authentication)

I am using Puppeteer for scraping a page (a load-test application) and I cannot enter the username and password on this page. Does anyone know Puppeteer and can help me? This is the code:
const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch({headless: false});
  const page = await browser.newPage();
  await page.goto('https://d22syekf1i694k.cloudfront.net/', {waitUntil: 'networkidle2'});
  await page.waitForSelector('input[name=username]');
  await page.type('input[name=username]', 'Adenosine');
  await page.$eval('input[name=username]', el => el.value = 'Adenosine');
  await browser.close();
})();
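Note that the snippet only fills the username (twice, via type and $eval) and never touches a password field or submits the form. A hedged sketch of a complete login flow; the 'input[name=password]' and 'button[type=submit]' selectors are assumptions that need to be confirmed against the page's actual markup:

```javascript
// Sketch: type both credentials, then submit. The password field name and
// the submit button selector are assumptions - check the real form markup.
async function login(url, username, password) {
  const puppeteer = require('puppeteer'); // lazy require so this file parses without the dep
  const browser = await puppeteer.launch({ headless: false });
  const page = await browser.newPage();
  await page.goto(url, { waitUntil: 'networkidle2' });
  await page.waitForSelector('input[name=username]');
  await page.type('input[name=username]', username);
  await page.type('input[name=password]', password); // assumed field name
  await page.click('button[type=submit]');           // assumed submit button
  await page.waitForNavigation();
  await browser.close();
}
```

Also make sure the quotes in your script are straight ASCII quotes; the curly quotes in the posted code would be a syntax error as-is.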

GDPR cookie popup in Playwright

Hello, I'm trying to take a screenshot with Playwright, but an EU cookie-law popup shows up in my screenshots. How can I remove it?
Here are my browser parameters.
const browser = await playwright.firefox.launch({
  headless: true,
  firefoxUserPrefs: {
    "network.cookie.cookieBehavior": 2
  }
});
But it doesn't work.
Thanks for your help.
Use the Playwright API to click the element. I'm using the text selector in the example below, but you can use any selector.
const { webkit } = require('playwright');

(async () => {
  const browser = await webkit.launch({ headless: false });
  const page = await browser.newPage();
  await page.goto('https://npmjs.com');
  await page.click('text=Accept');
  await page.screenshot({ path: 'screenshot.png' });
  await browser.close();
})();

Puppeteer doesn't show a calendar that is shown after clicking a text field

I'm using Puppeteer for some web scraping and I'm having trouble. The website I'm trying to scrape is this one, and I'm trying to take a screenshot of a calendar that appears after clicking "Reserve now" > "Dates".
const puppeteer = require('puppeteer');
const fs = require('fs');

void (async () => {
  try {
    const browser = await puppeteer.launch({headless: false});
    const page = await browser.newPage();
    await page.goto('https://www.marriott.com/hotels/travel/reumd-le-meridien-ra-beach-hotel-and-spa');
    await page.setViewport({ width: 1920, height: 938 });
    await page.waitForSelector('.m-hotel-info > .l-container > .l-header-section > .l-m-col-2 > .m-button');
    await page.click('.m-hotel-info > .l-container > .l-header-section > .l-m-col-2 > .m-button');
    await page.waitForSelector('.modal-content');
    await page.waitFor(5000);
    await page.waitForSelector('.js-recent-search-inputs .js-datepick-container .l-h-field-input');
    await page.click('.js-recent-search-inputs .js-datepick-container .l-h-field-input');
    await page.waitFor(5000);
    await page.screenshot({ path: 'myscreenshot.png' });
    await browser.close();
  } catch (error) {
    console.log(error);
  }
})()
This is what myscreenshot.png should contain:
but I'm getting this instead:
As you can see, myscreenshot.png doesn't contain the calendar. I don't understand what I'm doing wrong, since I click the right node and even give it enough time to load everything.
Thank you in advance!
Edit: I forgot to say that I have also tried the Puppeteer recorder to achieve this, and I haven't had luck either.
As you have many .l-h-field-input elements, I would try being more specific there.
This worked for me:
await page.click('.js-recent-search-inputs .js-datepick-container .l-h-field-input');
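If being more specific with the selector is not enough, it may also help to wait for the picker itself to become visible after the click, instead of sleeping a fixed five seconds. A sketch that takes an already-open page; the '.js-datepick-container' selector comes from the question and should be re-checked against the current page:

```javascript
// Sketch: click the date field, wait until the picker element is actually
// visible, then screenshot. Assumes `page` is an open Puppeteer Page and
// that '.js-datepick-container' still matches the date-picker wrapper.
async function screenshotCalendar(page, outPath) {
  await page.click('.js-recent-search-inputs .js-datepick-container .l-h-field-input');
  await page.waitForSelector('.js-datepick-container', { visible: true });
  await page.screenshot({ path: outPath });
}
```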
