I have been looking through the Chrome headless browser documentation but have been unable to find this information so far.
Is it possible to capture the rendered font on a website? This information is available through the Chrome dev console.
Puppeteer doesn't expose this API directly, but it's possible to use the raw devtools protocol to get the "Rendered Fonts" information:
const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto('https://www.stackoverflow.com/');

  await page._client.send('DOM.enable');
  await page._client.send('CSS.enable');
  const doc = await page._client.send('DOM.getDocument');
  const node = await page._client.send('DOM.querySelector', {nodeId: doc.root.nodeId, selector: 'h1'});
  const fonts = await page._client.send('CSS.getPlatformFontsForNode', {nodeId: node.nodeId});
  console.log(fonts);

  await browser.close();
})();
The devtools protocol documentation for CSS.getPlatformFontsForNode can be found here: https://chromedevtools.github.io/devtools-protocol/tot/CSS#method-getPlatformFontsForNode
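Note that page._client is an internal Puppeteer property and may change between releases. Here is a minimal sketch of the same idea using the public CDP session API instead, assuming a Puppeteer version that exposes page.target().createCDPSession():

const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto('https://www.stackoverflow.com/');

  // Open a dedicated DevTools protocol session instead of touching page._client
  const client = await page.target().createCDPSession();
  await client.send('DOM.enable');
  await client.send('CSS.enable');

  const doc = await client.send('DOM.getDocument');
  const node = await client.send('DOM.querySelector', {nodeId: doc.root.nodeId, selector: 'h1'});
  const fonts = await client.send('CSS.getPlatformFontsForNode', {nodeId: node.nodeId});
  console.log(fonts); // { fonts: [ { familyName, isCustomFont, glyphCount }, ... ] }

  await browser.close();
})();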
When using Puppeteer, I used to get a new tab with these lines of code:
const browser = await puppeteer.launch()
const [page] = await browser.pages()
await page.goto('http://example.com')
The main purpose of this is to keep the number of tabs low so my app runs lighter.
But when I use Playwright, it seems that the context doesn't contain any page yet.
const browser = await playwright.chromium.launch()
const context = await browser.newContext()
const [page] = await context.pages()
await page.goto('http://example.com')
My code runs, but I keep getting this error message:
(node:47248) UnhandledPromiseRejectionWarning: TypeError: Cannot read property 'goto' of undefined
Am I the only one getting this kind of error?
That's the same behavior you would get in Puppeteer if you use createIncognitoBrowserContext:
const browser = await puppeteer.launch();
const context = await browser.createIncognitoBrowserContext();
const [page] = await context.pages(); // page is undefined here, the context has no pages yet
await page.goto('http://example.com');
Both createIncognitoBrowserContext in Puppeteer and newContext in Playwright create contexts that contain no pages.
As you mentioned in your answer, you could use the default context or call newPage in the context you just created.
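For example, a minimal sketch of the second option, creating the page explicitly in the freshly created context:

const playwright = require('playwright');

(async () => {
  const browser = await playwright.chromium.launch();
  const context = await browser.newContext();
  const page = await context.newPage(); // the context starts empty, so create the page yourself
  await page.goto('http://example.com');
  await browser.close();
})();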
After working to get rid of this error, I changed the code to look like this:
const browser = await playwright.chromium.launch()
const context = await browser.defaultContext()
const [page] = await context.pages()
await page.goto('http://example.com')
I changed newContext() to defaultContext().
Puppeteer exposes a page.screenshot() method for saving a screenshot locally on your machine. Here are the docs.
See: https://github.com/GoogleChrome/puppeteer
const browser = await puppeteer.launch();
const page = await browser.newPage();
await page.goto('https://example.com');
await page.screenshot({path: 'example.png'});
Is there a way to save a data file in a similar fashion? I'm seeking something analogous to...
page.writeToFile({data, path,});
Since any Puppeteer script is an ordinary Node.js script, you can use anything you would use in Node, say the good old fs module:
const fs = require('fs');
fs.writeFileSync('path/to/file.json', data);
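For instance, here is a minimal sketch that scrapes a couple of values with Puppeteer and writes them out as JSON; the URL, the extracted fields, and the data.json path are just placeholders:

const fs = require('fs');
const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto('https://example.com');

  // Collect whatever data you need from the page
  const data = await page.evaluate(() => ({
    title: document.title,
    headings: [...document.querySelectorAll('h1')].map(el => el.textContent),
  }));

  // Persist it with plain Node APIs, just like in any other script
  fs.writeFileSync('data.json', JSON.stringify(data, null, 2));

  await browser.close();
})();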
I am using Puppeteer to create a PDF from my static local HTML file. The PDF is created, but it's corrupted: Adobe Reader can't open the file and says 'Bad file handle'. Any suggestions?
I am using the standard code below:
const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto('local_html_file', {waitUntil: 'networkidle2'});
  await page.pdf({path: 'hn.pdf', format: 'A4'});
  await browser.close();
})();
I also tried setContent() but same result. The page.screenshot() function works however.
Probably your code triggers an exception. You should check that the PDF file size is not zero and that you can read the PDF file with the less or cat command. Sometimes PDF creation software writes error messages at the top of the PDF file content.
const puppeteer = require('puppeteer');

(async () => {
  try {
    const browser = await puppeteer.launch();
    const page = await browser.newPage();
    await page.goto('local_html_file', {waitUntil: 'networkidle2'});
    await page.pdf({path: 'hn.pdf', format: 'A4'});
    await browser.close();
  } catch (e) {
    console.log(e);
  }
})();
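If you would rather run those checks from Node instead of less or cat, here is a small sketch, assuming the output file is hn.pdf in the working directory:

const fs = require('fs');

const stats = fs.statSync('hn.pdf');
console.log('size in bytes:', stats.size); // should not be 0

// A valid PDF starts with the "%PDF-" magic bytes
const head = fs.readFileSync('hn.pdf').slice(0, 5).toString('ascii');
console.log('starts with %PDF-:', head === '%PDF-');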
The issue was the PDF filename I gave: 'con.pdf'.
This seems to be a reserved name in Windows, hence the bad file handle. :D
What a coincidence !!!
Thanks everyone.
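For anyone who wants to guard against this up front, here is a minimal sketch that rejects the reserved Windows device names (CON, PRN, AUX, NUL, COM1-COM9, LPT1-LPT9) before calling page.pdf(); the helper name and the check are purely illustrative:

// Hypothetical helper: returns true if the base name is a reserved Windows device name
function isReservedWindowsName(filename) {
  const base = filename.split(/[\\/]/).pop().split('.')[0];
  return /^(CON|PRN|AUX|NUL|COM[1-9]|LPT[1-9])$/i.test(base);
}

const output = 'con.pdf';
if (isReservedWindowsName(output)) {
  throw new Error(`"${output}" is a reserved name on Windows, pick another filename`);
}
// otherwise safe to call: await page.pdf({path: output, format: 'A4'});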
I'm currently scraping a list of URLs on my site using the request-promise npm module.
This works well for what I need; however, I'm noticing that not all of my divs are appearing because some are rendered after the fact with JS. I know I can't run that JS code remotely to force the render, but is there any way to scrape the pages only after those elements are added in?
I'm doing this currently with Node, and would prefer to keep using Node if possible.
Here is what I have:
const request = require('request-promise');
const { JSDOM } = require('jsdom');

const urls = ['fake.com/link-1', 'fake.com/link-2', 'fake.com/link-3'];

urls.forEach(url => {
  request(url)
    .then(function (html) {
      // get dummy dom
      const d_dom = new JSDOM(html);
      // ....
    });
});
Any thoughts on how to accomplish this? Or is there currently an alternative to Selenium as an npm module?
You will want to use Puppeteer, a Node library for driving headless Chrome (owned and maintained by the Chrome team at Google), to load and parse dynamic web pages.
Use page.goto() to navigate to a specific page, then use page.content() to get the HTML content of the rendered page.
Here is an example of how to use it:
const { JSDOM } = require("jsdom");
const puppeteer = require('puppeteer');

const urls = ['fake.com/link-1', 'fake.com/link-2', 'fake.com/link-3'];

urls.forEach(async url => {
  let dom = new JSDOM(await makeRequest(url));
  console.log(dom.window.document.title);
});

async function makeRequest(url) {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto(url);
  let html = await page.content();
  await browser.close();
  return html;
}
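As a side note on the design: the forEach above launches a fresh browser per URL, which is expensive. If that becomes a problem, one sketch is to launch a single browser and open a new page per URL instead (note that page.goto() needs full URLs with a protocol, so the fake.com placeholders would need to be real addresses):

const { JSDOM } = require('jsdom');
const puppeteer = require('puppeteer');

const urls = ['fake.com/link-1', 'fake.com/link-2', 'fake.com/link-3'];

(async () => {
  const browser = await puppeteer.launch();
  for (const url of urls) {
    // Reuse one browser, open and close a page per URL
    const page = await browser.newPage();
    await page.goto(url);
    const dom = new JSDOM(await page.content());
    console.log(dom.window.document.title);
    await page.close();
  }
  await browser.close();
})();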
I am performing some analysis on website complexity. What is the best way to extract all CSS (external stylesheets, <style> tags, and inline CSS), for all nodes in a web page, using headless Chrome/Puppeteer?
I'm ideally looking for compiled CSS, in similar format to the "Styles" tab in the Chrome dev-tools.
You ask for two different things:
Scraping
For web scraping in Node.js, the cheerio package is a better fit.
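For example, a tiny cheerio sketch over static HTML (the markup and selector are just placeholders):

const cheerio = require('cheerio');

const html = '<html><body><h1>Hello</h1></body></html>';
const $ = cheerio.load(html);
console.log($('h1').text()); // "Hello"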
Sniffing network requests
If you want to capture the CSS files requested by the page, you can do something like this:
const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();

  page.on('response', async response => {
    if (response.request().resourceType() === 'stylesheet') {
      console.log(await response.text());
    }
  });

  await page.goto('https://myurl.com');
  await browser.close();
})();
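Note that the response listener above only captures external stylesheets. Since the question also asks about <style> tags and inline CSS, here is a hedged sketch that collects both from the rendered DOM with page.evaluate; it would go after page.goto() and before browser.close() in the script above:

// Inside the same async IIFE, after page.goto('https://myurl.com'):
const inPageCss = await page.evaluate(() => {
  // Contents of every <style> tag on the page
  const styleTags = [...document.querySelectorAll('style')].map(el => el.textContent);
  // Every style="" attribute, with a note of which element it came from
  const inlineStyles = [...document.querySelectorAll('[style]')].map(el => ({
    tag: el.tagName.toLowerCase(),
    css: el.getAttribute('style'),
  }));
  return { styleTags, inlineStyles };
});
console.log(inPageCss);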