How to write data to a file using Puppeteer?

Puppeteer exposes a page.screenshot() method for saving a screenshot locally on your machine. Here are the docs: https://github.com/GoogleChrome/puppeteer
const browser = await puppeteer.launch();
const page = await browser.newPage();
await page.goto('https://example.com');
await page.screenshot({path: 'example.png'});
Is there a way to save a data file in a similar fashion? I'm seeking something analogous to:
page.writeToFile({data, path});

Since any Puppeteer script is an ordinary Node.js script, you can use anything you would use in Node, say the good old fs module:
const fs = require('fs');
fs.writeFileSync('path/to/file.json', data);
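For example, a minimal end-to-end sketch (the extracted field and the output filename 'example.json' are placeholders, not anything Puppeteer prescribes):

const puppeteer = require('puppeteer');
const fs = require('fs');

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto('https://example.com');
  // Extract whatever data you need from the rendered page
  const data = await page.evaluate(() => document.title);
  // Persist it with plain Node.js file I/O
  fs.writeFileSync('example.json', JSON.stringify({ title: data }, null, 2));
  await browser.close();
})();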

Related

Capture HTML canvas of third-party site as image, from command line

I know one can use tools such as wget or curl to perform HTTP requests from the command line, or use HTTP client requests from various programming languages. These tools also support fetching images or other files that are referenced in the HTML code.
What I'm searching for is a mechanism that also executes the JavaScript of that web page, which renders an image into an HTML canvas. I then want to extract that rendered image as an image file. The goal is to grab a time series of those images (e.g. weather maps or other diagrams that plot time-variant data into a constant DOM object) via a cron job.
I'd prefer a solution that works from a script. How could this be done?
You can use Puppeteer to load the page inside a headless Chrome instance:
Open the page and wait for it to load.
Using page.evaluate, return the data URL of the canvas.
Convert the data URL to a buffer and write the result to a file.
const puppeteer = require('puppeteer');
const fs = require('fs');
(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto('https://games.novatoz.com/jigsaw-puzzle');
  const dataUrl = await page.evaluate(async () => {
    const sleep = (time) => new Promise((resolve) => setTimeout(resolve, time));
    await sleep(5000);
    return document.getElementById('canvas').toDataURL();
  });
  const data = Buffer.from(dataUrl.split(',').pop(), 'base64');
  fs.writeFileSync('image.png', data);
  await browser.close();
})();
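The fixed five-second sleep is fragile. If the point at which the canvas is ready can be detected by a selector, a variation like this (assuming the element's presence means the drawing is done, which may not hold for every page) avoids the arbitrary wait:

// Wait for the canvas element to exist instead of sleeping a fixed time
await page.waitForSelector('#canvas');
const dataUrl = await page.evaluate(() => document.getElementById('canvas').toDataURL());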

Puppeteer creates bad pdf

I am using Puppeteer to create a PDF from my static local HTML file. The PDF is created, but it's corrupted: Adobe Reader can't open the file and says 'Bad file handle'. Any suggestions?
I am using the standard code below:
const puppeteer = require('puppeteer');
(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto('local_html_file', {waitUntil: 'networkidle2'});
  await page.pdf({path: 'hn.pdf', format: 'A4'});
  await browser.close();
})();
I also tried setContent(), but with the same result. The page.screenshot() function works, however.
Your code probably triggers an exception. You should check that the PDF file size is not zero, and try reading the PDF file with the less or cat command: PDF-creating software sometimes writes error messages at the top of the file content.
const puppeteer = require('puppeteer');
(async () => {
  try {
    const browser = await puppeteer.launch();
    const page = await browser.newPage();
    await page.goto('local_html_file', {waitUntil: 'networkidle2'});
    await page.pdf({path: 'hn.pdf', format: 'A4'});
    await browser.close();
  } catch (e) {
    console.log(e);
  }
})();
The issue was the PDF filename I gave: 'con.pdf'. CON is a reserved device name on Windows, hence the bad file handle. :D
What a coincidence!
Thanks everyone.
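If you want to guard against this failure mode up front, a small sketch (the regex covers the classic Windows reserved device names; real-world path handling may need more care):

const path = require('path');

// Classic Windows reserved device names; using one as a file's base name fails
const RESERVED = /^(CON|PRN|AUX|NUL|COM[1-9]|LPT[1-9])$/i;

function isSafePdfName(filePath) {
  const base = path.basename(filePath, path.extname(filePath));
  return !RESERVED.test(base);
}

console.log(isSafePdfName('con.pdf')); // false
console.log(isSafePdfName('hn.pdf'));  // true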

Way to scrape a JS-Rendered page?

I'm currently scraping a list of URLs on my site using the request-promise npm module.
This works well for what I need; however, I'm noticing that not all of my divs are appearing because some are rendered afterwards with JS. I know I can't run that JS code remotely to force the render, but is there any way to scrape the pages only after those elements are added in?
I'm doing this currently with Node, and would prefer to keep using Node if possible.
Here is what I have:
const request = require('request-promise');
const { JSDOM } = require('jsdom');

const urls = ['fake.com/link-1', 'fake.com/link-2', 'fake.com/link-3'];
urls.forEach(url => {
  request(url)
    .then(function (html) {
      // get dummy dom
      const d_dom = new JSDOM(html);
      // ...
    });
});
Any thoughts on how to accomplish this? Or if there is currently an alternative to Selenium as an npm module?
You will want to use Puppeteer, a Node library for driving headless Chrome (built and maintained by the Chrome team), to load and parse dynamic web pages.
Use page.goto() to navigate to a specific page, then use page.content() to get the HTML content of the rendered page.
Here is an example of how to use it:
const { JSDOM } = require("jsdom");
const puppeteer = require('puppeteer')
const urls = ['fake.com/link-1', 'fake.com/link-2', 'fake.com/link-3']
urls.forEach(async url => {
  let dom = new JSDOM(await makeRequest(url));
  console.log(dom.window.document.title);
});

async function makeRequest(url) {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto(url);
  let html = await page.content();
  await browser.close();
  return html;
}
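Note that this launches one browser per URL, all concurrently. A sketch that reuses a single browser instance and visits the URLs one at a time (same placeholder URLs) is usually lighter on resources:

const { JSDOM } = require('jsdom');
const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  for (const url of ['fake.com/link-1', 'fake.com/link-2', 'fake.com/link-3']) {
    await page.goto(url);
    // page.content() returns the serialized HTML of the rendered page
    const dom = new JSDOM(await page.content());
    console.log(dom.window.document.title);
  }
  await browser.close();
})();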

Extract all CSS with Puppeteer?

I am performing some analysis on website complexity. What is the best way to extract all CSS (external stylesheets, <style> tags, and inline CSS), for all nodes in a web page, using headless Chrome/Puppeteer?
I'm ideally looking for compiled CSS, in similar format to the "Styles" tab in the Chrome dev-tools.
You are asking for two different things:
Scraping
For web scraping in Node.js, the cheerio package is usually the better fit.
Sniffing network requests
If you want to capture the CSS files the page requests, you can do something like this:
const puppeteer = require('puppeteer');
(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  page.on('response', async response => {
    if (response.request().resourceType() === 'stylesheet') {
      console.log(await response.text());
    }
  });
  await page.goto('https://myurl.com');
  await browser.close();
})();
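That captures only external stylesheets. Since the question also covers <style> tags and inline CSS, those can be collected from inside the page with standard DOM APIs; a sketch to run after page.goto():

const inPageCss = await page.evaluate(() => {
  // Contents of all <style> tags
  const styleTags = [...document.querySelectorAll('style')].map(el => el.textContent);
  // style="" attributes on all elements that have one
  const inline = [...document.querySelectorAll('[style]')].map(el => el.getAttribute('style'));
  return { styleTags, inline };
});
console.log(inPageCss);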

Fetch rendered font using Chrome headless browser

I have been looking through the Chrome headless browser documentation but have been unable to find this information so far.
Is it possible to capture the rendered font on a website? This information is available through the Chrome dev console.
Puppeteer doesn't expose this API directly, but it's possible to use the raw devtools protocol to get the "Rendered Fonts" information:
const puppeteer = require('puppeteer');
(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto('https://www.stackoverflow.com/');
  await page._client.send('DOM.enable');
  await page._client.send('CSS.enable');
  const doc = await page._client.send('DOM.getDocument');
  const node = await page._client.send('DOM.querySelector', {nodeId: doc.root.nodeId, selector: 'h1'});
  const fonts = await page._client.send('CSS.getPlatformFontsForNode', {nodeId: node.nodeId});
  console.log(fonts);
  await browser.close();
})();
The devtools protocol documentation for CSS.getPlatformFontsForNode can be found here: https://chromedevtools.github.io/devtools-protocol/tot/CSS#method-getPlatformFontsForNode
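Note that page._client is a private accessor and may change between Puppeteer releases. Recent versions expose the same protocol through a public CDP session; a sketch of the equivalent (assuming a Puppeteer version that provides page.target().createCDPSession()):

const client = await page.target().createCDPSession();
await client.send('DOM.enable');
await client.send('CSS.enable');
const doc = await client.send('DOM.getDocument');
const node = await client.send('DOM.querySelector', {nodeId: doc.root.nodeId, selector: 'h1'});
const fonts = await client.send('CSS.getPlatformFontsForNode', {nodeId: node.nodeId});
console.log(fonts);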
