Puppeteer: save an image opened in the browser - javascript

I have a link to a (gif) image, obtained manually via 'open in new tab'. I want Puppeteer to open the image and then save it to a file. If I were doing this in a normal browser I would right-click and choose 'Save' from the context menu. Is there a simple way to perform this action in Puppeteer?

The code below saves the Wikipedia logo image to a file named logo.png:
import * as fs from 'fs'
import puppeteer from 'puppeteer'

;(async () => {
  const wikipedia = 'https://www.wikipedia.org/'
  const browser = await puppeteer.launch()
  const page = (await browser.pages())[0]
  await page.goto(wikipedia)
  // Find the logo and read its (relative) src attribute
  const image = await page.waitForSelector('img[src][alt="Wikipedia"]')
  const imgURL = await image.evaluate(img => img.getAttribute('src'))
  // Open the image URL in a second tab and save the response body to disk
  const pageNew = await browser.newPage()
  const response = await pageNew.goto(wikipedia + imgURL, {timeout: 0, waitUntil: 'networkidle0'})
  const imageBuffer = await response.buffer()
  await fs.promises.writeFile('./logo.png', imageBuffer)
  await page.close()
  await pageNew.close()
  await browser.close()
})()

In Puppeteer it's possible to right-click, but it's not possible to automate navigating the "save as" context menu. However, there is a solution outlined in the top answer here:
How can I download images on a page using puppeteer?
You can write the images to disk directly from the page response.
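For example, a minimal sketch of that approach (the target URL and the .gif filter here are placeholder assumptions) listens for responses while the page loads and writes any matching image body to disk:
const puppeteer = require('puppeteer');
const fs = require('fs');

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();

  // Save every .gif response the page produces while loading
  page.on('response', async (response) => {
    if (response.url().endsWith('.gif')) {
      const filename = response.url().split('/').pop();
      await fs.promises.writeFile(filename, await response.buffer());
    }
  });

  await page.goto('https://example.com/page-with-gif', { waitUntil: 'networkidle0' });
  await browser.close();
})();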

Get current page url with Playwright Automation tool?

How can I retrieve the current URL of the page in Playwright?
Something similar to browser.getCurrentUrl() in Protractor?
const { browser } = this.helpers.Playwright;
await browser.pages(); // list pages in the browser

// get current page
const { page } = this.helpers.Playwright;
const url = await page.url(); // get the url of the current page
To get the URL of the current page as a string (no await needed):
page.url()
Where "page" is an object of the Page class. You should already have a Page object, and there are various ways to instantiate it, depending on how your framework is set up: https://playwright.dev/docs/api/class-page
In TypeScript the Page type can be imported with
import { Page } from '@playwright/test';
or a Page can be created directly:
const { webkit } = require('playwright');

(async () => {
  const browser = await webkit.launch();
  const context = await browser.newContext();
  const page = await context.newPage();
  // page.url() can now be called on this Page object
  await browser.close();
})();

Puppeteer: Remove links from page

I am converting a webpage into a .pdf-file with the help of Node.js and Puppeteer.
This works fine, but I want to remove all links on the page before converting it to a .pdf file, because otherwise the .pdf file includes links that can't be opened in my app when someone clicks on them. Is there a way to do this?
The page is an .aspx page which uses javascript. The links all start with "javascript:__". It is an intranet page which shows our meals and I just want to display the mealplan as a .pdf.
What I have in my .js-file looks like this:
const puppeteer = require('puppeteer');

let url = 'http://my-url.de/meals.aspx'
let browser = await puppeteer.launch()
let page = await browser.newPage()
await page.goto(url, { waitUntil: 'networkidle2' })
await page.pdf({
  format: "A4",
  path: files[0],
  displayHeaderFooter: false,
  printBackground: true
})
In my app it says "URL can't be opened", and that's why I want these links removed.
It seems that these are not proper links; at least they are not <a> tags with an href pointing to a website.
Instead, you are dealing with links that require JavaScript to navigate, which is why they don't work in the PDF.
What you could do is transform all these invalid hrefs into something valid for a PDF before capturing the page.
Check my attempt below. It's possible you'll need to modify it a bit to suit your case, since I don't have access to the actual website you're trying to parse.
const puppeteer = require('puppeteer');

let url = 'http://my-url.de/meals.aspx';

(async () => {
  let browser = await puppeteer.launch()
  let page = await browser.newPage()
  await page.goto(url, {
    waitUntil: 'networkidle2'
  })
  // Modifying the page here:
  await page.evaluate(_ => {
    // Capture all links whose href starts with "javascript"
    // and change it to # instead.
    document.querySelectorAll('a[href^="javascript"]')
      .forEach(a => {
        a.href = '#'
      })
  });
  await page.pdf({
    format: "A4",
    path: files[0],
    displayHeaderFooter: false,
    printBackground: true
  })
  await browser.close()
})()
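If links set to '#' still show up as clickable in the PDF, a small variant of the same evaluate step (a sketch, not verified against your page) removes the href attribute entirely so the anchors render as plain text:
// Alternative: strip the href so the anchors are no longer links at all
await page.evaluate(() => {
  document.querySelectorAll('a[href^="javascript"]')
    .forEach(a => a.removeAttribute('href'))
});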

Capture HTML canvas of third-party site as image, from command line

I know one can use tools such as wget or curl to perform HTTP requests from the command line, or use HTTP client requests from various programming languages. These tools also support fetching images or other files that are referenced in the HTML code.
What I'm searching for is a mechanism that also executes the JavaScript of that web page, which renders an image into an HTML canvas. I then want to extract that rendered image as an image file. The goal is to grab a time series of those images, e.g. weather maps or other diagrams that plot time-variant data into a constant DOM object, via a cron job.
I'd prefer a solution that works from a script. How could this be done?
You can use Puppeteer to load the page inside a headless Chrome instance:
Open the page and wait for it to load.
Using page.evaluate, return the dataUrl of the canvas.
Convert the dataUrl to a buffer and write the result to a file.
const puppeteer = require('puppeteer');
const fs = require('fs');

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto('https://games.novatoz.com/jigsaw-puzzle');
  const dataUrl = await page.evaluate(async () => {
    // Give the canvas time to render before reading it back
    const sleep = (time) => new Promise((resolve) => setTimeout(resolve, time));
    await sleep(5000);
    return document.getElementById('canvas').toDataURL();
  });
  // Strip the "data:image/png;base64," prefix and decode the rest
  const data = Buffer.from(dataUrl.split(',').pop(), 'base64');
  fs.writeFileSync('image.png', data);
  await browser.close();
})();
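If the fixed five-second sleep proves flaky for scheduled captures, one possible refinement (a sketch that assumes the canvas keeps the id 'canvas') is to wait until the canvas exists and has a size before reading it:
// Wait for the canvas to appear and have non-zero dimensions, then read it
await page.waitForFunction(() => {
  const canvas = document.getElementById('canvas');
  return canvas && canvas.width > 0 && canvas.height > 0;
});
const dataUrl = await page.evaluate(() => document.getElementById('canvas').toDataURL());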

How to write data to a file using Puppeteer?

Puppeteer exposes a page.screenshot() method for saving a screenshot locally on your machine. Here are the docs.
See: https://github.com/GoogleChrome/puppeteer
const browser = await puppeteer.launch();
const page = await browser.newPage();
await page.goto('https://example.com');
await page.screenshot({path: 'example.png'});
Is there a way to save a data file in a similar fashion? I'm seeking something analogous to...
page.writeToFile({data, path,});
Since any Puppeteer script is an ordinary Node.js script, you can use anything you would use in Node, say the good old fs module:
const fs = require('fs');
fs.writeFileSync('path/to/file.json', data);
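For example, a minimal sketch (the URL and the extracted value are placeholders) that pulls some data out of the page with page.evaluate and writes it to disk as JSON:
const puppeteer = require('puppeteer');
const fs = require('fs');

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto('https://example.com');
  // Extract whatever data you need from the page
  const title = await page.evaluate(() => document.title);
  // Write it out just like any other Node.js data
  fs.writeFileSync('data.json', JSON.stringify({ title }, null, 2));
  await browser.close();
})();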

Puppeteer creates bad pdf

I am using Puppeteer to create a PDF from my static local HTML file. The PDF is created, but it's corrupted: Adobe Reader can't open the file and says 'Bad file handle'. Any suggestions?
I am using the standard code below:
const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto('local_html_file', {waitUntil: 'networkidle2'});
  await page.pdf({path: 'hn.pdf', format: 'A4'});
  await browser.close();
})();
I also tried setContent(), but got the same result. The page.screenshot() function works, however.
Probably your code triggers an exception. You should check that the PDF file's size is not zero, and you can read the PDF with the less or cat command; sometimes PDF-creating software writes error messages at the top of the PDF content. Wrapping the code in a try/catch will surface any exception:
const puppeteer = require('puppeteer');

(async () => {
  try {
    const browser = await puppeteer.launch();
    const page = await browser.newPage();
    await page.goto('local_html_file', {waitUntil: 'networkidle2'});
    await page.pdf({path: 'hn.pdf', format: 'A4'});
    await browser.close();
  } catch (e) {
    console.log(e);
  }
})();
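To confirm the generated file is not empty from the same script, a quick check with Node's fs module (the filename matches the snippet above) could look like this:
const fs = require('fs');

// A zero-byte file means page.pdf() failed before writing anything
const { size } = fs.statSync('hn.pdf');
console.log(size > 0 ? `hn.pdf is ${size} bytes` : 'hn.pdf is empty');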
The issue was the PDF filename I gave: 'con.pdf'.
This is a reserved device name in Windows, hence the 'bad file handle' error. :D
What a coincidence!
Thanks everyone.
