I got this code from another Stack Overflow question:
import electron from "electron";
import puppeteer from "puppeteer-core";

const delay = (ms: number) =>
  new Promise(resolve => {
    setTimeout(() => {
      resolve();
    }, ms);
  });

(async () => {
  try {
    const app = await puppeteer.launch({
      executablePath: electron,
      args: ["."],
      headless: false,
    });
    const pages = await app.pages();
    const [page] = pages;

    await page.setViewport({ width: 1200, height: 700 });
    await delay(5000);
    const image = await page.screenshot();
    console.log(image);

    await page.close();
    await delay(2000);
    await app.close();
  } catch (error) {
    console.error(error);
  }
})();
The TypeScript compiler complains about the executablePath property of the launch method's options object because it needs to be of type string, not Electron. So how do I pass the Electron Chromium executable path to Puppeteer?
You cannot use the Electron executable with Puppeteer directly without some workarounds and flag changes. The two have lots of differences in their APIs. In particular, Electron doesn't have all of the chrome.* APIs that a Chromium browser needs to work properly, and many flags still don't have proper replacements, such as the headless flag.
Below you will see two ways to do it. However, you need to make sure of two points:
Make sure Puppeteer is connected before the app is initiated.
Make sure you get the correct version of puppeteer or puppeteer-core for the version of Chrome that is running in Electron (a quick way to check that version is sketched right below)!
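If you just need to find out which Chromium version your Electron build ships, a minimal sketch is to log it from the Electron main process (process.versions.electron and process.versions.chrome are available there) and then install a puppeteer-core release that targets that Chromium version:
// Electron main process: print the bundled Chromium version so you can
// pick a matching puppeteer-core release.
console.log(
  `Electron ${process.versions.electron} ships Chromium ${process.versions.chrome}`
);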
Use puppeteer-in-electron
There are lots of workarounds, but most recently there is the puppeteer-in-electron package, which allows you to run Puppeteer within an Electron app.
First, install the dependencies:
npm install puppeteer-in-electron puppeteer-core electron
Then run it:
import {BrowserWindow, app} from "electron";
import pie from "puppeteer-in-electron";
import puppeteer from "puppeteer-core";

const main = async () => {
  const browser = await pie.connect(app, puppeteer);

  const window = new BrowserWindow();
  const url = "https://example.com/";
  await window.loadURL(url);

  const page = await pie.getPage(browser, window);
  console.log(page.url());
  window.destroy();
};

main();
Get the debugging port and connect to it
Another way is to get the remote-debugging-port of the Electron app and connect to it. This solution was shared by trusktr on the Electron forum.
import {app, BrowserWindow, ...} from "electron"
import fetch from 'node-fetch'
import * as puppeteer from 'puppeteer'

app.commandLine.appendSwitch('remote-debugging-port', '8315')

async function test() {
  // Ask the DevTools HTTP endpoint for the browser-level WebSocket URL
  const response = await fetch(`http://localhost:8315/json/version?t=${Math.random()}`)
  const debugEndpoints = await response.json()

  let webSocketDebuggerUrl = debugEndpoints['webSocketDebuggerUrl']

  const browser = await puppeteer.connect({
    browserWSEndpoint: webSocketDebuggerUrl
  })

  // use puppeteer APIs now!
}

// ... make your window, etc, the usual, and then: ...

// wait for the window to open/load, then connect Puppeteer to it:
mainWindow.webContents.on("did-finish-load", () => {
  test()
})
Both solutions above use the webSocketDebuggerUrl to resolve the issue.
Extra
Adding this note because most people use Electron to bundle the app.
If you want to build an app that uses puppeteer-core and puppeteer-in-electron, you need to use hazardous and electron-builder to make sure get-port-cli works.
Add hazardous at the top of main.js:
// main.js
require ('hazardous');
Make sure the get-port-cli script is unpacked by adding the following to package.json:
"build": {
"asarUnpack": "node_modules/get-port-cli"
}
The top answer doesn't work for me with Electron 11 and puppeteer-core 8, but starting Puppeteer in the main process instead of the renderer process does. You can use ipcMain and ipcRenderer to communicate between the two processes. The code is below.
main.ts (main process code)
import { app, BrowserWindow, ipcMain } from 'electron';
import puppeteer from 'puppeteer-core';

async function newGrabBrowser({ url }) {
  const browser = await puppeteer.launch({
    headless: false,
    executablePath:
      '/Applications/Google Chrome.app/Contents/MacOS/Google Chrome',
  });
  const page = await browser.newPage();
  await page.goto(url);
}

ipcMain.on('grab', (event, props) => {
  newGrabBrowser(JSON.parse(props));
});
home.ts (renderer process code)
const { ipcRenderer } = require('electron');
ipcRenderer.send('grab', JSON.stringify({ url: 'https://www.google.com' }));
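Note that home.ts calls require('electron') straight from the renderer, which only works when node integration is enabled for that window. A minimal sketch of the window creation, assuming no preload script is used (on newer Electron versions contextIsolation also has to be disabled for this pattern):
// main.ts: create the window that loads home.ts with node integration enabled,
// otherwise require('electron') is not available in the renderer.
const win = new BrowserWindow({
  webPreferences: {
    nodeIntegration: true,
    contextIsolation: false,
  },
});
win.loadFile('index.html'); // hypothetical page that includes home.ts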
There is also another option, which works for electron 5.x.y and up (currently up to 7.x.y, I did not test it on 8.x.y beta yet):
// const assert = require("assert");
const electron = require("electron");
const kill = require("tree-kill");
const puppeteer = require("puppeteer-core");
const { spawn } = require("child_process");

let pid;

const run = async () => {
  const port = 9200; // Debugging port
  const startTime = Date.now();
  const timeout = 20000; // Timeout in milliseconds
  let app;

  // Start Electron with custom debugging port
  pid = spawn(electron, [".", `--remote-debugging-port=${port}`], {
    shell: true
  }).pid;

  // Wait for Puppeteer to connect
  while (!app) {
    try {
      app = await puppeteer.connect({
        browserURL: `http://localhost:${port}`,
        defaultViewport: { width: 1000, height: 600 } // Optional I think
      });
    } catch (error) {
      if (Date.now() > startTime + timeout) {
        throw error;
      }
    }
  }

  // Do something, e.g.:
  // const [page] = await app.pages();
  // await page.waitForSelector("#someid")
  // const text = await page.$eval("#someid", element => element.innerText);
  // assert(text === "Your expected text");
  // await page.close();
};

run()
  .then(() => {
    // Do something
  })
  .catch(error => {
    // Do something
    kill(pid, () => {
      process.exit(1);
    });
  });
Getting the pid and using kill is optional. For running the script on some CI platform it does not matter, but in a local environment you would otherwise have to close the Electron app manually after each failed try.
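If you prefer to always clean up locally as well, a small variant (reusing the pid, kill and run from the snippet above) kills the spawned Electron process on success too:
// Kill the spawned Electron process whether the run passed or failed,
// so failed local runs never leave a stale window behind.
run()
  .then(() => {
    kill(pid, () => process.exit(0));
  })
  .catch(error => {
    console.error(error);
    kill(pid, () => process.exit(1));
  });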
Please see this sample repo.
Related
I'm making a Discord bot and I am following a guide. The following code is copied from the guide and is for registering slash commands:
const { REST } = require('@discordjs/rest');
const { Routes } = require('discord-api-types/v10');
const { clientId, token } = require('./config.json');
const fs = require('node:fs');

const commands = [];
// Grab all the command files from the commands directory you created earlier
const commandFiles = fs.readdirSync('./commands').filter(file => file.endsWith('.js'));

// Grab the SlashCommandBuilder#toJSON() output of each command's data for deployment
for (const file of commandFiles) {
  const command = require(`./commands/${file}`);
  commands.push(command.data.toJSON());
}

// Construct and prepare an instance of the REST module
const rest = new REST({ version: '10' }).setToken(token);

// and deploy your commands!
(async () => {
  try {
    console.log(`Started refreshing ${commands.length} application (/) commands.`);

    // The put method is used to fully refresh all commands in the guild with the current set
    await rest.put(
      Routes.applicationCommands(clientId),
      { body: commands },
    );

    console.log(`Successfully reloaded ${data.length} application (/) commands.`);
  } catch (error) {
    // And of course, make sure you catch and log any errors!
    console.error(error);
  }
})();
The commands folder only has 1 file right now and it is ping.js: (also copied from the guide)
const { SlashCommandBuilder } = require('discord.js');

module.exports = {
  data: new SlashCommandBuilder()
    .setName('ping')
    .setDescription('Replies with Pong!'),
  async execute(interaction) {
    await interaction.reply('Pong!');
  },
};
This code worked before; I tried it and it worked fine even with two commands. But when I tried adding a third one (by copying ping and just changing the values) it started saying "ReferenceError: data is not defined" whenever I tried to run it. So I deleted that file and tried running it with the two that already worked, but now it gave this error with those two as well. Then I tried only with the ping file from the guide itself, and even after copying it from the guide again I couldn't get it to work.
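For reference, in the guide the deployment script assigns the return value of rest.put to a data variable before logging data.length; if that assignment is lost while copying, the log line throws exactly this ReferenceError. A sketch of that part as the guide structures it:
// Deploy the commands and capture the response so data.length is defined
const data = await rest.put(
  Routes.applicationCommands(clientId),
  { body: commands },
);
console.log(`Successfully reloaded ${data.length} application (/) commands.`);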
I am trying to build a scraper to monitor web projects automatically.
So far so good, the script is running, but now I want to add a feature that automatically analyses which libraries I used in the projects. The most powerful tool for this job is Wappalyzer. They have a node package (https://www.npmjs.com/package/wappalyzer) and it is documented that you can use it combined with Puppeteer.
I managed to run Puppeteer and to log the source code of the sites in the console, but I can't figure out the right way to pass the source code to the Wappalyzer analyze function.
Do you guys have a hint for me?
I tried this code but I am getting a TypeError: url.split is not a function:
function getLibarys(url) {
  (async () => {
    const browser = await puppeteer.launch({ headless: true });
    const page = await browser.newPage();
    await page.goto(url);
    // get source code with puppeteer
    const html = await page.content();

    const wappalyzer = new Wappalyzer();
    (async function () {
      try {
        await wappalyzer.init()
        // Optionally set additional request headers
        const headers = {}
        const site = await wappalyzer.open(page, headers)
        // Optionally capture and output errors
        site.on('error', console.error)
        const results = await site.analyze()
        console.log(JSON.stringify(results, null, 2))
      } catch (error) {
        console.error(error)
      }
      await wappalyzer.destroy()
    })()
    await browser.close()
  })()
}
Fixed it by using the sample code from wappalyzer.
function getLibarys(url) {
  const Wappalyzer = require('wappalyzer');

  const options = {
    debug: false,
    delay: 500,
    headers: {},
    maxDepth: 3,
    maxUrls: 10,
    maxWait: 5000,
    recursive: true,
    probe: true,
    proxy: false,
    userAgent: 'Wappalyzer',
    htmlMaxCols: 2000,
    htmlMaxRows: 2000,
    noScripts: false,
    noRedirect: false,
  };

  const wappalyzer = new Wappalyzer(options)

  ;(async function () {
    try {
      await wappalyzer.init()
      // Optionally set additional request headers
      const headers = {}
      const site = await wappalyzer.open(url, headers)
      // Optionally capture and output errors
      site.on('error', console.error)
      const results = await site.analyze()
      console.log(JSON.stringify(results, null, 2))
    } catch (error) {
      console.error(error)
    }
    await wappalyzer.destroy()
  })()
}
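Usage is then just a call with the target URL, for example (hypothetical address):
getLibarys('https://example.com');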
I do not know if you still need an answer to this, but this is what a Wappalyzer collaborator told me:
Normally you'd run Wappalyzer like this:
const Wappalyzer = require('wappalyzer')
const wappalyzer = new Wappalyzer()
await wappalyzer.init() // Launches a Puppeteer instance
const site = await wappalyzer.open(url)
If you want to use your own browser instance, you can skip wappalyzer.init() and assign the instance to wappalyzer.browser:
const Wappalyzer = require('wappalyzer')
const wappalyzer = new Wappalyzer()
wappalyzer.browser = await puppeteer.launch() // Use your own Puppeteer launch logic
const site = await wappalyzer.open(url)
You can find the discussion here.
Hope this helps.
I'm trying to mock a function using Frisby and Jest.
Here are some details about my code:
dependencies
axios: "^0.26.0",
dotenv: "^16.0.0",
express: "^4.17.2"
devDependencies
frisby: "^2.1.3",
jest: "^27.5.1"
When I mock using Jest, the real response from the API is returned, but that is not what I want. I want to return a fake result like this: { a: 'b' }.
How to solve it?
I have the following code:
// (API Fetch file) backend/api/fetchBtcCurrency.js
const axios = require('axios');
const URL = 'https://api.coindesk.com/v1/bpi/currentprice/BTC.json';
const getCurrency = async () => {
  const response = await axios.get(URL);
  return response.data;
};

module.exports = {
  getCurrency,
};
// (Model using fetch file) backend/model/cryptoModel.js
const fetchBtcCurrency = require('../api/fetchBtcCurrency');
const getBtcCurrency = async () => {
  const responseFromApi = await fetchBtcCurrency.getCurrency();
  return responseFromApi;
};

module.exports = {
  getBtcCurrency,
};
// (My test file) /backend/__tests__/cryptoBtc.test.js
require("dotenv").config();
const frisby = require("frisby");
const URL = "http://localhost:4000/";
describe("Testing GET /api/crypto/btc", () => {
beforeEach(() => {
jest.mock('../api/fetchBtcCurrency');
});
it('Verify if returns correct response with status code 200', async () => {
const fetchBtcCurrency = require('../api/fetchBtcCurrency').getCurrency;
fetchBtcCurrency.mockImplementation(() => (JSON.stringify({ a: 'b'})));
const defaultExport = await fetchBtcCurrency();
expect(defaultExport).toBe(JSON.stringify({ a: 'b'})); // This assert works
await frisby
.get(`${URL}api/crypto/btc`)
.expect('status', 200)
.expect('json', { a: 'b'}); // Integration test with Frisby does not work correctly.
});
});
Response[
{
I hid the lines to save screen space.
}
->>>>>>> does not contain provided JSON [ {"a":"b"} ]
];
This is a classic lost reference problem.
Since you're using Frisby, by looking at your test, it seems you're starting the server in parallel, correct? You first start your server with, say npm start, then you run your test with npm test.
The problem with that is: by the time your test starts, your server is already running. Since you started your server with the real fetchBtcCurrency.getCurrency, jest can't do anything from this point on. Your server will continue to point towards the real module, not the mocked one.
Check this illustration: https://gist.githubusercontent.com/heyset/a554f9fe4f34101430e1ec0d53f52fa3/raw/9556a9dbd767def0ac9dc2b54662b455cc4bd01d/illustration.svg
The reason the assertion on the import inside the test works is because that import is made after the mock replaces the real file.
You didn't share your app or server file, but if you are creating the server and listening in the same module, and those are "hanging on global" (i.e., called from the body of the script rather than from a function), you'll have to split them. You'll need a file that creates the server (attaching any routes/middleware/etc. to it), and a separate file just to import that first one and start listening.
For example:
app.js
const express = require('express');
const { getCurrency } = require('./fetchBtcCurrency');

const app = express();

app.get('/api/crypto/btc', async (req, res) => {
  const currency = await getCurrency();
  res.status(200).json(currency);
});

module.exports = { app };
server.js
const { app } = require('./app');
app.listen(4000, () => {
  console.log('server is up on port 4000');
});
Then, on your start script, you run the server file. But, on your test, you import the app file. You don't start the server in parallel. You'll start and stop it as part of the test setup/teardown.
This will give Jest the chance to replace the real module with the mocked one before the server starts listening (at which point Jest loses control over it).
With that, your test could be:
cryptoBtc.test.js
require("dotenv").config();
const frisby = require("frisby");
const URL = "http://localhost:4000/";
const fetchBtcCurrency = require('./fetchBtcCurrency');
const { app } = require('./app');
jest.mock('./fetchBtcCurrency')
describe("Testing GET /api/crypto/btc", () => {
let server;
beforeAll((done) => {
server = app.listen(4000, () => {
done();
});
});
afterAll(() => {
server.close();
});
it('Verify if returns correct response with status code 200', async () => {
fetchBtcCurrency.getCurrency.mockImplementation(() => ({ a: 'b' }));
await frisby
.get(`${URL}api/crypto/btc`)
.expect('status', 200)
.expect('json', { a: 'b'});
});
});
Note that the order of imports doesn't matter. You can put the jest.mock call below the real import; Jest is smart enough to know that mocks should come first.
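A two-line illustration of that behaviour (same files as in the test above):
// Jest hoists jest.mock(...) calls to the top of the file when it compiles it,
// so this require already receives the mocked module.
const fetchBtcCurrency = require('./fetchBtcCurrency'); // mocked version
jest.mock('./fetchBtcCurrency');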
I am using Jest + Puppeteer to run functional tests on a hosted web app.
Here is the test code:
const puppeteer = require('puppeteer');

const url = 'https://somewebsite.com';

const login = async (page, login, password) => {
  await page.goto(url)
  await page.waitForSelector('#mat-input-0')
  await page.type('#mat-input-0', login)
  await page.type('#mat-input-1', password)
  await page.click('button')
}

beforeEach(async () => {
  browser = await puppeteer.launch({ headless: false });
  page = await browser.newPage();
});

afterEach(async () => {
  await browser.close();
});

describe('login to website test', () => {
  test('non existent user try', async () => {
    jest.setTimeout(300000);
    await login(page, 'user#email.com', 'upsiforgoTTThepassword')
    await page.waitFor(1000)

    var element = await page.$eval('.mat-simple-snackbar', (element) => {
      return element.textContent.trim()
    })

    expect(element).toBe('User not Found')
  })
})
The problem I have is that if I use the Puppeteer call await browser.close(); to exit the browser after the test ends, the test automatically fails and I get this error in the terminal:
● Test suite failed to run
Protocol error: Connection closed. Most likely the page has been closed.
If I don't close the browser after the test ends, it passes as it should.
I found out that if I comment out the preset in my jest.config.js, the error stops occurring:
// preset: "jest-puppeteer",
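For context, the jest-puppeteer preset manages the browser lifecycle itself and exposes browser and page as globals, so launching and closing a second browser in beforeEach/afterEach can collide with the preset's own teardown. A sketch of the test relying on the preset's globals instead (assuming the preset stays enabled and the login helper from above):
// With the jest-puppeteer preset enabled, `browser` and `page` are globals
// managed by the preset; no manual puppeteer.launch() / browser.close() is needed.
describe('login to website test', () => {
  test('non existent user try', async () => {
    jest.setTimeout(300000);
    await login(page, 'user#email.com', 'upsiforgoTTThepassword');
    await page.waitFor(1000);

    const element = await page.$eval('.mat-simple-snackbar', (el) =>
      el.textContent.trim()
    );
    expect(element).toBe('User not Found');
  });
});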
I am trying to do BDD with cucumber-js and drive the browser testing with Headless Chrome and Puppeteer.
Using the documentation from the cucumber Node example and headless Chrome, I get the following errors. The entire code base is available here: github repo.
Errors:
TypeError: this.browser.newPage is not a function
TypeError: this.browser.close is not a function
// features/support/world.js
const puppeteer = require('puppeteer');
var {defineSupportCode} = require('cucumber');

function CustomWorld() {
  this.browser = puppeteer.launch();
}

defineSupportCode(function({setWorldConstructor}) {
  setWorldConstructor(CustomWorld)
})
// features/step_definitions/hooks.js
const puppeteer = require('puppeteer');
var {defineSupportCode} = require('cucumber');

defineSupportCode(function({After}) {
  After(function() {
    return this.browser.close();
  });
});
// features/step_definitions/browser_steps.js
const puppeteer = require('puppeteer');
var { defineSupportCode } = require('cucumber');

defineSupportCode(function ({ Given, When, Then }) {
  Given('I am on the Cucumber.js GitHub repository', function (callback) {
    const page = this.browser.newPage();
    return page.goto('https://github.com/cucumber/cucumber-js/tree/master');
  });

  When('I click on {string}', function (string, callback) {
    // Write code here that turns the phrase above into concrete actions
    callback(null, 'pending');
  });

  Then('I should see {string}', function (string, callback) {
    // Write code here that turns the phrase above into concrete actions
    callback(null, 'pending');
  });
});
Puppeteer is completely async, so you have to await its initialization before using this.browser. But setWorldConstructor is a sync function, so you can't await there. In my example I used a Before hook.
My example:
https://gist.github.com/dmitrika/7dee618842c00fbc35418b901735656b
We created puppeteer-cucumber-js to simplify working with Puppeteer and Cucumber:
Run npm install puppeteer-cucumber-js
Create a features folder in the root of your project
Add a feature-name.feature file with your Given, When, Then statements
Create a features/step-definitions folder
Add JavaScript steps to execute for each of your feature's steps
Run the tests: node ./node_modules/puppeteer-cucumber-js/index.js --headless
Source code with a working example on GitHub
Cucumber has since been updated. This is how I have implemented my async Puppeteer setup with Cucumber. Gist here
const { BeforeAll, Before, AfterAll, After } = require('cucumber');
const puppeteer = require('puppeteer');

Before(async function() {
  const browser = await puppeteer.launch({ headless: false, slowMo: 50 });
  const page = await browser.newPage();
  this.browser = browser;
  this.page = page;
})

After(async function() {
  // Teardown browser
  if (this.browser) {
    await this.browser.close();
  }

  // Cleanup DB
})