I've got a problem with proxy authentication in a Chrome extension (due to problems with Manifest V3, my code is based on Manifest V2). The connection works with the following curl command:
curl -v -x https://11.111.11.111:222 https://www.google.com/ --proxy-header "proxy-authorization: Basic abc" --proxy-insecure
I tried to implement the same thing in the extension. Here's what I have in my background.js file:
chrome.runtime.onStartup.addListener(() => {
const config = {
mode: 'fixed_servers',
rules: {
singleProxy: {
host: '11.111.11.111',
port: 222,
scheme: 'https',
},
},
}
chrome.proxy.settings.set({ value: config, scope: 'regular' })
})
chrome.webRequest.onBeforeSendHeaders.addListener(
(details) => {
const requestHeaders = details.requestHeaders || []
requestHeaders.push({
name: 'proxy-authorization',
value: 'Basic abc',
})
return { requestHeaders }
},
{ urls: ['<all_urls>'] },
['blocking', 'requestHeaders', 'extraHeaders']
)
chrome.webRequest.onSendHeaders.addListener(
(details) => {
console.log('sending', details)
return details
},
{ urls: ['<all_urls>'] },
['requestHeaders', 'extraHeaders']
)
I have the following permissions in manifest.json:
"permissions": [
"cookies",
"storage",
"tabs",
"activeTab",
"proxy",
"webRequest",
"webRequestBlocking",
"<all_urls>"
]
And these are the headers that are printed in the onSendHeaders listener:
You can see that the proxy-authorization header is present, but I get ERR_PROXY_CERTIFICATE_INVALID when trying to browse any page. Should proxy headers be set in a different way? I ask because the curl command uses the --proxy-header flag, not --header.
PS: I also tried the method that is mentioned in many answers:
chrome.webRequest.onAuthRequired.addListener(function(details, callbackFn) {
callbackFn({
authCredentials: { username: username, password: password }
});
},{urls: ["<all_urls>"]},['asyncBlocking']);
too, but this listener is never called (I put a console.log inside).
Have you solved this problem?
If not, try this:
chrome.webRequest.onAuthRequired.addListener(function(details) {
  return {
    authCredentials: { username: username, password: password }
  };
}, {urls: ["<all_urls>"]}, ['blocking']);
Main change: 'asyncBlocking' => 'blocking'. Note that with 'blocking' the listener must return the credentials synchronously; the callback argument is only supplied in 'asyncBlocking' mode.
TL;DR: Can I access Module Federation remotes from within a content script of a Chrome extension?
I'm currently developing a Chrome extension and ran into a problem that can be illustrated as follows.
Let's say I have two applications: extension and actionsLogger.
These applications are linked via Webpack Module Federation like this:
actionsLogger/webpack.js
{
...,
plugins: [
new ModuleFederationPlugin({
name: 'actionsLogger',
library: { type: 'var', name: 'actionsLogger' },
filename: 'remoteEntry.js',
exposes: {
'./logClick': './actions/logClick',
}
}),
],
devServer: {
port: 3000,
headers: {
'Access-Control-Allow-Origin': '*',
'Access-Control-Allow-Methods': 'GET, POST, PUT, DELETE, PATCH, OPTIONS',
'Access-Control-Allow-Headers': 'X-Requested-With, content-type, Authorization'
},
static: {
directory: path.resolve(__dirname, 'dist'),
}
},
...
}
extension/webpack.js
{
...,
plugins: [
new webpack.ProvidePlugin({
browser: 'webextension-polyfill'
}),
new ModuleFederationPlugin({
name: 'extension',
remotes: {
actionsLogger: 'actionsLogger#http://localhost:3000/remoteEntry.js',
},
}),
],
...
}
So, as you can see, actionsLogger runs on port 3000 and extension refers to it via Module Federation. actionsLogger contains a simple function that returns the cursor position for a click event.
actionsLogger/actions/logClick.js
function logClick(event) {
return { X: event.clientX, Y: event.clientY };
}
export default logClick;
The other application, extension, contains all the code for the Chrome extension, including this script, which imports logClick from actionsLogger/logClick and sends the cursor position to the background page whenever a click happens:
extension/tracker.js
import('actionsLogger/logClick').then((module) => {
const logClick = module.default;
document.addEventListener("click", (event) => {
const click = logClick(event);
chrome.runtime.sendMessage(click);
});
});
So manifest.json in extension looks like this:
extension/manifest.json
{
...
"content_scripts": [{
"matches": ["<all_urls>"],
"js": ["tracker.js"]
}],
...
}
And here comes the problem. If I try to open some web page with the extension installed and running, I get the following error:
Uncaught (in promise) ScriptExternalLoadError: Loading script failed.
(missing: http://localhost:3000/remoteEntry.js)
while loading "./logClick" from webpack/container/reference/actionsLogger
at webpack/container/reference/actionsLogger (remoteEntry.js:1:1)
at __webpack_require__ (bootstrap:18:1)
at handleFunction (remotes loading:33:1)
at remotes loading:52:1
at Array.forEach (<anonymous>)
at __webpack_require__.f.remotes (remotes loading:15:1)
at ensure chunk:6:1
at Array.reduce (<anonymous>)
at __webpack_require__.e (ensure chunk:5:1)
at iframe.js:44:1
At first I thought I had misconfigured something in my Module Federation settings, but then I tried the following: I added inject.js:
extension/inject.js
const script = document.createElement('script');
script.src = "chrome-extension://ddgdsaidlksalmcgmphhechlkdlfocmd/tracker.js";
document.body.appendChild(script);
Modified manifest.json in extension:
extension/manifest.json
{
...
"content_scripts": [{
"matches": ["<all_urls>"],
"js": ["inject.js"]
}],
"web_accessible_resources": [
"tracker.js"
]
...
}
Now Module Federation works fine, but since extension/tracker.js imports 'actionsLogger/logClick' from a remote, websites that define a Content Security Policy may block that request (e.g. IKEA). So this approach won't always work either.
The initial problem with accessing an MF module from a content script probably happens because content scripts run in their own isolated world, where Module Federation can't resolve remotes properly. But maybe there are some configuration flags/options or some other/better ways to make it work?
Would appreciate any advice.
Of course, minutes after I posted the question I came up with a working solution. Basically, I went with the second approach, where extension/tracker.js is injected into the page via a <script> tag by extension/inject.js, and for the CSP issue I added a utility that strips the CSP header from the page's response.
manifest.json
{
"permissions": [
...,
"webRequest",
"webRequestBlocking",
"browsingData",
...
]
}
extension/disableCSP.js
function disableCSP() {
chrome.browsingData.remove({}, { serviceWorkers: true }, function () {});
const onHeaderFilter = { urls: ['*://*/*'], types: ['main_frame', 'sub_frame'] };
browser.webRequest.onHeadersReceived.addListener(
onHeadersReceived, onHeaderFilter, ['blocking', 'responseHeaders']
);
};
function onHeadersReceived(details) {
for (let i = 0; i < details.responseHeaders.length; i++) {
if (details.responseHeaders[i].name.toLowerCase() === 'content-security-policy') {
details.responseHeaders[i].value = '';
}
}
return { responseHeaders: details.responseHeaders };
};
export default disableCSP;
Then just call disableCSP() in the background script.
I'm a bit unsure about the security drawbacks of this approach, so feel free to let me know if this solution introduces a potentially high-risk vulnerability.
I've included two pieces of code below.
The first contains the files for a Chrome extension using Manifest V2.
Here, if I click an anchor whose href points to a zip file, the extension redirects to a page bundled with the extension.
This is a working example.
I am trying to achieve the same thing in a Chrome extension with Manifest V3.
That is the second piece of code.
First part.
Extension manifest version 2
manifest.json
{
"name": "Test app mv2",
"version": "0.1",
"manifest_version": 2,
"description": "test mv2",
"background": {
"scripts": [
"background.js"
]
},
"content_security_policy": "script-src 'self' 'unsafe-eval'; object-src 'self'",
"icons": {
"128": "128.png"
},
"permissions": [
"webRequest",
"webRequestBlocking",
"<all_urls>"
],
"web_accessible_resources": [
"web/main.html"
]
}
background.js
function getHeaderFromHeaders(headers, headerName) {
for (var i=0; i<headers.length; ++i) {
var header = headers[i];
if (header.name.toLowerCase() === headerName) {
return header;
}
}
}
function isAllowed(details) {
var header = getHeaderFromHeaders(details.responseHeaders, 'content-type');
if (header) {
var headerValue = header.value.toLowerCase().split(';',1)[0].trim();
var mimeTypes = [
'application/zip'
];
return (mimeTypes.indexOf(headerValue) !== -1);
}
}
chrome.webRequest.onHeadersReceived.addListener(
function(details) {
if (details.method !== 'GET') {
// Don't intercept POST requests until http://crbug.com/104058 is fixed.
return;
}
if (!isAllowed(details)) {
return;
}
return { redirectUrl: chrome.runtime.getURL('web/main.html') };
},
{
urls: [
'<all_urls>'
],
types: ['main_frame', 'sub_frame']
},
['blocking','responseHeaders']
);
Full source for mv2
Second part.
Extension manifest version 3
manifest.json
{
"name": "Test app mv3",
"manifest_version": 3,
"version": "0.1",
"background": {
"service_worker": "./background.js"
},
"action": {
"default_title": "SW3"
},
"host_permissions": [
"<all_urls>"
],
"permissions": [
"webRequest",
"declarativeNetRequest",
"declarativeNetRequestFeedback",
"declarativeNetRequestWithHostAccess"
],
"web_accessible_resources": [{
"resources": ["web/main.html"],
"matches": ["<all_urls>"]
}],
"content_security_policy": {
"extension_pages": "script-src 'self'; object-src 'self'"
}
}
background.js
function getHeaderFromHeaders(headers, headerName) {
for (var i=0; i<headers.length; ++i) {
var header = headers[i];
if (header.name.toLowerCase() === headerName) {
return header;
}
}
}
function isAllowed(details) {
var header = getHeaderFromHeaders(details.responseHeaders, 'content-type');
if (header) {
var headerValue = header.value.toLowerCase().split(';',1)[0].trim();
var mimeTypes = [
'application/zip'
];
return (mimeTypes.indexOf(headerValue) !== -1);
}
}
chrome.webRequest.onHeadersReceived.addListener(
function(details) {
if (details.method !== 'GET') {
return;
}
if (!isAllowed(details)) {
return;
}
chrome.declarativeNetRequest.updateSessionRules({
addRules: [{
'id': 2001,
'priority': 1,
'action': {
'type': 'redirect',
'redirect': {
url: chrome.runtime.getURL('web/main.html')
}
},
'condition': {
'urlFilter': details.url,
'resourceTypes': ['main_frame']
}
}],
removeRuleIds: [2001]
});
return { redirectUrl: chrome.runtime.getURL('web/main.html') };
},
{
urls: [
'<all_urls>'
],
types: ['main_frame', 'sub_frame']
},
['responseHeaders']
);
Full source for mv3
With MV3, the code above achieves behavior similar to the MV2 code.
The difference is that when I click an anchor pointing to a zip file, the first click shows the "Save as" dialog, and only a second click on the same anchor triggers the redirect.
The same two-click behavior repeats for every other zip file.
How can I modify the MV3 code to achieve the same results as the MV2 code?
This is still possible, but it requires abandoning the chrome.webRequest API in favor of chrome.debugger. This lets your extension talk to the Chrome DevTools Protocol (CDP), where you can hook into the request/response lifecycle.
The first step is to attach the debugger to the tabId in question and send enable commands (if required) for the APIs you are using. Here you can find many examples of how to do this. For example, let's assume we want to use Fetch, so we need to send Fetch.enable.
Now, if you want to modify a request/response before it is processed, on a high level you need to:
Set up a handler for Fetch.requestPaused, configuring your target URL patterns (or * for all).
Check the request stage as described in the doc.
If Request: modify headers, and send Fetch.continueRequest.
If Response: get the response body by sending Fetch.getResponseBody. If desired, modify the body/status/headers. Fulfill the response to the client by sending Fetch.fulfillRequest.
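The steps above can be sketched roughly as follows. This is a hedged outline, not a drop-in solution: it assumes an extension with the "debugger" permission, and the tabId is a placeholder you would obtain from chrome.tabs.

```javascript
// Sketch only: intercept responses for one tab via the CDP Fetch domain.
// Assumes the "debugger" permission; the tabId below is a placeholder.
const target = { tabId: 123 };
const PROTOCOL_VERSION = '1.3';

function interceptTab() {
  if (typeof chrome === 'undefined' || !chrome.debugger) return; // not in an extension context
  chrome.debugger.attach(target, PROTOCOL_VERSION, () => {
    // Pause every request at the Response stage.
    chrome.debugger.sendCommand(target, 'Fetch.enable', {
      patterns: [{ urlPattern: '*', requestStage: 'Response' }],
    });
  });
  chrome.debugger.onEvent.addListener((source, method, params) => {
    if (method !== 'Fetch.requestPaused') return;
    // Read the paused response body, then fulfill the response ourselves
    // (this is where you could rewrite headers, status, or body).
    chrome.debugger.sendCommand(source, 'Fetch.getResponseBody',
      { requestId: params.requestId }, (body) => {
        chrome.debugger.sendCommand(source, 'Fetch.fulfillRequest', {
          requestId: params.requestId,
          responseCode: params.responseStatusCode || 200,
          responseHeaders: params.responseHeaders,
          // Fetch.fulfillRequest expects a base64-encoded body.
          body: body && body.base64Encoded ? body.body : btoa(body ? body.body : ''),
        });
      });
  });
}

interceptTab();
```

For requests you don't want to touch, you would send Fetch.continueRequest instead of fulfilling them; remember to chrome.debugger.detach when you are done.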
Helpful:
https://grep.app/search?q=chrome.debugger.sendCommand
https://grep.app/search?q=Fetch.fulfillRequest
I am using this approach successfully in a Chrome extension project. MV3 killed the native extension request API (or at least severely limited it), so for now this is the only option I am aware of. The downside is that an annoying bar appears under the tab, saying the browser is currently being debugged by $EXT_NAME.
I am developing a Chrome extension with Manifest V3. I tried the code below to register a service worker because, in many resources, I found:
[Service Worker Registration] Registered extension scheme to allow service workers
Service Workers require a secure origin, such as HTTPS.
chrome-extension:// pages are not HTTP/HTTPS, but are secure so this change
becomes a necessary step to allow extensions to register a Service Worker.
"chrome-extension" is added as a scheme allowing service workers.
But even after using the code below, the service worker is not working, and because of this the background.js file does not run.
Scope URL:
chrome-extension://ifcfmjiflkcjbbddpbeccmkjhhimbphj/trigger/trigger.html
serviceWorker.js
self.addEventListener('load', function () {
navigator.serviceWorker.register('./background.js').then(function (registration) {
// Registration was successful
console.log('ServiceWorker registration successful with scope: ', registration.scope);
}, function (err) {
// registration failed :(
console.log('ServiceWorker registration failed: ', err);
});
});
self.addEventListener('install', (event) => {
event.waitUntil(
caches.open('v1').then((cache) => {
return cache.addAll([
]);
})
);
});
self.addEventListener('fetch', (event) => {
event.respondWith(
caches.match(event.request).then((resp) => {
return resp || fetch(event.request).then((response) => {
return caches.open('v1').then((cache) => {
// cache.put(event.request, response.clone());
return response;
});
});
})
);
});
self.addEventListener('activate', (event) => {
var cacheKeeplist = ['v2'];
event.waitUntil(
caches.keys().then((keyList) => {
return Promise.all(keyList.map((key) => {
if (cacheKeeplist.indexOf(key) === -1) {
return caches.delete(key);
}
}));
})
);
});
manifest.json
{
"manifest_version": 3,
"name": "TestExtension",
"description": "This extension is for testing purpose only",
"version": "0.0.1",
"background":{
"service_worker":"./serviceWorker.js"
},
"declarative_net_request": {
"rule_resources": [{
"id": "ruleset_1",
"enabled": true,
"path": "rules.json"
}]
},
"permissions":["storage", "tabs", "activeTab", "scripting", "declarativeNetRequest", "declarativeNetRequestFeedback","browsingData","contextMenus"],
"host_permissions":["<all_urls>"],
"web_accessible_resources": [
{
"resources": [ "css/notification.css" ],
"matches": [ "<all_urls>" ]
}
],
"content_scripts":[
{
"js":["content-scripts/content.js"],
"matches":["<all_urls>"],
"all_frames":true,
"exclude_matches":["*://extensions/*"],
"run_at":"document_end"
}
],
"icons":{
"16":"assets/baby-logo-black-icon.png",
"48":"assets/baby-logo-black-icon.png",
"128":"assets/baby-logo-black-icon.png"
},
"action":{
"default_title":"TestExtension",
"default_popup":"trigger/trigger.html"
}
}
Can anyone guide me on whether I have missed anything in serviceWorker.js or manifest.json, or made a mistake somewhere? Kindly help me get serviceWorker.js and background.js to work.
Note: with the above code, I am not getting any errors in serviceWorker.js, but it isn't working either.
There's no need to register anything.
The browser does it automatically when you install the extension.
Remove the registration code and serviceWorker.js.
Use "service_worker": "background.js" in manifest.json.
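For reference, a minimal sketch of the relevant manifest.json entry (the rest of your manifest stays as-is):

```json
{
  "manifest_version": 3,
  "background": {
    "service_worker": "background.js"
  }
}
```

Chrome registers this worker itself when the extension is installed, so no navigator.serviceWorker.register() call is needed or allowed.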
So I have Hapi (v17.5.1), and when my plugins array is
[
{
plugin: good,
options: {
reporters: {
errorReporter: [
{
module: 'good-squeeze',
name: 'Squeeze',
args: [{ error: '*' }],
}, {
module: 'good-console',
},
'stderr',
],
infoReporter: [
{
module: 'good-squeeze',
name: 'Squeeze',
args: [{ log: '*', response: '*' }],
}, {
module: 'good-console',
},
'stdout',
],
},
}
]
Let's save it in a variable goodPlugin for the next example.
That is, with only the good plugin everything works fine, but when I try to add Inert, Vision, or Hapi-Swagger, it breaks with the error Cannot start server before plugins finished registration.
An example:
const HapiSwagger = require('hapi-swagger');
const Inert = require('inert');
const Vision = require('vision');
const Pack = require('../package');
module.exports = [
Inert,
Vision,
// goodPlugin,
{
plugin: HapiSwagger,
options: {
info: {
title: Pack.description,
version: Pack.version,
},
},
}
];
Where am I going wrong? I even tried adding these plugins only when development mode is on, but I got the same error.
Do you use await when registering plugins? As suggested in the documentation, the plugin registration part should look like this:
const init = async () => {
await server.register({
plugin: require('hapi-pino')
});
await server.start();
console.log(`Server running at: ${server.info.uri}`);
};
init();
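Applied to the plugin array from the question, a hedged sketch (the './plugins' module path is an assumption for the file that exports the array above):

```javascript
// Sketch: register the whole plugin array before starting the server (Hapi v17+).
// './plugins' is assumed to be the module exporting the array from the question.
async function init() {
  const Hapi = require('hapi'); // '@hapi/hapi' in newer versions
  const server = Hapi.server({ port: 3000 });
  const plugins = require('./plugins');
  await server.register(plugins); // resolves only once every plugin has registered
  await server.start();           // safe now: registration has finished
  console.log(`Server running at: ${server.info.uri}`);
}
```

Calling server.start() before register() has resolved is exactly what produces "Cannot start server before plugins finished registration".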
I want to set my Chrome proxy using an extension. This is my background.js file:
var config = {
mode: "fixed_servers",
rules: {
proxyForHttp: {
scheme: "http",
host: "My IP",
port: // My port as INT
}
}
};
chrome.proxy.settings.set(
{
value: config,
scope: "regular"
}, function() {});
It doesn't change my IP; what am I doing wrong?
Also, how can I set the proxy for just one website, and how can I turn it off?
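On scoping a proxy to a single site: the chrome.proxy API also accepts a PAC script, which can route one host through the proxy and everything else direct, and chrome.proxy.settings.clear() turns the override off. A hedged sketch (host, port, and site below are placeholders):

```javascript
// Hedged sketch: proxy only one site via a PAC script; everything else goes direct.
// proxyHost, proxyPort, and site are placeholders.
function buildPac(proxyHost, proxyPort, site) {
  return 'function FindProxyForURL(url, host) {\n' +
    '  if (dnsDomainIs(host, "' + site + '")) return "PROXY ' + proxyHost + ':' + proxyPort + '";\n' +
    '  return "DIRECT";\n' +
    '}';
}

function applySiteProxy() {
  if (typeof chrome === 'undefined' || !chrome.proxy) return; // not in an extension context
  chrome.proxy.settings.set({
    value: { mode: 'pac_script', pacScript: { data: buildPac('1.2.3.4', 8080, 'example.com') } },
    scope: 'regular',
  });
}

applySiteProxy();

// Turning the proxy off restores the browser default:
// chrome.proxy.settings.clear({ scope: 'regular' });
```

This requires the "proxy" permission in the manifest; note that chrome.proxy configures which proxy is used, not its credentials.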