How to fix Error: Not implemented: navigation (except hash changes) - javascript

I am implementing a unit test for a file that contains window.location.href, and I need to check it.
My Jest version is 22.0.4. Everything is fine when I run my tests on Node version >= 10,
but I get this error when I run them on v8.9.3:
console.error node_modules/jsdom/lib/jsdom/virtual-console.js:29
Error: Not implemented: navigation (except hash changes)
I have no idea what causes it. I have searched many pages looking for a solution, or at least a hint about what is happening here.
[UPDATE] - I took a deeper look at the source code and I think this error comes from jsdom.
at module.exports (webapp/node_modules/jsdom/lib/jsdom/browser/not-implemented.js:9:17)
at navigateFetch (webapp/node_modules/jsdom/lib/jsdom/living/window/navigation.js:74:3)
navigation.js file
exports.evaluateJavaScriptURL = (window, urlRecord) => {
  const urlString = whatwgURL.serializeURL(urlRecord);
  const scriptSource = whatwgURL.percentDecode(Buffer.from(urlString)).toString();
  if (window._runScripts === "dangerously") {
    try {
      return window.eval(scriptSource);
    } catch (e) {
      reportException(window, e, urlString);
    }
  }
  return undefined;
};
exports.navigate = (window, newURL, flags) => {
  // This is NOT a spec-compliant implementation of navigation in any way. It implements a few selective steps that
  // are nice for jsdom users, regarding hash changes and JavaScript URLs. Full navigation support is being worked on
  // and will likely require some additional hooks to be implemented.
  const document = idlUtils.implForWrapper(window._document);
  const currentURL = document._URL;
  if (!flags.reloadTriggered && urlEquals(currentURL, newURL, { excludeFragments: true })) {
    if (newURL.fragment !== currentURL.fragment) {
      navigateToFragment(window, newURL, flags);
    }
    return;
  }
  // NOT IMPLEMENTED: Prompt to unload the active document of browsingContext.
  // NOT IMPLEMENTED: form submission algorithm
  // const navigationType = 'other';
  // NOT IMPLEMENTED: if resource is a response...
  if (newURL.scheme === "javascript") {
    window.setTimeout(() => {
      const result = exports.evaluateJavaScriptURL(window, newURL);
      if (typeof result === "string") {
        notImplemented("string results from 'javascript:' URLs", window);
      }
    }, 0);
    return;
  }
  navigateFetch(window);
};
not-implemented.js
module.exports = function (nameForErrorMessage, window) {
  if (!window) {
    // Do nothing for window-less documents.
    return;
  }
  const error = new Error(`Not implemented: ${nameForErrorMessage}`);
  error.type = "not implemented";
  window._virtualConsole.emit("jsdomError", error);
};
I see some odd logic in these files:
const scriptSource = whatwgURL.percentDecode(Buffer.from(urlString)).toString();
and then it checks whether the result is a string and reports the "not implemented" error.

Alternative version that worked for me with jest only:
let assignMock = jest.fn();
delete window.location;
window.location = { assign: assignMock };
afterEach(() => {
  assignMock.mockClear();
});
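For illustration, a test using that mock might look like this (openPage is a hypothetical function under test that calls window.location.assign):
it('navigates to the given URL', () => {
  openPage('https://example.com/next');
  expect(assignMock).toHaveBeenCalledWith('https://example.com/next');
});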
Reference:
https://remarkablemark.org/blog/2018/11/17/mock-window-location/

Alternate solution: You could mock the location object
const mockResponse = jest.fn();
Object.defineProperty(window, 'location', {
  value: {
    hash: {
      endsWith: mockResponse,
      includes: mockResponse,
    },
    assign: mockResponse,
  },
  writable: true,
});

I faced a similar issue in one of my unit tests. Here's what I did to resolve it.
Replace window.location.href with window.location.assign(url) OR window.location.replace(url).
JSDOM will still complain that window.location.assign is not implemented:
Error: Not implemented: navigation (except hash changes)
Then, in the unit tests for the component/function that calls window.location.assign(url) or window.location.replace(url), define the following:
sinon.stub(window.location, 'assign');
sinon.stub(window.location, 'replace');
Make sure you import sinon: import sinon from 'sinon';
Hopefully this fixes the issue for you as it did for me.
The reason JSDOM complains with Error: Not implemented: navigation (except hash changes) is that JSDOM does not implement methods like window.alert, window.location.assign, etc.
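A minimal sketch of how that can look in a test, assuming your jsdom version still allows stubbing location methods (goToDashboard is a hypothetical function that calls window.location.assign; restore the stub afterwards so other tests are unaffected):
import sinon from 'sinon';

describe('goToDashboard', () => {
  let assignStub;
  beforeEach(() => {
    assignStub = sinon.stub(window.location, 'assign');
  });
  afterEach(() => {
    assignStub.restore();
  });
  it('navigates to the dashboard', () => {
    goToDashboard();
    sinon.assert.calledWith(assignStub, '/dashboard');
  });
});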
References:
http://xxd3vin.github.io/2018/03/13/error-not-implemented-navigation-except-hash-changes.html
https://www.npmjs.com/package/jsdom#virtual-consoles

You can use the jest-location-mock package for that.
Usage example with CRA (create-react-app):
// src/setupTests.ts
import "jest-location-mock";

I found a good reference that explains and solves the problem: https://remarkablemark.org/blog/2018/11/17/mock-window-location/
Because the tests run under Node with jsdom, which does not implement navigation on window.location, we need to mock the function:
Ex:
delete window.location;
window.location = { reload: jest.fn() }
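A hypothetical usage, assuming refreshPage is a function that calls window.location.reload():
it('reloads the page', () => {
  refreshPage();
  expect(window.location.reload).toHaveBeenCalled();
});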

The following worked for me:
const originalHref = window.location.href;
afterEach(() => {
  window.history.replaceState({}, "", decodeURIComponent(originalHref));
});
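This restores the original URL after each test; a test can then change the URL with the History API (which jsdom does implement) without triggering the navigation error. For example (the path is hypothetical):
it('reads a query parameter from the URL', () => {
  window.history.pushState({}, "", "/search?q=test");
  expect(window.location.search).toBe("?q=test");
  // the afterEach above restores window.location.href for the next test
});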
happy coding :)

it('test', () => {
  const { open } = window;
  delete window.open;
  window.open = jest.fn();
  jest.spyOn(window, 'open');
  // then call the function that's calling the window
  expect(window.open).toHaveBeenCalled();
  window.open = open;
});

Related

How to chrome.runtime.reload() a Chrome Extension when building it with Webpack 5 Boilerplate?

I am using https://github.com/lxieyang/chrome-extension-boilerplate-react as the basis to build a Chrome extension. It all works fine, and everything does hot-reloading (popup, background, options, newtab) except for the content script. Reloading the matching pages does not reload the underlying .js; I have to reload (or turn off and on) the whole extension for the changes to take effect.
So, in webpack.config.js I commented out 'contentScript', hoping that would fix it, but it makes no difference.
...
chromeExtensionBoilerplate: {
  notHotReload: [
    //'contentScript'
  ],
},
...
In src/pages/Content/index.js it actually states
console.log('Must reload extension for modifications to take effect.');
When developing another extension in plain vanilla JS, I dropped in a hot-reload.js from https://github.com/xpl/crx-hotreload, which worked perfectly. From what I understand, it is the chrome.runtime.reload() call that makes Chrome completely reload the extension.
So my questions actually are:
When changing src/pages/Content/index.js, webpack does rebuild build/contentScript.bundle.js. But why doesn't manually reloading the tab/page pick up these changes, when it does for popup, background, etc.?
And if there is no way to make the above boilerplate reload the extension (I don't mind the hard reload), how would I be able to integrate hot-reload.js (or its effect, actually) into this boilerplate? That is, how do I reload the extension when build/contentScript.bundle.js changes?
Thanks in advance!
For whoever is interested: I ended up placing the mentioned hot-reload.js in my extension and loading it from within the background script. That breaks webpack's hot-reloading by reloading the entire extension on any file change, but as long as I only work on the content script, that's fine. I can remove it once I'm done, or when I work on other scripts.
Use server-sent-events:
start.js
const SSEStream = require('ssestream').default;
let sseStream;
...
setupMiddlewares: (middlewares, _devServer) => {
  if (!_devServer) {
    throw new Error('webpack-dev-server is not defined');
  }
  /** Change: SSE endpoint on the /reload path */
  middlewares.unshift({
    name: 'handle_content_change',
    // `path` is optional
    path: '/reload',
    middleware: (req, res) => {
      console.log('sse reload');
      sseStream = new SSEStream(req);
      sseStream.pipe(res);
      res.on('close', () => {
        sseStream.unpipe(res);
      });
    },
  });
  return middlewares;
}
webpack.compiler.hook
let contentOrBackgroundIsChange = false;
compiler.hooks.watchRun.tap('WatchRun', (comp) => {
  if (comp.modifiedFiles) {
    const changedFiles = Array.from(comp.modifiedFiles, (file) => `\n ${file}`).join('');
    console.log('FILES CHANGED:', changedFiles);
    if (watchRunDir.some(p => changedFiles.includes(p))) {
      contentOrBackgroundIsChange = true;
    }
  }
});
compiler.hooks.done.tap('contentOrBackgroundChangedDone', () => {
  if (contentOrBackgroundIsChange) {
    contentOrBackgroundIsChange = false;
    console.log('--------- triggering chrome reload ---------');
    sseStream?.writeMessage(
      {
        event: 'content_changed_reload',
        data: {
          action: 'reload extension and refresh current page'
        }
      },
      'utf-8',
      (err) => {
        sseStream?.unpipe();
        if (err) {
          console.error(err);
        }
      },
    );
  }
});
crx background
if (process.env.NODE_ENV === 'development') {
  const eventSource = new EventSource(`http://${process.env.REACT_APP__HOST__}:${process.env.REACT_APP__PORT__}/reload/`);
  console.log('--- start listen ---');
  eventSource.addEventListener('content_changed_reload', async ({ data }) => {
    const [tab] = await chrome.tabs.query({ active: true, lastFocusedWindow: true });
    const tabId = tab.id || 0;
    console.log(`tabId is ${tabId}`);
    await chrome.tabs.sendMessage(tabId, { type: 'window.location.reload' });
    console.log('chrome extension will reload', data);
    chrome.runtime.reload();
  });
}
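The background script above sends a { type: 'window.location.reload' } message to the active tab before calling chrome.runtime.reload(); for that to have any effect, the content script needs a matching listener. A minimal sketch of what that could look like (the message type string is just the one used above, not part of any Chrome API):
// crx content script (sketch)
chrome.runtime.onMessage.addListener((message) => {
  if (message.type === 'window.location.reload') {
    // refresh the page so the newly reloaded content script gets injected again
    window.location.reload();
  }
});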

Loading webpack bundle via script tag for microfrontend approach

I'm following a couple of articles on how to implement a simple micro-frontend approach with React (here and here, see sample repos at the bottom of the question).
It works perfectly when the two apps (the root app and the sub-app) are running on their respective development servers. However, when I deploy the build artifacts to the real web server, it doesn't work. This is the important code in the root app:
function MicroFrontend({ name, host, history }) {
  useEffect(() => {
    const scriptId = `micro-frontend-script-${name}`;
    const renderMicroFrontend = () => {
      window["render" + name](name + "-container", history);
    };
    if (document.getElementById(scriptId)) {
      renderMicroFrontend();
      return;
    }
    fetch(`${host}/asset-manifest.json`)
      .then((res) => res.json())
      .then((manifest) => {
        const script = document.createElement("script");
        script.id = scriptId;
        script.crossOrigin = "";
        script.src = `${host}${manifest.files["main.js"]}`;
        script.onload = () => {
          renderMicroFrontend();
        };
        document.head.appendChild(script);
      });
    return () => {
      window[`unmount${name}`] && window[`unmount${name}`](`${name}-container`);
    };
  });
  return <main id={`${name}-container`} />;
}
MicroFrontend.defaultProps = {
  document,
  window,
};
When I click on the button that loads the main.js for the micro-app, it works fine up till the point where it calls the renderMicroFrontend above. Then I get this error in my browser: Uncaught TypeError: window[("render" + t)] is not a function
This is because it can't find the function that loads the microfrontend that is supposed to be on window. When I run the two apps in the dev server, I have the correct function on window and it works. When I follow the same steps with the two apps deployed to a real server (after running npm run build), instead of having the renderMicroApp function on my window, I have a different variable called: webpackJsonpmicro-app.
I figured out that this is due to the output.library option (and/or related options) in webpack, from the webpack docs:
output.jsonpFunction.
string = 'webpackJsonp'
Only used when target is set to 'web', which uses JSONP for loading on-demand chunks.
If using the output.library option, the library name is automatically concatenated with output.jsonpFunction's value.
I basically want the bundle/script to be loaded and evaluated, so that the renderMicroApp function is available on the window, but I'm lost regarding what webpack settings I need for this to work, since there are lots of different permutations of options.
For reference, I'm using the following config-overrides in the micro-app (atop react-app-rewired):
module.exports = {
  webpack: (config, env) => {
    config.optimization.runtimeChunk = false;
    config.optimization.splitChunks = {
      cacheGroups: {
        default: false,
      },
    };
    config.output.filename = "static/js/[name].js";
    config.plugins[5].options.filename = "static/css/[name].css";
    config.plugins[5].options.moduleFilename = () => "static/css/main.css";
    return config;
  },
};
And in my index.tsx (in the micro app) I'm exposing the function on the window:
window.renderMicroApp = (containerId, history) => {
  ReactDOM.render(<AppRoot />, document.getElementById(containerId));
  serviceWorker.unregister();
};
Some sample repos:
https://github.com/rehrumesh/react-microfrontend-container
https://github.com/rehrumesh/react-microfrontend-container-cats-app.git
https://github.com/rehrumesh/react-microfrontend-container-dogs-app
You could try with terser-webpack-plugin.
const TerserPlugin = require('terser-webpack-plugin');
module.exports = {
  webpack: (config, env) => {
    config.optimization.minimize = true;
    config.optimization.minimizer = [new TerserPlugin({
      terserOptions: { keep_fnames: true }
    })];
    config.optimization.runtimeChunk = false;
    config.optimization.splitChunks = {
      cacheGroups: {
        default: false,
      },
    };
    config.output.filename = "static/js/[name].js";
    config.plugins[5].options.filename = "static/css/[name].css";
    config.plugins[5].options.moduleFilename = () => "static/css/main.css";
    return config;
  }
};
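Since the question also points at output.jsonpFunction: giving each micro-app a unique JSONP global is a common companion fix, so that multiple webpack 4 bundles on the same page don't clash on window.webpackJsonp. A sketch under that assumption (webpack 4, as used by CRA of that era), added to the same config-overrides; the exact name is arbitrary:
module.exports = {
  webpack: (config, env) => {
    // a unique JSONP callback name per micro-app (webpack 4 option)
    config.output.jsonpFunction = 'webpackJsonpMicroApp';
    return config;
  },
};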

Web Worker with imported modules in React

I'm trying to make a web worker to prevent stalling the React main thread. The worker is supposed to read an image and do various things.
The app was created using create-react-app.
Currently I have
WebWorker.js
export default class WebWorker {
  constructor(worker) {
    const code = worker.toString();
    const blob = new Blob(['(' + code + ')()'], { type: "text/javascript" });
    return new Worker(URL.createObjectURL(blob), { type: 'module' });
  }
}
readimage.worker.js
import Jimp from "jimp";
export default () => {
self.addEventListener('message', e => { // eslint-disable-line no-restricted-globals
if (!e) return;
console.log('Worker reading pixels for url', e.data);
let data = {};
Jimp.read(e.data).then(image => {
// jimp does stuff
console.log('Worker Finished processing image');
})
postMessage(data);
})
};
And then in my React component AppContent.js I have
import WebWorker from "./workers/WebWorker";
import readImageWorker from './workers/readimage.worker.js';
export default function AppContent() {
  const readWorker = new ReadImageWorker(readImageWorker);
  readWorker.addEventListener('message', event => {
    console.log('returned data', event.data);
    setState(data);
  });

  // callback that is executed onClick from a button component
  const readImageContents = (url) => {
    readWorker.postMessage(url);
    console.log('finished reading pixels');
  };
}
But when I run it, I get the error
Uncaught ReferenceError: jimp__WEBPACK_IMPORTED_MODULE_0___default is not defined
How can I properly import a module into a web worker?
EDIT:
As per suggestions from Kaiido, I have tried installing worker-loader, and edited my webpack.config.js to the following:
module.exports = {
  module: {
    rules: [
      {
        test: /\.worker\.js$/,
        use: { loader: 'worker-loader' }
      }
    ]
  }
};
But when I run it, I still get the error
Uncaught ReferenceError: jimp__WEBPACK_IMPORTED_MODULE_0__ is not defined
I'm not too much into React, so I can't tell whether the module Worker is the best way to go (maybe worker-loader would be a better solution), but regarding the last error you got: it's because you didn't set the type of your Blob when you built it.
In this case it does matter, because it determines the Content-Type the browser sets when serving the Blob to the APIs that fetch it.
Here Firefox is a bit more lenient and somehow allows it, but Chrome is picky and requires that you set this type option to one of the many JavaScript MIME types.
const script_content = `postMessage('running');`;
// this one will fail in Chrome
const blob1 = new Blob([script_content]); // no type option
const worker1 = new Worker(URL.createObjectURL(blob1), { type: 'module'});
worker1.onerror = (evt) => console.log( 'worker-1 failed' );
worker1.onmessage = (evt) => console.log( 'worker-1', evt.data );
// this one works in Chrome
const blob2 = new Blob([script_content], { type: "text/javascript" });
const worker2 = new Worker(URL.createObjectURL(blob2), { type: 'module'});
worker2.onerror = (evt) => console.log( 'worker-2 failed' );
worker2.onmessage = (evt) => console.log( 'worker-2', evt.data );
But now that this error is fixed, you'll face another error, because the format import lib from "libraryname" is still not supported in browsers, so you'd have to change "libraryname" to the path of your actual script file, keeping in mind that it will be relative to your Worker's base URI, i.e. probably your main page's origin.
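For example, inside a module Worker the import specifier has to be a URL or path rather than a bare package name; the path below is purely hypothetical (a self-hosted ES-module build of the library, resolved against the page's origin):
// inside the worker script (module Worker)
import Jimp from '/vendor/jimp.esm.js'; // hypothetical path, not a bare "jimp" specifier

self.addEventListener('message', (e) => {
  // ... use Jimp here ...
});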
I experienced the same problem. Firefox could not show me where exactly the error was (in fact it was plainly misleading...), but Chrome did.
I fixed my problem by not relying on an import statement (importing one of my other files), which would only have worked within a React/webpack context. When you load a Worker script via the Blob/URL hack, it has no webpack context (as it is loaded at runtime and not at transpile time), so all the webpack paraphernalia (__WEBPACK__blah_blah) is not going to exist or be visible.
So, within React, import statements in worker files loaded this way will not work.
I haven't thought of a workaround yet.
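One commonly used workaround, hinted at in the question's edit, is worker-loader: instead of stringifying a function into a Blob, let webpack bundle the worker file (including its imports) and import the resulting Worker class. A rough sketch under that assumption (worker-loader v2/v3 style; the image URL is hypothetical):
// readimage.worker.js (bundled by worker-loader): register the listener at the
// top level instead of exporting a function, so webpack can resolve the import
import Jimp from 'jimp';

self.addEventListener('message', (e) => {
  Jimp.read(e.data).then(() => {
    self.postMessage({ done: true });
  });
});

// AppContent.js
import Worker from './readimage.worker.js';

const readWorker = new Worker();
readWorker.addEventListener('message', (event) => {
  console.log('returned data', event.data);
});
readWorker.postMessage('https://example.com/some-image.png'); // hypothetical URL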

mocking a conditional window.open function call with jest

I am trying to write a test to make sure that, when I pass a valid URL argument to a function, it runs window.open(args) to open it, and then to make sure that it focuses on the new tab.
Test link validity:
export function isValidURL(url: string): boolean {
  try {
    new URL(url)
    return true
  } catch (e) {
    console.warn(`Invalid URL: ${url}`)
    return false
  }
}
Open link:
export function openURL(url: string): void {
  if (isValidURL(url)) {
    const exTab = window.open(url, "_blank")
    if (exTab) exTab.focus()
  }
}
I think I should mock some function, or maybe fake its implementation, and then check how many times it was called, or something like that. But I'm new to Jest and testing, and I feel confused about how that can be done.
My attempt:
describe("Test tools.openURL()", () => {
test("it should open link if valid.", () => {
const { open } = window
delete window.open
window.open = jest.fn()
openURL("htts//url2.de9v")
expect(window.open).not.toHaveBeenCalled()
openURL("https://www.url1.dev")
expect(window.open).toHaveBeenCalled()
window.open = open
// test focus here
})
})
With this code I have succeeded in testing open; now I just need to test focus.
'open' is a read-only property.
Instead of jest.spyOn(window, "open"), try:
Object.defineProperty(window, 'open', { value: <your mock> });
In place of <your mock>, put the mock object, or an object that should be returned.
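A sketch of how that could cover both the open and the focus assertions (the mock shape is an assumption: it returns an object with a focus spy so exTab.focus() can be verified):
describe('tools.openURL()', () => {
  const focusMock = jest.fn();
  const openMock = jest.fn(() => ({ focus: focusMock }));

  beforeAll(() => {
    Object.defineProperty(window, 'open', { value: openMock, writable: true });
  });
  afterEach(() => {
    openMock.mockClear();
    focusMock.mockClear();
  });

  test('opens and focuses a valid URL', () => {
    openURL('https://www.url1.dev');
    expect(openMock).toHaveBeenCalledWith('https://www.url1.dev', '_blank');
    expect(focusMock).toHaveBeenCalled();
  });

  test('does not open an invalid URL', () => {
    openURL('htts//url2.de9v');
    expect(openMock).not.toHaveBeenCalled();
  });
});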

@sentry/node integration to wrap bunyan log calls as breadcrumbs

Sentry by default has an integration for console.log that makes it part of breadcrumbs (import name: Sentry.Integrations.Console).
How can we make it work for the bunyan logger as well, like:
const koa = require('koa');
const app = new koa();
const bunyan = require('bunyan');
const log = bunyan.createLogger({
  name: 'app',
  // ..... other settings go here ....
});
const Sentry = require('@sentry/node');
Sentry.init({
  dsn: MY_DSN_HERE,
  integrations: integrations => {
    // should anything be handled here & how?
    return [...integrations];
  },
  release: 'xxxx-xx-xx'
});
app.on('error', (err) => {
  Sentry.captureException(err);
});
// I am trying to get all of these to be part of Sentry breadcrumbs,
// but only console.log('foo') is working
console.log('foo');
log.info('bar');
log.warn('baz');
log.debug('any');
log.error('many');
throw new Error('help!');
P.S. I have already tried bunyan-sentry-stream, but had no success with @sentry/node; it just pushes entries instead of treating them as breadcrumbs.
Bunyan supports custom streams, and those streams are just objects with a write function. See https://github.com/trentm/node-bunyan#streams
Below is an example custom stream that simply writes to the console. It would be straightforward to use this example to instead write to the Sentry module, likely calling Sentry.addBreadcrumb({}) or a similar function.
Please note, though, that the variable record in my example below is a JSON string, so you would likely want to parse it to get the log level, message, and other data out of it for submission to Sentry.
{
  level: 'debug',
  stream: (function () {
    return {
      write: function (record) {
        console.log('Hello: ' + record);
      }
    }
  })()
}
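A rough sketch of wiring that idea up to Sentry (the level mapping is an assumption based on bunyan's numeric levels; Sentry.addBreadcrumb is the documented @sentry/node API for adding breadcrumbs, and type: 'raw' makes bunyan pass the record as an object instead of a JSON string):
const bunyan = require('bunyan');
const Sentry = require('@sentry/node');

// map bunyan's numeric levels (10..60) to Sentry breadcrumb levels
const levelMap = { 10: 'debug', 20: 'debug', 30: 'info', 40: 'warning', 50: 'error', 60: 'fatal' };

const sentryBreadcrumbStream = {
  write(record) {
    // with a raw stream the record is already an object; otherwise parse the JSON string
    const data = typeof record === 'string' ? JSON.parse(record) : record;
    Sentry.addBreadcrumb({
      category: data.name, // logger name, e.g. 'app'
      message: data.msg,
      level: levelMap[data.level] || 'info',
    });
  },
};

const log = bunyan.createLogger({
  name: 'app',
  streams: [
    { level: 'debug', type: 'raw', stream: sentryBreadcrumbStream },
    { level: 'info', stream: process.stdout },
  ],
});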
