If we have different bundles created by webpack and we require.ensure something to dynamically transfer and evaluate it at a later point in time, this happens via JSONP padding and some webpack JS magic. If we have
require.ensure([ ], ( require ) => {
console.log('before...');
var data = require( './myModule.js' );
console.log('after...');
}, 'myModule')
"after..." will only be logged once that module has been entirely transferred and evaluated. If this chunk / module happens to be pretty big and contains images, CSS and whatnot, the loading will pretty much lock down the browser while the webpack JavaScript code unpacks the bundle with all its components.
Question: Is there any way to "hook" into that require magic? For instance, it would be a dream scenario to have callbacks for:
whole file / chunk was transferred
image[1] was evaluated
css[1] was evaluated / style tag was injected
javascript was evaluated
and so forth, assuming that our transferred bundle contains a lot of data. In general it just bothers me pretty hard to have a nice option to asynchronously transfer whole bundles dynamically, but still have to load that very bundle in full sync / blocking fashion.
Let me preface by saying I know this might be an 'annoying' answer, because it doesn't answer your question directly but offers an alternative, pragmatic solution to the browser hanging problem. I used this pattern myself to manage asset loading within the context of a heavy 3D web game.
I'm writing this as an answer and not as a comment so it might serve others who come across the same problem. If this does answer your case, I'll be happy to provide actual code to implement and generify these sorts of modules.
If I understand correctly, essentially what you want is a way to break down MyModule into discrete components which can be atomically loaded and evaluated within the context of one require.ensure, but to handle evaluation so that not everything is evaluated in one go, resulting in a browser hang.
A different way to look at this is to use the require and ensure methods themselves as the loading/evaluation mechanisms. Consider MyModule.js, which is a huge-loading module with the dependencies Css1, Css2, ... CssN as well as JS1, JS2, ... JSN and images.
My suggestion is to break it down into SuperMyModule.js which requires MyModuleLogic.js as well as all the CSS, images and JS.
Now, in SuperMyModule.js you could do:
let myModuleLogic = require("./myModuleLogic");
console.log('JS was evaluated');
require.ensure(['image1.png'], ( require ) => {
let data = require( './image1.png' );
console.log('image[1] was evaluated');
// register that resource was evaluated/fire event
})
require.ensure(['style1.css'], ( require ) => {
let data = require( './style1.css' );
console.log('css[1] was evaluated');
// register that resource was evaluated/fire event
})
//after all resources evaluated/fire callback or event
Then in your original file, like you requested:
require.ensure([ ], ( require ) => {
console.log('before...');
let myModule = require( './superMyModule.js' );
console.log('after...');
})
And if you set up your module instance as an event emitter, you can possibly hook into the loading of resources like so:
require.ensure([ ], ( require ) => {
let myModule = require( './superMyModule.js' );
myModule.on("loadResource", myCallback)
})
I guess I was confused about the topic myself, so my question was probably not precise enough to get properly answered. However, my misunderstanding in the whole "CommonJS dynamic module loading" context was this: require.ensure() will just transfer the module code (respectively the chunk which webpack created) over the wire. After that, the transferred chunk, which basically is just one big ECMAScript file, sits there in the browser, cached but not yet evaluated. Evaluation of the entire chunk happens only on the actual require() call.
Having said that, it is entirely in your hands how you decouple and evaluate the individual parts of a module / chunk. If, for example, like in my original question, a module requires() in some CSS files, some images and some HTML, that all gets asynchronously transferred on the require.ensure() call. In which manner you require() (and therefore evaluate) those parts is entirely up to you, and you can decouple those calls yourself if necessary.
For instance, a Module looks like this:
Module1.js
"use strict";
import { io } from 'socket.io-client';
document.getElementById( 'foo' ).addEventListener('click', ( event ) => {
let partCSS = require( 'style/usable!./someCoolCSS.css' ),
moarCSS = require( 'style/usable!./moarCoolCSS.css' ),
tmpl = require( './myTemplate.html' ),
image1 = require( './foo.jpg' ),
image2 = require( './bar.png' );
}, false);
Of course, all these files are already contained by the Chunk which gets transferred to the client, when some other Module calls:
require.ensure([ 'Module1.js' ], ( require ) => {
}, 'Module1');
This is where my confusion was. So now, we can just play with the require() calls within Module1.js ourselves. If we really require a lot of files that way, we could even use a setTimeout / setImmediate timer to decouple the synchronous evaluation between the individual require() calls if necessary or desired.
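That decoupling could be sketched with a small hand-rolled queue; the thunks here stand in for the individual require() calls, whose file names are made up for illustration:

```javascript
// Sketch: evaluate a list of thunks one macrotask at a time, so the
// main thread gets a chance to render between evaluations. In a real
// module each thunk would be something like () => require('./foo.css').
function createEvaluationQueue(thunks) {
  const results = [];
  return {
    results,
    // Evaluate the next thunk synchronously; returns true while
    // there is more work left.
    step() {
      if (results.length < thunks.length) {
        results.push(thunks[results.length]());
      }
      return results.length < thunks.length;
    },
    // Drive step() via setTimeout so evaluations don't block each other.
    run(onDone) {
      const tick = () => {
        if (this.step()) setTimeout(tick, 0);
        else onDone(results);
      };
      tick();
    },
  };
}
```

Calling run() then evaluates one resource per tick instead of locking the browser for the whole chunk.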
Actually a long answer for a pretty simple story.
TL;DR:
"require.ensure transfers a whole chunk over the wire. This chunk contains all files which are part of a require() call within the ensured module. But those files do not get automatically evaluated. That happens only when the actual require() call is matched at runtime (which is represented by a webpackJsonp call at this point)."
You can offload the main thread and avoid blocking using the worker loader.
Downside: there's extra message passing to do between main window and the worker.
https://github.com/webpack/worker-loader
You can also try emitting load events in the large module to track more granular progress.
Further reference:
MDN docs https://developer.mozilla.org/en-US/docs/Web/API/Web_Workers_API
Browser support http://caniuse.com/#feat=webworkers
If you would like to kick off loading an asynchronous JavaScript bundle through require.ensure alongside other Promises, here is how you can achieve that:
const requireEnsurePromise = new Promise((resolve) => {
require.ensure(['./modulePath'], function (requireEnsure) {
console.log('The module is fetched but not evaluated yet');
resolve(requireEnsure.bind(null, require.resolve('./modulePath')));
});
});
Promise.all([
fetch('/api/relevant/stuff').then(response => response.json()),
requireEnsurePromise,
]).then((values) => {
if (values[0]) {
// DO STUFF
}
console.log('right before module is evaluated');
const evaluatedModule = values[1]();
});
Webpack statically determines what each module path corresponds to in its internal representation (which could be an integer or a string). Whenever webpack recognizes the require.ensure([], fn) pattern, it looks at the function body of the fn callback to work out which modules belong to the chunk. In order to delay the evaluation until after the JavaScript bundle has been fetched, in a Promise fashion, a plain require('./modulePath') cannot be present inside the require.ensure success callback, as it would evaluate the module right away. Webpack translates require('./modulePath') into something like __webpack_require__(2343423), which is why you want to avoid using it in this scenario; require.resolve('./modulePath') instead yields the internal module id without evaluating the module.
Related
I'm using Webpack to bundle source code and assets for a game. I also use the CompressionPlugin() to make static gzip files available so that my web server can send the precompressed files when appropriate. Some of the game assets are large so I have a loading experience up front that shows a progress bar while the assets are fetched.
Unfortunately a problem arises on Chrome during loading when receiving a gzip response for an XMLHttpRequest, in that the onprogress total is always 0. There are some imperfect workarounds for this, such as this solution, but they're not entirely appropriate for my case.
Instead I'd like to inject the compressed & decompressed file sizes of specific bundled assets into the html or javascript so that they're immediately accessible to the loading javascript code. Injecting something as follows would be perfect:
<script>
const assetSizes = {
"game.25317abd3eb6cf0fb0f1.wasm": {uncompressed: 8192, compressed: 1024},
"game.25317abd3eb6cf0fb0f1.data": {uncompressed: 8192, compressed: 1024}
};
</script>
I'm somewhat new to webpack so I'm not entirely sure how to approach this. I've considered using the WebpackManifestPlugin and implementing a custom generate option function. This would allow me to control the output of the generated manifest.json, but it's still not clear to me if this is the right thing to do or how I'd go about subsequently injecting this file's contents ahead of my own loading JavaScript code.
Perhaps there is a better approach that would be more appropriate?
Update: I've been trying to progress this further and it feels like a custom webpack plugin might be the right direction to go. If I tap into the afterEmit compiler hook, it seems I have access to the file sizes I need and I can construct an appropriate dictionary:
class InjectFileSizePlugin {
apply(compiler) {
compiler.hooks.afterEmit.tap(
"InjectFileSizePlugin",
(compilation) => {
const fileSizes = {};
for (const [assetPath, assetInfo] of compilation.assetsInfo) {
if (assetInfo.related) {
fileSizes[assetPath] = {
uncompressed: assetInfo.size,
compressed: -1
};
if (assetInfo.related.gzipped) {
const gzippedAssetInfo = compilation.assetsInfo.get(
assetInfo.related.gzipped
);
if (gzippedAssetInfo) {
fileSizes[assetPath].compressed =
gzippedAssetInfo.size;
}
}
}
}
console.log(fileSizes); // <-- output is as I'd like, how to inject it now?
}
);
}
}
What's not clear, though, is how I can now go about injecting this fileSizes data into the bundle, as afterEmit is called very late in the compilation, after the bundle JavaScript has been emitted. There is an additionalPass compiler hook but I currently can't figure out how it works.
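For the formatting half of the problem, a plain helper can at least turn the collected dictionary into the script tag from above (just a sketch; where to emit the resulting string, e.g. via compilation.emitAsset or an HtmlWebpackPlugin hook, is left open):

```javascript
// Sketch: render the fileSizes dictionary collected in the plugin
// above as the <script> snippet from the question. The emit step
// (e.g. compilation.emitAsset in webpack 5) is intentionally omitted.
function renderAssetSizesScript(fileSizes) {
  return [
    "<script>",
    "const assetSizes = " + JSON.stringify(fileSizes, null, 2) + ";",
    "</script>",
  ].join("\n");
}
```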
I'm using Webdriver.io to run tests on a large number of pages. Because all the specs for the pages are in a JSON file, I have a special class that sets up the test. It looks like this:
module.exports = class PageTester {
suiteName = '';
browser = {};
constructor (suiteName, browser) {
this.suiteName = suiteName;
this.browser = browser;
}
testModel(currentModel) {
describe(this.suiteName + ' endpoint ' + currentModel.url, () => {
this.browser.url(currentModel.url);
/* it() statements for the test */
});
}
}
Then in my specs folder I have a file that loads the JSON and plugs it into the PageTester class, like this:
const PageTester = require('../modules/PageTester');
const models = require('/path/to/some/file.json');
const pageTester = new PageTester('Some Name', browser);
for (const modelName in models) {
pageTester.testModel(models[modelName]);
}
When I run this code, WebdriverIO gives me the following warning:
WARN #wdio/mocha-framework: Unable to load spec files quite likely because they rely on `browser` object that is not fully initialised.
`browser` object has only `capabilities` and some flags like `isMobile`.
Helper files that use other `browser` commands have to be moved to `before` hook.
Spec file(s): /suite/test/specs/test.js
All the tests seem to run fine, so I don't actually understand what this warning is complaining about and what negative consequences ignoring it may have. So I would like to a) understand why this is happening and b) how it would be possible to get rid of this warning given the way my code is set up.
In my case, I resolved it by fixing the path of the required files. I noticed that my path was wrong. But the error that wdio throws is not really helpful. :/
You can only interact with the browser object inside it blocks, because it is not fully accessible before the browser session is started.
See https://webdriver.io/blog/2019/11/01/spec-filtering.html for details.
You should simply ensure that your spec file and the respective page file are kept in a similar folder structure.
I am looking at @Domenic's simple example of using RequireJS, from this answer:
simple example for using require.js
which I am including here.
shirt.js:
define({
color: "black",
size : "large"
});
logger.js:
define(function (require) {
var shirt = require("./shirt");
return {
logTheShirt: function () {
console.log("color: " + shirt.color + ", size: " + shirt.size);
}
};
});
main.js:
define(function (require) {
var shirt = require("./shirt");
var logger = require("./logger");
alert("Shirt color is: " + shirt.color);
logger.logTheShirt();
});
main.html:
<script data-main="../js/main" src="../js/require.js"></script>
There's something very strange going on:
at the point where shirt.color is used in main.js, shirt.js and logger.js have just been scheduled to be loaded, asynchronously (I presume), so shirt.js hasn't actually been read yet. The reason I presume the loading is asynchronous is that my impression is that synchronous loading has been pretty much outlawed in JavaScript in Chrome (XMLHttpRequest still has an option to be synchronous, but if used, it warns on the Chrome console: Synchronous XMLHttpRequest on the main thread is deprecated because of its detrimental effects to the end user's experience.).
And yet, this little app seems to work, reliably.
It even works reliably if I replace "./shirt.js" with a URL referring to a resource on the other side of the world, and I clear my browser cache before loading the HTML page.
How can this be??
If I look at timings in the Chrome dev console, it appears that the time-consuming shirt.js load actually happened before the function that requested it even started.
I.e. somehow it knew to load shirt.js before anything in the program referred to "./shirt" at all.
It seems there is some very sneaky magic going on here.
So I'm interested to know:
how did requirejs know to load shirt.js before anything asked for it?
to what extent can this be relied on?
how would one modify this example to avoid relying on the sneaky magic?
for those of us who don't trust sneaky magic, is there a way to disable it when using requirejs?
how did requirejs know to load shirt.js before anything asked for it?
This is your module:
define(function (require) {
var shirt = require("./shirt");
var logger = require("./logger");
alert("Shirt color is: " + shirt.color);
logger.logTheShirt();
});
When define is called, RequireJS detects that it was called without a dependency list. So it scans the callback you pass for instances of the require call taking a single argument which is a string literal, and it grabs the single argument and makes a list of these arguments that it takes as the dependency list of the module. Your module becomes functionally equivalent to this:
define(["require", "./shirt", "./logger"], function (require) {
var shirt = require("./shirt");
var logger = require("./logger");
alert("Shirt color is: " + shirt.color);
logger.logTheShirt();
});
So ./shirt and ./logger are loaded before the callback is actually called. Then when require("./shirt") and require("./logger") are executed, they are just lookups in a map of already loaded modules. (And, because of this, calls to require with a single string argument can only work when called in a callback passed to define. Otherwise, you get the dreaded "Module has not been loaded yet for context" error.)
This capability is called the "CommonJS sugar" because a require call that uses a single parameter which is a string and returns a module is what CommonJS supports natively. The native AMD require call takes an array of dependencies as the first argument and an optional callback to which the resolved modules are passed.
to what extent can this be relied on?
I've relied on the CommonJS sugar for hundreds of modules without problems.
The one limitation of this pattern is if you try to pass something other than a string literal to require. For instance, if you do this:
define(function (require) {
var shirtName = "./shirt";
var shirt = require(shirtName);
This will throw off RequireJS. It won't detect that your module needs the ./shirt module and you'll get the error I mentioned above.
how would one modify this example to avoid relying on the sneaky magic?
define(["./shirt", "./logger"], function (shirt, logger) {
alert("Shirt color is: " + shirt.color);
logger.logTheShirt();
});
for those of us who don't trust sneaky magic, is there a way to disable it when using requirejs?
There's no flag you can use to prevent RequireJS from supporting the CommonJS sugar. If you want to avoid relying on it in your own code, you can code your modules like I've shown in the previous snippet: call define with a list of dependencies as the first argument, and get the modules as arguments of your callback.
This being said, I see no good reason to do that. I've used RequireJS for years and if anything I've been moving code that uses define with a list of dependencies to code that relies on the CommonJS sugar. I find that the latter works better with various development tools.
So I've just updated to webpack 2 and have my first working setup where webpack automatically creates chunks by looking at System.import calls. Pretty sweet!
However, I load the initial chunk with an AJAX call so that I can show the progress while loading.
So my question is: can I overwrite or change the function of System.import somehow, so that it will use an AJAX request that I can listen to for events, instead of loading the chunk with a <script> tag?
No, unfortunately not. webpack 2 translates System.import() to ordinary require.ensure() calls, which just use the <script> tag. Even the official WHATWG Loader Spec does not provide an API for this kind of event. I've created an issue for this question.
Regarding webpack: there is a way to implement your own require.ensure(). However, since chunk loading is an integral part of webpack, this requires diving a little deeper. I'm not sure how important this is to you, but you might be interested in how things work inside webpack, so let's take a look:
In webpack, all internal features are implemented as plugins. This way, webpack is able to support a lot of different features and environments. So, if you're interested how things are implemented in webpack, it's always a good idea to a) take a look at WebpackOptionsApply or b) search for a specific string/code snippet.
Chunk loading depends heavily on the given target, because you need different implementations for each environment. Webpack allows you to define custom targets. When you pass in a function instead of a string, webpack invokes the function with a compiler instance. There you can apply all the required plugins. Since our custom target is almost like the web target, we just copy all the stuff from the web target:
// webpack.config.js
const path = require("path");
const JsonpTemplatePlugin = require("webpack/lib/JsonpTemplatePlugin");
const NodeSourcePlugin = require("webpack/lib/node/NodeSourcePlugin");
const FunctionModulePlugin = require("webpack/lib/FunctionModulePlugin");
const LoaderTargetPlugin = require("webpack/lib/LoaderTargetPlugin");
function customTarget(compiler) {
compiler.apply(
new JsonpTemplatePlugin(compiler.options.output),
new FunctionModulePlugin(compiler.options.output),
new NodeSourcePlugin(compiler.options.node),
new LoaderTargetPlugin("web")
);
}
module.exports = {
entry: require.resolve("./app/main.js"),
output: {
path: path.resolve(__dirname, "dist"),
filename: "bundle.js"
},
target: customTarget
};
If you take a look at each plugin, you will recognize that the JsonpTemplatePlugin is responsible for loading chunks. So let's replace that with our own implementation. We call it the XHRTemplatePlugin:
function customTarget(compiler) {
compiler.apply(
new XHRTemplatePlugin(compiler.options.output),
new FunctionModulePlugin(compiler.options.output),
new NodeSourcePlugin(compiler.options.node),
new LoaderTargetPlugin("my-custom-target")
);
}
Our XHRTemplatePlugin is responsible for providing the code in the main chunk, in each child chunk and for hot updates:
function XHRTemplatePlugin() {}
XHRTemplatePlugin.prototype.apply = function (compiler) {
compiler.plugin("this-compilation", function(compilation) {
compilation.mainTemplate.apply(new XHRMainTemplatePlugin());
compilation.chunkTemplate.apply(new XHRChunkTemplatePlugin());
compilation.hotUpdateChunkTemplate.apply(new XHRHotUpdateChunkTemplatePlugin());
});
};
Maybe you can also re-use the JsonpChunkTemplatePlugin and JsonpHotUpdateChunkTemplatePlugin plugins, but this depends on your use-case/implementation.
Your XHRMainTemplatePlugin now may look like this:
function XHRMainTemplatePlugin() {}
XHRMainTemplatePlugin.prototype.apply = function (mainTemplate) {
mainTemplate.plugin("require-ensure", function(_, chunk, hash) {
return this.asString([
// Add your custom implementation here
"fetch()"
]);
});
};
I won't go any further here because I think this answer is already long enough. But I recommend creating a really small example project and checking the output created by webpack. The internal webpack plugins may look a little bit scary at first sight, but most of them are really short and do just one thing. You can also get some inspiration from them.
When you make a project with the Meteor framework, it packages all the files together, but there doesn't seem to be a way to explicitly say "I want this file to be loaded before that one".
Let's say, for example, I have 2 javascript files: foo.js and bar.js.
The file bar.js actually contains code depending on the one inside foo.js, but Meteor is loading bar.js before foo.js, breaking the project.
In node.js I would simply use require('./bar') inside foo.js
In the browser, I would put a <script> tag pointing to foo.js and another, after, pointing to bar.js, in order to load the files in the correct order.
How can we do that in Meteor?
According to the Meteor documentation, files are currently loaded in this order:
Files in [project_root]/lib are loaded first
Files are sorted by directory depth. Deeper files are loaded first.
Files are sorted in alphabetical order.
main.* files are loaded last.
Source:
http://docs.meteor.com/#structuringyourapp
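Just to make the ordering concrete, the documented rules can be approximated with a comparator (an illustrative sketch only, not Meteor's actual implementation):

```javascript
// Rough sketch of Meteor's documented ordering rules: lib/ first,
// deeper paths first, alphabetical, and main.* last. Not the real
// algorithm, just an illustration of the rules listed above.
function meteorLoadOrder(paths) {
  const key = (p) => {
    const parts = p.split("/");
    const base = parts[parts.length - 1];
    return {
      main: base.startsWith("main.") ? 1 : 0, // main.* last
      lib: parts.includes("lib") ? 0 : 1,     // lib/ first
      depth: -parts.length,                   // deeper first
      path: p,                                // alphabetical tie-break
    };
  };
  return paths.slice().sort((a, b) => {
    const ka = key(a), kb = key(b);
    return ka.main - kb.main || ka.lib - kb.lib ||
           ka.depth - kb.depth || ka.path.localeCompare(kb.path);
  });
}
```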
Not a solution for all scenarios, but I think ideally anything that is dependent on other code would be placed in a Meteor.startup function, to ensure everything is already loaded.
You can always use a JS loader like yepnope.js and add it to the client.js file. This works for me.
I have a set of utility functions that I structured under a common namespace (a JS global).
I.e.
// utils/utils.js
Utils = {};
and then in subfolders:
// utils/validation/validation.js
Utils.Validation = {};
// utils/validation/creditCard.js
Utils.Validation.creditCard = ... // validation logic etc
Also, I have a bunch of code that uses Utils and its subobjects.
Obviously, this structure doesn't work, as Meteor loads subfolders first.
To make it work as expected, I had to create /subfolder/subfolder/subfolder hierarchies with meaningless names, and then shove the root object into the deepest subfolder and the branch objects into the less deep ones.
It is extremely counterintuitive for my taste, and error-prone (suppose you have a component that is even deeper in the folder structure).
To address this issue, I used the Q library with defers and promises. The solution still isn't clean, as it forces you to repeat routine code and checks, but it gives you full control over the load order without messing with the directory structure (hello to the people who say you can organise Meteor code however you want).
Example:
//utils.js
UtilsDefer = UtilsDefer || Q.defer();
UtilsDefer.resolve({
// here some root utils stuff
});
//cards.js
// here we'll depend on Utils but don't want to care about directory structure
UtilsDefer = UtilsDefer || Q.defer(); // it will be a) already
// resolved defer from utils.js, or b) new defer that will
// be resolved later in utils.js
UtilsDefer.then(function(Utils) {
// do something with utils usage, or for instance add some fields here
Utils.CreditCardDefer = Utils.CreditCardDefer || Q.defer();
Utils.CreditCardDefer.resolve({
// Credit card utils here
})
});
//someOtherFile.js
// it will be pain to use sub-objects with this method though:
UtilsDefer = UtilsDefer || Q.defer();
UtilsDefer.then(function(Utils) {
Utils.CreditCardDefer = Utils.CreditCardDefer || Q.defer();
Utils.CreditCardDefer.then(function(CreditCard) {
// do stuff with CreditCard _if_ you need to do it on startup stage
})
});
This is an example of a rather narrow use case, as mostly you will be happy handling these globals inside some user interaction callbacks or Meteor.startup, where everything is already initialised. Otherwise, if you want fine-grained control over the initialisation order at a very early stage, this could be a solution.
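For what it's worth, the same defer pattern can be sketched with native Promises instead of Q (an illustrative stand-in, not a drop-in replacement for the snippets above; the greet helper is made up):

```javascript
// Sketch: a Q.defer()-like helper built on native Promises. Each file
// resolves its part of the namespace; consumers .then() on the promise
// instead of touching a global directly, so file load order stops mattering.
function deferred() {
  let resolve;
  const promise = new Promise((r) => { resolve = r; });
  return { promise, resolve };
}

// utils.js equivalent
const UtilsDefer = deferred();
UtilsDefer.resolve({ greet: (name) => "hello " + name });

// consumer equivalent (cards.js)
UtilsDefer.promise.then((Utils) => {
  // safe to use Utils here regardless of which file loaded first
  Utils.greet("meteor");
});
```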