I'm trying to run several Modernizr builds at once, but it appears to break the scope of my function.
It's very hard to explain, so have a look at this example; give it a go if you don't believe me:
[1,2,3,4,5].forEach(function(i){
  require("modernizr").build({}, function (result) {
    console.log(i);
  });
})
outputs:
5
5
5
5
5
Instead of the expected 1, 2, 3, 4, 5, which is what any similar function would produce.
I have not come across this behaviour before in all my years of coding in ECMAScript-like languages, and I have built my project (and previous projects) around the idea that you cannot break a function's scope like that.
It breaks any system based on promises or even just simple callbacks.
It's baffled me all day, and I can't find an appropriate fix for it.
I'm having a very hard time even conceptualizing what is causing this to happen.
Please help.
EDIT:
OK, it appears you're all hung up on the forEach...
Here's another example that will make it a little clearer:
function asd(i){
  require("modernizr").build({}, function (result) {
    console.log(i);
  });
}
asd(1);
asd(2);
asd(3);
asd(4);
outputs
4
4
4
4
What on earth is happening?
The issue specific to Modernizr had to do with a global variable being clobbered.
The build command is basically a large RequireJS configuration function, all powered by a large config object. There are some basic settings that are always true, established at the top of the function:
{
  optimize: 'none',
  generateSourceMaps: false,
  optimizeCss: 'none',
  useStrict: true,
  include: ['modernizr-init'],
  fileExclusionRegExp: /^(.git|node_modules|modulizr|media|test)$/,
  wrap: {
    start: '\n;(function(window, document, undefined){',
    end: '})(window, document);'
  }
}
Then, since Modernizr works in both the browser and in node without changes, there needs to be a way for it to know whether it should load its dependencies via the filesystem or via HTTP. So we add some more options, like baseUrl, inside of an environment check:
if (inBrowser) {
  baseRequireConfig.baseUrl = '/i/js/modernizr-git/src';
} else {
  baseRequireConfig.baseUrl = __dirname + '/../src';
}
At this point, the config object gets passed into requirejs.config, which wires up require and allows us to start calling build.
Finally, after all of that has been set up, we have a build function that ends up modifying the config object yet again with build-specific settings (the actual detects in your build, a regex to strip out some AMD crud, etc.).
So here is a super simplified pseudocode version of what ended up happening:
var config = {
  name: 'modernizr'
};

if (inBrowser) {
  config.env = 'browser';
} else {
  config.env = 'node';
}

requirejs.config(config);

module.exports = function (options, callback) {
  // note: this mutates the module-level config object shared by every call
  config.out = function (output) {
    // code to strip out AMD ceremony, add classPrefix, version, etc
    callback(output);
  };
  requirejs.optimize(config);
};
Spot the problem?
Since we are touching the .out method of the config object (whose scope is the entire module, and whose state is therefore preserved between build() calls) right before we run the asynchronous requirejs.optimize function, the callback you were passing was rewriting the .out method every time build was called.
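A minimal sketch of the kind of fix this needs - give each call its own copy of the config so that one build's callback can no longer clobber another's (the deepCopy helper here is an assumption for illustration, not Modernizr's actual code):
var baseConfig = {
  name: 'modernizr'
};

module.exports = function (options, callback) {
  var config = deepCopy(baseConfig); // hypothetical deep-copy helper
  config.out = function (output) {
    // each call now owns a private .out, so callbacks cannot overwrite each other
    callback(output);
  };
  requirejs.optimize(config);
};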
This should be fixed in a couple of hours in Modernizr.
The function block is called asynchronously, so this behavior is expected: the build call is much slower than the walk of your forEach, so by the time the function (result) {} block runs, i is already 5.
This is much the same problem as described in Node.JS: How to pass variables to asynchronous callbacks?, and you should be able to use the same solution:
[1,2,3,4,5].forEach(function(i){
  (function(i) {
    require("modernizr").build({}, function (result) {
      console.log(i);
    });
  })(i);
})
Untested, but something like that should work.
Related
File my_script.js:
(function() {
  console.log("IMPORTED");
})();
Calling this file (run_me.js) should cause IMPORTED to print twice:
require("./my_script");
require("./my_script");
However it only prints once.
How can I change run_me.js so that IMPORTED is printed to the console twice?
Assume for this question, no changes can be made to my_script.js
require() caches its results. The first time a module is required, its initialization code runs; after that, the cache just returns the value of module.exports without running the initialization code again. This is a very desirable feature of node.js modules.
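A quick sketch of what that caching implies, using the files from the question:
const a = require("./my_script"); // runs my_script.js: prints IMPORTED
const b = require("./my_script"); // cache hit: prints nothing
console.log(a === b);             // true - both names refer to the same cached exports object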
If you want code to be run each time, then you should export a function that you can call after you require it like this:
Your module:
module.exports = function() {
  console.log("IMPORTED");
};
Requiring it and running the code each time
require("./my_script")();
require("./my_script")();
Also, please note that there is no reason to use an IIFE in a module. The node.js module is automatically wrapped in a private function already so you don't need to do it again.
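For reference, Node already wraps every module's code in a function along these lines (the documented module wrapper) before running it:
(function (exports, require, module, __filename, __dirname) {
  // the module's code runs here, already inside a private function scope
});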
As you now say in a comment (though your question does not say it directly), if you don't want to edit my_script at all (which is simply the wrong way to solve this issue), then you have to delete the module from the node.js cache before requiring it again, which can be done like this:
delete require.cache[require.resolve('./my_script')];
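Applied to run_me.js, that work-around would look like this:
require("./my_script");                               // prints IMPORTED
delete require.cache[require.resolve("./my_script")]; // evict the cached module
require("./my_script");                               // prints IMPORTED again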
I would not recommend this as a solution. It's not the proper way to code in node.js; it's a hack work-around. And it is not compatible with ESM modules.
If you use jest and want code to be run each time for testing, you can use jest.isolateModules:
jest.isolateModules(() => {
  require("./my_script");
});

jest.isolateModules(() => {
  require("./my_script");
});
I don't think it is possible without modifying the my_script.js file, especially since, as you show it, it doesn't export anything.
It will execute the first time you require it (which is why you see IMPORTED once), but nothing will happen on future calls to require, because the cached value which is returned (i.e. module.exports) is empty.
See below for an example of what I think you want (except that my_script.js has been modified). The biggest difference is that in your original my_script.js file the function was actually executed, while in the example below it is only defined, and then actually executed by the require calls in the run_me.js file.
File my_script.js:
module.exports = () => console.log("IMPORTED");
File run_me.js:
require('./my_script')(); // note the () at the end, which actually calls the function
require('./my_script')(); // note the () at the end, which actually calls the function
You can use the require-uncached package, an npm module that clears the cache and loads a module fresh from source each time:
https://www.npmjs.com/package/require-uncached
// here ./foo exports a function that increments and returns a module-level counter
const requireUncached = require('require-uncached');
require('./foo')();
//=> 1
require('./foo')();
//=> 2
requireUncached('./foo')();
//=> 1
requireUncached('./foo')();
//=> 1
I wrote a piece of code to get CSS contents from a file and I want to get that data inside my helper function.
Server-side:
Meteor.methods({
  'getCSS': function(filename) {
    return '<style>' + Assets.getText('css/' + filename) + '</style>';
  }
});
The css folder is located inside a private folder and consists of CSS files required for several pages. To my knowledge, the server-side code works correctly.
Client-side:
Template.home.helpers({
  'css': function() {
    var asyncFn = function(fn, cb) {
      Meteor.call('getCSS', fn, function(err, res) {
        console.log(res); // prints data correctly
        cb && cb(null, res);
      });
    };
    var syncFn = Meteor.wrapAsync(asyncFn);
    var result = syncFn('home.css');
    console.log(result); // undefined
    return result;
  }
});
After researching how to use Meteor.wrapAsync, this is the best solution that I could come up with. Not sure what I missed. I followed the instructions from this blog.
You can't use Meteor.wrapAsync on the client, because on the server the illusion of synchronicity depends on Fibers and there is no such parallel on the client.
Fibers effectively inlines asynchronous functions so that other code can run while waiting for the callback. Among other things, it helps eliminate the callback pyramid-of-doom anti-pattern. However, it does make it harder to reason about your code: since JavaScript objects are shared between fibers, you have to think explicitly about when your code might yield (be voluntarily pre-empted, such as by making a database call).
In any case, it will probably be a while before something similar becomes available on the client - as you can see, Fibers is implemented as a C++ package for node and can't be replicated simply in JavaScript, since it actually makes asynchronous function calls look synchronous.
In your case, the proper way to lazy-load CSS (as opposed to just including it in the rest of the Meteor bundle) is to put it in the public/ folder (or include it from a package with {isAsset: true}) and load it with a tag in <head> when you need it.
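For example, a minimal sketch of lazy-loading a stylesheet on the client; the path here is an assumption, adjust it to wherever the file ends up under public/:
// append a <link> tag when the stylesheet is actually needed
var link = document.createElement("link");
link.rel = "stylesheet";
link.href = "/css/home.css"; // assumed location under public/
document.head.appendChild(link);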
I don't exactly understand what you are trying to achieve here.
From my point of view, CSS should be loaded and compiled at first load. I don't think it is a good idea to load CSS on the fly... you'll not be able to unload it.
If you're inside a Tracker computation (in a router, for example, or in rendered callbacks) you could use the ReactiveMethod package to get something like a synchronous call. It uses a Tracker dependency to wait for the response.
Another thing: you could also set up a server-side route to serve the CSS files from the private folder...
Hope it helps,
Cheers
Instead of storing the return value in a local variable, I used a Session. Session.get is reactive, so the helper reruns once the method callback calls Session.set, and the second run returns the value. Now it works!
Template.home.helpers({
  'css': function() {
    Meteor.call('getCSS', 'home.css', function(err, res) {
      Session.set('css', res);
    });
    return Session.get('css');
  }
});
I'm using require.js and I load a library that handles tracking. However, some of my users block it from loading.
Since it's not a critical part of my app, I would like everything to still work even when the tracking library fails to load.
I've looked at the documentation for handling errors via errbacks, config fallbacks, and the global onError function.
I was thinking of something like:
requirejs.onError = function (err) {
  var modules = err.requireModules;
  for (var i = 0; i < modules.length; i++) {
    if (modules[i] == 'tracking-lib') {
      // Would be great if I could do something like define('modules[i]', [], null)
    }
  }
};
Similar questions (that don't solve my problem):
requireJS optional dependency
Null dependencies in RequireJS when ajax returns a 404
I have created a little Require plugin (code on GitHub) that can lazy-load AMD modules, like this:
define(["lazy!myModule"], function(myModule) {
myModule.get().then( // get() returns a promise
function(m) {
// handle success, module is in m argument
},
function(e) {
// handle error
}
);
});
You could use it as is. Alternatively, you could create a similar plugin, e.g. optional, without depending on require-lazy. The optional plugin could be used as:
define(["optional!myModule"], function(myModule) {
// code as above
// or there may be a way to make optional! return null, if loading failed
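A minimal, untested sketch of such an optional plugin; it leans on the errback that a local require accepts, and resolves with null when loading fails:
define("optional", [], function () {
  return {
    load: function (name, req, onload, config) {
      req([name],
        function (mod) { onload(mod); },   // loaded fine: hand the module through
        function (err) { onload(null); }); // load failed: resolve with null instead
    }
  };
});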
The code from my plugin might be of help, as of course might the docs for the plugin API.
Still a simpler - but IMHO dirtier - way could be to require the optional module inside the client module, using the global require function:
define([], function() { // USE THE GLOBAL require!!!
  require(["myOptionalModule"],
    function(myOptionalModule) {
      // loaded successfully
    },
    function(error) {
      // load failed
    }
  );
});
(Also take a look here - related to the last code)
In conclusion, I don't think there is a way to load a module optionally with the plain API. You will have to implement it yourself somehow and handle the asynchronicity with callbacks, as above, either inside the application code or in the plugin.
Let's assume I have an AMD module that conditionally requires a second module in some environments:
define(["require"], function(require) {
var myObj = {
foo: console.error.bind(console)
};
if(browserEnv)
require(["./conditional-polyfill"],function(polyfill){
myObj.foo = console.log.bind(console,polyfill) ;
});
return myObj; //returns before conditional require is satisfied
});
The question is: How can I delay the define() call to return/callback AFTER the conditional require has been completed?
I.e. the code below fails:
require(["module-from-above"],function(logger){
logger.foo("Hello!"); //console.error gets called
});
My thoughts on solutions to this issue:
If I inline ./conditional-polyfill, everything would work. However, that just circumvents the problem and doesn't work for every case. I want it modularized for a reason.
I could return a Deferred object instead of myObj that gets fulfilled by ./conditional-polyfill later. This would work, but it's really ugly to call loggerDeferred.then(function(logger){ ... }); all the time.
I could write an AMD loader plugin for this module and call the callback as soon as everything is ready. Again, this would work, but custom loader plugins don't work with my build tool.
All solutions I can think of are more hacks than good code. However, I think that my issue isn't too far-fetched. So, how to handle this?
Push the conditional outside of the "factory function" (the name commonly used in the AMD community to refer to the callback passed to require and define):
;(function() {
  function factory(require, polyfill) {
    var myObj = {
      foo: console.error.bind(console)
    };
    if (polyfill) {
      myObj.foo = console.log.bind(console, polyfill);
    }
    return myObj;
  }

  var need = ['require'];
  if (browserEnv) {
    need.push("./conditional-polyfill");
  }
  define(need, factory);
})();
I would use a Deferred, as you say.
The deferred pattern is the ideal solution to this kind of issue, because it allows you to tie complex async operations together in a consistent way.
It will make your code a bit larger, but it's a simple solution compared to modifying the loader and the build tools.
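For illustration, a sketch of that approach with a plain Promise standing in for the Deferred (assuming a Promise implementation is available in your environment):
define(["require"], function (require) {
  return new Promise(function (resolve) {
    if (browserEnv) {
      require(["./conditional-polyfill"], function (polyfill) {
        resolve({ foo: console.log.bind(console, polyfill) });
      });
    } else {
      resolve({ foo: console.error.bind(console) });
    }
  });
});
Consumers then unwrap it once:
require(["module-from-above"], function (loggerPromise) {
  loggerPromise.then(function (logger) {
    logger.foo("Hello!"); // console.log this time, with the polyfill loaded
  });
});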
I just rewrote backbone-mongodb to be really compatible with Backbone. The original solution had nice Vows tests, and I would like my code to be tested as well, but I simply have no idea how to do it.
Here is an example, I would like to test:
update: function(callback) {
  var model = this.model;
  this._withCollection(function(err, collection) {
    if (err) callback(err);
    else {
      var attributes = _.clone(model.attributes);
      delete attributes['_id'];
      collection.update({ _id: new ObjectID(model.id) }, { $set: attributes }, { safe: true, upsert: false }, function(err) {
        model.fetch();
        callback(null, model.toJSON());
      });
    }
  });
},
This code has nothing special in it. It uses the node-mongodb-native driver and updates a record in the database. AFAIK, proper testing would mean checking at least that (1) collection.update was called with the given arguments, (2) the callback is called when and how it should be, and (3) the model contains the new data.
With Vows I can check (2), but I have no idea at all how to check (1). Actually, the same holds for every unit testing framework I know about: QUnit, Jasmine. I'm sure this can be done somehow, and I've decided to learn at least one of them, but it's hard to make a choice when you get stuck at the beginning. :)
I know about sinon.js and think that everything can be tested by mocking all the objects I have until I end up having the collection mocked as well, but this seems extremely clumsy. Could someone help me write the above tests, please? I'll be happy to write up a tutorial on it.
I would use Jasmine for that purpose; I don't know how familiar you are with that library, but they have a plugin to use jQuery for writing spec tests, and you can load fixtures/templates and run tests on them.
For your particular case, assuming that function is part of a MyObj "class", I would write something like:
describe("My object tests", function() {
it("Should update my obj", function () {
var noError, flag = false;
MyObj.update(function (err, model){
flag=true;
noError= err==null;
// or you can do other checks on the result
})
// we wait for 5 sec until get a response (flag==true)
waitsFor(function (){ return flag}, "Timeout on update", 5000);
// check if there are no errors
expect(noError).toBeTruthy();
});
});
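And to cover (1) from the question, here is a hedged sketch with sinon.js; the stubbed _withCollection and the fake collection are test scaffolding rather than real backbone-mongodb API, and it assumes MyObj.model is set to a model with attributes and an id:
// hand the code under test a fake collection so the call can be inspected
var fakeCollection = { update: sinon.stub() }; // never yields, so the inner callback stays idle
sinon.stub(MyObj, '_withCollection').yields(null, fakeCollection);

MyObj.update(function () {});

// (1): collection.update received the id selector, the $set payload and the options
expect(fakeCollection.update.calledWith(
  sinon.match.has('_id'),
  sinon.match.has('$set'),
  sinon.match({ safe: true, upsert: false })
)).toBeTruthy();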