I'm rewriting a simple app using HTML/CSS/JavaScript that creates animations with images using intervals, and there's a bunch of buttons that control these animations.
As it grows it's becoming really messy, with logic mixed with jQuery DOM manipulation all through one JavaScript file.
So I decided to use the Module design pattern.
Based on my description of the app, is there a problem with this callback implementation for a module?
Or with the module implementation?
In this example, what is the best approach to declare private variables and give access to them through the public api? Getters and setters? Are they really necessary? I want to write readable code but I don't want to over-architect things.
(function($) {
    $.Module = function(options) {
        var module = {
            options: $.extend({
                callbacks: {
                    start: false
                }
            }, options),
            start: function(callback) {
                //console.log('private start method');
                if (typeof callback === "function") {
                    callback();
                }
            }
        };
        // public api
        return {
            start: function() {
                //console.log('public start method');
                module.start(module.options.callbacks.start);
            }
        };
    };
}($));
var myModule = $.Module({
    callbacks: {
        start: function() {
            console.log('start callback!');
        }
    }
});
myModule.start();
Here is a sample.
Just because it seems to work for me, and I've seen other implementations with code that looks like this:
callback: function(method) {
    if (typeof method === "function") {
        var args = [];
        for (var x = 1; x <= arguments.length; x++) {
            if (arguments[x]) {
                args.push(arguments[x]);
            }
        }
        method.apply(this, args);
    }
},
I'm not sure what this last code is supposed to do. Is it intended to return data to the callback function I registered when instantiating the module? If so, how does it work?
Is there a problem with this callback implementation for a module?
Not if this is what you want.
Or with the module implementation?
Your initialization of the options property should probably use a deep extend; the current shallow extend overwrites the complete callbacks object.
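For example (a sketch; the extra stop default is hypothetical, just to show what a partial override does):
var fn = function () {};
// Shallow extend: the caller's callbacks object replaces the whole default object
$.extend({ callbacks: { start: false, stop: false } }, { callbacks: { start: fn } });
// => { callbacks: { start: fn } }              (the stop default is lost)
// Deep extend (first argument true): nested objects are merged instead
$.extend(true, { callbacks: { start: false, stop: false } }, { callbacks: { start: fn } });
// => { callbacks: { start: fn, stop: false } }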
I'm not sure what this other code I found is supposed to do. Is it intended to return data to the callback function
Yes. It does support multiple arguments, using the arguments object.
… I registered when instantiating the module?
No, it does not have to do with registration. It will call the method that is passed as a parameter to this callback function.
Using this code will release you from the need to do that typeof check every time. You would just write
helper.callback(this.options.callbacks.start, /* optional arguments */)
// instead of
if (typeof this.options.callbacks.start == "function")
this.options.callbacks.start(/* arguments */)
However, until you actually feel the need for this helper, you won't need it.
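If you do end up adopting such a helper, a slightly tightened sketch avoids the off-by-one loop bound and the truthiness check in the snippet you found, which would silently drop falsy arguments such as 0 or '':
callback: function(method) {
    if (typeof method === "function") {
        // copy arguments[1..n] so the method itself isn't forwarded
        var args = Array.prototype.slice.call(arguments, 1);
        return method.apply(this, args);
    }
},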
Related
I am working on a project where I am observing the types of each binding-layer function that the Node.js JavaScript layer calls. For observing types, I created a stub using sinon that looks something like this:
var originalProcessBinding = process.binding;
sinon.stub(process, 'binding').callsFake(function (data) {
    var res = originalProcessBinding(data);
    // custom code here
    return res;
});
So my idea is to look at each object inside res and see if it's a Function. If it is, create a stub that records the state and then calls the original Function. The custom code looks something like:
_.forEach(res, function(value, key) {
    if (_.isFunction(value)) {
        sinon.stub(res, key).callsFake(function() {
            var args = arguments;
            // do some processing with the arguments
            save(args);
            // call the original function
            return value(...arguments);
        });
    }
});
However, I am not sure if this handles all the types of returns. For instance, how are the errors handled? What happens if the function is asynchronous?
I ran the Node.js test suite and found lots of failing test cases. Is there a better way to stub the functions? Thanks.
Edit: The failing test cases have errors in common that look like Callback was already called or Timeout or Expected Error.
Unfortunately, even though many errors can be fixed, it's hard to add sinon to the build process. I implemented my own stubbing methods in vanilla JS to fix this. Anyone who's looking to stub internal Node.js functions should find this helpful.
(function() {
    const org = process.binding;   // capture the original before replacing it
    const util = require('util');
    process.binding = function(args) {
        var that = org(args),
            thatc = that;
        for (let i in thatc) {
            if (util.isFunction(thatc[i]) && (!thatc[i].__isStubbed)) {
                let fn = thatc[i];
                if (i[0] !== i[0].toUpperCase()) {
                    // hacky workaround to avoid stubbing function constructors.
                    thatc[i] = function() {
                        save(arguments);
                        return fn.apply(that, arguments);
                    };
                    thatc[i].__isStubbed = true;
                }
            }
        }
        return thatc;
    };
})();
This code passes all the tests with the current master of Node.js. Adding sinon seems to mutate the function object in a way that triggers internal V8 checks, which then fail.
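Both snippets call a save helper that isn't shown here; a minimal hypothetical version that just records the types of the observed arguments might look like this:
var observed = [];
function save(args) {
    // record the type of each argument the binding function received
    observed.push(Array.prototype.map.call(args, function (arg) {
        return typeof arg;
    }));
}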
OK, I wouldn't think to do this in C#, but JavaScript is designed with much more flexibility in access.
There's a plugin like this:
(function($)
{
    // ...more stuff
    var results = {a: 1, b: 2}; // this I need to modify
    var someData = {x: 1};
    send = function() {
        // send results via ajax
    };
    if (typeof beforeSend == 'function')
        beforeSend(someData); // hook to use results
})(jQuery)
So, in my own code, I have the function window.beforeSend = function(d){}
and it does have the someData which is in the scope I need to modify. But here's the question:
How can I modify the results var that's within the closure before it sends it?
I need to add
window.beforeSend = function(d){
window.quantumTunnelThroughScope.results['c']=1
}
The reason I need to do this is because I cannot modify the code of the plugin. Of course if I add the beforeSend within the closure, it works, but then I'm modifying the library which I'm not allowed to do in this case.
I've seen some awesome eval('this.xx = function …') tricks and so on, but I can't make it work.
EDIT: I clarified that actually it's a different var in the same scope that needs to be edited
No, there's no reasonable way for beforeSend to reach into that closure and modify results. results in the code presented is entirely private to code within that closure.
The unreasonable way to try to do it is to decompile and recompile the plugin function, via eval, and insert a call to a function before the beforeSend that lets us modify results:
(function($) {
$.run = function() {
// You mentioned "ajax," so let's make this
// asynchronous
setTimeout(function() {
var results = {
a: 1,
b: 2
};
var someData = { // Need to modify this
x: 1
};
send = function() {
//send results ajax
};
if (typeof beforeSend == 'function') {
beforeSend(someData); //hook to use results
}
console.log("in plugin, results = ", results);
}, 10);
};
})(jQuery)
window.modifyResults = function(d) {
return ["new", "results"];
};
window.beforeSend = function(r) {
r.c = 1;
};
jQuery.run = (function() {
// Function#toString, on nearly all browsers, returns the source
// code of the function (or something near to it) except on functions
// implemented in native code. We take that string and replace
// the "beforeSend(someData);" call with two calls, the first of
// which lets us modify the `results` variable. Then we use eval
// to turn that back into a function, and assign the result to
// where the plugin put its function originally.
return eval("(" + jQuery.run.toString().replace(
"beforeSend(someData);",
"results = modifyResults(results); beforeSend(someData);"
) + ")");
})();
jQuery.run();
<script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script>
But it may or may not work, depending on how the plugin is written, as this lifts the function out of its original scope and recompiles it in the scope of our function updating jQuery.run.
I think I'd prefer to take the hit of modifying the plugin. :-)
Note: In the above, I've used a "static" jQuery function. If the plugin you're replacing provides an instance function, the kind you can call on jQuery instances, e.g. the bar in $(".foo").bar(), you'll find it on jQuery.fn instead of jQuery:
jQuery.fn.pluginFunction = eval(...);
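Spelled out for a hypothetical instance plugin named bar, the same decompile-and-recompile step would look like:
// Hypothetical: the plugin defined jQuery.fn.bar rather than a static function
jQuery.fn.bar = (function() {
    return eval("(" + jQuery.fn.bar.toString().replace(
        "beforeSend(someData);",
        "results = modifyResults(results); beforeSend(someData);"
    ) + ")");
})();
// then: $(".foo").bar();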
I want to call a function with a custom thisArg.
That seems trivial, I just have to call call:
func.call(thisArg, arg1, arg2, arg3);
But wait! func.call might not be Function.prototype.call.
So I thought about using
Function.prototype.call.call(func, thisArg, arg1, arg2, arg3);
But wait! Function.prototype.call.call might not be Function.prototype.call.
So, assuming Function.prototype.call is the native one, but considering that arbitrary non-internal properties might have been added to it, does ECMAScript provide a safe way to do the following?
func.[[Call]](thisArg, argumentsList)
That's the power (and risk) of duck typing: if typeof func.call === 'function', then you ought to treat it as if it were a normal, callable function. It's up to the provider of func to make sure their call property matches the public signature. I actually use this in a few places, since JS doesn't provide a way to overload the () operator and provide a classic functor.
If you really need to avoid using func.call, I would go with func() and require func to take thisArg as the first argument. Since func() doesn't delegate to call (i.e., f(g, h) doesn't desugar to f.call(t, g, h)) and you can use variables on the left side of parens, it will give you predictable results.
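A quick sketch of that convention (names are just illustrative):
// Convention: the would-be `this` is simply the first parameter
function greet(self, greeting) {
    console.log(greeting + ", " + self.name);
}
greet({ name: "Ada" }, "Hello"); // plain call, no .call involved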
You could also cache a reference to Function.prototype.call when your library is loaded, in case it gets replaced later, and use that to invoke functions later. This is a pattern used by lodash/underscore to grab native array methods, but doesn't provide any actual guarantee you'll be getting the original native call method. It can get pretty close and isn't horribly ugly:
const call = Function.prototype.call;
export default function invokeFunctor(fn, thisArg, ...args) {
return call.call(fn, thisArg, ...args);
}
// Later...
function func(a, b) {
console.log(this, a, b);
}
invokeFunctor(func, {}, 1, 2);
This is a fundamental problem in any language with polymorphism. At some point, you have to trust the object or library to behave according to its contract. As with any other case, trust but verify:
if (typeof func.call === 'function') {
    func.call(thisArg, ...args);
}
With type checking, you can do some error handling as well:
try {
func.call(thisArg, ...args);
} catch (e) {
if (e instanceof TypeError) {
// probably not actually a function
} else {
throw e;
}
}
If you can sacrifice thisArg (or force it to be an actual argument), then you can type-check and invoke with parens:
if (func instanceof Function) {
func(...args);
}
At some point you have to trust what's available on the window. It either means caching the functions you're planning on using, or attempting to sandbox your code.
The "simple" solution to calling call is to temporarily set a property:
var safeCall = (function (call, id) {
return function (fn, ctx) {
var ret,
args,
i;
args = [];
// The temptation is great to use Array.prototype.slice.call here
// but we can't rely on call being available
for (i = 2; i < arguments.length; i++) {
args.push(arguments[i]);
}
// set the call function on the call function so that it can be...called
call[id] = call;
// call call
ret = call[id](fn, ctx, args);
// unset the call function from the call function
delete call[id];
return ret;
};
}(Function.prototype.call, (''+Math.random()).slice(2)));
This can then be used as:
safeCall(fn, ctx, ...params);
Be aware that the parameters passed to safeCall will be lumped together into an array. You'd need apply to get that to behave correctly, and I'm just trying to simplify dependencies here.
An improved version of safeCall, adding a dependency on apply:
var safeCall = (function (call, apply, id) {
return function (fn, ctx) {
var ret,
args,
i;
args = [];
for (i = 2; i < arguments.length; i++) {
args.push(arguments[i]);
}
apply[id] = call;
ret = apply[id](fn, ctx, args);
delete apply[id];
return ret;
};
}(Function.prototype.call, Function.prototype.apply, (''+Math.random()).slice(2)));
This can be used as:
safeCall(fn, ctx, ...params);
An alternative solution to safely calling call is to use functions from a different window context.
This can be done simply by creating a new iframe and grabbing functions from its window. You'll still need to assume some amount of dependency on DOM manipulation functions being available, but that happens as a setup step, so that any future changes won't affect the existing script:
var sandboxCall = (function () {
var sandbox,
call;
// create a sandbox to play in
sandbox = document.createElement('iframe');
sandbox.src = 'about:blank';
document.body.appendChild(sandbox);
// grab the function you need from the sandbox
call = sandbox.contentWindow.Function.prototype.call;
// dump the sandbox
document.body.removeChild(sandbox);
return call;
}());
This can then be used as:
sandboxCall.call(fn, ctx, ...params);
Both safeCall and sandboxCall are safe from future changes to Function.prototype.call, but as you can see they rely on some existing global functions to work at runtime. If a malicious script executes before this code, your code will still be vulnerable.
If you trust Function.prototype.call, you can do something like this:
func.superSecureCallISwear = Function.prototype.call;
func.superSecureCallISwear(thisArg, arg0, arg1 /*, ... */);
If you trust Function.prototype.call but not Function.prototype.call.call, you can do this:
var evilCall = Function.prototype.call.call;
Function.prototype.call.call = Function.prototype.call;
Function.prototype.call.call(fun, thisArg, arg0, arg1 /*, ... */);
Function.prototype.call.call = evilCall;
And maybe even wrap that in a helper.
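Such a helper might look like this (a sketch; it still assumes Function.prototype.call itself is the native one and only guards against a tampered .call.call property):
function trustedCall(fn, thisArg, ...args) {
    var previous = Function.prototype.call.call;       // possibly tampered value
    Function.prototype.call.call = Function.prototype.call;
    try {
        return Function.prototype.call.call(fn, thisArg, ...args);
    } finally {
        Function.prototype.call.call = previous;        // put back whatever was there
    }
}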
If your functions are pure and your objects serializable, you can create an iframe and via message passing (window.postMessage), pass it the function code and the arguments, let it do the call for you (since it's a new iframe without any 3rd party code you're pretty safe), and you're golden, something like (not tested at all, probably riddled with errors):
// inside iframe
window.addEventListener('message', (e) => {
    let { code, thisArg, args } = e.data;
    let res = Function(code).apply(thisArg, args);
    e.source.postMessage(res, e.origin);
}, false);
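The parent page would post the function source and arguments to the sandbox and listen for the result; something like this (equally untested, and the iframe lookup and target origin are assumptions):
// parent page (sketch)
var sandbox = document.getElementById('sandbox');    // assumed <iframe> element
window.addEventListener('message', (e) => {
    console.log('result from sandbox:', e.data);
}, false);
sandbox.contentWindow.postMessage({
    code: 'return arguments[0] + arguments[1];',      // body passed to Function(...)
    thisArg: null,
    args: [1, 2]
}, '*');                                              // use a real target origin in practice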
Same thing can be done with Web Workers.
If that's the case though, you can take it a step further and send it over to your server. If you're running node you can run arbitrary scripts rather safely via the vm module. Under Java you have projects like Rhino and Nashorn; I'm sure .Net has its own implementations (maybe even run it as JScript!) and there're probably a bazillion broken javascript VMs implemented in php.
If you can do that, why not use a service like Runnable to on-the-fly create javascript sandboxes, maybe even set your own server-side environment for that.
I'm looking for a library that allows me to easily chain together methods but defer their execution until arguments are provided further along in the chain:
chain
.scanDirectory ( '/path/to/scan' )
.recursively()
.for ( /\.js$/i )
.cache()
.provideTo ( '0.locals' )
.as ( 'scripts' )
.defer();
The important thing is that the code behind the scanDirectory function isn't actually called until it's defined that it should be recursive and looking for .js files.
I'm not quite sure how to logically set this up so that I can do something like:
chain
.scanDirectory( '/path/to/scan' )
.scanDirectory( '/another/path' )
.for ( /\.js$/i ) // provided to both paths above?
.doSomethingElse()
which is why I'm looking for a library that may have more mature ideas that accomplish this :)
This post talks about the types of execution in JS; there are links to relevant libraries at the end of it.
Execution in JavaScript
You have two types of execution in JS:
Synchronous - stuff that happens right when it's called
Asynchronous - stuff that happens after the current code is done running; also what you refer to as deferred.
Synchronous
Synchronously, you can push actions and parameters to a queue structure, and run them with a .run command.
You can do something like:
var chain = function() {
    var queue = []; // hold all the functions
    function a(param) {
        // do stuff, knowing a is set; may also access params that other functions set
    }
    return {
        a: function(someParam) {
            queue.push({action: a, param: someParam});
            return this;
        },
        // ... more methods
        run: function() {
            queue.forEach(function(elem) { // on each item
                elem.action.call(null, elem.param); // call the function with its stored parameter
            });
        }
    };
};
This will execute all the functions in the queue when you call run, syntax would be something like
chain().a(15).a(17).run();
Asynchronous
You can simply set a timeout; you don't need to use something like .run for this.
var chainAsync = function() {
    // no need for a queue
    function a(param) {
        // do stuff, knowing a is set; may also access params that other functions set
    }
    return {
        a: function(someParam) {
            setTimeout(a, 0, someParam);
            return this;
        },
        // ... more methods
    };
};
Usage would be something like
chainAsync().a(16).a(17);
Some issues:
If you want to share parameters between functions, you can store them somewhere in the object itself (have a var state in addition to the queue); see the sketch after this list.
It's either sync, or async. You can't detect one or the other by context. Workarounds are being built for ES6.
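A sketch of that shared-state idea, building on the synchronous queue above:
var chain = function() {
    var queue = [];
    var state = {};                    // shared between all queued functions
    function a(param) {
        state.last = param;            // later functions can read state.last
    }
    return {
        a: function(someParam) {
            queue.push({action: a, param: someParam});
            return this;
        },
        run: function() {
            queue.forEach(function(elem) {
                elem.action.call(null, elem.param);
            });
            return state;
        }
    };
};
chain().a(15).a(17).run(); // => { last: 17 }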
More resources
For an implementation along these lines, you can see this question where I implement something similar.
Promises tutorial - promises let you use this type of execution called CPS (continuation passing style) to great effect.
Another nice post on promises.
Bluebird - the fastest and likely best promise library.
Q - probably the most well known and widely used library for chaining execution and promises in JavaScript. Used it several times myself.
Question here on promises and their benefits.
How does basic chaining work in JavaScript - another relevant question here in SO.
Not sure you'll find an all-around working solution for this.
Looks like you're looking for a generic solution to something that would need to have been baked into the library already. I mean, I'm sure there are libraries that have this functionality, but they wouldn't hook auto-magically onto other libraries (except if they have specifically implemented overrides for the right versions of the libraries you want to target, maybe).
However, in some scenarios you may want to look at the Stream.js library, which probably covers enough data-related cases to make it interesting for you.
I don't know whether there's a library to build such methods, but you can easily build that feature yourself. Basically, it will be a settings object with setter methods and one execute function (in your case, defer).
function Scanner() {
this.dirs = [];
this.recurse = false;
this.search = "";
this.cached = false; // note: an own 'cache' property would shadow the cache() method below
this.to = "";
this.name = "";
}
Scanner.prototype = {
scanDirectory: function(dir) {
this.dirs.push(dir);
return this;
},
recursively: function() {
this.recurse = true;
return this;
},
for: function(name) {
this.search = name;
return this;
},
cache: function() {
this.cached = true;
return this;
},
provideTo: function(service) {
this.to = service;
return this;
},
as: function(name) {
this.name = name;
return this;
},
defer: function() {
// now, do something with all the given settings here
},
doSomethingElse: function() {
// now, do something else with all the given settings here
}
};
That's the standard way to build a fluent interface. Of course, you could also create a helper function to which you pass a methodname-to-setting map which writes the methods for you if it gets too lengthy :-)
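Such a helper could look like this (a sketch; the method-to-setting map is purely illustrative):
// makeSetters: for each methodName -> settingName pair, add a chainable setter
function makeSetters(proto, map) {
    Object.keys(map).forEach(function(methodName) {
        proto[methodName] = function(value) {
            // flag-style methods called with no argument just set the flag to true
            this[map[methodName]] = arguments.length ? value : true;
            return this;
        };
    });
}
makeSetters(Scanner.prototype, {
    recursively: "recurse",   // scanner.recursively() sets this.recurse = true
    provideTo: "to",          // scanner.provideTo('0.locals') sets this.to
    as: "name"                // scanner.as('scripts') sets this.name
});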
You need a queue to maintain the async and sync-ness of your method chain.
Here is an implementation using jQuery.queue I did for a project:
function createChainable(options) {
var queue = [];
var chainable = {
method1 : function () {
queue.push(function(done){
// code here
done();
});
return chainable;
},
exec1 : function () {
queue.push(function(done){
// code here
done();
});
$(ELEMENT).queue(QUEUE_NAME, queue).dequeue(QUEUE_NAME);
return chainable;
}
};
return chainable;
}
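Usage would be something like this (the element and queue name are whatever you wired jQuery's queue up with):
var api = createChainable({});
api.method1().method1().exec1(); // queued steps run in order once exec1 flushes the queue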
As #Jordan Doyle said in his comment:
Just return this
So every method in your objects should return the object in the return statement so that you can chain on another method.
For example:
var obj = new (function(){
this.methOne = function(){
//...
return this;
}
this.methTwo = function(){
//...
return this;
}
this.methThree = function(){
//...
return this;
}
})();
//So you can do:
obj.methOne().methTwo().methThree();
I want to create a simple-to-use API for some of these functions, but without being able to bind a function into a new scope, i.e. the scope it belongs in! I can't figure out a way to do it other than that crazy eval nonsense, or doing crazy things with this that make everything much more confusing.
Conceptually I'm losing my mind, because the filter parameter should run in the context of the done callback. I guess that's my issue: the filter parameter is not a callback, it's a parameter, and should have the scope of where it runs, not where it is defined.
Someone please tell me that I am just missing something silly.
Are there any languages that support binding the scope of a lambda to where it is called and not where it is defined?
var scrape = function(selector, filter) {
jsdom.env({
html: data,
src: [ jQuery ],
done: function(errors, window) {
var $ = window.$;
eval('filter=' + filter.toString());
debugger;
var entries = $(selector).filter(filter);
console.log('spo');
debugger;
}
});
};
scrape('p',function(index) {
debugger;
if(this.children.length == 3) {
return $(this.children[0]).is('a') &&
$(this.children[1]).is('font') &&
$(this.children[2]).is('span');
} else {
return false;
}
});
Not completely sure what your code does, but from your description you're looking for $.proxy. It binds a callback to a specific context. If you were using Underscore, you would look at _.bind.
Documentation: http://api.jquery.com/jQuery.proxy/
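A minimal sketch of what that looks like (the names here are hypothetical):
// $.proxy returns a copy of the function with `this` fixed to the given context
var settings = { selector: 'p' };
var scrapeWithContext = $.proxy(function() {
    console.log(this.selector);   // `this` is `settings`, wherever this runs
}, settings);
scrapeWithContext();              // logs "p"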