I'm writing a tool that takes some data manually input by the user every 10 minutes, and runs a set of calculations on each input. Plugin A provides one calculation, Plugin B provides another, and so on. Mostly these plugins are independent of one another, i.e. order doesn't matter, because each plugin's calculation returns an integer that gets summed with the other plugins' integers.
But suppose now, I do have a Plugin C that depends on, say, whether Plugin A's return was non-zero. Data-wise, let's say I know how to make Plugin A's states available to Plugin C. (If it were C++, I'd make Plugin A a friend of Plugin C, for example. However, I'm writing this in Javascript, so I may take a looser approach.) My question is more about the pattern for ordering / dependence. How do I ensure that Plugin A's calculation runs before Plugin C's?
Of course, the simplest approach is to simply "install" the plugins in the order they need to run, i.e. insert the plugins into an array in the right order so the loop iterating over that array doesn't need to think.
But this may become fragile as I add more and more plugins (upwards of 20, maybe 30, depending on the scenario). I'd like something more robust.
The best idea I have right now is:
On "installing" a plugin, I supply an array of plugins it depends on.
Each plugin will have a static member, say, _complete, that indicates whether it was run, and gets reset on every new iteration (user input).
As I loop through the plugins, I check each plugin's dependencies' _complete states; if one isn't complete, I don't run that calculation yet. The loop is a while-loop that comes back and retries the plugin after attempting all the others, with a maximum-retries guard to prevent infinite loops. (A rough sketch of this idea is below.)
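To make the idea concrete, here is a rough, hypothetical sketch of that retry loop (the plugin shape, property names and calculate() method are all made up for illustration):

// Hypothetical plugin shape: { deps: [pluginRefs], _complete: bool, calculate(input) -> int }
function runAll(plugins, input) {
  plugins.forEach(function (p) { p._complete = false; }); // reset on every new iteration
  var total = 0;
  var maxRetries = plugins.length; // guard against unsatisfiable dependencies

  for (var attempt = 0; attempt <= maxRetries; attempt++) {
    var pending = plugins.filter(function (p) { return !p._complete; });
    if (pending.length === 0) return total; // everything ran

    pending.forEach(function (p) {
      var depsDone = p.deps.every(function (d) { return d._complete; });
      if (depsDone) {
        total += p.calculate(input);
        p._complete = true;
      }
    });
  }
  throw new Error("Could not resolve plugin dependencies (circular or missing?)");
}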
How can this be improved?
As Gothdo suggested, Promises would work well on a problem like this. However, your code does not need to be async. There are no constraints on mixing and matching async and sync code when using promises. You will end up paying a performance overhead if you're running loads of small synchronous functions, but with your use case of tens of functions the overhead is negligible.
Promises provide a control-flow abstraction for things that happen in the future: run this when that completes. They are mostly used for async code, and an argument might be made that this is hunting ducks with a minigun. I justify the choice with the arguments of not reinventing the wheel (or finding an obscure hexagonal wheel in the depths of GitHub), having support for async ready if the need arises, and the fact that most JS programmers are already familiar with Promises and the libraries are well supported. And the biggest one: making the necessary wrapper around promises is extremely simple.
I made a quick sketch of what such a wrapper might look like. The code uses the somewhat ugly deferred pattern to enable adding tasks in any order. Feel free to add error handling, or otherwise modify it to suit your needs.
// Requires a Promise implementation with .spread(), e.g. Bluebird (linked below):
// var Promise = require('bluebird');

// Minimal deferred helper: a promise plus its resolve/reject handles.
function defer() {
  var d = {};
  d.promise = new Promise(function (resolve, reject) {
    d.resolve = resolve;
    d.reject = reject;
  });
  return d;
}

function TaskRunner() {
  this.tasks = {};
}

// name:   String
// taskFn: fn(dep_1_result, dep_2_result, ...) -> result or Promise(result)
// deps:   optional, an array of names or a single name
// Return: Promise over the added task
TaskRunner.prototype.add = function (name, taskFn, deps) {
  var self = this;
  deps = (deps === undefined ? [] : (deps instanceof Array ? deps : [deps]));
  name = name.toString();

  self.tasks[name] = self.tasks[name] || {};
  if (self.tasks[name].fn) {
    throw new Error("Task " + name + " exists.");
  }

  deps = deps.map(function (d) {
    // Create a result handler for each dep, if none exists yet
    self.tasks[d] = self.tasks[d] || {};
    self.tasks[d].result = self.tasks[d].result || defer();
    return self.tasks[d].result.promise;
  });

  // Create a result handler for this task if none was created while
  // handling the deps of previously added tasks
  self.tasks[name].result = self.tasks[name].result || defer();

  // Execute when all deps are done, then trigger our own result handler
  self.tasks[name].fn = Promise.all(deps)
    .spread(taskFn)
    .then(function (res) {
      self.tasks[name].result.resolve(res);
      return res;
    }, function (err) {
      self.tasks[name].result.reject(err);
      throw err;
    });

  return self.tasks[name].fn;
};
Example usage: https://jsfiddle.net/3uL9chnd/4/
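For illustration, usage along these lines should work with the sketch above (task names and return values are made up):

var runner = new TaskRunner();

// Plugin A has no dependencies; Plugin C depends on A's result.
var a = runner.add('A', function () { return 42; });
var c = runner.add('C', function (aResult) { return aResult !== 0 ? 1 : 0; }, 'A');
var b = runner.add('B', function () { return 7; });

// Sum the results once every task has settled.
Promise.all([a, b, c]).then(function (results) {
  console.log('sum:', results.reduce(function (x, y) { return x + y; }, 0));
});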
Bluebird promise lib: http://bluebirdjs.com/docs/api-reference.html
Edit, disclaimer: There's another point when considering overhead: reset efficiency. If you're running light calculations on a tight interval, creating a promise object for each task on each cycle makes this approach less than optimal.
Related
Is it worth using mini functions in JavaScript? Example:
function setDisplay(id, status) {
document.getElementById(id).style.display = status;
}
And after that, each time you want to manipulate the display attribute of any given object, you could just call this function:
setDisplay("object1", "block");
setDisplay("object2", "none");
Would there be any benefit of coding this way?
It would reduce file size and consequently page load time if that particular attribute is changed multiple times. Or would calling that function each time put extra load on processing, which would make page loading even slower?
Is it worth using mini functions in JavaScript?
Yes. If done well, it will improve on:
Separation of concern. For instance, a set of such functions can create an abstraction layer for what concerns presentation (display). Code that has just a huge series of instructions combining detailed DOM manipulation with business logic is not best practice.
Coding it only once. Once you have a case where you can call such a function more than once, you have already gained. Code maintenance becomes easier when you don't have (a lot of) code repetition.
Readability. By giving such functions well chosen names, you make code more self-explaining.
Concerning your example:
function setDisplay(id, status) {
document.getElementById(id).style.display = status;
}
[...]
setDisplay("object1", "block");
setDisplay("object2", "none");
This example could be improved. It is a pity that the second argument is closely tied to implementation aspects. When it comes to "separation of concern" it would be better to replace that argument with a boolean, as essentially you are only interested in an on/off functionality. Secondly, the function should probably be fault-tolerant. So you could do:
function setDisplay(id, visible) {
let elem = document.getElementById(id);
if (elem) elem.style.display = visible ? "block" : "none";
}
setDisplay("object1", true);
setDisplay("object2", false);
Would there be any benefit of coding this way?
Absolutely.
would [it] reduce file size...
If you have multiple calls of the same function: yes. But this is not a metric that you should be much concerned with. If we specifically speak of "one-liner" functions, then the size gain will hardly ever be noticeable on modern day infrastructures.
...and consequently page load time
It will have no noticeable effect on page load time.
would calling that function each time put extra load on processing, which would make page loading even slower?
No, it wouldn't. The overhead of function calling is very small. Even if you have hundreds of such functions, you'll not see any downgraded performance.
Other considerations
When you define such functions, it is worth putting extra effort into making them robust, so they deal gracefully with unusual arguments and you can rely on them without ever having to come back to them.
Group them into modules, where each module deals with one layer of your application: presentation, control, business logic, persistence, ...
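For instance, a tiny presentation-layer module wrapping helpers like setDisplay might look roughly like this (the module shape and names are purely illustrative):

// A small "view" module that owns all direct DOM/style manipulation.
const view = {
  show(id) { setDisplay(id, true); },
  hide(id) { setDisplay(id, false); },
  setText(id, text) {
    const elem = document.getElementById(id);
    if (elem) elem.textContent = text;
  }
};

// Business logic only talks to the view layer:
view.hide("object2");
view.setText("object1", "Hello");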
It would reduce file size and consequently page load time if that particular attribute is changed multiple times. Or would calling that function each time put extra load on processing, which would make page loading even slower?
It won't make the page load slower. Adding a few lines of vanilla JS here and there won't affect the page load time; after all, the load time is dominated by loading the HTML, JS and CSS files, so a few extra lines of JS won't affect it. A performance issue is unlikely unless you're doing something really intensive that drags you into massive calculations or huge recursion.
Is it worth using mini functions in JavaScript?
In my opinion - yes. You don't want to overuse them where unnecessary - after all, you don't want to wrap every single line of code in a function, right? In many cases, creating mini functions can improve the code's cleanness and readability, and they can make it easier and faster to code.
With ES6 you can use single-line arrow functions, which is very nice and easy:
const setDisplay = (id, status) => document.getElementById(id).style.display = status;
It doesn't really affect the performance unless you execute that function 10,000 or more times.
jQuery uses mini functions, like $('#id').hide(); $('#id').show(); and $("#id").css("display", "none"); $("#id").css("display", "block");
They create more readable code, but they don't do that much for performance.
var divs = document.querySelectorAll("div");

function display(div, state) {
  div.style.display = state;
}

console.time("style");
for (var i = 0; i < divs.length; i++) {
  divs[i].style.display = "none";
}
console.timeEnd("style");

console.time("function");
for (var j = 0; j < divs.length; j++) {
  display(divs[j], "block");
}
console.timeEnd("function");
<!-- several hundred empty <div></div> elements (omitted here) supply the nodes the timing demo above loops over -->
In terms of runtime performance I’d say it doesn’t make any difference. The JavaScript engines are incredibly optimized and are able to recognize that this function is being called often and to replace it by native code, and maybe even able to inline the function call so that the cost of calling an extra function is eliminated completely.
If your intent is to reduce the JavaScript files: you save a few tens of bytes for each place in the code where the function can be used, but even if you call this mini function 100 times, you will be saving 1 or 2 kb of uncompressed JavaScript. If you compare the final gzipped JavaScript (which should be how you serve your resources if you care about performance) I doubt there will be any noticeable difference.
You also need to think about how this type of code will be understood by your colleagues, or whoever may have to read this code in the future. How likely it is that a new programmer in your team knows the standard DOM API? Your non-standard functions might confuse your future teammates, and require them to look into the function body to know what they do. If you have one function like this it probably doesn’t make much of a difference, but as you start adding more and more mini functions for each possible style attribute you end up with hundreds of functions that people need to get used to.
The real benefit is the ability to update in one place. Let's say you realize you want to use classes instead of styles: instead of having to change it everywhere, you can just change that one function.
This is even more apparent when you read from a common source. Perhaps you have an API that you call with a single method.
let user = getData('user');
...but instead of doing an API call, you want to use localforage to store the user data and read it from disk instead.
let user = getData('user');
... but then you realize it takes about 70-200 ms to read from localforage, so instead you store everything in memory, falling back to localforage if it's not in memory, and to an API call if it's not in localforage. Getting user information is still the same:
let user = getData('user');
Just imagine if you wrote the whole call in every single place.
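A rough sketch of what such a getData() might look like with the layered fallback (localforage.getItem/setItem are real API methods; the '/api/' endpoint and caching policy are made up for illustration):

const memoryCache = new Map();

async function getData(key) {
  // 1. In-memory cache
  if (memoryCache.has(key)) return memoryCache.get(key);

  // 2. localforage (asynchronous, disk-backed) cache
  const stored = await localforage.getItem(key);
  if (stored !== null) {
    memoryCache.set(key, stored);
    return stored;
  }

  // 3. Fall back to the API, then populate both caches
  const fresh = await fetch('/api/' + key).then(r => r.json());
  memoryCache.set(key, fresh);
  await localforage.setItem(key, fresh);
  return fresh;
}

// Callers stay the same (now awaiting the result):
// let user = await getData('user');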
I usually create mini functions like these if 1) I read/write data, 2) the code gets shorter OR more readable (i.e. the method name replaces comments), or 3) I have repeated code that I can sum up in a dynamic way.
Also, to quote from the book Clean Code: each function should do only one thing. Thinking in terms of mini functions helps with that.
I'm not fully sure what a mini function is, but if you're asking whether it's better to define functions for code that needs to be called multiple times instead of typing out / copy-pasting those same lines over and over, then it's better to go with the first option for readability and file size.
It would reduce file size and consequently page load time if that particular attribute is changed multiple times. Or would calling that function each time put extra load on processing, which would make page loading even slower?
In typical scenarios, using such functions won't put any significant extra load on processing that would slow down document parsing. How likely that is depends on when and how you call your code, but it's not as if you'll be looping in O(N^3) and calling such functions thousands of times during page load (right?). If anything, predefined functions like these, designed to take dynamic inputs, make your code neater, more uniform looking and smaller, with a very tiny, obvious overhead compared to accessing the properties directly without going through another function. So decide whether that bargain is worth it based on whether the frequency of such calls is significant enough to matter.
I'm sure most engines optimize repetitive code so that it's equivalent (or nearly so) anyway, so I'd choose readability and make it easier for others to understand and follow the code.
JavaScript is a prototype-based language; going further out on a limb, everything in JavaScript is an object, so it is about as object-oriented as it gets, although at a very different level.
So the question is: just as we create classes in object-oriented languages, should we build our script framework in a functional style or a prototype style, and what will the performance hit be?
So let's consider this example.
function abc(name, age) {
  this.name = name;
  this.age = age;
}
person1 = new abc('jhon', 25);
person2 = new abc('mark', 46);
console.log(person1.age);
console.log(person2.age);
person1.father = 'mark';
console.log(person1.father);
We have created a function and used the power of this in the function's scope, so every new instance carries this information, and no two instances are alike.
Then further down the line we add father information on the instance. This is where the prototype model becomes powerful: this scoping + instances + prototyping is fast.
function abc(name, age) {
  name = name;
  age = age;
  return name; // it's either name or age, not two values
}
person1 = abc('jhon', 25);
person2 = abc('mark', 46);
console.log(person1);
console.log(person2);
abc.father = 'mark';
console.log(person1.father);
console.log(abc.father);
We tried the same thing in a functional style and the whole thing fell apart: no instances, no this scoping, no prototype delegation down the line. In the long run this approach reduces performance, as you have to re-fetch things over and over again; on top of that, without this scoping we can only return a single value, whereas with this scoping we can pack a lot more into the object. That is one point.
function abc(name, age) {
  this.name = name;
  this.age = age;
}
person1 = new abc('jhon', 25);
person1.fatherName = 'Mark';
person1.fatherUpdate = function (d){this.fatherAge = this.fatherName+' '+d;};
person1.fatherUpdate(56);
console.log(person1.fatherAge);
Now we have added more complexity: we have carried this down the line, hence the scope, and have added functions on top of it.
This would literally make your hair fall out if it were done in a purely functional way.
Always keep in mind that any given function can compute and execute as many things as you want, but it will always give a single return, 99% of the time a single value.
If you need more, use the prototype approach with this scoping, as above.
The jQuery way of doing things:
I made my framework ActiveQ this way; it is almost 30 KB unminified and does everything jQuery does and much more, including a template engine.
Anyway, let's build an example - it is only an example, so please be kind:
function $(a) {
  var x;
  if (x = document.querySelector(a)) return x;
  return;
}
console.log($('div'));
console.log($('p'));
<div>abc</div>
Now this is almost 50% of your jQuery selector library in just a few lines.
Now let's extend this:
function $(a) {
  this.x = document.querySelector(a);
}

$.prototype.changeText = function (a) {
  this.x.innerHTML = a;
};

$.prototype.changeColor = function (a) {
  this.x.style.color = a;
};
console.log(document.querySelector('div').innerText);
app = new $('div');
app.changeText('hello');
console.log(document.querySelector('div').innerText);
app.changeColor('red');
<div>lets see</div>
What was the point of the whole exercise above? You won't have to search through the DOM over and over again, as long as the element remains in the function's scope via this.
Obviously, in a purely functional style you would have to search all over again and again.
Even jQuery at times forgets what it has already searched for, and you have to re-search, because the this context is forwarded.
Let's do some jQuery-style chaining the full-on way - please be kind, it's just an example:
function $(a) {
  return document.querySelector(a);
}

Element.prototype.changeText = function (a) {
  this.innerHTML = a;
  return this;
};

Element.prototype.changeColor = function (a) {
  this.style.color = a;
  return this;
};
console.log($('div').innerText);
//now we have full DOM access from a simple function returning a single context
//let's chain it
console.log($('div').changeText('hello').changeColor('red').innerText);
<div>lets see</div>
See the difference: fewer lines of code and better performance, because it works with the browser instead of the purely functional way of creating function-call load and search load over and over again.
So if you need to perform a bunch of tasks with a single output, stick to plain functions in the conventional way; if you need to perform a bunch of tasks on a context - as in the last example, where the context is manipulating properties of the element - use the prototype approach, as compared to the $ function, which only searches for and returns the element.
I have two functions: myFunctionA() and myFunctionB().
myFunctionA() returns an Object which includes the key Page_Type which has a string value.
myFunctionB() processes a number of entries in the Object returned by myFunctionA(), including Page_Type and its string value.
Later, myFunctionA() is updated so it no longer returns an object including the key Page_Type but, instead, the key Page_Types - which has an array value.
Because of this, myFunctionB() will now also need to be updated - it will no longer be processing Page_Type which is a string, but Page_Types which is an array.
If I understand correctly (and I may not), the above is an example of Dependency Request and the extensive refactoring that it throws up can be avoided by (I think) deploying the Dependency Injection pattern (or possibly even the Service Locator pattern?) instead (??)
But, despite reading around this subject, I am still uncertain as to how Dependency Injection can work in PHP or Javascript functions (much of the explanation deals with programming languages like C++ and OOP concepts like Classes, whereas I am dealing with third party functions in PHP and javascript).
Is there any way to structure my code so that updating myFunctionA() (in any significant manner) will not then require me to also update myFunctionB() (and all other functions calling myFunctionA() - for instance myFunctionC(), myFunctionD(), myFunctionE() etc.) ?
And what if myFunctionH() requires myFunctionG() requires myFunctionF() which requires myFunctionA()? I don't want to be in a position where updating myFunctionA() now means that three more functions (F, G and H) all have to be updated.
Attempt at an answer:
The best answer I can think of at present - and this may not be a best practice answer because I don't know yet if there is a formal problem which corresponds with the problem I am describing in the question above - is the following restatement of how I presented the setup:
I have two (immutable) functions: myFunctionA__v1_0() and myFunctionB__v1_0().
myFunctionA__v1_0() returns an Object which includes the key Page_Type which has a string value.
myFunctionB__v1_0() processes a number of entries in the Object returned by myFunctionA__v1_0(), including Page_Type and its string value.
Later, myFunctionA__v1_0() still exists but is also succeeded by myFunctionA__v2_0(), which returns an object including the key Page_Types - which has an array value.
In order for myFunctionB to access the object returned by myFunctionA__v2_0(), there will now also need to be a myFunctionB__v1_1(), capable of processing the array Page_Types.
This can be summarised as:
myFunctionB__v1_0() requires object returned by myFunctionA__v1_0()
myFunctionB__v1_1() requires object returned by myFunctionA__v2_0()
Since each function becomes immutable after being formally named, what never happens is that we end up with an example of myFunctionB__v1_0() requiring object returned by myFunctionA__v2_0().
I don't know if I am approaching this the right way, but this is the best approach I have come up with so far.
It is very common in programming for the provider - i.e. myFunctionA() - to know nothing about its consumer(s) myFunctionB(). The only correct way to handle this is to define an API up front and never change it ;)
I don't see the purpose of versioning the consumer - the reason for that would have to be "downstream" of myFunctionB() - i.e. a consumer of myFunctionB() that the author of myFunctionB() is not in control of... in which case myFunctionB() itself becomes a provider, and its author would have to deal with that (perhaps using the same pattern that you do). But it's not your problem to deal with.
As for your provider myFunctionA(): If you cannot define an interface / API for the data itself up front - ie. you know that the structure of the data will have to change (in a non-backwards-compatible way), but you don't know how... then you will need to version something one way or another.
You are miles ahead of most since you see this coming and plan for it from the beginning.
The only way to avoid having to make changes to the consumer myFunctionB() at some point, is to make all changes to the provider myFunctionA() in a backwards-compatible way. The change you describe is not backwards compatible because myFunctionB() cannot possibly know what to do with the new output from myFunctionA() without being modified.
The solution you propose sounds like it should work. However, there are at least a couple of downsides:
It requires you to keep an ever-growing list of legacy functions around in case there are ever any consumers requesting their data. This will become very complicated to maintain, and likely impossible in the long run.
Depending what changes need to be made in the future it might no longer be possible to produce the output for myFunctionA__v1_0() at all - in your example you add the possibility of several page_types in your system - in this case you can probably just rewrite v1_0 to use the first and the legacy consumers will be happy. But if you decide to completely drop the concept of page_types from your system, you would have to plan for the complete removal of v1_0 one way or another. So you need to establish a way to communicate this to the consumers.
The only correct way to handle this still is to define an API up front and never change it.
Since we have established that:
you will have to make backwards incompatible changes
you don't know anything about the consumers and you don't have the power to change them when needed
I propose that instead of defining an immutable API for your data, you define an immutable API that allows you to communicate to consumers when they should or must upgrade.
This might sound complicated, but it doesn't have to be:
Accept a version parameter in the provider:
The idea is to let the consumer explicitly tell the provider which version to return.
The provider might look like this:
function myFunctionA(string $version) {
    $page_types = ['TypeA', 'TypeB'];
    $page = new stdClass();
    $page->title = 'Page title';

    switch ($version) {
        case '1.0':
            $page->error = 'Version 1.0 no longer available. Please upgrade!';
            break;
        case '1.1':
            $page->page_type = $page_types[0];
            $page->warning = 'Deprecated version. Please upgrade!';
            break;
        case '2.0':
            $page->page_types = $page_types;
            break;
        default:
            $page->error = 'Unknown version: ' . $version;
            break;
    }

    return $page;
}
So the provider accepts a parameter which will contain the version that the consumer can understand - usually the one that was the newest when the consumer was last updated.
The provider makes a best-effort attempt to deliver the requested version:
if it is not possible, there is a "contract" in place to inform the consumer ($page->error will exist on the return value);
if it is possible, but there is a newer version available, another "contract" is in place to inform the consumer of this ($page->warning will exist on the return value).
And handle a few cases in the consumer(s):
The consumer needs to send the version it expects as a parameter.
function myFunctionB() {
    // The consumer tells the provider which version it wants:
    $page = myFunctionA('2.0');

    if (!empty($page->error)) {
        // Notify developers and throw an error
        pseudo_notify_devs($page->error);
        throw new Exception($page->error);
    } else if (!empty($page->warning)) {
        // Notify developers
        pseudo_notify_devs($page->warning);
    }

    do_stuff_with($page);
}
The second line of an older version of myFunctionB() - or a completely different consumer myFunctionC() might instead ask for an older version:
$page = myFunctionA('1.1');
This allows you to make backwards-compatible changes any time you want - without consumers having to do anything. You can do your best to still support old versions when at all possible, providing "graceful" degradation in legacy consumers.
When you do have to make breaking changes you can continue supporting the old version for a while before finally removing it completely.
Meta information
I'm not confident this would be useful... but you could add some meta information for consumers using an outdated version:
function myFunctionA(string $version) {
    # [...]
    if (!empty($page->error) || !empty($page->warning)) {
        $page->meta = [
            'current_version' => '3.0',
            'API_docs' => 'http://some-url.fake'
        ];
    }
    return $page;
}
This could then be used in the consumer:
pseudo_notify_devs(
$page->error .
' - Newest version: ' . $page->meta['current_version'] .
' - Docs: ' . $page->meta['API_docs']
);
...if I were you I would be careful not to overcomplicate things though... Always KISS
Dependency injection is more relevant in an OOP context. However, the main thing I would do here is stop thinking in terms of returning whatever you happen to have available and start considering how the two methods work together and what their contract is.
Figure out what's the logical output for myFunctionA(), encode that contract into an object, and convert the data you have to that format. This way, even if the way you fetch data in myFunctionA() changes, you only have to update that one conversion.
As long as you adhere to that contract (which can be represented through a custom object), myFunctionB() and the other methods that expect to receive data as per the contract won't have to change any longer.
So my main take away here would be to start thinking about the data you need and pass it around not in the structure you receive it, but in the way that it makes the most sense for your application.
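A minimal sketch of that idea in JavaScript (fetchRawPageData() and the field names are hypothetical stand-ins for however the data is obtained today):

// The "contract": consumers depend on this shape, not on how the data was fetched.
function PageInfo(pageTypes, title) {
  this.pageTypes = pageTypes; // always an array, even if there is only one type
  this.title = title;
}

function myFunctionA() {
  var raw = fetchRawPageData(); // hypothetical raw source (API, file, etc.)
  var types = Array.isArray(raw.Page_Types) ? raw.Page_Types : [raw.Page_Type];
  return new PageInfo(types, raw.title);
}

function myFunctionB(pageInfo) {
  pageInfo.pageTypes.forEach(function (type) {
    // process each page type
  });
}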
It seems you have broken the interface between myFunctionA and myFunctionB by changing the return type from string to array.
I don't think DI can be of help.
Firstly, this is not Dependency Injection.
Your myFunctionA() could be called a Producer, since it provides data; it should provide a Data Structure.
Your myFunctionB() could be called a Consumer, since it consumes the data provided by myFunctionA().
So, in order to make your Producers and Consumers work independently, you need to add another layer between them, called a Converter. The Converter layer converts the Data Structure provided by the Producer into a well-known Data Structure that the Consumer can understand.
I really recommend reading the book Clean Code, chapter 6: Objects and Data Structures, so you will be able to fully understand the Data Structure concept used above.
Example
Assume we had a Data Structure called Hand, with properties for the right hand and the left hand.
class Hand {
    private $rightHand;
    private $leftHand;
    // Add constructor, getters and setters
}
myFunctionA() will provide a Hand object; Hand is a Data Structure.
function myFunctionA() {
    $hand = Hand::createHand(); // Assume a function to create a new Hand object
    return $hand;
}
Let's say we had another Data Structure, called Leg; Leg is what myFunctionB() is able to consume.
class Leg {
    private $rightLeg;
    private $leftLeg;
    // Add constructor, getters and setters
}
Then we need a Converter in the middle to convert from Hand to Leg, and use it when calling myFunctionB():
class Converter {
    public static function convertFromHandToLeg($hand) {
        $leg = makeFromHand($hand); // Assume a method to convert from Hand to Leg
        return $leg;
    }
}

myFunctionB(Converter::convertFromHandToLeg($hand));
So whenever you edit the response of myFunctionA(), that means you are editing the Hand Data Structure. You only need to edit the Converter to make sure it continues to convert from Hand to Leg correctly; you do not need to touch myFunctionB(), and vice versa.
This is very helpful when you have other Producers that provide a Hand, like the myFunctionC(), myFunctionD()... you mention in your question, and also many other Consumers that consume a Leg, like myFunctionH(), myFunctionG()...
Hope this helps.
I have some trouble that comes from my JavaScript (JS) code, since I sometimes need to access the same DOM elements more than once in the same function. Some reasoning is also provided here.
From the point of view of the performance, is it better to create a jQuery object once and then cache it or is it better to create the same jQuery object at will?
Example:
function(){
$('selector XXX').doSomething(); //first call
$('selector XXX').doSomething(); //second call
...
$('selector XXX').doSomething(); // n-th call
}
or
function(){
var obj = $('selector XXX');
obj.doSomething(); //first call
obj.doSomething(); //second call
...
obj.doSomething(); // n-th call
}
I suppose the answer probably depends on the value of "n", so assume that n is a "small" number (e.g. 3), then a medium number (e.g. 10), and finally a large one (e.g. 30, as if the object were used for comparison in a for loop).
Thanks in advance.
It is always better to cache the element: if n is greater than 1, cache the element, or chain the operations together (you can do $('#something').something().somethingelse(); for most jQuery operations, since they usually return the wrapped set itself). As an aside, it has become a bit of a standard to name cache variables beginning with a dollar sign $ so that later in the code it is evident that you are performing an operation on a jQuery set. So you will see a lot of people do var $content = $('#content'); and then $content.find('...'); later on.
The second is superior. Most importantly, it is cleaner. In the future, if you want to change your selector, you only need to change it one place. Else you need to change it in N places.
Secondly, it should perform better, although a user would only notice for particularly heavy dom, or if you were invoking that function a lot.
If you look at this question from a different perspective, the correct answer is obvious.
In the first case, you're duplicating the selection logic in every place it appears. If you change the name of the element, you have to change each occurrence. This should be reason enough not to do it. Now you have two options - either you cache the element's selector or the element itself. Using the element as an object makes more sense than using the name.
Performance-wise, I think the effect is negligible. Probably you'll be able to find test results for this particular use-case: caching jQuery objects vs always re-selecting them. Performance might become an issue if you have a large DOM and do a lot of lookups, but you need to see for yourself if that's the case.
If you want to see exactly how much memory your objects are taking up, you can use the Chrome Heap Profiler and check there. I don't know if similar tools are available for other browsers and probably the implementations will vary wildly in performance, especially in IE's case, but it may satisfy your curiosity.
IMO, you should use the second variant, storing the result of the selection in an object, not so much to improve performance but to have as little duplicate logic as possible.
As for caching $(this), I agree with Nick Craver's answer. As he said there, you should also use chaining where possible - cleans up your code and solves your problem.
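For example, chaining keeps the lookup to a single call while staying readable (the methods here are arbitrary jQuery examples):

$('selector XXX')
    .addClass('active')
    .css('color', 'red')
    .fadeIn();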
You should take a look at
http://www.artzstudio.com/2009/04/jquery-performance-rules/
or
http://addyosmani.com/jqprovenperformance/
I almost always prefer to cache the jQuery object but the benefit varies greatly based on exactly what you are using for your selector. If you are using ids then the benefit is far less than if you are using types of selectors. Also, not all selectors are created equally so try to keep that in mind when you write your selectors.
For example:
$('table tr td') is a very poor selector. Try to use context or .find() and it will make a BIG difference.
One thing I like to do is place timers in my code to see just how efficient it is.
var timer = new Date();
// code here
console.log('time to complete: ' + (new Date() - timer));
Most cached objects will perform their operation in less than 2 milliseconds, whereas brand new selectors take quite a bit longer because you first have to find the element and then perform the operation.
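For example, a rough comparison using that timer pattern might look like this (the selector and iteration count are arbitrary; absolute numbers will vary by page and browser):

var timer = new Date();
for (var i = 0; i < 1000; i++) {
    $('table tr td').addClass('marked'); // re-runs the selector every time
}
console.log('fresh selector: ' + (new Date() - timer) + 'ms');

var $cells = $('table tr td'); // cached once
timer = new Date();
for (var j = 0; j < 1000; j++) {
    $cells.addClass('marked');
}
console.log('cached object: ' + (new Date() - timer) + 'ms');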
In JavaScript, functions are generally short-lived—especially when hosted by a browser. However, a function’s scope might outlive the function. This happens, for example, when you create a closure. If you want to prevent a jQuery object from being referenced for a long time, you can assign null to any variables that reference it when you are done with that variable or use indirection to create your closures. For example:
var createHandler = function (someClosedOverValue) {
    return function () {
        doSomethingWith(someClosedOverValue);
    };
}
var blah = function () {
    var myObject = jQuery('blah');
    // We want to enable the closure to access 'red' but not keep
    // myObject alive, so use a special createHandler for it:
    var myClosureWithoutAccessToMyObject = createHandler('red');
    doSomethingElseWith(myObject, myClosureWithoutAccessToMyObject);
    // After this function returns, and assuming doSomethingElseWith() does
    // not itself generate additional references to myObject, myObject
    // will no longer have any references and be eligible for garbage
    // collection.
}
Because jQuery(selector) might end up having to run expensive algorithms or even walk the DOM tree a bit for complex expressions that can’t be handled by the browser directly, it is better to cache the returned object. Also, as others have mentioned, for code clarity, it is better to cache the returned object to avoid typing the selector multiple times. I.e., DRY code is often easier to maintain than WET code.
However, each jQuery object has some amount of overhead. So storing large arrays of jQuery objects in global variables is probably wasteful—unless if you actually need to operate on large numbers of these objects and still treat them as distinct. In such a situation, you might save memory by caching arrays of the DOM elements directly and using the jQuery(DOMElement) constructor which should basically be free when iterating over them.
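A small sketch of that approach (the class name is made up):

// Cache the raw DOM elements once...
var cachedElements = Array.prototype.slice.call(
    document.querySelectorAll('.list-item')
);

// ...and wrap each one lazily, only when a jQuery API is actually needed.
cachedElements.forEach(function (el) {
    jQuery(el).toggleClass('highlight'); // jQuery(DOMElement) avoids re-running the selector
});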
Though, as people say, you can only know the best approach for your particular case by benchmarking different approaches. It is hard to predict reality even when theory seems sound ;-).
Edit: I found this interesting library which looks like it can do exactly what I was describing at the bottom: https://github.com/philbooth/check-types.js
Looks like you can do it by calling check.quacksLike.
I'm fairly new to using javascript and I'm loving the amount of power it offers, but sometimes it is too flexible for my sanity to handle. I would like an easy way to enforce that some argument honors a specific interface.
Here's a simple example method that highlights my problem:
var execute = function(args)
{
executor.execute(args);
}
Let's say that the executor expects args to have a property called cmd. If it is not defined, an error might be caught at another level when the program tries to reference cmd but it is undefined. Such an error would be more annoying to debug than explicitly enforcing cmd's existence in this method. The executor might even expect that args has a function called getExecutionContext() which gets passed around a bit. I could imagine much more complex scenarios where debugging would quickly become a nightmare of tracing through function calls to see where an argument was first passed in.
Neither do I want to do something on the lines of:
var execute = function(args)
{
    if (args.cmd === undefined || args.getExecutionContext === undefined ||
        typeof args.getExecutionContext !== 'function')
    {
        throw new Error("args not setup correctly");
    }
    executor.execute(args);
}
This would entail a significant amount of maintenance for every function that has arguments, especially for complex arguments. I would much rather be able to specify an interface and somehow enforce a contract that tells javascript that I expect input matching this interface.
Maybe something like:
var baseCommand =
{
cmd: '',
getExecutionContext: function(){}
};
var execute = function(args)
{
enforce(args, baseCommand); //throws an error if args does not honor
//baseCommand's properties
executor.execute(args);
}
I could then reuse these interfaces amongst my different functions and define objects that extend them to be passed into my functions without worrying about misspelling property names or passing in the wrong argument. Any ideas on how to implement this, or where I could utilize an existing implementation?
I don't see any other way to enforce this. It's one of the side effects of the dynamic nature of JavaScript. It's essentially a free-for-all, and with that freedom comes responsibility :-)
If you're in need of type checking you could have a look at TypeScript (it's not JavaScript) or Google's Closure Compiler (JavaScript with comments).
Closure Compiler uses comments to figure out what type is expected when you compile. It looks like a lot of trouble but can be helpful in big projects.
There are other benefits that come with Closure Compiler, as you will be forced to produce comments that are used by IDEs like NetBeans; it minifies your code, removes unused code and flattens namespaces. So code organized in namespaces like myApp.myModule.myObject.myFunction will be flattened to minimize object lookups.
The cons are that you need to use externs when you use libraries that are not compiler compatible, like jQuery.
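For illustration, the kind of annotation Closure Compiler checks might look roughly like this for the execute function from the question (the record-type syntax follows the Closure docs; treat the exact shape as an assumption):

/**
 * @param {{cmd: string, getExecutionContext: function(): Object}} args
 */
var execute = function (args) {
    executor.execute(args);
};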
The way this kind of thing is typically dealt with in JavaScript is to use defaults. Most of the time you simply want to guarantee that certain members exist to prevent things like reference errors, but I think you could use the principle to get what you want.
By using something like jQuery's extend method, we can guarantee that a parameter implements a set of defined defaults.
var defaults = {
    prop1: 'exists',
    prop2: function() { return 'foo'; }
};

function someCall(args) {
    var options = $.extend({}, defaults, args);
    // Do work with options... It is now guaranteed to have members prop1 and prop2,
    // defined by the caller if they exist, using the defaults if not.
}
If you really want to throw errors at run time if a specific member wasn't provided, you could perhaps define a function that throws an error, and include it in your defaults. Thus, if a member was provided by the caller, it would overwrite the default, but if it was missed, it could either take on some default functionality or throw an error as you wish.
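For instance, a hypothetical "required" default that throws when it is never overridden might look like this:

// A default whose only job is to complain if the caller never replaced it.
function required(name) {
    return function () {
        throw new Error("Missing required option: " + name);
    };
}

var defaults = {
    prop1: 'exists',
    getExecutionContext: required('getExecutionContext')
};

function someCall(args) {
    var options = $.extend({}, defaults, args);
    // If the caller supplied getExecutionContext, this uses theirs;
    // otherwise the placeholder throws at the first attempted use.
    var context = options.getExecutionContext();
}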
My problem is the following. I wrote a class AJAXEngine, which creates a new XMLHttpRequest object in its constructor. The class contains a method called responseAnalyser, which is called when the "onreadystatechange" of the XMLHttpRequest object fires.
So now I have created, let's say, 4 instances of AJAXEngine => 4 XMLHttpRequest objects.
Now I have another class DataRequester, which has an array attribute dataReq that holds the instances of AJAXEngine. There is only one instance of DataRequester in the whole program!
DataRequester has a function called callWhenFinished. The function is called by the
responseAnalyser function of AJAXEngine and decrements a variable of the DataRequester instance.
But I think race conditions happen here. How could I prevent them in JavaScript?
function AJAXEngine()
{
    this.httpReqObj = new XMLHttpRequest();
    this.obj = null;
    this.func = null;

    // Wire up the callback described above
    var self = this;
    this.httpReqObj.onreadystatechange = function () {
        self.responseAnalyser();
    };
}

AJAXEngine.prototype.responseAnalyser = function()
{
    if (this.httpReqObj.readyState == 4)
    {
        this.func.call(this.obj);
    }
}

AJAXEngine.prototype.fireReq = function(o, f)
{
    this.obj = o;
    this.func = f;
    // fire ajax req
}

function DataRequester()
{
    this.dataReq = [];
    this.test = 4;

    for (var i = 0; i < 4; i++)
    {
        this.dataReq[i] = new AJAXEngine();
    }
}

DataRequester.prototype.callWhenFinished = function()
{
    this.test--;
}
Not sure if this would help, but it looks like you're trying to create a managed connection pool. I did one a few years ago that still works fine here:
DP_RequestPool Library
The pool ensures that requests are made in the order you've provided them (although, of course, they may be returned in any order based on performance) using as many simultaneous requests as you define (subject to system limitations). You can instantiate multiple pools for different purposes.
If nothing else this might give you some ideas.
First of all: most AJAX-capable browsers follow the convention of "only 2 simultaneous requests to the same domain". So if you start 4, two of them will be queued.
Your DataRequester (singleton) can keep an array of 'test' values: instead of sharing a single variable across multiple instances, create one entry of data per instance. To calculate the result, you then sum the 'test' array. (A rough sketch is below.)
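A rough sketch of that idea, reusing the names from the question (the index parameter is an addition for illustration):

function DataRequester()
{
    this.dataReq = [];
    this.done = []; // one completion flag per AJAXEngine instead of a shared counter

    for (var i = 0; i < 4; i++)
    {
        this.dataReq[i] = new AJAXEngine();
        this.done[i] = false;
    }
}

// Each engine reports which slot it occupies.
DataRequester.prototype.callWhenFinished = function(index)
{
    this.done[index] = true;

    var remaining = this.done.filter(function (d) { return !d; }).length;
    if (remaining === 0)
    {
        // all four requests have completed
    }
};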
You would need to implement a makeshift mutex. The idea is that a heuristic checks a boolean flag; if it's false, it sets it to true and runs the body, otherwise it sleeps (setTimeout?) and retries. This is obviously a pretty bad heuristic that nobody would implement as-is, since it is not itself thread-safe, but that's the general concept of how you would deal with race conditions.
I believe there is at least one example of creating a mutex on the web, but I have not looked it over in detail - it has some detractors, but I am unaware of another way to achieve 'thread safety' in JavaScript. I haven't ever needed to implement JS 'thread safety', but that's where I would start looking if I had to deal with race conditions in JavaScript.
You can't do a mutex in JavaScript, simply because there is no built-in sleep function available.
See: Is there an equivalent Javascript or Jquery sleep function?
Also, there is no way to ensure that the boolean flag in your mutex isn't being accessed at the same time by another thread; the boolean itself then needs a mutex... and so on. You would need something like Java's synchronized keyword to be available in JavaScript, and it simply doesn't exist. I have had situations where I was worried about thread safety, but I went with the code anyway, with an alternative plan in case an error occurred, and that has yet to happen.
So my advice is: if you're getting an error, it's probably not because of a race condition.