Which one is faster and why? (JavaScript)

So my simple question is:
Which is the best solution (performance- and memory-wise) in this case?
A. We process the variable into a new one and use it everywhere we need it:
// imagine (a) is an incoming variable that
// we can't change or else we will break something
var a = "carrots";
var b = a.substr(0, a.length - 1);
// and then use b everywhere
B. We process the variable on the fly:
// imagine (a) is an incoming variable that
// we can't change or else we will break something
var a = "carrots";
// and then use a.substr(0, a.length - 1) everywhere
Please, it's not about how the code is written; it's all about which one is faster and better.
Which one would you choose, and why?

Obviously the first approach is faster. Calling a method and accessing a property is orders of magnitude slower than just reading a cached string value.
Don't get me wrong, in absolute numbers this still looks like a micro-optimization, but relative to each other it's the speed of light vs. a train.
By the way, the concept of storing and caching values in ECMAScript is very commonly used, and rightly so (for different reasons, such as avoiding scope-chain and prototype lookups).
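To make that caching point concrete, here is a minimal sketch (the loop and the results array are illustrative, not from the question): the processed string and a deeply looked-up function are each resolved once instead of on every iteration.
var a = "carrots";
var b = a.substr(0, a.length - 1); // computed once, reused everywhere

// caching also sidesteps repeated scope-chain and prototype lookups:
var ceil = Math.ceil;              // one lookup instead of one per iteration
var results = [];
for (var i = 0; i < 1000; i++) {
    results.push(b + ceil(i / 25)); // only cheap variable reads in the hot loop
}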

Agreeing with jAndy. I just wanted to add an example and hopefully give another take.
The first approach is far faster.
Each time we call substr, we have to run through the string and create a new string object to operate on in the future. Running a method and creating a new string is clearly more costly.
Think about it this way: you could pay some cost (call it x) once, or pay that same x a theoretically unlimited number of times. You are looking at a cost of x versus an unbounded multiple of x.
In theory you potentially take up less memory with the second approach. The var b will hold some memory (how much depends on the language; in Java, for example, a string reference takes up 8 bytes), and it persists until the program closes. If you instead call substr each time, you create a value that gets cleaned up once it is no longer used.
Does that make sense?
We create our var b and keep it around until the program closes (or it is no longer needed). Or we can create the a.substr(...) value each time and have it removed when it is no longer needed. Technically, depending on how the program works, you might use less memory if you had, say, hundreds of these over the life of the process. However, the memory involved is negligible, and garbage collection does not occur often enough (under normal conditions) for this to actually be true or relevant.
The memory saving is negligible, and the time cost is comparatively significant.
Example A is far superior. Why, in real life, would you ever do the same thing many times when you can accomplish it by doing it just once?

I agree with jAndy.
The first example is going to be faster. You are caching the value in b, so the function you are calling on a runs only once. In your second example, every time you want to use a.substr() you have to call the function again.
So in the first example, you call a.substr(0, a.length - 1) once and cache the result for reuse. In example two it takes n times as long, where n is the number of times you call a.substr(0, a.length - 1).
Depending on what you are doing, there may be no noticeable difference at all. However, the first way is technically faster.
Also, as the comments on jAndy's answer suggest, it is better code anyway, since you don't need to repeatedly spell out a.substr(0, a.length - 1) when you can just cache the result.
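A rough way to see that n-times cost is a quick console.time sketch (a hypothetical micro-benchmark, and a crude one: a JIT may well optimize away the dead loop bodies, so treat it as a sketch rather than proof):
var a = "carrots";
var iterations = 1000000;

console.time("recompute every use");
for (var i = 0; i < iterations; i++) {
    var x = a.substr(0, a.length - 1); // method call + new string each time
}
console.timeEnd("recompute every use");

console.time("cached value");
var b = a.substr(0, a.length - 1);     // computed once
for (var j = 0; j < iterations; j++) {
    var y = b;                         // plain variable read
}
console.timeEnd("cached value");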

Related

variable assignment vs passing directly to a function

What are the advantages/disadvantages in terms of memory management of the following?
Assigning to a variable then passing it to a function
const a = {foo: 'bar'}; // won't be reused anywhere else, for readability
myFunc(a);
Passing directly to a function
myFunc({foo: 'bar'});
The first and second snippets behave exactly the same in how the value is passed (unless you also need to use a later in your code).
There are only two cases in which the first might be preferred over the second:
You need to use the variable elsewhere.
The variable declaration is too long and you want to split it over two lines, or you are implementing a complex algorithm and want to name each step for readability, as sketched below.
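A hypothetical illustration of that second case (users, myFunc, and the field names are made up for the example):
// naming each step for readability...
const activeUsers = users.filter(u => u.active);
const sortedByAge = activeUsers.sort((x, y) => x.age - y.age);
myFunc(sortedByAge);

// ...versus the equivalent, harder-to-scan direct call:
myFunc(users.filter(u => u.active).sort((x, y) => x.age - y.age));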
It depends on the implementation of the JavaScript engine. One engine might allocate memory for the variable in the first example and not in the directly-passed example, while another might be smart enough to compile the first example so that no memory is allocated for the variable at all, making it behave exactly like the directly-passed example.
I don't know enough about the specific engines to tell you what each one does specifically. You'd have to take a look at each JS engine (or ask authors of each) to get a more conclusive answer.

Javascript single use variable declaration

This may be an obvious question, but I was wondering if there is an efficiency difference between declaring something as a one-time-use variable and simply executing the code inline (rather than storing the value and then using it). For example:
var rowId = 3;
updateStuff(rowId);
vs.
updateStuff(3);
I think that worrying about performance in this instance is largely irrelevant. These sorts of micro-optimizations don't really buy you all that much. In my opinion, the readability of the code is more important here, since it's better to identify the 3 as rowId than to leave it as a magic number.
If you are really concerned about optimization, minimizing your code with something like Google Closure would help; it has many tools that will help you make your code more efficient.
If you were using the Google Closure Compiler and annotated the var as a constant (as it effectively is in your example), I suspect the compiler would remove the variable and replace it with the literal.
My advice is to pick the readable option, and if you really need the micro-optimizations and reduction in source size, use the Google Closure Compiler or a minifier; a tool is more likely to accumulate enough micro-optimizations to make something noticeably faster, although I'd guess most of the saved milliseconds would come from reduced network time due to minification.
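For what it's worth, here is a sketch of that inlining idea using the Closure Compiler's @const annotation (whether the compiler actually inlines depends on the compilation level, so the output shown is an assumption, not a guarantee):
/** @const */
var rowId = 3;
updateStuff(rowId);

// plausible compiled output, with the constant folded in:
updateStuff(3);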
Also, var declarations do take time, and if you add that var to the global namespace, it is one more binding to search through when resolving names.
With a modern browser that compiles your code, the generated code will be exactly the same unless the variable is also referenced somewhere else.
Even with an old browser, any difference won't be something to worry about.
EDIT:
delete does not apply to non-objects, so my initial response was wrong. Furthermore, since the variable rowId is declared with var, it will be cleaned up by the garbage collector. Had it been defined without var, on the other hand, it would live for the duration of the page/application.
source: http://lostechies.com/derickbailey/2012/03/19/backbone-js-and-javascript-garbage-collection/
The variable rowId will be kept in memory and will not be released by the garbage collector while it is still referenced. Unless you release it afterwards, as below, it will be there until the end of the program's life. Also remember that it takes time to create the variable (minimal, but you asked):
var rowId = 3;
updateStuff(rowId);
delete rowId; // per the edit above, this actually has no effect on var-declared variables
With efficiency in mind, then: yes, there is a difference. Your second example is the quicker one and doesn't spend any extra resources.
Note: some languages do optimize such code and simply remove the sequence, but I strongly doubt that JavaScript does so.
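Since delete won't release a var-declared variable anyway, a sketch of the idiomatic alternative is simply to give the variable a short-lived function scope (doUpdate is a made-up wrapper name):
function doUpdate() {
    var rowId = 3;       // local: unreachable once doUpdate returns
    updateStuff(rowId);
}
doUpdate();              // rowId is now eligible for garbage collection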

Is the result of a function that runs several times cached?

In JavaScript, I have an object with an array, and a method which takes a slice of that array and concatenates it with another array.
If that method is run several times in the same function and always returns the same value, will it be faster after the first run (because the result will be held in the CPU cache)?
Of course not; "no" is the only answer here. The purpose of a function is to take some parameters and return a value. The parameters might be different each time you call the function, and even if they are the same, the result might differ. And even if the function returned the same result every time, it might perform an action or cause modifications elsewhere, so having the engine cache the result would be a buggy idea.
Cheers
Maybe.
There are quite a lot of levels of cache to look at here. Your processor alone has more than a single cache. Basically, though, you simply can't say much about them: they have different sizes, and things like what else you do in the meantime and how long the function runs all influence the outcome. It should also be noted that this caching does not work at the level of what you would call a function call in JavaScript, but at a much lower level. It might at times shave some time off the execution of the function, but I don't think it's likely or noticeable; in the end, you can't really say much about it.
Finally, there is JavaScript itself. Per the standard, it doesn't have such caching. However, the standard doesn't prohibit strange caching either, so there might one day be a browser that does it (I don't believe one exists right now).
In the end, the basic answer is: no, not in a noticeable way. There might actually be a small speed gain due to CPU caches, but it's always hard to say.
I guess the general answer to this question is NO!
There is no caching in JavaScript, and no CPU caching that you can control from JavaScript. If you need to cache something to increase performance, you will have to program that yourself.
See this small example:
http://jsperf.com/cachingjq
No, you'd have to memoize the results manually (or with a framework): Javascript Memoization Explanation?
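For reference, a hand-rolled memoization helper might look like the following minimal sketch (memoize and slowConcat are illustrative names, not from any framework):
function memoize(fn) {
    var cache = {};
    return function () {
        var key = JSON.stringify(arguments);        // key results by their arguments
        if (!(key in cache)) {
            cache[key] = fn.apply(this, arguments); // compute once per distinct key
        }
        return cache[key];
    };
}

var slowConcat = memoize(function (arr, other) {
    return arr.slice(1).concat(other); // runs only on a cache miss
});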

Javascript overhead of repeated allocation

I'm making a pretty simple game just for fun/practice, but I still want to code it well regardless of how simple it is now, in case I want to come back to it, and just to learn.
So, in that context, my question is:
How much overhead is involved in object allocation? And how well does the interpreter already optimize this? I'm going to be repeatedly checking object grid positions, and if an object is still in the same grid square, the grid array is not updated:
if (obj.gridPos == obj.getCurrentGridPos()) {
    // etc
}
But should I keep an outer "work" point object that getCurrentGridPos() mutates each time, or should it return a new point object each time?
Basically, even if the overhead of creating a point object isn't enough to matter in this scenario, which is faster?
EDIT:
This (which would get called for every object, each frame):
function getGridPos(x, y) {
    return new Point(Math.ceil(x / 25), Math.ceil(y / 25));
}
or
// outside the frame-by-frame update function that loops through every object
var tempPoint = new Point();

// each object calls this every frame, passing in tempPoint and checking its values
function makeGridPos(pt, x, y) {
    pt.x = Math.ceil(x / 25);
    pt.y = Math.ceil(y / 25);
}
Between the two code examples you have now added, I know of no case where the first would be more efficient than the second. So if you're trying to optimize for performance or memory use, reusing an existing object will likely be more efficient than creating a new object on each call.
Note: since JS variables hold references to objects, you will have to make sure your code is not elsewhere hanging on to that object and expecting it to keep its value.
Prior answer:
In all programming (regardless of how good the optimizer is), you are better off caching a result that is calculated from several member variables and used over and over in the same function, rather than calling the function that calculates it again and again.
So if you are calling obj.getCurrentGridPos() more than once, and conditions have not changed such that it might return a different result, then you should cache its value locally (in any language). This is just good programming:
var currentPos = obj.getCurrentGridPos();
And, then use that locally cached value:
if (obj.gridPos == currentPos) {
The interpreter may not be able to do this type of optimization for you because it may not be able to tell whether other operations might cause obj.getCurrentGridPos() to return something different from one call to another, but you the programmer can know that.
One other thing. If obj.getCurrentGridPos() returns an actual object, then you probably don't want to be using == or === to compare objects. Those compare ONLY whether the two values are literally the same object; they do not check whether two objects have the same properties.
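If the grid position is such an object, a small comparison helper (samePos is a hypothetical name) expresses the intended check:
function samePos(p1, p2) {
    return p1.x === p2.x && p1.y === p2.y; // compare coordinates, not identity
}

if (samePos(obj.gridPos, obj.getCurrentGridPos())) {
    // still in the same grid square
}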
This question is VERY difficult to answer because of all the different JavaScript engines out there. The "big 4" browsers all have their own JavaScript engine/interpreter, and each one does its allocation, caching, GC, etc. differently.
The Chrome (and Safari) dev tools have a profiling tab where you can profile memory allocation, timings, etc. of your code. That's a place to start (at least for Chrome and Safari).
I'm not certain whether IE or Firefox offers such tools, but I wouldn't be surprised if third-party tools exist for testing such things in those browsers...
Also, for reference:
Chrome uses the V8 JavaScript engine
IE uses the Chakra JavaScript engine (Trident is its layout engine)
Firefox uses SpiderMonkey
Safari uses JavaScriptCore, the engine that is part of WebKit
It's my understanding that garbage collection pauses execution on most JS engines. If you are going to make many objects per iteration of your game loop and let them go out of scope, that will cause slowdowns when the garbage collector takes over.
For this kind of situation you might consider making a singleton to pool your objects, with a method to recycle them for reuse by deleting all of their properties, resetting their __proto__ to Object.prototype, and storing them in an array. You can then request recycled objects from the pool as needed, growing the pool only when it runs dry, as in the sketch below.
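A minimal sketch of such a pool (names are illustrative, and it resets fields rather than deleting every property, which simplifies the recycling described above):
var pointPool = {
    free: [],
    acquire: function () {
        // reuse a recycled object if one exists; allocate only when the pool runs dry
        return this.free.length ? this.free.pop() : { x: 0, y: 0 };
    },
    release: function (pt) {
        pt.x = 0; // reset state so the object is clean for the next user
        pt.y = 0;
        this.free.push(pt); // keep it out of the garbage collector's hands
    }
};

var p = pointPool.acquire();
// ... use p for a frame ...
pointPool.release(p);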
The short answer is to keep one current-position object and check against it, since you will literally use more memory if you create a new object every time you call getCurrentGridPos().
There may be a better place for the "is this a new position?" check, since you should only do that check once per iteration.
It seems optimal to update currentGridPos inside a requestAnimationFrame callback and check its current x/y/z values before updating it, so you can then trigger a positionChanged-type event:
var currentPos = {
    x: 0,
    y: 0,
    z: 0
};

window.requestAnimationFrame(function () {
    // rAF passes a timestamp, not a position, so fetch the position here
    var newPos = obj.getCurrentGridPos();
    if (currentPos.x != newPos.x || currentPos.y != newPos.y || currentPos.z != newPos.z) {
        $.publish("positionChanged"); // tinypubsub https://gist.github.com/661855
        currentPos = newPos;          // remember the new position
    }
});
So, just to be clear: yes, I think you should keep an outer "work" point object that updates every iteration, and while it's updating you can check whether its position has changed. This is a more intentional way to organize the logic and ensures you don't call getCurrentGridPos more than once per iteration.

Do modern JavaScript JITers need array-length caching in loops?

I find the practice of caching an array's length property inside a for loop quite distasteful. As in,
for (var i = 0, l = myArray.length; i < l; ++i) {
    // ...
}
In my eyes at least, this hurts readability a lot compared with the straightforward
for (var i = 0; i < myArray.length; ++i) {
    // ...
}
(not to mention that it leaks another variable into the surrounding function due to the nature of lexical scope and hoisting.)
I'd like to be able to tell anyone who does this, "don't bother; modern JS JITs optimize that trick away." Obviously it's not a trivial optimization, since you could e.g. modify the array while it is being iterated over, but given all the crazy stuff I've heard about JITs and their runtime-analysis tricks, I'd think they would have gotten to this by now.
Anyone have evidence one way or another?
And yes, I too wish it would suffice to say "that's a micro-optimization; don't do it until you profile." But not everyone listens to that kind of reason, especially once caching the length becomes a habit and they just end up doing it automatically, almost as a style choice.
It depends on a few things:
Whether you've proven your code is spending significant time looping
Whether the slowest browser you're fully supporting benefits from array length caching
Whether you or the people who work on your code find the array length caching hard to read
It seems from the benchmarks I've seen (for example, here and here) that performance in IE < 9 (generally the slowest browsers you have to deal with) benefits from caching the array length, so it may be worth doing. For what it's worth, I have a long-standing habit of caching the array length and, as a result, find it easy to read. There are also other loop optimizations that can have an effect, such as counting down rather than up.
Here's a relevant discussion about this from the JSMentors mailing list: http://groups.google.com/group/jsmentors/browse_thread/thread/526c1ddeccfe90f0
My tests show that all major newer browsers cache the length property of arrays. You don't need to cache it yourself unless you're concerned about IE6 or 7 (I don't remember exactly which). However, I have been using another style of iteration since those days, since it gives me another benefit, which I'll describe in the following example:
var arr = ["Hello", "there", "sup"];
for (var i=0, str; str = arr[i]; i++) {
// I already have the item being iterated in the loop as 'str'
alert(str);
}
You must realize that this iteration style stops early if the array is allowed to contain falsy values, so it cannot be used in that case; see the example below.
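For example (hypothetical data), the empty string is falsy and ends the loop early:
var arr = ["Hello", "", "sup"];
for (var i = 0, str; str = arr[i]; i++) {
    alert(str); // only "Hello" is ever shown; "" stops the loop
}

// a length-checked loop visits every element instead:
for (var j = 0; j < arr.length; j++) {
    alert(arr[j]);
}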
First of all, how is this harder to do or less legible?
var i = someArray.length;
while (i--) {
    // doStuff to someArray[i]
}
This is not some weird, cryptic micro-optimization. It's just a basic work-avoidance principle. Not using the '.' or '[]' operators more than necessary should be as obvious as not recalculating pi more than once (assuming you didn't know we already have it in the Math object).
[rantish elements yoinked]
If someArray is entirely internal to a function, it's fair game for JIT optimization of its length property, which is really like a getter that counts up the elements of the array every time you access it. A JIT could see that the array is entirely locally scoped and skip the actual counting behavior.
But this involves a fair amount of complexity. Every time you do anything that mutates that array, you have to treat length like a static property and tell your array-altering methods (the native-code side of them, I mean) to set the property manually, whereas normally length just counts the items every time it's referenced. That means every time a new array-altering method is added, you have to update the JIT to branch its behavior for length references on a locally scoped array.
I could see Chrome doing this eventually, but based on some really informal tests I don't think it does yet. I'm not sure IE will ever make this level of performance fine-tuning a priority. As for the other browsers, you could make a strong argument that having to branch behavior for every new array method is more maintenance trouble than it's worth. At the very least, it would not get top priority.
Ultimately, accessing the length property on every loop cycle isn't going to cost you a ton, even in the old browsers, for a typical JS loop. But I would advise getting in the habit of caching any property lookup done more than once: with getter properties you can never be sure how much work is being done or which browsers optimize in what ways, and there are performance costs you could hit down the road when somebody decides to move someArray outside the function, which could lead to the call object being checked in a dozen places before finding what it's looking for on every property access.
Caching property lookups and method returns is easy, cleans your code up, and ultimately makes it more flexible and performance-robust in the face of modification. Even if one or two JITs did make it unnecessary in circumstances involving a number of 'ifs', you couldn't be certain they always would or that your code would continue to make it possible to do so.
So yes, apologies for the anti-let-the-compiler-handle-it rant, but I don't see why you would ever want not to cache your properties. It's easy. It's clean. It guarantees better performance regardless of the browser, or of the object whose properties are being examined moving to an outer scope. A contrived sketch of the getter concern follows.
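Here, wrapper is a made-up object whose length read looks free but hides arbitrary work, which is exactly why caching the lookup is the safe habit:
var someArray = [1, 2, 3];
var wrapper = {};
Object.defineProperty(wrapper, "length", {
    get: function () {
        // imagine something expensive here: a recount, a DOM query...
        return someArray.length;
    }
});

var len = wrapper.length;       // pay the getter cost once
for (var i = 0; i < len; i++) { // rather than once per iteration
    // doStuff to someArray[i]
}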
But it really does piss me off that Word docs load as slowly now as they did back in 1995, and that people continue to write horrendously slow Java websites even though Java's VM supposedly beats all non-compiled contenders for performance. I think the notion that you can let the compiler sort out the performance details, and that "modern computers are SO fast", has a lot to do with that. We should always be mindful of work avoidance when the work is easy to avoid and doesn't threaten legibility or maintainability, IMO. Doing it differently has never helped me (or, I suspect, anybody) write code faster in the long term.
