Usually when I have to parse a number in JavaScript I use code like
var x = "99";
var xnumber = x - 0;
instead of
var xnumber = parseInt(x);
Is there any problem with using this code, in terms of performance or structure?
Using the "x - 0" method is going to be significantly faster in most browsers.
Here's a JSPerf that shows the performance difference.
You can do your own A/B performance testing using JSPerf.com.
However, you may still want to use parseInt() in some cases, because it's a little clearer. Although, truthfully, any experienced javascript developer isn't going to have any trouble understanding the faster way.
If the line of code is only going to run once every half second or so (or whenever the user types a letter), you can use parseInt without worrying.
However, if this bit of code is in a loop that runs a few thousand times or more, you should definitely use x - 0.
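If you want a quick sense of the difference outside JSPerf, a rough hand-rolled timing sketch might look like this (the iteration count and the use of Date.now() are arbitrary choices; a harness like JSPerf will give more reliable numbers):

var s = "99";
var iterations = 1000000;
var sum = 0; // accumulate results so the engine can't optimize the loops away

var t0 = Date.now();
for (var i = 0; i < iterations; i++) sum += s - 0;
var t1 = Date.now();
for (var j = 0; j < iterations; j++) sum += parseInt(s, 10);
var t2 = Date.now();

console.log("s - 0:    " + (t1 - t0) + " ms");
console.log("parseInt: " + (t2 - t1) + " ms");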
Both ways should work, but in my opinion it is "cleaner" to parse a string with the function made for that purpose rather than with this little trick.
You are also less likely to run into conversion issues, and should you write code in other languages, you are in general better off using the approach the language designers intended. Many other languages will not allow your first way of "casting" at all.
Performance-wise you won't see much of a difference, but x - 0 is not very readable. I tend to believe parseInt("4", 10) is the right solution because it is in line with how you parse other values in JavaScript (e.g. parseFloat("4.0")), so you keep more consistent by sticking with one method of parsing numbers.
I don't believe either will perform noticeably better or worse. Performance optimisations like this one are generally negligible, and can be inconsistent across platforms. However, the parseInt method gives other programmers a much better idea of your intentions. Unless you have a very good reason to optimise this statement, and you are sure it really runs faster in your target environment, you are recommended to stick to the more readable version (in this case, parseInt).
Update: I recommend reading this thread about micro-optimisations: https://softwareengineering.stackexchange.com/questions/99445/is-micro-optimisation-important-when-coding/
If I write the code behind a native method's layer of abstraction myself, instead of using that method (which behind the scenes does the same thing I would write manually), will that have a positive impact on the app's performance or speed?
There are (at least) three significant advantages of using a built-in function instead of implementing it yourself:
(1) Speed - built-in functions generally invoke lower-level code provided by the browser (not in Javascript), and said code often runs significantly faster than the same code would in Javascript. Polyfills are slower than native code.
(2) Readability - if another reader of your code sees ['foo', 'bar'].join(' '), they'll immediately know what it does, and how the join method works. On the other hand, if they see something like doJoin(['foo', 'bar'], ' '), where doJoin is your own implementation of the same method, they'll have to look up the doJoin method to be sure of what's happening there.
(3) Accuracy - what if you make a mistake while writing your implementation, but the mistake is not immediately obvious? That could be a problem. In contrast, the built-in methods almost never have bugs (and, when spotted, usually get fixed).
One could also argue that there's no point spending effort on a solved problem.
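To make the readability point concrete, here is a sketch of what the hypothetical doJoin mentioned above might look like:

// Hypothetical hand-rolled equivalent of the built-in join, for illustration.
function doJoin(arr, separator) {
    var result = "";
    for (var i = 0; i < arr.length; i++) {
        if (i > 0) result += separator;
        result += arr[i];
    }
    return result;
}

console.log(['foo', 'bar'].join(' '));    // every reader knows this instantly
console.log(doJoin(['foo', 'bar'], ' ')); // same output, but readers must look up doJoin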
Yes, there is a difference in efficiency in some cases. There are some examples in the docs for the fast.js library. To summarize what they say: because you don't have to handle all of the cases laid out in the spec, you can sometimes do things faster than the built-in implementations.
I wouldn't take this as a license to make your code harder to read/maintain/reuse in the name of premature optimization, but yes, you may gain some speed with your own implementation of a native method, depending on your use case.
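As a sketch of the fast.js idea: a map() that assumes a dense array and skips the spec's edge cases (sparse arrays, thisArg handling) can be faster precisely because it does less:

// Assumes a dense array; unlike Array.prototype.map, holes are not skipped
// and no thisArg is supported - trading spec compliance for speed.
function fastMap(arr, fn) {
    var length = arr.length;
    var result = new Array(length);
    for (var i = 0; i < length; i++) {
        result[i] = fn(arr[i], i, arr);
    }
    return result;
}

fastMap([1, 2, 3], function (x) { return x * 2; }); // [2, 4, 6]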
Let's say you have a TypeScript object where any element can be undefined. If you want to access a heavily nested property, you have to do a lot of comparisons against undefined.
I wanted to compare two ways of doing this in terms of performance: regular if-else comparisons and the lodash function get.
I have found this beautiful tool called jsben, where you can benchmark different pieces of JS code. However, I fail to interpret the results correctly.
In this test, lodash get seems to be slightly faster. However, if I define my variable in the Setup block (as opposed to the Boilerplate code), the if-else code is faster by a wide margin.
What is the proper way of benchmarking all this?
How should I interpret the results?
Is get so much slower that you can make an argument in favour of if-else clauses, in spite of the very poor readability?
I think you're asking the wrong question.
First of all, if you're going to do performance micro-optimization (as opposed to, say, algorithmic optimization), you should really know whether the code in question is a bottleneck in your system. Fix the worst bottlenecks until your performance is fine, then stop worrying overmuch about it. I'd be quite surprised if variation between these ever amounted to more than a rounding error in a serious application. But I've been surprised before; hence the need to test.
Then, when it comes to the actual optimization, the two implementations are only slightly different in speed, in either configuration. But if you want to test the deep access to your object, it looks as though the second one is the correct way to think about it. It doesn't seem as though it should make much difference in relative speeds, but the first one puts the initialization code where it will be "executed before every block and is part of the benchmark." The second one puts it where "it will be run before every test, and is not part of the benchmark." Since you want to compare data access and not data initialization, this seems more appropriate.
Given this, there seems to be a very slight performance advantage to the families && families.Trump && families.Trump.members && ... technique. (Note: no ifs or elses in sight here!)
But is it worth it? I would say not. The code is much, much uglier. I would not add a library such as lodash (or my favorite, Ramda) just to use a function as simple as this, but if I were already using lodash I wouldn't hesitate to use the simpler code here. Otherwise I might import just that one function from lodash or Ramda, or simply write my own, as it's fairly simple code.
That native code is going to be faster than more generic library code shouldn't be a surprise. It doesn't always happen, as sometimes libraries get to take shortcuts that the native engine cannot, but it's likely the norm. The reason to use these libraries rarely has to do with performance, but with writing more expressive code. Here the lodash version wins, hands-down.
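For concreteness, the two variants under discussion look something like this (the families shape is only illustrative, inferred from the && chain above, and lodash is assumed to be loaded as _):

var families = { Trump: { members: ['Donald', 'Melania'] } }; // illustrative data

// 1) guard chain: short-circuits at the first missing level
var members1 = families && families.Trump && families.Trump.members;

// 2) lodash: same result, far more readable
var members2 = _.get(families, 'Trump.members');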
What is the proper way of benchmarking all this?
Only benchmark the actual code you are comparing; move as much as possible outside of the tested block. Run each of the two pieces a few (hundred) thousand times, to average out the influence of other parts.
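A minimal hand-rolled version of that advice might look like this (performance.now() is assumed available, as in browsers and Node; the warm-up call and iteration count are arbitrary choices):

function bench(label, fn, iterations) {
    fn(); // warm-up call, so compilation isn't part of the measurement
    var t0 = performance.now();
    for (var i = 0; i < iterations; i++) fn();
    var t1 = performance.now();
    console.log(label + ": " + ((t1 - t0) / iterations * 1e6).toFixed(2) + " ns/op");
}

// Setup (building the object) lives out here, once; only the access is timed.
var families = { Trump: { members: [] } };
bench("&& chain", function () { return families && families.Trump && families.Trump.members; }, 500000);
bench("_.get", function () { return _.get(families, 'Trump.members'); }, 500000);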
How should I interpret the results?
1) Check whether the results are valid:
Do the results fit your expectations?
If not, could there be a cause for that?
Does the test case replicate your actual use case?
2) Check whether the result is relevant:
How does the time it takes compare to the actual time in your use case? If your code takes 200 ms to load and both tests run in under ~1 ms, your result doesn't matter. If, however, you are trying to optimize code that runs 60 times per second, 1 ms is already a lot.
3) Check whether the result is worth the work:
Often you have to do a lot of refactoring or typing; does the performance gain outweigh the time you invest?
Is get so much slower that you can make an argument in favour of if-else clauses, in spite of the very poor readability?
I'd say no. Use _.get (unless you are planning to run it a few hundred times per second).
By shortcut I mean an expression like: var doSmth = some.thing.doSmth
For example I have this code with shortcuts:
MyModule3 = {
    doSmth3: function() {
        var doSmth1 = MyModule1.doings.doSmth1,
            doSmth2 = MyModule2.doings.doSmth2;
        doSmth1();
        doSmth2();
    }
};
And this one without shortcuts:
MyModule3 = {
    doSmth3: function() {
        MyModule1.doings.doSmth1();
        MyModule2.doings.doSmth2();
    }
};
Will the first code be slower than the second one?
I guess the answer is "yes, the first one is slower, but there is no reason to worry about it, because difference in performance will not be significant".
But what if I have hundreds of functions with shortcuts? Can this difference become critical at some moment?
=======
I tried to test it, but I got strange results.
When I run:
testWithShortcuts();
testWithoutShortcuts();
then the first test runs faster than the second one.
But If I run it in reversed order:
testWithoutShortcuts();
testWithShortcuts();
then again the first test runs faster.
I'm not familiar with performance testing. Maybe my test does not actually do what it is supposed to do.
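One way to make such a test fairer is to alternate the two functions over several rounds and ignore the first few, so JIT warm-up doesn't systematically favor whichever runs second. A sketch (testWithShortcuts and testWithoutShortcuts are your own functions from above; performance.now() is assumed available):

var rounds = 10;
for (var r = 0; r < rounds; r++) {
    var t0 = performance.now();
    testWithShortcuts();
    var t1 = performance.now();
    testWithoutShortcuts();
    var t2 = performance.now();
    if (r >= 3) { // skip the warm-up rounds
        console.log("round " + r + ": with = " + (t1 - t0).toFixed(2) +
                    " ms, without = " + (t2 - t1).toFixed(2) + " ms");
    }
}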
TL;DR
No.
Real answer
Yes; however, the measurable difference is so small that the concern is nonsensical 99% of the time.
Discussion
To drive home the point: it is better to optimize for maintainability, understandability, and clarity than to worry about a few extra pointers in memory. In fact, once compiled to machine code they are the same pointer; the only impact would be in the JIT step, which these days is measured in fractions of a millisecond.
It is better to code for readability, and only if the app starts to slow down should you run some profiling, find the slow parts, and optimize those. Chances are the slow part is something completely different from the extra alias you made.
Nope, that will never become critical. I'm tempted to take Paul Irish's words and say you're not even allowed to care about the performance of something like this ;)
Yes, the code where you create the shortcuts will be slightly slower, but the difference is not significant.
For the difference to be significant, assigning the value to the variable would have to be a reasonably large part of what you are doing. Just calling a function is much more expensive than assigning a value to a variable (even if the function doesn't do much), so creating the shortcut will never be significant.
In JavaScript, I have an object with an array, and a method which returns a slice of that array concatenated with another array.
If that method runs several times in the same function and always returns the same value, will it be faster after the first run (because the result is cached in the CPU cache)?
Of course "no" is the only possible answer here. The purpose of a function is to take some parameters and return a value. The parameters might be different each time you call the function, and even if they are the same, the result might differ. And even if the function returns the same result every time, it might perform an action or cause modifications elsewhere, so having the engine cache the result automatically would be a buggy idea.
Cheers
Maybe.
There are quite a few levels of cache to consider here. Your processor alone has more than one cache. Basically, though, you can't say much about them: they have different sizes, and things like what else you do in the meantime and how long the function is all influence the outcome. Note also that this caching does not operate at the level of what you call a function in JavaScript, but at a much lower level. It might at times shave some time off the function's execution, but I don't think that's likely or noticeable; in the end, you can't really say much about it.
Finally, there is JavaScript itself. Per the standard, it doesn't have such caching. However, the standard doesn't prohibit strange caching either, so there might one day be a browser that does it that way (I don't believe there is one right now).
In the end, the basic answer is: no, not in a noticeable way. There might actually be a small speed gain due to the CPU cache, but it's always hard to say.
I guess the general answer to this question is NO!
There is no caching in JavaScript, and no CPU caching that you can control from JavaScript. If you need to cache something to increase performance, you will have to program that yourself.
See this small example:
http://jsperf.com/cachingjq
No, you'd have to manually (or with a framework) memoize the results: Javascript Memoization Explanation?
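A minimal memoization sketch along those lines, using the slice-plus-concat method described in the question (the single-argument, string-keyed cache here is deliberately simplistic):

function memoize(fn) {
    var cache = {};
    return function (arg) {
        var key = String(arg);
        if (!(key in cache)) {
            cache[key] = fn.call(this, arg);
        }
        return cache[key];
    };
}

var obj = {
    items: [1, 2, 3, 4],
    extra: [9],
    // slice + concat, as in the question, but computed once per argument
    getSlice: memoize(function (n) {
        return this.items.slice(0, n).concat(this.extra);
    })
};

console.log(obj.getSlice(2)); // computed: [1, 2, 9]
console.log(obj.getSlice(2)); // served from the cache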
So, I am fairly new to JavaScript coding, though not new to coding in general. When writing source code I generally have in mind the environment my code will run in (e.g. a virtual machine of some sort) - and with it the level of code optimization one can expect. (1)
In Java for example, I might write something like this,
Foo foo = FooFactory.getFoo(Bar.someStaticStuff("qux","gak",42));
blub.doSomethingImportantWithAFooObject(foo);
even if the foo object is only used at this very location (thus introducing a needless variable declaration). Firstly, it is my opinion that the code above is far more readable than the inlined version
blub.doSomethingImportantWithAFooObject(FooFactory.getFoo(Bar.someStaticStuff("qux","gak",42)));
and secondly I know that the Java compiler's code optimization will take care of this anyway, i.e. the actual Java VM code will end up being inlined, so performance-wise there is no difference between the two. (2)
Now to my actual Question:
What Level of Code Optimization can I expect in JavaScript in general?
I assume this depends on the JavaScript engine - but as my code will end up running in many different browsers lets just assume the worst and look at the worst case. Can I expect a moderate level of code optimization? What are some cases I still have to worry about?
(1) I do realize that finding good/the best algorithms and writing well organized code is more important and has a bigger impact on performance than a bit of code optimization. But that would be a different question.
(2) Now, I realize that the actual difference, were there no optimization, is small. But that is beside the point: there are plenty of constructs that get optimized quite efficiently; I was just too lazy to write a better one down. Just imagine the above snippet inside a for loop that runs 100,000 times.
Don't expect much from the optimization; there won't be
tail-call optimization,
loop unrolling,
function inlining,
etc.
As client-side JavaScript is not designed for heavy CPU work, the optimization won't make a huge difference.
There are some guidelines for writing high-performance JavaScript code; most are minor and technical, like:
Don't use certain features such as eval(), arguments.callee, etc., which prevent the JS engine from generating high-performance code.
Use native features over hand-written ones; e.g. don't write your own containers, JSON parser, etc.
Use local variables instead of global ones (see the sketch after this list).
Never use a for...in loop to iterate over an array.
Use parseInt() rather than Math.floor.
AND stay away from jQuery.
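To illustrate the local-variables point from the list above: resolve a global or deep property once, outside the hot loop (the config object here is just a hypothetical stand-in for any global):

var config = { threshold: 10 }; // stand-in for some global

function countAboveThreshold(values) {
    var threshold = config.threshold; // resolved once, not on every iteration
    var count = 0;
    for (var i = 0, len = values.length; i < len; i++) {
        if (values[i] > threshold) count++;
    }
    return count;
}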
All these techniques come more from experience, and may have reasonable explanations behind them. So you will have to spend some time searching around, or try jsPerf, to help you decide which approach is better.
When you release the code, use the Closure Compiler to take care of dead branches and unnecessary variables; it will not boost your performance much, but it will make your code smaller.
Generally speaking, the final performance depends far more on how well your code is organized and how carefully your algorithms are designed than on how the optimizer performs.
Take your example above (assuming FooFactory.getFoo() and Bar.someStaticStuff("qux","gak",42) always return the same result, and that Bar and FooFactory are stateless, so someStaticStuff() and getFoo() don't change anything):
for (int i = 0; i < 10000000; i++)
    blub.doSomethingImportantWithAFooObject(
        FooFactory.getFoo(Bar.someStaticStuff("qux", "gak", 42)));
Even g++ with the -O3 flag can't make that code faster, because the compiler can't tell whether Bar and FooFactory are stateless. So this kind of code should be avoided in any language.
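Given the stated assumption that those calls are side-effect-free and always return the same value, the hoisted version the compiler can't safely produce for you is easy to write by hand:

// Hand-hoisted: the stateless calls run once instead of ten million times.
var foo = FooFactory.getFoo(Bar.someStaticStuff("qux", "gak", 42));
for (var i = 0; i < 10000000; i++) {
    blub.doSomethingImportantWithAFooObject(foo);
}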
You are right that the level of optimization differs from JS VM to VM. But there is a way of working around that: several tools will optimize/minimize your code for you. One of the most popular is by Google: the Closure Compiler. You can try out the web version, and there is a command-line version for build scripts etc. Beyond that, there is not much optimization I would attempt, because after all JavaScript is fast enough.
In general, I would posit that unless you're playing really dirty with your code (leaving all your vars at global scope, creating a lot of DOM objects, making expensive AJAX calls to non-optimal datasources, etc.), the real trick with optimizing performance will be in managing all the other things you're loading in at run-time.
Loading dozens upon dozens of images, animating huge background images, or pulling in large numbers of scripts and CSS files can all have a much greater impact on performance than even moderately complex JavaScript that is written well.
That said, a quick Google search turns up several sources on Javascript performance optimization:
http://www.developer.nokia.com/Community/Wiki/JavaScript_Performance_Best_Practices
http://www.nczonline.net/blog/2009/02/03/speed-up-your-javascript-part-4/
http://mir.aculo.us/2010/08/17/when-does-javascript-trigger-reflows-and-rendering/
As two of those links point out, the most expensive operations in a browser are reflows (where the browser has to redraw the interface due to DOM manipulation), so that's where you're going to want to be the most cautious in terms of performance. Some of that can be alleviated by being smart about what you're modifying on the fly (for example, it's less expensive to apply a class than to modify inline styles ad hoc), so separating your concerns (style from data) will be really important.
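For instance (a sketch; the #panel element and the highlighted class are hypothetical), toggling one class lets the browser batch the style changes, where several inline writes may each trigger layout work:

var el = document.getElementById('panel'); // hypothetical element

// Ad hoc inline writes: each assignment can invalidate layout.
el.style.width = '200px';
el.style.border = '1px solid #ccc';
el.style.padding = '8px';

// Preferable: define a .highlighted rule in CSS and toggle it once.
el.className += ' highlighted';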
Making only the modifications you need to get the job done (i.e., rather than the "HULK SMASH (DOM)!" method of replacing entire chunks of pages with AJAX calls to screen-scraping remote sources, calling for JSON data to update only the minimum number of elements needed), along with other common-sense approaches, will get you a lot farther than hours of minor for-loop tweaking (though, again, common sense will get you pretty far there, too).
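A sketch of that JSON-first approach (the /api/count endpoint and #itemCount element are hypothetical): fetch just the data and update only the node that needs it:

var xhr = new XMLHttpRequest();
xhr.open('GET', '/api/count'); // hypothetical endpoint returning JSON
xhr.onload = function () {
    var data = JSON.parse(xhr.responseText);
    // update one element instead of replacing a whole chunk of the page
    document.getElementById('itemCount').textContent = data.count;
};
xhr.send();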
Good luck!